
Alternatives to Apache Solr

Splunk, Lucene, Elasticsearch, MongoDB, and Apache Spark are the most popular alternatives and competitors to Apache Solr.

What is Apache Solr and what are its top alternatives?

Apache Solr is an open-source search platform built on Apache Lucene. It offers features such as full-text search, faceted search, hit highlighting, dynamic clustering, and rich document handling. However, Solr can be complex to set up and configure, and may require significant resources to run efficiently.
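
As a rough illustration of the features listed above, here is a minimal sketch of querying a Solr core over its HTTP API with a full-text query, a facet field, and hit highlighting enabled. The host, core name, and field names are placeholders, not part of any particular deployment.

```python
import requests

# Hypothetical Solr core "example_core" with "title" and "category" fields.
params = {
    "q": "title:(search engine)",  # full-text query
    "facet": "true",
    "facet.field": "category",     # faceted search
    "hl": "true",
    "hl.fl": "title",              # hit highlighting on the title field
    "wt": "json",
}
resp = requests.get("http://localhost:8983/solr/example_core/select", params=params)
data = resp.json()
print(data["response"]["numFound"], "documents matched")
```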

  1. Elasticsearch: Elasticsearch is a distributed, RESTful search engine that is highly scalable and can handle large amounts of data. Key features include real-time search, analytics, and monitoring. Pros include scalability and real-time search capabilities, while a con is the complexity of managing a large cluster (see the query sketch after this list).
  2. Microsoft Azure Cognitive Search: This cloud-based search service lets you build web, mobile, and enterprise solutions with advanced search capabilities. Key features include AI-powered relevancy, autoscaling, and integration with Azure services. Pros include AI-driven search capabilities, while a con is the dependence on the Azure platform.
  3. Amazon CloudSearch: A fully managed search service that allows for the setup and scaling of a search solution without the need for infrastructure maintenance. Key features include automatic scaling, multi-AZ deployment, and simple setup. Pros include easy setup and scalability, while a con is the lack of advanced features compared to other tools.
  4. Sphinx: An open-source search engine designed for full-text search. Key features include support for multiple data sources, advanced full-text search capabilities, and easy integration with SQL databases. Pros include fast indexing and search speeds, while a con is the lack of built-in real-time search support.
  5. MeiliSearch: An open-source, fast, and relevant search engine. Key features include typo tolerance, filters, and multiple language support. Pros include simplicity and fast search speeds, while a con is the limited scalability compared to other tools.
  6. Algolia: A hosted search API that provides search-as-a-service with instant search and relevance features. Key features include typo tolerance, instant search, and analytics. Pros include ease of use and fast search speeds, while a con is the pricing model based on usage.
  7. Bonsai: A managed Elasticsearch service that provides scalable and reliable search capabilities. Key features include automated Elasticsearch deployment, scalable infrastructure, and data recovery options. Pros include ease of deployment and management, while a con is the dependency on a third-party service.
  8. SearchBlox: An enterprise search solution that offers features such as full-text search, faceted search, and multilingual support. Key features include multilingual search, real-time indexing, and customizable search results. Pros include ease of customization, while a con is the pricing model based on features.
  9. Swiftype: A cloud-based search platform that provides customizable search capabilities for web and mobile applications. Key features include real-time indexing, autocomplete, and analytics. Pros include ease of setup and integration, while a con is the dependency on a third-party service.
  10. OpenSearch: An open-source search and analytics platform derived from Elasticsearch. Key features include full-text search, analytics, and visualizations. Pros include its open-source nature and familiarity for Elasticsearch users, while a con is that it is a relatively young fork of the Elasticsearch project.
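
To make item 1 concrete, here is a minimal sketch of a full-text query against Elasticsearch's REST search API. The index name, field name, and host are placeholder values, not a recommendation for any particular setup.

```python
import requests

# Hypothetical index "articles" with a "title" field.
query = {
    "query": {"match": {"title": "search engine"}},
    "size": 5,
}
resp = requests.post("http://localhost:9200/articles/_search", json=query)
for hit in resp.json()["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["title"])
```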

Top Alternatives to Apache Solr

  • Splunk

    It provides the leading platform for Operational Intelligence. Customers use it to search, monitor, analyze and visualize machine data. ...

  • Lucene

    Lucene Core, our flagship sub-project, provides Java-based indexing and search technology, as well as spellchecking, hit highlighting and advanced analysis/tokenization capabilities. ...

  • Elasticsearch

    Elasticsearch is a distributed, RESTful search and analytics engine capable of storing data and searching it in near real time. Elasticsearch, Kibana, Beats and Logstash are the Elastic Stack (sometimes called the ELK Stack). ...

  • MongoDB

    MongoDB stores data in JSON-like documents that can vary in structure, offering a dynamic, flexible schema. MongoDB was also designed for high availability and scalability, with built-in replication and auto-sharding. ...

  • Apache Spark

    Spark is a fast and general processing engine compatible with Hadoop data. It can run in Hadoop clusters through YARN or Spark's standalone mode, and it can process data in HDFS, HBase, Cassandra, Hive, and any Hadoop InputFormat. It is designed to perform both batch processing (similar to MapReduce) and new workloads like streaming, interactive queries, and machine learning. ...

  • Azure Search

    Azure Search makes it easy to add powerful and sophisticated search capabilities to your website or application. Quickly and easily tune search results and construct rich, fine-tuned ranking models to tie search results to business goals. Reliable throughput and storage provide fast search indexing and querying to support time-sensitive search scenarios. ...

  • Redis

    Redis is an open source (BSD licensed), in-memory data structure store, used as a database, cache, and message broker. Redis provides data structures such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, hyperloglogs, geospatial indexes, and streams. ...

  • Cassandra

    Partitioning means that Cassandra can distribute your data across multiple machines in an application-transparent manner. Cassandra will automatically repartition as machines are added and removed from the cluster. Row store means that, like relational databases, Cassandra organizes data by rows and columns. The Cassandra Query Language (CQL) is a close relative of SQL. ...

Apache Solr alternatives & related posts

Splunk

Search, monitor, analyze and visualize machine data

PROS OF SPLUNK
  • API for searching logs, running reports
  • Alert system based on custom query results
  • Dashboarding on any log contents
  • Custom log parsing as well as automatic parsing
  • Ability to style search results into reports
  • Query engine supports joining, aggregation, stats, etc.
  • Splunk language supports string, date manipulation, math, etc.
  • Rich GUI for searching live logs
  • Query any log as key-value pairs
  • Granular scheduling and time window support
CONS OF SPLUNK
  • Splunk's query language is rich, so there is a lot to learn

related Splunk posts

Shared insights on Splunk and Django

I am designing a Django application for my organization which will be used as an internal tool. The infra team said that I will not have SSH access to the production server and that I will have to log all my backend application messages to Splunk. I have no knowledge of Splunk, so these are the approaches I am considering:

Approach 1: Create an hourly cron job that uploads the server log file to some Splunk storage for later analysis. Is this possible?

Approach 2: Is it possible just to stream the logs to some Splunk endpoint? (If yes, I feel network usage and communication overhead will be a pain point for my application.)

Is there any better or standard approach? Thanks in advance.
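
For what it's worth, Approach 2 usually means sending events to Splunk's HTTP Event Collector (HEC). A minimal sketch, assuming you have an HEC endpoint and token from your Splunk admin (the URL and token below are placeholders), could look like this:

```python
import requests

HEC_URL = "https://splunk.example.com:8088/services/collector/event"  # placeholder
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"                    # placeholder

def send_to_splunk(message, level="INFO"):
    # Each call posts one JSON event to the HTTP Event Collector.
    payload = {"event": {"message": message, "level": level}, "sourcetype": "_json"}
    requests.post(
        HEC_URL,
        json=payload,
        headers={"Authorization": f"Splunk {HEC_TOKEN}"},
        timeout=5,
    )

send_to_splunk("Application started")
```

In practice you would likely wire something like this into Django's LOGGING configuration via a custom logging handler rather than calling it directly.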

Shared insights on Kibana, Splunk, and Grafana

I use Kibana because it ships with the ELK stack. I don't find it as powerful as Splunk; however, it is light years ahead of grepping through log files. We previously used Grafana but found it annoying to maintain a separate tool outside of the ELK stack. We were able to get everything we needed from Kibana.

Lucene

A high-performance, full-featured text search engine library written entirely in Java

PROS OF LUCENE
  • Fast
  • Small
CONS OF LUCENE
  • None listed yet

    related Lucene posts

    Shared insights on Solr and Lucene

    "Slack provides two strategies for searching: Recent and Relevant. Recent search finds the messages that match all terms and presents them in reverse chronological order. If a user is trying to recall something that just happened, Recent is a useful presentation of the results.

    Relevant search relaxes the age constraint and takes into account the Lucene score of the document — how well it matches the query terms (Solr powers search at Slack). Used about 17% of the time, Relevant search performed slightly worse than Recent according to the search quality metrics we measured: the number of clicks per search and the click-through rate of the search results in the top several positions. We recognized that Relevant search could benefit from using the user’s interaction history with channels and other users — their ‘work graph’."
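
As a loose illustration of the two strategies described in the quote (not Slack's actual implementation or schema), a "Recent" query could require all terms and sort by timestamp, while a "Relevant" query could rely on Solr's default relevance (Lucene score) ordering. The collection and field names below are invented for illustration:

```python
import requests

SOLR = "http://localhost:8983/solr/messages/select"  # placeholder collection

# "Recent": match all terms, newest first ("text" and "ts" are hypothetical fields).
recent = requests.get(SOLR, params={
    "q": "text:(quarterly planning)",
    "q.op": "AND",          # require all terms
    "sort": "ts desc",      # reverse chronological order
    "wt": "json",
})

# "Relevant": same terms, ranked by the Lucene score (Solr's default sort).
relevant = requests.get(SOLR, params={
    "q": "text:(quarterly planning)",
    "wt": "json",
})
```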

    Elasticsearch

    Open Source, Distributed, RESTful Search Engine

    PROS OF ELASTICSEARCH
    • Powerful API
    • Great search engine
    • Open source
    • RESTful
    • Near real-time search
    • Free
    • Search everything
    • Easy to get started
    • Analytics
    • Distributed
    • Fast search
    • More than a search engine
    • Great docs
    • Awesome, great tool
    • Highly Available
    • Easy to scale
    • Potato
    • Document Store
    • Great customer support
    • Intuitive API
    • NoSQL DB
    • Great piece of software
    • Reliable
    • Fast
    • Easy setup
    • Open
    • Easy to get hot data
    • GitHub
    • Elasticsearch
    • Actively developing
    • Responsive maintainers on GitHub
    • Ecosystem
    • Not stable
    • Scalability
    • Community
    CONS OF ELASTICSEARCH
    • Resource hungry
    • Difficult to get started
    • Expensive
    • Hard to keep stable at large scale

    related Elasticsearch posts

    Tim Abbott

    We've been using PostgreSQL since the very early days of Zulip, but we actually didn't use it from the beginning. Zulip started out as a MySQL project back in 2012, because we'd heard it was a good choice for a startup with a wide community. However, we found that even though we were using the Django ORM for most of our database access, we spent a lot of time fighting with MySQL. Issues ranged from bad collation defaults to bad query plans that required a lot of manual query tweaks.

    We ended up getting so frustrated that we tried out PostgreSQL, and the results were fantastic. We didn't have to do any real customization (just some tuning settings for how big a server we had), and all of our most important queries were faster out of the box. As a result, we were able to delete a bunch of custom queries escaping the ORM that we'd written to make the MySQL query planner happy (because Postgres just did the right thing automatically).

    And then after that, we've just gotten a ton of value out of Postgres. We use its excellent built-in full-text search, which has helped us avoid needing to bring in a tool like Elasticsearch, and we've really enjoyed features like its partial indexes, which saved us a lot of work adding unnecessary extra tables to get good performance for things like our "unread messages" and "starred messages" indexes.
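
    For readers unfamiliar with the two Postgres features mentioned here, this is a minimal sketch of a partial index and a built-in full-text search query, run through psycopg2; the table and column names are invented for illustration, not Zulip's schema.

```python
import psycopg2

conn = psycopg2.connect("dbname=app")  # placeholder connection string
with conn, conn.cursor() as cur:
    # Partial index: only index rows that are still unread, keeping the index small.
    cur.execute("""
        CREATE INDEX IF NOT EXISTS idx_unread_messages
        ON user_message (user_id, message_id)
        WHERE NOT read
    """)
    # Built-in full-text search: match message bodies against a query string.
    cur.execute("""
        SELECT id, content
        FROM message
        WHERE to_tsvector('english', content) @@ plainto_tsquery('english', %s)
        LIMIT 10
    """, ("starred messages",))
    print(cur.fetchall())
```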

    I can't recommend it highly enough.

    Tymoteusz Paul
    Devops guy at X20X Development LTD · 23 upvotes · 8.3M views

    Often enough I have to explain my way of going about setting up a CI/CD pipeline with multiple deployment platforms. Since I am a bit tired of yapping the same every single time, I've decided to write it up and share it with the world this way, and send people to read it instead ;). I will explain it on a "live example" of how Rome got built, given that the current methodology consists only of a readme.md and wishes of good luck (as it usually is ;)).

    It always starts with an app, whatever it may be, and reading the readmes available while Vagrant and VirtualBox are installing and updating. Following that is the first hurdle to go over - convert all the instructions/scripts into Ansible playbook(s), and only stop when a clean vagrant up or vagrant reload gives us a fully working environment. As our Vagrant environment is now functional, it's time to break it! This is the moment to look for how things can be done better (too rigid/too loose versioning? Sloppy environment setup?) and replace them with the right way to do stuff, one that won't bite us in the backside. This is the point, and the best opportunity, to upcycle the existing way of doing the dev environment to produce a proper, production-grade product.

    I should probably digress here for a moment and explain why. I firmly believe that the way you deploy to production is the same way you should deploy to develop, shy of a few debugging-friendly settings. This way you avoid the discrepancy between how production works vs how development works, which almost always causes major pains in the back of the neck, and with the use of proper tools should mean no more work for the developers. That's why we start with Vagrant, as developer boxes should be as easy as vagrant up, but the meat of our product lies in Ansible, which will do the meat of the work and can be applied to almost anything: AWS, bare metal, docker, LXC, in open net, behind vpn - you name it.

    We must also give proper consideration to monitoring and log hoovering at this point. My generic answer here is to grab Elasticsearch, Kibana, and Logstash. While for different use cases there may be better solutions, this one is well battle-tested, performs reasonably, and is very easy to scale both vertically (within some limits) and horizontally. Logstash rules are easy to write and are well supported in maintenance through Ansible, which as I've mentioned earlier is at the very core of things, and creating triggers/reports and alerts based on Elastic and Kibana is generally a breeze, including some quite complex aggregations.

    If we are happy with the state of the Ansible setup, it's time to move on and put all those roles and playbooks to work. Namely, we need something to manage our CI/CD pipelines. For me, the choice is obvious: TeamCity. It's modern, robust and, unlike most of the light-weight alternatives, it's transparent. What I mean by that is that it doesn't tell you how to do things, doesn't limit your ways to deploy, or test, or package for that matter. Instead, it provides a developer-friendly and rich playground for your pipelines. You can do most of the same with Jenkins, but it has a quite dated look and feel to it, while also missing some key functionality that must be brought in via plugins (like a quality REST API, which comes built-in with TeamCity). It also comes with all the common handy plugins like Slack or Apache Maven integration.

    The exact flow between CI and CD varies too greatly from one application to another to describe, so I will outline a few rules that guide me in it: 1. Make build steps as small as possible. This way, when something breaks, we know exactly where, without needing to dig and root around. 2. All security credentials besides the development environment must be sourced from individual Vault instances. Keys to those containers should exist only on the CI/CD box and be accessible by a few people (the fewer the better). This is pretty self-explanatory, as anything besides dev may contain sensitive data and, at times, be public-facing. Because of that, appropriate security must be present. TeamCity shines in this department with excellent secrets management. 3. Every part of the build chain shall consume and produce artifacts. If it creates nothing, it likely shouldn't be its own build. This way, if any issue shows up with any environment or version, all a developer has to do is grab the appropriate artifacts to reproduce the issue locally. 4. Deployment builds should be directly tied to specific Git branches/tags. This enables much easier tracking of what caused an issue, including automatically identifying and tagging the author (nothing like automated regression testing!).

    Speaking of deployments, I generally try to keep it simple but also keep a close eye on the wallet. Because of that, I am more than happy with AWS or another cloud provider, but I am also constantly peeking at the loads and whether we get the value of what we are paying for. Often enough the pattern of use is not constantly erratic, but rather has a firm baseline which could be migrated away from the cloud and onto bare metal boxes. That is another part where this approach strongly triumphs over the common Docker and CircleCI setup, where you are very much tied into using cloud providers and getting out is expensive. Here, to embrace bare-metal hosting all you need is the help of some container-based self-hosting software; my personal preference is Proxmox and LXC. Following that, all you must write are Ansible scripts to manage the hardware of Proxmox, in a similar way as you do for Amazon EC2 (Ansible supports both greatly), and you are good to go. One does not exclude the other, quite the opposite, as they can live in great synergy and cut your costs dramatically (the heavier your base load, the bigger the savings) while providing production-grade resiliency.

    MongoDB

    The database for giant ideas

    PROS OF MONGODB
    • Document-oriented storage
    • NoSQL
    • Ease of use
    • Fast
    • High performance
    • Free
    • Open source
    • Flexible
    • Replication & high availability
    • Easy to maintain
    • Querying
    • Easy scalability
    • Auto-sharding
    • High availability
    • Map/reduce
    • Document database
    • Easy setup
    • Full index support
    • Reliable
    • Fast in-place updates
    • Agile programming, flexible, fast
    • No database migrations
    • Easy integration with Node.js
    • Enterprise
    • Enterprise Support
    • Great NoSQL DB
    • Support for many languages through different drivers
    • Drivers support is good
    • Aggregation Framework
    • Schemaless
    • Fast
    • Managed service
    • Easy to Scale
    • Awesome
    • Consistent
    • Good GUI
    • ACID compliant
    CONS OF MONGODB
    • Very slow for connected models that require joins
    • Not ACID compliant
    • Proprietary query language

    related MongoDB posts

    Shared insights on Node.js, GraphQL, and MongoDB

    I just finished the very first version of my new hobby project: #MovieGeeks. It is a minimalist online movie catalog for you to save the movies you want to see and rate the movies you have already seen. This is just the beginning, as I am planning to add more features along the lines of sharing and discovery.

    For the #BackEnd I decided to use Node.js, GraphQL and MongoDB:

    1. Node.js has a huge community so it will always be a safe choice in terms of libraries and finding solutions to problems you may have

    2. GraphQL because I needed to improve my skills with it and because I was never comfortable with the usual REST approach. I believe GraphQL is a better option as it feels more natural to write APIs, it improves development velocity, by definition it fixes the over-fetching and under-fetching problem that is so common in REST APIs, and on top of that, the community is getting bigger and bigger.

    3. MongoDB was my choice for the database as I already have a lot of experience working with it and because, despite some bad reputation it has acquired in recent months, I still believe it is a powerful database for at least a very long list of use cases, such as the one I needed for my website.

    Vaibhav Taunk
    Team Lead at Technovert · 31 upvotes · 3.9M views

    I am starting to become a full-stack developer by choosing and learning .NET Core for API development, Angular CLI / React for UI development, MongoDB for the database (as it is a NoSQL DB), and Flutter / React Native for mobile app development. Using Postman, Markdown and Visual Studio Code for development.

    Apache Spark

    Fast and general engine for large-scale data processing

    PROS OF APACHE SPARK
    • Open-source
    • Fast and Flexible
    • One platform for every big data problem
    • Great for distributed SQL-like applications
    • Easy to install and to use
    • Works well for most data science use cases
    • Interactive Query
    • Machine learning library, streaming in real time
    • In-memory computation
    CONS OF APACHE SPARK
    • Speed

    related Apache Spark posts

    Conor Myhrvold
    Tech Brand Mgr, Office of CTO at Uber · 44 upvotes · 10.1M views

    How Uber developed the open source, end-to-end distributed tracing system Jaeger, now a CNCF project:

    Distributed tracing is quickly becoming a must-have component in the tools that organizations use to monitor their complex, microservice-based architectures. At Uber, our open source distributed tracing system Jaeger saw large-scale internal adoption throughout 2016, integrated into hundreds of microservices and now recording thousands of traces every second.

    Here is the story of how we got here, from investigating off-the-shelf solutions like Zipkin, to why we switched from pull to push architecture, and how distributed tracing will continue to evolve:

    https://eng.uber.com/distributed-tracing/

    (GitHub Pages: https://www.jaegertracing.io/, GitHub: https://github.com/jaegertracing/jaeger)

    Bindings/Operator: Python, Java, Node.js, Go, C++, Kubernetes, JavaScript, OpenShift, C#, Apache Spark

    Eric Colson
    Chief Algorithms Officer at Stitch Fix · 21 upvotes · 6.1M views

    The algorithms and data infrastructure at Stitch Fix is housed in #AWS. Data acquisition is split between events flowing through Kafka and periodic snapshots of PostgreSQL DBs. We store data in an Amazon S3 based data warehouse. Apache Spark on Yarn is our tool of choice for data movement and #ETL. Because our storage layer (S3) is decoupled from our processing layer, we are able to scale our compute environment very elastically. We have several semi-permanent, autoscaling Yarn clusters running to serve our data processing needs. While the bulk of our compute infrastructure is dedicated to algorithmic processing, we also implemented Presto for ad hoc queries and dashboards.
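
    As a generic illustration of this S3-plus-Spark pattern (not Stitch Fix's actual pipeline; the bucket, paths, and column names below are placeholders), a PySpark job that reads raw events from S3, aggregates them, and writes the result back might look like this:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Read raw events from the S3-backed data warehouse (storage decoupled from compute).
events = spark.read.parquet("s3a://example-bucket/raw/events/")

# A simple transformation: daily counts per event type.
daily_counts = (
    events
    .withColumn("day", F.to_date("event_ts"))
    .groupBy("day", "event_type")
    .count()
)

# Write the derived table back to S3 for downstream consumers.
daily_counts.write.mode("overwrite").parquet("s3a://example-bucket/derived/daily_counts/")
```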

    Beyond data movement and ETL, most #ML centric jobs (e.g. model training and execution) run in a similarly elastic environment as containers running Python and R code on Amazon EC2 Container Service clusters. The execution of batch jobs on top of ECS is managed by Flotilla, a service we built in house and open sourced (see https://github.com/stitchfix/flotilla-os).

    At Stitch Fix, algorithmic integrations are pervasive across the business. We have dozens of data products actively integrated into systems. That requires a serving layer that is robust, agile, flexible, and allows for self-service. Models produced on Flotilla are packaged for deployment in production using Khan, another framework we've developed internally. Khan provides our data scientists the ability to quickly productionize those models they've developed with open source frameworks in Python 3 (e.g. PyTorch, sklearn), by automatically packaging them as Docker containers and deploying them to Amazon ECS. This provides our data scientists a one-click method of getting from their algorithms to production. We then integrate those deployments into a service mesh, which allows us to A/B test various implementations in our product.

    For more info:

    #DataScience #DataStack #Data

    Azure Search

    Search-as-a-service for web and mobile app development

    PROS OF AZURE SEARCH
    • Easy to set up
    • Auto-Scaling
    • Managed
    • Easy Setup
    • More languages
    • Lucene-based search criteria
    CONS OF AZURE SEARCH
    • None listed yet

      related Azure Search posts

      Redis

      Open source (BSD licensed), in-memory data structure store

      PROS OF REDIS
      • Performance
      • Super fast
      • Ease of use
      • In-memory cache
      • Advanced key-value cache
      • Open source
      • Easy to deploy
      • Stable
      • Free
      • Fast
      • High-Performance
      • High Availability
      • Data Structures
      • Very Scalable
      • Replication
      • Great community
      • Pub/Sub
      • "NoSQL" key-value data store
      • Hashes
      • Sets
      • Sorted Sets
      • NoSQL
      • Lists
      • Async replication
      • BSD licensed
      • Bitmaps
      • Integrates super easy with Sidekiq for Rails background
      • Keys with a limited time-to-live
      • Open Source
      • Lua scripting
      • Strings
      • Awesomeness for Free
      • Hyperloglogs
      • Transactions
      • Outstanding performance
      • Runs server-side Lua
      • LRU eviction of keys
      • Feature Rich
      • Written in ANSI C
      • Networked
      • Data structure server
      • Performance & ease of use
      • Don't save data if no subscribers are found
      • Automatic failover
      • Easy to use
      • Temporarily kept on disk
      • Scalable
      • Existing Laravel Integration
      • Channels concept
      • Object [key/value] size each 500 MB
      • Simple
      CONS OF REDIS
      • Cannot query objects directly
      • No secondary indexes for non-numeric data types
      • No WAL

      related Redis posts

      Russel Werner
      Lead Engineer at StackShare · 32 upvotes · 2.2M views

      StackShare Feed is built entirely with React, Glamorous, and Apollo. One of our objectives with the public launch of the Feed was to enable a Server-side rendered (SSR) experience for our organic search traffic. When you visit the StackShare Feed, and you aren't logged in, you are delivered the Trending feed experience. We use an in-house Node.js rendering microservice to generate this HTML. This microservice needs to run and serve requests independent of our Rails web app. Up until recently, we had a mono-repo with our Rails and React code living happily together and all served from the same web process. In order to deploy our SSR app into a Heroku environment, we needed to split out our front-end application into a separate repo in GitHub. The driving factor in this decision was mostly due to limitations imposed by Heroku specifically with how processes can't communicate with each other. A new SSR app was created in Heroku and linked directly to the frontend repo so it stays in-sync with changes.

      Related to this, we needed a way to "deploy" our frontend changes to various server environments without building & releasing the entire Ruby application. We built a hybrid Amazon S3 / Amazon CloudFront solution to host our Webpack bundles. A new CircleCI script builds the bundles and uploads them to S3. The final step in our rollout is to update some keys in Redis so our Rails app knows which bundles to serve. The results of these efforts were significant. Our frontend team now moves independently of our backend team, our build & release process takes only a few minutes, we are now using an edge CDN to serve JS assets, and we have pre-rendered React pages!
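
      The Redis step mentioned above could be as small as a couple of key writes; here is a minimal sketch with redis-py, where the key names and URLs are invented for illustration rather than taken from StackShare's setup:

```python
import redis

r = redis.Redis(host="localhost", port=6379)

# After CI uploads the new Webpack bundles to S3/CloudFront, point the app at them.
r.set("frontend:bundle:main", "https://cdn.example.com/bundles/main.abc123.js")
r.set("frontend:bundle:vendor", "https://cdn.example.com/bundles/vendor.def456.js")

# The web app reads these keys at render time to decide which bundles to serve.
print(r.get("frontend:bundle:main"))
```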

      #StackDecisionsLaunch #SSR #Microservices #FrontEndRepoSplit

      Simon Reymann
      Senior Fullstack Developer at QUANTUSflow Software GmbH · 30 upvotes · 9.2M views

      Our whole DevOps stack consists of the following tools:

      • GitHub (incl. GitHub Pages/Markdown for Documentation, GettingStarted and HowTos) as collaborative review and code management tool
      • Respectively Git as revision control system
      • SourceTree as Git GUI
      • Visual Studio Code as IDE
      • CircleCI for continuous integration (automatize development process)
      • Prettier / TSLint / ESLint as code linter
      • SonarQube as quality gate
      • Docker as container management (incl. Docker Compose for multi-container application management)
      • VirtualBox for operating system simulation tests
      • Kubernetes as cluster management for docker containers
      • Heroku for deploying in test environments
      • nginx as web server (preferably used as facade server in production environment)
      • SSLMate (using OpenSSL) for certificate management
      • Amazon EC2 (incl. Amazon S3) for deploying in stage (production-like) and production environments
      • PostgreSQL as preferred database system
      • Redis as preferred in-memory database/store (great for caching)

      The main reason we have chosen Kubernetes over Docker Swarm is related to the following artifacts:

      • Key features: Easy and flexible installation, Clear dashboard, Great scaling operations, Monitoring is an integral part, Great load balancing concepts, Monitors the condition and ensures compensation in the event of failure.
      • Applications: An application can be deployed using a combination of pods, deployments, and services (or micro-services).
      • Functionality: Kubernetes has a complex installation and setup process, but it is not as limited as Docker Swarm.
      • Monitoring: It supports multiple versions of logging and monitoring when the services are deployed within the cluster (Elasticsearch/Kibana (ELK), Heapster/Grafana, Sysdig cloud integration).
      • Scalability: All-in-one framework for distributed systems.
      • Other Benefits: Kubernetes is backed by the Cloud Native Computing Foundation (CNCF), huge community among container orchestration tools, it is an open source and modular tool that works with any OS.
      Cassandra

      A partitioned row store. Rows are organized into tables with a required primary key.

      PROS OF CASSANDRA
      • Distributed
      • High performance
      • High availability
      • Easy scalability
      • Replication
      • Reliable
      • Multi datacenter deployments
      • Schema optional
      • OLTP
      • Open source
      • Workload separation (via MDC)
      • Fast
      CONS OF CASSANDRA
      • Reliability of replication
      • Size
      • Updates

      related Cassandra posts

      Thierry Schellenbach
      Shared insights on Golang, Python, and Cassandra

      After years of optimizing our existing feed technology, we decided to make a larger leap with 2.0 of Stream. While the first iteration of Stream was powered by Python and Cassandra, for Stream 2.0 we switched our infrastructure to Go.

      The main reason why we switched from Python to Go is performance. Certain features of Stream such as aggregation, ranking and serialization were very difficult to speed up using Python.

      We’ve been using Go since March 2017 and it’s been a great experience so far. Go has greatly increased the productivity of our development team. Not only has it improved the speed at which we develop, it’s also 30x faster for many components of Stream. Initially we struggled a bit with package management for Go. However, using Dep together with the VG package contributed to creating a great workflow.

      Go as a language is heavily focused on performance. The built-in PPROF tool is amazing for finding performance issues. Uber’s Go-Torch library is great for visualizing data from PPROF and will be bundled in PPROF in Go 1.10.

      The performance of Go greatly influenced our architecture in a positive way. With Python we often found ourselves delegating logic to the database layer purely for performance reasons. The high performance of Go gave us more flexibility in terms of architecture. This led to a huge simplification of our infrastructure and a dramatic improvement of latency. For instance, we saw a 10 to 1 reduction in web-server count thanks to the lower memory and CPU usage for the same number of requests.

      #DataStores #Databases

      Thierry Schellenbach
      Shared insights on Redis, Cassandra, and RocksDB

      1.0 of Stream leveraged Cassandra for storing the feed. Cassandra is a common choice for building feeds. Instagram, for instance, started out with Redis but eventually switched to Cassandra to handle their rapid usage growth. Cassandra can handle write-heavy workloads very efficiently.

      Cassandra is a great tool that allows you to scale write capacity simply by adding more nodes, though it is also very complex. This complexity made it hard to diagnose performance fluctuations. Even though we had years of experience with running Cassandra, it still felt like a bit of a black box. When building Stream 2.0 we decided to go for a different approach and build Keevo. Keevo is our in-house key-value store built upon RocksDB, gRPC and Raft.

      RocksDB is a highly performant embeddable database library developed and maintained by Facebook’s data engineering team. RocksDB started as a fork of Google’s LevelDB that introduced several performance improvements for SSD. Nowadays RocksDB is a project on its own and is under active development. It is written in C++ and it’s fast. Have a look at how this benchmark handles 7 million QPS. In terms of technology it’s much more simple than Cassandra.

      This translates into reduced maintenance overhead, improved performance and, most importantly, more consistent performance. It’s interesting to note that LinkedIn also uses RocksDB for their feed.

      #InMemoryDatabases #DataStores #Databases
