Migrate from Apache Solr to OpenSearch


OpenSearch is an open-source, distributed search engine suitable for a wide range of use cases such as ecommerce search, enterprise search (content management search, document search, knowledge management search, and so on), site search, application search, and semantic search. It’s also an analytics suite that you can use to perform interactive log analytics, real-time application monitoring, security analytics, and more. Like Apache Solr, OpenSearch provides search across document sets. OpenSearch also includes capabilities to ingest and analyze data. Amazon OpenSearch Service is a fully managed service that you can use to deploy, scale, and monitor OpenSearch in the AWS Cloud.

Many organizations are migrating their Apache Solr based search solutions to OpenSearch. The main driving factors include lower total cost of ownership, scalability, stability, improved ingestion connectors (such as Data Prepper, Fluent Bit, and OpenSearch Ingestion), elimination of external cluster managers like Zookeeper, enhanced reporting, and rich visualizations with OpenSearch Dashboards.

We recommend approaching a Solr to OpenSearch migration with a full refactor of your search solution to optimize it for OpenSearch. While both Solr and OpenSearch use Apache Lucene for core indexing and query processing, the systems exhibit different characteristics. By planning and running a proof of concept, you can ensure the best results from OpenSearch. This blog post dives into the strategic considerations and steps involved in migrating from Solr to OpenSearch.

Key differences

Solr and OpenSearch Service share fundamental capabilities delivered through Apache Lucene. However, there are some key differences in terminology and functionality between the two:

  • Collection and index: In OpenSearch, a collection is called an index.
  • Shard and replica: Both Solr and OpenSearch use the terms shard and replica.
  • API-driven interactions: All interactions in OpenSearch are API-driven, eliminating the need for manual file changes or Zookeeper configuration. When creating an OpenSearch index, you define the mapping (equivalent to the schema) and the settings (equivalent to solrconfig) as part of the index creation API call.

Having set the stage with the basics, let’s dive into the four key components and how each of them can be migrated from Solr to OpenSearch.

Collection to index

A collection in Solr is called an index in OpenSearch. Like a Solr collection, an index in OpenSearch also has shards and replicas.

Although the shard and replica concept is similar in both search engines, you can use this migration as a window to adopt a better sharding strategy. Size your OpenSearch shards, replicas, and index by following shard strategy best practices.

As part of the migration, rethink your data model. In analyzing your data model, you can find efficiencies that dramatically improve your search latencies and throughput. Poor data modeling doesn’t only result in search performance problems but extends to other areas. For example, you might find it challenging to construct an effective query to implement a particular feature. In such cases, the solution often involves modifying the data model.

Differences: Solr allows primary shard and replica shard collocation on the same node. OpenSearch doesn’t place the primary and replica on the same node. OpenSearch Service zone awareness can automatically distribute shards across different Availability Zones (data centers) to further improve resiliency.

The OpenSearch and Solr notions of replica are different. In OpenSearch, you define a primary shard count using number_of_shards, which determines the partitioning of your data. You then set a replica count using number_of_replicas. Each replica is a copy of all the primary shards. So, if you set number_of_shards to 5 and number_of_replicas to 1, you will have 10 shards (5 primary shards and 5 replica shards). Setting replicationFactor=1 in Solr yields one copy of the data (the primary).

For example, the following creates a collection called test with one shard and no replicas.

http://localhost:8983/solr/admin/collections?
  action=CREATE
  &maxShardsPerNode=2
  &name=test
  &numShards=1
  &replicationFactor=1
  &wt=json

In OpenSearch, the following creates an index called test with five shards and one replica.

PUT test
{
  "settings": {
    "number_of_shards": 5,
    "number_of_replicas": 1
  }
}

Schema to mapping

In Solr, schema.xml or managed-schema holds all the field definitions, dynamic fields, and copy fields, along with the field types (text analyzers, tokenizers, or filters). You use the Schema API to manage the schema. Or you can run in schema-less mode.

OpenSearch has dynamic mapping, which behaves like Solr in schema-less mode. It’s not necessary to create an index beforehand to ingest data. By indexing data with a new index name, you create the index with the OpenSearch managed service default settings (for example: "number_of_shards": 5, "number_of_replicas": 1) and a mapping based on the data that’s indexed (dynamic mapping).

We strongly recommend you opt for a pre-defined strict mapping. OpenSearch sets the schema based on the first value it sees in a field. If a stray numeric value is the first value for what should be a string field, OpenSearch will incorrectly map the field as numeric (integer, for example). Subsequent indexing requests with string values for that field will then fail with an incorrect mapping exception. You know your data and your field types; you’ll benefit from setting the mapping directly.
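
The following is a minimal sketch of such a pre-defined strict mapping; the index name products and its fields are hypothetical and only illustrate the pattern. Setting "dynamic": "strict" rejects documents that contain undeclared fields instead of silently extending the mapping.

PUT products
{
  "mappings": {
    "dynamic": "strict",
    "properties": {
      "product_name": { "type": "text" },
      "price": { "type": "float" },
      "in_stock": { "type": "boolean" }
    }
  }
}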

Tip: Consider performing a sample indexing run to generate the initial mapping, and then refine and tidy up that mapping to accurately define the actual index. This approach helps you avoid manually constructing the mapping from scratch.
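
For example, you could index one representative document into a scratch index and read back the mapping that dynamic mapping produced; the index name scratch-index and the document below are hypothetical.

POST scratch-index/_doc
{
  "title": "sample title",
  "age": 32,
  "last_modified": "2023-10-01T12:00:00Z"
}

GET scratch-index/_mapping

You can then copy the generated mapping, correct any field types, and use it in the creation request for the real index.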

For observability workloads, you should consider using Simple Schema for Observability. Simple Schema for Observability (also known as ss4o) is a standard for conforming to a common and unified observability schema. With the schema in place, observability tools can ingest, automatically extract, and aggregate data and create custom dashboards, making it easier to understand the system at a higher level.

Many of the field types (data types), tokenizers, and filters are the same in both Solr and OpenSearch. After all, both use Lucene’s Java search library at their core.

Let’s look at an example:

<!-- Solr schema.xml snippets -->
<field name="id" type="string" indexed="true" stored="true" required="true" multiValued="false" />
<field name="title" type="string" indexed="true" stored="true" multiValued="true"/>
<field name="address" type="text_general" indexed="true" stored="true"/>
<field name="user_token" type="string" indexed="false" stored="true"/>
<field name="age" type="pint" indexed="true" stored="true"/>
<field name="last_modified" type="pdate" indexed="true" stored="true"/>
<field name="city" type="text_general" indexed="true" stored="true"/>

<uniqueKey>id</uniqueKey>

<copyField source="title" dest="text"/>
<copyField source="address" dest="text"/>

<fieldType name="string" class="solr.StrField" sortMissingLast="true" />
<fieldType name="pint" class="solr.IntPointField" docValues="true"/>
<fieldType name="pdate" class="solr.DatePointField" docValues="true"/>

<fieldType name="text_general" class="solr.TextField" positionIncrementGap="100">
<analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.ASCIIFoldingFilterFactory" preserveOriginal="false" />
    <filter class="solr.LowerCaseFilterFactory"/>
</analyzer>
<analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.ASCIIFoldingFilterFactory" preserveOriginal="false" />
    <filter class="solr.LowerCaseFilterFactory"/>
</analyzer>
</fieldType>

The equivalent index settings and mapping in OpenSearch look like the following:

PUT index_from_solr
{
  "settings": {
    "analysis": {
      "analyzer": {
        "text_general": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "asciifolding"
          ]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "title": {
        "type": "keyword",
        "copy_to": "text"
      },
      "address": {
        "type": "text",
        "analyzer": "text_general"
      },
      "user_token": {
        "type": "keyword",
        "index": false
      },
      "age": {
        "type": "integer"
      },
      "last_modified": {
        "type": "date"
      },
      "city": {
        "type": "text",
        "analyzer": "text_general"
      },
      "text": {
        "type": "text",
        "analyzer": "text_general"
      }
    }
  }
}

Notable things in OpenSearch compared to Solr:

  1. _id is always the uniqueKey and can’t be defined explicitly, because it’s always present.
  2. Explicitly enabling multivalued isn’t necessary, because any OpenSearch field can contain zero or more values.
  3. The mapping and the analyzers are defined during index creation. New fields can be added and certain mapping parameters can be updated later. However, deleting a field isn’t possible. A helpful Reindex API can overcome this problem. You can use the Reindex API to index data from one index to another, as shown in the sketch after this list.
  4. By default, analyzers are applied at both index and query time. For some less common scenarios, you can change the query analyzer at search time (in the query itself), which overrides the analyzer defined in the index mapping and settings.
  5. Index templates are also a great way to initialize new indexes with predefined mappings and settings. For example, if you continuously index log data (or any time-series data), you can define an index template so that all the indices have the same number of shards and replicas. Index templates can also be used for dynamic mapping control and component templates.
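
The following is a minimal sketch of the Reindex API mentioned in item 3; the index names old_index and new_index are hypothetical. A typical workflow is to create new_index with the corrected mapping first, then copy the documents into it:

POST _reindex
{
  "source": { "index": "old_index" },
  "dest": { "index": "new_index" }
}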

Look for opportunities to optimize the search solution. For instance, if the analysis reveals that the city field is only used for filtering rather than searching, consider changing its field type to keyword instead of text to eliminate unnecessary text processing. Another optimization could involve disabling doc_values for the user_token field if it’s only meant for display purposes. doc_values are disabled by default for the text datatype.
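
As a sketch of those two optimizations, the affected fields would look like the following in a new index’s mapping (index_from_solr_v2 is a hypothetical name; the remaining fields stay as before, and changing an existing field’s type requires creating a new index and reindexing into it):

PUT index_from_solr_v2
{
  "mappings": {
    "properties": {
      "city": {
        "type": "keyword"
      },
      "user_token": {
        "type": "keyword",
        "index": false,
        "doc_values": false
      }
    }
  }
}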

SolrConfig to settings

In Solr, solrconfig.xml carries the collection configuration: a wide variety of settings covering everything from index location and formatting, caching, codec factory, circuit breakers, commits and tlogs, all the way up to slow query config, request handlers, and the update processing chain.

Let’s look at an example:

<codecFactory class="solr.SchemaCodecFactory">
<str name="compressionMode">BEST_COMPRESSION</str>
</codecFactory>

<autoCommit>
    <maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
    <openSearcher>false</openSearcher>
</autoCommit>

<autoSoftCommit>
    <maxTime>${solr.autoSoftCommit.maxTime:-1}</maxTime>
</autoSoftCommit>

<slowQueryThresholdMillis>1000</slowQueryThresholdMillis>

<maxBooleanClauses>${solr.max.booleanClauses:2048}</maxBooleanClauses>

<requestHandler name="/query" class="solr.SearchHandler">
    <lst name="defaults">
    <str name="echoParams">explicit</str>
    <str name="wt">json</str>
    <str name="indent">true</str>
    <str name="df">text</str>
    </lst>
</requestHandler>

<searchComponent name="spellcheck" class="solr.SpellCheckComponent"/>
<searchComponent name="suggest" class="solr.SuggestComponent"/>
<searchComponent name="elevator" class="solr.QueryElevationComponent"/>
<searchComponent class="solr.HighlightComponent" name="highlight"/>

<queryResponseWriter name="json" class="solr.JSONResponseWriter"/>
<queryResponseWriter name="velocity" class="solr.VelocityResponseWriter" startup="lazy"/>
<queryResponseWriter name="xslt" class="solr.XSLTResponseWriter"/>

<updateRequestProcessorChain name="script"/>

Notable things in OpenSearch compared to Solr:

  1. Both OpenSearch and Solr use the BEST_SPEED codec (LZ4 compression algorithm) by default. Both offer BEST_COMPRESSION as an alternative. Additionally, OpenSearch offers zstd and zstd_no_dict. Benchmarking for the different compression codecs is also available.
  2. For near real-time search, refresh_interval needs to be set. The default is 1 second, which is good enough for most use cases. We recommend increasing refresh_interval to 30 or 60 seconds to improve indexing speed and throughput, especially for batch indexing.
  3. Max boolean clause is a static setting, set at the node level using the indices.query.bool.max_clause_count setting.
  4. You don’t need an explicit requestHandler. All searches use the _search or _msearch endpoint. If you’re used to using a requestHandler with default values, you can use search templates instead.
  5. If you’re used to using the /sql requestHandler, OpenSearch also lets you use SQL syntax for querying and has a Piped Processing Language.
  6. Spellcheck, also known as did-you-mean, query elevation (known as pinned_query in OpenSearch), and highlighting are all supported at query time. You don’t need to explicitly define search components.
  7. Most API responses are limited to JSON format, with the CAT APIs as the only exception. In cases where Velocity or XSLT is used in Solr, it must be handled at the application layer. The CAT APIs respond in JSON, YAML, or CBOR formats.
  8. For the updateRequestProcessorChain, OpenSearch provides the ingest pipeline, allowing the enrichment or transformation of data before indexing. Multiple processor stages can be chained to form a pipeline for data transformation. Processors include GrokProcessor, CSVParser, JSONProcessor, KeyValue, Rename, Split, HTMLStrip, Drop, ScriptProcessor, and more. However, it’s strongly recommended to do the data transformation outside OpenSearch. The best place to do that is OpenSearch Ingestion, which provides a proper framework and various out-of-the-box filters for data transformation. OpenSearch Ingestion is built on Data Prepper, a server-side data collector capable of filtering, enriching, transforming, normalizing, and aggregating data for downstream analytics and visualization.
  9. OpenSearch also introduced search pipelines, similar to ingest pipelines but tailored for search-time operations. Search pipelines make it easier for you to process search queries and search results within OpenSearch. Currently available search processors include filter query, neural query enricher, normalization, rename field, script, and personalize search ranking, with more to come.
  10. refresh_interval and slow logs are set at the index level; the sketch after this list shows how to set them, alongside the other possible index settings.
  11. Slow logs can be configured with much more precision than Solr’s single threshold, with separate thresholds for the query and fetch phases.
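
The following is a minimal sketch of the index-level settings mentioned in items 2, 10, and 11. The index name my-index and the threshold values are illustrative only; adjust them to your workload.

PUT my-index/_settings
{
  "index": {
    "refresh_interval": "30s",
    "search.slowlog.threshold.query.warn": "5s",
    "search.slowlog.threshold.query.info": "2s",
    "search.slowlog.threshold.fetch.warn": "1s",
    "search.slowlog.threshold.fetch.info": "800ms"
  }
}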

Before migrating every configuration setting, assess whether the setting can be adjusted based on your current search system experience and best practices. For instance, in the preceding example, the slow logs threshold of 1 second might be too intensive for logging, so it can be revisited. In the same example, maxBooleanClauses might be another thing to look at and reduce.

Differences: Some settings are made at the cluster level or node level and not at the index level, including settings such as max boolean clause, circuit breaker settings, cache settings, and so on.
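
As an illustration, the max boolean clause limit from item 3 would typically be set in opensearch.yml on each node rather than in index settings (on Amazon OpenSearch Service, node-level settings like this are either managed for you or exposed through the service configuration); the value shown is only an example.

indices.query.bool.max_clause_count: 2048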

Rewriting queries

Rewriting queries deserves its own blog post; however, we want to at least showcase the autocomplete feature available in OpenSearch Dashboards, which helps ease query writing.

Similar to the Solr Admin UI, OpenSearch features a UI called OpenSearch Dashboards. You can use OpenSearch Dashboards to manage and scale your OpenSearch clusters. Additionally, it provides capabilities for visualizing your OpenSearch data, exploring data, monitoring observability, running queries, and so on. The equivalent of the query tab in the Solr UI is Dev Tools in OpenSearch Dashboards. Dev Tools is a development environment that lets you set up your OpenSearch Dashboards environment, run queries, explore data, and debug problems.

Now, let’s construct a query to accomplish the following:

  1. Search for shirt OR shoe in an index.
  2. Create a facet query to find the number of unique customers. Facet queries are called aggregation queries in OpenSearch, also known as aggs queries.

The Solr query would look like this:

http://localhost:8983/solr/solr_sample_data_ecommerce/select?q=shirt OR shoe
  &facet=true
  &facet.field=customer_id
  &facet.limit=-1
  &facet.mincount=1
  &json.facet={
   unique_customer_count:"unique(customer_id)"
  }

The above Solr query can be rewritten into OpenSearch query DSL, which you can run from Dev Tools in OpenSearch Dashboards.
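
The following is a sketch of that rewrite, assuming the documents carry a full-text field named text (the copy_to target from the earlier mapping) and a customer_id keyword field; adjust the field names to match your own mapping. The match query defaults to OR semantics, and the cardinality aggregation returns the approximate number of unique customers.

GET solr_sample_data_ecommerce/_search
{
  "query": {
    "match": {
      "text": "shirt shoe"
    }
  },
  "aggs": {
    "unique_customer_count": {
      "cardinality": {
        "field": "customer_id"
      }
    }
  }
}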

Conclusion

OpenSearch covers a wide variety of use cases, including enterprise search, site search, application search, ecommerce search, semantic search, observability (log observability, security analytics (SIEM), anomaly detection, trace analytics), and analytics. Migration from Solr to OpenSearch is becoming a common pattern. This blog post is designed to be a starting point for teams seeking guidance on such migrations.

You can try out OpenSearch with the OpenSearch Playground. You can get started with Amazon OpenSearch Service, a managed implementation of OpenSearch in the AWS Cloud.


About the Authors

Aswath Srinivasan is a Senior Search Engine Architect at Amazon Web Services, currently based in Munich, Germany. With over 17 years of experience in various search technologies, Aswath currently focuses on OpenSearch. He is a search and open-source enthusiast and helps customers and the search community with their search problems.

Jon Handler is a Senior Principal Solutions Architect at Amazon Web Services based in Palo Alto, CA. Jon works closely with OpenSearch and Amazon OpenSearch Service, providing help and guidance to a broad range of customers who have search and log analytics workloads that they want to move to the AWS Cloud. Prior to joining AWS, Jon’s career as a software developer included four years of coding a large-scale, ecommerce search engine. Jon holds a Bachelor of the Arts from the University of Pennsylvania, and a Master of Science and a PhD in Computer Science and Artificial Intelligence from Northwestern University.
