Introducing self-managed data sources for Amazon OpenSearch Ingestion


Enterprise customers increasingly adopt Amazon OpenSearch Ingestion (OSI) to bring data into Amazon OpenSearch Service for various use cases. These include petabyte-scale log analytics, real-time streaming, security analytics, and searching semi-structured key-value or document data. OSI makes it straightforward, with simple integrations, to ingest data from many AWS services, including Amazon DynamoDB, Amazon Simple Storage Service (Amazon S3), Amazon Managed Streaming for Apache Kafka (Amazon MSK), and Amazon DocumentDB (with MongoDB compatibility).

Today we're announcing support for ingesting data from self-managed OpenSearch/Elasticsearch and self-managed Apache Kafka clusters. These sources can be either on Amazon Elastic Compute Cloud (Amazon EC2) or in on-premises environments.

In this post, we outline the steps to get started with these sources.

Solution overview

OSI supports the AWS Cloud Development Kit (AWS CDK), AWS CloudFormation, the AWS Command Line Interface (AWS CLI), Terraform, AWS APIs, and the AWS Management Console to deploy pipelines. In this post, we use the console to demonstrate how to create a self-managed Kafka pipeline.

Prerequisites

To make sure OSI can connect and read data successfully, the following conditions should be met:

  • Network connectivity to data sources – OSI is typically deployed in a public network, such as the internet, or in a virtual private cloud (VPC). OSI deployed in a customer VPC is able to access data sources in the same or a different VPC and on the internet with an attached internet gateway. If your data sources are in another VPC, common methods for network connectivity include direct VPC peering, using a transit gateway, or using customer managed VPC endpoints powered by AWS PrivateLink. If your data sources are in your corporate data center or another on-premises environment, common methods for network connectivity include AWS Direct Connect and using a network hub like a transit gateway. The following diagram shows a sample configuration of OSI running in a VPC and using Amazon OpenSearch Service as a sink. OSI runs in a service VPC and creates an elastic network interface (ENI) in the customer VPC. For self-managed data sources, these ENIs are used for reading data from the on-premises environment. OSI creates a VPC endpoint in the service VPC to send data to the sink.
  • Name resolution for data sources – OSI uses an Amazon Route 53 resolver. This resolver automatically answers queries to names local to a VPC, public domains on the internet, and records hosted in private hosted zones. If you're using a private hosted zone, make sure you have a DHCP option set enabled and attached to the VPC, using AmazonProvidedDNS as the domain name server. For more information, see Work with DHCP option sets. Additionally, you can use resolver inbound and outbound endpoints if you need a complex resolution scheme with conditions that are beyond a simple private hosted zone.
  • Certificate verification for data source names – OSI supports only SASL_SSL for transport for the Apache Kafka source. Within SASL, Amazon OpenSearch Service supports most authentication mechanisms, like PLAIN, SCRAM, IAM, GSSAPI, and others. When using SASL_SSL, make sure you have access to the certificates needed for OSI to authenticate. For self-managed OpenSearch data sources, make sure verifiable certificates are installed on the clusters. Amazon OpenSearch Service doesn't support insecure communication between OSI and OpenSearch. Certificate verification can't be turned off. In particular, the "insecure" configuration option is not supported.
  • Access to AWS Secrets Manager – OSI uses AWS Secrets Manager to retrieve credentials and certificates needed to communicate with self-managed data sources. For more information, see Create and manage secrets with AWS Secrets Manager.
  • IAM role for pipelines – You need an AWS Identity and Access Management (IAM) pipeline role to write to data sinks. For more information, see Identity and Access Management for Amazon OpenSearch Ingestion.
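The Secrets Manager prerequisite is easiest to satisfy with a JSON secret holding the fields your pipeline references. The following sketch (Python, standard library only; the secret name and credential values are placeholder assumptions, not real defaults) builds such a payload, with the equivalent AWS CLI call shown as a comment:

```python
import json

# Hypothetical SASL/PLAIN credentials for the Kafka listener -- the secret
# name ("secrets") and the values below are placeholders, not real defaults.
credentials = {"username": "osi-reader", "password": "example-password"}
secret_string = json.dumps(credentials)

# Stored via Secrets Manager, for example:
#   aws secretsmanager create-secret --name secrets \
#       --secret-string "$(cat kafka-credentials.json)"
# OSI can then pull individual keys out of the JSON secret with the
# ${{aws_secrets:<secret-name>:<json-key>}} syntax used in pipeline YAML.
parsed = json.loads(secret_string)
print(parsed["username"])
```

A single secret with both keys keeps the pipeline YAML short, because one secret declaration serves both the username and password references.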

Create a pipeline with self-managed Kafka as a source

After you complete the prerequisites, you're ready to create a pipeline for your data source. Complete the following steps:

  1. On the OpenSearch Service console, choose Pipelines under Ingestion in the navigation pane.
  2. Choose Create pipeline.
  3. Choose Streaming under Use case in the navigation pane.
  4. Select Self managed Apache Kafka under Ingestion pipeline blueprints and choose Select blueprint.

This will populate a sample configuration for this pipeline.

  5. Provide a name for this pipeline and choose the appropriate pipeline capacity.
  6. Under Pipeline configuration, provide your pipeline configuration in YAML format. The following code snippet shows a sample configuration in YAML for SASL_SSL authentication:
    version: '2'
    kafka-pipeline:
      source:
        kafka:
          acknowledgments: true
          bootstrap_servers:
            - 'node-0.example.com:9092'
          encryption:
            type: "ssl"
            certificate: '${{aws_secrets:kafka-cert}}'
          authentication:
            sasl:
              plain:
                username: '${{aws_secrets:secrets:username}}'
                password: '${{aws_secrets:secrets:password}}'
          topics:
            - name: on-prem-topic
              group_id: osi-group-1
      processor:
        - grok:
            match:
              message:
                - '%{COMMONAPACHELOG}'
        - date:
            destination: '@timestamp'
            from_time_received: true
      sink:
        - opensearch:
            hosts: ["https://search-domain-12345567890.us-east-1.es.amazonaws.com"]
            aws:
              region: us-east-1
              sts_role_arn: 'arn:aws:iam::123456789012:role/pipeline-role'
            index: "on-prem-kafka-index"
    extension:
      aws:
        secrets:
          kafka-cert:
            secret_id: kafka-cert
            region: us-east-1
            sts_role_arn: 'arn:aws:iam::123456789012:role/pipeline-role'
          secrets:
            secret_id: secrets
            region: us-east-1
            sts_role_arn: 'arn:aws:iam::123456789012:role/pipeline-role'

  7. Choose Validate pipeline and confirm there are no errors.
  8. Under Network configuration, choose Public access or VPC access. (For this post, we choose VPC access.)
  9. If you chose VPC access, specify your VPC, subnets, and an appropriate security group so OSI can reach the outgoing ports for the data source.
  10. Under VPC attachment options, select Attach to VPC and choose an appropriate CIDR range.

OSI resources are created in a service VPC managed by AWS that is separate from the VPC you chose in the last step. This selection allows you to configure which CIDR ranges OSI should use inside this service VPC. The choice exists so you can make sure there is no address collision between the CIDR ranges in your VPC that is attached to your on-premises network and this service VPC. Many pipelines in your account can share the same CIDR ranges for this service VPC.
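To check a candidate service VPC range for collisions ahead of time, the standard ipaddress module is enough. The ranges below are hypothetical examples, not defaults that OSI uses:

```python
import ipaddress

# Hypothetical example: the VPC attached to your on-premises network.
customer_vpc = ipaddress.ip_network("10.0.0.0/16")

# Candidate CIDR ranges for the OSI service VPC attachment.
candidates = ["10.0.128.0/24", "172.31.0.0/24", "192.168.0.0/24"]

# Keep only candidates that do not collide with the customer VPC range;
# 10.0.128.0/24 falls inside 10.0.0.0/16, so it is rejected.
safe = [c for c in candidates
        if not ipaddress.ip_network(c).overlaps(customer_vpc)]
print(safe)
```

Running the same check against every VPC your on-premises network can reach (not just the attached one) avoids surprises when routes are shared through a transit gateway.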

  11. Specify any optional tags and log publishing options, then choose Next.
  12. Review the configuration and choose Create pipeline.

You can monitor the pipeline creation and any log messages in the Amazon CloudWatch Logs log group you specified. Your pipeline should now be successfully created. For more information about how to provision capacity for the performance of this pipeline, see the section Recommended Compute Units (OCUs) for the MSK pipeline in Introducing Amazon MSK as a source for Amazon OpenSearch Ingestion.
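The console's Validate pipeline step catches most configuration mistakes, but a common one is a secret reference with no matching declaration. A quick local sanity check is possible with only the Python standard library; the abbreviated YAML string and the regexes below are illustrative, not the service's own validation:

```python
import re

# Abbreviated pipeline YAML along the lines of the Kafka sample in this post.
pipeline_yaml = """
kafka-pipeline:
  source:
    kafka:
      encryption:
        type: "ssl"
        certificate: '${{aws_secrets:kafka-cert}}'
      authentication:
        sasl:
          plain:
            username: '${{aws_secrets:secrets:username}}'
            password: '${{aws_secrets:secrets:password}}'
extension:
  aws:
    secrets:
      kafka-cert:
        secret_id: kafka-cert
      secrets:
        secret_id: secrets
"""

# References look like ${{aws_secrets:<name>}} or ${{aws_secrets:<name>:<key>}};
# the first segment must name a secret declared under extension.aws.secrets.
referenced = {m.group(1)
              for m in re.finditer(r"\$\{\{aws_secrets:([^:}]+)", pipeline_yaml)}

# Crude extraction of declared secret names: keys directly above a secret_id.
declared = set(re.findall(r"([\w-]+):\n\s+secret_id:", pipeline_yaml))

missing = referenced - declared
print("referenced:", sorted(referenced))
print("missing:", sorted(missing))
```

An empty `missing` set means every secret reference in the pipeline body has a corresponding entry in the extension section.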

Create a pipeline with self-managed OpenSearch as a source

The steps for creating a pipeline for self-managed OpenSearch are similar to the steps for creating one for Kafka. During the blueprint selection, choose Data Migration under Use case and select Self managed OpenSearch/Elasticsearch. OpenSearch Ingestion can source data from all versions of OpenSearch and from Elasticsearch version 7.0 to version 7.10.

The following blueprint shows a sample configuration YAML for this data source:

version: "2"
opensearch-migration-pipeline:
  source:
    opensearch:
      acknowledgments: true
      hosts: [ "https://node-0.example.com:9200" ]
      username: "${{aws_secrets:secret:username}}"
      password: "${{aws_secrets:secret:password}}"
      indices:
        include:
          - index_name_regex: "opensearch_dashboards_sample_data*"
        exclude:
          - index_name_regex: '\..*'
  sink:
    - opensearch:
        hosts: [ "https://search-domain-12345567890.us-east-1.es.amazonaws.com" ]
        aws:
          sts_role_arn: "arn:aws:iam::123456789012:role/pipeline-role"
          region: "us-east-1"
        index: "on-prem-os"
extension:
  aws:
    secrets:
      secret:
        secret_id: "self-managed-os-credentials"
        region: "us-east-1"
        sts_role_arn: "arn:aws:iam::123456789012:role/pipeline-role"
        refresh_interval: PT1H
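The `indices` block selects source indexes by regular expression, with the exclude patterns (here intended to drop system indexes whose names begin with a dot) applied on top of the includes. A small illustration of how such patterns interact, approximated with Python's `re` under full-match semantics — the include pattern is widened to `.*` so the sample names match, and the service's own regex anchoring may differ:

```python
import re

# Hypothetical index names on the source cluster.
indexes = [
    "opensearch_dashboards_sample_data_logs",
    "opensearch_dashboards_sample_data_flights",
    ".kibana_1",          # system index, name starts with a dot
    "application-logs",
]

# Illustrative patterns, not the service's exact matching rules.
include = [r"opensearch_dashboards_sample_data.*"]
exclude = [r"\..*"]

def selected(name: str) -> bool:
    """An index is read when it matches an include pattern and no exclude pattern."""
    return (any(re.fullmatch(p, name) for p in include)
            and not any(re.fullmatch(p, name) for p in exclude))

readable = [n for n in indexes if selected(n)]
print(readable)
```

Only the two sample-data indexes survive: the dot-prefixed system index is excluded, and `application-logs` never matches the include pattern.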

Considerations for the self-managed OpenSearch data source

Certificates installed on the OpenSearch cluster need to be verifiable for OSI to connect to this data source before reading data. Insecure connections are currently not supported.

After you're connected, make sure the cluster has sufficient read bandwidth to allow OSI to read data. Use the Min and Max OCU settings to limit OSI read bandwidth consumption. Your read bandwidth will vary depending on data volume, number of indexes, and provisioned OCU capacity. Start small and increase the number of OCUs to balance available bandwidth against acceptable migration time.

This source is typically meant for one-time migration of data, not for continuous ingestion to keep data in sync between data sources and sinks.

OpenSearch Service domains support remote reindexing, but that consumes resources on your domains. Using OSI moves this compute out of the domain, and OSI can achieve significantly higher bandwidth than remote reindexing, resulting in faster migration times.

OSI doesn't support deferred replay or traffic recording at this time; refer to Migration Assistant for Amazon OpenSearch Service if your migration needs these capabilities.

Conclusion

In this post, we introduced self-managed sources for OpenSearch Ingestion that enable you to ingest data from corporate data centers or other on-premises environments. OSI also supports various other data sources and integrations. Refer to Working with Amazon OpenSearch Ingestion pipeline integrations to learn about these other data sources.


About the Authors

Muthu Pitchaimani is a Search Specialist with Amazon OpenSearch Service. He builds large-scale search applications and solutions. Muthu is interested in the topics of networking and security, and is based out of Austin, Texas.

Arjun Nambiar is a Product Manager with Amazon OpenSearch Service. He focuses on ingestion technologies that enable ingesting data from a wide variety of sources into Amazon OpenSearch Service at scale. Arjun is interested in large-scale distributed systems and cloud-native technologies, and is based out of Seattle, Washington.
