<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[~Blog]]></title><description><![CDATA[~Blog]]></description><link>https://rohit.cc</link><generator>RSS for Node</generator><lastBuildDate>Sat, 11 Apr 2026 06:35:04 GMT</lastBuildDate><atom:link href="https://rohit.cc/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[All about "Elastic Agent" in Elastic SIEM -  Part 1]]></title><description><![CDATA[Elastic agent is an one stop solution for both collecting of data and also providing endpoint security. Single agent can be configured to collect different forms of data by adding necessary integrations on demand. It is way easier to configure and ma...]]></description><link>https://rohit.cc/all-about-elastic-agent-in-elastic-siem-part-1</link><guid isPermaLink="true">https://rohit.cc/all-about-elastic-agent-in-elastic-siem-part-1</guid><category><![CDATA[elastic-agent]]></category><category><![CDATA[fleet-server]]></category><category><![CDATA[elasticsearch]]></category><category><![CDATA[kibana]]></category><dc:creator><![CDATA[Rohit]]></dc:creator><pubDate>Mon, 27 Jan 2025 03:44:11 GMT</pubDate><content:encoded><![CDATA[<p><a target="_blank" href="https://www.elastic.co/elastic-agent">Elastic agent</a> is an one stop solution for both collecting of data and also providing endpoint security. Single agent can be configured to collect different forms of data by adding necessary integrations on demand. It is way easier to configure and manage as all the chaos are handled by the stack itself.’</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1737949147728/053d0e0e-a739-4693-9f51-d1a448435bde.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-components-needed-for-a-basic-siem-setup-in-elastic-stack">Components needed for a basic SIEM setup in Elastic stack</h3>
<ul>
<li><p>Elasticsearch</p>
</li>
<li><p>Kibana</p>
</li>
<li><p>Elastic Agent</p>
</li>
<li><p>Fleet server</p>
</li>
</ul>
<p>Elasticsearch is the brain of the Elastic Stack. Kibana is a powerful, easy-to-use UI where we can view and visualize the data stored in Elasticsearch. Elastic Agent and Fleet Server are the components we will focus on from here on.</p>
<h2 id="heading-types-of-elastic-agent-based-on-installation">Types of Elastic Agent (Based on installation):</h2>
<ul>
<li><p><strong>Fleet managed Elastic Agent</strong></p>
<p>  The Elastic Agent is enrolled with a Fleet Server, through which policy configuration and updates can be performed remotely.</p>
</li>
<li><p><strong>Standalone Elastic Agent</strong></p>
<p>  The Elastic Agent runs on its own. To configure it, we must manually edit the elastic-agent.yml file with the updated configs. This is tiresome work, so fleet-managed agents are generally recommended.</p>
</li>
</ul>
<h2 id="heading-building-blocks-of-elastic-agent">Building blocks of Elastic Agent</h2>
<ul>
<li><p>Policy</p>
</li>
<li><p>Outputs</p>
</li>
<li><p>Integrations</p>
</li>
</ul>
<h2 id="heading-policy">Policy</h2>
<p>A policy is the configuration for an Elastic Agent. Agent policies define the integrations you want to run and the hosts they should run on, and a single policy can be assigned to multiple agents, simplifying large-scale configuration management. The policy specifies which data must be collected, where it must be fetched from, what preprocessing must be performed, and where to send the collected data. If the agent is fleet-managed, details about the fleet are also included.</p>
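<p>As a rough sketch, a standalone agent's policy in elastic-agent.yml ties inputs to outputs. Everything below is illustrative: the input type, id, paths, and hosts are assumptions, not values from this post.</p>

```yaml
# Minimal illustrative sketch of a standalone elastic-agent.yml policy.
outputs:
  default:
    type: elasticsearch
    hosts: ["https://localhost:9200"]   # assumed local cluster
inputs:
  - type: logfile                       # hypothetical log input
    id: system-logs                     # hypothetical input id
    streams:
      - paths:
          - /var/log/*.log              # which data to collect, and from where
```

A fleet-managed agent receives the same kind of configuration from the Fleet Server instead of this local file.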
<h2 id="heading-integration">Integration</h2>
<p>Elastic integrations simplify connecting Elastic with external services and systems, enabling quick insights and actions. They can gather new data sources and typically include ready-to-use assets like dashboards, visualizations, and pipelines for extracting structured fields from logs and events. This allows you to gain insights in just seconds.</p>
<p>When you add an integration, you configure inputs for logs and metrics, such as the path to your Nginx access logs. When you’re done, you save the integration to an Elastic Agent policy. The next time enrolled agents check in, they receive the update.</p>
<h2 id="heading-outputs">Outputs</h2>
<p>An output specifies where to send data. You can define multiple outputs and pair specific inputs with specific outputs.</p>
<p>Supported outputs are:</p>
<ul>
<li><p>Elasticsearch</p>
</li>
<li><p>Kafka</p>
</li>
<li><p>Logstash</p>
</li>
</ul>
<h2 id="heading-fleet-server">Fleet server</h2>
<p>Fleet Server is technically an Elastic Agent running a policy that has the Fleet Server integration added to it. Fleet Server is like a manager in an office: it manages Elastic Agents (the employees). It acts as a communication hub between the Elastic Agents and the Elasticsearch cluster, handling agent enrollment, configuration updates, and data ingestion, ensuring efficient and secure management of multiple agents. By using Fleet Server, administrators can simplify deployment and streamline agent management at scale.</p>
<p>Fleet provides a web-based UI in Kibana for centrally managing Elastic Agents and their policies.</p>
<p>The Agents page provides a view of agent health status, indicating which agents are healthy or unhealthy, along with the last check-in time. It also displays the Elastic Agent binary version and the associated policy.</p>
<p>We will look at Elastic Agent installation and Elastic Endpoint in the next part! Will update the link here soon.</p>
]]></content:encoded></item><item><title><![CDATA[How to Index and Search Documents in Elasticsearch: A Step-by-Step Guide]]></title><description><![CDATA[Elasticsearch, a powerful distributed search engine, enables users to efficiently store, search, and analyze large volumes of data in near real-time. Whether you're a developer building a search application or a data analyst working on business insig...]]></description><link>https://rohit.cc/how-to-index-and-search-documents-in-elasticsearch-a-step-by-step-guide</link><guid isPermaLink="true">https://rohit.cc/how-to-index-and-search-documents-in-elasticsearch-a-step-by-step-guide</guid><category><![CDATA[elasticsearch]]></category><category><![CDATA[lucene]]></category><category><![CDATA[indexing]]></category><category><![CDATA[search]]></category><dc:creator><![CDATA[Rohit]]></dc:creator><pubDate>Sun, 05 Jan 2025 18:30:00 GMT</pubDate><content:encoded><![CDATA[<p>Elasticsearch, a powerful distributed search engine, enables users to efficiently store, search, and analyze large volumes of data in near real-time. Whether you're a developer building a search application or a data analyst working on business insights, understanding how to index and search documents in Elasticsearch is essential. This guide walks you through the basics of indexing and searching documents, complete with practical examples to help you get started.</p>
<h2 id="heading-prerequisites">✅ Prerequisites</h2>
<p>Before diving in, ensure you have:</p>
<ol>
<li><p><strong>Elasticsearch installed and running:</strong> You can install Elasticsearch locally or use a managed service like Elastic Cloud (there is a 14-day trial available; check it out).</p>
</li>
<li><p><strong>A basic understanding of JSON:</strong> Elasticsearch stores and processes data in JSON format.</p>
</li>
<li><p><strong>Tools to interact with Elasticsearch:</strong> Use cURL, Kibana, or any HTTP client like Postman to send requests.</p>
</li>
</ol>
<h2 id="heading-understand-the-basics">📚 Understand the Basics</h2>
<h3 id="heading-what-is-an-index">What is an Index?</h3>
<p>An index in Elasticsearch is a collection of documents that share similar characteristics. For example, an 🛒 e-commerce application might have separate indices for <code>products</code>, <code>customers</code>, and <code>orders</code>.</p>
<h3 id="heading-what-is-a-document">What is a Document?</h3>
<p>A document is a single record stored in an index. It is represented in JSON format and contains fields (key-value pairs) that describe the data. For example:</p>
<pre><code class="lang-bash">{
  <span class="hljs-string">"name"</span>: <span class="hljs-string">"Wireless Mouse"</span>,
  <span class="hljs-string">"price"</span>: 25.99,
  <span class="hljs-string">"in_stock"</span>: <span class="hljs-literal">true</span>
}
</code></pre>
<h2 id="heading-index-a-document">Index a Document</h2>
<h3 id="heading-create-an-index">Create an Index</h3>
<p>To create an index, use the <code>PUT</code> request:</p>
<pre><code class="lang-bash">PUT /products
</code></pre>
<p>This creates an index named <code>products</code>. By default, Elasticsearch handles the mapping (schema) automatically. You can also define your own mappings for more control.</p>
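<p>For instance, here is a sketch of an explicit mapping for the toy product schema used in this guide (the field types are assumptions chosen for illustration):</p>

```bash
PUT /products
{
  "mappings": {
    "properties": {
      "name":     { "type": "text" },
      "price":    { "type": "float" },
      "in_stock": { "type": "boolean" }
    }
  }
}
```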
<h3 id="heading-index-a-single-document">Index a Single Document</h3>
<p>To add a document to the <code>products</code> index, use the <code>POST</code> or <code>PUT</code> request:</p>
<pre><code class="lang-bash">POST /products/_doc/1
{
  <span class="hljs-string">"name"</span>: <span class="hljs-string">"Wireless Mouse"</span>,
  <span class="hljs-string">"price"</span>: 25.99,
  <span class="hljs-string">"in_stock"</span>: <span class="hljs-literal">true</span>
}
</code></pre>
<p>Here:</p>
<ul>
<li><p><code>/products</code> specifies the index.</p>
</li>
<li><p><code>/_doc/1</code> indicates the document ID (1 in this case). Elasticsearch assigns an ID automatically if you omit it.</p>
</li>
</ul>
<h3 id="heading-index-multiple-documents">Index Multiple Documents</h3>
<p>You can bulk index multiple documents using the <code>_bulk</code> endpoint:</p>
<pre><code class="lang-bash">POST /_bulk
{ <span class="hljs-string">"index"</span>: { <span class="hljs-string">"_index"</span>: <span class="hljs-string">"products"</span>, <span class="hljs-string">"_id"</span>: <span class="hljs-string">"1"</span> } }
{ <span class="hljs-string">"name"</span>: <span class="hljs-string">"Wireless Mouse"</span>, <span class="hljs-string">"price"</span>: 25.99, <span class="hljs-string">"in_stock"</span>: <span class="hljs-literal">true</span> }
{ <span class="hljs-string">"index"</span>: { <span class="hljs-string">"_index"</span>: <span class="hljs-string">"products"</span>, <span class="hljs-string">"_id"</span>: <span class="hljs-string">"2"</span> } }
{ <span class="hljs-string">"name"</span>: <span class="hljs-string">"Mechanical Keyboard"</span>, <span class="hljs-string">"price"</span>: 89.99, <span class="hljs-string">"in_stock"</span>: <span class="hljs-literal">false</span> }
</code></pre>
<p>Each pair of lines specifies an action (<code>index</code>) and the document data.</p>
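<p>If you call the <code>_bulk</code> endpoint with cURL rather than the Kibana console, the body is NDJSON: each action line is followed by its source line, and the body must end with a newline. A minimal sketch (the <code>localhost:9200</code> endpoint is an assumption for a local cluster):</p>

```bash
# Write the NDJSON _bulk body to a file: each "index" action line
# is immediately followed by the document's source line.
cat > bulk-body.ndjson <<'EOF'
{ "index": { "_index": "products", "_id": "1" } }
{ "name": "Wireless Mouse", "price": 25.99, "in_stock": true }
{ "index": { "_index": "products", "_id": "2" } }
{ "name": "Mechanical Keyboard", "price": 89.99, "in_stock": false }
EOF
# The _bulk API requires Content-Type: application/x-ndjson and a
# trailing newline (the heredoc adds one automatically).
# curl -s -H 'Content-Type: application/x-ndjson' \
#      -X POST 'http://localhost:9200/_bulk' --data-binary @bulk-body.ndjson
wc -l < bulk-body.ndjson    # 4 lines: two action/source pairs
```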
<h2 id="heading-search-for-documents">🔍 Search for Documents</h2>
<h3 id="heading-basic-search">Basic Search</h3>
<p>To search for all documents in the <code>products</code> index, use the <code>GET</code> request:</p>
<pre><code class="lang-bash">GET /products/_search
</code></pre>
<p>This returns all documents along with metadata. By default, Elasticsearch retrieves the top 10 results.</p>
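<p>To page past the default 10 results, you can add <code>from</code> and <code>size</code> to the request (a sketch in the same console style):</p>

```bash
GET /products/_search
{
  "from": 0,
  "size": 20
}
```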
<h3 id="heading-match-query">Match Query</h3>
<p>To search for documents containing specific text, use the <code>match</code> query:</p>
<pre><code class="lang-bash">GET /products/_search
{
  <span class="hljs-string">"query"</span>: {
    <span class="hljs-string">"match"</span>: {
      <span class="hljs-string">"name"</span>: <span class="hljs-string">"Mouse"</span>
    }
  }
}
</code></pre>
<p>This searches for documents where the <code>name</code> field contains the term "Mouse."</p>
<h3 id="heading-term-query">Term Query</h3>
<p>The <code>term</code> query is used for exact matches:</p>
<pre><code class="lang-bash">GET /products/_search
{
  <span class="hljs-string">"query"</span>: {
    <span class="hljs-string">"term"</span>: {
      <span class="hljs-string">"in_stock"</span>: <span class="hljs-literal">true</span>
    }
  }
}
</code></pre>
<h3 id="heading-filtered-search">Filtered Search</h3>
<p>To combine queries and filters, use the <code>bool</code> query:</p>
<pre><code class="lang-bash">GET /products/_search
{
  <span class="hljs-string">"query"</span>: {
    <span class="hljs-string">"bool"</span>: {
      <span class="hljs-string">"must"</span>: {
        <span class="hljs-string">"match"</span>: {
          <span class="hljs-string">"name"</span>: <span class="hljs-string">"Keyboard"</span>
        }
      },
      <span class="hljs-string">"filter"</span>: {
        <span class="hljs-string">"term"</span>: {
          <span class="hljs-string">"in_stock"</span>: <span class="hljs-literal">true</span>
        }
      }
    }
  }
}
</code></pre>
<p>This finds documents with <code>name</code> containing "Keyboard" and <code>in_stock</code> set to <code>true</code>.</p>
<h2 id="heading-update-a-document">✏️ Update a Document</h2>
<p>To update an existing document, use the <code>_update</code> endpoint:</p>
<pre><code class="lang-bash">POST /products/_doc/1/_update
{
  <span class="hljs-string">"doc"</span>: {
    <span class="hljs-string">"price"</span>: 24.99
  }
}
</code></pre>
<p>This updates the <code>price</code> field of the document with ID <code>1</code>.</p>
<h2 id="heading-delete-a-document-or-index">🗑️ Delete a Document or Index</h2>
<h3 id="heading-delete-a-document">Delete a Document</h3>
<p>To delete a document by ID, use the <code>DELETE</code> request:</p>
<pre><code class="lang-bash">DELETE /products/_doc/1
</code></pre>
<h3 id="heading-delete-an-index">Delete an Index</h3>
<p>To delete the entire <code>products</code> index, use:</p>
<pre><code class="lang-bash">DELETE /products
</code></pre>
<p>By following these steps, you’ll be well on your way to mastering Elasticsearch document indexing and search capabilities. Whether you’re building a search engine, analyzing logs, or managing e-commerce data, Elasticsearch provides the tools you need to handle data efficiently. Happy searching! 🎉</p>
]]></content:encoded></item><item><title><![CDATA[Elasticsearch Cluster TLS Encryption]]></title><description><![CDATA[Enabling TLS encryption is a great add on protection for you clusters and also most of the compliance certifications need TLS/SSL encrypted communication. It encrypts, both the connection between in between nodes and HTTP API calls. It takes only few...]]></description><link>https://rohit.cc/elasticsearch-cluster-tls-encryption</link><guid isPermaLink="true">https://rohit.cc/elasticsearch-cluster-tls-encryption</guid><category><![CDATA[elasticsearch security]]></category><category><![CDATA[elasticsearch]]></category><category><![CDATA[TLS]]></category><category><![CDATA[Security]]></category><dc:creator><![CDATA[Rohit]]></dc:creator><pubDate>Tue, 04 Jun 2024 18:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1718938965570/48706f18-28fb-45cd-8bfd-47bf4a9281c9.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Enabling TLS encryption is a great additional layer of protection for your clusters, and most compliance certifications require TLS/SSL-encrypted communication. It encrypts both the connections between nodes and the HTTP API calls. Enabling TLS for your Elasticsearch cluster takes only a few steps, as most of the hard work is handled by the executables shipped in the Elasticsearch archive. If you are not familiar with TLS, check out this <a target="_blank" href="https://www.andrewhowden.com/p/the-magic-of-tls-x509-and-mutual-authentication-explained-b2162dec4401">awesome blog</a> by Andrew Howden, which explains how TLS works in the simplest way. Now let's start protecting our Elasticsearch clusters🫡. We will use self-signed certificates for this example; in production, you can have your company's certificate authority sign the certificates. We will use <strong>elasticsearch-certutil</strong> to generate the required certificates.</p>
<h3 id="heading-certificate-authorityca">Certificate Authority(CA):</h3>
<p>A <strong>Certificate Authority (CA)</strong> is a trusted entity that issues digital certificates. These certificates verify the ownership/authenticity of a digital asset. The CA's role is to confirm the identity of entities (such as websites) and to bind public keys with those identities through digital certificates. This helps establish trust in a digital communication environment.</p>
<h3 id="heading-x509-certificate">X.509 Certificate:</h3>
<p>An <strong>X.509 Certificate</strong> is a digital certificate that uses the X.509 standard to define the format of public key certificates. X.509 certificates are used to establish a secure, encrypted connection between a client (like a web browser) and a server (like a website).</p>
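<p>To get a feel for what an X.509 certificate contains, you can create a throwaway self-signed certificate with openssl and print its fields. This is purely illustrative (elasticsearch-certutil does the real work below); the <code>CN=demo-node</code> subject and file names are made up:</p>

```bash
# Generate a throwaway self-signed X.509 certificate and private key.
openssl req -x509 -newkey rsa:2048 -keyout demo.key -out demo.crt \
        -days 365 -nodes -subj "/CN=demo-node" 2>/dev/null
# Print the human-readable fields: subject, issuer, and validity dates.
openssl x509 -in demo.crt -noout -subject -issuer -dates
```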
<h2 id="heading-steps-to-generate-certificates">Steps to generate certificates:</h2>
<p>‣ First, generate a certificate authority to sign our X.509 certificates.</p>
<pre><code class="lang-bash">./bin/elasticsearch-certutil ca
</code></pre>
<p>Executing this will prompt for a few questions.</p>
<ol>
<li><p>CA file name - You can name it anything you wish; I will leave it as the default.</p>
</li>
<li><p>CA password - It's very important to remember the CA password, as we need it for configuration in multiple places. You can also leave it empty; for now, I will go with "12345" as the password.</p>
</li>
</ol>
<pre><code class="lang-bash">//Download elasticsearh zip and extract the same

~/clusters/sample/elasticsearch % ./bin/elasticsearch-certutil ca

This tool assists you <span class="hljs-keyword">in</span> the generation of X.509 certificates and certificate
signing requests <span class="hljs-keyword">for</span> use with SSL/TLS <span class="hljs-keyword">in</span> the Elastic stack.

The <span class="hljs-string">'ca'</span> mode generates a new <span class="hljs-string">'certificate authority'</span>
This will create a new X.509 certificate and private key that can be used
to sign certificate when running <span class="hljs-keyword">in</span> <span class="hljs-string">'cert'</span> mode.

Use the <span class="hljs-string">'ca-dn'</span> option <span class="hljs-keyword">if</span> you wish to configure the <span class="hljs-string">'distinguished name'</span>
of the certificate authority

By default the <span class="hljs-string">'ca'</span> mode produces a single PKCS<span class="hljs-comment">#12 output file which holds:</span>
    * The CA certificate
    * The CA<span class="hljs-string">'s private key

If you elect to generate PEM format certificates (the -pem option), then the output will
be a zip file containing individual files for the CA certificate and private key

Please enter the desired output file [elastic-stack-ca.p12]: 
Enter password for elastic-stack-ca.p12 : 

~/clusters/sample/elasticsearch % ls
LICENSE.txt          README.asciidoc      blog                 elastic-stack-ca.p12 lib                  modules              upgrade
NOTICE.txt           bin                  config               jdk                  logs                 plugins</span>
</code></pre>
<p>Now you can see that <strong>elastic-stack-ca.p12</strong> has been created. This certificate authority file contains the following components:</p>
<ul>
<li><p>Private key</p>
</li>
<li><p>Public certificate which also contains a public key</p>
</li>
</ul>
<p>To view the contents of the .p12 file, you can use the command below:</p>
<pre><code class="lang-bash">openssl pkcs12 -info -<span class="hljs-keyword">in</span> elastic-stact-ca.p12
</code></pre>
<p>‣ Next, we generate the node certificate used to encrypt communication between nodes.</p>
<pre><code class="lang-bash">./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
</code></pre>
<p>Executing this will prompt for a few questions.</p>
<ol>
<li><p>CA file password - Enter the password of the CA file we generated earlier (in our case, "12345").</p>
</li>
<li><p>Certificate password - We also need to remember the certificate password. You can leave it empty; for now, I will go with "12345" as the password.</p>
</li>
</ol>
<p><strong>Contents of Node Certificate:</strong></p>
<ul>
<li><p>Node's Private key</p>
</li>
<li><p>Node certificate ( also contains public key)</p>
</li>
<li><p>CA certificate (only certificate of CA, private key will not be included )</p>
</li>
</ul>
<p>‣ The final step in certificate generation is to create a certificate for HTTPS communication. (You can skip this step if you do not want TLS for HTTP communication.)</p>
<pre><code class="lang-bash">./bin/elasticsearch-certutil http
</code></pre>
<p><strong>Contents of HTTP Certificate:</strong></p>
<ul>
<li><p>HTTP's Private key</p>
</li>
<li><p>HTTP certificate ( also contains public key)</p>
</li>
<li><p>CA certificate (only certificate of CA, private key will not be included)</p>
</li>
</ul>
<p>Both Node and HTTP certificates are X.509 certificates.</p>
<pre><code class="lang-bash"><span class="hljs-comment">## Do you wish to generate a Certificate Signing Request (CSR)?</span>

A CSR is used when you want your certificate to be created by an existing
Certificate Authority (CA) that you <span class="hljs-keyword">do</span> not control (that is, you don<span class="hljs-string">'t have
access to the keys for that CA). 

If you are in a corporate environment with a central security team, then you
may have an existing Corporate CA that can generate your certificate for you.
Infrastructure within your organisation may already be configured to trust this
CA, so it may be easier for clients to connect to Elasticsearch if you use a
CSR and send that request to the team that controls your CA.

If you choose not to generate a CSR, this tool will generate a new certificate
for you. That certificate will be signed by a CA under your control. This is a
quick and easy way to secure your cluster with TLS, but you will need to
configure all your clients to trust that custom CA.

Generate a CSR? [y/N]N

## Do you have an existing Certificate Authority (CA) key-pair that you wish to use to sign your certificate?

If you have an existing CA certificate and key, then you can use that CA to
sign your new http certificate. This allows you to use the same CA across
multiple Elasticsearch clusters which can make it easier to configure clients,
and may be easier for you to manage.

If you do not have an existing CA, one will be generated for you.

Use an existing CA? [y/N]y

## What is the path to your CA?

Please enter the full pathname to the Certificate Authority that you wish to
use for signing your new http certificate. This can be in PKCS#12 (.p12), JKS
(.jks) or PEM (.crt, .key, .pem) format.
CA Path: /Users/rohit/clusters/sample/elasticsearch/elastic-stack-ca.p12
Reading a PKCS12 keystore requires a password.
It is possible for the keystore'</span>s password to be blank,
<span class="hljs-keyword">in</span> <span class="hljs-built_in">which</span> <span class="hljs-keyword">case</span> you can simply press &lt;ENTER&gt; at the prompt
Password <span class="hljs-keyword">for</span> elastic-stack-ca.p12:

<span class="hljs-comment">## How long should your certificates be valid?</span>

Every certificate has an expiry date. When the expiry date is reached clients
will stop trusting your certificate and TLS connections will fail.

Best practice suggests that you should either:
(a) <span class="hljs-built_in">set</span> this to a short duration (90 - 120 days) and have automatic processes
to generate a new certificate before the old one expires, or
(b) <span class="hljs-built_in">set</span> it to a longer duration (3 - 5 years) and <span class="hljs-keyword">then</span> perform a manual update
a few months before it expires.

You may enter the validity period <span class="hljs-keyword">in</span> years (e.g. 3Y), months (e.g. 18M), or days (e.g. 90D)

For how long should your certificate be valid? [5y] 10y

<span class="hljs-comment">## Do you wish to generate one certificate per node?</span>

If you have multiple nodes <span class="hljs-keyword">in</span> your cluster, <span class="hljs-keyword">then</span> you may choose to generate a
separate certificate <span class="hljs-keyword">for</span> each of these nodes. Each certificate will have its
own private key, and will be issued <span class="hljs-keyword">for</span> a specific hostname or IP address.

Alternatively, you may wish to generate a single certificate that is valid
across all the hostnames or addresses <span class="hljs-keyword">in</span> your cluster.

If all of your nodes will be accessed through a single domain
(e.g. node01.es.example.com, node02.es.example.com, etc) <span class="hljs-keyword">then</span> you may find it
simpler to generate one certificate with a wildcard hostname (*.es.example.com)
and use that across all of your nodes.

However, <span class="hljs-keyword">if</span> you <span class="hljs-keyword">do</span> not have a common domain name, and you expect to add
additional nodes to your cluster <span class="hljs-keyword">in</span> the future, <span class="hljs-keyword">then</span> you should generate a
certificate per node so that you can more easily generate new certificates when
you provision new nodes.

Generate a certificate per node? [y/N]N

<span class="hljs-comment">## Which hostnames will be used to connect to your nodes?</span>

These hostnames will be added as <span class="hljs-string">"DNS"</span> names <span class="hljs-keyword">in</span> the <span class="hljs-string">"Subject Alternative Name"</span>
(SAN) field <span class="hljs-keyword">in</span> your certificate.

You should list every hostname and variant that people will use to connect to
your cluster over http.
Do not list IP addresses here, you will be asked to enter them later.

If you wish to use a wildcard certificate (<span class="hljs-keyword">for</span> example *.es.example.com) you
can enter that here.

Enter all the hostnames that you need, one per line.
When you are <span class="hljs-keyword">done</span>, press &lt;ENTER&gt; once more to move on to the next step.


You did not enter any hostnames.
Clients are likely to encounter TLS hostname verification errors <span class="hljs-keyword">if</span> they
connect to your cluster using a DNS name.

Is this correct [Y/n]Y

<span class="hljs-comment">## Which IP addresses will be used to connect to your nodes?</span>

If your clients will ever connect to your nodes by numeric IP address, <span class="hljs-keyword">then</span> you
can list these as valid IP <span class="hljs-string">"Subject Alternative Name"</span> (SAN) fields <span class="hljs-keyword">in</span> your
certificate.

If you <span class="hljs-keyword">do</span> not have fixed IP addresses, or not wish to support direct IP access
to your cluster <span class="hljs-keyword">then</span> you can just press &lt;ENTER&gt; to skip this step.

Enter all the IP addresses that you need, one per line.
When you are <span class="hljs-keyword">done</span>, press &lt;ENTER&gt; once more to move on to the next step.


You did not enter any IP addresses.

Is this correct [Y/n]Y

<span class="hljs-comment">## Other certificate options</span>

The generated certificate will have the following additional configuration
values. These values have been selected based on a combination of the
information you have provided above and secure defaults. You should not need to
change these values unless you have specific requirements.

Key Name: elasticsearch
Subject DN: CN=elasticsearch
Key Size: 2048

Do you wish to change any of these options? [y/N]N

<span class="hljs-comment">## What password do you want for your private key(s)?</span>

Your private key(s) will be stored <span class="hljs-keyword">in</span> a PKCS<span class="hljs-comment">#12 keystore file named "http.p12".</span>
This <span class="hljs-built_in">type</span> of keystore is always password protected, but it is possible to use a
blank password.

If you wish to use a blank password, simply press &lt;enter&gt; at the prompt below.
Provide a password <span class="hljs-keyword">for</span> the <span class="hljs-string">"http.p12"</span> file:  [&lt;ENTER&gt; <span class="hljs-keyword">for</span> none]

<span class="hljs-comment">## Where should we save the generated files?</span>

A number of files will be generated including your private key(s),
public certificate(s), and sample configuration options <span class="hljs-keyword">for</span> Elastic Stack products.

These files will be included <span class="hljs-keyword">in</span> a single zip archive.

What filename should be used <span class="hljs-keyword">for</span> the output zip file? [/Users/rohit/clusters/sample/elasticsearch/elasticsearch-ssl-http.zip] 

Zip file written to /Users/rohit/clusters/sample/elasticsearch/elasticsearch-ssl-http.zip
</code></pre>
<p>After generating the necessary certificates, copy them to the elasticsearch/config/certs folder. That's all for the certificate generation part. Now let's configure them.</p>
<h2 id="heading-configuring-in-elasticsearchyaml">Configuring in elasticsearch.yaml:</h2>
<pre><code class="lang-bash">xpack.security.enabled: <span class="hljs-literal">true</span>
xpack.security.transport.ssl.enabled: <span class="hljs-literal">true</span>
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: certs/elastic-certificates.p12

xpack.security.http.ssl.enabled: <span class="hljs-literal">true</span>
xpack.security.http.ssl.keystore.path: <span class="hljs-string">"certs/http.p12"</span>
xpack.security.http.ssl.verification_mode: certificate
</code></pre>
<p>Note: If you have set passwords for your certificate files, you must add them to Elasticsearch's keystore. You can use the commands below to do so.</p>
<pre><code class="lang-bash">./bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
//prompts <span class="hljs-keyword">for</span> node certificates password

./bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password
//prompts <span class="hljs-keyword">for</span> node certificates password

./bin/elasticsearch-keystore add xpack.security.http.ssl.keystore.secure_password
//prompts <span class="hljs-keyword">for</span> http certificates password
</code></pre>
<h3 id="heading-xpacksecurityhttpsslverificationmode-xpacksecuritytransportsslverificationmode">xpack.security.http.ssl.verification_mode / xpack.security.transport.ssl.verification_mode:</h3>
<p>This setting controls the SSL certificate verification level. With it, we can restrict connections to a finite set of hostnames or IP addresses; to enable this restriction, we must provide the list of allowed hostnames and IP addresses during certificate generation. Three modes are available:</p>
<ol>
<li><p><strong>full:</strong><br /> Validates that the provided certificate: has an issue date that’s within the not_before and not_after dates; chains to a trusted Certificate Authority (CA); has a hostname or IP address that matches the names within the certificate.</p>
</li>
<li><p><strong>certificate:</strong><br /> Validates the provided certificate and verifies that it’s signed by a trusted authority (CA), but doesn’t check the certificate hostname.</p>
</li>
<li><p><strong>none:</strong><br /> It basically disables SSL verification.</p>
</li>
</ol>
<p>In this example, we generated a single node certificate and a single HTTP certificate to use on all nodes, and we did not provide IP addresses or hostnames during certificate generation, so we will go with <strong>certificate</strong> mode. If you have multiple nodes and go with <strong>full</strong> mode, the cluster will not form due to hostname verification failures.</p>
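<p>Once the cluster is up with TLS enabled, you can sanity-check the HTTPS layer with cURL. This is a sketch under assumptions: <code>localhost:9200</code>, the <code>elastic</code> user, and the output file name all depend on your setup.</p>

```bash
# Extract the CA certificate (PEM) from the PKCS#12 bundle;
# this prompts for the CA password we set earlier.
openssl pkcs12 -in elastic-stack-ca.p12 -nokeys -out ca.crt
# Call the HTTPS API, telling cURL to trust our self-signed CA.
curl --cacert ca.crt -u elastic https://localhost:9200
```

If the certificate chain is wrong, curl fails with a certificate verification error instead of returning the cluster banner.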
]]></content:encoded></item></channel></rss>