Kibana: build your own dashboard

With Kibana it is possible to build custom dashboards to display our data in an appropriate way. There are different visualization types, including pie charts, bar charts and geographic maps. In this tutorial we'll discover how to build a custom dashboard, starting from loading the data.

In the article Kibana: let’s explore data we saw the main features of Kibana and some of the potential of the data visualization and search methods the tool offers. The time has therefore come to try creating our own dashboard with our own data. In this tutorial we will see how to define indexes and their mappings, upload data to Elasticsearch via the bulk API and, finally, create multiple visualization panels to compose a dashboard.

Load a dataset in Elasticsearch

This tutorial requires three datasets:

  • the complete works of William Shakespeare, parsed into fields (shakespeare_6.0.json)
  • a set of fictitious bank accounts with randomly generated data (accounts.zip)
  • a set of randomly generated web server logs (logs.jsonl.gz)

First, unzip the downloaded files.

Structure of the datasets

The Shakespeare dataset has the following structure:

{
    "line_id": INT,
    "play_name": "String",
    "speech_number": INT,
    "line_number": "String",
    "speaker": "String",
    "text_entry": "String"
}

The accounts dataset is structured as follows:

{
    "account_number": INT,
    "balance": INT,
    "firstname": "String",
    "lastname": "String",
    "age": INT,
    "gender": "M or F",
    "address": "String",
    "employer": "String",
    "email": "String",
    "city": "String",
    "state": "String"
} 

The logs dataset has dozens of different fields. Here are the noteworthy fields for this tutorial:

{
    "memory": INT,
    "geo.coordinates": "geo_point",
    "@timestamp": "date"
}

Set up mappings

Before loading the Shakespeare and log datasets, you must set up mappings for the fields. The mappings divide the documents in the index into logical groups and specify characteristics of the fields. These characteristics include whether the field is searchable and whether it is tokenized, or split into separate words.

In Kibana Dev Tools > Console, set up a mapping for the Shakespeare dataset:

PUT /shakespeare
{
  "mappings": {
    "properties": {
      "speaker": {"type": "keyword"},
      "play_name": {"type": "keyword"},
      "line_id": {"type": "integer"},
      "speech_number": {"type": "integer"}
    }
  }
}

This mapping specifies the field characteristics for the dataset:

  • The speaker and play_name fields are keyword fields. These fields are not analyzed: strings are treated as a single unit even if they contain multiple words.
  • The line_id and speech_number fields are integers.
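
If you want to verify that the mapping was applied as expected, you can read it back in the Console with the get mapping API:

GET /shakespeare/_mapping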

The log dataset requires a mapping to label latitude and longitude pairs as geographic locations by applying the geo_point type.

PUT /logstash-2015.05.18
{
  "mappings": {
    "properties": {
      "geo": {
        "properties": {
          "coordinates": {
            "type": "geo_point"
          }
        }
      }
    }
  }
}

PUT /logstash-2015.05.19
{
  "mappings": {
    "properties": {
      "geo": {
        "properties": {
          "coordinates": {
            "type": "geo_point"
          }
        }
      }
    }
  }
}

PUT /logstash-2015.05.20
{
  "mappings": {
    "properties": {
      "geo": {
        "properties": {
          "coordinates": {
            "type": "geo_point"
          }
        }
      }
    }
  }
}
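
Since the three mappings are identical, you could alternatively define the mapping once with a composable index template, so that every logstash-2015.05* index picks it up automatically when it is created. This is just a sketch, assuming Elasticsearch 7.8 or later (the template name logstash_geo is our own choice); run it before loading the data:

PUT /_index_template/logstash_geo
{
  "index_patterns": ["logstash-2015.05*"],
  "template": {
    "mappings": {
      "properties": {
        "geo": {
          "properties": {
            "coordinates": {"type": "geo_point"}
          }
        }
      }
    }
  }
}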
 

The accounts dataset does not require any mapping.

Load datasets

At this point, you are ready to use Elasticsearch’s bulk API to load the datasets:

curl --user elastic:changeme -H 'Content-Type: application/x-ndjson' -XPOST 'localhost:9200/bank/_bulk' --data-binary @accounts.json
curl --user elastic:changeme -H 'Content-Type: application/x-ndjson' -XPOST 'localhost:9200/shakespeare/_bulk' --data-binary @shakespeare_6.0.json
curl --user elastic:changeme -H 'Content-Type: application/x-ndjson' -XPOST 'localhost:9200/_bulk' --data-binary @logs.jsonl
 

These commands may take some time to execute, depending on available computing resources.
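
The bulk API expects newline-delimited JSON: each document is preceded by an action line that names the target index (which is why the logs file can be posted to the generic /_bulk endpoint). The first lines of shakespeare_6.0.json look roughly like this; the exact values may differ:

{"index":{"_index":"shakespeare"}}
{"line_id": 1, "play_name": "Henry IV", "speech_number": 1, "line_number": "1.1.1", "speaker": "KING HENRY IV", "text_entry": "So shaken as we are, so wan with care,"}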

Verify that loading was successful:

GET /_cat/indices?v 
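
Each index should appear in the output with a plausible docs.count. As an additional spot check, you can count the documents of a single index; for the accounts dataset the count should be 1000:

GET /bank/_count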

Define index patterns

Index patterns tell Kibana which Elasticsearch indexes you want to explore. An index pattern can match the name of a single index, or include a wildcard (*) to match multiple indexes.

For example, Logstash typically creates a set of indexes in the format logstash-YYYY.MM.DD. To explore all the log data for May 2018, you could specify the index pattern logstash-2018.05*.

You will create patterns for the Shakespeare dataset, which has an index named shakespeare, and the accounts dataset, which has an index named bank. These datasets do not contain time series data.

  1. In Kibana, open Stack Management and then click Index Patterns.
  2. If this is your first index pattern, the Create index pattern page opens automatically. Otherwise, click Create index pattern in the top left corner.
  3. Enter shakes* in the Index pattern field.
  4. Click Create index pattern. For this pattern, you do not need to configure any settings.
  5. Define a second index pattern called ba*. You do not need to configure any settings for this pattern either.
  6. Now create an index pattern for the Logstash dataset. This dataset contains time series data.
  7. Define an index pattern called logstash*.
  8. Select @timestamp in the Timestamp drop-down menu.

Discover and explore data

Using the Discover app, you can enter an Elasticsearch query to search your data and filter the results.

  1. Open Discover.
  2. The current index pattern appears under the filter bar, in this case shakes*. You may need to click New in the menu bar to refresh the data.
  3. Click the caret to the right of the current index pattern and select ba*.
  4. In the search field, enter the following query and press Enter to run it:

account_number <= 100 and balance >= 47500

By default, all fields are shown for each corresponding document. To choose which fields to display, hover over the list of Available Fields and then click add next to each field you want to include as a column in the table.

For example, if you add the account_number field, the display changes to a list of five account numbers.
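
The same filter can also be expressed directly against Elasticsearch with the query DSL; a minimal sketch you can run from the Console:

GET /bank/_search
{
  "query": {
    "bool": {
      "filter": [
        {"range": {"account_number": {"lte": 100}}},
        {"range": {"balance": {"gte": 47500}}}
      ]
    }
  }
}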

Visualize data

In the Visualize app, you can shape your data using a variety of charts, tables, maps, and more. You’ll create four visualizations: a pie chart, a bar chart, a map, and a Markdown widget.

  1. Open Visualize Library.
  2. Click Create new visualization. You will see all the visualization types available in Kibana.
  3. Click Aggregation based.
  4. Click Pie.
  5. In New Search, select the ba* index pattern. You will use the pie chart to get an idea of account balances.

Pie chart

Initially, the pie contains only one “slice”. This is because the default search matches all documents.

To specify which slices to display in the pie, you use an Elasticsearch bucket aggregation. This aggregation sorts the documents that match your search criteria into different categories, also known as buckets.

Use a bucket aggregation to establish multiple ranges of account balances and find out how many accounts fall into each range.

  1. In the Buckets panel, click Add and then Split Slices.
  2. In the Aggregation drop-down menu, select Range.
  3. In the Field drop-down menu, select balance.
  4. Click Add Range four times to bring the total number of ranges to six.
  5. Define the following ranges:

0 to 999
1000 to 2999
3000 to 6999
7000 to 14999
15000 to 30999
31000 to 50000

  6. Click Update to apply the changes.

Now you can see what proportion of the 1000 accounts fall within each balance range.

Add another bucket aggregation that looks at the age of the account holders.

  1. At the bottom of the Buckets panel, click Add.
  2. In Select buckets type, click Split Slices.
  3. In the Sub Aggregation drop-down menu, select Terms.
  4. In the Field drop-down menu, select age.
  5. Click Update.

Now you can see the age breakdown of the account holders, displayed in a ring around the balance ranges.
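
Under the hood, the chart corresponds to a range bucket aggregation with a terms sub-aggregation. A sketch of a roughly equivalent Console request (note that in Elasticsearch range aggregations the to bound is exclusive):

GET /bank/_search
{
  "size": 0,
  "aggs": {
    "balance_ranges": {
      "range": {
        "field": "balance",
        "ranges": [
          {"from": 0, "to": 1000},
          {"from": 1000, "to": 3000},
          {"from": 3000, "to": 7000},
          {"from": 7000, "to": 15000},
          {"from": 15000, "to": 31000},
          {"from": 31000, "to": 50000}
        ]
      },
      "aggs": {
        "ages": {
          "terms": {"field": "age"}
        }
      }
    }
  }
}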

To save this chart:

  1. Click Save in the menu bar at the top and enter Pie Example as the title.
  2. Under Add to dashboard, select None.
  3. Click Save and go to library.

Bar chart

You will use a bar chart to look at the Shakespeare dataset and compare the number of speaking parts across the plays.

Create a Vertical Bar visualization and set the search source to the shakes* index pattern.

Initially, the graph contains a single bar showing the total count of documents that match the default query.

Display the number of speaking parts per play along the Y-axis. This requires configuring a metric aggregation on the Y-axis, which computes values from the search results.

  1. In the Metrics panel, expand Y-Axis.
  2. Set Aggregation to Unique Count.
  3. Set Field to speaker.
  4. In the Custom Label box, enter Speaking Parts.
  5. Click Update.

View the plays along the X-axis.

  1. In the Buckets panel, click Add and then X-Axis.
  2. Set Aggregation to Terms and Field to play_name.
  3. To list the plays in alphabetical order, in the Order drop-down menu, select Ascending.
  4. Give the axis a custom label, Play Name.
  5. Click Update.

Hovering over a bar displays a tooltip with the number of speaking parts for that play.
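
The data behind this chart can be reproduced with a terms aggregation on play_name and a cardinality sub-aggregation on speaker; a minimal Console sketch:

GET /shakespeare/_search
{
  "size": 0,
  "aggs": {
    "plays": {
      "terms": {"field": "play_name", "order": {"_key": "asc"}},
      "aggs": {
        "speaking_parts": {"cardinality": {"field": "speaker"}}
      }
    }
  }
}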

Notice how the individual play names appear as whole phrases, instead of being broken into individual words. This is the result of the mapping you set up at the beginning of the tutorial, when you marked the play_name field as a keyword so that it is not analyzed.

To save this chart:

  1. Click Save in the top menu bar and enter Bar Example as the title.
  2. Under Add to dashboard, select None.
  3. Click Save and go to library.

Map

Using a map, you can visualize the geographic information contained in the log dataset.

  1. From the visualization types, select Maps.
  2. Click Add layer.
  3. Select Heat map.
  4. In the index pattern menu, select logstash*.
  5. Click Add layer.
  6. Enter events as the layer name.
  7. Click Save & close.
  8. In the top menu bar, click the time picker on the far right.
  9. Click Absolute.
  10. Set the start time to May 18, 2015 and the end time to May 20, 2015.
  11. Click Update.
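
A heat map layer groups documents into a grid. If you want to inspect the underlying data yourself, something like a geohash grid aggregation over the geo_point field gives a comparable view (this is a sketch; the Maps app may use a different grid internally):

GET /logstash-2015.05.18/_search
{
  "size": 0,
  "aggs": {
    "grid": {
      "geohash_grid": {"field": "geo.coordinates", "precision": 3}
    }
  }
}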

To save this chart:

  1. Click Save in the top menu bar and enter Map Example as the title.
  2. Under Add to dashboard, select None.
  3. Click Save and go to library.

Text

The final visualization is a Markdown widget that renders formatted text.

  1. In the visualization type menu, select Text under Tools.
  2. In the text box, enter the following:

# This is a tutorial dashboard!
The Markdown widget uses **markdown** syntax.
> Blockquotes in Markdown use the > character.

  3. Click Update.

The rendered Markdown is displayed in the preview panel.

To save this chart:

  1. Click Save in the top menu bar and enter Markdown Example as the title.
  2. Under Add to dashboard, select None.
  3. Click Save and go to library.

Adding visualizations to a dashboard

A dashboard is a collection of visualizations that you can organize and share. You will build a dashboard that contains the visualizations you saved during this tutorial.

  1. Open Dashboard.
  2. Click Create dashboard.
  3. Click Add from library.
  4. Add Bar Example, Map Example, Markdown Example, and Pie Example.

You can rearrange views by clicking on a view’s header and dragging it. The gear icon at the top right of a view shows controls for editing and deleting the view. A resize control is in the lower right corner.

To get a link to share, or HTML code to embed the dashboard in a web page, save the dashboard. The Share button lets you share the dashboard as embed code, a permalink, or PDF and PNG reports.
