Install ThoughtSpot on Amazon Linux 2 online clusters.

Before starting the installation, complete the pre-installation steps. If you are using the AWS SSM agent as an alternative to SSH, you must run the Ansible playbook and all commands from the SSM console.

In an online cluster, the hosts can access the public repositories to download the required packages.

Before you build the ThoughtSpot cluster and install the ThoughtSpot application on the hosts, you must run the Ansible playbook. The ThoughtSpot Ansible playbook prepares your hosts for the cluster installation; the tasks it performs are listed under Run the Ansible Playbook below.

Configure the Ansible Playbook

To set up Ansible, follow these steps:

  1. Obtain the Ansible tarball.

    Contact ThoughtSpot Support to request the Ansible tarball.

  2. Download the Ansible tarball to your local machine.

    You can download it by running a copy command. For example, if the tarball is in your S3 bucket, run aws s3 cp s3://bucket_name/path/to/the/tarball ./.

    Note that you only need to copy the tarball to one node.
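
    For example, assuming the AWS CLI is installed and configured with credentials that can read the bucket (the bucket name and path below are placeholders):

    # Copy the Ansible tarball from S3 to the current directory
    aws s3 cp s3://bucket_name/path/to/the/tarball ./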

  3. Extract the Ansible tarball to see the following files and directories on your local machine (a sample extraction command follows this file list):
    customize.sh
    This script runs as the last step in the preparation process. You can use it to inject deployment-specific customizations, such as enabling or disabling a corporate proxy, configuring extra SSH keys, installing extra services, and so on. Additionally, you can include the prepare_disks script here. Add the following line to the customize.sh file: sudo /usr/local/scaligent/bin/prepare_disks.sh.
    hosts.sample
    The Ansible inventory file.
    prod_image
    This directory contains the ThoughtSpot tools and tscli, the ThoughtSpot CLI binary.
    README.md
    Basic information about the extracted files.
    rpm_gpg
    This directory contains the GPG keys that authenticate the public repository.
    toolchain
    The tools necessary to compile the instructions defined in the Ansible Playbook, that is, the source code, into executables that can run on the hosts. The toolchain includes a compiler, a linker, and run-time libraries.
    ts-new.yaml
    The Ansible Playbook for new installations.
    ts-update.yaml
    The Ansible Playbook for updates.
    ts.yaml
    The main Ansible Playbook, which you run to prepare the hosts; see Run the Ansible Playbook below.
    yum.repos.d
    This directory contains information about the yum repo used by the cluster.
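
    For example, assuming the tarball is named thoughtspot-ansible.tar.gz (the actual file name may differ):

    # Extract the Ansible tarball into the current directory
    tar -xvf thoughtspot-ansible.tar.gz

    # Optional: append the prepare_disks line described under customize.sh above
    echo 'sudo /usr/local/scaligent/bin/prepare_disks.sh' >> customize.sh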
  4. Copy the Ansible inventory file hosts.sample to hosts.yaml and, using a text editor of your choice, update the file to include your host configuration.

    Copy the file by running this command: cp hosts.sample hosts.yaml.

    If you are using SSM, you must also replace the ts_partition_name value in hosts.yaml and create a single partition on the disk mounted under /export. Run the following commands to replace ts_partition_name:

    TS_DISK=disk_name_for_export_partition
    TS_PARTITION_NAME=${TS_DISK}1
    sed -i "s/xvda9/$TS_PARTITION_NAME/g" hosts.yaml

    Then run these commands to create a single partition on the disk mounted under /export:

    sudo parted -s /dev/$TS_DISK mklabel gpt
    sudo parted -s /dev/$TS_DISK mkpart primary xfs 0% 100%


    hosts
    Add the IP addresses or hostnames of all hosts in the ThoughtSpot cluster.
    admin_uid
    The admin user ID. If you are using ssh instead of AWS SSM, use the default value. If you are using SSM, the ssm_user already uses the default value, 1001, so you must choose a new value. Note that the thoughtspot user uses 1002, so you cannot use 1001 or 1002.
    admin_gid
    The admin user group ID. If you are using ssh instead of AWS SSM, use the default value. If you are using SSM, the ssm_user already uses the default value, 1001, so you must choose a new value. Note that the thoughtspot user uses 1002, so you cannot use 1001 or 1002.
    ssh_user
    The ssh_user must exist on the ThoughtSpot host, and it must have sudo privileges. This user is the same as the ec2_user.

    If you are using AWS SSM instead of ssh, there is no need to fill out this parameter.

    ssh_private_key
    Add the private key for ssh access to the hosts.yaml file. You can use an existing key pair, or generate a new key pair on the Ansible Control server (a sample key-generation command follows this parameter list).
    Run the following command to verify that the Ansible Control Server can connect to the hosts over ssh:
    ansible -m ping -i hosts.yaml all

    If you are using AWS SSM instead of ssh, there is no need to fill out this parameter or run the above command.

    ssh_public_key
    Add the public key to the ssh authorized_keys file on each host, and add the public key to the hosts.yaml file. You can use an existing key pair, or generate a new key pair on the Ansible Control server.
    Run the following command to verify that the Ansible Control Server can connect to the hosts over ssh:
    ansible -m ping -i hosts.yaml all

    If you are using AWS SSM instead of ssh, there is no need to fill out this parameter or run the above command.

    extra_admin_ssh_key
    [Optional] An additional SSH key that your security application, such as Qualys, may require to connect to the hosts.

    If you are using AWS SSM instead of ssh, there is no need to fill out this parameter.

    http(s)_proxy
    If the hosts must access public repositories through an internal proxy service, provide the proxy information.
    This release of ThoughtSpot does not support proxy credentials to authenticate to the proxy service.
    ts_partition_name
    The extended name of the ThoughtSpot export partition, such as /dev/sdb1.
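
    If you need to generate a new key pair on the Ansible Control server, you can use standard OpenSSH tooling; the key file name below is only an example.

    # Generate an example 4096-bit RSA key pair (pick your own file name and passphrase)
    ssh-keygen -t rsa -b 4096 -f ~/.ssh/ts_ansible_key

    # After adding the public key to each host's authorized_keys file and the
    # key values to hosts.yaml, verify ssh connectivity from the Control server:
    ansible -m ping -i hosts.yaml all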

Run the Ansible Playbook

Run the Ansible Playbook from your local machine or the SSM console by entering the following command:

ansible-playbook -i hosts.yaml ts.yaml
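
Optionally, you can first validate the playbook against your inventory; this uses a standard ansible-playbook flag and does not change the hosts:

# Check the playbook syntax without running any tasks
ansible-playbook -i hosts.yaml ts.yaml --syntax-check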

As the Ansible Playbook runs, it will perform these tasks:

  1. Trigger the installation of Yum, Python, and R packages
  2. Configure the local user accounts that the ThoughtSpot application uses
  3. Install the ThoughtSpot CLI
  4. Configure all the nodes in the ThoughtSpot cluster:
    • Format and create export partitions, if they do not exist

After the Ansible Playbook finishes, run the prepare_disks script on every node, if you did not include it in the customize.sh file. Specify the data drives by adding the full device path of each data drive, such as /dev/sdc, after the script name. Separate the device paths with spaces.

sudo /usr/local/scaligent/bin/prepare_disks.sh /dev/sdc /dev/sdd
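
If you are unsure which device paths correspond to the data drives on a node, you can list the block devices before running the script; the columns shown are standard lsblk fields.

# List block devices with size, type, and mount point to identify data drives
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT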

Your hosts are ready for installing the ThoughtSpot application.

Install the ThoughtSpot cluster and the application

Refer to Install ThoughtSpot clusters in AWS for more detailed information on installing the ThoughtSpot cluster.

Follow these general steps to install ThoughtSpot on the prepared hosts:

  1. Connect to the host as an admin user.
  2. Download the release artifact from the ThoughtSpot file sharing system.
  3. Upload the release artifact to the first host.
  4. Run the tscli cluster create command. This command prompts for user input (see the sketch after these steps).
  5. Check the cluster health by running health checks and logging into the application.
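
The following sketch illustrates steps 3 through 5 from the command line. The artifact name, admin user, and host address are placeholders, and the tscli invocations shown are examples; refer to Install ThoughtSpot clusters in AWS for the authoritative syntax.

# Example only: upload the downloaded release artifact to the first host
scp <release-artifact>.tar.gz admin@<first-host-ip>:~/

# Example only: on the first host, start the cluster installation; the command
# prompts for cluster configuration details
tscli cluster create <release-artifact>.tar.gz

# Example only: check cluster health after the installation completes
tscli cluster status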