Migrating from a Default to External Configuration

This section outlines the steps involved in migrating the data stored by the internal data storage applications in a default configuration of CAS Manager to external MongoDB and Vault instances, as used in the external configuration mode.

Prerequisites for Migrating CAS Manager Data

Migration Commands are for Internal Storage Only

The migration commands can only be run if CAS Manager is using internal data storage for Vault or MongoDB as part of the default configuration. The Vault migration commands will not work if CAS Manager is already using an external Vault, and the MongoDB migration commands will not work if CAS Manager is already using an external MongoDB.

  1. Create the configuration files for the target MongoDB and Vault instances to which you are migrating CAS Manager data. To create blank configuration files, run the following command in an SSH terminal:

    /usr/local/bin/cas-manager generate --vault --mongo
    

     This creates mongo-template.json and vault-template.json in the config-templates/ directory within the current directory. The command's output contains the full path to the files for reference.

  2. Fill in the required parameters for each configuration file by following the instructions here. A sketch of what the completed templates might look like is shown below.
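
     For reference, a minimal filled-in mongo-template.json might look like the following. This is a sketch only: the values are placeholders, the field names shown are the ones read by the migration scripts later in this section, and the generated templates may contain additional fields.

    {
        "db-connection-string": "mongodb://casmuser:password@mongo.example.com:27017/casmdb",
        "db-enable-tls": true
    }

     Similarly, a sketch of vault-template.json, assuming a KV v2 secret path containing data/, which the Vault migration script strips when writing:

    {
        "vault-url": "https://vault.example.com:8200",
        "vault-token": "s.XXXXXXXXXXXXXXXX",
        "vault-secret-path": "secret/data/"
    }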

  3. The migration scripts use the jq utility to read these files. Install jq by running the following command in an SSH terminal:

    sudo dnf install -y jq
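
     Once jq is installed, you can optionally confirm that it can read your templates. For example, this sketch (assuming the MongoDB template from step 1) should print the connection string you entered:

    # Sanity check: print a field from the template with jq
    jq -r '."db-connection-string"' config-templates/mongo-template.json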
    

Migrating Internal MongoDB Data

The following steps outline how to migrate the internal MongoDB data to the external MongoDB instance as part of an external configuration of CAS Manager.

Migration Commands are for Internal Storage Only

The migration commands can only be run if CAS Manager is using internal data storage for MongoDB. This command will not work if CAS Manager is using an external MongoDB instance.

Once you have configured config-templates/mongo-template.json, you can run the following commands to migrate the data from the internal storage to the external MongoDB instance.

  1. Run the following command in an SSH terminal to set the configuration for where to migrate the data to:

    # Set path to Mongo Configuration file
    export PATH_TO_MONGO_CONFIG='config-templates/mongo-template.json'
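
     Optionally, you can confirm which database the connection string targets before running the migration in the next step. This is a sketch that assumes the connection string follows the standard mongodb://host:port/database[?options] form:

    # Print the database name portion of the configured connection string
    jq -r '."db-connection-string"' "${PATH_TO_MONGO_CONFIG}" | sed -E 's|^.*/([^/?]+)(\?.*)?$|\1|'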
    
  2. Run the migration script. Ensure that DEST_MONGO_DB is set correctly: it must match the database specified by the MongoDB connection string in the configuration file (the optional check above prints that database name).

    # Run Commands to migrate MongoDB data from internal MongoDB to external MongoDB
    /usr/local/bin/kubectl exec -it deployments/mongo -- bash -c "
    #!/bin/sh
    set -e
    
    # If destination DB is different from default (casmdb), set it accordingly.
    export DEST_MONGO_DB='casmdb';
    
    # Get connection string from mongo configuration file.
    export DEST_MONGO_CONNECTION_STRING=$(jq '."db-connection-string"' ${PATH_TO_MONGO_CONFIG});
    
    # Check if TLS is enabled for external MongoDB
    if [[ $(jq '."db-enable-tls"' ${PATH_TO_MONGO_CONFIG}) == 'true' ]]; then
        export MONGO_TLS='--ssl --tlsInsecure'
    fi
    
    # Get internal MongoDB's credentials
    export MONGO_ADMIN=$(/usr/local/bin/kubectl get secrets/mongo-secret --template={{.data.username}} | base64 -d);
    export MONGO_DB=$(/usr/local/bin/kubectl get secrets/mongo-secret --template={{.data.dbname}} | base64 -d);
    export MONGO_PWD=$(/usr/local/bin/kubectl get secrets/mongo-secret --template={{.data.password}} | base64 -d);
    
    $(cat << 'EOF'
    # Check if TLS is enabled for internal MongoDB. This file is volume mounted in K8S manifest when TLS is required for mongo.
    if [[ -f /certs/tls_combined.crt ]]; then
        export INTERNAL_MONGO_TLS='--ssl --tlsInsecure'
    fi
    rm -rf /export/
    mkdir -p /export/
    
    # Dump data from internal MongoDB
    mongodump ${INTERNAL_MONGO_TLS} -u $MONGO_ADMIN -p $MONGO_PWD --db $MONGO_DB --gzip --archive=/export/mongo.archive
    # Restore dumped data to external MongoDB instance
    mongorestore ${MONGO_TLS} --uri="${DEST_MONGO_CONNECTION_STRING}" --drop --gzip --nsInclude=$MONGO_DB.* --nsFrom=$MONGO_DB.* --nsTo=$DEST_MONGO_DB.* --archive=/export/mongo.archive
    
    # Clean up
    rm -rf /export/
    EOF
    )"
    

    Once this command is complete, the last line logged by mongorestore will display a message similar to the following:

    6 document(s) restored successfully. 0 document(s) failed to restore.
    
  3. Run the following command to apply the external MongoDB configuration to complete the migration:

    # Point CASM instance to External MongoDB
    /usr/local/bin/cas-manager configure --config-file ${PATH_TO_MONGO_CONFIG}
    

After running this command, there may be some momentary downtime as the database is switched over. Once the command is complete, CAS Manager should be functional. If you need to re-run the migration commands for any reason, first run the following command to restart the internal MongoDB:

/usr/local/bin/kubectl scale deployments/mongo --replicas=1

Common issues are that the DEST_MONGO_DB environment variable set in the script does not match the database specified by the external MongoDB connection string in the configuration file, or that the credentials in the connection string lack the required permissions. Applying the MongoDB configuration again will disable the internal MongoDB.
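
Before re-running the migration, you can confirm that the internal MongoDB deployment is back up; for example, the following standard kubectl query shows the ready replica count:

# Confirm the internal MongoDB deployment is running again
/usr/local/bin/kubectl get deployments/mongo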

Migrating Internal Vault Data

The following steps outline how to migrate the internal Vault data to the external Vault instance as part of an external configuration of CAS Manager.

Migration Command is for Internal Storage Only

The migration commands can only be run if CAS Manager is using internal data storage for Vault. This command will not work if CAS Manager is using an external Vault instance.

Once you have configured config-templates/vault-template.json, you can run the following commands to migrate the data from the internal storage to the external Vault instance.

  1. Run the following command in an SSH terminal to set the configuration for where to migrate the data to:

    # Set path to Vault Configuration file
    export PATH_TO_VAULT_CONFIG='config-templates/vault-template.json'
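
     Optionally, confirm that the required fields are populated. This sketch prints the URL and secret path (the token is deliberately omitted to avoid echoing it to the terminal):

    # Sanity check: print the non-sensitive fields from the template
    jq '."vault-url", ."vault-secret-path"' "${PATH_TO_VAULT_CONFIG}"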
    
  2. Create a backup of the internal Vault's token:

    # Create backup of internal vault's token in case something fails
    /usr/local/bin/kubectl create secret generic clustervaulttoken --from-literal=token="$(/usr/local/bin/kubectl get secret vault-secret --template={{.data.roottoken}} | base64 -d)" --from-literal=address="$(/usr/local/bin/kubectl get secrets app --template={{.data.VAULT_ADDRESS}} | base64 -d)"
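
     You can verify that the backup secret exists with a standard kubectl query:

    # Confirm the backup secret was created
    /usr/local/bin/kubectl get secret clustervaulttoken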
    
  3. Run the migration script:

    # Run Commands to migrate Vault data from internal Vault to external Vault
    /usr/local/bin/kubectl exec -it deployments/vault -- sh -c "
    #!/bin/sh
    set -e
    
    # Get target Vault settings from configuration file.
    export DEST_VAULT=$(jq '."vault-url"' ${PATH_TO_VAULT_CONFIG});
    export DEST_VAULT_TOKEN=$(jq '."vault-token"' ${PATH_TO_VAULT_CONFIG});
    export DEST_SECRET_PATH=$(jq '."vault-secret-path"' ${PATH_TO_VAULT_CONFIG});
    
    # Set existing vault settings
    export VAULT_ADDR=$(/usr/local/bin/kubectl get secret clustervaulttoken --template={{.data.address}} | base64 -d);
    export VAULT_TOKEN=$(/usr/local/bin/kubectl get secret clustervaulttoken --template={{.data.token}} | base64 -d);
    export VAULT_SECRET_PATH='secret/';
    export VAULT_SKIP_VERIFY='true';
    
    # Dump secrets in json format
    $(cat << 'EOF'
    rm -rf /export/
    mkdir -p /export/
    for key in $( vault kv list ${VAULT_SECRET_PATH} | tail +3  )
    do
        dest=/export/$key.json
        # Don't copy sub-folders
        if [[ $(echo $key | grep -E '/\s*$') ]]
        then
            continue;
        fi
        mkdir -p /export/${key%/*}
        echo \"get ${VAULT_SECRET_PATH}$key\"
        vault kv get -format=json -field=data  ${VAULT_SECRET_PATH}$key > $dest;
    done
    
    # Copy secrets to destination vault
    export VAULT_ADDR=${DEST_VAULT}
    export VAULT_TOKEN=${DEST_VAULT_TOKEN}
    export DEST_SECRET_PATH=$(echo ${DEST_SECRET_PATH} | sed -e 's|\(.*\)data|\1|g')
    for secret_file in $( ls /export/*.json   ); do      
        key_file_name=$(basename -- \"$secret_file\")
        key_name=${key_file_name%%.*}
        echo \"put ${DEST_SECRET_PATH}$key_name\"
        vault kv put ${DEST_SECRET_PATH}$key_name @$secret_file;
    done
    
    # Clean up
    rm -rf /export/
    EOF
    )"
    

    On successful completion, the output will display a message similar to the following:

    "get secret/60f9f0455234e00881fd00a2"
    "get secret/admin-60f9f0365234e066b4fd00a1"
    "get secret/secret-management-service-health"
    "put secret/60f9f0455234e00881fd00a2"
    Key              Value
    ---              -----
    created_time     2021-07-22T22:27:59.961440121Z
    deletion_time    n/a
    destroyed        false
    version          1
    "put secret/admin-60f9f0365234e066b4fd00a1"
    Key              Value
    ---              -----
    created_time     2021-07-22T22:28:00.088969023Z
    deletion_time    n/a
    destroyed        false
    version          1
    "put secret/secret-management-service-health"
    Key              Value
    ---              -----
    created_time     2021-07-22T22:28:00.207620136Z
    deletion_time    n/a
    destroyed        false
    version          1
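
     Before switching over, you can optionally confirm that the secrets now exist in the external Vault. The following sketch reuses the vault CLI inside the internal Vault pod, the same host-side jq expansion, and the same data/ path handling as the migration script; it assumes the configured token has list permission on the secret path:

    # List migrated keys in the external Vault (sketch)
    /usr/local/bin/kubectl exec -it deployments/vault -- sh -c "
    export VAULT_ADDR=$(jq '."vault-url"' ${PATH_TO_VAULT_CONFIG});
    export VAULT_TOKEN=$(jq '."vault-token"' ${PATH_TO_VAULT_CONFIG});
    export VAULT_SKIP_VERIFY='true';
    vault kv list $(jq '."vault-secret-path"' ${PATH_TO_VAULT_CONFIG} | sed -e 's|\(.*\)data|\1|g')"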
    
  4. Run the following command to apply the external Vault configuration to complete the migration:

    # Point CASM instance to External Vault
    /usr/local/bin/cas-manager configure --config-file ${PATH_TO_VAULT_CONFIG}
    

     After running this command, there may be some momentary downtime as the Vault is switched over. Once the command is complete, CAS Manager should be functional. If you need to re-run the migration commands for any reason, run the following commands to restart the internal Vault:

    /usr/local/bin/kubectl scale deployments/vault --replicas=1
    /usr/local/bin/kubectl patch cronjobs vaultunseal -p '{"spec" : {"suspend" : false }}'
    sleep 60
    
     A common issue is that the destination secret path is incorrect or the external Vault has been sealed. If there is a problem, check the configuration and try again.
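
     To check whether the external Vault is sealed, you can query its status. This sketch runs the vault CLI inside the internal Vault pod (restart it first, as shown above, if it is scaled down); vault status does not require a token:

    # Check the seal status of the external Vault (sketch)
    /usr/local/bin/kubectl exec -it deployments/vault -- sh -c "
    export VAULT_ADDR=$(jq '."vault-url"' ${PATH_TO_VAULT_CONFIG});
    export VAULT_SKIP_VERIFY='true';
    vault status"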

  5. If everything is okay, delete the backup of the internal Vault's token by running the following command:

    /usr/local/bin/kubectl delete secret clustervaulttoken
    
     Once this secret is deleted, you will no longer be able to access data from the internal Vault.