If a TACACS+ server is misbehaving and rejecting login requests, you can pick a particular server to use, provided the following global configuration command is configured:
tacacs-server directed-request
With this command configured, you can log in to devices using the format <username>@<tacacs-ip-address> to select a specific working server.
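For example, assuming a working server at the hypothetical address 192.0.2.10, a user named admin could steer authentication to that server by logging in as:
admin@192.0.2.10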
Salt SSH is an agentless option provided by SaltStack that allows users to run Salt commands without having to install a minion on the remote device or node.
The main requirements for Salt SSH are that the remote system has SSH enabled and Python installed.
Salt SSH connects to the remote system and installs a lightweight version of SaltStack in a temporary directory, and can optionally delete this temporary directory once the operations have completed.
Salt SSH is considerably slower than Salt's normal communication over the 0MQ distributed messaging library, but it is still considered faster than logging in to the system and executing commands manually.
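A minimal sketch of Salt SSH usage, assuming a hypothetical host web1 at 192.0.2.11 defined in the roster file that Salt SSH uses in place of a minion list:
# /etc/salt/roster (hypothetical entry)
web1:
  host: 192.0.2.11
  user: admin
  sudo: True
# Run a test ping over SSH; no minion is required on web1
salt-ssh 'web1' test.ping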
Puppet Bolt allows the functionality of Puppet without the need to install a Puppet master or Puppet agents on devices or nodes.
Puppet Bolt connects to devices using SSH or WinRM connections.
Puppet Bolt is an open source tool based on the Ruby language and can be installed as a single package.
Tasks can be used for pushing configurations and for managing services, such as starting and stopping a service or deploying an application.
Puppet Bolt allows a change to be executed or a configuration to be changed and then validated.
There are two ways to use Puppet Bolt:
Orchestrator Driven Tasks
Standalone Tasks
Orchestrator Driven Tasks
Orchestrator-driven tasks leverage the Puppet architecture and its services to connect to devices, and are meant for large-scale environments.
Standalone Tasks
Standalone tasks are used for connecting directly to devices or nodes to execute tasks and do not require any Puppet environment or components to be set up.
Individual commands can be run from the command line using bolt command run <command name>, followed by the list of devices to run the command against. Scripts can be created to run multiple commands if required.
Puppet Bolt will copy the script to a temporary directory on the remote device, execute the script, capture the result, and remove the script from the system.
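A sketch of both usages, assuming hypothetical hosts switch1.example.com and switch2.example.com and a recent Bolt release that uses the --targets flag (older releases used --nodes):
# Run a single command against two devices
bolt command run 'uptime' --targets switch1.example.com,switch2.example.com
# Copy a local script to each target, run it, and clean it up afterwards
bolt script run ./backup_config.sh --targets switch1.example.com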
Ansible is an automation tool that is capable of automating cloud provisioning, deployment of applications, and configuration management.
Ansible was created with the following concepts in mind:
Consistent
Secure
Highly reliable
Minimal learning curve
Ansible is an agentless tool; no software or agent needs to be installed on the client machines being managed.
Ansible communicates using SSH for the majority of devices, but can support Windows Remote Management and other transport methods.
Ansible does not need an administrative account on the clients it manages; it can use built-in privilege escalation tools such as sudo when required.
Ansible sends all requests from a control station, which can be a laptop or server sitting in a data centre.
The control station is the device that is used to run Ansible and issue changes and send requests to remote hosts.
Ansible uses playbooks to deploy configuration changes or retrieve information from hosts within a network.
An Ansible playbook is a structured set of instructions.
A playbook can contain multiple plays, and each play contains the tasks that need to be accomplished in order for the play to be successful.
Playbooks are normally written in YAML.
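A minimal sketch of a playbook with one play, assuming a hypothetical inventory group named webservers; the modules used are Ansible built-ins, and the ntpd service name assumes a RHEL-style system:
---
- name: Ensure NTP is present on all web servers
  hosts: webservers
  become: yes
  tasks:
    - name: Install the NTP package
      ansible.builtin.package:
        name: ntp
        state: present
    - name: Ensure the NTP service is running
      ansible.builtin.service:
        name: ntpd
        state: started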
CLI Command – Use Case
ansible – Runs modules against a targeted host
ansible-playbook – Runs a playbook
ansible-pull – Changes Ansible clients from the default push model to a pull model
ansible-vault – Encrypts YAML files that may contain sensitive data
ansible-doc – Provides documentation on syntax and parameters in the CLI
Common Ansible CLI Commands
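As a sketch of the first two commands, assuming the playbook above were saved under the hypothetical name ntp.yml and the webservers group exists in the inventory:
# Run the ping module ad hoc against a group of hosts
ansible webservers -m ping
# Run a playbook
ansible-playbook ntp.yml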
Ansible keeps an inventory file to keep track of the hosts it manages. The inventory can be a named group of hosts or a simple list of individual hosts.
A host can belong to multiple groups and can be represented by a hostname or IP address.
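A minimal sketch of an INI-style inventory with hypothetical hostnames; note that web1.example.com belongs to both groups:
[webservers]
web1.example.com
192.0.2.21

[datacentre]
web1.example.com
db1.example.com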
SaltStack is a configuration management tool in the same category as Chef and Puppet.
SaltStack is built on Python and has a Python interface, so a user can program directly against SaltStack using Python code.
Most of the instructions that SaltStack sends out are in YAML or a DSL. These are called Salt Formulas. Formulas can be modified but are designed to work out of the box.
SaltStack uses the concept of systems, which are divided into multiple categories. SaltStack has masters and minions.
SaltStack can run remote commands on systems in parallel, which allows for very fast performance.
SaltStack leverages a distributed messaging platform called 0MQ for fast, reliable messaging throughout the networking stack.
SaltStack is an event-driven technology that has components called reactors and beacons.
A reactor lives on the master and listens for any type of change in the node or device that differs from the desired state or configuration, such as:
Command line configuration change
Disk, Memory, or Processor Utilisation change
Service status change
Beacons live on minions. If there is a configuration change on a minion, the beacon alerts the reactor. This is known as the remote execution system; it helps determine whether the configuration is in an appropriate state on the minions. These actions are called jobs, and executed jobs can be stored in an external database for reuse or review.
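A minimal sketch of a beacon, using Salt's standard diskusage beacon and a hypothetical file name; beacons are configured on the minion:
# /etc/salt/minion.d/beacons.conf
beacons:
  diskusage:
    - /: 80%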
SaltStack uses pillars and grains. SaltStack uses grains to gather system information on the minions and report it back to the master; this information is gathered by the salt-minion daemon.
Grains provide specific information to the master about the host, such as uptime.
Pillars store data on the master that minions can retrieve.
Minions can be assigned to a pillar; minions that are not assigned to a pillar cannot retrieve that information.
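A sketch of both concepts, using a hypothetical minion ID web1 and a hypothetical credentials pillar:
# Ask every minion to report a single grain back to the master
salt '*' grains.item os
# /srv/pillar/top.sls -- assign the credentials pillar to web1 only
base:
  'web1':
    - credentials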
The SaltStack command structure contains targets, commands, and arguments.
The target is the desired system that a command should run on. The target can be the minion ID of a minion, or * as a wildcard to target all minions, which is known as globbing.
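A sketch of the target/command/argument structure, reusing the hypothetical minion ID web1:
# Target one minion; pkg.install is the command and httpd is the argument
salt 'web1' pkg.install httpd
# Glob all minions with the * wildcard
salt '*' test.ping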
Chef is an open source configuration management tool designed to automate configurations and operations of a network and server environment.
Chef is written in Ruby and Erlang. Writing code within Chef is done in Ruby.
Configuration management tools function in two different types of models: push or pull. In a push model, configuration is pushed from a centralised tool or management server; in a pull model, clients check in with the server to see if there is any change in the configuration and, if there is, pull the updated configuration to the end device.
Chef's structure, terminology, and core components are different from those of Puppet, although Chef leverages similar client/server functionality.
Chef Component – Puppet Component – Description
Chef Server – Puppet Master – Server/master functions
Chef Client – Puppet Agent – Client/agent functions
Cookbook – Module – Collection of code or files
Recipe – Manifest – Code being deployed to make configuration changes
Workstation – Puppet Console – Where users interact with the configuration management tool and create code
Puppet and Chef Comparison
Code is created on the Chef workstation. The code is stored in a file called a recipe. Once a recipe is created on the workstation, it is uploaded to the Chef server to be used in an environment.
Knife is the name of the command line tool used to upload cookbooks to the Chef server.
The command used is knife upload <cookbook-name>.
The Chef server can be hosted locally on the workstation or remotely on a server.
There are four types of Chef server deployments:
Chef Solo – Hosted locally
Chef Client and Server – Typical Chef Deployment with distributed components
Hosted Chef – Chef server is hosted in the cloud
Private Chef – All Chef components are in the same enterprise network
All cookbooks are stored on the Chef server
The server also holds all the tools required to transfer the configurations to the Chef clients.
OHAI, a service installed on the nodes, is used to collect the current state of a node and send the information back to the Chef server through the Chef client service.
The Chef server checks whether there is any new configuration that needs to be sent to the node by comparing the information from OHAI to the cookbook or recipe.
The Chef client service that runs on the nodes is responsible for all communication with the Chef server.
When a node needs a recipe, the Chef client handles the communication back to the Chef server to signify the node's need for an updated configuration or recipe.
Because nodes can be unique or identical, the recipes can be the same or different for each node.
Recipe files have the file extension .rb.
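A minimal sketch of a recipe, using Chef's built-in package and service resources; the file path and service name are hypothetical:
# recipes/ntp.rb -- install NTP and make sure the daemon is running
package 'ntp'

service 'ntpd' do
  action [:enable, :start]
end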
The kitchen is where all recipes and cookbooks can be automatically executed and tested prior to going live on any production nodes.
The kitchen allows for testing not only within the enterprise environment but also with many cloud providers and virtualisation technologies.
The kitchen supports many of the common testing frameworks used by the Ruby community, such as RSpec and Serverspec.
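A minimal sketch of a Test Kitchen configuration (.kitchen.yml), with a hypothetical platform and run list:
driver:
  name: vagrant
provisioner:
  name: chef_zero
platforms:
  - name: ubuntu-20.04
suites:
  - name: default
    run_list:
      - recipe[ntp::default]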
Puppet is a robust configuration management and automation tool.
Cisco supports the use of Puppet on a variety of devices such as Catalyst Switches, Nexus Switches and the Cisco Unified Computing System (UCS) server platform.
Puppet works with many different vendors and is one of the more common tools used for automation.
Puppet can be used for the entire lifecycle of a device, including initial deployment, configuration, repurposing, and removing devices.
Puppet uses the concept of a Puppet master server to communicate with devices that have the Puppet agent locally installed.
Changes and automation tasks are executed within the Puppet console and are shared between the Puppet master and Puppet agents.
The changes are stored in the Puppet database called PuppetDB, which can be located on the master server or a different box.
Puppet allows for the management and configuration of many device types at the same time.
Puppet agents communicate with the Puppet master using different TCP connections.
Each TCP port represents a communication path running from an agent on a device or node.
Puppet also has the capability to periodically verify the configuration on a device; this frequency can be set to any value. If the configuration has changed, an alert can be sent, and the device can be put back to its original configuration.
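As a sketch, the agent's check-in frequency is controlled by the runinterval setting in puppet.conf; the 30-minute value below is illustrative:
# /etc/puppetlabs/puppet/puppet.conf
[agent]
runinterval = 30m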
There are three different installation types with Puppet:
Monolithic – Up to 4,000 nodes
Monolithic with compile masters – 4,000 to 20,000 nodes
Monolithic with compile masters and standalone PE-PostgreSQL – More than 20,000 nodes
The typical installation is monolithic, though Puppet can scale to very large installations.
In very large environments, Puppet needs a master of masters server to manage the distributed Puppet masters and their databases and to help simplify management.
Large deployments also need compile masters, which are simply load-balanced Puppet servers that help scale the number of agents that can be managed.
Puppet Modules
Puppet Modules allow for configuration of anything that can be configured manually. Puppet has many modules for different vendors and device types. Puppet Modules contain the following components:
Manifests
Templates
Files
Manifests are the code that configures the clients or nodes running the Puppet agent. These manifests are pushed to the device using SSL and require certificates to be installed to ensure the security of the communications between the Puppet master and Puppet agents.
Each of the manifests is used to modify the running configuration on Cisco Catalyst devices. Manifests can be saved as individual files and have the extension .pp.
Here is an example of a Puppet file that configures the Network Time Protocol (NTP) server on a Cisco Catalyst device:
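A minimal sketch, assuming the ntp_server resource type provided by Puppet's Cisco modules and a hypothetical server address:
ntp_server { '192.0.2.10':
  ensure => 'present',
  prefer => true,
}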