Search the Community
Showing results for tags 'cli tools'.
-
AWS offers numerous services for anyone to use, but when you want an easy and controlled way of managing them all, you should install the AWS CLI (command-line interface). The AWS CLI gives you access to the AWS API, letting you manage any service depending on the task you are handling or want to automate. There are two ways to install the AWS CLI on Ubuntu 24.04, depending on your preference: as a snap package or as a Python module inside a Python virtual environment. Let's discuss each option.
Method 1: Install AWS CLI Via Snap
Ubuntu supports AWS CLI, and you can install it as a snap package from the App Center GUI or with the snap command. Installing AWS CLI as a snap pulls in all the dependency packages it requires, so you don't have to install them separately. If you find this approach convenient, execute the command below.
$ sudo snap install aws-cli --classic
AWS CLI will be downloaded and installed on your system. With this option, the AWS CLI is accessible system-wide and not just in a virtual environment, as in the second method. Once you install AWS CLI, check the installed version to confirm the package has been installed successfully.
$ aws --version
We've installed AWS CLI version 2.15.38. The next step is to set up the AWS CLI by connecting it with your AWS account so you can take control of your AWS services. Run the configure command below.
$ aws configure
Provide your credentials to complete the setup.
Method 2: Install AWS CLI as a Python Module
You can also install AWS CLI on Ubuntu 24.04 as a Python module. This method requires creating a virtual environment and using pip to install AWS CLI. A virtual environment is an isolated way of using packages: instead of making AWS CLI accessible system-wide, you limit it to the created virtual environment. Moreover, even if you don't have sudo privileges on the system, you can still install AWS CLI inside a virtual environment. Follow the steps below.
Step 1: Install Python3 PIP and Venv
To create a virtual environment, we must have venv installed. Execute the command below to install it.
$ sudo apt install python3-venv
You also need PIP to install Python modules. Therefore, install PIP using the following command.
$ sudo apt install python3-pip
Step 2: Create a Virtual Environment
With venv, we can create an isolated environment in which to install AWS CLI without requiring sudo privileges. A virtual environment is recommended when working with Python modules, as it doesn't interfere with APT-installed packages. Besides, if things go sideways, it doesn't affect your system. We are using Python 3 for this case. Here's how you create a virtual environment and activate it.
$ python3 -m venv .venv
$ source .venv/bin/activate
We've named the virtual environment ".venv," but you can use any preferred name. Again, we've created the directory in our current location, but you can specify a different path.
Step 3: Install AWS CLI
Inside the virtual environment, running the following command will download and install AWS CLI.
$ pip3 install awscli
Ensure the download completes successfully. You will get an output similar to the one below. You can check the installed version.
$ aws --version
With AWS CLI installed, configure it to start managing your AWS services. Once you are done using it or want to exit the virtual environment, deactivate it.
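For quick reference, here is the whole virtual-environment flow in one place. This is a minimal sketch; the .venv name and location simply repeat the examples above, and you can substitute your own.
$ sudo apt install python3-venv python3-pip   # one-time setup of venv and PIP
$ python3 -m venv .venv                       # create the virtual environment
$ source .venv/bin/activate                   # activate it
$ pip3 install awscli                         # install AWS CLI inside the environment
$ aws --version                               # confirm the installation
$ aws configure                               # connect the CLI to your AWS account
$ deactivate                                  # leave the environment when you are done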
Conclusion
AWS CLI is a preferred way of managing your AWS services. On Ubuntu 24.04, you can install it as a snap package or in a Python virtual environment using PIP. This post discussed each method, giving examples to create a detailed and straightforward guide for anyone to follow along. View the full article
-
Synchronizing files and data among multiple servers is crucial for smooth functioning. Fortunately, many tools are available online for file synchronization, and Rsync is one of them. Rsync is one of the most popular and widely used utilities for remotely syncing data in Linux. Rsync features efficient file transfer, preservation of file metadata, updating of existing files, partial transfers, and more. This makes Rsync an ideal choice for nearly all administrators. So, this guide will be all about using the Rsync command in Linux without hassles.
How To Use the Rsync Command in Linux
Most Linux distributions contain the Rsync utility, but if yours doesn't, you can install it with the command for your distribution:
Debian/Ubuntu: sudo apt install rsync
Fedora: sudo dnf install rsync
Arch Linux: sudo pacman -Sy rsync
After completing the installation, run the command below to initiate data syncing between the source and the target (the '-r' option makes rsync recurse into directories):
rsync -r source target
Here, replace source with the directory from which you want to synchronize the data and target with the directory where you want to store that data. For example, let's sync the Videos and Documents directories by running the following command:
rsync -r Videos Documents
If you want to copy data within the same system, use the following command:
sudo rsync -avz /source/path /target/path/
The '-a' or '--archive' option keeps the file attributes intact during a data transfer. The '-v' or '--verbose' option displays what data is being transferred. Although optional, you should use the '-z' or '--compress' option to compress the data during transfer; this speeds up the synchronization process. Let's take an example and use the above rsync command to synchronize files from the Scripts directory to the Python directory:
sudo rsync -avz ~/Scripts ~/Python
Moreover, the primary purpose of rsync is to transfer data remotely between two devices or servers connected over a network:
rsync -av -e ssh user@remote_host:/source/path/ /target/path
Here, the '-e ssh' option tells rsync to use the secure shell (SSH) for the transfer. Furthermore, if the system encounters an interruption during a remote file transfer, don't worry: you can resume it with the '--partial' option:
rsync --partial -av -e ssh user@remote_host:/source/path/ /target/path
Dry Run
Rsync initiates the file transfer immediately after you enter a command. Therefore, to avoid any unintended consequences, you should always perform a dry run first. During a dry run, rsync simply reports what your command would do without transferring any data. Add the '--dry-run' option to start a dry run. For instance, to see what would happen during a data sync from the Python to the Scripts directory, use:
rsync -avz --dry-run ~/Python ~/Scripts
Make Identical Servers
Sometimes the target directory contains files that are not present in the source directory. This results in non-uniformity and, in some cases, unnecessary disk consumption. You can use the '--delete' option to delete data from the target that is not present at the source. For example:
rsync -av --delete /source/path/ /target/path/
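Putting the last two options together, a cautious workflow is to preview the deletions with a dry run and only then perform the real sync. This is a minimal sketch; the paths are placeholders:
rsync -avz --delete --dry-run /source/path/ /target/path/   # preview what would be copied and deleted
rsync -avz --delete /source/path/ /target/path/             # run the actual synchronization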
Show Progress During Transfers
If you want to see the progress of your transfer, add the '--progress' option to display a progress indicator. For instance, enabling the progress indicator on the earlier Python-to-Scripts sync looks like this:
rsync -avz --progress ~/Python ~/Scripts
A Quick Summary
Mastering rsync commands enables you to efficiently transfer files to both local and remote hosts. It's a robust tool for synchronizing data across different locations. This guide comprehensively explains how to use the rsync command in Linux: it first covers rsync's installation on Linux systems and then demonstrates the different rsync commands and options according to common use cases. View the full article
-
Kubernetes is a free and open-source system that manages container solutions like Docker. With Docker, you build application containers from predefined images; Kubernetes provides the next stage, enabling you to operate numerous containers across several machines and balance the load among them. You may follow the instructions in this manual to set up Kubernetes on Ubuntu 20.04.
Update the Ubuntu Packages
Before installing Kubectl on Ubuntu, we need to update all the packages of the Ubuntu system. For an update, we utilize the "apt" utility of Ubuntu with the "update" command as follows:
$ sudo apt update
Set Up Docker
The first prerequisite to properly install Kubernetes on Ubuntu is to set up Docker. To install Docker, we use the "apt" utility in the installation command. The Docker package is named "docker.io", and the "-y" option tells "apt" to install it without prompting. Docker will be installed in a while, and you can start using it after enabling it.
$ sudo apt install docker.io -y
To enable Docker on the Ubuntu system, we use the "systemctl" utility. Two separate commands, with the "enable" and "status" keywords, enable the Docker service and check its status afterward. The output of the status query shows that the Docker service is running perfectly fine.
$ sudo systemctl enable docker
$ sudo systemctl status docker
After enabling Docker, start it using the same "systemctl" utility followed by the "start" keyword and the service name "docker".
$ sudo systemctl start docker
Install Kubernetes
Before installing Kubernetes, it's necessary to install the "curl" utility on your system so that you can add the Kubernetes signing key. The "snap" package manager is used to install the "curl" utility.
$ sudo snap install curl
After installing "curl", we use it with the "-fsSL" options to get the GPG key of Kubernetes from its official cloud repository. This command fetches the signing key:
$ curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo tee /usr/share/keyrings/kubernetes.gpg
Now, to add the Kubernetes repository to the default repositories of our system, we use the "echo" command followed by the "deb" entry. Ensure that your other Ubuntu nodes also have Docker installed, and execute the following instruction on those nodes as well for smooth communication:
$ echo "deb [arch=amd64 signed-by=/usr/share/keyrings/kubernetes.gpg] http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list
Kubeadm, the Kubernetes Administrator, is a utility that aids with cluster initialization and expedites setup by applying community-sourced standards. Kubelet is the work package that launches the containers on each node. Kubectl is the command-line program that lets you access the clusters.
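Before installing the Kubernetes tools, it is worth confirming that Docker is active and refreshing the package index so the newly added repository is picked up. These two checks are not part of the original walkthrough; they are a small, commonly used sanity check:
$ sudo systemctl is-active docker   # should print "active"
$ sudo apt update                   # refresh the package lists, including the new Kubernetes repository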
Run the following instructions on every server node to install the Kubernetes tools via the "snap" utility, appending the "--classic" option to each instruction:
$ sudo snap install kubelet --classic
$ sudo snap install kubectl --classic
$ sudo snap install kubeadm --classic
After the successful installation of the Kubernetes tools, we use the version instruction for each tool to check the installed version as follows:
$ kubeadm version
$ kubelet version
$ kubectl version
Configure Kubernetes
It's time to configure the installed Kubernetes tools on our system. In this section, you may learn how to get the servers ready for a Kubernetes deployment. Perform the following commands on each Linux machine that you use as a node. First of all, turn off the swap storage. Run the "swapoff" command with the "-a" option to carry out this operation, followed by the "sed" command to comment out the swap entry in /etc/fstab:
$ sudo swapoff -a
$ sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
Next, load the "containerd" modules. To declare those modules, we open the "containerd" configuration file, located in the "etc" folder, using the GNU nano editor:
$ sudo nano /etc/modules-load.d/containerd.conf
With the configuration file open, list the "overlay" and "br_netfilter" modules by adding the following two lines:
overlay
br_netfilter
After listing the modules in the file, use the "modprobe" instruction followed by the module names to load them:
$ sudo modprobe overlay
$ sudo modprobe br_netfilter
Now configure the Kubernetes network by editing its configuration file, also located in the "etc" folder. The GNU nano editor is used to open the file.
$ sudo nano /etc/sysctl.d/kubernetes.conf
Set the following variables to enable networking for Kubernetes. This configuration sets up the iptables rules for Kubernetes. Make sure to save the file before exiting.
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
Now, reload the system services and configurations once and for all. The "sysctl" utility is used here with the "--system" option. You will see the new configurations being added and reloaded in the output.
$ sudo sysctl --system
After successfully loading the Kubernetes modules and reloading the services, assign a unique name to each node in your Kubernetes network. For instance, we want to set the current node as the master node of the Kubernetes network, so we use the "hostnamectl" utility to set the hostname to "master-node".
$ sudo hostnamectl set-hostname master-node
The worker node is set using the same instruction with a different node name; this command needs to be performed on the worker server.
$ sudo hostnamectl set-hostname omar
Open the host configuration file on every node within our Kubernetes network and add the IP addresses of the nodes. You need to list the hostnames along with the IP addresses to identify each node uniquely.
$ sudo nano /etc/hosts
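For illustration, the entries added to /etc/hosts look like the lines below. The IP addresses here are hypothetical placeholders; substitute the real addresses of your nodes. The hostnames match the ones set above.
192.168.1.10 master-node
192.168.1.11 omar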
Make sure that you are on the master node, and open the kubelet configuration file via the nano editor.
$ sudo nano /etc/default/kubelet
Write the following line into the file to set the KUBELET_EXTRA_ARGS variable for Kubernetes on the master node.
KUBELET_EXTRA_ARGS="--cgroup-driver=cgroupfs"
Now, reload the just-set configuration on your master node and the worker nodes. The "systemctl" utility is used with the "daemon-reload" keyword. After providing the password, don't forget to restart the kubelet service (for example, with sudo systemctl restart kubelet) so the new arguments take effect.
$ sudo systemctl daemon-reload
Conclusion
In the end, your Kubernetes service is successfully installed and configured. Make sure to reload the Docker service and perform the specified commands on the worker nodes as well. View the full article
-
AWS Copilot CLI for Amazon Elastic Container Service (Amazon ECS) is now generally available with v1.0.0. The AWS Copilot CLI makes it easy to build, release, and operate production-ready containerized applications on Amazon ECS with the Fargate launch type. AWS Copilot incorporates AWS’s best practices, from infrastructure-as-code to continuous delivery, and makes them available to customers from the comfort of their terminal. With AWS Copilot, you can focus on building your applications instead of setting up infrastructure. View the full article
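To get a feel for the workflow this release describes, spinning up a containerized service generally takes only a couple of commands. This is a hedged sketch: the exact command set has evolved across Copilot versions, and copilot init prompts interactively for the application name, service type, and Dockerfile.
$ copilot init          # guided setup of the application, service, and environment
$ copilot svc deploy    # build the image, push it, and deploy the service to Amazon ECS on Fargate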
-
Today, the AWS Copilot CLI for Amazon Elastic Container Service (Amazon ECS) launched version 0.5.0. Starting with this release, you can deploy applications or jobs that need to run only on a particular schedule. AWS Copilot has built-in timeouts and retries to provide more flexibility for how your scheduled jobs run. AWS Copilot will also deploy all the required infrastructure and settings, while you just provide the application and the schedule to be run. This allows you to focus on development instead of manually setting up rules and infrastructure to ensure your scheduled jobs run when needed. View the full article
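As an illustration of the feature, a scheduled job can be created with the job subcommand. The job name, Dockerfile path, and schedule below are hypothetical examples rather than values from the announcement, and flag names may differ slightly between Copilot versions; the timeout and retry behavior is then tuned in the generated manifest.
$ copilot job init --name nightly-report --dockerfile ./Dockerfile --schedule "@daily"
$ copilot job deploy --name nightly-report --env test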
1 reply
Tagged with: aws, scheduling (and 2 more)
-
We are excited to let you know that we have released a new experimental tool, and we would love to get your feedback on it. Today we have released an experimental Docker Hub CLI tool, the hub-tool. The new Hub CLI tool lets you explore, inspect, and manage your content on Docker Hub, as well as work with your teams and manage your account. The new tool is available as of today for Docker Desktop for Mac and Windows users, and we will be releasing this for Linux in early 2021. The hub-tool is designed to map as closely as possible to the top-level features we know people are using in Docker Hub and provide a new way for people to start interacting with and managing their content. Let's start by taking a look at the top-level options we have.
What you can do
We can see that we have the ability to jump into your account, your content, your orgs, and your personal access tokens. From here I can dive into one of my repos, and from there I can then decide to list the tags in one of those repos. This also now lets me see when these images were last pulled. Changing focus, I can go over and look at some of the teams I am a member of to see what permissions people have. Or I can have a look at my access tokens.
Why a standalone tool?
I also wanted to mention why we have decided to do this as a standalone tool rather than a Docker command with something like docker registry. We know that Docker Hub has some unique features, and we wanted to bring these out as part of this tool and get feedback on whether this is something that would be valuable to add (or which bits of this we should add!) to the Docker CLI in the future. Given that some of these features are unique to Hub, that we wanted feedback before adding more top-level commands into the Docker CLI, and that we wanted to do something quick, we decided to go with a standalone tool. This does mean that this tool is going to be an experiment, so we do expect it to go away sometime in 2021. We plan to use the lessons we learn here to make something awesome as part of the Docker CLI.
Give us feedback!
If you have feedback or want to see this move into the existing Docker CLI, please let us know on the roadmap item. To get started trying out the tool, sign up for a Hub account and start using the tool in the Edge version of Docker Desktop. The post Docker Hub Experimental CLI tool appeared first on Docker Blog. View the full article
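For readers who want to try it, the walkthrough above roughly corresponds to the commands below. The repository name is a hypothetical placeholder, and since this was an experimental release the exact syntax may have changed:
$ hub-tool login                  # authenticate against Docker Hub
$ hub-tool account info           # view your account details
$ hub-tool repo ls                # list your repositories
$ hub-tool tag ls myuser/myrepo   # list the tags in a repository, including when they were last pulled
$ hub-tool org ls                 # list the organizations you belong to
$ hub-tool token ls               # list your personal access tokens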
Tagged with: docker hub, experimental (and 1 more)
-
Forum Statistics
63.6k Total Topics
61.7k Total Posts