Showing results for tags 'cli commands'.
-
Synchronizing files and data among multiple servers is crucial for smooth operation. Fortunately, many file-synchronization tools are available, and Rsync is one of them. Rsync is one of the most popular and widely used utilities for remotely syncing data in Linux. It offers efficient file transfer, preservation of file metadata, updating of existing files, partial transfers, and more, which makes it an ideal choice for nearly all administrators. So, this guide will be all about using the Rsync command in Linux without hassle.

How To Use the Rsync Command in Linux

Most Linux distributions ship the Rsync utility, but if yours does not, you can install it with the command for your operating system:

Debian/Ubuntu: sudo apt install rsync
Fedora: sudo dnf install rsync
Arch Linux: sudo pacman -Sy rsync

After completing the installation, run the command below to initiate data syncing between the source and the target; the '-r' option recurses into directories:

rsync -r source target

Here, you should replace source with the directory from which you want to synchronize the data and target with the directory where you want to store that data. For example, let's sync the Videos and Documents directories by running the following command:

rsync -r Videos Documents

If you want to copy data within the same system while preserving its attributes, use the following command:

sudo rsync -avz /source/path /target/path/

The '-a' or '--archive' option keeps the file attributes intact during a data transfer. The '-v' or '--verbose' option displays what data is being transferred. Although optional, you should use the '-z' or '--compress' option to compress the data during transfer. This aids in speeding up the synchronization process.
Let's take an example and use the above rsync command to synchronize files from the Scripts directory to the Python directory:

sudo rsync -avz ~/Scripts ~/Python

Moreover, the primary purpose of rsync is to transfer data remotely between two devices or servers connected over a network:

rsync -av -e ssh user@remote_host:/source/path/ /target/path

Here, the '-e ssh' option tells your system to use the secure shell (SSH) for this transfer. Furthermore, if the system encounters any interruption during a remote file transfer, don't worry: you can resume it through the '--partial' option, which keeps partially transferred files instead of discarding them:

rsync --partial -av -e ssh user@remote_host:/source/path/ /target/path

Dry Run

Rsync initiates the file transfer immediately after you enter a command. To avoid any unintended consequences, you should therefore always perform a dry run first. During a dry run, your system simply demonstrates the actions of your command without an actual data transfer. Add the '--dry-run' option to start one. For instance, to see what would happen during a data sync from the Python directory to the Scripts directory, use:

rsync -avz --dry-run ~/Python ~/Scripts

Make Identical Servers

Sometimes the target directory contains files that are not present in the source directory. This results in non-uniformity and, in some cases, unnecessary disk consumption. You can use the '--delete' option to delete data from the target that is not present at the source. For example:

rsync -av --delete /source/path/ /target/path/

Show Progress During Transfers

If you want to see the progress of your transfer, add the '--progress' option to display a progress indicator. For instance, with the progress indicator enabled, the earlier example becomes:

rsync -avz --progress ~/Python ~/Scripts

A Quick Summary

Mastering rsync commands enables you to efficiently transfer files to both local and remote hosts.
It's a robust tool for synchronizing data across different locations. This guide comprehensively explains how to use the rsync command in Linux: first it covers rsync's installation on Linux systems, then it demonstrates different rsync commands and options according to the use cases.
-
List of all commands with options for /opt/lampp/lampp:

sudo /opt/lampp/lampp start                 # Start XAMPP (Apache, MySQL, and any other bundled services)
sudo /opt/lampp/lampp stop                  # Stop XAMPP
sudo /opt/lampp/lampp restart               # Restart XAMPP
sudo /opt/lampp/lampp startapache           # Start only the Apache service
sudo /opt/lampp/lampp stopapache            # Stop the Apache service
sudo /opt/lampp/lampp startmysql            # Start only the MySQL/MariaDB service
sudo /opt/lampp/lampp stopmysql             # Stop the MySQL/MariaDB service
sudo /opt/lampp/lampp startftp              # Start the ProFTPD service
sudo /opt/lampp/lampp stopftp               # Stop the ProFTPD service
sudo /opt/lampp/lampp security              # Run a simple security check script
sudo /opt/lampp/lampp enablessl             # Enable SSL support for Apache
sudo /opt/lampp/lampp disablessl            # Disable SSL support for Apache
sudo /opt/lampp/lampp backup                # Create a simple backup of your XAMPP configuration, data, and logs
sudo /opt/lampp/lampp status                # Show status of XAMPP services
sudo /opt/lampp/lampp reload                # Reload XAMPP (Apache and MySQL reload configuration without stopping the server)
sudo /opt/lampp/lampp reloadapache          # Reload only the Apache service
sudo /opt/lampp/lampp reloadmysql           # Reload only the MySQL/MariaDB service
sudo /opt/lampp/lampp reloadftp             # Reload the ProFTPD service
sudo /opt/lampp/lampp enablephpmyadmin      # Enable phpMyAdmin access from the network (modify permissions)
sudo /opt/lampp/lampp disablephpmyadmin     # Disable phpMyAdmin access from the network
sudo /opt/lampp/lampp phpstatus             # Show PHP status (e.g., for checking PHP-FPM status)
sudo /opt/lampp/lampp clean                 # Clean XAMPP (clears temporary files and logs)

The post Lampp commands line reference with example appeared first on DevOpsSchool.com.
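Since every entry above goes through the same control script, a small wrapper can guard against a missing or broken XAMPP install. The LAMPP_BIN variable and the lampp_do function name are our own invention for this sketch, not part of XAMPP:

```shell
LAMPP_BIN=/opt/lampp/lampp

# Run a lampp subcommand only if the control script actually exists;
# otherwise fail with a clear message instead of "command not found".
lampp_do() {
    if [ ! -x "$LAMPP_BIN" ]; then
        echo "XAMPP control script not found at $LAMPP_BIN" >&2
        return 1
    fi
    sudo "$LAMPP_BIN" "$@"
}

# Example (uncomment on a machine with XAMPP installed):
# lampp_do restart
```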
-
Kubernetes is a free and open-source system that manages container solutions like Docker. With Docker, you can build application containers from predefined images. Kubernetes provides the next stage: it lets you operate numerous containers across several platforms and balance the load among them. You can follow the instructions in this guide to set up Kubernetes on Ubuntu 20.04.

Update the Ubuntu Packages

Before installing the Kubernetes tools on Ubuntu, update all the packages of the Ubuntu system using the "apt" utility with the "update" command:

$ sudo apt update

Set Up Docker

The first prerequisite to properly install Kubernetes on Ubuntu is to set up Docker. To install Docker, we use the "apt" utility again. The Docker package is named "docker.io", and the "-y" option lets "apt" install it without prompting:

$ sudo apt install docker.io -y

To enable Docker on the Ubuntu system, we use the "systemctl" utility. Two separate commands, with the "enable" and "status" keywords, enable the Docker service and check its status afterwards. The output of the status query shows whether the Docker service is running:

$ sudo systemctl enable docker
$ sudo systemctl status docker

After enabling Docker, start it with the same "systemctl" utility followed by the "start" keyword and the service name "docker":

$ sudo systemctl start docker

Install Kubernetes

Before installing Kubernetes, it's necessary to install the "curl" utility so that you can add the Kubernetes signing key to your system. The "snap" package manager is used to install it.
$ sudo snap install curl

After installing "curl", we use it with the "-fsSL" options to fetch the GPG signing key of Kubernetes from its official repository:

$ curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo tee /usr/share/keyrings/kubernetes.gpg

Now, to add the Kubernetes repository to the system's default repositories, use the "echo" command with the "deb" entry. Ensure that your other Ubuntu nodes have Docker installed, and execute the following instruction on those nodes as well so that they can communicate smoothly:

$ echo "deb [arch=amd64 signed-by=/usr/share/keyrings/kubernetes.gpg] http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list

Kubeadm (Kubernetes Administrator) is a utility that aids with cluster initialization; utilizing the community-sourced standards expedites setup. Kubelet is the work bundle that launches the containers on each node. Kubectl lets you access the clusters via the command line. Run the following instructions on every server node to install the Kubernetes tools via the "snap" utility, each with the "--classic" flag:

$ sudo snap install kubelet --classic
$ sudo snap install kubectl --classic
$ sudo snap install kubeadm --classic

After the successful installation of the Kubernetes tools, use each tool's version instruction to check the installed version:

$ kubeadm version
$ kubelet version
$ kubectl version

Configure Kubernetes

It's time to configure the installed Kubernetes tools. In this section, you learn how to get the servers ready for a Kubernetes deployment. Perform the following commands on each Linux machine that you use as a node. First of all, turn off swap storage.
Run the "swapoff" command with the "-a" option to carry out this operation, followed by a "sed" command that comments out the swap entry in /etc/fstab so the change survives reboots:

$ sudo swapoff -a
$ sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

Next, load the "containerd" modules. To declare them, open the "containerd" configuration file under "/etc" using the GNU nano editor:

$ sudo nano /etc/modules-load.d/containerd.conf

In the configuration file, list the "overlay" and "br_netfilter" modules by adding the following two lines:

overlay
br_netfilter

After listing the modules in the file, use the "modprobe" instruction with each module name to load them immediately:

$ sudo modprobe overlay
$ sudo modprobe br_netfilter

Now configure the networking of Kubernetes via its configuration file, located under "/etc" and again opened with the GNU nano editor:

$ sudo nano /etc/sysctl.d/kubernetes.conf

Set the following variables to enable networking for Kubernetes; this configuration sets up the iptables bridging behavior. Make sure to save the file before exiting:

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

Now reload the system configuration once and for all. The "sysctl" utility is used here with the "--system" option; the output shows the new settings being added and reloaded:

$ sudo sysctl --system

After successfully loading the Kubernetes modules and reloading the configuration, assign a unique name to each node in your Kubernetes network. For instance, to make the current node the master node of the Kubernetes network, use the "hostnamectl" utility to set the hostname to "master-node".
$ sudo hostnamectl set-hostname master-node

The worker node is set using the same instruction with a different node name; this command needs to be performed on the worker server:

$ sudo hostnamectl set-hostname omar

Open the host configuration file on every node within the Kubernetes network and add the IP address and hostname of each node so that they can identify one another uniquely:

$ sudo nano /etc/hosts

The host configuration file, opened via the nano editor, lists each node's IP address next to its hostname (shown as an image in the original article). Now, back on the master node, open the Kubelet configuration file via the nano editor:

$ sudo nano /etc/default/kubelet

Write the following line into the file to set the KUBELET_EXTRA_ARGS variable for Kubernetes on the master node:

KUBELET_EXTRA_ARGS="--cgroup-driver=cgroupfs"

Now reload this configuration on your master node and the worker nodes. The "systemctl" utility is used with the "daemon-reload" keyword; after providing the password, don't forget to restart the Kubernetes service:

$ sudo systemctl daemon-reload

Conclusion

In the end, your Kubernetes service is successfully installed and configured. Make sure to reload the Docker service and perform the specified commands on the worker nodes as well.
-
The Bash shell offers the export command, a built-in command that allows exporting variables in a shell to make them global, such that you can access them from another shell. With the export command, you export environment variables to child-process shells without tampering with the existing environment variables. This guide discusses the Bash export command, giving examples of how you can use it.

Understanding the Bash Export Command

The export command has only three options:

-n: indicates that the named variables won't be exported
-p: lists all exported variables and functions
-f: used when exporting functions

Let's have an example to understand when and where the export command comes in handy. Suppose you are in the current shell and create a local variable named demo1. If we want to access the variable's value, we can use the echo command:

$ echo $demo1

However, if we open another shell using the bash command and try to read the variable there, we get nothing as output. This happens because the variable we created was local to the original shell. To make it global, we must export it. Let's return to our previous shell and export the variable using the commands below:

$ exit
$ export demo1

If we go to a new shell and echo the variable, we now get a value, meaning we have access to it. That's possible because we exported it with the export command.

Viewing Exported Variables

If you run the export command with no arguments, it lists all the exported variables in your system regardless of the shell. If you add the -p option, it lists the exported variables in the current shell. Notice that the variable we exported appears at the bottom of the list.

Exporting Functions

You can go beyond variables and export even functions with the export command. To make the shell recognize that you are exporting a function, use the -f flag.
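Before turning to functions, the variable behaviour just described can be sketched in a runnable form; the demo1 value and the child-shell probes here are made up for the demo:

```shell
# A plain assignment is local to the current shell.
demo1="first value"

# A child bash shell does not see it yet: this captures an empty string.
child_before=$(bash -c 'echo "$demo1"')

# After export, child shells inherit the variable.
export demo1
child_after=$(bash -c 'echo "$demo1"')

echo "before export: '$child_before'"
echo "after export:  '$child_after'"
```

The command substitution spawns a genuine child process each time, which is exactly the situation the article describes with opening a second shell.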
Let's create a function, export it in the current shell, then try to access it in another shell and see if that works. With our function created, we can verify that it's unavailable in the new shell, where it returns an error. So, go back and export it using the -f flag. Lastly, open a new shell and verify that our function is now global. Bingo! You managed to export a function.

Setting Values

You can set the value of a variable with the export command using the syntax below:

$ export name[=value]

For instance, if we wanted to change the value of the variable we created earlier, we could assign it a new value this way. Once updated, echo the variable to check that it reflects the new value. That's how you can manipulate other environment variables, such as setting the default editor. You only need to grep the currently set editor and set a new one. Let's check the current default editor using the command below:

$ export | grep EDITOR

In this case, there is no default editor currently set. Let's go ahead and set one, such as vim or nano, using the command below:

$ export EDITOR=/usr/bin/nano

If we check again, we see that the variable's value was modified.

Conclusion

This article has focused on understanding the Bash export command and how to use it. We've given various examples of how to use the export command's options to export variables and functions. Knowing how to use the export command comes in handy when working with Bash scripts. So, don't stop here. Keep practicing!
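The function-export walkthrough above might look like this minimal sketch; the greet function is made up, and export -f is a bash-specific feature:

```shell
# Define a function in the current shell.
greet() { echo "hello from $1"; }

# A child bash shell cannot see it before export.
bash -c 'greet child' 2>/dev/null || echo "not visible in the child yet"

# export -f publishes the function to child bash shells.
export -f greet
out=$(bash -c 'greet child')
echo "$out"
```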
-
The SORT command in Linux arranges records in a specific order according to the options used. It helps in sorting the data in a file line by line. The sort command follows certain rules in its output: lines starting with numbers come before alphabetic lines, and lines with lowercase letters are displayed before lines with the same character in uppercase.

Prerequisite: You need to install Ubuntu on a virtual box and configure it. Users must be created to have the privileges of accessing the applications.

Syntax:

sort [options] [file]

Example: This is a simple example of sorting a file containing a list of names. The names are not in order, so you need to sort them. Consider a file named file1.txt. We display its contents using the appended command:

$ cat file1.txt

Now use the command to sort the text in the file:

$ sort file1.txt

Save the Output in Another File

Using the sort command alone, the result is only displayed, not saved. To capture the result we need to store it, and for this purpose the -o option (or output redirection) is used. Consider an example file sample1.txt containing the names of cars. We want to sort them and save the resulting data in a separate file named result.txt, created at run time. The first command below redirects the sorted data of sample1.txt into result.txt; the second achieves the same with the -o option. Finally, we display the data using the cat command:

$ sort sample1.txt > result.txt
$ sort -o result.txt sample1.txt
$ cat result.txt

The output shows that the data is sorted and saved in another file.

Sort by Column Number

Sorting is not limited to a single column; we can sort a file by its second column. Let's take the example of a text file containing the names and marks of students that we want to organize in ascending order of marks.
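A runnable recap of sorting, saving with -o, and the numeric column sort introduced next; the /tmp file names and their contents are made up for the demo:

```shell
# Unordered names, as in the file1.txt example.
printf 'zara\nadam\nmike\n' > /tmp/sort_demo_names.txt
sort /tmp/sort_demo_names.txt                      # display only

# -o writes the sorted result to a file instead of stdout.
sort -o /tmp/sort_demo_sorted.txt /tmp/sort_demo_names.txt

# Students and marks: -k 2n sorts on the second field numerically,
# so 9 comes before 40 and 75.
printf 'adam 75\nzara 9\nmike 40\n' > /tmp/sort_demo_marks.txt
sort -k 2n /tmp/sort_demo_marks.txt
```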
So we use the -k option in the command, while -n requests numerical sorting:

$ sort -k 2n file3.txt

Since we sort on the second column numerically, 2 is combined with n.

Check the Sorted Condition of a File

If you are not sure whether a file is sorted, the -c option removes the doubt and displays a message. We will go through two basic examples.

Unsorted Data

Consider an unsorted file containing vegetable names. The command uses the -c option to check whether the data in the file is sorted. If the data is unsorted, the output displays the line number of the first out-of-order word in the file, along with the word itself:

$ sort -c sample2.txt

From the given output, you can see that the 3rd word in the file was misplaced.

Sorted Data

When the data is already organized, there is nothing else to do. Consider the file result.txt:

$ sort -c result.txt

No message is shown, which indicates that the data in the file is already sorted.

Remove Duplicate Items

Here is one of the most useful sort options. It removes the repeated lines in a file, keeps the output organized, and maintains the consistency of the data. Consider the file file2.txt containing the names of subjects, with one subject repeated multiple times. The sort command's -u option removes the duplication:

$ sort -u file2.txt

Now, you can see that the repeated items are removed from the output and that the data is also sorted.

Sort Using a Pipe in a Command

If we want to sort a directory listing by file size, we list the directory with 'ls', where -l produces the long format including sizes. The pipe helps in displaying the files in an organized manner.
$ ls -l /home/aqsayasin/ | sort -nk5

Here -n sorts numerically and k5 selects the fifth column (the file size).

Random Sorting

Sometimes you may want to arrange the data in an arbitrary sequence; if there are no criteria for sorting, random sorting is preferred. Consider a file named sample3.txt containing the names of the continents:

$ sort sample3.txt -R

The respective output shows the file's items arranged in a shuffled order.

Sort the Data of Multiple Files

One of the most useful applications of sorting is sorting the data of multiple files at a time. This can be done using the find command, whose output acts as input for the sort command after the pipe. The -print0 option makes find emit each file name terminated by a NUL byte, which sort reads via --files0-from=-. For instance, consider three files named sample1.txt, sample2.txt, and sample3.txt, where "?" matches the single character that follows the word "sample". Find fetches all three files, and their data is sorted with the help of the sort command after the pipe:

$ find -name "sample?.txt" -print0 | sort --files0-from=-

The output shows that the data of all the sample.txt series files is displayed, arranged, and organized alphabetically.

Sort with Join

Now we introduce an example that is quite different from the ones discussed earlier in this tutorial. In addition to sort, we use join: both files are first sorted and then joined using the join command. Consider two files you want to join, and use the below-cited query to apply the given concept:

$ join <(sort sample2.txt) <(sort sample3.txt)

You can see from the output that the data of both files is combined in sorted form.

Compare Files Using Sort

We can also compare two files using the same technique as joining: first the two files are sorted, and then the data in them is compared.
Consider the same two files as discussed in the previous example, sample2.txt and sample3.txt:

$ comm <(sort sample2.txt) <(sort sample3.txt)

The data is sorted and arranged in columns: lines unique to sample2.txt, lines unique to sample3.txt, and lines common to both.

Conclusion

In this article, we have talked about the basic functionality and options of the sort command. The Linux sort command is very beneficial for maintaining data and filtering useless items from files.
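The -c, -u, and comm examples above can be tied together in one runnable sketch; the /tmp file names and contents are made up, and process substitution is a bash feature:

```shell
# A file with a duplicate, out-of-order entry.
printf 'math\nbio\nmath\nart\n' > /tmp/subjects.txt

# -c reports the first out-of-order line and exits non-zero.
sort -c /tmp/subjects.txt 2>/dev/null || echo "subjects.txt is not sorted"

# -u sorts and drops the duplicate "math".
sort -u /tmp/subjects.txt

# Compare two unsorted files by sorting them on the fly.
printf 'carrot\napple\nbanana\n' > /tmp/cmp_a.txt
printf 'banana\ndate\napple\n'   > /tmp/cmp_b.txt
common=$(comm -12 <(sort /tmp/cmp_a.txt) <(sort /tmp/cmp_b.txt))
echo "common to both files: $common"
```

comm -12 suppresses the first two columns, leaving only the lines present in both files.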
-
Sometimes, you might want to have a peek at the year's calendar, or even narrow it down to a month's calendar. The Linux cal command is an excellent built-in tool that displays a calendar of a given year or month depending on the options passed. In this brief guide, we explore a few example usages of the Linux cal command.

Basic syntax

The cal command takes the following command syntax:

$ cal month year

Linux cal command without arguments

In its basic form, the cal command prints out the current month and highlights the present day. For instance, at the time of writing, the date is 18th May 2021:

$ cal

Print a month of the year with the cal command

To print a specific month of the year, provide the numeric value of the month [1 - 12] followed by the year. For example, to display the 10th month of 2021, run the command:

$ cal 10 2021

This prints out the calendar dates for the 10th month (October) only. Alternatively, you can invoke the -m option followed by the month of the year. If the year is not provided as an argument, then the month of the current year is printed instead:

$ cal -m June

Print the current month alongside the previous and following months

Let's try out something more ambitious. The cal command also allows you to print the previous, current, and next months together. Simply pass the -3 option at the end:

$ cal -3

Print the entire calendar of the current year

To print the entire traditional calendar for the current year, run:

$ cal -y

To view the calendar of a different year, for example 2022, specify the year after the cal command as shown:

$ cal 2022

These are some of the commonly used cal command examples. However, if curiosity gets the better of you, find more command options by visiting the cal man pages:

$ man cal

Summary

The Linux cal command displays a simple calendar that allows you to view the current month of the year, the entire current year, or other months or years, depending on your command arguments.
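The invocations above can be collected into one guarded sketch; the guard is our own addition, since minimal installs may lack cal (it typically ships with util-linux, or the ncal package on Debian-family systems):

```shell
# Run a few cal variations only if the command is available.
if command -v cal >/dev/null 2>&1; then
    cal            # current month, today highlighted
    cal 10 2021    # October 2021
    cal -3         # previous, current, and next month
    cal -y         # the whole current year
else
    echo "cal is not installed; try the util-linux or ncal package"
fi
```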
-