At the time of writing this article, I was using Ubuntu 20.04 Focal Fossa, which shipped with nearly 20 new wallpapers. But if you have been using Ubuntu for quite some time, you must miss the old Ubuntu wallpapers you were so used to gazing at.
Sure, it's easier to just find the old Ubuntu wallpapers on the internet, download the images, and set one as your wallpaper, but where is the fun in that? As a regular Ubuntu user, it is enthralling to use the command line to do our work. Let's give nostalgia some space to kick in.
So, to download the wallpapers of all Ubuntu versions, execute the below command in a terminal (shortcut: Ctrl+Alt+T).
View the wallpaper packages of all Ubuntu versions in sorted order:
apt-cache search ubuntu wallpapers | sort -n
After this, you can either choose a particular version to download, or download almost all the wallpapers with the below command.
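One way to grab everything at once is to feed the package names from the search back into apt. This is a sketch, assuming the packages follow the usual `ubuntu-wallpapers-<codename>` naming scheme:

```shell
# Extract the first column (package names) from the search results
# and hand them all to apt in one go.
sudo apt install $(apt-cache search ubuntu-wallpapers | awk '{print $1}')
```

Once installed, the images land under /usr/share/backgrounds and show up in the wallpaper chooser.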
Any IoT system is incomplete without a cloud server that can collect and route the data. But before doing an actual deployment on the cloud, you may want to test your server config or just do some local testing. Sure, you can spin up the server on a Raspberry Pi in your LAN, but that may not be an option for everyone: maybe you need more resources than an RPi can offer, or there isn't one lying around.
For such scenarios, you could always install the server on your localhost, but that's not recommended, as any wrong config might break your host operating system. It's better to play around in a VM on your local PC and simply delete it if something goes wrong or when you are done with your work. The software that runs this kind of VM on top of your host OS is called a Type-2 hypervisor.
Setting up a VM on Linux is much simpler and more efficient in terms of hardware utilization. We have various options like VMware and VirtualBox, but we will go with something raw: KVM (Kernel-based Virtual Machine). KVM is a virtualization module in the Linux kernel that allows the kernel to function as a hypervisor. It was merged into the mainline Linux kernel in version 2.6.20, released on February 5, 2007. This means it comes out of the box, but we still need to install some additional packages, which we will see in the steps below.
Check if your system supports KVM
Run the below command to find out if your PC supports virtualization.
egrep -c '(vmx|svm)' /proc/cpuinfo
An output greater than 0 means virtualization is supported; the number is the count of CPU cores (threads) that expose the flag.
sudo apt install cpu-checker
Then run sudo kvm-ok (installed by cpu-checker). The output “KVM acceleration can be used” clearly indicates we are on the right path.
Installing KVM and other dependencies
Now that we are sure our system supports Virtualization, it’s time to install KVM and other dependencies.
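On Ubuntu/Debian, the usual package set looks like the following; treat it as a sketch, since package names can vary slightly between releases:

```shell
# QEMU/KVM itself, the libvirt management daemon and clients,
# network bridging utilities, and the CLI/GUI installer tools.
sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils virtinst virt-manager

# Add your user to the libvirt group so you can manage VMs without sudo
# (log out and back in for the group change to take effect).
sudo adduser "$USER" libvirt
```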
Here we give all the specs (CPU, RAM, network) and the location of the ISO file in one command and execute it. After running the command, the VM installation will start (image at the end of the post).
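A sketch of such a one-shot virt-install command; the VM name, memory, disk size, and ISO path here are assumptions you should adjust to your setup:

```shell
# Create and boot a new VM from a local Ubuntu ISO.
# --network network=default attaches it to the NAT network (virbr0).
sudo virt-install \
  --name ubuntu-vm \
  --memory 2048 \
  --vcpus 2 \
  --disk size=20 \
  --os-variant ubuntu20.04 \
  --network network=default \
  --graphics vnc \
  --cdrom ~/Downloads/ubuntu-20.04-desktop-amd64.iso
```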
Creating a VM using Graphical Interface
The graphical interface is more or less like other VM offerings, where we manually select the specifications and the ISO file. To start the KVM graphical interface, execute the below command.
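Assuming virt-manager was installed alongside the KVM packages earlier:

```shell
# Launch virt-manager, the libvirt desktop GUI.
virt-manager
```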
After this, a GUI application will open up. You can begin by creating a new VM, selecting the local ISO file, the hardware specs, and the network interface, and the VM installation process will start as shown in the image below.
Afterwards, just continue with the steps of a usual Ubuntu installation: setting the username, PC name, password, locale, etc. After boot-up, you may need to set the screen resolution. Moreover, this VM will be part of a virtual network (virbr0) and will have a NAT IP address.
Now you can use the GUI-based OS interface to access this server, or you can just SSH onto it from your host computer. There is another way to put this VM on your actual LAN using bridge mode, so that your VM gets its IP address directly from your router; we will discuss that in the next post.
To understand the correlation between all these terms, we need to go back a little in time, to the 1960s. In the early 1960s, AT&T Bell Laboratories, MIT, and General Electric started developing a time-sharing operating system called MULTICS (Multiplexed Information and Computing Service). The collaboration went on till 1969, when things didn't work out and Bell Labs dropped out of the project.
Interestingly, two of the scientists, Ken Thompson and Dennis Ritchie (creator of the C language), continued the work. They built UNICS (Uniplexed Information and Computing Service) on a machine called the PDP-7. This was later renamed UNIX (but its license fees were not affordable for everyone, especially students).
So where did Linux come into the picture?
There was a guy named Linus Torvalds who was pursuing his master's at a Finnish university. He wanted to buy a UNIX license but, luckily, didn't have enough money (which turned out well for us). So he decided to write a UNIX-like clone from scratch, called Linux [“Linux: A Portable Operating System” was the title of his M.Sc. thesis]. JFYI, he later also created Git to manage the Linux kernel sources.
Now it's time to get technical!
Let’s again start from the beginning.
A computer OS is a piece of software that acts as the base of a computer. It does critical tasks like assigning memory and starting applications.
An OS runs on top of an even lower-level program called a kernel. A kernel is written in low-level languages, interacts directly with the hardware, and provides driver support.
Now that you know the base, let’s start with the answer. Unix and Linux are both popular Kernels. Both have their own advantages and fans.
Unfortunately, a kernel cannot act as an OS on its own. It misses essential features, such as putting an image on a screen, copying data to the hard drive, and basic software like a text editor. That is where GNU comes in. Before Linus Torvalds wrote an awesome piece of open-source software called Linux, Richard Stallman had written a suite of tools called GNU that needed a kernel to run on. The combination of GNU and Linux has become ubiquitous to the point that the duo is often referred to as just “Linux”.
Even plain GNU/Linux isn't enough to run a modern PC on its own, so programmers go ahead and write their own versions of GNU/Linux. These different versions, called distros or distributions, differ in their base software. Two distros can have different package managers, text editors, terminal applications, calculator apps, etc.
Each Linux Distribution is typically tailored for specific target systems, such as servers, desktops, mobile devices, embedded devices, etc.
There are mainly three popular parent Linux distros
(one more honorable mention being Android)
All the software present in a Linux distro is managed by a package management system. This manager keeps a log of all the programs installed on your system, keeps a listing of the programs available but not installed, and easily identifies upgradable programs. A Linux user MUST be familiar with the package manager in order to install software.
Below is the list of package management systems of popular Linux distros:
– Debian/Ubuntu: apt (on top of dpkg)
– Fedora/RHEL: dnf (formerly yum)
– Arch: pacman
– openSUSE: zypper
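On a Debian/Ubuntu system, for instance, the day-to-day apt workflow looks like this (htop is just an example package):

```shell
sudo apt update          # refresh the list of available packages
apt list --upgradable    # identify programs that can be upgraded
sudo apt install htop    # install a package
sudo apt remove htop     # uninstall it again
```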
Now, on top of a Linux distro run certain programs called desktop environments. These DEs are used to change the look and feel of the distro. Most DEs can run on the majority of distros, so the choice is yours when customising your desktop. Popular DEs include GNOME, KDE Plasma, Xfce, and Cinnamon.
Wrapping it all up, plus a bonus point:
– Kernel interacts with hardware
– Linux distro adds software(using a package manager) on top of the kernel
– Linux flavors add more features as per their unique use case
– Desktop Environment gives GUI interface
Most of you must have logged onto servers using the SSH protocol and verified yourself with a password. Everything seems fine, but don't you sometimes feel a bit frustrated at having to enter the password every time? Besides, a password is not the best option in terms of security (storing a password in scripts that auto-login to a server is not a good idea). That's where the concept of SSH keys comes into the picture.
SSH keys are one of the many ways of authenticating when logging into a remote server over the internet. They work on the principle of asymmetric cryptography: the client and the server hold different keys, and authentication succeeds as long as the two keys fit together, since both are derived from the same mathematical relationship. Now we will see how to use SSH keys as a method of authentication.
STEP 1: Generate an SSH key pair
ssh-keygen -t rsa
This command will generate 2 keys under a hidden folder named .ssh/ in your home directory. Before generating new keys, it's best to check if any previous keys are present (ls ~/.ssh).
The 2 generated keys are as follows :
PUBLIC KEY (id_rsa.pub): This key is given to the system (server) to which we are trying to connect.
PRIVATE KEY (id_rsa): This key is stored on the system from which we are trying to connect.
STEP 2: Upload the Public key on Server
Now you need to upload the public key to the server your client will connect to. For example, while configuring SSH keys on GitHub, we paste the public key into GitHub's SSH keys settings.
ssh-copy-id uses the SSH protocol to connect to the target host and upload the user's public key. The command edits the authorized_keys file on the server, creating the .ssh directory and the authorized_keys file if they don't exist, effectively copying the public key to the server.
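A typical invocation (the username and address here are placeholders; substitute your server's details):

```shell
# Upload the default public key to the server.
ssh-copy-id root@172.20.10.2

# Manual equivalent if ssh-copy-id is unavailable on your machine:
cat ~/.ssh/id_rsa.pub | ssh root@172.20.10.2 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'
```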
STEP 3: Connecting to the Server
When the client tries to connect to the server, the below sequence of operations takes place:
– The client tells the server which key pair it would like to authenticate with.
– The server looks up the matching public key in its authorized_keys file and issues a challenge: a random value encrypted with that public key.
– The client proves it holds the private key by decrypting the challenge and sending back a response.
– The server verifies the response, and the client is authenticated without a password ever crossing the wire.
This creates an authentication mechanism based on “something you have” (the private key file) as opposed to “something you know” (a password or phrase). The best authentication mechanisms contain a component of both, which is why ssh-keygen prompts you for a passphrase to encrypt the private key.
NOTE: After the client is authenticated by the server, an SSH tunnel is established. The data sent over SSH is encrypted with a session key (shared between client and server after establishing the connection). The session key uses symmetric cryptography.
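To get a feel for the symmetric part, here is a small illustration with OpenSSL: the same secret both encrypts and decrypts, just like an SSH session key. The secret and message are made up for the demo; this is not what SSH runs internally, only the same principle.

```shell
# Encrypt a message with a shared secret...
echo 'hello over ssh' | openssl enc -aes-256-cbc -pbkdf2 -pass pass:sessionkey -base64 > msg.enc

# ...and decrypt it with the very same secret.
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:sessionkey -base64 < msg.enc
```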
We have all logged onto servers many times via the ssh command in a terminal or, if you are a Windows user, via PuTTY (to log in to cloud servers on AWS, for example). But do we know how exactly the SSH protocol works?
SSH stands for Secure Shell, a secure way of connecting to a server over the internet. SSH is widely used by network administrators for managing systems and applications remotely, allowing them to log into another computer over a network, execute commands, and move files from one computer to another.
SSH works on a client-server model: the client is where the session is displayed, and the server is where the session runs. SSH by default runs on TCP port 22.
The most basic use of ssh is ssh username@server
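For example, with the username and server address discussed below:

```shell
# Connect as root to a host on the local network.
ssh root@172.20.10.2
```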
This command will cause the client to connect to the server (172.20.10.2) with the given username (root). On a first-time connection, since there has been no prior contact with the host, the user is shown the remote host's public key fingerprint and prompted to continue:
The authenticity of host '172.20.10.2' cannot be established.
DSA key fingerprint is 01:23:45:67:89:ab:cd:ef:ff:fe:dc:ba:98:76:54:32:10.
Are you sure you want to continue connecting (yes/no)?
Answering “yes” to the prompt causes the session to continue, and the host key is stored in the local system's known_hosts file. This is a hidden file, stored by default at ~/.ssh/known_hosts in the user's home directory. Once the host key has been stored in the known_hosts file, the client system can connect directly to that server again without any approvals: the host key authenticates the connection. Afterwards, you will be prompted to enter the password, and a secure connection will be established.
The known_hosts file can sometimes be exploited by hackers. Also, putting a username and password in automated scripts can put your server at risk, as anyone with access to the source code can view those details.