
Latest News From Blog


Top Five Free AI Content Generators: Free AI Content Generators 2023

Content is the soul of a website, so a website has little value without it. Writing content takes a lot of research, and there is a lot to think about. Whether you publish on WordPress or Blogspot, you know how hard it is to write content. But what if that content could be generated automatically? That is exactly what this article is about: five AI content generators that are completely free to use.

What Is AI-Generated Content?

Before looking at AI content, we need to know what AI is. AI stands for artificial intelligence. Many people have at least a rough idea of what artificial intelligence is, but for those who don't: when a machine such as a computer can make decisions by itself, that is artificial intelligence. AI content, then, is content that such a tool writes for you. You give it a few hints, and it writes the entire piece automatically within seconds. That output is called AI-generated content.

Is AI-Generated Content Unique?

Yes, of course. Because AI content generators produce the text entirely by themselves, based on the topic you give them, the content you receive is unique.

How Does an AI Content Generator Work?

You first give the generator a few hints about the topic you want to write about. It might ask you for a headline and some keywords on the topic, and from those it will generate unique content for you in a few seconds.

Best Free AI Content Generator Tools

There are many AI generator tools. Today I will talk about five content generators that are better than the others and easy to use, and that will give you much better content than any other content generator.

1. Simplified: Content Generator Tool

Simplified is one of the best content generator tools, and you'll find a lot of free tools here. You can generate up to five thousand words with Simplified, which is huge, and a lot of other features are available for free as well.

2. Ryter: An All-in-One Content Writer

Ryter is a next-level content generator tool because it understands the language model deeply, which is why it can generate such good content. You can use all of its features for free, but you can only generate 5,000 words per month.

3. Copy AI: Free AI Content Generator

Copy AI is one of the best content generators of them all because it is built on world-class AI technology, so it can generate creative, high-quality content very fast. The free plan lets you generate 100 pieces of content in the first month.

4. ContentBot: Advanced AI Writer

ContentBot can generate content from short hints or keywords, and it is a very powerful content generator tool. You can generate 500 pieces of content and 1,500 long-form editor words every month, but it is a little harder to use.

5. Smart Copy by Unbounce

Smart Copy is one of the more advanced AI writing tools. You can use all of its features completely free, but you can generate only 5 articles per day.

So those are the top five AI content generator tools. You can use a lot of free features without purchasing anything, although there are a few limitations, so make the most of the free tiers.


How to Use Ansible to Install and Set Up Docker on Ubuntu 22.04

Introduction

Server automation now plays an essential role in systems administration, due to the disposable nature of modern application environments. Configuration management tools such as Ansible are typically used to streamline the process of automating server setup by establishing standard procedures for new servers while also reducing human error associated with manual setups.

Ansible offers a simple architecture that doesn’t require special software to be installed on nodes. It also provides a robust set of features and built-in modules which facilitate writing automation scripts.

This guide explains how to use Ansible to automate the steps contained in our guide on How To Install and Use Docker on Ubuntu 22.04. Docker is an application that simplifies the process of managing containers, resource-isolated processes that behave in a similar way to virtual machines, but are more portable, more resource-friendly, and depend more heavily on the host operating system.

Prerequisites

In order to execute the automated setup provided by the playbook in this guide, you’ll need:

One Ansible control node: an Ubuntu 22.04 machine with Ansible installed and configured to connect to your Ansible hosts using SSH keys. Make sure the control node has a regular user with sudo permissions and a firewall enabled, as explained in our Initial Server Setup guide. To set up Ansible, please follow our guide on How to Install and Configure Ansible on Ubuntu 22.04.

One or more Ansible hosts: one or more remote Ubuntu 22.04 servers previously set up following the guide on How to Use Ansible to Automate Initial Server Setup on Ubuntu 22.04.

Before proceeding, you first need to make sure your Ansible control node is able to connect and execute commands on your Ansible host(s). For a connection test, check Step 3 of How to Install and Configure Ansible on Ubuntu 22.04.
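If you want a quick way to run that connection test from the command line, the ad-hoc commands below are a minimal sketch and not part of the original guide; the inventory file name inventory is an assumption, so substitute the path to your own inventory file:

# Confirm the control node can reach every host in the inventory.
ansible all -i inventory -m ping

# Optionally confirm that privilege escalation works too (prompts for the sudo password).
ansible all -i inventory -m shell -a "whoami" --become --ask-become-pass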
What Does this Playbook Do?

This Ansible playbook provides an alternative to manually running through the procedure outlined in our guide on How To Install and Use Docker on Ubuntu 22.04. Set up your playbook once, and use it for every installation after.

Running this playbook will perform the following actions on your Ansible hosts:

Install aptitude, which is preferred by Ansible as an alternative to the apt package manager.
Install the required system packages.
Install the Docker GPG APT key.
Add the official Docker repository to the apt sources.
Install Docker.
Install the Python Docker module via pip.
Pull the default image specified by default_container_image from Docker Hub.
Create the number of containers defined by the container_count variable, each using the image defined by default_container_image, and execute the command defined in default_container_command in each new container.

Once the playbook has finished running, you will have a number of containers created based on the options you defined within your configuration variables.

To begin, log into a sudo enabled user on your Ansible control node server.

Step 1 — Preparing your Playbook

The playbook.yml file is where all your tasks are defined. A task is the smallest unit of action you can automate using an Ansible playbook. But first, create your playbook file using your preferred text editor:

nano playbook.yml

This will open an empty YAML file. Before diving into adding tasks to your playbook, start by adding the following:

playbook.yml
---
- hosts: all
  become: true
  vars:
    container_count: 4
    default_container_name: docker
    default_container_image: ubuntu
    default_container_command: sleep 1d

Almost every playbook you come across will begin with declarations similar to this. hosts declares which servers the Ansible control node will target with this playbook. become states whether all commands will be done with escalated root privileges.

vars allows you to store data in variables. If you decide to change these in the future, you will only have to edit these single lines in your file. Here’s a brief explanation of each variable:

container_count: The number of containers to create.
default_container_name: Default container name.
default_container_image: Default Docker image to be used when creating containers.
default_container_command: Default command to run on new containers.

Note: If you want to see the playbook file in its final finished state, jump to Step 5. YAML files can be particular with their indentation structure, so you may want to double-check your playbook once you’ve added all your tasks.

Step 2 — Adding Packages Installation Tasks to your Playbook

By default, tasks are executed synchronously by Ansible in order from top to bottom in your playbook. This means task ordering is important, and you can safely assume one task will finish executing before the next task begins.

All tasks in this playbook can stand alone and be re-used in your other playbooks.

Add your first tasks of installing aptitude, a tool for interfacing with the Linux package manager, and installing the required system packages. Ansible will ensure these packages are always installed on your server:

playbook.yml
  tasks:
    - name: Install aptitude
      apt:
        name: aptitude
        state: latest
        update_cache: true

    - name: Install required system packages
      apt:
        pkg:
          - apt-transport-https
          - ca-certificates
          - curl
          - software-properties-common
          - python3-pip
          - virtualenv
          - python3-setuptools
        state: latest
        update_cache: true

Here, you’re using the apt Ansible built-in module to direct Ansible to install your packages. Modules in Ansible are shortcuts to execute operations that you would otherwise have to run as raw bash commands. Ansible safely falls back on apt for installing packages if aptitude is not available, but Ansible has historically preferred aptitude.

You can add or remove packages to your liking. This ensures the packages are not only present but also at their latest version, and that this is done after an update with apt is called.

Step 3 — Adding Docker Installation Tasks to your Playbook

Your task will install the latest version of Docker from the official repository. The Docker GPG key is added to verify the download, the official repository is added as a new package source, and Docker will be installed. Additionally, the Docker module for Python will be installed as well:

playbook.yml
    - name: Add Docker GPG apt Key
      apt_key:
        url: https://download.docker.com/linux/ubuntu/gpg
        state: present

    - name: Add Docker Repository
      apt_repository:
        repo: deb https://download.docker.com/linux/ubuntu jammy stable
        state: present

    - name: Update apt and install docker-ce
      apt:
        name: docker-ce
        state: latest
        update_cache: true

    - name: Install Docker Module for Python
      pip:
        name: docker

You’ll see that the apt_key and apt_repository built-in Ansible modules are first pointed at the correct URLs, then tasked to ensure they are present. This allows installation of the latest version of Docker, along with using pip to install the Docker module for Python.
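If you'd like to confirm that everything you've added so far is valid before continuing, you can ask Ansible to parse the playbook without running it. This check is an optional aside rather than part of the original guide:

# Check the playbook for YAML/Ansible syntax errors without executing any tasks.
ansible-playbook playbook.yml --syntax-check

# List the tasks Ansible has parsed so far, to confirm the structure is what you expect.
ansible-playbook playbook.yml --list-tasks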
Step 4 — Adding Docker Image and Container Tasks to your Playbook

The actual creation of your Docker containers starts here with the pulling of your desired Docker image. By default, these images come from the official Docker Hub. Using this image, containers will be created according to the specifications laid out by the variables declared at the top of your playbook:

playbook.yml
    - name: Pull default Docker image
      community.docker.docker_image:
        name: "{{ default_container_image }}"
        source: pull

    - name: Create default containers
      community.docker.docker_container:
        name: "{{ default_container_name }}{{ item }}"
        image: "{{ default_container_image }}"
        command: "{{ default_container_command }}"
        state: present
      with_sequence: count={{ container_count }}

docker_image is used to pull the Docker image you want to use as the base for your containers. docker_container allows you to specify the specifics of the containers you create, along with the command you want to pass them.

with_sequence is the Ansible way of creating a loop, and in this case, it will loop the creation of your containers according to the count you specified. This is a basic count loop, so the item variable here provides a number representing the current loop iteration. This number is used here to name your containers.

Step 5 — Reviewing your Complete Playbook

Your playbook should look roughly like the following, with minor differences depending on your customizations:

playbook.yml
---
- hosts: all
  become: true
  vars:
    container_count: 4
    default_container_name: docker
    default_container_image: ubuntu
    default_container_command: sleep 1d

  tasks:
    - name: Install aptitude
      apt:
        name: aptitude
        state: latest
        update_cache: true

    - name: Install required system packages
      apt:
        pkg:
          - apt-transport-https
          - ca-certificates
          - curl
          - software-properties-common
          - python3-pip
          - virtualenv
          - python3-setuptools
        state: latest
        update_cache: true

    - name: Add Docker GPG apt Key
      apt_key:
        url: https://download.docker.com/linux/ubuntu/gpg
        state: present

    - name: Add Docker Repository
      apt_repository:
        repo: deb https://download.docker.com/linux/ubuntu jammy stable
        state: present

    - name: Update apt and install docker-ce
      apt:
        name: docker-ce
        state: latest
        update_cache: true

    - name: Install Docker Module for Python
      pip:
        name: docker

    - name: Pull default Docker image
      community.docker.docker_image:
        name: "{{ default_container_image }}"
        source: pull

    - name: Create default containers
      community.docker.docker_container:
        name: "{{ default_container_name }}{{ item }}"
        image: "{{ default_container_image }}"
        command: "{{ default_container_command }}"
        state: present
      with_sequence: count={{ container_count }}

Feel free to modify this playbook to best suit your individual needs within your own workflow. For example, you could use the docker_image module to push images to Docker Hub or the docker_container module to set up container networks.

Note: This is a gentle reminder to be mindful of your indentations. If you run into an error, this is very likely the culprit. YAML suggests using 2 spaces as an indent, as was done in this example.

Once you’re satisfied with your playbook, you can exit your text editor and save.
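Optionally, before applying the playbook for real in the next step, you can preview the changes Ansible would make using check mode. This is a hedged sketch rather than part of the original procedure; note that some modules (such as those pulling images) may not fully simulate their work in check mode:

# Dry run: report what would change on the hosts, with a diff, without changing anything.
ansible-playbook playbook.yml --check --diff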
Step 6 — Running your Playbook

You’re now ready to run this playbook on one or more servers. Most playbooks are configured to be executed on every server in your inventory by default, but you’ll specify your server this time.

To execute the playbook only on server1, connecting as sammy, you can use the following command:

ansible-playbook playbook.yml -l server1 -u sammy

The -l flag specifies your server and the -u flag specifies which user to log into on the remote server. You will get output similar to this:

Output
. . .
changed: [server1]

TASK [Create default containers] *********************************************
changed: [server1] => (item=1)
changed: [server1] => (item=2)
changed: [server1] => (item=3)
changed: [server1] => (item=4)

PLAY RECAP *******************************************************************
server1 : ok=9 changed=8 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

Note: For more information on how to run Ansible playbooks, check our Ansible Cheat Sheet Guide.

This indicates your server setup is complete! Your output doesn’t have to be exactly the same, but it is important that you have zero failures.

When the playbook is finished running, log in via SSH to the server provisioned by Ansible to check if the containers were successfully created.

Log in to the remote server with:

ssh sammy@your_remote_server_ip

And list your Docker containers on the remote server:

sudo docker ps -a

You should see output similar to this:

Output
CONTAINER ID   IMAGE    COMMAND      CREATED         STATUS    PORTS   NAMES
a3fe9bfb89cf   ubuntu   "sleep 1d"   5 minutes ago   Created           docker4
8799c16cde1e   ubuntu   "sleep 1d"   5 minutes ago   Created           docker3
ad0c2123b183   ubuntu   "sleep 1d"   5 minutes ago   Created           docker2
b9350916ffd8   ubuntu   "sleep 1d"   5 minutes ago   Created           docker1

This means the containers defined in the playbook were created successfully. Since this was the last task in the playbook, it also confirms that the playbook was fully executed on this server.

Conclusion

Automating your infrastructure setup can not only save you time, but it also helps to ensure that your servers will follow a standard configuration that can be customized to your needs. With the distributed nature of modern applications and the need for consistency between different staging environments, automation like this has become a central component in many teams’ development processes.

In this guide, you demonstrated how to use Ansible to automate the process of installing and setting up Docker on a remote server. Because each individual typically has different needs when working with containers, we encourage you to check out the official Ansible documentation for more information and use cases of the docker_container Ansible module.

If you’d like to include other tasks in this playbook to further customize your initial server setup, please refer to our introductory Ansible guide Configuration Management 101: Writing Ansible Playbooks.


Initial Server Setup with Ubuntu 20.04

Introduction

When you first create a new Ubuntu 20.04 server, you should perform some important configuration steps as part of the initial setup. These steps will increase the security and usability of your server, and will give you a solid foundation for subsequent actions.

Step 1 — Logging in as root

To log into your server, you will need to know your server’s public IP address. You will also need the password or — if you installed an SSH key for authentication — the private key for the root user’s account. If you have not already logged into your server, you may want to follow our guide on how to Connect to Droplets with SSH, which covers this process in detail.

If you are not already connected to your server, log in now as the root user using the following command (substitute the highlighted portion of the command with your server’s public IP address):

ssh root@your_server_ip

Accept the warning about host authenticity if it appears. If you are using password authentication, provide your root password to log in. If you are using an SSH key that is passphrase protected, you may be prompted to enter the passphrase the first time you use the key each session. If this is your first time logging into the server with a password, you may also be prompted to change the root password.

About root

The root user is the administrative user in a Linux environment that has very broad privileges. Because of the heightened privileges of the root account, you are discouraged from using it on a regular basis. This is because the root account is able to make very destructive changes, even by accident.

The next step is setting up a new user account with reduced privileges for day-to-day use. Later, we’ll show you how to temporarily gain increased privileges for the times when you need them.

Step 2 — Creating a New User

Once you are logged in as root, you’ll be able to add the new user account. In the future, we’ll log in with this new account instead of root.

This example creates a new user called sammy, but you should replace that with a username that you like:

adduser sammy

You will be asked a few questions, starting with the account password.

Enter a strong password and, optionally, fill in any of the additional information if you would like. This is not required and you can just hit ENTER in any field you wish to skip.

Step 3 — Granting Administrative Privileges

Now we have a new user account with regular account privileges. However, we may sometimes need to do administrative tasks.

To avoid having to log out of our normal user and log back in as the root account, we can set up what is known as superuser or root privileges for our normal account. This will allow our normal user to run commands with administrative privileges by putting the word sudo before the command.

To add these privileges to our new user, we need to add the user to the sudo group. By default, on Ubuntu 20.04, users who are members of the sudo group are allowed to use the sudo command.

As root, run this command to add your new user to the sudo group (substitute the highlighted username with your new user):

usermod -aG sudo sammy

Now, when logged in as your regular user, you can type sudo before commands to run them with superuser privileges.
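As an optional check that is not part of the original steps, you can confirm the group change took effect while you are still logged in as root. The username sammy is the example used above:

# List the groups the new user belongs to; "sudo" should appear in the output.
groups sammy

# Alternatively, show which sudo rules currently apply to that user.
sudo -l -U sammy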
Step 4 — Setting Up a Basic Firewall

Ubuntu 20.04 servers can use the UFW firewall to make sure only connections to certain services are allowed. We can set up a basic firewall using this application.

Note: If your servers are running on DigitalOcean, you can optionally use DigitalOcean Cloud Firewalls instead of the UFW firewall. We recommend using only one firewall at a time to avoid conflicting rules that may be difficult to debug.

Applications can register their profiles with UFW upon installation. These profiles allow UFW to manage these applications by name. OpenSSH, the service allowing us to connect to our server now, has a profile registered with UFW.

You can see this by typing:

ufw app list

Output
Available applications:
  OpenSSH

We need to make sure that the firewall allows SSH connections so that we can log back in next time. We can allow these connections by typing:

ufw allow OpenSSH

Afterwards, we can enable the firewall by typing:

ufw enable

Type y and press ENTER to proceed. You can see that SSH connections are still allowed by typing:

ufw status

Output
Status: active

To                         Action      From
--                         ------      ----
OpenSSH                    ALLOW       Anywhere
OpenSSH (v6)               ALLOW       Anywhere (v6)

As the firewall is currently blocking all connections except for SSH, if you install and configure additional services, you will need to adjust the firewall settings to allow traffic in. You can learn some common UFW operations in our UFW Essentials guide.
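As an illustration rather than a required step, if you later install a web server you would open its ports before clients can reach it; the port numbers below assume a standard HTTP/HTTPS service, so adjust them for whatever you actually run:

# Allow standard web traffic through the firewall (only needed once a web server is installed).
ufw allow 80/tcp
ufw allow 443/tcp

# Review the resulting rule set.
ufw status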
Step 5 — Enabling External Access for Your Regular User

Now that we have a regular user for daily use, we need to make sure we can SSH into the account directly.

Note: Until verifying that you can log in and use sudo with your new user, we recommend staying logged in as root. This way, if you have problems, you can troubleshoot and make any necessary changes as root. If you are using a DigitalOcean Droplet and experience problems with your root SSH connection, you can regain access to Droplets using the Recovery Console.

The process for configuring SSH access for your new user depends on whether your server’s root account uses a password or SSH keys for authentication.

If the root Account Uses Password Authentication

If you logged in to your root account using a password, then password authentication is enabled for SSH. You can SSH to your new user account by opening up a new terminal session and using SSH with your new username:

ssh sammy@your_server_ip

After entering your regular user’s password, you will be logged in. Remember, if you need to run a command with administrative privileges, type sudo before it like this:

sudo command_to_run

You will be prompted for your regular user password when using sudo for the first time each session (and periodically afterwards).

To enhance your server’s security, we strongly recommend setting up SSH keys instead of using password authentication. Follow our guide on setting up SSH keys on Ubuntu 20.04 to learn how to configure key-based authentication.

If the root Account Uses SSH Key Authentication

If you logged in to your root account using SSH keys, then password authentication is disabled for SSH. You will need to add a copy of your local public key to the new user’s ~/.ssh/authorized_keys file to log in successfully.

Since your public key is already in the root account’s ~/.ssh/authorized_keys file on the server, we can copy that file and directory structure to our new user account in our existing session.

The simplest way to copy the files with the correct ownership and permissions is with the rsync command. This will copy the root user’s .ssh directory, preserve the permissions, and modify the file owners, all in a single command. Make sure to change the highlighted portions of the command below to match your regular user’s name.

Note: The rsync command treats sources and destinations that end with a trailing slash differently than those without a trailing slash. When using rsync below, be sure that the source directory (~/.ssh) does not include a trailing slash (check to make sure you are not using ~/.ssh/). If you accidentally add a trailing slash to the command, rsync will copy the contents of the root account’s ~/.ssh directory to the sudo user’s home directory instead of copying the entire ~/.ssh directory structure. The files will be in the wrong location and SSH will not be able to find and use them.

rsync --archive --chown=sammy:sammy ~/.ssh /home/sammy

Now, open up a new terminal session on your local machine, and use SSH with your new username:

ssh sammy@your_server_ip

You should be logged in to the new user account without using a password. Remember, if you need to run a command with administrative privileges, type sudo before it like this:

sudo command_to_run

You will be prompted for your regular user password when using sudo for the first time each session (and periodically afterward).

Where To Go From Here?

At this point, you have a solid foundation for your server. You can install any of the software you need on your server now.


How To Install Node.js on Ubuntu 20.04

Introduction

Node.js is a JavaScript runtime for server-side programming. It allows developers to create scalable backend functionality using JavaScript, a language many are already familiar with from browser-based web development.

In this guide, we will show you three different ways of getting Node.js installed on an Ubuntu 20.04 server:

using apt to install the nodejs package from Ubuntu’s default software repository
using apt with an alternate PPA software repository to install specific versions of the nodejs package
installing nvm, the Node Version Manager, and using it to install and manage multiple versions of Node.js

For many users, using apt with the default repo will be sufficient. If you need specific newer or legacy versions of Node, you should use the PPA repository. If you are actively developing Node applications and need to switch between node versions frequently, choose the nvm method.

Prerequisites

To follow this guide, you will need an Ubuntu 20.04 server set up. Before you begin, you should have a non-root user account with sudo privileges set up on your system. You can learn how to do this by following the Ubuntu 20.04 initial server setup tutorial.

Option 1 — Installing Node.js with Apt from the Default Repositories

Ubuntu 20.04 contains a version of Node.js in its default repositories that can be used to provide a consistent experience across multiple systems. At the time of writing, the version in the repositories is 10.19. This will not be the latest version, but it should be stable and sufficient for quick experimentation with the language.

Warning: the version of Node.js included with Ubuntu 20.04, version 10.19, is now unsupported and unmaintained. You should not use this version in production, and should refer to one of the other sections in this tutorial to install a more recent version of Node.

To get this version, you can use the apt package manager. Refresh your local package index first:

sudo apt update

Then install Node.js:

sudo apt install nodejs

Check that the install was successful by querying node for its version number:

node -v

Output
v10.19.0

If the package in the repositories suits your needs, this is all you need to do to get set up with Node.js. In most cases, you’ll also want to install npm, the Node.js package manager. You can do this by installing the npm package with apt:

sudo apt install npm

This allows you to install modules and packages to use with Node.js.

At this point, you have successfully installed Node.js and npm using apt and the default Ubuntu software repositories. The next section will show how to use an alternate repository to install different versions of Node.js.
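As an optional check that is not part of the original steps, you can confirm that both binaries work by printing npm's version and evaluating a one-line script:

# Confirm npm is available alongside node.
npm -v

# Run a one-line JavaScript snippet to confirm the runtime itself works.
node -e "console.log('Node.js is working')"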
Option 2 — Installing Node.js with Apt Using a NodeSource PPA

To install a different version of Node.js, you can use a PPA (personal package archive) maintained by NodeSource. These PPAs have more versions of Node.js available than the official Ubuntu repositories. Node.js v16 and v18 are available as of the time of writing.

First, install the PPA to get access to its packages. From your home directory, use curl to retrieve the installation script for your preferred version, making sure to replace 16.x with your preferred version string (if different):

cd ~
curl -sL https://deb.nodesource.com/setup_16.x -o /tmp/nodesource_setup.sh

Refer to the NodeSource documentation for more information on the available versions.

Inspect the contents of the downloaded script with nano or your preferred text editor:

nano /tmp/nodesource_setup.sh

When you are satisfied that the script is safe to run, exit your editor. Then run the script with sudo:

sudo bash /tmp/nodesource_setup.sh

The PPA will be added to your configuration and your local package cache will be updated automatically. You can now install the Node.js package in the same way you did in the previous section:

sudo apt install nodejs

Verify that you’ve installed the new version by running node with the -v version flag:

node -v

Output
v16.19.0

The NodeSource nodejs package contains both the node binary and npm, so you don’t need to install npm separately.

At this point, you have successfully installed Node.js and npm using apt and the NodeSource PPA. The next section will show how to use the Node Version Manager to install and manage multiple versions of Node.js.
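If you want to confirm that the installed package really came from the NodeSource repository rather than Ubuntu's default one, apt can show its origin. This optional check is not part of the original guide:

# Show the installed version of nodejs and which repository it was installed from.
apt-cache policy nodejs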
Option 3 — Installing Node Using the Node Version Manager

Another way of installing Node.js that is particularly flexible is to use nvm, the Node Version Manager. This piece of software allows you to install and maintain many different independent versions of Node.js, and their associated Node packages, at the same time.

To install NVM on your Ubuntu 20.04 machine, visit the project’s GitHub page. Copy the curl command from the README file that displays on the main page. This will get you the most recent version of the installation script.

Before piping the command through to bash, it is always a good idea to audit the script to make sure it isn’t doing anything you don’t agree with. You can do that by removing the | bash segment at the end of the curl command:

curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.3/install.sh

Review the script and make sure you are comfortable with the changes it is making. When you are satisfied, run the command again with | bash appended at the end. The URL you use will change depending on the latest version of nvm, but as of right now, the script can be downloaded and executed with the following:

curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.3/install.sh | bash

This will install the nvm script to your user account. To use it, you must first source your .bashrc file:

source ~/.bashrc

Now, you can ask NVM which versions of Node are available:

nvm list-remote

Output
. . .
        v18.0.0
        v18.1.0
        v18.2.0
        v18.3.0
        v18.4.0
        v18.5.0
        v18.6.0
        v18.7.0
        v18.8.0
        v18.9.0
        v18.9.1
       v18.10.0
       v18.11.0
       v18.12.0   (LTS: Hydrogen)
       v18.12.1   (LTS: Hydrogen)
       v18.13.0   (Latest LTS: Hydrogen)
        v19.0.0
        v19.0.1
        v19.1.0
        v19.2.0
        v19.3.0
        v19.4.0

It’s a very long list. You can install a version of Node by writing in any of the release versions listed. For instance, to get version v14.10.0, you can run:

nvm install v14.10.0

You can view the different versions you have installed by listing them:

nvm list

Output
->     v14.10.0
       v14.21.2
default -> v14.10.0
iojs -> N/A (default)
unstable -> N/A (default)
node -> stable (-> v14.21.2) (default)
stable -> 14.21 (-> v14.21.2) (default)
. . .

This shows the currently active version on the first line (-> v14.10.0), followed by some named aliases and the versions that those aliases point to.

Note: if you also have a version of Node.js installed through apt, you may receive a system entry here. You can always activate the system-installed version of Node using nvm use system.

Additionally, there are aliases for the various long-term support (or LTS) releases of Node:

Output
lts/* -> lts/hydrogen (-> N/A)
lts/argon -> v4.9.1 (-> N/A)
lts/boron -> v6.17.1 (-> N/A)
lts/carbon -> v8.17.0 (-> N/A)
lts/dubnium -> v10.24.1 (-> N/A)
lts/erbium -> v12.22.12 (-> N/A)
lts/fermium -> v14.21.2
lts/gallium -> v16.19.0 (-> N/A)
lts/hydrogen -> v18.13.0 (-> N/A)

You can install a release based on these aliases as well. For instance, to install the latest long-term support version, hydrogen, run the following:

nvm install lts/hydrogen

Output
Downloading and installing node v18.13.0...
. . .
Now using node v18.13.0 (npm v8.19.3)

You can switch between installed versions with nvm use:

nvm use v14.10.0

Output
Now using node v14.10.0 (npm v6.14.8)

You can verify that the install was successful using the same technique from the other sections:

node -v

Output
v14.10.0

The correct version of Node is installed on your machine as expected. A compatible version of npm is also available.

Removing Node.js

You can uninstall Node.js using apt or nvm, depending on how it was installed. To remove the version from the system repositories, use apt remove:

sudo apt remove nodejs

By default, apt remove retains any local configuration files that were created since installation. If you don’t want to save the configuration files for later use, use apt purge:

sudo apt purge nodejs

To uninstall a version of Node.js that you installed using nvm, first determine whether it is the current active version:

nvm current

If the version you are targeting is not the current active version, you can run:

nvm uninstall node_version

Output
Uninstalled node node_version

This command will uninstall the selected version of Node.js.

If the version you would like to remove is the current active version, you first need to deactivate nvm to enable your changes:

nvm deactivate

Now you can uninstall the current version using the uninstall command used previously. This removes all files associated with the targeted version of Node.js.

Conclusion

There are quite a few ways to get up and running with Node.js on your Ubuntu 20.04 server. Your circumstances will dictate which of the above methods is best for your needs. While using the packaged version in Ubuntu’s repository is one method, using nvm or a NodeSource PPA offers additional flexibility.

For more information on programming with Node.js, please refer to our tutorial series.


How to Set Up SSH Keys on Ubuntu 20.04

Introduction

SSH, or secure shell, is an encrypted protocol used to administer and communicate with servers. When working with an Ubuntu server, chances are you will spend most of your time in a terminal session connected to your server through SSH.

In this guide, we’ll focus on setting up SSH keys for an Ubuntu 20.04 installation. SSH keys provide a secure way of logging into your server and are recommended for all users.

Step 1 — Creating the Key Pair

The first step is to create a key pair on the client machine (usually your computer):

ssh-keygen

By default, recent versions of ssh-keygen will create a 3072-bit RSA key pair, which is secure enough for most use cases (you may optionally pass in the -b 4096 flag to create a larger 4096-bit key).

After entering the command, you should see the following output:

Output
Generating public/private rsa key pair.
Enter file in which to save the key (/your_home/.ssh/id_rsa):

Press ENTER to save the key pair into the .ssh/ subdirectory in your home directory, or specify an alternate path.

If you had previously generated an SSH key pair, you may see the following prompt:

Output
/home/your_home/.ssh/id_rsa already exists.
Overwrite (y/n)?

If you choose to overwrite the key on disk, you will not be able to authenticate using the previous key anymore. Be very careful when selecting yes, as this is a destructive process that cannot be reversed.

You should then see the following prompt:

Output
Enter passphrase (empty for no passphrase):

Here you optionally may enter a secure passphrase, which is highly recommended. A passphrase adds an additional layer of security to prevent unauthorized users from logging in. To learn more about security, consult our tutorial on How To Configure SSH Key-Based Authentication on a Linux Server.

You should then see output similar to the following:

Output
Your identification has been saved in /your_home/.ssh/id_rsa
Your public key has been saved in /your_home/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:/hk7MJ5n5aiqdfTVUZr+2Qt+qCiS7BIm5Iv0dxrc3ks user@host
The key's randomart image is:
+---[RSA 3072]----+
|                .|
|               + |
|              +  |
|        .   o .  |
|o       S . o    |
| + o. .oo. ..  .o|
|o = oooooEo+ ...o|
|.. o *o+=.*+o....|
|    =+=ooB=o.... |
+----[SHA256]-----+

You now have a public and private key that you can use to authenticate. The next step is to place the public key on your server so that you can use SSH-key-based authentication to log in.
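As an optional alternative not covered in the original steps, recent versions of OpenSSH can also generate an Ed25519 key pair, which is shorter and widely recommended. If you choose this, substitute id_ed25519.pub for id_rsa.pub in the rest of this guide; the comment string below is only an example:

# Generate an Ed25519 key pair instead of the default RSA pair.
ssh-keygen -t ed25519 -C "your_email@example.com"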
Step 2 — Copying the Public Key to Your Ubuntu Server

The quickest way to copy your public key to the Ubuntu host is to use a utility called ssh-copy-id. Due to its simplicity, this method is highly recommended if available. If you do not have ssh-copy-id available to you on your client machine, you may use one of the two alternate methods provided in this section (copying via password-based SSH, or manually copying the key).

Copying the Public Key Using ssh-copy-id

The ssh-copy-id tool is included by default in many operating systems, so you may have it available on your local system. For this method to work, you must already have password-based SSH access to your server.

To use the utility, you specify the remote host that you would like to connect to, and the user account that you have password-based SSH access to. This is the account to which your public SSH key will be copied.

The syntax is:

ssh-copy-id username@remote_host

You may see the following message:

Output
The authenticity of host '203.0.113.1 (203.0.113.1)' can't be established.
ECDSA key fingerprint is fd:fd:d4:f9:77:fe:73:84:e1:55:00:ad:d6:6d:22:fe.
Are you sure you want to continue connecting (yes/no)? yes

This means that your local computer does not recognize the remote host. This will happen the first time you connect to a new host. Type “yes” and press ENTER to continue.

Next, the utility will scan your local account for the id_rsa.pub key that we created earlier. When it finds the key, it will prompt you for the password of the remote user’s account:

Output
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
username@203.0.113.1's password:

Type in the password (your typing will not be displayed, for security purposes) and press ENTER. The utility will connect to the account on the remote host using the password you provided. It will then copy the contents of your ~/.ssh/id_rsa.pub key into a file in the remote account’s home ~/.ssh directory called authorized_keys.

You should see the following output:

Output
Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'username@203.0.113.1'"
and check to make sure that only the key(s) you wanted were added.

At this point, your id_rsa.pub key has been uploaded to the remote account. You can continue on to Step 3.

Copying the Public Key Using SSH

If you do not have ssh-copy-id available, but you have password-based SSH access to an account on your server, you can upload your keys using a conventional SSH method.

We can do this by using the cat command to read the contents of the public SSH key on our local computer and piping that through an SSH connection to the remote server.

On the other side, we can make sure that the ~/.ssh directory exists and has the correct permissions under the account we’re using.

We can then output the content we piped over into a file called authorized_keys within this directory. We’ll use the >> redirect symbol to append the content instead of overwriting it. This will let us add keys without destroying previously added keys.

The full command looks like this:

cat ~/.ssh/id_rsa.pub | ssh username@remote_host "mkdir -p ~/.ssh && touch ~/.ssh/authorized_keys && chmod -R go= ~/.ssh && cat >> ~/.ssh/authorized_keys"

You may see the following message:

Output
The authenticity of host '203.0.113.1 (203.0.113.1)' can't be established.
ECDSA key fingerprint is fd:fd:d4:f9:77:fe:73:84:e1:55:00:ad:d6:6d:22:fe.
Are you sure you want to continue connecting (yes/no)? yes

This means that your local computer does not recognize the remote host. This will happen the first time you connect to a new host. Type yes and press ENTER to continue.

Afterwards, you should be prompted to enter the remote user account password:

Output
username@203.0.113.1's password:

After entering your password, the content of your id_rsa.pub key will be copied to the end of the authorized_keys file of the remote user’s account. Continue on to Step 3 if this was successful.
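If you want to double-check that the key really arrived, you can compare fingerprints. This optional check assumes the default id_rsa key name used throughout this guide and will still prompt for the account password:

# Fingerprint of your local public key.
ssh-keygen -lf ~/.ssh/id_rsa.pub

# Fingerprints of the keys now authorized on the server (run over SSH).
ssh username@remote_host "ssh-keygen -lf ~/.ssh/authorized_keys"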
Copying the Public Key Manually

If you do not have password-based SSH access to your server available, you will have to complete the above process manually.

We will manually append the content of your id_rsa.pub file to the ~/.ssh/authorized_keys file on your remote machine.

To display the content of your id_rsa.pub key, type this into your local computer:

cat ~/.ssh/id_rsa.pub

You will see the key’s content, which should look something like this:

Output
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCqql6MzstZYh1TmWWv11q5O3pISj2ZFl9HgH1JLknLLx44+tXfJ7mIrKNxOOwxIxvcBF8PXSYvobFYEZjGIVCEAjrUzLiIxbyCoxVyle7Q+bqgZ8SeeM8wzytsY+dVGcBxF6N4JS+zVk5eMcV385gG3Y6ON3EG112n6d+SMXY0OEBIcO6x+PnUSGHrSgpBgX7Ks1r7xqFa7heJLLt2wWwkARptX7udSq05paBhcpB0pHtA1Rfz3K2B+ZVIpSDfki9UVKzT8JUmwW6NNzSgxUfQHGwnW7kj4jp4AT0VZk3ADw497M2G/12N0PPB5CnhHf7ovgy6nL1ikrygTKRFmNZISvAcywB9GVqNAVE+ZHDSCuURNsAInVzgYo9xgJDW8wUw2o8U77+xiFxgI5QSZX3Iq7YLMgeksaO4rBJEa54k8m5wEiEE1nUhLuJ0X/vh2xPff6SQ1BL/zkOhvJCACK6Vb15mDOeCSq54Cr7kvS46itMosi/uS66+PujOO+xt/2FWYepz6ZlN70bRly57Q06J+ZJoc9FfBCbCyYH7U/ASsmY095ywPsBo1XQ9PqhnN1/YOorJ068foQDNVpm146mUpILVxmq41Cj55YKHEazXGsdBIbXWhcrRf4G2fJLRcGUr9q8/lERo9oxRm5JFX6TCmj6kmiFqv+Ow9gI0x8GvaQ== demo@test

Access your remote host using whichever method you have available.

Once you have access to your account on the remote server, you should make sure the ~/.ssh directory exists. This command will create the directory if necessary, or do nothing if it already exists:

mkdir -p ~/.ssh

Now, you can create or modify the authorized_keys file within this directory. You can add the contents of your id_rsa.pub file to the end of the authorized_keys file, creating it if necessary, using this command:

echo public_key_string >> ~/.ssh/authorized_keys

In the above command, substitute the public_key_string with the output from the cat ~/.ssh/id_rsa.pub command that you executed on your local system. It should start with ssh-rsa AAAA....

Finally, we’ll ensure that the ~/.ssh directory and authorized_keys file have the appropriate permissions set:

chmod -R go= ~/.ssh

This recursively removes all “group” and “other” permissions for the ~/.ssh/ directory.

If you’re using the root account to set up keys for a user account, it’s also important that the ~/.ssh directory belongs to the user and not to root:

chown -R sammy:sammy ~/.ssh

In this tutorial our user is named sammy, but you should substitute the appropriate username into the above command.

We can now attempt passwordless authentication with our Ubuntu server.
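Before testing the login, it can be worth confirming the ownership and permissions one last time, since SSH will ignore an authorized_keys file it considers too permissive. This optional check assumes the sammy user from the example above:

# On the server: the .ssh directory and authorized_keys file should belong to the user,
# with no group or other permissions (typically 700 for the directory, 600 for the file).
ls -ld /home/sammy/.ssh
ls -l /home/sammy/.ssh/authorized_keys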
Step 3 — Authenticating to Your Ubuntu Server Using SSH Keys

If you have successfully completed one of the procedures above, you should be able to log into the remote host without providing the remote account’s password.

The basic process is the same:

ssh username@remote_host

If this is your first time connecting to this host (if you used the last method above), you may see something like this:

Output
The authenticity of host '203.0.113.1 (203.0.113.1)' can't be established.
ECDSA key fingerprint is fd:fd:d4:f9:77:fe:73:84:e1:55:00:ad:d6:6d:22:fe.
Are you sure you want to continue connecting (yes/no)? yes

This means that your local computer does not recognize the remote host. Type “yes” and then press ENTER to continue.

If you did not supply a passphrase for your private key, you will be logged in immediately. If you supplied a passphrase for the private key when you created the key, you will be prompted to enter it now (note that your keystrokes will not display in the terminal session for security). After authenticating, a new shell session should open for you with the configured account on the Ubuntu server.

If key-based authentication was successful, continue on to learn how to further secure your system by disabling password authentication.

Step 4 — Disabling Password Authentication on Your Server

If you were able to log into your account using SSH without a password, you have successfully configured SSH-key-based authentication to your account. However, your password-based authentication mechanism is still active, meaning that your server is still exposed to brute-force attacks.

Before completing the steps in this section, make sure that you either have SSH-key-based authentication configured for the root account on this server, or preferably, that you have SSH-key-based authentication configured for a non-root account on this server with sudo privileges. This step will lock down password-based logins, so ensuring that you will still be able to get administrative access is crucial.

Once you’ve confirmed that your remote account has administrative privileges, log into your remote server with SSH keys, either as root or with an account with sudo privileges. Then, open up the SSH daemon’s configuration file:

sudo nano /etc/ssh/sshd_config

Inside the file, search for a directive called PasswordAuthentication. This line may be commented out with a # at the beginning of the line. Uncomment the line by removing the #, and set the value to no. This will disable your ability to log in via SSH using account passwords:

/etc/ssh/sshd_config
. . .
PasswordAuthentication no
. . .

Save and close the file when you are finished by pressing CTRL+X, then Y to confirm saving the file, and finally ENTER to exit nano. To actually activate these changes, we need to restart the sshd service:

sudo systemctl restart ssh

As a precaution, open up a new terminal window and test that the SSH service is functioning correctly before closing your current session:

ssh username@remote_host

Once you have verified your SSH service is functioning properly, you can safely close all current server sessions.

The SSH daemon on your Ubuntu server now only responds to SSH-key-based authentication. Password-based logins have been disabled.

Conclusion

You should now have SSH-key-based authentication configured on your server, allowing you to sign in without providing an account password.


Network Security Checklist: Protect Your Business

We have compiled a network security checklist for SMBs providing actions that should be taken to secure your business network against internal and external threats.

A Network Security Checklist for SMBs

Attacks can come from all angles, and as your network grows and you add more devices, increase the number of users, and use new applications, the threat surface rapidly grows and your network becomes more complicated to defend. The purpose of this network security checklist is to provide you with tips on the key areas of network security you should be focusing on.

The best place to start is to develop a series of policies that describe the actions that are permitted and not permitted by your employees. If you do not explain how systems must be used and train users on best practices, risky behaviors are likely to continue that will undermine the hard work you put into defending your network.

Develop Policies that Dictate What Is and Is Not Allowed

You should develop an acceptable use policy covering all systems, and an internet access policy stating how the internet can be used and the websites and content that should not be accessed. A policy is also required for email, stating how email must be used and what data is not permitted to be sent via email. You will no doubt have some workers who access your network remotely, so a policy is required covering secure remote access and the use of VPNs. If you allow the use of personal devices, a BYOD policy is a must. You should clearly state the sanctions for violating policies and must ensure that policies are enforced, ideally using automated technical measures.

Secure Servers and Workstations

All servers and workstations must be properly secured. Create a checklist for deploying new servers and workstations to ensure that each is properly secured before being used.

Create a list of all servers and workstations on the network including their name, purpose, IP address, service dates and tag, location, and person responsible for each.

Ensure all devices are running the latest software and are patched as soon as patches are released. Antivirus software should be used on all devices.

Ensure a firewall is used to prevent unauthorized external access, and make sure the default username and password are changed and a strong, unique password is set. Use Deny All for internal and external access and ensure all rules added to the firewall are fully documented. Disable any permissive firewall rules. Consider also using an internal/software/application firewall for added security.

Decide on a remote access solution and only use one.

Purchase a UPS for your servers and ensure the agent on the UPS will safely shut down servers in the event of a power outage.

Monitor server logs for unauthorized access and suspicious activity.

Ensure servers are routinely backed up.
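As a concrete illustration of the deny-all firewall stance described above, the commands below show a minimal sketch using UFW on an Ubuntu server. This is an example only; the allowed ports are assumptions that you should replace with the services your business actually exposes:

# Block everything inbound by default, allow outbound, then open only what you need.
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow OpenSSH      # keep remote administration working
sudo ufw allow 443/tcp      # example: an HTTPS service you actually run
sudo ufw enable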
Secure Network Equipment and Devices

You must ensure your network is secured, along with any devices allowed to connect to the network.

You should only purchase network equipment from authorized resellers and should implement physical security controls to prevent unauthorized access to network equipment.

Ensure all firmware is kept up to date and firmware upgrades are only downloaded from official sources.

Maintain a network hardware list detailing the device name and type, location, serial number, service tag, and party responsible for the device.

For ease of management and consistency, use standard configurations for each network device.

Configure networking equipment to use the most secure configuration possible. Ensure wireless devices are using WPA2, use SSH version 2, and disable Telnet and SSHv1.

Make sure very strong passwords are set for remote access.

Disable all inactive ports to prevent external devices from accessing your internal network. Also set up a guest network to ensure visitors cannot access your internal resources.

Use network segmentation to allow parts of the network to be isolated in the event of an attack and to hamper lateral movement attempts.

Use a remote management solution to allow the authentication of authorized users.

If you need to use SNMP, use SNMPv3. Change default community strings and set authorized management stations. If you are not using SNMP, ensure it is switched off.

User Account Management

You should adopt the principle of least privilege and only give access rights to users that need to access resources for routine, legitimate purposes. Restrict the use of admin credentials as far as is possible. Admin accounts should only be used for admin purposes. Log out of admin accounts when administration tasks have been performed and use a different account with lower privileges for routine work.

Ensure that each user has a unique account and password and make sure accounts are de-provisioned promptly when employees leave the company. Create a password policy and enforce the use of strong passwords. Consider using a password manager to help your employees remember their secure passwords.

Email Security

Email is the most common attack vector used to gain access to business networks. Phishing is used in 90% of cyberattacks and email is a common source of malware infections. You should use an email security solution that scans inbound and outbound email to protect your network from attack and avoid reputation damage should email accounts be compromised and used to attack your business contacts.

Your email security solution should provide protection against the full range of email threats, including email impersonation attacks, phishing/spear phishing, and malware and ransomware. The solution should also be configured to prevent directory harvesting attempts.

Web Security

The internet is a common source of malware infections, and phishing attacks usually have a web-based component. You should implement a web filtering solution such as a DNS filter to provide secure internet access, which should protect users on and off the network. Your filtering solution should be capable of decrypting, scanning, and re-encrypting HTTPS traffic and should scan for malware including file downloads, streaming media, and malicious scripts on web pages. Use port blocking to block unauthorized outbound traffic and attempts to bypass your internet controls.

Traffic and Log Monitoring

You should be regularly reviewing access and traffic logs to identify suspicious activity that could indicate an attack in progress. Make sure logging is enabled and logs are regularly reviewed. If you only have a handful of servers you could do this manually, but ideally, you should have a security information and event management (SIEM) solution to provide real-time analysis of security alerts generated by your endpoints and network equipment.

Security Awareness Training

If you follow this network security checklist and implement all of the above protections, your network will be well secured, but even robust network security defenses can be undone if your employees engage in risky behaviors and are not aware of security best practices.
Employees should be provided with security awareness training to teach cybersecurity best practices and how to identify threats such as phishing. Security awareness training should be provided regularly, and you should keep employees up to date on the latest threats.
