This guide will teach you step by step how to download files from the command line in Linux, Windows or macOS using the open source (free) tool wget. Wget is a very cool command-line downloader for Linux and UNIX environments that has also been ported to Windows and macOS. Don’t be fooled by the fact that it is a command line tool. It is very powerful and versatile and can match some of the best graphical downloaders around today. It has features such as resuming interrupted downloads, bandwidth throttling, authentication support, and much more. I’ll get you started with the basics of using wget and then I’ll show you how you can automate a complete backup of your website using wget and cron.
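To give you a quick taste of those features, here is roughly what bandwidth throttling and HTTP authentication look like on the command line (the URL, rate and credentials below are just placeholders, not real values):
# wget --limit-rate=200k https://www.example.com/large-file.zip
# wget --user=myuser --password=mypassword https://www.example.com/protected/file.zip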
Let’s get started by installing wget. Most Linux distributions come with wget pre-installed. If you manage to land yourself a Linux machine without a copy of wget, try the following. On a Red Hat Linux based system such as Fedora you can use:
# yum install wget
or if you use a Debian based system like Ubuntu:
# sudo apt-get install wget
One of the above should do the trick for you. Otherwise, check with your Linux distribution’s manual to see how to get and install packages. Users on Windows can access wget via this website, and for Mac users we have a full guide on how to install wget in macOS.
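Whichever route you take, you can confirm that wget is installed and see which version you have with:
# wget --version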
The most basic operation a download manager needs to perform is to download a file from a URL. Here’s how you would use wget to download a file:
# wget https://www.simplehelp.net/images/file.zip
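If you’d rather save the file under a different name, wget’s -O option lets you choose the output filename (the name below is just an example):
# wget -O latest.zip https://www.simplehelp.net/images/file.zip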
Yes, it’s that simple. Now let’s do something more fun. Let’s download an entire website. Here’s a taste of the power of wget. If you want to download a website you can specify the depth that wget must fetch files from. Say you want to download the first level links of Yahoo!’s home page. Here’s how you would do that:
# wget -r -l 1 https://www.yahoo.com/
Here’s what each option does. The -r option activates recursive retrieval of files. The -l stands for level, and the number 1 next to it tells wget how many levels deep to go while fetching the files. Try increasing the number of levels to two and see how much longer wget takes.
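For example, going two levels deep just means bumping that number (be prepared for it to take quite a bit longer):
# wget -r -l 2 https://www.yahoo.com/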
Now if you want to download all the “jpeg” images from a website, a user familiar with the Linux command line might guess that a command like “wget http://www.sevenacross.com*.jpeg” would work. Well, unfortunately, it won’t. What you need to do is something like this:
# wget -r -l1 --no-parent -A.jpeg https://www.yahoo.com
The -A option tells wget to accept only files whose names end in .jpeg, and --no-parent stops it from climbing into directories above the one you started in.
Another very useful wget option is the ability to resume a download. Say you started downloading a large file and you lost your Internet connection before the download could complete. You can use the -c option to continue your download from where you left off.
# wget -c http://www.example_url.com/ubuntu-live.iso
Now let’s move on to setting up a daily backup of a website. The following command will create a mirror of a site on your local disk. For this purpose wget has a specific option, --mirror. Try the following command, replacing sevenacross.com with your website’s address.
# wget --mirror http://www.sevenacross.com/
When the command is done running you should have a local mirror of your website. This makes for a pretty handy backup tool. Let’s turn this command into a cool shell script and schedule it to run at midnight every night. Open your favorite text editor and type the following. Remember to adapt the path of the backup and the website URL to your requirements.
#!/bin/bash
YEAR=$(date +"%Y")
MONTH=$(date +"%m")
DAY=$(date +"%d")
BACKUP_PATH="/home/backup" # replace path with your backup directory
WEBSITE_URL="http://www.sevenacross.net" # replace url with the address of the website you want to back up
# Create and move to today's backup directory
mkdir -p "$BACKUP_PATH/$YEAR/$MONTH/$DAY"
cd "$BACKUP_PATH/$YEAR/$MONTH/$DAY" || exit 1
wget --mirror "$WEBSITE_URL"
Now save this file as something like website_backup.sh and grant it executable permissions:
# chmod +x website_backup.sh
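Before handing the script over to cron, it’s a good idea to give it one manual run so any path or URL mistakes show up right away:
# ./website_backup.sh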
Open your cron configuration with the crontab -e command and add the following line at the end:
0 0 * * * /path/to/website_backup.sh
You should have a copy of your website in /home/backup/YEAR/MONTH/DAY every day. For more help using cron and crontab, see this tutorial.
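If you want to double-check that a given night’s backup actually landed, you can list today’s directory (this assumes the /home/backup path used in the script above):
# ls /home/backup/$(date +%Y)/$(date +%m)/$(date +%d)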
Now that you have the basics of downloading files from the command line, you can get into the advanced stuff by reading wget’s man page: just type man wget from the command line.