.Rar archives offer excellent compression and are very easy to open in Windows. This video clip will walk you through the installation of WinRar by Rarlabs, which makes quick work of this task.
You can find WinRar here: Download WinRar
.Rar files are similar to the .zip files you encounter as email attachments or online downloads. File compression is a rather clever concept that is especially beneficial for the web.
Zip and Rar files are especially useful when uploading to a web server, where each request taxes the server. Rather than transferring many individual files, it is best to package the contents into a single zip file and upload that one unit. The upload will be much faster, and once the archive is on the server you can extract it there.
The main goal is file size reduction: compressed files save bandwidth and take up less disk space than their uncompressed versions. Programs such as WinRar or WinZip are utilities that can both compress files into these packages and extract them again.
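If you want to see the packaging step in practice, here is a minimal sketch using Python's built-in shutil module; the folder name site_files is just a hypothetical example, not a path from this article.

```python
import shutil

# Bundle everything in ./site_files into a single archive, site_files.zip.
# Uploading that one file is a single transfer instead of hundreds, and the
# server (or its control panel) can extract it once it arrives.
archive_path = shutil.make_archive("site_files", "zip", root_dir="site_files")
print(f"Created {archive_path}")
```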
The Process of Compression
The basic idea behind compression is quite simple. Imagine you have a set of files you would like to compress. All the compression program really does is look for redundant data and replace it with what are known as references.
Take a sentence with repeating words as an example. Here is a sobering verse from the Bible, the book of Proverbs, chapter 1, verse 28:
Then shall they call upon me, but I will not answer; they shall seek me early, but they shall not find me:
Notice how many of the words repeat. The verse contains a total of 22 words and 106 characters (including spaces), so the file size, as you would imagine, is 106 bytes (1 byte is the equivalent of one character in this case). Remember, this is the size of the text file BEFORE compression begins.
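You can verify those counts yourself. Here is a quick Python sketch that measures the verse the same way, assuming plain ASCII text where each character is one byte.

```python
verse = ("Then shall they call upon me, but I will not answer; "
         "they shall seek me early, but they shall not find me:")

print(len(verse.split()))          # 22 words
print(len(verse))                  # 106 characters, including spaces
print(len(verse.encode("ascii")))  # 106 bytes on disk, 1 byte per character
```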
When the compression program goes to work, it finds this redundant data; you can picture its logic tallying up which pieces of text it has already seen.
This one verse contains several duplicate patterns of information, and the number of repeated occurrences is rather surprising. At first glance you might overlook them, but spotting them is exactly what compression does.
So in reality we only need 14 distinct words and 3 different punctuation marks to reconstruct this exact verse; almost a third of the data in the file is redundant. On a file this small, though, you won't notice a saving. In fact, the compressed file will actually come out larger.
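The same tally can be sketched in code: a set of the distinct words in the verse really does come to 14, which is the only vocabulary a compressor would need to keep.

```python
verse = ("Then shall they call upon me, but I will not answer; "
         "they shall seek me early, but they shall not find me:")

# Strip punctuation so "me," "me:" and "me" count as the same word.
words = [w.strip(",;:") for w in verse.split()]

print(len(words))                    # 22 words in total
print(len(set(words)))               # 14 distinct words
print(len(words) - len(set(words)))  # 8 repeated words, roughly a third
```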
Compressing the original 106-byte file into .rar format gives a final size of 167 bytes! So, to illustrate the point properly, I am going to copy and paste that exact verse a few times.
Now let’s look at this little experiment. When the verse is repeated 20 times in the file, the uncompressed size comes to 2,120 bytes (20 x 106 bytes), or about 2.1 KB.
Let’s compress this file now and see what we get. The new file weighs in at a whopping 186 bytes. Not much different from before, you say? Well, that’s exactly the point. The file is so full of redundancy that twenty copies’ worth of data barely affects the compressed size.
So what would 40 times look like?
Now the uncompressed file sits at 4.21 KB (40 x 106 bytes), and after compression it has grown by only 11 bytes. Literally double the data, and only 11 extra bytes are needed to recreate it. Without getting into the complexities of how this is done, this demonstration simply shows that compression is more effective on some files than on others, depending on how much redundant data they contain.
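You can reproduce the shape of this experiment with any LZ-based compressor. The exact byte counts from WinRar won't match, but the trend, where duplicated data barely adds to the compressed size, does. Here is a sketch using Python's zlib module (a DEFLATE/LZ77 implementation) rather than the RAR format.

```python
import zlib

verse = ("Then shall they call upon me, but I will not answer; "
         "they shall seek me early, but they shall not find me:")

for copies in (1, 20, 40):
    data = (verse * copies).encode("ascii")
    compressed = zlib.compress(data, level=9)
    print(f"{copies:>2} copies: {len(data):>5} bytes raw -> "
          f"{len(compressed):>3} bytes compressed")

# The raw size grows linearly (106, 2120, 4240 bytes), while the compressed
# size grows by only a handful of bytes, because every extra copy is stored
# as a short back-reference to text the compressor has already seen.
```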
LZ Adaptive Dictionary Based Algorithm
LZ stands for Lempel and Ziv, the creators of the algorithm that most compression programs are built on. The dictionary works a bit like a phone book: numbers are tied to specific pieces of data, such as the words in our verse. The algorithm stores those numbers as references back to the dictionary entries, and that is where the magic of file size reduction happens.
With this system, the original Bible verse can be reconstructed from the dictionary and the list of references. It also explains why very small files, as in the first example above, don't shrink and can even grow: the dictionary itself takes up space. Once the file gets large enough, as in the second illustration where we copied the same verse 20 times over, you begin to see the compression really doing its thing.
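To make the phone-book idea concrete, here is a toy sketch of dictionary coding: each distinct word gets a number, and the text is stored as a list of numbers plus the dictionary. It is a deliberate simplification of what LZ actually does (real LZ builds references to earlier byte sequences adaptively as it reads the data), but it shows how repeated words collapse into short references.

```python
verse = ("Then shall they call upon me, but I will not answer; "
         "they shall seek me early, but they shall not find me:")

# Build the "phone book": the first time a word appears it gets a number;
# every later occurrence is recorded only as that number.
dictionary = {}
references = []
for word in verse.split():
    key = word.strip(",;:")
    if key not in dictionary:
        dictionary[key] = len(dictionary)
    references.append(dictionary[key])

print(len(dictionary))  # 14 dictionary entries
print(references)       # 22 small numbers standing in for the words

# Reconstruction: look each number back up in the dictionary.
lookup = {number: word for word, number in dictionary.items()}
print(" ".join(lookup[n] for n in references))  # the verse, minus punctuation
```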
This is also why web developers, when trying to improve a site's performance, do something called minifying files. Minification isn't quite the same thing, but it strips out unnecessary spaces and line breaks, which ultimately speeds things up. On top of that, properly configured servers can apply GZIP compression, which can shrink web files by as much as 75-80%. The compressed version is sent to the user's browser, where it is decompressed and rendered.
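As a rough illustration of those web savings, here is a sketch that gzips a small block of repetitive HTML with Python's gzip module. The markup itself is made up for the example, but the 75-80% reduction on real pages comes from the same kind of repeated tags and attributes.

```python
import gzip

# A made-up, repetitive snippet of HTML standing in for a real page.
html = ('<ul class="nav">\n'
        + '  <li class="nav-item"><a class="nav-link" href="#">Item</a></li>\n' * 50
        + '</ul>\n').encode("utf-8")

compressed = gzip.compress(html, compresslevel=9)
reduction = 100 * (1 - len(compressed) / len(html))

print(f"Original: {len(html)} bytes")
print(f"Gzipped:  {len(compressed)} bytes")
print(f"Saved:    {reduction:.0f}%")

# A correctly configured web server does this on the fly, and the browser
# decompresses the response before rendering it.
```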