NOROFF UNIVERSITY COLLEGE
2020 / 2021
You are required to critically reflect upon the following task(s), summarising your response(s) in approximately 300 (no more than 500) words.
You will be required to provide your own interpretation to the questions and answer them in your own words, with citations and referencing where appropriate.
Introduction to Operating and File Systems
We have downloaded VMware and/or VirtualBox and played around with the VLE, creating virtual machines. Why do you think virtualisation has become such an important and necessary tool – both in business-related use and in educational use?
This reflection tackles the importance and necessity of virtualization and virtual machines in computing. Virtualization is a technology that enables “software applications to run on virtual hardware or virtual operating systems” (Carnegie Mellon University (1)).
This virtual hardware or virtual machine is described by Microsoft (2) as “a computer file, typically called an image, that behaves like an actual computer.” Because the virtual machine (hereafter VM) is isolated from the ‘host’ machine, it requires its own operating system to function. This nesting hierarchy may be thought of as a computer within a computer.
As stated by VMware (3), this ability can result in unstable performance and reduced efficiency or speed on the host computer. However, the benefits far outweigh these drawbacks: virtualization can be applied at several layers of the computing stack, as shown in image 1.
One method of achieving some of these capabilities is the use of containers (“a virtual runtime environment” (1)), as provided by Docker and orchestrated by Kubernetes – an approach termed containerization.
Image 1 – the virtualization stack from CMU (1)
So why might a business adopt these technologies? In short, they enable a business to make much better use of its computing resources while drastically improving security. This efficiency applies in both the productivity and the economic sense. Because a ‘host’ computer can run multiple virtual machines, multiple business tasks can be carried out securely, in isolation from one another. Such tasks might include performing system backups, testing other operating systems, using beta software and developing software. In addition, virtual servers make portability and backup of system data far more convenient.
Another area where virtualization is massively beneficial is the education sector. By removing local dependencies, a consistent experience is provided to students. A task specific to an operating system such as Windows suddenly becomes easily feasible for a whole class – even if some students use Apple or Linux machines – by using a virtual machine. This is fairer and more economical for both the students and the school, as it removes the need to purchase multiple licenses.
System administrator setup and support time is also reduced with this centralized resource model. Mateljan et al. (2014) cite virtualization of the classroom, remote connection facilities and easier maintenance as major benefits in education. Students need “root access as a basis for learning key issues” and the “ability to reconfigure and restore networks rapidly”, according to Begum et al.
Once again the isolated nature of a VM enables the above in a secure way with perfect consistency all while saving time and money for both the students and schools. The biggest benefit is the increased equality of access and opportunity for students to take advantage of complex or expensive environments.
- Virtualization via Containers. (25/09/17). Retrieved October 9, 2020, from https://insights.sei.cmu.edu/sei_blog/2017/09/virtualization-via-containers.html
- What is a Virtual Machine and How Does it Work | Microsoft Azure. (n.d.). Retrieved October 7, 2020, from https://azure.microsoft.com/en-us/overview/what-is-a-virtual-machine/
- What is a Virtual Machine? | VMware Glossary. (n.d.). Retrieved October 9, 2020, from https://www.vmware.com/topics/glossary/content/virtual-machine
- V. Mateljan, V. Juricic and M. Moguljak, “Virtual machines in education,” 2014 37th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), Opatija, 2014. Retrieved October 10, 2020, from https://ieeexplore.ieee.org/document/6859639
- Using Virtual Machines in System Administration Education, Begum K et al. Retrieved October 9, 2020, from https://homepages.staff.os3.nl/~karst/edu/SANE_2004_VNL.pdf
During these first days, you have come across the term “Self-protecting operating systems”. Discuss the history behind this and what it means for modern operating systems to be “self-protecting”.
This reflection examines ‘self-protecting operating systems’. Let’s begin by defining the term, then look back at its history before discussing its modern position.
Self-protecting refers to a system’s built-in features and processes that prevent issues and vulnerabilities. This could be as simple as password access control or as complex as the work undertaken by the Kernel Self Protection Project (1), but it generally means protection of system resources and detection of latent errors (2), whether from an attack or an internal error.
This reflection also considers that sometimes a program or piece of hardware can simply malfunction – it isn’t always an attacker’s fault! Self-protection often focuses on memory. Image 1 shows examples of how an operating system can be damaged or exploited.
In the well-known case of stack overflow, the overflow enables “writing on other stacks” (1). An operating system will attempt to mitigate this, with unauthorized access “generally causing abnormal termination of the offending process” (6). One such technique is address space layout randomization (ASLR) (7), as highlighted by Apple in image 2.
Image 1 – details from the Linux Kernel Self protection Project
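The effect of ASLR can be observed directly on a Linux machine: with randomization enabled, the stack of a freshly launched process lands at a different address on each run, making exploit targets harder to predict. A minimal Python sketch (assuming Linux’s /proc interface; it returns None elsewhere):

```python
import re
import subprocess
import sys

def stack_base():
    """Launch a fresh process and return the start of its [stack] mapping.

    Linux-only sketch: /proc/<pid>/maps lists a process's memory regions.
    With ASLR enabled (the default), this address differs on every run.
    """
    if not sys.platform.startswith("linux"):
        return None  # /proc is a Linux facility
    maps = subprocess.run(
        [sys.executable, "-c", "print(open('/proc/self/maps').read())"],
        capture_output=True, text=True,
    ).stdout
    match = re.search(r"([0-9a-f]+)-[0-9a-f]+ .*\[stack\]", maps)
    return int(match.group(1), 16) if match else None

if __name__ == "__main__":
    # Two fresh processes: with ASLR active, the two stack bases differ.
    print(stack_base(), stack_base())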
Historically, cyber security can be traced back to the early 1970s, with the invention of ARPANET and the beginning of computers linking to each other over external networks. 1987 saw the launch of McAfee security software (3) and the birth of an industry providing third-party solutions to operating system vulnerabilities. Much of this protection is now built into modern operating systems, along with instant patching, mitigating many weaknesses.
Image 2 – detail from Apple’s website (5)
Modern operating systems provide a much sterner protective layer, with features such as those from Apple shown in image 2 above. These include the T2 chip, which “automatically encrypts data on your Mac”. Self-protection also extends to keeping the computer usable, avoiding a crash or the need for a complete reset or factory re-install. Linux, macOS and Windows all offer a safe mode, which enables basic functionality while a problem persists. One can perhaps surmise that these extra built-in features, together with the ability to patch instantly, have made modern computing safer in some regards.
- Kernel Self Protection Project – Linux Kernel Security Subsystem. (n.d.). Retrieved October 10, 2020, from https://kernsec.org/wiki/index.php/Kernel_Self_Protection_Project
- System Protection in Operating Systems (21/8/19). Retrieved October 6, 2020, from https://www.geeksforgeeks.org/system-protection-in-operating-system/
- Antivirus software – Wikipedia. (n.d.). Retrieved October 8, 2020, from https://en.wikipedia.org/wiki/Antivirus_software
- Linux malware – Wikipedia. (n.d.). Retrieved October 10, 2020, from https://en.wikipedia.org/wiki/Linux_malware
- macOS – Security – Apple. (n.d.). Retrieved October 9, 2020, from https://www.apple.com/macos/security/
- Memory protection – Wikipedia. (n.d.). Retrieved October 8, 2020, from https://en.wikipedia.org/wiki/Memory_protection
- Address space layout randomization – Wikipedia. (n.d.). Retrieved October 8, 2020, from https://en.wikipedia.org/wiki/Address_space_layout_randomization
When we talk about files – you have come across the two terms “Everything is a file” and “There is no such thing as a file”. The two terms seem contradictory…discuss the meaning of the two terms and explain why they are not really contradictory.
For this reflection we shall look in detail at the abstraction of data-storage concepts, focusing on two particular models. Although these two viewpoints seem diametrically opposed, some investigation reveals that they do not actually contradict one another.
Let’s take our first viewpoint, “Everything is a file”, and explore what the abstraction implies. The statement describes the user experience from the command line or a visual interface, where navigation and access of “documents, directories, hard-drives, modems, keyboards, printers and network communications” (1) is done the same way: via a directory of files. Linux creator Linus Torvalds vehemently endorses this approach to operating system design, as highlighted in a public email thread (2). Another way of defining the approach is to say that it exposes a uniform ‘namespace’, typically implemented via a ‘virtual file system’ (5).
Image 1 – accessing typical ‘non-files’ as a file in Linux (4).
At its core, all computer operation is based upon binary data – collections of ones and zeros. Early computer operators, many decades ago, had to work at this extremely raw level. The evolution of computing could be described as an increasing number of layers of abstraction, each built upon the last.
A computer processor doesn’t recognise files, and therefore we can also say that “there is no such thing as a file”. A processor recognises “only blocks of memory” (3), and may access many different blocks to compose a complete file for the user. For example, a file carries metadata on its name, location, size, protection, type and timestamp – all of which is held separately from the data itself. In Linux, running cat /proc/cpuinfo returns system information in a typical text-file structure, even though no such file exists on disk (4).
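This can be tried directly. The sketch below (assuming a Linux /proc filesystem; it returns None elsewhere) reads /proc/cpuinfo with the same ordinary file calls used for any document, even though the kernel synthesises the content on each read:

```python
import os

def read_pseudofile(path="/proc/cpuinfo"):
    """Read a kernel 'pseudo-file' with ordinary file operations.

    /proc entries are not stored on disk; the kernel generates their
    contents on demand, yet the open()/read() interface is identical
    to that of a regular text file.
    """
    if not os.path.exists(path):  # e.g. on Windows or macOS
        return None
    with open(path) as f:         # the same open() as for any file
        return f.read()

if __name__ == "__main__":
    info = read_pseudofile()
    if info is not None:
        print(info.splitlines()[0])  # e.g. "processor : 0"
```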
If one were speaking more objectively, it would be much safer to say that “everything is a stream of bytes” (4), yet the original phrase persists because it succinctly describes the goal: interacting with different data and processes in a uniform way across the system.
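The byte-stream view is easy to demonstrate: a file’s content is nothing but bytes, while its name, size and timestamps live in filesystem metadata queried separately. A small sketch using Python’s standard library:

```python
import os
import tempfile

def content_and_metadata(path):
    """Return a file's raw byte stream and two pieces of metadata.

    The content is just bytes; the size and modification time are not
    stored inside that stream but in filesystem structures, surfaced
    here via os.stat() -- data and metadata are held apart.
    """
    with open(path, "rb") as f:
        data = f.read()           # the content: nothing but bytes
    meta = os.stat(path)          # the metadata: kept separately
    return data, meta.st_size, meta.st_mtime

if __name__ == "__main__":
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(b"hello")
    data, size, mtime = content_and_metadata(f.name)
    print(data, size)             # b'hello' 5
    os.unlink(f.name)
```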
So, in summary, it is fair to agree with both statements – depending on which layer of abstraction you aim to represent.
- Everything is a file – Wikipedia. (n.d.). Retrieved October 10, 2020, from https://en.wikipedia.org/wiki/Everything_is_a_file
- The everything-is-a-file principle (Linus Torvalds). (n.d.). Retrieved October 10, 2020, from https://yarchive.net/comp/linux/everything_is_file.html
- Lecture 3, Noroff. Retrieved October 10, 2020, from https://lms.noroff.no/pluginfile.php/130582/mod_resource/content/0/Lecture%2003-Files.pdf
- What Does “Everything Is a File” Mean in Linux?. Hoffman, Chris (28/09/16). Retrieved October 10, 2020, from https://www.howtogeek.com/117939/htg-explains-what-everything-is-a-file-means-on-linux/
- Virtual file system – Wikipedia. (n.d.). Retrieved October 10, 2020, from https://en.wikipedia.org/wiki/Virtual_file_system
Many students start off Microsoft Server courses with a feeling that “I know this” because of the similarities in design of the graphical interface between Windows Server 2019 and Windows 10 for example. The students making this assumption often struggle with these courses because a Microsoft server operating system is in fact very different from a Microsoft client operating system. Discuss the differences between Microsoft server operating systems and Microsoft client operating systems.
This reflection tackles the difference between the Microsoft client and Microsoft server operating systems. Let’s begin with a generalised summary of definitions before diving into a little more detail specifically about the Windows systems.
Image 1 – Client and server relationship (Sun Microsystems)
A client can be considered the highest layer of abstraction in a software system, distinct from the server layer, and “does not have to be concerned with how the server performs while fulfilling the request and delivering the response.” (1) A client is, implicitly, hardware or software that “accesses a service made available by a server (which may or may not be located on another computer)”, and relies on sending requests to that server.
A server operating system “provides a function or service” to clients. It hosts and serves applications, and is neatly summarised by Oracle: the “client makes a request for a service and receives a reply to that request; a server receives and processes a request, and sends back the required response. The server is a process that can reside on the same machine as a client or on a different machine across a network.”
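Oracle’s request/response description can be sketched in a few lines of Python with a throwaway local server – an illustration of the model itself, not of any Windows component:

```python
import socket
import threading

# Oracle's model in miniature: the client sends a request; the server
# receives it, processes it, and sends back the required response.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))    # the server may live on the same machine...
srv.listen(1)
port = srv.getsockname()[1]

def serve_once():
    conn, _ = srv.accept()
    request = conn.recv(1024)             # receive and process the request
    conn.sendall(b"echo: " + request)     # ...then send back the response
    conn.close()

threading.Thread(target=serve_once).start()

client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello")                  # the client's request
reply = client.recv(1024).decode()        # the server's reply
client.close()
srv.close()
print(reply)  # echo: hello
```

The same pattern scales from this toy echo exchange up to a database or file-sharing service: only the processing step in the middle changes.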
With these details in mind, the two Windows operating systems mentioned above have similar visual design but very different use cases. One can say that they “use the same kernel and can feasibly run the same software” (Hendrikson, HTG). The client system enables the end user to run many applications, but offers little in the way of managing other computers or services – which is the purpose of Windows Server, with its central interface. Indeed, as shown in image 2, using Windows Server with a GUI is not officially recommended!
Image 2 – Windows Server without GUI recommended by Microsoft
Microsoft’s description of the server model is that it “enables you to create cloud native and modernize traditional apps using containers and micro-services”. They add that it “manages and monitors client computers and/or operating systems”.
Image 3 – products included with Windows Server (Wikipedia)
The server edition also supports far more powerful hardware than the client – up to 24 TB of RAM versus 2 TB – plus an unlimited number of cores, and it allows much deeper configuration of OS processes. As the server model is aimed at businesses, it comes at a much higher cost than a Windows client, as shown in image 4.
Image 4 – Windows Server pricing (Microsoft)
- Microsoft Servers – Wikipedia. (n.d.). Retrieved October 13, 2020, from https://en.wikipedia.org/wiki/Microsoft_Servers#Productivity
- Windows Server 2019 Licensing & Pricing | Microsoft. (n.d.). Retrieved October 13 2020, from https://www.microsoft.com/en-us/windows-server/pricing
- Try Windows Server 2019 on Microsoft Evaluation Center. (n.d.). Retrieved October 14, 2020, from https://www.microsoft.com/en-us/evalcenter/evaluate-windows-server-2019
- What’s the Difference Between Windows and Windows Server? J Hendrikson (21/2/19). Retrieved October 14, 2020, from https://www.howtogeek.com/404763/whats-the-difference-between-windows-and-windows-server/
- Anatomy of the Client/Server Model. (n.d.). Retrieved October 14, 2020, from https://docs.oracle.com/cd/E13203_01/tuxedo/tux80/atmi/intbas3.htm
- Sun Microsystems – Distributed System Architecture – (Wayback Machine). (n.d.). Retrieved October 15, 2020, from https://web.archive.org/web/20110406121920/http://java.sun.com/developer/Books/jdbc/ch07.pdf
- Client–server model – Wikipedia. (n.d.). Retrieved October 16, 2020, from https://en.wikipedia.org/wiki/Client–server_model
Linus Torvalds posted the following to the newsgroup comp.os.minix on Usenet:
Hello everybody out there using minix – I’m doing a (free) operating system (just a hobby, won’t be big and professional like gnu) for 386(486) AT clones. This has been brewing since april, and is starting to get ready. I’d like any feedback on things people like/dislike in minix, as my OS resembles it somewhat (same physical layout of the file-system (due to practical reasons) among other things). I’ve currently ported bash(1.08) and gcc(1.40), and things seem to work. This implies that I’ll get something practical within a few months, and I’d like to know what features most people would want. Any suggestions are welcome, but I won’t promise I’ll implement them 🙂 Linus (torvalds [at] kruuna.helsinki.fi) PS. Yes – it’s free of any minix code, and it has a multi-threaded fs. It is NOT portable (uses 386 task switching etc), and it probably never will support anything other than AT-harddisks, as that’s all I have :-(. —
Why do you think this post led to the remarkable history of Linux? Discuss the positive impact the post, the subsequent development of the Linux kernel, and Linux-based operating systems have had on the IT industry.
The history of Linux is remarkable in a number of ways with the most significant perhaps being the growth and usage it has received from humble beginnings. It could be argued however that the way in which such a large community formed around Linux and successfully worked together in developing the kernel is what really impacted computing the most. It proved that a skilled and motivated community can be more effective than the world’s biggest corporations.
One could take the achievements of open source for granted in 2020, but it has clearly taken a lot of belief and commitment from Torvalds to remain true to the original ethos of the project, so evident in that first email. His humility, narrowly stated aim and invitation for feedback ended up empowering developers to do far more, free of the license fees of Microsoft and others. Linux can now officially count over 20,000 developers as contributors and over a million commits of code (2020 Linux Kernel History Report).
A 1997 MetroActive interview with Linus Torvalds contains some insightful commentary on the early days: “Linux gave everyone the power of unix (by being open sourced). The biggest leap forward was when Oracle announced a port of their DB to Linux. That pushed it forward into the internals of the data centers at corporations …and then IBM started putting resources into Linux.”
Image 1 – supported by the world’s biggest companies, including all of the top ten cloud providers
This precedent has grown into a fully-fledged effort from the commercial world to support the system through the Linux Foundation, as seen in images 1 and 2.
Image 2 – supporting the foundation doesn’t just mean dollars, but development contributions too
Although Linux isn’t well known to consumers, it eclipses Apple and Microsoft as the de facto choice across computing at large. This includes “your phones, your thermostats, in your cars, refrigerators, Roku devices, and televisions. It also runs most of the Internet, all of the world’s top 500 supercomputers, and the world’s stock exchanges”.
Here is a non-exhaustive list of popular reasons for its success:
- It’s free and neutral
- It’s totally customizable
- It’s more performant
- It’s more stable
With all these details in mind, trying to imagine a world without Linux, without open source collaboration at scale, becomes very difficult. It appears to be absolutely essential in almost every industry and most importantly free – as in money but also in freedom.
- 2020 Linux Kernel History Report. (August 2020.). Retrieved October 14, 2020, from https://www.linuxfoundation.org/wp-content/uploads/2020/08/2020_kernel_history_report_082720.pdf
- What is Linux? – Linux.com. (n.d.). Retrieved October 15, 2020, from https://www.linux.com/what-is-linux/
- The Linux Foundation – Supporting Open Source Ecosystems. (n.d.). Retrieved October 14, 2020, from https://www.linuxfoundation.org/
- Metroactive Features | Michael Learmouth | Linus Torvalds. (5/8/97). Retrieved October 14, 2020, from http://www.metroactive.com/papers/metro/05.08.97/cover/linus-9719.html
- Our Corporate Members – The Linux Foundation. (n.d.). Retrieved October 16, 2020, from https://www.linuxfoundation.org/membership/members/
In a scenario where you run a Windows Active Directory Domain, containing two domain
controllers, six servers running databases, filesharing and other useful services in a Windows network – senior management have decided to implement a webserver for both internal and external information flow. You see the benefits of implementing a Windows Server with Internet Information Services…but you also favour an Ubuntu Server with Apache in many cases…discuss advantages and disadvantages with both setups and suggest the best solution for the given scenario.
This post will first attempt to provide some insight into the potential use of two different options; Windows Server with Internet Information Services (IIS) and an Ubuntu Server with Apache. Following this will be a recommendation for a particular business solution.
IIS is a Windows-based “flexible, secure and manageable Web server for hosting anything on the Web” (1). Although only available on Windows, Microsoft aims to offer a seamless platform experience with a highly “accessible” REST API. Part of this platform experience includes security, accepting “any form of authentication and authorization that IIS uses”. Microsoft also touts the ability to manage a cluster of machines from a single client with little overhead (2).
Image 1 – the IIS web user interface
The Apache server is free and open source, typically runs PHP on Linux, and powers 36% of the internet, including huge services such as Apple.com and Baidu.com. Developers praise its stable performance and incredible customization options. It is a mature and proven technology, although the need to work through extensive documentation and the lack of a GUI perhaps count as drawbacks for some.
Image 2 – Apache usage on the internet – currently 36%
Our two options are the two most popular solutions in this category, so either would be a justifiable implementation. However, as this business already has a Windows setup and licenses, the decision need not weigh cost differences heavily. I would therefore recommend the IIS option, as native integration should mean simpler setup and maintenance (GUI tools) while taking advantage of other platform features such as “automated IIS policy setups in multiple servers…using Windows group policy tools”.
Another likely factor in favour of IIS is that the current developers are familiar with the Microsoft technology stack (and probably use the .NET framework).
On performance, the particular example in image 3 shows Apache outperforming IIS, although results depend on many factors; some developers (8) state a preference for IIS because it “consumes less CPU, has better response time and can handle more requests per second”. A final consideration is the type of traffic the server will handle, as suggested by DevX: media-rich sites and heavy traffic are arguably better served by IIS, whilst Apache, being decentralized, is the more portable option.
Image 3 – server performance comparison (7)
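“Requests per second” figures like those in image 3 come from measurements of this kind. Below is a deliberately crude single-client probe against a throwaway local Python server – illustrative only; real benchmarks use concurrent tools such as ab or wrk against the actual IIS or Apache host:

```python
import http.server
import threading
import time
import urllib.request

def requests_per_second(url, seconds=1.0):
    """Count how many sequential GET requests complete in a time window."""
    done, deadline = 0, time.time() + seconds
    while time.time() < deadline:
        urllib.request.urlopen(url).read()   # one full request/response
        done += 1
    return done / seconds

if __name__ == "__main__":
    # Stand-in target: a local Python file server, not IIS or Apache.
    srv = http.server.HTTPServer(
        ("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    url = f"http://127.0.0.1:{srv.server_address[1]}/"
    print(f"~{requests_per_second(url, 0.5):.0f} requests/sec")
    srv.shutdown()
```

A single sequential client like this understates real throughput, which is why published comparisons (and the CPU/response-time claims above) depend so heavily on concurrency levels, content type and hardware.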
- IIS vs Apache – which server platform is best for you? – Comparitech. (n.d.). Retrieved October 16, 2020, from https://www.comparitech.com/net-admin/iis-vs-apache/
- IIS vs Apache: Which is the Best Web Server? | UpGuard. (27/8/20). Retrieved October 18, 2020, from https://www.upguard.com/blog/iis-apache
- Apache vs. IIS Performance. (21/2/17). Retrieved October 16, 2020, from https://www.devx.com/webdev/performance-comparison-apache-vs.-iis.html
- Usage Statistics and Market Share of Apache, October 2020. (n.d.). Retrieved October 18, 2020, from https://w3techs.com/technologies/details/ws-apache
- Pros and Cons of Apache Web Server 2020. (n.d.). Retrieved October 18, 2020, from https://www.trustradius.com/products/apache-web-server/reviews?qs=pros-and-cons
- What is the best server, Apache or IIS? – Quora. (n.d.). Retrieved October 16, 2020, from https://www.quora.com/What-is-the-best-server-Apache-or-IIS
- Introduction to the Microsoft IIS Administration API | Microsoft Docs. (n.d.). Retrieved October 16, 2020, from https://docs.microsoft.com/en-us/IIS-Administration/
- Overview : The Official Microsoft IIS Site. (30/11/2018). Retrieved October 18, 2020, from https://www.iis.net/overview