INTRODUCTION
Topics Covered:
- Benefits of coding
- Computer basics
- Algorithms
- Flowcharts
- Programming
“Everybody should learn to program a computer, because it teaches you how to think.”
– Steve Jobs, Apple founder
Benefits of Coding
Why should I learn to code? If you are not planning to be a software developer, this is a reasonable question. If you are a business student or a psychology major it may be difficult to see how coding fits into your career plans. The goal of this book is to help you understand how learning to code is a skill that will help you in any career. Through learning to code you will no longer only be a computer user; you will become a coding Jedi with the ability to command the computer to obey your will.
Benefits of learning to code include:
- You can become a better problem solver: Employers in every industry consistently rank problem solving as the most desired skill for new hires. Coding helps you to become a better problem solver by teaching you to break down problems into a logical and structured format. In addition, it’s often necessary to find creative problem solutions that are different than anything that’s been done before. This analytical process will come in handy whenever you need those skills to tackle a challenging problem at work or in your daily life.
- You may need to code in your job: It doesn’t matter if you are a scientist in a research lab or a financial planner working in an office; there may come a time when you need to write code. You may need to solve a small task that your current software cannot perform. Sometimes the need to code shows up in a surprising context. Creating database queries and spreadsheet macros are essentially instances of coding. With your newfound programming skills, you will be able to write a program to solve that task and impress your boss.
- You can develop a basic understanding of how software works: In virtually every career, you will be working with technology and software on a daily basis. Once you have written programs yourself, you will gain an appreciation and understanding of how software works. You will gain insights into what features can be better exploited and be able to identify deficiencies or limitations in the programs that you are using.
- You can learn to be persistent: Albert Einstein famously stated, “It’s not that I’m so smart, I just stay with problems longer.” Coding helps you learn to be persistent when facing difficult problems. You may get stuck and hit roadblocks on your journey, but the satisfaction of sticking with it and finding a solution is worth the effort. It takes persistence to be successful in almost any endeavor, and coding helps you learn persistence.
- You can communicate about technology effectively: Learning the basics of programming will be helpful in job situations where a non-techie will need to talk to someone in the computing field. There are so many terms and phrases that you will pick up while learning to program. You won’t have to speak the techie language perfectly, but you will know enough to pick up on important conversations among computing professionals, especially if you are working with software developers.
Although this book will focus on all of these coding benefits, in this chapter, we will discuss a few problem-solving tools. Before we do that, though, it’s important that you are familiar with some computer basics.
Computer Basics
Before we begin our journey to becoming a coding Jedi, we need to make sure we have a basic understanding of how computers work. Computers are constructed from hardware and software. The hardware makes up the physical components of a computer. Most general-purpose computers consist of four parts: (1) the Central Processing Unit (CPU) or processor, (2) memory or Random Access Memory (RAM), (3) inputs like a keyboard or mouse, and (4) outputs like a monitor or printer.
The software is the computer programs. This consists of both the operating system (Windows, Macintosh, Linux, etc.) and the applications (word processor, spreadsheet, games, web browser, etc.).
As illustrated in the previous figure, most computers today use the von Neumann architecture. This means that programs and data are loaded into RAM before a program runs. When you double-click on a program icon to run a program, you may notice a slight delay before your program appears. That is your computer loading the program and any necessary data from the external storage into the internal memory unit.
When we create software, most of the time our programs will follow the data processing cycle. This consists of three stages: (1) input, (2) processing, and (3) output. This cycle is illustrated below.
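To make the cycle concrete before we meet Python properly in Chapter Two, here is a minimal sketch of a program with all three stages. The program, its variable names, and the temperature-conversion task are our own invention for illustration, and the input value is hard-coded rather than read from the keyboard:

```python
# A minimal illustration of the data processing cycle.
# The task and names here are our own, invented for this sketch.

def fahrenheit_from(celsius):
    """Processing step: convert a Celsius temperature to Fahrenheit."""
    return celsius * 9 / 5 + 32

celsius = 100.0                        # (1) input (hard-coded here)
fahrenheit = fahrenheit_from(celsius)  # (2) processing
print(fahrenheit)                      # (3) output: 212.0
```

In a real program, the input stage would usually read the value from the user or from a file rather than hard-coding it.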
Two of the primary tools that programmers rely on when developing solutions are algorithms and flowcharts. We will discuss each of these tools and explain how they can be used to help solve problems, as well as help simplify the coding process.
Algorithms
An algorithm is a set of general steps to solve a problem. We encounter algorithms frequently in our everyday lives. As we drive to school, we are following an algorithm (turn right, go 1.5 miles, turn left at the second light, etc.). When we get home and decide to treat ourselves by baking a cake (beat two eggs, add one cup of flour, add one tablespoon of salt, stir vigorously, etc.), we are following an algorithm.
Let’s take a look at an example. Suppose you have a parent who is constantly yelling at you because you leave your bedroom lamp on. Maybe you are curious as to how much that is actually increasing the monthly electricity bill. You decide to write an algorithm that will solve this problem.
With a little digging on the Internet, you discover that in order to find the cost of electricity, you will need to know three things: (1) the wattage of your light bulb, (2) how many hours you left it on, and (3) the price your electric company charges per kilowatt-hour. You also discover that to compute this cost, you simply multiply the wattage by the hours, divide by 1,000, and then multiply by the price. You divide by 1,000 because electric companies charge per kilowatt-hour, while bulbs are rated in watts and we are asking the user to enter the time in hours. Therefore, your algorithm ends up looking like this:
Algorithm for computing cost of electricity:
- Have the user input wattage, hours, price
- cost = (wattage x hours / 1000) x price
- Output cost
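Looking ahead to the Python you will learn in this book, the algorithm above can be sketched in just a few lines. This is a sketch only: the function name and the sample values are our own, and the price is assumed to be in dollars per kilowatt-hour. We also hard-code sample values instead of asking the user to type them in:

```python
# A sketch of the electricity-cost algorithm in Python.
# The function name and sample values are our own; price is
# assumed to be in dollars per kilowatt-hour.

def electricity_cost(wattage, hours, price):
    """Return the cost of running a bulb of the given wattage
    for the given number of hours at the given price per kWh."""
    kilowatt_hours = wattage * hours / 1000  # convert watt-hours to kWh
    return kilowatt_hours * price

# Example: a 60-watt bulb left on for 100 hours at $0.12 per kWh
# uses 6 kWh, which costs about 72 cents.
print(electricity_cost(60, 100, 0.12))
```

Notice how closely the code mirrors the algorithm: one line per step, in the same order.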
Algorithms for computer programs need to be a lot more precise than algorithms for people. For example, many shampoo bottles will include the following “algorithm”:
a) Wet your hair
b) Apply a small amount of shampoo
c) Lather
d) Rinse
e) Repeat
A computer program would need to clarify – “How do you wet your hair?” “What is a small amount of shampoo?” “How do you lather?” And most importantly, only repeat one time! Computers are way too literal and need extremely detailed instructions.
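If we wanted a computer to carry out the shampoo instructions, we would have to spell out exactly how many times to repeat them. In this minimal Python sketch (the function and step names are placeholders of our own), the repetition count is explicit, so the program stops instead of washing forever:

```python
# The shampoo "algorithm" with the repetition count made explicit.
# The function and step names are placeholders of our own.

def wash_hair(times):
    """Return the sequence of shampoo steps, repeated an explicit
    number of times -- a computer will never guess the count."""
    steps = []
    for _ in range(times):  # the count is stated, not implied
        steps.extend(["wet hair", "apply shampoo", "lather", "rinse"])
    return steps

# Washing twice produces exactly eight steps, then stops.
print(len(wash_hair(2)))  # 8
```

A person reading the bottle knows “repeat” means “once more, at most”; a computer has to be told.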
For a list of steps to be considered an algorithm, it must have the following five characteristics:
- Inputs – zero or more well-defined data items that must be provided to the algorithm
- Outputs – one or more well-defined results produced by the algorithm
- Definiteness – the algorithm must specify every step and the order the steps must be taken in the process
- Effectiveness – every step must be feasible. You couldn’t have a step, for example, that said “list every prime number”
- Finiteness – the algorithm must eventually stop
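To see why effectiveness and finiteness matter, consider the infeasible step “list every prime number”: it could never finish. A bounded version, sketched here in Python (the function name is our own), is both feasible and finite because it stops at a limit:

```python
# A *finite* variant of "list every prime number": stop at a limit.
# The function name is our own, chosen for this illustration.

def primes_up_to(limit):
    """Return every prime number from 2 up to and including limit."""
    primes = []
    for n in range(2, limit + 1):
        # n is prime if no number from 2 to sqrt(n) divides it evenly
        if all(n % d != 0 for d in range(2, int(n ** 0.5) + 1)):
            primes.append(n)
    return primes

print(primes_up_to(20))  # [2, 3, 5, 7, 11, 13, 17, 19]
```

The loop’s upper bound is what guarantees finiteness; remove it and the algorithm would run forever.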
Flowchart
A flowchart is a visual representation of a problem solution. Different shapes in a flowchart have different meanings. Arrows, or arcs, connect the shapes and provide the flow of control for your problem solution. The following table illustrates the meaning of some of the more commonly used flowchart shapes.
Shape | Shape Name | Shape Purpose |
Ellipse | Start/End | Marks where the flowchart begins and ends |
Parallelogram | Input/Output | Data supplied by the user or results displayed |
Rectangle | Process | A formula, calculation, or other action |
Diamond | Decision | A question whose answer determines which path to follow |
Going back to the problem of computing the cost of electricity, let’s take a look at those steps expressed using a flowchart. There are many tools for creating a flowchart. We recommend using the free web-based tool http://draw.io to build your flowcharts.
What is Computer Programming?
A computer, in its simplest form, is nothing more than a collection of silicon, plastic, metal, glass, and wire. When you turn it on, it does precisely what it’s been instructed to do – nothing more and nothing less. That’s where programming comes in. A program is a set of instructions that tell the computer what it’s supposed to do; programming is the process of preparing these instructions. Because computers interpret their programs very literally, programmers need to be quite explicit in the directions that they prepare.
The Origins of Programming
Since the first computers were developed in the 1940s, the discipline of computer programming has undergone a continuous evolution. In those days, computers were often programmed by means of large patch panels, using wires to represent individual instructions. Early on, though, it was recognized that flexibility would be increased if computer programs could instead be encoded as numeric data and stored in the computer’s memory. While an improvement over patch panels, these machine language programs were still difficult to work with. Then, as now, computers used binary numbers, so machine language programs were nothing more than strings of zeros and ones.
To streamline the programmer’s job, special assembly languages were developed. Rather than having to remember, for example, that the binary pattern 00101010 is the instruction that tells the computer to add two values, while the pattern 00101011 stands for subtract, the assembly language programmer uses special mnemonic names, such as ADD or SUB. In addition, assembly languages introduced the concept of using labels to stand for addresses within the computer’s memory. Thus, the instruction:
ADD A,B
might be used to tell the computer to add the values in memory locations 123 and 147, rather than the binary form:
00101010 01111011 10010011
Of course, the computer didn’t, and still doesn’t, understand assembly language directly. Instead, special programs, called assemblers, were (and are) used to translate assembly language to its binary equivalent.
Although assembly language programming is still an option with computer systems, it’s used only sparingly, primarily for performing very low-level tasks where direct communication with the computer’s hardware is required. Most programming is instead done using more sophisticated languages. This is because assembly language is still quite difficult to work with, requiring even the simplest tasks to be broken down into sequences of several, or even several hundred, instructions. Also, virtually every computer system has its own unique assembly language. To run an existing assembly language program on a new computer system requires translation of the program into the new system’s assembly language – often a formidable task.
The Development of High-Level Languages
High-level programming languages were first introduced in the 1950s. Whereas each instruction in an assembly language represents a single machine language instruction, a single high-level language instruction will usually translate into several machine language instructions. This implies, of course, that high-level languages are far more expressive than assembly languages. It also implies that the translation process required to convert programs written in these languages into a form that the computer can process is far more complex.
There are two strategies for translating high-level languages. The first uses software called a compiler, which translates the program fully into machine language. Once translated, the compiled program can be run at any time without any additional translation required. In contrast, other languages are interpreted. When a program written in an interpreted language is run, an interpreter reads one instruction at a time and determines how to carry out the required action.
To better understand the difference between compiling and interpreting, imagine that you have an article written in a foreign language. You could hire someone to translate the article to English and give you a written copy of this translation. This is what a compiler does. Alternatively, you could hire someone to read the article aloud, translating it to English as they read. This is what an interpreter does. Notice the important difference between these two approaches. When the article is “compiled” for you, you can refer back to the translated version at any time; the “interpreted” version, however, is not retained, and you’d need to seek out your interpreter again if you want to review the article’s contents.
There are literally hundreds of different high-level programming languages. When you are learning to program, one of the first challenges is selecting an appropriate language. The TIOBE (The Importance Of Being Earnest) index is a measure of the popularity of programming languages. This list gets updated monthly, but here is a recent glimpse of the top ten high-level, general-purpose languages:
- C
- Java
- Python
- C++
- C#
- Visual Basic
- JavaScript
- PHP
- Go
- R
For this book, we have chosen the Python programming language. In Chapter Two, we will explain why Python is a great choice for learning to code. We will also show you how to download and install Python for free in just a matter of minutes. Before you know it, you will be writing your first programs.
Chapter Review Exercises:
1.1. Describe five benefits for a person not working in a computing career to learn how to code.
1.2. Explain the function of each of the following flowchart shapes:
- Arrow
- Diamond
- Ellipse
- Parallelogram
- Rectangle
Programming Projects:
1.1. Answer the following questions about algorithms:
- Give the steps for withdrawing money from an ATM.
- Explain how your steps for part (a) meet all five of the essential characteristics for an algorithm.
- Develop an algorithm for subtracting two 3-digit numbers.
- Explain how your steps for part (c) meet all five of the essential characteristics for an algorithm.
- Create an algorithm for another common, everyday task you are familiar with.
- Explain how your algorithm for part (e) meets all five of the essential characteristics for an algorithm.