{{Short description|Instructions a computer can execute}}
{{for|the TV program|The Computer Programme{{!}}''The Computer Programme''}}
] for a computer program written in the ] language. It demonstrates the ''appendChild'' method. The method adds a new child node to an existing parent node. It is commonly used to dynamically modify the structure of an HTML document.]]
{{Program execution}}
A '''computer program''' is a ] or set{{efn|The ] language allows for a database of facts and rules to be entered in any order. However, a question about a database must be at the very end.}} of instructions in a ] for a ] to ]. It is one component of ], which also includes ] and other intangible components.<ref name="ISO 2020">{{cite web
| title=ISO/IEC 2382:2015
| website=ISO
| date=2020-09-03
| url=https://www.iso.org/obp/ui/#iso:std:iso-iec:2382:ed-1:v1:en
| access-date=2022-05-26
| quote= all or part of the programs, procedures, rules, and associated documentation of an information processing system.
| archive-date=2016-06-17
| archive-url=https://web.archive.org/web/20160617031837/https://www.iso.org/obp/ui/#iso:std:iso-iec:2382:ed-1:v1:en
| url-status=live
}}</ref>

A ''computer program'' in its ] form is called ]. Source code needs another computer program to ] because computers can only execute their native ]. Therefore, source code may be ] to machine instructions using a ] written for the language. (] programs are translated using an ].) The resulting file is called an ]. Alternatively, source code may execute within an ] written for the language.<ref name="cpl_3rd-ch1-7_quoted">{{cite book
| last = Wilson
| first = Leslie B.
| title = Comparative Programming Languages, Third Edition
| publisher = Addison-Wesley
| year = 2001
| page = 7
| quote = An alternative to compiling a source program is to use an interpreter. An interpreter can directly execute a source program
| isbn = 0-201-71012-9
}}</ref>

If the executable is requested for execution, then the ] ] it into ] and starts a ].<ref name="osc-ch4-p98">{{cite book
| last = Silberschatz
| first = Abraham
| title = Operating System Concepts, Fourth Edition
| publisher = Addison-Wesley
| year = 1994
| page = 98
| isbn = 978-0-201-50480-4
}}</ref> The ] will soon ] to this process so it can ] each machine instruction.<ref name="sco-ch2-p32">{{cite book
| last = Tanenbaum
| first = Andrew S.
| title = Structured Computer Organization, Third Edition
| publisher = Prentice Hall
| year = 1990
| page =
| isbn = 978-0-13-854662-5
| url = https://archive.org/details/structuredcomput00tane/page/32
}}</ref>


If the source code is requested for execution, then the operating system loads the corresponding interpreter into memory and starts a process. The interpreter then loads the source code into memory to translate and execute each ]. Running the source code is slower than running an ].<ref name="cpl_3rd-ch1-7">{{cite book
| last = Wilson
| first = Leslie B.
| title = Comparative Programming Languages, Third Edition
| publisher = Addison-Wesley
| year = 2001
| page = 7
| isbn = 0-201-71012-9
}}</ref>{{efn|An executable has each ] ready for the ].}} Moreover, the interpreter must be installed on the computer.


==Example computer program==

The ] is used to illustrate a language's basic ]. The syntax of the language ] (1964) was intentionally limited to make the language easy to learn.<ref name="cpl_3rd-ch2-30_quote1">{{cite book
| last = Wilson
| first = Leslie B.
| title = Comparative Programming Languages, Third Edition
| publisher = Addison-Wesley
| year = 2001
| page = 30
| isbn = 0-201-71012-9
| quote = Their intention was to produce a language that was very simple for students to learn
}}</ref> For example, ] are not ] before being used.<ref name="cpl_3rd-ch2-31">{{cite book
| last = Wilson
| first = Leslie B.
| title = Comparative Programming Languages, Third Edition
| publisher = Addison-Wesley
| year = 2001
| page = 31
| isbn = 0-201-71012-9
}}</ref> Also, variables are automatically initialized to zero.<ref name="cpl_3rd-ch2-31"/> Here is an example computer program, in Basic, to ] a list of numbers:<ref name="cpl_3rd-ch2-30">{{cite book
| last = Wilson
| first = Leslie B.
| title = Comparative Programming Languages, Third Edition
| publisher = Addison-Wesley
| year = 2001
| page = 30
| isbn = 0-201-71012-9
}}</ref>
<syntaxhighlight lang="basic">
10 INPUT "How many numbers to average?", A
20 FOR I = 1 TO A
30 INPUT "Enter number:", B
40 LET C = C + B
50 NEXT I
60 LET D = C/A
70 PRINT "The average is", D
80 END
</syntaxhighlight>


Once the mechanics of basic computer programming are learned, more sophisticated and powerful languages are available to build large computer systems.<ref name="cpl_3rd-ch2-30_quote2">{{cite book
| last = Wilson
| first = Leslie B.
| title = Comparative Programming Languages, Third Edition
| publisher = Addison-Wesley
| year = 2001
| page = 30
| isbn = 0-201-71012-9
| quote = The idea was that students could be merely casual users or go on from Basic to more sophisticated and powerful languages
}}</ref>


==History==
{{See also|Computer programming#History|Programmer#History|History of computing|History of programming languages|History of software}}


Improvements in ] are the result of improvements in ]. At each stage in hardware's history, the task of ] changed dramatically.


===Analytical Engine===
]
In 1837, ] inspired ] to attempt to build the ].<ref name="eniac-ch1-p16">{{cite book
| last = McCartney
| first = Scott
| title = ENIAC – The Triumphs and Tragedies of the World's First Computer
| publisher = Walker and Company
| year = 1999
| page =
| isbn = 978-0-8027-1348-3
| url = https://archive.org/details/eniac00scot/page/16
}}</ref>
The names of the components of the calculating device were borrowed from the textile industry. In the textile industry, yarn was brought from the store to be milled. The device had a ''store'' which consisted of memory to hold 1,000 numbers of 50 decimal digits each.<ref name="sco-ch1-p14">{{cite book
| last = Tanenbaum
| first = Andrew S.
| title = Structured Computer Organization, Third Edition
| publisher = Prentice Hall
| year = 1990
| page =
| isbn = 978-0-13-854662-5
| url = https://archive.org/details/structuredcomput00tane/page/14
}}</ref> Numbers from the ''store'' were transferred to the ''mill'' for processing. The engine was programmed using two sets of perforated cards. One set directed the operation and the other set inputted the variables.<ref name="eniac-ch1-p16" /><ref>{{cite journal
| first = Allan G.
| last = Bromley
| author-link = Allan G. Bromley
| year = 1998
| url = http://profs.scienze.univr.it/~manca/storia-informatica/babbage.pdf
| title = Charles Babbage's Analytical Engine, 1838
| journal = ]
| volume = 20
| number = 4
| pages = 29–45
| doi = 10.1109/85.728228
| s2cid = 2285332
| access-date = 2015-10-30
| archive-date = 2016-03-04
| archive-url = https://web.archive.org/web/20160304081812/http://profs.scienze.univr.it/~manca/storia-informatica/babbage.pdf
| url-status = live
}}</ref> However, the thousands of cogged wheels and gears never fully worked together.<ref name="sco-ch1-p15">{{cite book
| last = Tanenbaum
| first = Andrew S.
| title = Structured Computer Organization, Third Edition
| publisher = Prentice Hall
| year = 1990
| page =
| isbn = 978-0-13-854662-5
| url = https://archive.org/details/structuredcomput00tane/page/15
}}</ref>


] worked for Charles Babbage to create a description of the Analytical Engine (1843).<ref>{{citation
|author1 = J. Fuegi
|author2 =J. Francis
|title = Lovelace & Babbage and the creation of the 1843 'notes'
|journal = Annals of the History of Computing
|volume = 25
|issue = 4
|date=October–December 2003
|doi = 10.1109/MAHC.2003.1253887
|pages = 16, 19, 25}}</ref> The description contained Note G which completely detailed a method for calculating ]s using the Analytical Engine. This note is recognized by some historians as the world's first ''computer program''.<ref name="sco-ch1-p15"/>


===Universal Turing machine===
]
In 1936, ] introduced the ], a theoretical device that can model every computation.<ref name="discrete-ch10-p654">{{cite book
| last = Rosen
| first = Kenneth H.
| title = Discrete Mathematics and Its Applications
| publisher = McGraw-Hill, Inc.
| year = 1991
| page =
| isbn = 978-0-07-053744-6
| url = https://archive.org/details/discretemathemat00rose/page/654
| quote = Turing machines can model all the computations that can be performed on a computing machine.
}}</ref>
It is a ] that has an infinitely long read/write tape. The machine can move the tape back and forth, changing its contents as it performs an ]. The machine starts in the initial state, goes through a sequence of steps, and halts when it encounters the halt state.<ref name="formal_languages-ch9-p234">{{cite book
| last = Linz
| first = Peter
| title = An Introduction to Formal Languages and Automata
| publisher = D. C. Heath and Company
| year = 1990
| page = 234
| isbn = 978-0-669-17342-0
}}</ref> All present-day computers are ].<ref name="formal_languages-ch9-p243">{{cite book
| last = Linz
| first = Peter
| title = An Introduction to Formal Languages and Automata
| publisher = D. C. Heath and Company
| year = 1990
| page = 243
| isbn = 978-0-669-17342-0
| quote = All the common mathematical functions, no matter how complicated, are Turing-computable.
}}</ref>


===ENIAC===
]
The ] (ENIAC) was built between July 1943 and Fall 1945. It was a ], general-purpose computer that used 17,468 ]s to create the ]. At its core, it was a series of ]s wired together.<ref name="eniac-ch5-p102">{{cite book
| last = McCartney
| first = Scott
| title = ENIAC – The Triumphs and Tragedies of the World's First Computer
| publisher = Walker and Company
| year = 1999
| page =
| isbn = 978-0-8027-1348-3
| url = https://archive.org/details/eniac00scot/page/102
}}</ref> Its 40 units weighed 30 tons, occupied {{convert|1,800|sqft|m2|0}}, and consumed $650 per hour (]) in electricity when idle.<ref name="eniac-ch5-p102" /> It had 20 ] ]. Programming the ENIAC took up to two months.<ref name="eniac-ch5-p102" /> Three function tables were on wheels and needed to be rolled to fixed function panels. Function tables were connected to function panels by plugging heavy black cables into ]s. Each function table had 728 rotating knobs. Programming the ENIAC also involved setting some of the 3,000 switches. Debugging a program took a week.<ref name="eniac-ch5-p94">{{cite book
| last = McCartney
| first = Scott
| title = ENIAC – The Triumphs and Tragedies of the World's First Computer
| publisher = Walker and Company
| year = 1999
| page =
| isbn = 978-0-8027-1348-3
| url = https://archive.org/details/eniac00scot/page/94
}}</ref> It ran from 1947 until 1955 at ], calculating hydrogen bomb parameters, predicting weather patterns, and producing firing tables to aim artillery guns.<ref name="eniac-ch5-p107">{{cite book
| last = McCartney
| first = Scott
| title = ENIAC – The Triumphs and Tragedies of the World's First Computer
| publisher = Walker and Company
| year = 1999
| page =
| isbn = 978-0-8027-1348-3
| url = https://archive.org/details/eniac00scot/page/107
}}</ref>


===Stored-program computers===
Instead of plugging in cords and turning switches, a ] loads its instructions into ] just like it loads its data into memory.<ref name="eniac-ch6-p120">{{cite book
| last = McCartney
| first = Scott
| title = ENIAC – The Triumphs and Tragedies of the World's First Computer
| publisher = Walker and Company
| year = 1999
| page =
| isbn = 978-0-8027-1348-3
| url = https://archive.org/details/eniac00scot/page/120
}}</ref> As a result, the computer could be programmed quickly and perform calculations at very fast speeds.<ref name="eniac-ch6-p118">{{cite book
| last = McCartney
| first = Scott
| title = ENIAC – The Triumphs and Tragedies of the World's First Computer
| publisher = Walker and Company
| year = 1999
| page =
| isbn = 978-0-8027-1348-3
| url = https://archive.org/details/eniac00scot/page/118
}}</ref> ] and ] built the ENIAC. The two engineers introduced the ''stored-program concept'' in a three-page memo dated February 1944.<ref name="eniac-ch6-p119">{{cite book
| last = McCartney
| first = Scott
| title = ENIAC – The Triumphs and Tragedies of the World's First Computer
| publisher = Walker and Company
| year = 1999
| page =
| isbn = 978-0-8027-1348-3
| url = https://archive.org/details/eniac00scot/page/119
}}</ref> Later, in September 1944, ] began working on the ENIAC project. On June 30, 1945, von Neumann published the '']'', which equated the structures of the computer with the structures of the human brain.<ref name="eniac-ch6-p118"/> The design became known as the ]. The architecture was simultaneously deployed in the constructions of the ] and ] computers in 1949.<ref name="eniac-ch6-p123">{{cite book
| last = McCartney
| first = Scott
| title = ENIAC – The Triumphs and Tragedies of the World's First Computer
| publisher = Walker and Company
| year = 1999
| page =
| isbn = 978-0-8027-1348-3
| url = https://archive.org/details/eniac00scot/page/123
}}</ref>

The ] (1964) was a family of computers, each having the same ]. The ] was the smallest and least expensive. Customers could upgrade and retain the same ].<ref name="sco-ch1-p21">{{cite book
| last = Tanenbaum
| first = Andrew S.
| title = Structured Computer Organization, Third Edition
| publisher = Prentice Hall
| year = 1990
| page =
| isbn = 978-0-13-854662-5
| url = https://archive.org/details/structuredcomput00tane
| url-access = registration
}}</ref> The ] was the most premium. Each System/360 model featured ]<ref name="sco-ch1-p21"/>—having multiple ] in ] at once. When one process was waiting for ], another could compute.

IBM planned for each model to be programmed using ].<ref name="cpl_3rd-ch2-27">{{cite book
| last = Wilson
| first = Leslie B.
| title = Comparative Programming Languages, Third Edition
| publisher = Addison-Wesley
| year = 2001
| page = 27
| isbn = 0-201-71012-9
}}</ref> A committee was formed that included ], ] and ] programmers. The purpose was to develop a language that was comprehensive, easy to use, extendible, and would replace Cobol and Fortran.<ref name="cpl_3rd-ch2-27"/> The result was a large and complex language that took a long time to ].<ref name="cpl_3rd-ch2-29">{{cite book
| last = Wilson
| first = Leslie B.
| title = Comparative Programming Languages, Third Edition
| publisher = Addison-Wesley
| year = 2001
| page = 29
| isbn = 0-201-71012-9
}}</ref>


] 3, manufactured in the mid-1970s]]
Computers manufactured until the 1970s had front-panel switches for manual programming.<ref name="osc-ch1-p6">{{cite book
| last = Silberschatz
| first = Abraham
| title = Operating System Concepts, Fourth Edition
| publisher = Addison-Wesley
| year = 1994
| page = 6
| isbn = 978-0-201-50480-4
}}</ref> The computer program was written on paper for reference. An instruction was represented by a configuration of on/off settings. After setting the configuration, an execute button was pressed. This process was then repeated. Computer programs also were automatically inputted via ], ] or ]. After the medium was loaded, the starting address was set via switches, and the execute button was pressed.<ref name="osc-ch1-p6"/>

===Very Large Scale Integration===
] ]]
A major milestone in software development was the invention of the ] (VLSI) circuit (1964).<ref name="digibarn_bp">{{cite web
| url=https://www.digibarn.com/stories/bill-pentz-story/index.html#story
| title=Bill Pentz — A bit of Background: the Post-War March to VLSI
| publisher=Digibarn Computer Museum
| date=August 2008
| access-date=January 31, 2022
| archive-date=March 21, 2022
| archive-url=https://web.archive.org/web/20220321183527/https://www.digibarn.com/stories/bill-pentz-story/index.html#story
| url-status=live
}}</ref> Following ], tube-based technology was replaced with ]s (1947) and ]s (late 1950s) mounted on a ].<ref name="digibarn_bp"/> ], the ] industry replaced the circuit board with an ].<ref name="digibarn_bp"/>

], co-founder of ] (1957) and ] (1968), achieved a technological improvement to refine the ] of ]s (1963).<ref name="digital_age">{{cite book
| url=https://books.google.com/books?id=UUbB3d2UnaAC&pg=PA46
| title=To the Digital Age: Research Labs, Start-up Companies, and the Rise of MOS
| publisher=Johns Hopkins University Press
| year=2002
| isbn=9780801886393
| access-date=February 3, 2022
| archive-date=February 2, 2023
| archive-url=https://web.archive.org/web/20230202181649/https://books.google.com/books?id=UUbB3d2UnaAC&pg=PA46
| url-status=live
}}</ref> The goal is to alter the ] of a ]. First, naturally occurring ] are converted into ] rods using the ].<ref name="osti">{{cite web
| url=https://www.osti.gov/servlets/purl/1497235
| title=Manufacturing of Silicon Materials for Microelectronics and Solar PV
| publisher=Sandia National Laboratories
| year=2017
| access-date=February 8, 2022
| last1=Chalamala
| first1=Babu
| archive-date=March 23, 2023
| archive-url=https://web.archive.org/web/20230323163602/https://www.osti.gov/biblio/1497235
| url-status=live
}}</ref> The ] then converts the rods into a ], ].<ref name="britannica_wafer">{{cite web
| url=https://www.britannica.com/technology/integrated-circuit/Fabricating-ICs#ref837156
| title=Fabricating ICs Making a base wafer
| publisher=Britannica
| access-date=February 8, 2022
| archive-date=February 8, 2022
| archive-url=https://web.archive.org/web/20220208103132/https://www.britannica.com/technology/integrated-circuit/Fabricating-ICs#ref837156
| url-status=live
}}</ref> The ] is then thinly sliced to form a ] ]. The ] of ] then ''integrates'' unipolar transistors, ]s, ]s, and ]s onto the wafer to build a matrix of ] (MOS) transistors.<ref name="anysilicon">{{cite web
| url=https://anysilicon.com/introduction-to-nmos-and-pmos-transistors/
| title=Introduction to NMOS and PMOS Transistors
| date=4 November 2021
| publisher=Anysilicon
| access-date=February 5, 2022
| archive-date=6 February 2022
| archive-url=https://web.archive.org/web/20220206051146/https://anysilicon.com/introduction-to-nmos-and-pmos-transistors/
| url-status=live
}}</ref><ref name="britannica_micropressor">{{cite web
| url=https://www.britannica.com/technology/microprocessor#ref36149
| title=microprocessor definition
| publisher=Britannica
| access-date=April 1, 2022
| archive-date=April 1, 2022
| archive-url=https://web.archive.org/web/20220401085141/https://www.britannica.com/technology/microprocessor#ref36149
| url-status=live
}}</ref> The MOS transistor is the primary component in ''integrated circuit chips''.<ref name="digital_age"/>

Originally, integrated circuit chips had their function set during manufacturing. During the 1960s, controlling the electrical flow migrated to programming a ] of ] (ROM). The matrix resembled a two-dimensional array of fuses.<ref name="digibarn_bp"/> The process to embed instructions onto the matrix was to burn out the unneeded connections.<ref name="digibarn_bp"/> There were so many connections, ] programmers wrote a ''computer program'' on another chip to oversee the burning.<ref name="digibarn_bp"/> The technology became known as ]. In 1971, Intel ] and named it the ] ].<ref name="intel_4004">{{cite web
| url=https://spectrum.ieee.org/chip-hall-of-fame-intel-4004-microprocessor
| title=Chip Hall of Fame: Intel 4004 Microprocessor
| publisher=Institute of Electrical and Electronics Engineers
| date=July 2, 2018
| access-date=January 31, 2022
| archive-date=February 7, 2022
| archive-url=https://web.archive.org/web/20220207101915/https://spectrum.ieee.org/chip-hall-of-fame-intel-4004-microprocessor
| url-status=live
}}</ref>

]
The terms ''microprocessor'' and ] (CPU) are now used interchangeably. However, CPUs predate microprocessors. For example, the ] (1964) had a CPU made from ].<ref name="ibm_360">{{cite web
| url=https://www.computer-museum.ru/books/archiv/ibm36040.pdf |archive-url=https://ghostarchive.org/archive/20221010/https://www.computer-museum.ru/books/archiv/ibm36040.pdf |archive-date=2022-10-10 |url-status=live
| title=360 Revolution
| publisher=Father, Son & Co.
| year=1990
| access-date=February 5, 2022
}}</ref>

===Sac State 8008===
]
The Intel 4004 (1971) was a 4-] microprocessor designed to run the ] calculator. Five months after its release, Intel released the ], an 8-bit microprocessor. Bill Pentz led a team at ] to build the first ] using the Intel 8008: the ''Sac State 8008'' (1972).<ref name="cnet">{{cite web
| url=https://www.cnet.com/news/inside-the-worlds-long-lost-first-microcomputer/
| title=Inside the world's long-lost first microcomputer
| publisher=c/net
| date=January 8, 2010
| access-date=January 31, 2022
| archive-date=February 1, 2022
| archive-url=https://web.archive.org/web/20220201023538/https://www.cnet.com/news/inside-the-worlds-long-lost-first-microcomputer/
| url-status=live
}}</ref> Its purpose was to store patient medical records. The computer supported a ] to run a ], 3-], ].<ref name="digibarn_bp"/> It had a color display and keyboard that was packaged in a single console. The disk operating system was programmed using ]. The medical records application was programmed using a ] interpreter.<ref name="digibarn_bp"/> However, the computer was an evolutionary dead-end because it was extremely expensive. Also, it was built at a public university lab for a specific purpose.<ref name="cnet"/> Nonetheless, the project contributed to the development of the ] (1974) ].<ref name="digibarn_bp"/>

===x86 series===
] (1981) used an Intel 8088 microprocessor.]]
In 1978, the modern ] environment began when Intel upgraded the ] to the ]. Intel simplified the Intel 8086 to manufacture the cheaper ].<ref name="infoworld_8-23-82">{{cite web
| url=https://books.google.com/books?id=VDAEAAAAMBAJ&pg=PA22
| title=Bill Gates, Microsoft and the IBM Personal Computer
| publisher=InfoWorld
| date=August 23, 1982
| access-date=1 February 2022
| archive-date=18 February 2023
| archive-url=https://web.archive.org/web/20230218183644/https://books.google.com/books?id=VDAEAAAAMBAJ&pg=PA22
| url-status=live
}}</ref> ] embraced the Intel 8088 when they entered the ] market (1981). As ] ] for personal computers increased, so did Intel's microprocessor development. The succession of development is known as the ]. The ] is a family of ] ]s. Machine instructions created in earlier microprocessors were retained throughout microprocessor upgrades. This enabled consumers to purchase new computers without having to purchase new ]. The major categories of instructions are:{{efn|For more information, visit ].}}
* Memory instructions to set and access numbers and ] in ].
* Integer ] (ALU) instructions to perform the primary arithmetic operations on ].
* Floating point ALU instructions to perform the primary arithmetic operations on ]s.
* ] instructions to push and pop ] needed to allocate memory and interface with ].
* ] (SIMD) instructions{{efn|introduced in 1999}} to increase speed when multiple processors are available to perform the same ] on an ].

===Changing programming environment===
] ] (1978) was a widely used ].]]
VLSI circuits enabled the ] to advance from a ] (until the 1990s) to a ] (GUI) computer. Computer terminals limited programmers to a single ] running in a ]. During the 1970s, full-screen source code editing became possible through a ]. Regardless of the technology available, the goal is to program in a ].

==Programming paradigms and languages==

] features exist to provide building blocks to be combined to express programming ideals.<ref name="stroustrup-ch1-10">{{cite book
| last = Stroustrup
| first = Bjarne
| title = The C++ Programming Language, Fourth Edition
| publisher = Addison-Wesley
| year = 2013
| page = 10
| isbn = 978-0-321-56384-2
}}</ref> Ideally, a programming language should:<ref name="stroustrup-ch1-10"/>
* express ideas directly in the code.
* express independent ideas independently.
* express relationships among ideas directly in the code.
* combine ideas freely.
* combine ideas only where combinations make sense.
* express simple ideas simply.

The ] of a programming language to provide these building blocks may be categorized into ]s.<ref name="stroustrup-ch1-11">{{cite book
| last = Stroustrup
| first = Bjarne
| title = The C++ Programming Language, Fourth Edition
| publisher = Addison-Wesley
| year = 2013
| page = 11
| isbn = 978-0-321-56384-2
}}</ref> For example, different paradigms may differentiate:<ref name="stroustrup-ch1-11"/>
* ], ], and ].
* different levels of ].
* different levels of ].
* different levels of input ], as in ] and ].
Each of these programming styles has contributed to the synthesis of different ''programming languages''.<ref name="stroustrup-ch1-11"/>

A ''programming language'' is a set of ], ], ], and rules by which programmers can communicate instructions to the computer.<ref name="pis-ch4-p159">{{cite book
| last = Stair
| first = Ralph M.
| title = Principles of Information Systems, Sixth Edition
| publisher = Thomson
| year = 2003
| page = 159
| isbn = 0-619-06489-7
}}</ref> They follow a set of rules called a ].<ref name="pis-ch4-p159"/>

* ''Keywords'' are reserved words to form ] and ].
* ''Symbols'' are characters to form ], ], ], and ]s.
* ''Identifiers'' are words created by programmers to form ], ], ], and ].
* ''Syntax Rules'' are defined in the ].
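
As an illustration (a hedged sketch added here, not drawn from the cited sources), the following C fragment labels each of these elements:
<syntaxhighlight lang="c">
#include <stdio.h>

int main(void)
{
    // 'int', 'if', and 'return' are keywords.
    // 'count' is an identifier created by the programmer; 'main' names the function.
    // '=', ';', '(', and '{' are symbols.
    // The grammar dictates the order in which they may appear.
    int count = 3;

    if (count > 0)
        printf("count is %d\n", count);

    return 0;
}
</syntaxhighlight>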

''Programming languages'' get their basis from ]s.<ref name="fla-ch1-p2">{{cite book
| last = Linz
| first = Peter
| title = An Introduction to Formal Languages and Automata
| publisher = D. C. Heath and Company
| year = 1990
| page = 2
| isbn = 978-0-669-17342-0
}}</ref> The purpose of defining a solution in terms of its ''formal language'' is to generate an ] to solve the underlining problem.<ref name="fla-ch1-p2"/> An ''algorithm'' is a sequence of simple instructions that solve a problem.<ref name="dsa-ch2-p29">{{cite book
| last = Weiss
| first = Mark Allen
| title = Data Structures and Algorithm Analysis in C++
| publisher = Benjamin/Cummings Publishing Company, Inc.
| year = 1994
| page = 29
| isbn = 0-8053-5443-3
}}</ref>

===Generations of programming language===
{{main|Programming language generations}}
] monitor on a ] ]]]
The evolution of programming languages began when the ] (1949) used the first ] in its ].<ref name="sco-ch1-p17">{{cite book
| last = Tanenbaum
| first = Andrew S.
| title = Structured Computer Organization, Third Edition
| publisher = Prentice Hall
| year = 1990
| page =
| isbn = 978-0-13-854662-5
| url = https://archive.org/details/structuredcomput00tane/page/17
}}</ref> Programming the EDSAC was in the first ].

* The ] is ].<ref name="pis-ch4-p160">{{cite book
| last = Stair
| first = Ralph M.
| title = Principles of Information Systems, Sixth Edition
| publisher = Thomson
| year = 2003
| page = 160
| isbn = 0-619-06489-7
}}</ref> ''Machine language'' requires the programmer to enter instructions using ''instruction numbers'' called ]. For example, the ADD operation on the ] has instruction number 24576.<ref name="sco-ch7-p399">{{cite book
| last = Tanenbaum
| first = Andrew S.
| title = Structured Computer Organization, Third Edition
| publisher = Prentice Hall
| year = 1990
| page =
| isbn = 978-0-13-854662-5
| url = https://archive.org/details/structuredcomput00tane/page/399
}}</ref>

* The ] is ].<ref name="pis-ch4-p160"/> ''Assembly language'' allows the programmer to use ] ] instead of remembering instruction numbers. An ] translates each assembly language mnemonic into its machine language number. For example, on the PDP-11, the operation 24576 can be referenced as ADD in the source code.<ref name="sco-ch7-p399"/> The four basic arithmetic operations have assembly instructions like ADD, SUB, MUL, and DIV.<ref name="sco-ch7-p399"/> Computers also have instructions like DW (Define ]) to reserve ] cells. Then the MOV instruction can copy ]s between ] and memory.

:* The basic structure of an assembly language statement is a label, ], ], and comment.<ref name="sco-ch7-p400">{{cite book
| last = Tanenbaum
| first = Andrew S.
| title = Structured Computer Organization, Third Edition
| publisher = Prentice Hall
| year = 1990
| page =
| isbn = 978-0-13-854662-5
| url = https://archive.org/details/structuredcomput00tane/page/400
}}</ref>
::* ''Labels'' allow the programmer to work with ]. The assembler will later translate labels into physical ]es.
::* ''Operations'' allow the programmer to work with mnemonics. The assembler will later translate mnemonics into instruction numbers.
::* ''Operands'' tell the assembler which data the operation will process.
::* ''Comments'' allow the programmer to articulate a narrative because the instructions alone are vague.
:: The key characteristic of an assembly language program is it forms a one-to-one mapping to its corresponding machine language target.<ref name="sco-ch7-p398">{{cite book
| last = Tanenbaum
| first = Andrew S.
| title = Structured Computer Organization, Third Edition
| publisher = Prentice Hall
| year = 1990
| page =
| isbn = 978-0-13-854662-5
| url = https://archive.org/details/structuredcomput00tane/page/398
}}</ref>

* The ] uses ]s and ] to execute computer programs. The distinguishing feature of a ''third generation'' language is its independence from particular hardware.<ref name="cpl_3rd-ch2-26">{{cite book
| last = Wilson
| first = Leslie B.
| title = Comparative Programming Languages, Third Edition
| publisher = Addison-Wesley
| year = 2001
| page = 26
| isbn = 0-201-71012-9
}}</ref> Early languages include ] (1958), ] (1959), ] (1960), and ] (1964).<ref name="pis-ch4-p160"/> In 1973, the ] emerged as a ] that produced efficient machine language instructions.<ref name="cpl_3rd-ch2-37">{{cite book
| last = Wilson
| first = Leslie B.
| title = Comparative Programming Languages, Third Edition
| publisher = Addison-Wesley
| year = 2001
| page = 37
| isbn = 0-201-71012-9
}}</ref> Whereas ''third-generation'' languages historically generated many machine instructions for each statement,<ref name="pis-ch4-p160_quote1">{{cite book
| last = Stair
| first = Ralph M.
| title = Principles of Information Systems, Sixth Edition
| publisher = Thomson
| year = 2003
| page = 160
| isbn = 0-619-06489-7
| quote = With third-generation and higher-level programming languages, each statement in the language translates into several instructions in machine language.
}}</ref> C has statements that may generate a single machine instruction.{{efn|] like <code>x++</code> will usually compile to a single instruction.}} Moreover, an ] might overrule the programmer and produce fewer machine instructions than statements. Today, an entire ] of languages fill the ], ''third generation'' spectrum.

* The ] emphasizes what output results are desired, rather than how programming statements should be constructed.<ref name="pis-ch4-p160"/> ] attempt to limit ] and allow programmers to write code with relatively few errors.<ref name="pis-ch4-p160"/> One popular ''fourth generation'' language is called ] (SQL).<ref name="pis-ch4-p160"/> ] developers no longer need to process each database record one at a time. Also, a simple statement can generate output records without having to understand how they are retrieved.

===Imperative languages===
{{main|Imperative programming}}

]
''Imperative languages'' specify a sequential ] using ], ], and ]:<ref name="cpl-ch4-75">{{cite book
| last = Wilson
| first = Leslie B.
| title = Comparative Programming Languages, Second Edition
| publisher = Addison-Wesley
| year = 1993
| page = 75
| isbn = 978-0-201-56885-1
}}</ref>
* A ''declaration'' introduces a ] name to the ''computer program'' and assigns it to a ]<ref name="stroustrup-ch2-40">{{cite book
| last = Stroustrup
| first = Bjarne
| title = The C++ Programming Language, Fourth Edition
| publisher = Addison-Wesley
| year = 2013
| page = 40
| isbn = 978-0-321-56384-2
}}</ref> – for example: <code>var x: integer;</code>
* An ''expression'' yields a value – for example: <code>2 + 2</code> yields 4
* A ''statement'' might ] an expression to a variable or use the value of a variable to alter the program's ] – for example: <code>x := 2 + 2; ] x = 4 then do_something();</code>
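
The examples above use Pascal-style pseudocode; a minimal C sketch of the same three elements (illustrative only; the helper function <code>do_something()</code> is hypothetical) is:
<syntaxhighlight lang="c">
#include <stdio.h>

// Hypothetical helper, used only to mirror the pseudocode above.
void do_something(void)
{
    printf("x is four\n");
}

int main(void)
{
    int x;          // declaration: introduces the variable name x and its datatype
    x = 2 + 2;      // the expression 2 + 2 yields 4; the statement assigns it to x
    if (x == 4)     // a statement that uses the value of x to alter the control flow
        do_something();
    return 0;
}
</syntaxhighlight>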


====Fortran====
] (1958) was unveiled as "The IBM Mathematical FORmula TRANslating system". It was designed for scientific calculations, without ] handling facilities. Along with ], ], and ], it supported:
* ].
* ].
* ].

It succeeded because:
* programming and debugging costs were below computer running costs.
* it was supported by IBM.
* applications at the time were scientific.<ref name="cpl_3rd-ch2-16">{{cite book
| last = Wilson
| first = Leslie B.
| title = Comparative Programming Languages, Third Edition
| publisher = Addison-Wesley
| year = 2001
| page = 16
| isbn = 0-201-71012-9
}}</ref>


However, non-IBM vendors also wrote Fortran compilers, but with a syntax that would likely fail IBM's compiler.<ref name="cpl_3rd-ch2-16"/> The ] (ANSI) developed the first Fortran standard in 1966. In 1978, Fortran 77 became the standard until 1991. Fortran 90 supports:
* ].
* ] to arrays.


====COBOL====
] (1959) stands for "COmmon Business Oriented Language". Fortran manipulated symbols. It was soon realized that symbols did not need to be numbers, so ] were introduced.<ref name="cpl_3rd-ch2-24">{{cite book
| last = Wilson
| first = Leslie B.
| title = Comparative Programming Languages, Third Edition
| publisher = Addison-Wesley
| year = 2001
| page = 24
| isbn = 0-201-71012-9
}}</ref> The ] influenced COBOL's development, with ] being a major contributor. The statements were English-like and verbose. The goal was to design a language so managers could read the programs. However, the lack of structured statements hindered this goal.<ref name="cpl_3rd-ch2-25">{{cite book
| last = Wilson
| first = Leslie B.
| title = Comparative Programming Languages, Third Edition
| publisher = Addison-Wesley
| year = 2001
| page = 25
| isbn = 0-201-71012-9
}}</ref>


COBOL's development was tightly controlled, so dialects did not emerge to require ANSI standards. As a consequence, it was not changed for 15 years until 1974. The 1990s version did make consequential changes, like ].<ref name="cpl_3rd-ch2-25"/>
=== Simultaneous execution===

====Algol====
] (1960) stands for "ALGOrithmic Language". It had a profound influence on programming language design.<ref name="cpl_3rd-ch2-19">{{cite book
| last = Wilson
| first = Leslie B.
| title = Comparative Programming Languages, Third Edition
| publisher = Addison-Wesley
| year = 2001
| page = 19
| isbn = 0-201-71012-9
}}</ref> Emerging from a committee of European and American programming language experts, it used standard mathematical notation and had a readable, structured design. Algol was first to define its ] using the ].<ref name="cpl_3rd-ch2-19"/> This led to ] compilers. It added features like:
* ], where variables were local to their block.
* arrays with variable bounds.
* ].
* ].
* ].<ref name="cpl_3rd-ch2-19"/>

Algol's direct descendants include ], ], ], ] and ] on one branch. On another branch the descendants include ], ] and ].<ref name="cpl_3rd-ch2-19"/>

====Basic====
] (1964) stands for "Beginner's All-Purpose Symbolic Instruction Code". It was developed at ] for all of their students to learn.<ref name="cpl_3rd-ch2-30"/> If a student did not go on to a more powerful language, the student would still remember Basic.<ref name="cpl_3rd-ch2-30"/> A Basic interpreter was installed in the ] manufactured in the late 1970s. As the microcomputer industry grew, so did the language.<ref name="cpl_3rd-ch2-30"/>

Basic pioneered the ].<ref name="cpl_3rd-ch2-30"/> It offered ] commands within its environment:
* The 'new' command created an empty slate.
* Statements evaluated immediately.
* Statements could be programmed by preceding them with line numbers.{{efn|The line numbers were typically incremented by 10 to leave room if additional statements were added later.}}
* The 'list' command displayed the program.
* The 'run' command executed the program.

However, the Basic syntax was too simple for large programs.<ref name="cpl_3rd-ch2-30"/> Recent dialects added structure and object-oriented extensions. ] ] is still widely used and produces a ].<ref name="cpl_3rd-ch2-31"/>

====C====
] (1973) got its name because the language ] was replaced with ], and ] called the next version "C". Its purpose was to write the ] ].<ref name="cpl_3rd-ch2-37"/> C is a relatively small language, making it easy to write compilers. Its growth mirrored the hardware growth in the 1980s.<ref name="cpl_3rd-ch2-37"/> Its growth also was because it has the facilities of ], but uses a ]. It added advanced features like:
* ].
* arithmetic on pointers.
* pointers to functions.
* bit operations.
* freely combining complex ].<ref name="cpl_3rd-ch2-37"/>
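
A short, self-contained C sketch (an illustration added here, not taken from the cited text) of pointer arithmetic, pointers to functions, and bit operations:
<syntaxhighlight lang="c">
#include <stdio.h>

int add(int a, int b) { return a + b; }

int main(void)
{
    int values[] = {10, 20, 30};
    int *p = values;
    p = p + 2;                        // arithmetic on pointers: p now addresses values[2]

    int (*operation)(int, int) = add; // a pointer to a function
    unsigned flags = 0xF0 | 0x0F;     // a bit operation: bitwise OR yields 0xFF

    printf("%d %d %u\n", *p, operation(2, 3), flags);
    return 0;
}
</syntaxhighlight>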

]
''C'' allows the programmer to control in which region of memory data is to be stored. ]s and ]s require the fewest ] to store. The ] is automatically used for the standard variable ]. ] memory is returned to a ] from the ] function.

* The ''global and static data'' region is located just above the ''program'' region. (The program region is technically called the ''text'' region. It is where machine instructions are stored.)
:* The global and static data region is technically two regions.<ref name="geeksforgeeks">{{cite web
| url = https://www.geeksforgeeks.org/memory-layout-of-c-program/
| title = Memory Layout of C Programs
| date = 12 September 2011
| access-date = 6 November 2021
| archive-date = 6 November 2021
| archive-url = https://web.archive.org/web/20211106175644/https://www.geeksforgeeks.org/memory-layout-of-c-program/
| url-status = live
}}</ref> One region is called the ''initialized ]'', where variables declared with default values are stored. The other region is called the '']'', where variables declared without default values are stored.
:* Variables stored in the ''global and static data'' region have their ] set at compile-time. They retain their values throughout the life of the process.

:* The global and static region stores the ''global variables'' that are declared on top of (outside) the <code>main()</code> function.<ref name="cpl-ch1-p31">{{cite book
|title=The C Programming Language Second Edition
|last1=Kernighan
|first1=Brian W.
|last2=Ritchie
|first2=Dennis M.
|publisher=Prentice Hall
|year=1988
|isbn=0-13-110362-8
|page=31}}</ref> Global variables are visible to <code>main()</code> and every other function in the source code.

: On the other hand, variable declarations inside of <code>main()</code>, other functions, or within <code>{</code> <code>}</code> ] are ''local variables''. Local variables also include ''] variables''. Parameter variables are enclosed within the parenthesis of a function definition.<ref name="cpl_3rd-ch6-128">{{cite book
| last = Wilson
| first = Leslie B.
| title = Comparative Programming Languages, Third Edition
| publisher = Addison-Wesley
| year = 2001
| page = 128
| isbn = 0-201-71012-9
}}</ref> Parameters provide an ] to the function.

:* ''Local variables'' declared using the <code>static</code> prefix are also stored in the ''global and static data'' region.<ref name="geeksforgeeks"/> Unlike global variables, static variables are only visible within the function or block. Static variables always retain their value. An example usage would be the function <code>int increment_counter(){static int counter = 0; counter++; return counter;}</code>{{efn|This function could be written more concisely as <code>int increment_counter(){ static int counter; return ++counter;}</code>. 1) Static variables are automatically initialized to zero. 2) <code>++counter</code> is a prefix ].}}

* The ] region is a contiguous block of memory located near the top memory address.<ref name="lpi-ch6-p121">{{cite book
|title=The Linux Programming Interface
|last=Kerrisk
|first=Michael
|publisher=No Starch Press
|year=2010
|isbn=978-1-59327-220-3
|page=121}}</ref> Variables placed in the stack are populated from top to bottom.{{efn|This is despite the metaphor of a ''stack,'' which normally grows from bottom to top.}}<ref name="lpi-ch6-p121"/> A ] is a special-purpose ] that keeps track of the last memory address populated.<ref name="lpi-ch6-p121"/> Variables are placed into the stack via the ''assembly language'' PUSH instruction. Therefore, the addresses of these variables are set during ]. The method for stack variables to lose their ] is via the POP instruction.

:* ''Local variables'' declared without the <code>static</code> prefix, including formal parameter variables,<ref name="lpi-ch6-p122">{{cite book
|title=The Linux Programming Interface
|last=Kerrisk
|first=Michael
|publisher=No Starch Press
|year=2010
|isbn=978-1-59327-220-3
|page=122}}</ref> are called ''automatic variables''<ref name="cpl-ch1-p31"/> and are stored in the stack.<ref name="geeksforgeeks"/> They are visible inside the function or block and lose their scope upon exiting the function or block.

* The ] region is located below the stack.<ref name="geeksforgeeks"/> It is populated from the bottom to the top. The ] manages the heap using a ''heap pointer'' and a list of allocated memory blocks.<ref name="cpl-ch1-p185">{{cite book
|title=The C Programming Language Second Edition
|last1=Kernighan
|first1=Brian W.
|last2=Ritchie
|first2=Dennis M.
|publisher=Prentice Hall
|year=1988
|isbn=0-13-110362-8
|page=185}}</ref> Like the stack, the addresses of heap variables are set during runtime. An ] error occurs when the heap pointer and the stack pointer meet.

:* ''C'' provides the <code>malloc()</code> library function to ] heap memory.{{efn|''C'' also provides the <code>calloc()</code> function to allocate heap memory. It provides two additional services: 1) It allows the programmer to create an ] of arbitrary size. 2) It sets each ] to zero.}}<ref name="cpl-ch8-p187">{{cite book
|title=The C Programming Language Second Edition
|last1=Kernighan
|first1=Brian W.
|last2=Ritchie
|first2=Dennis M.
|publisher=Prentice Hall
|year=1988
|isbn=0-13-110362-8
|page=187}}</ref> Populating the heap with data is an additional copy function.{{efn|For ] variables, ''C'' provides the <code>strdup()</code> function. It executes both the allocation function and the copy function.}} Variables stored in the heap are economically passed to functions using pointers. Without pointers, the entire block of data would have to be passed to the function via the stack.

====C++====
In the 1970s, ] needed language support to break large projects down into ].<ref name="cpl_3rd-ch2-38">{{cite book
| last = Wilson
| first = Leslie B.
| title = Comparative Programming Languages, Third Edition
| publisher = Addison-Wesley
| year = 2001
| page = 38
| isbn = 0-201-71012-9
}}</ref> One obvious feature was to decompose large projects ''physically'' into separate ]. A less obvious feature was to decompose large projects ''logically'' into ]s.<ref name="cpl_3rd-ch2-38"/> At the time, languages supported ] datatypes like ] numbers, ] numbers, and ] of ]. Abstract datatypes are ] of concrete datatypes, with a new name assigned. For example, a ] of integers could be called <code>integer_list</code>.

In object-oriented jargon, abstract datatypes are called ]. However, a ''class'' is only a definition; no memory is allocated. When memory is allocated to a class and ] to an ], it is called an ].<ref name="cpl_3rd-ch8-193">{{cite book
| last = Wilson
| first = Leslie B.
| title = Comparative Programming Languages, Third Edition
| publisher = Addison-Wesley
| year = 2001
| page = 193
| isbn = 0-201-71012-9
}}</ref>


] developed by combining the need for classes and the need for safe ].<ref name="cpl_3rd-ch2-39">{{cite book
| last = Wilson
| first = Leslie B.
| title = Comparative Programming Languages, Third Edition
| publisher = Addison-Wesley
| year = 2001
| page = 39
| isbn = 0-201-71012-9
}}</ref> A function, in an object-oriented language, is assigned to a class. An assigned function is then referred to as a ], ], or ]. ''Object-oriented programming'' is executing ''operations'' on ''objects''.<ref name="cpl_3rd-ch2-35">{{cite book
| last = Wilson
| first = Leslie B.
| title = Comparative Programming Languages, Third Edition
| publisher = Addison-Wesley
| year = 2001
| page = 35
| isbn = 0-201-71012-9
}}</ref>

''Object-oriented languages'' support a syntax to model ] relationships. In ], an ] of a subset inherits all the attributes contained in the superset. For example, a student is a person. Therefore, the set of students is a subset of the set of persons. As a result, students inherit all the attributes common to all persons. Additionally, students have unique attributes that other people do not have. ''Object-oriented languages'' model ''subset/superset'' relationships using ].<ref name="cpl_3rd-ch8-192">{{cite book
| last = Wilson
| first = Leslie B.
| title = Comparative Programming Languages, Third Edition
| publisher = Addison-Wesley
| year = 2001
| page = 192
| isbn = 0-201-71012-9
}}</ref> ''Object-oriented programming'' became the dominant language paradigm by the late 1990s.<ref name="cpl_3rd-ch2-38"/>

] (1985) was originally called "C with Classes".<ref name="stroustrup-notes-22">{{cite book
| last = Stroustrup
| first = Bjarne
| title = The C++ Programming Language, Fourth Edition
| publisher = Addison-Wesley
| year = 2013
| page = 22
| isbn = 978-0-321-56384-2
}}</ref> It was designed to expand ] capabilities by adding the object-oriented facilities of the language ].<ref name="stroustrup-notes-21">{{cite book
| last = Stroustrup
| first = Bjarne
| title = The C++ Programming Language, Fourth Edition
| publisher = Addison-Wesley
| year = 2013
| page = 21
| isbn = 978-0-321-56384-2
}}</ref>

An object-oriented module is composed of two files. The definitions file is called the ]. Here is a C++ ''header file'' for the ''GRADE class'' in a simple school application:

<syntaxhighlight lang="cpp">
// grade.h
// -------

// Used to allow multiple source files to include
// this header file without duplication errors.
// ----------------------------------------------
#ifndef GRADE_H
#define GRADE_H

class GRADE {
public:
// This is the constructor operation.
// ----------------------------------
GRADE ( const char letter );

// This is a class variable.
// -------------------------
char letter;

// This is a member operation.
// ---------------------------
int grade_numeric( const char letter );

// This is a class variable.
// -------------------------
int numeric;
};
#endif
</syntaxhighlight>

A ] operation is a function with the same name as the class name.<ref name="stroustrup-ch2-49">{{cite book
| last = Stroustrup
| first = Bjarne
| title = The C++ Programming Language, Fourth Edition
| publisher = Addison-Wesley
| year = 2013
| page = 49
| isbn = 978-0-321-56384-2
}}</ref> It is executed when the calling operation executes the <code>]</code> statement.

A module's other file is the '']''. Here is a C++ source file for the ''GRADE class'' in a simple school application:

<syntaxhighlight lang="cpp">
// grade.cpp
// ---------
#include "grade.h"

GRADE::GRADE( const char letter )
{
// Reference the object using the keyword 'this'.
// ----------------------------------------------
this->letter = letter;

// This is Temporal Cohesion
// -------------------------
this->numeric = grade_numeric( letter );
}

int GRADE::grade_numeric( const char letter )
{
if ( ( letter == 'A' || letter == 'a' ) )
return 4;
else
if ( ( letter == 'B' || letter == 'b' ) )
return 3;
else
if ( ( letter == 'C' || letter == 'c' ) )
return 2;
else
if ( ( letter == 'D' || letter == 'd' ) )
return 1;
else
if ( ( letter == 'F' || letter == 'f' ) )
return 0;
else
return -1;
}
</syntaxhighlight>

Here is a C++ ''header file'' for the ''PERSON class'' in a simple school application:

<syntaxhighlight lang="cpp">
// person.h
// --------
#ifndef PERSON_H
#define PERSON_H

class PERSON {
public:
PERSON ( const char *name );
const char *name;
};
#endif
</syntaxhighlight>

Here is a C++ ''source file'' for the ''PERSON class'' in a simple school application:

<syntaxhighlight lang="cpp">
// person.cpp
// ----------
#include "person.h"

PERSON::PERSON ( const char *name )
{
this->name = name;
}
</syntaxhighlight>

Here is a C++ ''header file'' for the ''STUDENT class'' in a simple school application:

<syntaxhighlight lang="cpp">
// student.h
// ---------
#ifndef STUDENT_H
#define STUDENT_H

#include "person.h"
#include "grade.h"

// A STUDENT is a subset of PERSON.
// --------------------------------
class STUDENT : public PERSON{
public:
STUDENT ( const char *name );
GRADE *grade;
};
#endif
</syntaxhighlight>

Here is a C++ ''source file'' for the ''STUDENT class'' in a simple school application:

<syntaxhighlight lang="cpp">
// student.cpp
// -----------
#include "student.h"
#include "person.h"

STUDENT::STUDENT ( const char *name ):
// Execute the constructor of the PERSON superclass.
// -------------------------------------------------
PERSON( name )
{
// Nothing else to do.
// -------------------
}
</syntaxhighlight>

Here is a driver program for demonstration:

<syntaxhighlight lang="cpp">
// student_dvr.cpp
// ---------------
#include <iostream>
#include "student.h"

int main( void )
{
STUDENT *student = new STUDENT( "The Student" );
student->grade = new GRADE( 'a' );

std::cout
// Notice student inherits PERSON's name
<< student->name
<< ": Numeric grade = "
<< student->grade->numeric
<< "\n";
return 0;
}
</syntaxhighlight>

Here is a ] to compile everything:

<syntaxhighlight lang="make">
# makefile
# --------
all: student_dvr

clean:
	rm student_dvr *.o

student_dvr: student_dvr.cpp grade.o student.o person.o
	c++ student_dvr.cpp grade.o student.o person.o -o student_dvr

grade.o: grade.cpp grade.h
	c++ -c grade.cpp

student.o: student.cpp student.h
	c++ -c student.cpp

person.o: person.cpp person.h
	c++ -c person.cpp
</syntaxhighlight>

===Declarative languages===
{{main|Declarative programming}}

''Imperative languages'' have one major criticism: assigning an expression to a ''non-local'' variable may produce an unintended ].<ref name="cpl_3rd-ch9-218">{{cite book
| last = Wilson
| first = Leslie B.
| title = Comparative Programming Languages, Third Edition
| publisher = Addison-Wesley
| year = 2001
| page = 218
| isbn = 0-201-71012-9
}}</ref> ] generally omit the assignment statement and the control flow. They describe ''what'' computation should be performed and not ''how'' to compute it. Two broad categories of declarative languages are ]s and ].

The principle behind a ''functional language'' is to use ] as a guide for a well defined ].<ref name="cpl_3rd-ch9-217">{{cite book
| last = Wilson
| first = Leslie B.
| title = Comparative Programming Languages, Third Edition
| publisher = Addison-Wesley
| year = 2001
| page = 217
| isbn = 0-201-71012-9
}}</ref> In mathematics, a function is a rule that maps elements from an ''expression'' to a range of ''values''. Consider the function:

<code>times_10(x) = 10 * x</code>

The ''expression'' <code>10 * x</code> is mapped by the function <code>times_10()</code> to a range of ''values''. One ''value'' happens to be 20. This occurs when x is 2. So, the application of the function is mathematically written as:

<code>times_10(2) = 20</code>

A ''functional language'' compiler will not store this value in a variable. Instead, it will ''push'' the value onto the computer's ] before setting the ] back to the calling function. The calling function will then ''pop'' the value from the stack.<ref name="dsa-ch3-p103">{{cite book
| last = Weiss
| first = Mark Allen
| title = Data Structures and Algorithm Analysis in C++
| publisher = Benjamin/Cummings Publishing Company, Inc.
| year = 1994
| page = 103
| isbn = 0-8053-5443-3
| quote = When there is a function call, all the important information needs to be saved, such as register values (corresponding to variable names) and the return address (which can be obtained from the program counter) ... When the function wants to return, it ... restores all the registers. It then makes the return jump. Clearly, all of this work can be done using a stack, and that is exactly what happens in virtually every programming language that implements recursion.
}}</ref>

''Imperative languages'' do support functions. Therefore, ''functional programming'' can be achieved in an imperative language, if the programmer uses discipline. However, a ''functional language'' will force this discipline onto the programmer through its syntax. Functional languages have a syntax tailored to emphasize the ''what''.<ref name="cpl_3rd-ch9-230">{{cite book
| last = Wilson
| first = Leslie B.
| title = Comparative Programming Languages, Third Edition
| publisher = Addison-Wesley
| year = 2001
| page = 230
| isbn = 0-201-71012-9
}}</ref>

A functional program is developed with a set of primitive functions followed by a single driver function.<ref name="cpl_3rd-ch9-218"/> Consider the ]:

<code>function max( a, b ){/* code omitted */}</code>

<code>function min( a, b ){/* code omitted */}</code>

<code>function range( a, b, c ) {</code>
:<code>return max( a, max( b, c ) ) - min( a, min( b, c ) );</code>
<code>}</code>

The primitives are <code>max()</code> and <code>min()</code>. The driver function is <code>range()</code>. Executing:

<code>put( range( 10, 4, 7) );</code> will output 6.
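
Because functional programming can also be practiced in an imperative language (as noted above), the primitives and driver can be sketched in C as pure functions. The function bodies below are illustrative assumptions, since the pseudocode omits them:

<syntaxhighlight lang="c">
#include <stdio.h>

/* Primitive functions: each maps its inputs to a value */
/* without assigning to any non-local variable.         */
int max( int a, int b ) { return a > b ? a : b; }
int min( int a, int b ) { return a < b ? a : b; }

/* Driver function composed from the primitives. */
int range( int a, int b, int c )
{
    return max( a, max( b, c ) ) - min( a, min( b, c ) );
}

int main( void )
{
    printf( "%d\n", range( 10, 4, 7 ) );   /* outputs 6 */
    return 0;
}
</syntaxhighlight>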

''Functional languages'' are used in ] research to explore new language features.<ref name="cpl_3rd-ch9-240">{{cite book
| last = Wilson
| first = Leslie B.
| title = Comparative Programming Languages, Third Edition
| publisher = Addison-Wesley
| year = 2001
| page = 240
| isbn = 0-201-71012-9
}}</ref> Moreover, their lack of side-effects has made them popular in ] and ].<ref name="cpl_3rd-ch9-241">{{cite book
| last = Wilson
| first = Leslie B.
| title = Comparative Programming Languages, Third Edition
| publisher = Addison-Wesley
| year = 2001
| page = 241
| isbn = 0-201-71012-9
}}</ref> However, application developers prefer the ] of ''imperative languages''.<ref name="cpl_3rd-ch9-241"/>

====Lisp====
] (1958) stands for "LISt Processor".<ref name="ArtOfLisp">{{cite book
| last1=Jones
| first1=Robin
| last2=Maynard
| first2=Clive
| last3=Stewart
| first3=Ian
| title=The Art of Lisp Programming
| date=December 6, 2012
| publisher=Springer Science & Business Media
| isbn=9781447117193
| page=2}}</ref> It is tailored to process ]. A full structure of the data is formed by building lists of lists. In memory, a ] is built. Internally, the tree structure lends itself nicely to ] functions.<ref name="cpl_3rd-ch9-220">{{cite book
| last = Wilson
| first = Leslie B.
| title = Comparative Programming Languages, Third Edition
| publisher = Addison-Wesley
| year = 2001
| page = 220
| isbn = 0-201-71012-9
}}</ref> The syntax to build a tree is to enclose the space-separated ] within parentheses. The following is a ] of three elements. The first two elements are themselves lists of two elements:

<code>((A B) (HELLO WORLD) 94)</code>

Lisp has functions to extract and reconstruct elements.<ref name="cpl_3rd-ch9-221">{{cite book
| last = Wilson
| first = Leslie B.
| title = Comparative Programming Languages, Third Edition
| publisher = Addison-Wesley
| year = 2001
| page = 221
| isbn = 0-201-71012-9
}}</ref> The function <code>head()</code> returns the first element of the list. The function <code>tail()</code> returns a list containing everything but the first element. The function <code>cons()</code> builds a list by prepending an element onto an existing list. Therefore, the following expression will return the list <code>x</code>:

<code>cons(head(x), tail(x))</code>

One drawback of Lisp is that when many functions are nested, the parentheses may look confusing.<ref name="cpl_3rd-ch9-230"/> Modern Lisp ] help ensure the parentheses match. As an aside, Lisp does support the ''imperative language'' operations of the assignment statement and goto loops.<ref name="cpl_3rd-ch9-229">{{cite book
| last = Wilson
| first = Leslie B.
| title = Comparative Programming Languages, Third Edition
| publisher = Addison-Wesley
| year = 2001
| page = 229
| isbn = 0-201-71012-9
}}</ref> Also, ''Lisp'' is not concerned with the ] of the elements at compile time.<ref name="cpl_3rd-ch9-227">{{cite book
| last = Wilson
| first = Leslie B.
| title = Comparative Programming Languages, Third Edition
| publisher = Addison-Wesley
| year = 2001
| page = 227
| isbn = 0-201-71012-9
}}</ref> Instead, it assigns (and may reassign) the datatypes at ]. Assigning the datatype at runtime is called ].<ref name="cpl_3rd-ch9-222">{{cite book
| last = Wilson
| first = Leslie B.
| title = Comparative Programming Languages, Third Edition
| publisher = Addison-Wesley
| year = 2001
| page = 222
| isbn = 0-201-71012-9
}}</ref> Whereas dynamic binding increases the language's flexibility, programming errors may linger until late in the ].<ref name="cpl_3rd-ch9-222"/>

Writing large, reliable, and readable Lisp programs requires forethought. If properly planned, the program may be much shorter than an equivalent ''imperative language'' program.<ref name="cpl_3rd-ch9-230"/> ''Lisp'' is widely used in ]. However, its usage has been accepted only because it has ''imperative language'' operations, making unintended side-effects possible.<ref name="cpl_3rd-ch9-241"/>

====ML====
] (1973)<ref name="Gordon1996">{{cite web
| last = Gordon
| first = Michael J. C.
| author-link = Michael J. C. Gordon
| year = 1996
| title = From LCF to HOL: a short history
| url = http://www.cl.cam.ac.uk/~mjcg/papers/HolHistory.html
| access-date = 2021-10-30
| archive-date = 2016-09-05
| archive-url = https://web.archive.org/web/20160905201847/http://www.cl.cam.ac.uk/~mjcg/papers/HolHistory.html
| url-status = live
}}</ref> stands for "Meta Language". ML checks to make sure only data of the same type are compared with one another.<ref name="cpl_3rd-ch9-233">{{cite book
| last = Wilson
| first = Leslie B.
| title = Comparative Programming Languages, Third Edition
| publisher = Addison-Wesley
| year = 2001
| page = 233
| isbn = 0-201-71012-9
}}</ref> For example, this function has one input parameter (an integer) and returns an integer:

{{sxhl|2=sml|1=fun times_10(n : int) : int = 10 * n;}}

''ML'' is not parenthesis-eccentric like ''Lisp''. The following is an application of <code>times_10()</code>:

times_10 2

It returns "20 : int". (Both the result and the datatype are returned.)

Like ''Lisp'', ''ML'' is tailored to process lists. Unlike ''Lisp'', each element is the same datatype.<ref name="cpl_3rd-ch9-235">{{cite book
| last = Wilson
| first = Leslie B.
| title = Comparative Programming Languages, Third Edition
| publisher = Addison-Wesley
| year = 2001
| page = 235
| isbn = 0-201-71012-9
}}</ref> Moreover, ''ML'' assigns the datatype of an element at ]. Assigning the datatype at compile-time is called ]. Static binding increases reliability because the compiler checks the context of variables before they are used.<ref name="cpl_3rd-ch3-55">{{cite book
| last = Wilson
| first = Leslie B.
| title = Comparative Programming Languages, Third Edition
| publisher = Addison-Wesley
| year = 2001
| page = 55
| isbn = 0-201-71012-9
}}</ref>

====Prolog====
] (1972) stands for "PROgramming in LOGic". It is a ] language, based on formal ]. The language was developed by ] and Philippe Roussel in Marseille, France. It is an implementation of ], pioneered by ] and others at the ].<ref>{{Cite journal
| publisher = Association for Computing Machinery
| doi = 10.1145/155360.155362
| first1 = A.
| last1 = Colmerauer
| first2 = P.
| last2 = Roussel
| title = The birth of Prolog
| journal = ACM SIGPLAN Notices
| volume = 28
| issue = 3
| page = 5
| year = 1992
| url=http://alain.colmerauer.free.fr/alcol/ArchivesPublications/PrologHistory/19november92.pdf}}</ref>

The building blocks of a Prolog program are ''facts'' and ''rules''. Here is a simple example:
<syntaxhighlight lang=prolog>
cat(tom). % tom is a cat
mouse(jerry). % jerry is a mouse

animal(X) :- cat(X). % each cat is an animal
animal(X) :- mouse(X). % each mouse is an animal

big(X) :- cat(X). % each cat is big
small(X) :- mouse(X). % each mouse is small

eat(X,Y) :- mouse(X), cheese(Y). % each mouse eats each cheese
eat(X,Y) :- big(X), small(Y). % each big animal eats each small animal
</syntaxhighlight>

After all the facts and rules are entered, a question can be asked:
: Will Tom eat Jerry?
<syntaxhighlight lang=prolog>
?- eat(tom,jerry).
true
</syntaxhighlight>

The following example shows how Prolog will convert a letter grade to its numeric value:
<syntaxhighlight lang="prolog">
numeric_grade('A', 4).
numeric_grade('B', 3).
numeric_grade('C', 2).
numeric_grade('D', 1).
numeric_grade('F', 0).
numeric_grade(X, -1) :- X \= 'A', X \= 'B', X \= 'C', X \= 'D', X \= 'F'.
grade('The Student', 'A').
</syntaxhighlight>
<syntaxhighlight lang="prolog">
?- grade('The Student', X), numeric_grade(X, Y).
X = 'A',
Y = 4
</syntaxhighlight>

Here is a comprehensive example:<ref name="Logical English">Kowalski, R., Dávila, J., Sartor, G. and Calejo, M., 2023. Logical English for law and education. In Prolog: The Next 50 Years (pp. 287-299). Cham: Springer Nature Switzerland.</ref>

1) All dragons billow fire, or equivalently, a thing billows fire if the thing is a dragon:
<syntaxhighlight lang="prolog">
billows_fire(X) :-
is_a_dragon(X).
</syntaxhighlight>
2) A creature billows fire if one of its parents billows fire:
<syntaxhighlight lang="prolog">
billows_fire(X) :-
is_a_creature(X),
is_a_parent_of(Y,X),
billows_fire(Y).
</syntaxhighlight>
3) A thing X is a parent of a thing Y if X is the mother of Y or X is the father of Y:
<syntaxhighlight lang="prolog">
is_a_parent_of(X, Y):- is_the_mother_of(X, Y).
is_a_parent_of(X, Y):- is_the_father_of(X, Y).
</syntaxhighlight>

4) A thing is a creature if the thing is a dragon:
<syntaxhighlight lang="prolog">
is_a_creature(X) :-
is_a_dragon(X).
</syntaxhighlight>

5) Norberta is a dragon, and Puff is a creature. Norberta is the mother of Puff.

<syntaxhighlight lang="prolog">
is_a_dragon(norberta).
is_a_creature(puff).
is_the_mother_of(norberta, puff).
</syntaxhighlight>

Rule (2) is a ] (inductive) definition. It can be understood declaratively, without the need to understand how it is executed.

Rule (3) shows how ] are represented by using relations. Here, the mother and father functions ensure that every individual has only one mother and only one father.

Prolog is an untyped language. Nonetheless, ] can be represented by using predicates. Rule (4) asserts that a creature is a superclass of a dragon.

Questions are answered using ]. Given the question:

<syntaxhighlight lang="prolog"> ?- billows_fire(X).
</syntaxhighlight>
Prolog generates two answers:
<syntaxhighlight lang="prolog">
X = norberta
X = puff
</syntaxhighlight>

Practical applications for Prolog are ] and ] in ].

===Object-oriented programming===
] is a programming method to execute ] (]) on ].<ref name="cpl_3rd-ch2-35_quote1">{{cite book
| last = Wilson
| first = Leslie B.
| title = Comparative Programming Languages, Third Edition
| publisher = Addison-Wesley
| year = 2001
| page = 35
| isbn = 0-201-71012-9
| quote = Simula was based on Algol 60 with one very important addition — the class concept. ... The basic idea was that the data (or data structure) and the operations performed on it belong together
}}</ref> The basic idea is to group the characteristics of a ] into an object ] and give the container a name. The ''operations'' on the phenomenon are also grouped into the container.<ref name="cpl_3rd-ch2-35_quote1"/> ''Object-oriented programming'' developed by combining the need for containers and the need for safe ].<ref name="cpl_3rd-ch2-39_quote1">{{cite book
| last = Wilson
| first = Leslie B.
| title = Comparative Programming Languages, Third Edition
| publisher = Addison-Wesley
| year = 2001
| page = 39
| isbn = 0-201-71012-9
| quote = Originally, a large number of experimental languages were designed, many of which combined object-oriented and functional programming.
}}</ref> This programming method need not be confined to an ''object-oriented language''.<ref name="se-ch9-284_quote1">{{cite book
| last = Schach
| first = Stephen R.
| title = Software Engineering
| publisher = Aksen Associates Incorporated Publishers
| year = 1990
| page = 284
| isbn = 0-256-08515-3
| quote = While it is true that OOD as such is not supported by the majority of popular languages, a large subset of OOD can be used.
}}</ref> In an object-oriented language, an object container is called a ]. In a non-object-oriented language, a ] (which is also known as a ]) may become an object container. To turn a data structure into an object container, operations need to be written specifically for the structure. The resulting structure is called an ].<ref name="dsa-ch3-p57">{{cite book
| last = Weiss
| first = Mark Allen
| title = Data Structures and Algorithm Analysis in C++
| publisher = Benjamin/Cummings Publishing Company, Inc.
| year = 1994
| page = 57
| isbn = 0-8053-5443-3
}}</ref> However, ] will be missing. Nonetheless, this shortcoming can be overcome.

Here is a ] ''header file'' for the ''GRADE abstract datatype'' in a simple school application:

<syntaxhighlight lang="c">
/* grade.h */
/* ------- */

/* Used to allow multiple source files to include */
/* this header file without duplication errors. */
/* ---------------------------------------------- */
#ifndef GRADE_H
#define GRADE_H

typedef struct
{
char letter;
} GRADE;

/* Constructor */
/* ----------- */
GRADE *grade_new( char letter );

int grade_numeric( char letter );
#endif
</syntaxhighlight>

The <code>grade_new()</code> function performs the same algorithm as the C++ ] operation.

Here is a C programming language '']'' for the ''GRADE abstract datatype'' in a simple school application:

<syntaxhighlight lang="c">
/* grade.c */
/* ------- */
#include <stdio.h>
#include <stdlib.h>

#include "grade.h"

GRADE *grade_new( char letter )
{
GRADE *grade;

/* Allocate heap memory */
/* -------------------- */
if ( ! ( grade = calloc( 1, sizeof ( GRADE ) ) ) )
{
fprintf(stderr,
"ERROR in %s/%s/%d: calloc() returned empty.\n",
__FILE__,
__FUNCTION__,
__LINE__ );
exit( 1 );
}

grade->letter = letter;
return grade;
}

int grade_numeric( char letter )
{
if ( ( letter == 'A' || letter == 'a' ) )
return 4;
else
if ( ( letter == 'B' || letter == 'b' ) )
return 3;
else
if ( ( letter == 'C' || letter == 'c' ) )
return 2;
else
if ( ( letter == 'D' || letter == 'd' ) )
return 1;
else
if ( ( letter == 'F' || letter == 'f' ) )
return 0;
else
return -1;
}
</syntaxhighlight>

In the constructor, the function <code>calloc()</code> is used instead of <code>malloc()</code> because each memory cell will be set to zero.

Here is a C programming language ''header file'' for the ''PERSON abstract datatype'' in a simple school application:

<syntaxhighlight lang="c">
/* person.h */
/* -------- */
#ifndef PERSON_H
#define PERSON_H

typedef struct
{
char *name;
} PERSON;

/* Constructor */
/* ----------- */
PERSON *person_new( char *name );
#endif
</syntaxhighlight>

Here is a C programming language ''source file'' for the ''PERSON abstract datatype'' in a simple school application:

<syntaxhighlight lang="c">
/* person.c */
/* -------- */
#include <stdio.h>
#include <stdlib.h>

#include "person.h"

PERSON *person_new( char *name )
{
PERSON *person;

if ( ! ( person = calloc( 1, sizeof ( PERSON ) ) ) )
{
fprintf(stderr,
"ERROR in %s/%s/%d: calloc() returned empty.\n",
__FILE__,
__FUNCTION__,
__LINE__ );
exit( 1 );
}

person->name = name;
return person;
}
</syntaxhighlight>

Here is a C programming language ''header file'' for the ''STUDENT abstract datatype'' in a simple school application:

<syntaxhighlight lang="c">
/* student.h */
/* --------- */
#ifndef STUDENT_H
#define STUDENT_H

#include "person.h"
#include "grade.h"

typedef struct
{
/* A STUDENT is a subset of PERSON. */
/* -------------------------------- */
PERSON *person;

GRADE *grade;
} STUDENT;

/* Constructor */
/* ----------- */
STUDENT *student_new( char *name );
#endif
</syntaxhighlight>

Here is a C programming language ''source file'' for the ''STUDENT abstract datatype'' in a simple school application:

<syntaxhighlight lang="c">
/* student.c */
/* --------- */
#include <stdio.h>
#include <stdlib.h>

#include "student.h"
#include "person.h"

STUDENT *student_new( char *name )
{
STUDENT *student;

if ( ! ( student = calloc( 1, sizeof ( STUDENT ) ) ) )
{
fprintf(stderr,
"ERROR in %s/%s/%d: calloc() returned empty.\n",
__FILE__,
__FUNCTION__,
__LINE__ );
exit( 1 );
}

/* Execute the constructor of the PERSON superclass. */
/* ------------------------------------------------- */
student->person = person_new( name );
return student;
}
</syntaxhighlight>

Here is a driver program for demonstration:

<syntaxhighlight lang="c">
/* student_dvr.c */
/* ------------- */
#include <stdio.h>
#include "student.h"

int main( void )
{
STUDENT *student = student_new( "The Student" );
student->grade = grade_new( 'a' );

printf( "%s: Numeric grade = %d\n",
/* Whereas a subset exists, inheritance does not. */
student->person->name,
/* Functional programming is executing functions just-in-time (JIT) */
grade_numeric( student->grade->letter ) );

return 0;
}
</syntaxhighlight>

Here is a ] to compile everything:

<syntaxhighlight lang="make">
# makefile
# --------
all: student_dvr

clean:
	rm student_dvr *.o

student_dvr: student_dvr.c grade.o student.o person.o
	gcc student_dvr.c grade.o student.o person.o -o student_dvr

grade.o: grade.c grade.h
	gcc -c grade.c

student.o: student.c student.h
	gcc -c student.c

person.o: person.c person.h
	gcc -c person.c
</syntaxhighlight>

The formal strategy to build object-oriented objects is to:<ref name="se-ch9-285">{{cite book
| last = Schach
| first = Stephen R.
| title = Software Engineering
| publisher = Aksen Associates Incorporated Publishers
| year = 1990
| page = 285
| isbn = 0-256-08515-3
}}</ref>
* Identify the objects. Most likely these will be nouns.
* Identify each object's attributes. What helps to describe the object?
* Identify each object's actions. Most likely these will be verbs.
* Identify the relationships from object to object. Most likely these will be verbs.

For example:
* A person is a human identified by a name.
* A grade is an achievement identified by a letter.
* A student is a person who earns a grade.

===Syntax and semantics===
]

The ] of a ''computer program'' is a ] of ] which form its ].<ref name="cpl_3rd-ch12-290_quote">{{cite book
| last = Wilson
| first = Leslie B.
| title = Comparative Programming Languages, Third Edition
| publisher = Addison-Wesley
| year = 2001
| page = 290
| quote = The syntax (or grammar) of a programming language describes the correct form in which programs may be written
| isbn = 0-201-71012-9
}}</ref> A programming language's grammar correctly places its ], ], and ].<ref name="cpl_3rd-ch4-78_quote1">{{cite book
| last = Wilson
| first = Leslie B.
| title = Comparative Programming Languages, Third Edition
| publisher = Addison-Wesley
| year = 2001
| page = 78
| isbn = 0-201-71012-9
| quote = The main components of an imperative language are declarations, expressions, and statements.
}}</ref> Complementing the ''syntax'' of a language are its ]. The ''semantics'' describe the meanings attached to various syntactic constructs.<ref name="cpl_3rd-ch12-290">{{cite book
| last = Wilson
| first = Leslie B.
| title = Comparative Programming Languages, Third Edition
| publisher = Addison-Wesley
| year = 2001
| page = 290
| isbn = 0-201-71012-9
}}</ref> A syntactic construct may need a semantic description because a production rule may have an invalid interpretation.<ref name="cpl_3rd-ch12-294">{{cite book
| last = Wilson
| first = Leslie B.
| title = Comparative Programming Languages, Third Edition
| publisher = Addison-Wesley
| year = 2001
| page = 294
| isbn = 0-201-71012-9
}}</ref> Also, different languages might have the same syntax; however, their behaviors may be different.

The syntax of a language is formally described by listing the production rules. Whereas the syntax of a ] is extremely complicated, a subset of the English language can have this production rule listing:<ref name="discrete-ch10-p615">{{cite book
| last = Rosen
| first = Kenneth H.
| title = Discrete Mathematics and Its Applications
| publisher = McGraw-Hill, Inc.
| year = 1991
| page =
| isbn = 978-0-07-053744-6
| url = https://archive.org/details/discretemathemat00rose/page/615}}</ref>
# a '''sentence''' is made up of a '''noun-phrase''' followed by a '''verb-phrase''';
# a '''noun-phrase''' is made up of an '''article''' followed by an '''adjective''' followed by a '''noun''';
# a '''verb-phrase''' is made up of a '''verb''' followed by a '''noun-phrase''';
# an '''article''' is 'the';
# an '''adjective''' is 'big' or
# an '''adjective''' is 'small';
# a '''noun''' is 'cat' or
# a '''noun''' is 'mouse';
# a '''verb''' is 'eats';
The words in '''bold-face''' are known as ''non-terminals''. The words in 'single quotes' are known as ''terminals''.<ref name="cpl_3rd-ch12-291">{{cite book
| last = Wilson
| first = Leslie B.
| title = Comparative Programming Languages, Third Edition
| publisher = Addison-Wesley
| year = 2001
| page = 291
| isbn = 0-201-71012-9
}}</ref>

From this production rule listing, complete sentences may be formed using a series of replacements.<ref name="discrete-ch10-p616">{{cite book
| last = Rosen
| first = Kenneth H.
| title = Discrete Mathematics and Its Applications
| publisher = McGraw-Hill, Inc.
| year = 1991
| page =
| isbn = 978-0-07-053744-6
| url = https://archive.org/details/discretemathemat00rose/page/616}}</ref> The process is to replace ''non-terminals'' with either a valid ''non-terminal'' or a valid ''terminal''. The replacement process repeats until only ''terminals'' remain. One valid sentence is:
* '''sentence'''
* '''noun-phrase''' '''verb-phrase'''
* '''article''' '''adjective''' '''noun''' '''verb-phrase'''
* ''the'' '''adjective''' '''noun''' '''verb-phrase'''
* ''the'' ''big'' '''noun''' '''verb-phrase'''
* ''the'' ''big'' ''cat'' '''verb-phrase'''
* ''the'' ''big'' ''cat'' '''verb''' '''noun-phrase'''
* ''the'' ''big'' ''cat'' ''eats'' '''noun-phrase'''
* ''the'' ''big'' ''cat'' ''eats'' '''article''' '''adjective''' '''noun'''
* ''the'' ''big'' ''cat'' ''eats'' ''the'' '''adjective''' '''noun'''
* ''the'' ''big'' ''cat'' ''eats'' ''the'' ''small'' '''noun'''
* ''the'' ''big'' ''cat'' ''eats'' ''the'' ''small'' ''mouse''

However, another combination results in an invalid sentence:
* ''the'' ''small'' ''mouse'' ''eats'' ''the'' ''big'' ''cat''
Therefore, a ''semantic'' is necessary to correctly describe the meaning of an ''eat'' activity.

One ''production rule'' listing method is called the ] (BNF).<ref name="discrete-ch10-p623">{{cite book
| last = Rosen
| first = Kenneth H.
| title = Discrete Mathematics and Its Applications
| publisher = McGraw-Hill, Inc.
| year = 1991
| page =
| isbn = 978-0-07-053744-6
| url = https://archive.org/details/discretemathemat00rose/page/623}}</ref> BNF describes the syntax of a language and itself has a ''syntax''. This recursive definition is an example of a ].<ref name="cpl_3rd-ch12-290"/> The ''syntax'' of BNF includes:
* <code>::=</code> which translates to ''is made up of a'' when a non-terminal is to its right. It translates to ''is'' when a terminal is to its right.
* <code>|</code> which translates to ''or''.
* <code><</code> and <code>></code> which surround '''non-terminals'''.

Using BNF, a subset of the English language can have this ''production rule'' listing:
<syntaxhighlight lang="bnf">
<sentence> ::= <noun-phrase><verb-phrase>
<noun-phrase> ::= <article><adjective><noun>
<verb-phrase> ::= <verb><noun-phrase>
<article> ::= the
<adjective> ::= big | small
<noun> ::= cat | mouse
<verb> ::= eats
</syntaxhighlight>

Using BNF, a signed-] has the ''production rule'' listing:<ref name="discrete-ch10-p624">{{cite book
| last = Rosen
| first = Kenneth H.
| title = Discrete Mathematics and Its Applications
| publisher = McGraw-Hill, Inc.
| year = 1991
| page =
| isbn = 978-0-07-053744-6
| url = https://archive.org/details/discretemathemat00rose/page/624}}</ref>
<syntaxhighlight lang="bnf">
<signed-integer> ::= <sign><integer>
<sign> ::= + | -
<integer> ::= <digit> | <digit><integer>
<digit> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
</syntaxhighlight>

Notice the recursive production rule:
<syntaxhighlight lang="bnf">
<integer> ::= <digit> | <digit><integer>
</syntaxhighlight>
This allows for an infinite number of possibilities. Therefore, a ''semantic'' is necessary to describe a limitation of the number of digits.

Notice the leading zero possibility in the production rules:
<syntaxhighlight lang="bnf">
<integer> ::= <digit> | <digit><integer>
<digit> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
</syntaxhighlight>
Therefore, a ''semantic'' is necessary to describe that leading zeros need to be ignored.

Two formal methods are available to describe ''semantics''. They are ] and ].<ref name="cpl_3rd-ch12-297">{{cite book
| last = Wilson
| first = Leslie B.
| title = Comparative Programming Languages, Third Edition
| publisher = Addison-Wesley
| year = 2001
| page = 297
| isbn = 0-201-71012-9
}}</ref>

==Software engineering and computer programming==
] and ] programmed the ] by moving cables and setting switches.]]

] is a variety of techniques to produce ] ''computer programs''.<ref name="se-preface1">{{cite book
| last = Schach
| first = Stephen R.
| title = Software Engineering
| publisher = Aksen Associates Incorporated Publishers
| year = 1990
| page = Preface
| isbn = 0-256-08515-3
}}</ref> ] is the process of writing or editing ]. In a formal environment, a ] will gather information from managers about all the organization's processes to automate. This professional then prepares a ] for the new or modified system.<ref name="pis-ch12-p507">{{cite book
| last = Stair
| first = Ralph M.
| title = Principles of Information Systems, Sixth Edition
| publisher = Thomson
| year = 2003
| page = 507
| isbn = 0-619-06489-7
}}</ref> The plan is analogous to an architect's blueprint.<ref name="pis-ch12-p507"/>

===Performance objectives===
The systems analyst has the objective to deliver the right information to the right person at the right time.<ref name="pis-ch12-p513">{{cite book
| last = Stair
| first = Ralph M.
| title = Principles of Information Systems, Sixth Edition
| publisher = Thomson
| year = 2003
| page = 513
| isbn = 0-619-06489-7
}}</ref> The critical factors to achieve this objective are:<ref name="pis-ch12-p513"/>
# The quality of the output. Is the output useful for decision-making?
# The accuracy of the output. Does it reflect the true situation?
# The format of the output. Is the output easily understood?
# The speed of the output. Time-sensitive information is important when communicating with the customer in real time.

===Cost objectives===
Achieving performance objectives should be balanced with all of the costs, including:<ref name="pis-ch12-p514">{{cite book
| last = Stair
| first = Ralph M.
| title = Principles of Information Systems, Sixth Edition
| publisher = Thomson
| year = 2003
| page = 514
| isbn = 0-619-06489-7
}}</ref>
# Development costs.
# Uniqueness costs. A reusable system may be expensive. However, it might be preferred over a limited-use system.
# Hardware costs.
# Operating costs.

Applying a ] will mitigate the axiom: the later in the process an error is detected, the more expensive it is to correct.<ref name="pis-ch12-p516">{{cite book
| last = Stair
| first = Ralph M.
| title = Principles of Information Systems, Sixth Edition
| publisher = Thomson
| year = 2003
| page = 516
| isbn = 0-619-06489-7
}}</ref>

===Waterfall model===
The ] is an implementation of a ''systems development process''.<ref name="se-ch1-8">{{cite book
| last = Schach
| first = Stephen R.
| title = Software Engineering
| publisher = Aksen Associates Incorporated Publishers
| year = 1990
| page = 8
| isbn = 0-256-08515-3
}}</ref> As the ''waterfall'' label implies, the basic phases overlap each other:<ref name="pis-ch12-p517">{{cite book
| last = Stair
| first = Ralph M.
| title = Principles of Information Systems, Sixth Edition
| publisher = Thomson
| year = 2003
| page = 517
| isbn = 0-619-06489-7
}}</ref>
# The ''investigation phase'' is to understand the underlying problem.
# The ''analysis phase'' is to understand the possible solutions.
# The ''design phase'' is to ] the best solution.
# The ''implementation phase'' is to program the best solution.
# The ''maintenance phase'' lasts throughout the life of the system. Changes to the system after it is deployed may be necessary.<ref name="se-ch11-345">{{cite book
| last = Schach
| first = Stephen R.
| title = Software Engineering
| publisher = Aksen Associates Incorporated Publishers
| year = 1990
| page = 345
| isbn = 0-256-08515-3
}}</ref> Faults may exist, including specification faults, design faults, or coding faults. Improvements may be necessary. Adaptation may be necessary to react to a changing environment.

===Computer programmer===
A ] is a specialist responsible for writing or modifying the source code to implement the detailed plan.<ref name="pis-ch12-p507"/> A programming team is likely to be needed because most systems are too large to be completed by a single programmer.<ref name="se-ch10-319">{{cite book
| last = Schach
| first = Stephen R.
| title = Software Engineering
| publisher = Aksen Associates Incorporated Publishers
| year = 1990
| page = 319
| isbn = 0-256-08515-3
}}</ref> However, adding programmers to a project may not shorten the completion time. Instead, it may lower the quality of the system.<ref name="se-ch10-319"/> To be effective, program modules need to be defined and distributed to team members.<ref name="se-ch10-319"/> Also, team members must interact with one another in a meaningful and effective way.<ref name="se-ch10-319"/>

Computer programmers may be ]: programming within a single module.<ref name="se-ch10-331">{{cite book
| last = Schach
| first = Stephen R.
| title = Software Engineering
| publisher = Aksen Associates Incorporated Publishers
| year = 1990
| page = 331
| isbn = 0-256-08515-3
}}</ref> Chances are a module will execute modules located in other source code files. Therefore, computer programmers may be ]: programming modules so they will effectively couple with each other.<ref name="se-ch10-331"/> Programming-in-the-large includes contributing to the ] (API).

===Program modules===
] is a technique to refine ''imperative language'' programs. Refined programs may reduce the software size, separate responsibilities, and thereby mitigate ]. A ''program module'' is a sequence of statements that are bounded within a ] and together identified by a name.<ref name="se-ch8-216">{{cite book
| last = Schach
| first = Stephen R.
| title = Software Engineering
| publisher = Aksen Associates Incorporated Publishers
| year = 1990
| page = 216
| isbn = 0-256-08515-3
}}</ref> Modules have a ''function'', ''context'', and ''logic'':<ref name="se-ch8-219">{{cite book
| last = Schach
| first = Stephen R.
| title = Software Engineering
| publisher = Aksen Associates Incorporated Publishers
| year = 1990
| page = 219
| isbn = 0-256-08515-3
}}</ref>

* The ''function'' of a module is what it does.
* The ''context'' of a module is the set of elements being performed upon.
* The ''logic'' of a module is how it performs the function.

The module's name should be derived first by its ''function'', then by its ''context''. Its ''logic'' should not be part of the name.<ref name="se-ch8-219"/> For example, <code>function compute_square_root( x )</code> or <code>function compute_square_root_integer( i : integer )</code> are appropriate module names. However, <code>function compute_square_root_by_division( x )</code> is not.

The degree of interaction ''within'' a module is its level of ].<ref name="se-ch8-219"/> ''Cohesion'' is a judgment of the relationship between a module's name and its ''function''. The degree of interaction ''between'' modules is the level of ].<ref name="se-ch8-226">{{cite book
| last = Schach
| first = Stephen R.
| title = Software Engineering
| publisher = Aksen Associates Incorporated Publishers
| year = 1990
| page = 226
| isbn = 0-256-08515-3
}}</ref> ''Coupling'' is a judgment of the relationship between a module's ''context'' and the elements being performed upon.

===Cohesion===
The levels of cohesion from worst to best are:<ref name="se-ch8-220">{{cite book
| last = Schach
| first = Stephen R.
| title = Software Engineering
| publisher = Aksen Associates Incorporated Publishers
| year = 1990
| page = 220
| isbn = 0-256-08515-3
}}</ref>

* ''Coincidental Cohesion'': A module has coincidental cohesion if it performs multiple functions, and the functions are completely unrelated. For example, <code>function read_sales_record_print_next_line_convert_to_float()</code>. Coincidental cohesion occurs in practice if management enforces silly rules. For example, "Every module will have between 35 and 50 executable statements."<ref name="se-ch8-220"/>
* ''Logical Cohesion'': A module has logical cohesion if it has available a series of functions, but only one of them is executed. For example, <code>function perform_arithmetic( perform_addition, a, b )</code>.
* ''Temporal Cohesion'': A module has temporal cohesion if it performs functions related to time. One example, <code>function initialize_variables_and_open_files()</code>. Another example, <code>stage_one()</code>, <code>stage_two()</code>, ...
* ''Procedural Cohesion'': A module has procedural cohesion if it performs multiple loosely related functions. For example, <code>function read_part_number_update_employee_record()</code>.
* ''Communicational Cohesion'': A module has communicational cohesion if it performs multiple closely related functions. For example, <code>function read_part_number_update_sales_record()</code>.
* ''Informational Cohesion'': A module has informational cohesion if it performs multiple functions, but each function has its own entry and exit points. Moreover, the functions share the same data structure. Object-oriented classes work at this level.
* ''Functional Cohesion'': A module has functional cohesion if it achieves a single goal working only on local variables. Moreover, it may be reusable in other contexts.

===Coupling===
The levels of coupling from worst to best are:<ref name="se-ch8-226"/>

* ''Content Coupling'': A module has content coupling if it modifies a local variable of another function. COBOL used to do this with the ''alter'' verb.
* ''Common Coupling'': A module has common coupling if it modifies a global variable.
* ''Control Coupling'': A module has control coupling if another module can modify its ]. For example, <code>perform_arithmetic( perform_addition, a, b )</code>. Instead, control should be on the makeup of the returned object.
* ''Stamp Coupling'': A module has stamp coupling if an element of a data structure passed as a parameter is modified. Object-oriented classes work at this level.
* ''Data Coupling'': A module has data coupling if all of its input parameters are needed and none of them are modified. Moreover, the result of the function is returned as a single object.
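
A minimal sketch in C contrasting control coupling with data coupling; the function names are hypothetical and echo the examples above:

<syntaxhighlight lang="c">
#include <stdio.h>

/* Control coupling: the caller's flag steers the module's control flow. */
double perform_arithmetic( int perform_addition, double a, double b )
{
    return perform_addition ? a + b : a - b;
}

/* Data coupling: every input parameter is needed, none is modified, */
/* and the result is returned as a single object.                    */
double add( double a, double b )
{
    return a + b;
}

int main( void )
{
    printf( "%f\n", perform_arithmetic( 1, 2.0, 3.0 ) );  /* control coupling */
    printf( "%f\n", add( 2.0, 3.0 ) );                     /* data coupling */
    return 0;
}
</syntaxhighlight>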

===Data flow analysis===
]
''Data flow analysis'' is a design method used to achieve modules of ''functional cohesion'' and ''data coupling''.<ref name="se-ch9-258">{{cite book
| last = Schach
| first = Stephen R.
| title = Software Engineering
| publisher = Aksen Associates Incorporated Publishers
| year = 1990
| page = 258
| isbn = 0-256-08515-3
}}</ref> The input to the method is a ]. A data-flow diagram is a set of ovals representing modules. Each module's name is displayed inside its oval. Modules may be at the executable level or the function level.

The diagram also has arrows connecting modules to each other. Arrows pointing into modules represent a set of inputs. Each module should have only one arrow pointing out from it to represent its single output object. (Optionally, an additional exception arrow points out.) A ] of ovals will convey an entire ]. The input modules should start the diagram. The input modules should connect to the transform modules. The transform modules should connect to the output modules.<ref name="se-ch9-259">{{cite book
| last = Schach
| first = Stephen R.
| title = Software Engineering
| publisher = Aksen Associates Incorporated Publishers
| year = 1990
| page = 259
| isbn = 0-256-08515-3
}}</ref>

==Functional categories==
] interacts with the ]. The application software interacts with the ], which interacts with the ].]]

''Computer programs'' may be categorized along functional lines. The main functional categories are ] and ]. System software includes the ], which couples ] with application software.<ref name="osc-overview"/> The purpose of the operating system is to provide an environment where application software executes in a convenient and efficient manner.<ref name="osc-overview">{{cite book
| last = Silberschatz
| first = Abraham
| title = Operating System Concepts, Fourth Edition
| publisher = Addison-Wesley
| year = 1994
| page = 1
| isbn = 978-0-201-50480-4
}}</ref> Both application software and system software execute ]. At the hardware level, a ] controls the circuits throughout the ].


===Application software===
{{Main|Application software}}
Application software is the key to unlocking the potential of the computer system.<ref name="pis-ch4-p147_quote1">{{cite book
| last = Stair
| first = Ralph M.
| title = Principles of Information Systems, Sixth Edition
| publisher = Thomson
| year = 2003
| page = 147
| isbn = 0-619-06489-7
| quote = The key to unlocking the potential of any computer system is application software.
}}</ref> ] bundles accounting, personnel, customer, and vendor applications. Examples include ], ], and ].


Enterprise applications may be developed in-house as a one-of-a-kind ].<ref name="pis-ch4-p148">{{cite book
| last = Stair
| first = Ralph M.
| title = Principles of Information Systems, Sixth Edition
| publisher = Thomson
| year = 2003
| page = 147
| isbn = 0-619-06489-7
}}</ref> Alternatively, they may be purchased as ]. Purchased software may be modified to provide ]. If the application is customized, then either the company's resources are used or the resources are outsourced. Outsourced software development may be from the original software vendor or a third-party developer.<ref name="pis-ch4-p147_quote2">{{cite book
| last = Stair
| first = Ralph M.
| title = Principles of Information Systems, Sixth Edition
| publisher = Thomson
| year = 2003
| page = 147
| isbn = 0-619-06489-7
| quote = third-party software firm, often called a value-added software vendor, may develop or modify a software program to meet the needs of a particular industry or company.
}}</ref>


The potential advantages of in-house software are features and reports may be developed exactly to specification.<ref name="pis-ch4-p148_quote1">{{cite book
| last = Stair
| first = Ralph M.
| title = Principles of Information Systems, Sixth Edition
| publisher = Thomson
| year = 2003
| page = 148
| isbn = 0-619-06489-7
| quote = Heading: Proprietary Software; Subheading: Advantages; Quote: You can get exactly what you need in terms of features, reports, and so on.
}}</ref> Management may also be involved in the development process and offer a level of control.<ref name="pis-ch4-p148_quote2">{{cite book
| last = Stair
| first = Ralph M.
| title = Principles of Information Systems, Sixth Edition
| publisher = Thomson
| year = 2003
| page = 148
| isbn = 0-619-06489-7
| quote = Heading: Proprietary Software; Subheading: Advantages; Quote: Being involved in the development offers a further level of control over the results.
}}</ref> Management may decide to counteract a competitor's new initiative or implement a customer or vendor requirement.<ref name="pis-ch4-p148_quote3">{{cite book
| last = Stair
| first = Ralph M.
| title = Principles of Information Systems, Sixth Edition
| publisher = Thomson
| year = 2003
| page = 147
| isbn = 0-619-06489-7
| quote = Heading: Proprietary Software; Subheading: Advantages; Quote: There is more flexibility in making modifications that may be required to counteract a new initiative by one of your competitors or to meet new supplier and/or customer requirements.
}}</ref> A merger or acquisition may necessitate enterprise software changes. The potential disadvantages of in-house software are time and resource costs may be extensive.<ref name="pis-ch4-p148"/> Furthermore, risks concerning features and performance may be looming.


The potential advantages of off-the-shelf software are upfront costs are identifiable, the basic needs should be fulfilled, and its performance and reliability have a track record.<ref name="pis-ch4-p148"/> The potential disadvantages of off-the-shelf software are it may have unnecessary features that confuse end users, it may lack features the enterprise needs, and the data flow may not match the enterprise's work processes.<ref name="pis-ch4-p148"/>


One approach to economically obtaining a customized enterprise application is through an ].<ref name="pis-ch4-p149">{{cite book
| last = Stair
| first = Ralph M.
| title = Principles of Information Systems, Sixth Edition
| publisher = Thomson
| year = 2003
| page = 149
| isbn = 0-619-06489-7
}}</ref> Specialty companies provide hardware, custom software, and end-user support. They may speed the development of new applications because they possess skilled information system staff. The biggest advantage is it frees in-house resources from staffing and managing complex computer projects.<ref name="pis-ch4-p149"/> Many application service providers target small, fast-growing companies with limited information system resources.<ref name="pis-ch4-p149"/> On the other hand, larger companies with major systems will likely have their technical infrastructure in place. One risk is having to trust an external organization with sensitive information. Another risk is having to trust the provider's infrastructure reliability.<ref name="pis-ch4-p149"/>

===Operating system===
{{See also|Operating system}}
] vs. ] <br/>], ], ]|upright=1.8]]
An ] is the low-level software that supports a computer's basic functions, such as ] ] and controlling ]s.<ref name="osc-overview"/>

In the 1950s, the programmer, who was also the operator, would write a program and run it. After the program finished executing, the output may have been printed, or it may have been punched onto paper tape or cards for later processing.<ref name="osc-ch1-p6"/> More often than not the program did not work. The programmer then looked at the console lights and fiddled with the console switches. If less fortunate, a memory printout was made for further study. In the 1960s, programmers reduced the amount of wasted time by automating the operator's job. A program called an ''operating system'' was kept in the computer at all times.<ref name="sco-ch1-p11">{{cite book
|url=https://archive.org/details/structuredcomput00tane/page/11
|title=Structured Computer Organization, Third Edition
|last=Tanenbaum
|first=Andrew S.
|publisher=Prentice Hall
|year=1990
|isbn=978-0-13-854662-5
|page=}}</ref>

The term ''operating system'' may refer to two levels of software.<ref name="lpi-ch2-p21">{{cite book
|title=The Linux Programming Interface
|last=Kerrisk
|first=Michael
|publisher=No Starch Press
|year=2010
|isbn=978-1-59327-220-3
|page=21}}</ref> The operating system may refer to the ] that manages the ], ], and ]. More broadly, the operating system may refer to the entire package of the central software. The package includes a kernel program, ], ], ], and ].<ref name="lpi-ch2-p21"/>

====Kernel Program====
]
The kernel's main purpose is to manage the limited resources of a computer:
* The kernel program should perform ],<ref name="lpi-ch2-p22">{{cite book
|title=The Linux Programming Interface
|last=Kerrisk
|first=Michael
|publisher=No Starch Press
|year=2010
|isbn=978-1-59327-220-3
|page=22}}</ref> which is also known as a ]. The kernel creates a ] when a ''computer program'' is ]. However, an executing program gets exclusive access to the ] only for a ]. To provide each user with the ], the kernel quickly ] each process control block to execute another one. The goal for ] is to minimize ].
* The kernel program should perform ].
:* When the kernel initially ] an executable into memory, it divides the address space logically into ].<ref name="duos-ch6-p152">{{cite book
| last = Bach
| first = Maurice J.
| title = The Design of the UNIX Operating System
| publisher = Prentice-Hall, Inc.
| year = 1986
| page = 152
| isbn = 0-13-201799-7
}}</ref> The kernel maintains a master-region table and many per-process-region (pregion) tables—one for each running ].<ref name="duos-ch6-p152"/> These tables constitute the ]. The master-region table is used to determine where its contents are located in ]. The pregion tables allow each process to have its own program (text) pregion, data pregion, and stack pregion.
:*The program pregion stores machine instructions. Since machine instructions do not change, the program pregion may be shared by many processes of the same executable.<ref name="duos-ch6-p152"/>
:* To save time and memory, the kernel may load only blocks of execution instructions from the disk drive, not the entire executable file.<ref name="lpi-ch2-p22"/>
:*The kernel is responsible for translating virtual addresses into ]es. The kernel may request data from the ] and, instead, receive a ].<ref name="sco6th-ch6-p443">{{cite book
| last = Tanenbaum
| first = Andrew S.
| title = Structured Computer Organization, Sixth Edition
| publisher = Pearson
| year = 2013
| page = 443
| isbn = 978-0-13-291652-3
}}</ref> If so, the kernel accesses the ] to populate the physical data region and translate the address.<ref name="esa-ch1-p8">{{cite book
| last = Lacamera
| first = Daniele
| title = Embedded Systems Architecture
| publisher = Packt
| year = 2018
| page = 8
| isbn = 978-1-78883-250-2
}}</ref>
:* The kernel allocates memory from the ''heap'' upon request by a process.<ref name="cpl-ch8-p187"/> When the process is finished with the memory, the process may request for it to be ]. If the process exits without requesting all allocated memory to be freed, then the kernel performs ] to free the memory.
:* The kernel also ensures that a process only accesses its own memory, and not that of the kernel or other processes.<ref name="lpi-ch2-p22"/>
* The kernel program should perform ].<ref name="lpi-ch2-p22"/> The kernel has instructions to create, retrieve, update, and delete files.
* The kernel program should perform ].<ref name="lpi-ch2-p22"/> The kernel provides programs to standardize and simplify the interface to the mouse, keyboard, disk drives, printers, and other devices. Moreover, the kernel should arbitrate access to a device if two processes request it at the same time.
* The kernel program should perform ].<ref name="lpi-ch2-p23">{{cite book
|title=The Linux Programming Interface
|last=Kerrisk
|first=Michael
|publisher=No Starch Press
|year=2010
|isbn=978-1-59327-220-3
|page=23}}</ref> The kernel transmits and receives ] on behalf of processes. One key service is to find an efficient ] to the target system.
* The kernel program should provide ] for programmers to use.<ref name="upe-ch7-p201">{{cite book
|title=The Unix Programming Environment
|last=Kernighan
|first=Brian W.
|publisher=Prentice Hall
|year=1984
|isbn=0-13-937699-2
|page=201}}</ref>
** Programmers access files through a relatively simple interface that in turn executes a relatively complicated low-level I/O interface. The low-level interface includes file creation, ]s, file seeking, physical reading, and physical writing. (A brief C sketch following this list illustrates the file interface.)
** Programmers create processes through a relatively simple interface that in turn executes a relatively complicated low-level interface.
** Programmers perform date/time arithmetic through a relatively simple interface that in turn executes a relatively complicated low-level time interface.<ref name="lpi-ch10-p187">{{cite book
|title=The Linux Programming Interface
|last=Kerrisk
|first=Michael
|publisher=No Starch Press
|year=2010
|isbn=978-1-59327-220-3
|page=187}}</ref>
* The kernel program should provide a ] between executing processes.<ref name="usp-ch6-p121">{{cite book
|title=Unix System Programming
|last=Haviland
|first=Keith
|publisher=Addison-Wesley Publishing Company
|year=1987
|isbn=0-201-12919-1
|page=121}}</ref> For a large software system, it may be desirable to ] the system into smaller processes. Processes may communicate with one another by sending and receiving ].
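A minimal C sketch of how an application crosses this boundary through the POSIX system-call interface; the file name example.txt is hypothetical, and the sketch only illustrates the file-access service described above.

<syntaxhighlight lang="c">
/* Illustrative only: ask the kernel to open a file, then echo its
   contents to standard output using read() and write().            */
#include <fcntl.h>    /* open()                   */
#include <unistd.h>   /* read(), write(), close() */
#include <stdio.h>    /* perror()                 */

int main(void)
{
    char buffer[4096];
    ssize_t count;

    int fd = open("example.txt", O_RDONLY);   /* kernel opens the file */
    if (fd == -1) {
        perror("open");
        return 1;
    }
    while ((count = read(fd, buffer, sizeof buffer)) > 0)  /* kernel reads file blocks */
        write(STDOUT_FILENO, buffer, (size_t)count);       /* kernel writes to stdout  */
    close(fd);                                             /* release the descriptor   */
    return 0;
}
</syntaxhighlight>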

Originally, operating systems were programmed in ]; however, modern operating systems are typically written in higher-level languages like ], ], and ].{{efn|The ] operating system was written in C, ] was written in Objective-C, and Swift replaced Objective-C.}}

===Utility program===
A ] is designed to aid system administration and software execution. Operating systems execute hardware utility programs to check the status of disk drives, memory, speakers, and printers.<ref name="pis-ch4-p145">{{cite book
| last = Stair
| first = Ralph M.
| title = Principles of Information Systems, Sixth Edition
| publisher = Thomson
| year = 2003
| page = 145
| isbn = 0-619-06489-7
}}</ref> A utility program may optimize the placement of a file on a crowded disk. System utility programs monitor hardware and network performance. When a metric is outside an acceptable range, an alert is triggered.<ref name="pis-ch4-p146">{{cite book
| last = Stair
| first = Ralph M.
| title = Principles of Information Systems, Sixth Edition
| publisher = Thomson
| year = 2003
| page = 146
| isbn = 0-619-06489-7
}}</ref>

Utility programs include compression programs so that data files occupy less disk space.<ref name="pis-ch4-p145"/> Compressed files also save time when transmitted over the network.<ref name="pis-ch4-p145"/> Utility programs can sort and merge data sets.<ref name="pis-ch4-p146"/> Utility programs detect ]es.<ref name="pis-ch4-p146"/>

===Microcode program===
{{main|Microcode}}
A ] is the bottom-level interpreter that controls the ] of software-driven computers.<ref name="sco6th-ch1-p6">{{cite book
| last = Tanenbaum
| first = Andrew S.
| title = Structured Computer Organization, Sixth Edition
| publisher = Pearson
| year = 2013
| page = 6
| isbn = 978-0-13-291652-3
}}</ref>
(Advances in ] have migrated these operations to ].)<ref name="sco6th-ch1-p6"/> Microcode instructions allow the programmer to more easily implement the ]<ref name="sco6th-ch4-p243">{{cite book
| last = Tanenbaum
| first = Andrew S.
| title = Structured Computer Organization, Sixth Edition
| publisher = Pearson
| year = 2013
| page = 243
| isbn = 978-0-13-291652-3
}}</ref>—the computer's real hardware. The digital logic level is the boundary between ] and ].<ref name="sco6th-ch3-p147">{{cite book
| last = Tanenbaum
| first = Andrew S.
| title = Structured Computer Organization, Sixth Edition
| publisher = Pearson
| year = 2013
| page = 147
| isbn = 978-0-13-291652-3
}}</ref>

A ] is a tiny ] that can return one of two signals: on or off.<ref name="sco6th-ch3-p148">{{cite book
| last = Tanenbaum
| first = Andrew S.
| title = Structured Computer Organization, Sixth Edition
| publisher = Pearson
| year = 2013
| page = 148
| isbn = 978-0-13-291652-3
}}</ref>

* A single transistor forms the ].
* Connecting two transistors in series forms the ].
* Connecting two transistors in parallel forms the ].
* Connecting a NOT gate to a NAND gate forms the ].
* Connecting a NOT gate to a NOR gate forms the ].

These five gates form the building blocks of ]—the digital logic functions of the computer.
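A rough C sketch (an illustration, not part of the original text) that models the five gates as functions on 0/1 values; AND and OR are built by feeding NAND and NOR into NOT, mirroring the construction above.

<syntaxhighlight lang="c">
#include <stdio.h>

/* Each gate is modeled as a function on 0/1 values.        */
static int NOT_gate (int a)        { return !a; }
static int NAND_gate(int a, int b) { return !(a && b); }
static int NOR_gate (int a, int b) { return !(a || b); }
/* AND and OR are formed by connecting NOT to NAND and NOR. */
static int AND_gate (int a, int b) { return NOT_gate(NAND_gate(a, b)); }
static int OR_gate  (int a, int b) { return NOT_gate(NOR_gate(a, b)); }

int main(void)
{
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            printf("a=%d b=%d  NAND=%d NOR=%d AND=%d OR=%d\n",
                   a, b, NAND_gate(a, b), NOR_gate(a, b),
                   AND_gate(a, b), OR_gate(a, b));
    return 0;
}
</syntaxhighlight>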

Microcode instructions are ] programmers may use to execute digital logic functions instead of forming them in binary algebra. They are stored in a ] (CPU) ].<ref name="sco6th-ch4-p253">{{cite book
| last = Tanenbaum
| first = Andrew S.
| title = Structured Computer Organization, Sixth Edition
| publisher = Pearson
| year = 2013
| page = 253
| isbn = 978-0-13-291652-3
}}</ref>
These hardware-level instructions move data throughout the ].

The micro-instruction cycle begins when the ] uses its microprogram counter to ''fetch'' the next ] from ].<ref name="sco6th-ch4-p255">{{cite book
| last = Tanenbaum
| first = Andrew S.
| title = Structured Computer Organization, Sixth Edition
| publisher = Pearson
| year = 2013
| page = 255
| isbn = 978-0-13-291652-3
}}</ref> The next step is to ''decode'' the machine instruction by selecting the proper output line to the hardware module.<ref name="sco6th-ch3-p161">{{cite book
| last = Tanenbaum
| first = Andrew S.
| title = Structured Computer Organization, Sixth Edition
| publisher = Pearson
| year = 2013
| page = 161
| isbn = 978-0-13-291652-3
}}</ref>
The final step is to ''execute'' the instruction using the hardware module's set of gates.
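The cycle can be sketched in C as a toy interpreter loop; the opcodes, registers, and program below are invented solely for illustration and do not describe any real control store.

<syntaxhighlight lang="c">
#include <stdio.h>

/* Invented opcodes for a toy accumulator machine. */
enum { HALT = 0, LOAD = 1, ADD = 2, PRINT = 3 };

int main(void)
{
    int memory[] = { LOAD, 5, ADD, 7, PRINT, HALT };  /* program held in memory */
    int pc  = 0;                                      /* program counter        */
    int acc = 0;                                      /* accumulator register   */

    for (;;) {
        int instruction = memory[pc++];               /* fetch   */
        switch (instruction) {                        /* decode  */
        case LOAD:  acc  = memory[pc++]; break;       /* execute */
        case ADD:   acc += memory[pc++]; break;
        case PRINT: printf("%d\n", acc);  break;
        case HALT:  return 0;
        }
    }
}
</syntaxhighlight>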

Instructions to perform arithmetic are passed through an ] (ALU).<ref name="sco6th-ch3-p166">{{cite book
| last = Tanenbaum
| first = Andrew S.
| title = Structured Computer Organization, Sixth Edition
| publisher = Pearson
| year = 2013
| page = 166
| isbn = 978-0-13-291652-3
}}</ref> The ALU has circuits to perform elementary operations to add, shift, and compare integers. By combining and looping the elementary operations through the ALU, the CPU performs its complex arithmetic.
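As a hedged illustration of this idea, the following C function multiplies two unsigned integers using only the elementary compare, add, and shift operations named above (the shift-and-add method); it is a sketch, not a description of any particular CPU's circuitry.

<syntaxhighlight lang="c">
#include <stdio.h>

/* Multiply using only compare, add, and shift. */
static unsigned multiply(unsigned a, unsigned b)
{
    unsigned product = 0;
    while (b != 0) {          /* compare             */
        if (b & 1u)           /* test the lowest bit */
            product += a;     /* add                 */
        a <<= 1;              /* shift left          */
        b >>= 1;              /* shift right         */
    }
    return product;
}

int main(void)
{
    printf("6 * 7 = %u\n", multiply(6, 7));
    return 0;
}
</syntaxhighlight>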

Microcode instructions move data between the CPU and the ]. Memory controller microcode instructions manipulate two ]. The ] is used to access each memory cell's address. The ] is used to set and read each cell's contents.<ref name="sco6th-ch4-p249">{{cite book
| last = Tanenbaum
| first = Andrew S.
| title = Structured Computer Organization, Sixth Edition
| publisher = Pearson
| year = 2013
| page = 249
| isbn = 978-0-13-291652-3
}}</ref>

Microcode instructions move data between the CPU and the many ]. The ] writes to and reads from ]s. Data is also moved between the CPU and other functional units via the ]<ref name="sco6th-ch2-p111">{{cite book
| last = Tanenbaum
| first = Andrew S.
| title = Structured Computer Organization, Sixth Edition
| publisher = Pearson
| year = 2013
| page = 111
| isbn = 978-0-13-291652-3
}}</ref>.

==Notes==
{{Notelist}}

==References==
{{reflist|30em}}

{{DEFAULTSORT:Computer Program}}
]
]


For the TV program, see The Computer Programme.
Source code for a computer program written in the JavaScript language. It demonstrates the appendChild method. The method adds a new child node to an existing parent node. It is commonly used to dynamically modify the structure of an HTML document.

A computer program is a sequence or set of instructions in a programming language for a computer to execute. It is one component of software, which also includes documentation and other intangible components.

A computer program in its human-readable form is called source code. Source code needs another computer program to execute because computers can only execute their native machine instructions. Therefore, source code may be translated to machine instructions using a compiler written for the language. (Assembly language programs are translated using an assembler.) The resulting file is called an executable. Alternatively, source code may execute within an interpreter written for the language.

If the executable is requested for execution, then the operating system loads it into memory and starts a process. The central processing unit will soon switch to this process so it can fetch, decode, and then execute each machine instruction.

If the source code is requested for execution, then the operating system loads the corresponding interpreter into memory and starts a process. The interpreter then loads the source code into memory to translate and execute each statement. Running the source code is slower than running an executable. Moreover, the interpreter must be installed on the computer.

Example computer program

The "Hello, World!" program is used to illustrate a language's basic syntax. The syntax of the language BASIC (1964) was intentionally limited to make the language easy to learn. For example, variables are not declared before being used. Also, variables are automatically initialized to zero. Here is an example computer program, in Basic, to average a list of numbers:

10 INPUT "How many numbers to average?", A
20 FOR I = 1 TO A
30 INPUT "Enter number:", B
40 LET C = C + B
50 NEXT I
60 LET D = C/A
70 PRINT "The average is", D
80 END

Once the mechanics of basic computer programming are learned, more sophisticated and powerful languages are available to build large computer systems.

History

See also: Computer programming § History, Programmer § History, History of computing, History of programming languages, and History of software

Improvements in software development are the result of improvements in computer hardware. At each stage in hardware's history, the task of computer programming changed dramatically.

Analytical Engine

Lovelace's description from Note G

In 1837, Jacquard's loom inspired Charles Babbage to attempt to build the Analytical Engine. The names of the components of the calculating device were borrowed from the textile industry. In the textile industry, yarn was brought from the store to be milled. The device had a store which consisted of memory to hold 1,000 numbers of 50 decimal digits each. Numbers from the store were transferred to the mill for processing. The engine was programmed using two sets of perforated cards. One set directed the operation and the other set inputted the variables. However, the thousands of cogged wheels and gears never fully worked together.

Ada Lovelace worked for Charles Babbage to create a description of the Analytical Engine (1843). The description contained Note G which completely detailed a method for calculating Bernoulli numbers using the Analytical Engine. This note is recognized by some historians as the world's first computer program.

Universal Turing machine

In 1936, Alan Turing introduced the Universal Turing machine, a theoretical device that can model every computation. It is a finite-state machine that has an infinitely long read/write tape. The machine can move the tape back and forth, changing its contents as it performs an algorithm. The machine starts in the initial state, goes through a sequence of steps, and halts when it encounters the halt state. All present-day computers are Turing complete.

ENIAC

Glenn A. Beck changing a tube in ENIAC

The Electronic Numerical Integrator And Computer (ENIAC) was built between July 1943 and Fall 1945. It was a Turing complete, general-purpose computer that used 17,468 vacuum tubes to create the circuits. At its core, it was a series of Pascalines wired together. Its 40 units weighed 30 tons, occupied 1,800 square feet (167 m²), and consumed $650 per hour (in 1940s currency) in electricity when idle. It had 20 base-10 accumulators. Programming the ENIAC took up to two months. Three function tables were on wheels and needed to be rolled to fixed function panels. Function tables were connected to function panels by plugging heavy black cables into plugboards. Each function table had 728 rotating knobs. Programming the ENIAC also involved setting some of the 3,000 switches. Debugging a program took a week. It ran from 1947 until 1955 at Aberdeen Proving Ground, calculating hydrogen bomb parameters, predicting weather patterns, and producing firing tables to aim artillery guns.

Stored-program computers

Instead of plugging in cords and turning switches, a stored-program computer loads its instructions into memory just like it loads its data into memory. As a result, the computer could be programmed quickly and perform calculations at very fast speeds. Presper Eckert and John Mauchly built the ENIAC. The two engineers introduced the stored-program concept in a three-page memo dated February 1944. Later, in September 1944, John von Neumann began working on the ENIAC project. On June 30, 1945, von Neumann published the First Draft of a Report on the EDVAC, which equated the structures of the computer with the structures of the human brain. The design became known as the von Neumann architecture. The architecture was simultaneously deployed in the construction of the EDVAC and EDSAC computers in 1949.

The IBM System/360 (1964) was a family of computers, each having the same instruction set architecture. The Model 20 was the smallest and least expensive. Customers could upgrade and retain the same application software. The Model 195 was the most powerful and most expensive. Each System/360 model featured multiprogramming—having multiple processes in memory at once. When one process was waiting for input/output, another could compute.

IBM planned for each model to be programmed using PL/I. A committee was formed that included COBOL, Fortran and ALGOL programmers. The purpose was to develop a language that was comprehensive, easy to use, extendible, and would replace COBOL and Fortran. The result was a large and complex language that took a long time to compile.

Switches for manual input on a Data General Nova 3, manufactured in the mid-1970s

Computers manufactured until the 1970s had front-panel switches for manual programming. The computer program was written on paper for reference. An instruction was represented by a configuration of on/off settings. After setting the configuration, an execute button was pressed. This process was then repeated. Computer programs also were automatically inputted via paper tape, punched cards or magnetic-tape. After the medium was loaded, the starting address was set via switches, and the execute button was pressed.

Very Large Scale Integration

A VLSI integrated-circuit die

A major milestone in software development was the invention of the Very Large Scale Integration (VLSI) circuit (1964). Following World War II, tube-based technology was replaced with point-contact transistors (1947) and bipolar junction transistors (late 1950s) mounted on a circuit board. During the 1960s, the aerospace industry replaced the circuit board with an integrated circuit chip.

Robert Noyce, co-founder of Fairchild Semiconductor (1957) and Intel (1968), achieved a technological improvement to refine the production of field-effect transistors (1963). The goal is to alter the electrical resistivity and conductivity of a semiconductor junction. First, naturally occurring silicate minerals are converted into polysilicon rods using the Siemens process. The Czochralski process then converts the rods into a monocrystalline silicon, boule crystal. The crystal is then thinly sliced to form a wafer substrate. The planar process of photolithography then integrates unipolar transistors, capacitors, diodes, and resistors onto the wafer to build a matrix of metal–oxide–semiconductor (MOS) transistors. The MOS transistor is the primary component in integrated circuit chips.

Originally, integrated circuit chips had their function set during manufacturing. During the 1960s, controlling the electrical flow migrated to programming a matrix of read-only memory (ROM). The matrix resembled a two-dimensional array of fuses. The process to embed instructions onto the matrix was to burn out the unneeded connections. There were so many connections that firmware programmers wrote a computer program on another chip to oversee the burning. The technology became known as Programmable ROM. In 1971, Intel installed the computer program onto the chip and named it the Intel 4004 microprocessor.

IBM's System/360 (1964) CPU was not a microprocessor.

The terms microprocessor and central processing unit (CPU) are now used interchangeably. However, CPUs predate microprocessors. For example, the IBM System/360 (1964) had a CPU made from circuit boards containing discrete components on ceramic substrates.

Sac State 8008

Artist's depiction of Sacramento State University's Intel 8008 microcomputer (1972)

The Intel 4004 (1971) was a 4-bit microprocessor designed to run the Busicom calculator. Five months after its release, Intel released the Intel 8008, an 8-bit microprocessor. Bill Pentz led a team at Sacramento State to build the first microcomputer using the Intel 8008: the Sac State 8008 (1972). Its purpose was to store patient medical records. The computer supported a disk operating system to run a Memorex 3-megabyte hard disk drive. It had a color display and keyboard that were packaged in a single console. The disk operating system was programmed using IBM's Basic Assembly Language (BAL). The medical records application was programmed using a BASIC interpreter. However, the computer was an evolutionary dead-end because it was extremely expensive. Also, it was built at a public university lab for a specific purpose. Nonetheless, the project contributed to the development of the Intel 8080 (1974) instruction set.

x86 series

The original IBM Personal Computer (1981) used an Intel 8088 microprocessor.

In 1978, the modern software development environment began when Intel upgraded the Intel 8080 to the Intel 8086. Intel simplified the Intel 8086 to manufacture the cheaper Intel 8088. IBM embraced the Intel 8088 when they entered the personal computer market (1981). As consumer demand for personal computers increased, so did Intel's microprocessor development. The succession of development is known as the x86 series. The x86 assembly language is a family of backward-compatible machine instructions. Machine instructions created in earlier microprocessors were retained throughout microprocessor upgrades. This enabled consumers to purchase new computers without having to purchase new application software. The major categories of instructions include data transfer, integer and floating-point arithmetic, logic, and control-flow operations.

Changing programming environment

The DEC VT100 (1978) was a widely used computer terminal.

VLSI circuits enabled the programming environment to advance from a computer terminal (until the 1990s) to a graphical user interface (GUI) computer. Computer terminals limited programmers to a single shell running in a command-line environment. During the 1970s, full-screen source code editing became possible through a text-based user interface. Regardless of the technology available, the goal is to program in a programming language.

Programming paradigms and languages

Programming language features exist to provide building blocks to be combined to express programming ideals. Ideally, a programming language should:

  • express ideas directly in the code.
  • express independent ideas independently.
  • express relationships among ideas directly in the code.
  • combine ideas freely.
  • combine ideas only where combinations make sense.
  • express simple ideas simply.

The style in which a programming language provides these building blocks may be categorized into programming paradigms. For example, paradigms may be distinguished by whether programs are expressed as sequences of imperative statements, as declarative descriptions, as compositions of functions, as logical rules, or as interacting objects.

Each of these programming styles has contributed to the synthesis of different programming languages.

A programming language is a set of keywords, symbols, identifiers, and rules by which programmers can communicate instructions to the computer. They follow a set of rules called a syntax.

Programming languages get their basis from formal languages. The purpose of defining a solution in terms of its formal language is to generate an algorithm to solve the underlying problem. An algorithm is a sequence of simple instructions that solve a problem.

Generations of programming language

Main article: Programming language generations
Machine language monitor on a W65C816S microprocessor

The evolution of programming languages began when the EDSAC (1949) used the first stored computer program in its von Neumann architecture. The EDSAC was programmed in the first generation of programming language: machine language.

  • The second generation of programming language is assembly language. Assembly language allows the programmer to use mnemonic instructions instead of remembering instruction numbers. An assembler translates each assembly language mnemonic into its machine language number. For example, on the PDP-11, the operation 24576 can be referenced as ADD in the source code. The four basic arithmetic operations have assembly instructions like ADD, SUB, MUL, and DIV. Computers also have instructions like DW (Define Word) to reserve memory cells. Then the MOV instruction can copy integers between registers and memory.
  • The basic structure of an assembly language statement is a label, operation, operand, and comment.
  • Labels allow the programmer to work with variable names. The assembler will later translate labels into physical memory addresses.
  • Operations allow the programmer to work with mnemonics. The assembler will later translate mnemonics into instruction numbers.
  • Operands tell the assembler which data the operation will process.
  • Comments allow the programmer to articulate a narrative because the instructions alone are vague.
The key characteristic of an assembly language program is that it forms a one-to-one mapping to its corresponding machine language target.
  • The third generation of programming language uses compilers and interpreters to execute programs written in high-level languages, so the source code is not tied to the instruction set of any particular computer.
  • The fourth generation of programming language emphasizes what output results are desired, rather than how programming statements should be constructed. Declarative languages attempt to limit side effects and allow programmers to write code with relatively few errors. One popular fourth generation language is called Structured Query Language (SQL). Database developers no longer need to process each database record one at a time. Also, a simple statement can generate output records without having to understand how they are retrieved.

Imperative languages

Main article: Imperative programming
A computer program written in an imperative language

Imperative languages specify a sequential algorithm using declarations, expressions, and statements (a short C sketch after the list combines all three):

  • A declaration introduces a variable name to the computer program and assigns it to a datatype – for example: var x: integer;
  • An expression yields a value – for example: 2 + 2 yields 4
  • A statement might assign an expression to a variable or use the value of a variable to alter the program's control flow – for example: x := 2 + 2; if x = 4 then do_something();
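As noted above, here is a minimal C sketch combining a declaration, an expression, and two statements; the variable name x simply follows the examples in the list.

#include <stdio.h>

int main(void)
{
    int x;              /* declaration: introduces x and assigns it a datatype */
    x = 2 + 2;          /* statement assigning the expression 2 + 2 to x       */
    if (x == 4)         /* statement using x to alter the control flow         */
        printf("x is 4\n");
    return 0;
}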

Fortran

FORTRAN (1958) was unveiled as "The IBM Mathematical FORmula TRANslating system". It was designed for scientific calculations, without string handling facilities. Along with declarations, expressions, and statements, it supported arrays, subroutines, and DO loops.

It succeeded because:

  • programming and debugging costs were below computer running costs.
  • it was supported by IBM.
  • applications at the time were scientific.

However, non-IBM vendors also wrote Fortran compilers, but with a syntax that would likely fail IBM's compiler. The American National Standards Institute (ANSI) developed the first Fortran standard in 1966. In 1978, Fortran 77 became the standard until 1991. Fortran 90 added support for free-form source code, modules, recursive subroutines, pointers, and dynamic memory allocation.

COBOL

COBOL (1959) stands for "COmmon Business Oriented Language". Fortran manipulated symbols. It was soon realized that symbols did not need to be numbers, so strings were introduced. The US Department of Defense influenced COBOL's development, with Grace Hopper being a major contributor. The statements were English-like and verbose. The goal was to design a language so managers could read the programs. However, the lack of structured statements hindered this goal.

COBOL's development was tightly controlled, so dialects did not emerge to require ANSI standards. As a consequence, it was not changed for 15 years until 1974. The 1990s version did make consequential changes, like object-oriented programming.

Algol

ALGOL (1960) stands for "ALGOrithmic Language". It had a profound influence on programming language design. Emerging from a committee of European and American programming language experts, it used standard mathematical notation and had a readable, structured design. Algol was first to define its syntax using the Backus–Naur form. This led to syntax-directed compilers. It added features like block structure, nested function definitions, and recursion.

Algol's direct descendants include Pascal, Modula-2, Ada, Delphi and Oberon on one branch. On another branch the descendants include C, C++ and Java.

Basic

BASIC (1964) stands for "Beginner's All-Purpose Symbolic Instruction Code". It was developed at Dartmouth College for all of their students to learn. If a student did not go on to a more powerful language, the student would still remember Basic. A Basic interpreter was installed in the microcomputers manufactured in the late 1970s. As the microcomputer industry grew, so did the language.

Basic pioneered the interactive session. It offered operating system commands within its environment:

  • The 'new' command created an empty slate.
  • Statements were evaluated immediately.
  • Statements could be programmed by preceding them with line numbers.
  • The 'list' command displayed the program.
  • The 'run' command executed the program.

However, the Basic syntax was too simple for large programs. Recent dialects added structure and object-oriented extensions. Microsoft's Visual Basic is still widely used and produces a graphical user interface.

C

C programming language (1973) got its name because the language BCPL was replaced with B, and AT&T Bell Labs called the next version "C". Its purpose was to write the UNIX operating system. C is a relatively small language, making it easy to write compilers. Its growth mirrored the hardware growth in the 1980s. It also grew because it has the facilities of assembly language, but uses a high-level syntax. It added advanced features like pointer arithmetic, pointers to functions, and bit operations.

Computer memory map

C allows the programmer to control in which region of memory data is to be stored. Global variables and static variables require the fewest clock cycles to store. The stack is automatically used for the standard variable declarations. Heap memory is returned to a pointer variable from the malloc() function. (A short sketch after the following list places one variable in each region.)

  • The global and static data region is located just above the program region. (The program region is technically called the text region. It is where machine instructions are stored.)
  • The global and static data region is technically two regions. One region is called the initialized data segment, where variables declared with default values are stored. The other region is called the block started by symbol (BSS) segment, where variables declared without default values are stored.
  • Variables stored in the global and static data region have their addresses set at compile-time. They retain their values throughout the life of the process.
  • The global and static region stores the global variables that are declared on top of (outside) the main() function. Global variables are visible to main() and every other function in the source code.
On the other hand, variable declarations inside of main(), other functions, or within { } block delimiters are local variables. Local variables also include formal parameter variables. Parameter variables are enclosed within the parenthesis of a function definition. Parameters provide an interface to the function.
  • Local variables declared using the static prefix are also stored in the global and static data region. Unlike global variables, static variables are only visible within the function or block. Static variables always retain their value. An example usage would be the function int increment_counter(){static int counter = 0; counter++; return counter;}
  • The stack region is a contiguous block of memory located near the top memory address. Variables placed in the stack are populated from top to bottom. A stack pointer is a special-purpose register that keeps track of the last memory address populated. Variables are placed into the stack via the assembly language PUSH instruction. Therefore, the addresses of these variables are set during runtime. The method for stack variables to lose their scope is via the POP instruction.
  • Local variables declared without the static prefix, including formal parameter variables, are called automatic variables and are stored in the stack. They are visible inside the function or block and lose their scope upon exiting the function or block.
  • The heap region is located below the stack. It is populated from the bottom to the top. The operating system manages the heap using a heap pointer and a list of allocated memory blocks. Like the stack, the addresses of heap variables are set during runtime. An out of memory error occurs when the heap pointer and the stack pointer meet.
  • C provides the malloc() library function to allocate heap memory. Populating the heap with data requires an additional copy operation. Variables stored in the heap are economically passed to functions using pointers. Without pointers, the entire block of data would have to be passed to the function via the stack.
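A small C sketch (the variable names are illustrative) placing one variable in each region described above: the initialized data segment, the BSS segment, the stack, and the heap, plus a static local like the increment_counter() example.

#include <stdio.h>
#include <stdlib.h>

int global_count = 7;        /* global and static data region (initialized data segment) */
int uninitialized_total;     /* global and static data region (BSS segment)              */

int increment_counter(void)
{
    static int counter = 0;  /* stored with the globals, but visible only in here        */
    counter++;
    return counter;
}

int main(void)
{
    int local = 3;                                  /* automatic variable on the stack    */
    int *heap_value = malloc(sizeof *heap_value);   /* heap memory returned via a pointer */
    if (heap_value == NULL)
        return 1;
    *heap_value = global_count + local;
    printf("heap value: %d, counter: %d\n", *heap_value, increment_counter());
    free(heap_value);                               /* return the heap memory             */
    return 0;
}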

C++

In the 1970s, software engineers needed language support to break large projects down into modules. One obvious feature was to decompose large projects physically into separate files. A less obvious feature was to decompose large projects logically into abstract data types. At the time, languages supported concrete (scalar) datatypes like integer numbers, floating-point numbers, and strings of characters. Abstract datatypes are structures of concrete datatypes, with a new name assigned. For example, a list of integers could be called integer_list.

In object-oriented jargon, abstract datatypes are called classes. However, a class is only a definition; no memory is allocated. When memory is allocated to a class and bound to an identifier, it is called an object.

Object-oriented imperative languages developed by combining the need for classes and the need for safe functional programming. A function, in an object-oriented language, is assigned to a class. An assigned function is then referred to as a method, member function, or operation. Object-oriented programming is executing operations on objects.

Object-oriented languages support a syntax to model subset/superset relationships. In set theory, an element of a subset inherits all the attributes contained in the superset. For example, a student is a person. Therefore, the set of students is a subset of the set of persons. As a result, students inherit all the attributes common to all persons. Additionally, students have unique attributes that other people do not have. Object-oriented languages model subset/superset relationships using inheritance. Object-oriented programming became the dominant language paradigm by the late 1990s.

C++ (1985) was originally called "C with Classes". It was designed to expand C's capabilities by adding the object-oriented facilities of the language Simula.

An object-oriented module is composed of two files. The definitions file is called the header file. Here is a C++ header file for the GRADE class in a simple school application:

// grade.h
// -------
// Used to allow multiple source files to include
// this header file without duplication errors.
// ----------------------------------------------
#ifndef GRADE_H
#define GRADE_H
class GRADE {
public:
    // This is the constructor operation.
    // ----------------------------------
    GRADE ( const char letter );
    // This is a class variable.
    // -------------------------
    char letter;
    // This is a member operation.
    // ---------------------------
    int grade_numeric( const char letter );
    // This is a class variable.
    // -------------------------
    int numeric;
};
#endif

A constructor operation is a function with the same name as the class name. It is executed when the calling operation executes the new statement.

A module's other file is the source file. Here is a C++ source file for the GRADE class in a simple school application:

// grade.cpp
// ---------
#include "grade.h"
GRADE::GRADE( const char letter )
{
    // Reference the object using the keyword 'this'.
    // ----------------------------------------------
    this->letter = letter;
    // This is Temporal Cohesion
    // -------------------------
    this->numeric = grade_numeric( letter );
}
int GRADE::grade_numeric( const char letter )
{
    if ( ( letter == 'A' || letter == 'a' ) )
        return 4;
    else
    if ( ( letter == 'B' || letter == 'b' ) )
        return 3;
    else
    if ( ( letter == 'C' || letter == 'c' ) )
        return 2;
    else
    if ( ( letter == 'D' || letter == 'd' ) )
        return 1;
    else
    if ( ( letter == 'F' || letter == 'f' ) )
        return 0;
    else
        return -1;
}

Here is a C++ header file for the PERSON class in a simple school application:

// person.h
// --------
#ifndef PERSON_H
#define PERSON_H
class PERSON {
public:
    PERSON ( const char *name );
    const char *name;
};
#endif

Here is a C++ source file for the PERSON class in a simple school application:

// person.cpp
// ----------
#include "person.h"
PERSON::PERSON ( const char *name )
{
    this->name = name;
}

Here is a C++ header file for the STUDENT class in a simple school application:

// student.h
// ---------
#ifndef STUDENT_H
#define STUDENT_H
#include "person.h"
#include "grade.h"
// A STUDENT is a subset of PERSON.
// --------------------------------
class STUDENT : public PERSON{
public:
    STUDENT ( const char *name );
    GRADE *grade;
};
#endif

Here is a C++ source file for the STUDENT class in a simple school application:

// student.cpp
// -----------
#include "student.h"
#include "person.h"
STUDENT::STUDENT ( const char *name ):
    // Execute the constructor of the PERSON superclass.
    // -------------------------------------------------
    PERSON( name )
{
    // Nothing else to do.
    // -------------------
}

Here is a driver program for demonstration:

// student_dvr.cpp
// ---------------
#include <iostream>
#include "student.h"
int main( void )
{
    STUDENT *student = new STUDENT( "The Student" );
    student->grade = new GRADE( 'a' );
    std::cout
        // Notice student inherits PERSON's name
        << student->name
        << ": Numeric grade = "
        << student->grade->numeric
        << "\n";
	return 0;
}

Here is a makefile to compile everything:

# makefile
# --------
all: student_dvr
clean:
    rm student_dvr *.o
student_dvr: student_dvr.cpp grade.o student.o person.o
    c++ student_dvr.cpp grade.o student.o person.o -o student_dvr
grade.o: grade.cpp grade.h
    c++ -c grade.cpp
student.o: student.cpp student.h
    c++ -c student.cpp
person.o: person.cpp person.h
    c++ -c person.cpp

Declarative languages

Main article: Declarative programming

Imperative languages have one major criticism: assigning an expression to a non-local variable may produce an unintended side effect. Declarative languages generally omit the assignment statement and the control flow. They describe what computation should be performed and not how to compute it. Two broad categories of declarative languages are functional languages and logical languages.
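A short C illustration of that criticism (the function and variable names are invented): the second function quietly modifies a non-local variable, so calling it changes more than its returned value.

#include <stdio.h>

int balance = 100;                 /* non-local (global) state                       */

int pure_add(int a, int b)         /* no side effects: result depends only on inputs */
{
    return a + b;
}

int add_and_record(int a, int b)   /* hidden side effect: modifies non-local state   */
{
    balance = balance - a;         /* unintended consequence for later computations  */
    return a + b;
}

int main(void)
{
    printf("%d %d\n", pure_add(2, 3), add_and_record(2, 3));
    printf("balance is now %d\n", balance);   /* changed, even though we only "added" */
    return 0;
}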

The principle behind a functional language is to use lambda calculus as a guide for a well-defined semantics. In mathematics, a function is a rule that maps elements from an expression to a range of values. Consider the function:

times_10(x) = 10 * x

The expression 10 * x is mapped by the function times_10() to a range of values. One value happens to be 20. This occurs when x is 2. So, the application of the function is mathematically written as:

times_10(2) = 20

A functional language compiler will not store this value in a variable. Instead, it will push the value onto the computer's stack before setting the program counter back to the calling function. The calling function will then pop the value from the stack.

Imperative languages do support functions. Therefore, functional programming can be achieved in an imperative language, if the programmer uses discipline. However, a functional language will force this discipline onto the programmer through its syntax. Functional languages have a syntax tailored to emphasize the what.

A functional program is developed with a set of primitive functions followed by a single driver function. Consider the snippet:

function max( a, b ){/* code omitted */}

function min( a, b ){/* code omitted */}

function range( a, b, c ) {

return max( a, max( b, c ) ) - min( a, min( b, c ) );

}

The primitives are max() and min(). The driver function is range(). Executing:

put( range( 10, 4, 7) ); will output 6.

Functional languages are used in computer science research to explore new language features. Moreover, their lack of side-effects has made them popular in parallel programming and concurrent programming. However, application developers prefer the object-oriented features of imperative languages.

Lisp

Lisp (1958) stands for "LISt Processor". It is tailored to process lists. A full structure of the data is formed by building lists of lists. In memory, a tree data structure is built. Internally, the tree structure lends itself nicely to recursive functions. The syntax to build a tree is to enclose the space-separated elements within parentheses. The following is a list of three elements. The first two elements are themselves lists of two elements:

((A B) (HELLO WORLD) 94)

Lisp has functions to extract and reconstruct elements. The function head() returns the first element in the list. The function tail() returns a list containing everything but the first element. The function cons() returns a list formed by prepending an element onto an existing list. Therefore, the following expression will return the list x:

cons(head(x), tail(x))
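For comparison with imperative languages, here is a rough C sketch (not from the original, and simplified to integer elements only) of head(), tail(), and cons() over a singly linked list; it shows that cons(head(x), tail(x)) rebuilds a list equal to x.

#include <stdio.h>
#include <stdlib.h>

typedef struct node { int value; struct node *next; } node;

/* cons(): build a list whose first element is value and whose remainder is rest */
static node *cons(int value, node *rest)
{
    node *n = malloc(sizeof *n);
    if (n == NULL)
        exit(1);
    n->value = value;
    n->next  = rest;
    return n;
}

static int   head(node *list) { return list->value; }   /* first element            */
static node *tail(node *list) { return list->next;  }   /* everything but the first */

int main(void)
{
    node *x = cons(1, cons(2, cons(3, NULL)));
    node *y = cons(head(x), tail(x));                    /* a list equal to x        */
    printf("%d %d %d\n", head(y), head(tail(y)), head(tail(tail(y))));
    return 0;                                /* memory is not freed in this tiny sketch */
}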

One drawback of Lisp is that when many functions are nested, the parentheses may look confusing. Modern Lisp environments help ensure parentheses match. As an aside, Lisp does support the imperative language operations of the assignment statement and goto loops. Also, Lisp is not concerned with the datatype of the elements at compile time. Instead, it assigns (and may reassign) the datatypes at runtime. Assigning the datatype at runtime is called dynamic binding. Whereas dynamic binding increases the language's flexibility, programming errors may linger until late in the software development process.

Writing large, reliable, and readable Lisp programs requires forethought. If properly planned, the program may be much shorter than an equivalent imperative language program. Lisp is widely used in artificial intelligence. However, its usage has been accepted only because it has imperative language operations, making unintended side-effects possible.

ML

ML (1973) stands for "Meta Language". ML checks to make sure only data of the same type are compared with one another. For example, this function has one input parameter (an integer) and returns an integer:

fun times_10(n : int) : int = 10 * n;

ML is not parenthesis-eccentric like Lisp. The following is an application of times_10():

times_10 2

It returns "20 : int". (Both the results and the datatype are returned.)

Like Lisp, ML is tailored to process lists. Unlike Lisp, each element is the same datatype. Moreover, ML assigns the datatype of an element at compile-time. Assigning the datatype at compile-time is called static binding. Static binding increases reliability because the compiler checks the context of variables before they are used.

Prolog

Prolog (1972) stands for "PROgramming in LOGic". It is a logic programming language, based on formal logic. The language was developed by Alain Colmerauer and Philippe Roussel in Marseille, France. It is an implementation of Selective Linear Definite clause resolution, pioneered by Robert Kowalski and others at the University of Edinburgh.

The building blocks of a Prolog program are facts and rules. Here is a simple example:

cat(tom).                        % tom is a cat
mouse(jerry).                    % jerry is a mouse
animal(X) :- cat(X).             % each cat is an animal
animal(X) :- mouse(X).           % each mouse is an animal
big(X)   :- cat(X).              % each cat is big
small(X) :- mouse(X).            % each mouse is small
eat(X,Y) :- mouse(X), cheese(Y). % each mouse eats each cheese
eat(X,Y) :- big(X),   small(Y).  % each big animal eats each small animal

After all the facts and rules are entered, then a question can be asked:

Will Tom eat Jerry?
?- eat(tom,jerry).
true

The following example shows how Prolog will convert a letter grade to its numeric value:

numeric_grade('A', 4).
numeric_grade('B', 3).
numeric_grade('C', 2).
numeric_grade('D', 1).
numeric_grade('F', 0).
numeric_grade(X, -1) :- X \= 'A', X \= 'B', X \= 'C', X \= 'D', X \= 'F'.
grade('The Student', 'A').
?- grade('The Student', X), numeric_grade(X, Y).
X = 'A',
Y = 4

Here is a comprehensive example:

1) All dragons billow fire, or equivalently, a thing billows fire if the thing is a dragon:

billows_fire(X) :-
    is_a_dragon(X).

2) A creature billows fire if one of its parents billows fire:

billows_fire(X) :-
    is_a_creature(X),
    is_a_parent_of(Y,X),
    billows_fire(Y).

3) A thing X is a parent of a thing Y if X is the mother of Y or X is the father of Y:

is_a_parent_of(X, Y):- is_the_mother_of(X, Y).
is_a_parent_of(X, Y):- is_the_father_of(X, Y).

4) A thing is a creature if the thing is a dragon:

is_a_creature(X) :-
    is_a_dragon(X).

5) Norberta is a dragon, and Puff is a creature. Norberta is the mother of Puff.

is_a_dragon(norberta).
is_a_creature(puff).
is_the_mother_of(norberta, puff).

Rule (2) is a recursive (inductive) definition. It can be understood declaratively, without the need to understand how it is executed.

Rule (3) shows how functions are represented by using relations. Here, the mother and father functions ensure that every individual has only one mother and only one father.

Prolog is an untyped language. Nonetheless, inheritance can be represented by using predicates. Rule (4) asserts that a creature is a superclass of a dragon.

Questions are answered using backward reasoning. Given the question:

 ?- billows_fire(X).

Prolog generates two answers:

X = norberta
X = puff

Practical applications for Prolog are knowledge representation and problem solving in artificial intelligence.

Object-oriented programming

Object-oriented programming is a programming method to execute operations (functions) on objects. The basic idea is to group the characteristics of a phenomenon into an object container and give the container a name. The operations on the phenomenon are also grouped into the container. Object-oriented programming developed by combining the need for containers and the need for safe functional programming. This programming method need not be confined to an object-oriented language. In an object-oriented language, an object container is called a class. In a non-object-oriented language, a data structure (which is also known as a record) may become an object container. To turn a data structure into an object container, operations need to be written specifically for the structure. The resulting structure is called an abstract datatype. However, inheritance will be missing. Nonetheless, this shortcoming can be overcome.

Here is a C programming language header file for the GRADE abstract datatype in a simple school application:

/* grade.h */
/* ------- */
/* Used to allow multiple source files to include */
/* this header file without duplication errors.   */
/* ---------------------------------------------- */
#ifndef GRADE_H
#define GRADE_H
typedef struct
{
    char letter;
} GRADE;
/* Constructor */
/* ----------- */
GRADE *grade_new( char letter );
int grade_numeric( char letter );
#endif

The grade_new() function performs the same algorithm as the C++ constructor operation.

Here is a C programming language source file for the GRADE abstract datatype in a simple school application:

/* grade.c */
/* ------- */
#include "grade.h"
GRADE *grade_new( char letter )
{
    GRADE *grade;
    /* Allocate heap memory */
    /* -------------------- */
    if ( ! ( grade = calloc( 1, sizeof ( GRADE ) ) ) )
    {
        fprintf(stderr,
                "ERROR in %s/%s/%d: calloc() returned empty.\n",
                __FILE__,
                __FUNCTION__,
                __LINE__ );
        exit( 1 );
    }
    grade->letter = letter;
    return grade;
}
int grade_numeric( char letter )
{
    if ( ( letter == 'A' || letter == 'a' ) )
        return 4;
    else
    if ( ( letter == 'B' || letter == 'b' ) )
        return 3;
    else
    if ( ( letter == 'C' || letter == 'c' ) )
        return 2;
    else
    if ( ( letter == 'D' || letter == 'd' ) )
        return 1;
    else
    if ( ( letter == 'F' || letter == 'f' ) )
        return 0;
    else
        return -1;
}

In the constructor, the function calloc() is used instead of malloc() because each memory cell will be set to zero.

Here is a C programming language header file for the PERSON abstract datatype in a simple school application:

/* person.h */
/* -------- */
#ifndef PERSON_H
#define PERSON_H
typedef struct
{
    char *name;
} PERSON;
/* Constructor */
/* ----------- */
PERSON *person_new( char *name );
#endif

Here is a C programming language source file for the PERSON abstract datatype in a simple school application:

/* person.c */
/* -------- */
#include "person.h"
PERSON *person_new( char *name )
{
    PERSON *person;
    if ( ! ( person = calloc( 1, sizeof ( PERSON ) ) ) )
    {
        fprintf(stderr,
                "ERROR in %s/%s/%d: calloc() returned empty.\n",
                __FILE__,
                __FUNCTION__,
                __LINE__ );
        exit( 1 );
    }
    person->name = name;
    return person;
}

Here is a C programming language header file for the STUDENT abstract datatype in a simple school application:

/* student.h */
/* --------- */
#ifndef STUDENT_H
#define STUDENT_H
#include "person.h"
#include "grade.h"
typedef struct
{
    /* A STUDENT is a subset of PERSON. */
    /* -------------------------------- */
    PERSON *person;
    GRADE *grade;
} STUDENT;
/* Constructor */
/* ----------- */
STUDENT *student_new( char *name );
#endif

Here is a C programming language source file for the STUDENT abstract datatype in a simple school application:

/* student.c */
/* --------- */
#include "student.h"
#include "person.h"
STUDENT *student_new( char *name )
{
    STUDENT *student;
    if ( ! ( student = calloc( 1, sizeof ( STUDENT ) ) ) )
    {
        fprintf(stderr,
                "ERROR in %s/%s/%d: calloc() returned empty.\n",
                __FILE__,
                __FUNCTION__,
                __LINE__ );
        exit( 1 );
    }
    /* Execute the constructor of the PERSON superclass. */
    /* ------------------------------------------------- */
    student->person = person_new( name );
    return student;
}

Here is a driver program for demonstration:

/* student_dvr.c */
/* ------------- */
#include <stdio.h>
#include "student.h"
int main( void )
{
    STUDENT *student = student_new( "The Student" );
    student->grade = grade_new( 'a' );
    printf( "%s: Numeric grade = %d\n",
            /* Whereas a subset exists, inheritance does not. */
            student->person->name,
            /* Functional programming is executing functions just-in-time (JIT) */
            grade_numeric( student->grade->letter ) );
	return 0;
}

Here is a makefile to compile everything:

# makefile
# --------
all: student_dvr
clean:
    rm student_dvr *.o
student_dvr: student_dvr.c grade.o student.o person.o
    gcc student_dvr.c grade.o student.o person.o -o student_dvr
grade.o: grade.c grade.h
    gcc -c grade.c
student.o: student.c student.h
    gcc -c student.c
person.o: person.c person.h
    gcc -c person.c

The formal strategy to build an object-oriented design is to:

  • Identify the objects. Most likely these will be nouns.
  • Identify each object's attributes. What helps to describe the object?
  • Identify each object's actions. Most likely these will be verbs.
  • Identify the relationships from object to object. Most likely these will be verbs.

For example:

  • A person is a human identified by a name.
  • A grade is an achievement identified by a letter.
  • A student is a person who earns a grade.

Syntax and semantics

Production rules consist of a set of terminals and non-terminals.

The syntax of a computer program is a list of production rules which form its grammar. A programming language's grammar correctly places its declarations, expressions, and statements. Complementing the syntax of a language are its semantics. The semantics describe the meanings attached to various syntactic constructs. A syntactic construct may need a semantic description because a production rule may have an invalid interpretation. Also, different languages might have the same syntax; however, their behaviors may be different.

The syntax of a language is formally described by listing the production rules. Whereas the syntax of a natural language is extremely complicated, a subset of the English language can have this production rule listing:

  1. a sentence is made up of a noun-phrase followed by a verb-phrase;
  2. a noun-phrase is made up of an article followed by an adjective followed by a noun;
  3. a verb-phrase is made up of a verb followed by a noun-phrase;
  4. an article is 'the';
  5. an adjective is 'big' or
  6. an adjective is 'small';
  7. a noun is 'cat' or
  8. a noun is 'mouse';
  9. a verb is 'eats';

The grammatical categories (sentence, noun-phrase, verb-phrase, article, adjective, noun, and verb) are known as non-terminals. The words in 'single quotes' are known as terminals.

From this production rule listing, complete sentences may be formed using a series of replacements. The process is to replace non-terminals with either a valid non-terminal or a valid terminal. The replacement process repeats until only terminals remain. One valid sentence is:

  • sentence
  • noun-phrase verb-phrase
  • article adjective noun verb-phrase
  • the adjective noun verb-phrase
  • the big noun verb-phrase
  • the big cat verb-phrase
  • the big cat verb noun-phrase
  • the big cat eats noun-phrase
  • the big cat eats article adjective noun
  • the big cat eats the adjective noun
  • the big cat eats the small noun
  • the big cat eats the small mouse

However, another combination results in an invalid sentence:

  • the small mouse eats the big cat

Therefore, a semantic rule is necessary to correctly describe the meaning of an eat activity.

One production rule listing method is called the Backus–Naur form (BNF). BNF describes the syntax of a language and itself has a syntax. This recursive definition is an example of a meta-language. The syntax of BNF includes:

  • ::= which translates to is made up of a when a non-terminal is to its right. It translates to is when a terminal is to its right.
  • | which translates to or.
  • < and > which surround non-terminals.

Using BNF, a subset of the English language can have this production rule listing:

<sentence> ::= <noun-phrase><verb-phrase>
<noun-phrase> ::= <article><adjective><noun>
<verb-phrase> ::= <verb><noun-phrase>
<article> ::= the
<adjective> ::= big | small
<noun> ::= cat | mouse
<verb> ::= eats

Using BNF, a signed-integer has the production rule listing:

<signed-integer> ::= <sign><integer>
<sign> ::= + | -
<integer> ::= <digit> | <digit><integer>
<digit> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9

Notice the recursive production rule:

<integer> ::= <digit> | <digit><integer>

This allows for an infinite number of possibilities. Therefore, a semantic rule is necessary to limit the number of digits.
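To show how the recursive rule can be followed mechanically, here is a small C sketch (an illustration, not part of the original) of a recognizer for <signed-integer>; it accepts a string only if the entire string matches the grammar.

#include <stdio.h>
#include <ctype.h>

/* <integer> ::= <digit> | <digit><integer>                              */
static const char *integer(const char *s)
{
    if (!isdigit((unsigned char)*s))
        return NULL;                 /* must begin with a digit          */
    s++;
    const char *rest = integer(s);   /* try the recursive alternative    */
    return rest ? rest : s;          /* otherwise a single digit matches */
}

/* <signed-integer> ::= <sign><integer>                                  */
static int signed_integer(const char *s)
{
    if (*s != '+' && *s != '-')
        return 0;
    const char *end = integer(s + 1);
    return end != NULL && *end == '\0';  /* the whole string must match  */
}

int main(void)
{
    printf("%d %d %d\n", signed_integer("+42"), signed_integer("-007"), signed_integer("12"));
    return 0;
}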

Notice the leading zero possibility in the production rules:

<integer> ::= <digit> | <digit><integer>
<digit> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9

Therefore, a semantic rule is necessary to specify that leading zeros must be ignored.

Two formal methods are available to describe semantics. They are denotational semantics and axiomatic semantics.

Software engineering and computer programming

Prior to programming languages, Betty Jennings and Fran Bilas programmed the ENIAC by moving cables and setting switches.

Software engineering is a variety of techniques to produce quality computer programs. Computer programming is the process of writing or editing source code. In a formal environment, a systems analyst will gather information from managers about all the organization's processes to automate. This professional then prepares a detailed plan for the new or modified system. The plan is analogous to an architect's blueprint.

Performance objectives

The systems analyst has the objective to deliver the right information to the right person at the right time. The critical factors to achieve this objective are:

  1. The quality of the output. Is the output useful for decision-making?
  2. The accuracy of the output. Does it reflect the true situation?
  3. The format of the output. Is the output easily understood?
  4. The speed of the output. Time-sensitive information is important when communicating with the customer in real time.

Cost objectives

Achieving performance objectives should be balanced with all of the costs, including:

  1. Development costs.
  2. Uniqueness costs. A reusable system may be expensive. However, it might be preferred over a limited-use system.
  3. Hardware costs.
  4. Operating costs.

Applying a systems development process will mitigate the axiom: the later in the process an error is detected, the more expensive it is to correct.

Waterfall model

The waterfall model is an implementation of a systems development process. As the waterfall label implies, the basic phases overlap each other:

  1. The investigation phase is to understand the underlying problem.
  2. The analysis phase is to understand the possible solutions.
  3. The design phase is to plan the best solution.
  4. The implementation phase is to program the best solution.
  5. The maintenance phase lasts throughout the life of the system. Changes to the system after it is deployed may be necessary. Faults may exist, including specification faults, design faults, or coding faults. Improvements may be necessary. Adaption may be necessary to react to a changing environment.

Computer programmer

A computer programmer is a specialist responsible for writing or modifying the source code to implement the detailed plan. A programming team is likely to be needed because most systems are too large to be completed by a single programmer. However, adding programmers to a project may not shorten the completion time. Instead, it may lower the quality of the system. To be effective, program modules need to be defined and distributed to team members. Also, team members must interact with one another in a meaningful and effective way.

Computer programmers may be programming in the small: programming within a single module. Chances are a module will execute modules located in other source code files. Therefore, computer programmers may be programming in the large: programming modules so they will effectively couple with each other. Programming-in-the-large includes contributing to the application programming interface (API).

Program modules

Modular programming is a technique to refine imperative language programs. Refined programs may reduce the software size, separate responsibilities, and thereby mitigate software aging. A program module is a sequence of statements that are bounded within a block and together identified by a name. Modules have a function, context, and logic:

  • The function of a module is what it does.
  • The context of a module is the elements being performed upon.
  • The logic of a module is how it performs the function.

The module's name should be derived first by its function, then by its context. Its logic should not be part of the name. For example, function compute_square_root( x ) or function compute_square_root_integer( i : integer ) are appropriate module names. However, function compute_square_root_by_division( x ) is not.

The degree of interaction within a module is its level of cohesion. Cohesion is a judgment of the relationship between a module's name and its function. The degree of interaction between modules is the level of coupling. Coupling is a judgment of the relationship between a module's context and the elements it operates upon.

Cohesion

The levels of cohesion from worst to best are:

  • Coincidental Cohesion: A module has coincidental cohesion if it performs multiple functions, and the functions are completely unrelated. For example, function read_sales_record_print_next_line_convert_to_float(). Coincidental cohesion occurs in practice if management enforces silly rules. For example, "Every module will have between 35 and 50 executable statements."
  • Logical Cohesion: A module has logical cohesion if it has available a series of functions, but only one of them is executed. For example, function perform_arithmetic( perform_addition, a, b ).
  • Temporal Cohesion: A module has temporal cohesion if it performs functions related to time. One example is function initialize_variables_and_open_files(). Another example is a sequence of stages: stage_one(), stage_two(), ...
  • Procedural Cohesion: A module has procedural cohesion if it performs multiple loosely related functions. For example, function read_part_number_update_employee_record().
  • Communicational Cohesion: A module has communicational cohesion if it performs multiple closely related functions. For example, function read_part_number_update_sales_record().
  • Informational Cohesion: A module has informational cohesion if it performs multiple functions, but each function has its own entry and exit points. Moreover, the functions share the same data structure. Object-oriented classes work at this level.
  • Functional Cohesion: A module has functional cohesion if it achieves a single goal working only on local variables. Moreover, it may be reusable in other contexts (see the sketch after this list).
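
The contrast between logical and functional cohesion can be sketched in C. This is a minimal illustration, not a definitive design; the enum name and its values are assumptions standing in for the perform_addition flag above.

    #include <stdio.h>

    enum operation { PERFORM_ADDITION, PERFORM_SUBTRACTION };

    /* Logical cohesion: several functions are available, and a control
       flag passed by the caller selects which one executes. */
    int perform_arithmetic(enum operation op, int a, int b)
    {
        return op == PERFORM_ADDITION ? a + b : a - b;
    }

    /* Functional cohesion: one goal, local data only, reusable elsewhere. */
    int add(int a, int b)
    {
        return a + b;
    }

    int main(void)
    {
        printf("%d\n", perform_arithmetic(PERFORM_ADDITION, 2, 3));   /* prints 5 */
        printf("%d\n", add(2, 3));                                    /* prints 5 */
        return 0;
    }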

Coupling

The levels of coupling from worst to best are:

  • Content Coupling: A module has content coupling if it modifies a local variable of another function. COBOL used to do this with the alter verb.
  • Common Coupling: A module has common coupling if it modifies a global variable.
  • Control Coupling: A module has control coupling if another module can modify its control flow. For example, perform_arithmetic( perform_addition, a, b ). Instead, the decision should be based on the makeup of the returned object.
  • Stamp Coupling: A module has stamp coupling if an element of a data structure passed as a parameter is modified. Object-oriented classes work at this level.
  • Data Coupling: A module has data coupling if all of its input parameters are needed and none of them are modified. Moreover, the result of the function is returned as a single object (see the sketch after this list).
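
A minimal sketch in C of the difference between stamp coupling and data coupling; the sales-record structure and its field names are hypothetical.

    #include <stdio.h>

    struct sales_record {
        double price;
        double discount_rate;
        int    quantity;
    };

    /* Stamp coupling: an element of the passed data structure is modified. */
    void apply_discount(struct sales_record *record)
    {
        record->price = record->price * (1.0 - record->discount_rate);
    }

    /* Data coupling: every parameter is needed, none is modified,
       and the result is returned as a single object. */
    double compute_discounted_price(double price, double discount_rate)
    {
        return price * (1.0 - discount_rate);
    }

    int main(void)
    {
        struct sales_record record = { 100.0, 0.2, 1 };
        apply_discount(&record);
        printf("%.2f\n", record.price);                          /* prints 80.00 */
        printf("%.2f\n", compute_discounted_price(100.0, 0.2));  /* prints 80.00 */
        return 0;
    }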

Data flow analysis

A sample function-level data-flow diagram

Data flow analysis is a design method used to achieve modules of functional cohesion and data coupling. The input to the method is a data-flow diagram. A data-flow diagram is a set of ovals representing modules. Each module's name is displayed inside its oval. Modules may be at the executable level or the function level.

The diagram also has arrows connecting modules to each other. Arrows pointing into modules represent a set of inputs. Each module should have only one arrow pointing out from it to represent its single output object. (Optionally, an additional exception arrow points out.) A daisy chain of ovals will convey an entire algorithm. The input modules should start the diagram. The input modules should connect to the transform modules. The transform modules should connect to the output modules.
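
A minimal sketch in C of the daisy chain a function-level data-flow diagram describes: an input module feeds a transform module, which feeds an output module, and each module produces a single output. The module names and the fixed input value are hypothetical.

    #include <stdio.h>

    double read_temperature(void)                  /* input module */
    {
        return 98.6;                               /* stands in for real input */
    }

    double convert_to_celsius(double fahrenheit)   /* transform module */
    {
        return (fahrenheit - 32.0) * 5.0 / 9.0;
    }

    void print_celsius(double celsius)             /* output module */
    {
        printf("%.1f C\n", celsius);
    }

    int main(void)
    {
        print_celsius(convert_to_celsius(read_temperature()));   /* prints 37.0 C */
        return 0;
    }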

Functional categories

A diagram showing that the user interacts with the application software. The application software interacts with the operating system, which interacts with the hardware.

Computer programs may be categorized along functional lines. The main functional categories are application software and system software. System software includes the operating system, which couples computer hardware with application software. The purpose of the operating system is to provide an environment where application software executes in a convenient and efficient manner. Both application software and system software execute utility programs. At the hardware level, a microcode program controls the circuits throughout the central processing unit.

Application software

Main article: Application software

Application software is the key to unlocking the potential of the computer system. Enterprise application software bundles accounting, personnel, customer, and vendor applications. Examples include enterprise resource planning, customer relationship management, and supply chain management software.

Enterprise applications may be developed in-house as a one-of-a-kind proprietary software. Alternatively, they may be purchased as off-the-shelf software. Purchased software may be modified to provide custom software. If the application is customized, then either the company's resources are used or the resources are outsourced. Outsourced software development may be from the original software vendor or a third-party developer.

The potential advantages of in-house software are that features and reports may be developed exactly to specification. Management may also be involved in the development process and offer a level of control. Management may decide to counteract a competitor's new initiative or implement a customer or vendor requirement. A merger or acquisition may necessitate enterprise software changes. The potential disadvantages of in-house software are that time and resource costs may be extensive. Furthermore, risks concerning features and performance may linger.

The potential advantages of off-the-shelf software are that upfront costs are identifiable, the basic needs should be fulfilled, and its performance and reliability have a track record. The potential disadvantages of off-the-shelf software are that it may have unnecessary features that confuse end users, it may lack features the enterprise needs, and the data flow may not match the enterprise's work processes.

One approach to economically obtaining a customized enterprise application is through an application service provider. Specialty companies provide hardware, custom software, and end-user support. They may speed the development of new applications because they possess skilled information system staff. The biggest advantage is that it frees in-house resources from staffing and managing complex computer projects. Many application service providers target small, fast-growing companies with limited information system resources. On the other hand, larger companies with major systems will likely have their technical infrastructure in place. One risk is having to trust an external organization with sensitive information. Another risk is having to rely on the reliability of the provider's infrastructure.

Operating system

See also: Operating system

An operating system is the low-level software that supports a computer's basic functions, such as scheduling processes and controlling peripherals.

In the 1950s, the programmer, who was also the operator, would write a program and run it. After the program finished executing, the output may have been printed, or it may have been punched onto paper tape or cards for later processing. More often than not the program did not work. The programmer then looked at the console lights and fiddled with the console switches. If less fortunate, the programmer had a memory printout made for further study. In the 1960s, programmers reduced the amount of wasted time by automating the operator's job. A program called an operating system was kept in the computer at all times.

The term operating system may refer to two levels of software. The operating system may refer to the kernel program that manages the processes, memory, and devices. More broadly, the operating system may refer to the entire package of the central software. The package includes a kernel program, command-line interpreter, graphical user interface, utility programs, and editor.

Kernel program

A kernel connects the application software to the hardware of a computer.

The kernel's main purpose is to manage the limited resources of a computer:

Physical memory is scattered around RAM and the hard disk. Virtual memory is one continuous block.
  • When the kernel initially loads an executable into memory, it divides the address space logically into regions. The kernel maintains a master-region table and many per-process-region (pregion) tables, one for each running process. These tables constitute the virtual address space. The master-region table is used to determine where each region's contents are located in physical memory. The pregion tables allow each process to have its own program (text) pregion, data pregion, and stack pregion.
  • The program pregion stores machine instructions. Since machine instructions do not change, the program pregion may be shared by many processes of the same executable.
  • To save time and memory, the kernel may load only blocks of execution instructions from the disk drive, not the entire executable file.
  • The kernel is responsible for translating virtual addresses into physical addresses. The kernel may request data from the memory controller and, instead, receive a page fault. If so, the kernel loads the missing data into physical memory and updates the memory management unit so the virtual address can be translated.
  • The kernel allocates memory from the heap upon request by a process. When the process is finished with the memory, the process may request that it be freed. If the process exits without requesting that all allocated memory be freed, then the kernel performs garbage collection to free the memory.
  • The kernel also ensures that a process only accesses its own memory, and not that of the kernel or other processes.
  • The kernel program should perform file system management. The kernel has instructions to create, retrieve, update, and delete files.
  • The kernel program should perform device management. The kernel provides programs to standardize and simplify the interface to the mouse, keyboard, disk drives, printers, and other devices. Moreover, the kernel should arbitrate access to a device if two processes request it at the same time.
  • The kernel program should perform network management. The kernel transmits and receives packets on behalf of processes. One key service is to find an efficient route to the target system.
  • The kernel program should provide system level functions for programmers to use.
    • Programmers access files through a relatively simple interface that in turn executes a relatively complicated low-level I/O interface (see the sketch after this list). The low-level interface includes file creation, file descriptors, file seeking, physical reading, and physical writing.
    • Programmers create processes through a relatively simple interface that in turn executes a relatively complicated low-level interface.
    • Programmers perform date/time arithmetic through a relatively simple interface that in turn executes a relatively complicated low-level time interface.
  • The kernel program should provide a communication channel between executing processes. For a large software system, it may be desirable to engineer the system into smaller processes. Processes may communicate with one another by sending and receiving signals.
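
A minimal sketch, assuming C on a Unix-like system, of the two file interfaces mentioned above: the relatively simple buffered interface (fopen and fgetc) and the lower-level descriptor interface (open and read) that the simple interface ultimately relies on. The file name is hypothetical.

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        /* Relatively simple interface: buffered streams. */
        FILE *stream = fopen("example.txt", "r");
        if (stream != NULL) {
            int c = fgetc(stream);                /* read one character */
            if (c != EOF)
                printf("first byte: %c\n", c);
            fclose(stream);
        }

        /* Low-level interface: file descriptors and physical reads. */
        int fd = open("example.txt", O_RDONLY);
        if (fd != -1) {
            char buffer[1];
            if (read(fd, buffer, 1) == 1)         /* one physical read */
                printf("first byte: %c\n", buffer[0]);
            close(fd);
        }
        return 0;
    }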

Originally, operating systems were programmed in assembly; however, modern operating systems are typically written in higher-level languages like C, Objective-C, and Swift.

Utility program

A utility program is designed to aid system administration and software execution. Operating systems execute hardware utility programs to check the status of disk drives, memory, speakers, and printers. A utility program may optimize the placement of a file on a crowded disk. System utility programs monitor hardware and network performance. When a metric is outside an acceptable range, an alert is triggered.

Utility programs include compression programs, so data files are stored on less disk space. Compressed files also save time when they are transmitted over the network. Utility programs can sort and merge data sets. Utility programs detect computer viruses.

Microcode program

Main article: Microcode
Diagrams of the NOT, NAND, NOR, AND, and OR gates

A microcode program is the bottom-level interpreter that controls the data path of software-driven computers. (Advances in hardware have migrated these operations to hardware execution circuits.) Microcode instructions allow the programmer to more easily implement the digital logic level—the computer's real hardware. The digital logic level is the boundary between computer science and computer engineering.

A logic gate is a tiny device, built from one or more transistors, that can return one of two signals: on or off.

  • Having one transistor forms the NOT gate.
  • Connecting two transistors in series forms the NAND gate.
  • Connecting two transistors in parallel forms the NOR gate.
  • Connecting a NOT gate to a NAND gate forms the AND gate.
  • Connecting a NOT gate to a NOR gate forms the OR gate.

These five gates form the building blocks of binary algebra—the digital logic functions of the computer.
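
A minimal sketch in C of the five gates as functions on single bits. It is only an illustration of the composition described above: AND is formed by feeding NAND through NOT, and OR by feeding NOR through NOT.

    #include <stdio.h>

    int gate_not(int a)          { return !a; }
    int gate_nand(int a, int b)  { return !(a && b); }
    int gate_nor(int a, int b)   { return !(a || b); }
    int gate_and(int a, int b)   { return gate_not(gate_nand(a, b)); }
    int gate_or(int a, int b)    { return gate_not(gate_nor(a, b)); }

    int main(void)
    {
        /* Print the truth table of the derived gates. */
        for (int a = 0; a <= 1; a++)
            for (int b = 0; b <= 1; b++)
                printf("a=%d b=%d  NAND=%d NOR=%d AND=%d OR=%d\n",
                       a, b, gate_nand(a, b), gate_nor(a, b),
                       gate_and(a, b), gate_or(a, b));
        return 0;
    }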

Microcode instructions are mnemonics programmers may use to execute digital logic functions instead of forming them in binary algebra. They are stored in a central processing unit's (CPU) control store. These hardware-level instructions move data throughout the data path.

The micro-instruction cycle begins when the microsequencer uses its microprogram counter to fetch the next machine instruction from random-access memory. The next step is to decode the machine instruction by selecting the proper output line to the hardware module. The final step is to execute the instruction using the hardware module's set of gates.
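
A minimal sketch in C of the fetch-decode-execute cycle described above. The three-instruction machine, its opcodes, and its single accumulator are hypothetical simplifications, not a real control store.

    #include <stdio.h>

    enum opcode { LOAD_A, ADD_A, HALT };

    struct instruction { enum opcode op; int operand; };

    int main(void)
    {
        struct instruction memory[] = {                          /* program in RAM */
            { LOAD_A, 40 }, { ADD_A, 2 }, { HALT, 0 }
        };
        int program_counter = 0;
        int accumulator = 0;

        for (;;) {
            struct instruction ir = memory[program_counter++];   /* fetch */
            switch (ir.op) {                                      /* decode */
            case LOAD_A: accumulator = ir.operand;  break;        /* execute */
            case ADD_A:  accumulator += ir.operand; break;
            case HALT:   printf("%d\n", accumulator); return 0;   /* prints 42 */
            }
        }
    }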

A symbolic representation of an ALU

Instructions to perform arithmetic are passed through an arithmetic logic unit (ALU). The ALU has circuits to perform elementary operations to add, shift, and compare integers. By combining and looping the elementary operations through the ALU, the CPU performs its complex arithmetic.
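
A minimal sketch in C of how looping the ALU's elementary operations yields a more complex one: multiplication performed by repeated compare, add, and shift. The function name is hypothetical.

    #include <stdio.h>

    unsigned int multiply(unsigned int a, unsigned int b)
    {
        unsigned int product = 0;
        while (b != 0) {                 /* compare */
            if (b & 1)                   /* test the lowest bit */
                product = product + a;   /* add */
            a = a << 1;                  /* shift left */
            b = b >> 1;                  /* shift right */
        }
        return product;
    }

    int main(void)
    {
        printf("%u\n", multiply(6, 7));  /* prints 42 */
        return 0;
    }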

Microcode instructions move data between the CPU and the memory controller. Memory controller microcode instructions manipulate two registers. The memory address register is used to access each memory cell's address. The memory data register is used to set and read each cell's contents.
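
A minimal sketch in C of the two memory-controller registers: the memory address register selects a cell, and the memory data register carries the value written to or read from it. The 16-cell memory and the helper functions are hypothetical.

    #include <stdio.h>

    static int memory[16];                  /* the memory cells */
    static int memory_address_register;     /* selects a cell's address */
    static int memory_data_register;        /* holds a cell's contents */

    void write_cell(void) { memory[memory_address_register] = memory_data_register; }
    void read_cell(void)  { memory_data_register = memory[memory_address_register]; }

    int main(void)
    {
        memory_address_register = 3;            /* select cell 3 */
        memory_data_register = 42;              /* set its contents */
        write_cell();

        memory_data_register = 0;
        read_cell();                            /* read cell 3 back */
        printf("%d\n", memory_data_register);   /* prints 42 */
        return 0;
    }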

Microcode instructions move data between the CPU and the many computer buses. The disk controller bus writes to and reads from hard disk drives. Data is also moved between the CPU and other functional units via the peripheral component interconnect express bus.

Notes

  1. The Prolog language allows for a database of facts and rules to be entered in any order. However, a question about a database must be at the very end.
  2. An executable has each machine instruction ready for the CPU.
  3. For more information, visit X86 assembly language#Instruction types.
  4. introduced in 1999
  5. Operators like x++ will usually compile to a single instruction.
  6. The line numbers were typically incremented by 10 to leave room if additional statements were added later.
  7. This function could be written more concisely as int increment_counter(){ static int counter; return ++counter;}. 1) Static variables are automatically initialized to zero. 2) ++counter is a prefix increment operator.
  8. This is despite the metaphor of a stack, which normally grows from bottom to top.
  9. C also provides the calloc() function to allocate heap memory. It provides two additional services: 1) It allows the programmer to create an array of arbitrary size. 2) It sets each memory cell to zero.
  10. For string variables, C provides the strdup() function. It executes both the allocation function and the copy function.
  11. The UNIX operating system was written in C, macOS was written in Objective-C, and Swift replaced Objective-C.

References

  1. "ISO/IEC 2382:2015". ISO. 2020-09-03. Archived from the original on 2016-06-17. Retrieved 2022-05-26. all or part of the programs, procedures, rules, and associated documentation of an information processing system.
  2. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 7. ISBN 0-201-71012-9. An alternative to compiling a source program is to use an interpreter. An interpreter can directly execute a source program
  3. Silberschatz, Abraham (1994). Operating System Concepts, Fourth Edition. Addison-Wesley. p. 98. ISBN 978-0-201-50480-4.
  4. Tanenbaum, Andrew S. (1990). Structured Computer Organization, Third Edition. Prentice Hall. p. 32. ISBN 978-0-13-854662-5.
  5. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 7. ISBN 0-201-71012-9.
  6. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 30. ISBN 0-201-71012-9. Their intention was to produce a language that was very simple for students to learn
  7. ^ Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 31. ISBN 0-201-71012-9.
  8. ^ Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 30. ISBN 0-201-71012-9.
  9. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 30. ISBN 0-201-71012-9. The idea was that students could be merely casual users or go on from Basic to more sophisticated and powerful languages
  10. ^ McCartney, Scott (1999). ENIAC – The Triumphs and Tragedies of the World's First Computer. Walker and Company. p. 16. ISBN 978-0-8027-1348-3.
  11. Tanenbaum, Andrew S. (1990). Structured Computer Organization, Third Edition. Prentice Hall. p. 14. ISBN 978-0-13-854662-5.
  12. Bromley, Allan G. (1998). "Charles Babbage's Analytical Engine, 1838" (PDF). IEEE Annals of the History of Computing. 20 (4): 29–45. doi:10.1109/85.728228. S2CID 2285332. Archived (PDF) from the original on 2016-03-04. Retrieved 2015-10-30.
  13. ^ Tanenbaum, Andrew S. (1990). Structured Computer Organization, Third Edition. Prentice Hall. p. 15. ISBN 978-0-13-854662-5.
  14. J. Fuegi; J. Francis (October–December 2003), "Lovelace & Babbage and the creation of the 1843 'notes'", Annals of the History of Computing, 25 (4): 16, 19, 25, doi:10.1109/MAHC.2003.1253887
  15. Rosen, Kenneth H. (1991). Discrete Mathematics and Its Applications. McGraw-Hill, Inc. p. 654. ISBN 978-0-07-053744-6. Turing machines can model all the computations that can be performed on a computing machine.
  16. Linz, Peter (1990). An Introduction to Formal Languages and Automata. D. C. Heath and Company. p. 234. ISBN 978-0-669-17342-0.
  17. Linz, Peter (1990). An Introduction to Formal Languages and Automata. D. C. Heath and Company. p. 243. ISBN 978-0-669-17342-0. ll the common mathematical functions, no matter how complicated, are Turing-computable.
  18. ^ McCartney, Scott (1999). ENIAC – The Triumphs and Tragedies of the World's First Computer. Walker and Company. p. 102. ISBN 978-0-8027-1348-3.
  19. McCartney, Scott (1999). ENIAC – The Triumphs and Tragedies of the World's First Computer. Walker and Company. p. 94. ISBN 978-0-8027-1348-3.
  20. McCartney, Scott (1999). ENIAC – The Triumphs and Tragedies of the World's First Computer. Walker and Company. p. 107. ISBN 978-0-8027-1348-3.
  21. McCartney, Scott (1999). ENIAC – The Triumphs and Tragedies of the World's First Computer. Walker and Company. p. 120. ISBN 978-0-8027-1348-3.
  22. ^ McCartney, Scott (1999). ENIAC – The Triumphs and Tragedies of the World's First Computer. Walker and Company. p. 118. ISBN 978-0-8027-1348-3.
  23. McCartney, Scott (1999). ENIAC – The Triumphs and Tragedies of the World's First Computer. Walker and Company. p. 119. ISBN 978-0-8027-1348-3.
  24. McCartney, Scott (1999). ENIAC – The Triumphs and Tragedies of the World's First Computer. Walker and Company. p. 123. ISBN 978-0-8027-1348-3.
  25. ^ Tanenbaum, Andrew S. (1990). Structured Computer Organization, Third Edition. Prentice Hall. p. 21. ISBN 978-0-13-854662-5.
  26. ^ Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 27. ISBN 0-201-71012-9.
  27. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 29. ISBN 0-201-71012-9.
  28. ^ Silberschatz, Abraham (1994). Operating System Concepts, Fourth Edition. Addison-Wesley. p. 6. ISBN 978-0-201-50480-4.
  29. ^ "Bill Pentz — A bit of Background: the Post-War March to VLSI". Digibarn Computer Museum. August 2008. Archived from the original on March 21, 2022. Retrieved January 31, 2022.
  30. ^ To the Digital Age: Research Labs, Start-up Companies, and the Rise of MOS. Johns Hopkins University Press. 2002. ISBN 9780801886393. Archived from the original on February 2, 2023. Retrieved February 3, 2022.
  31. Chalamala, Babu (2017). "Manufacturing of Silicon Materials for Microelectronics and Solar PV". Sandia National Laboratories. Archived from the original on March 23, 2023. Retrieved February 8, 2022.
  32. "Fabricating ICs Making a base wafer". Britannica. Archived from the original on February 8, 2022. Retrieved February 8, 2022.
  33. "Introduction to NMOS and PMOS Transistors". Anysilicon. 4 November 2021. Archived from the original on 6 February 2022. Retrieved February 5, 2022.
  34. "microprocessor definition". Britannica. Archived from the original on April 1, 2022. Retrieved April 1, 2022.
  35. "Chip Hall of Fame: Intel 4004 Microprocessor". Institute of Electrical and Electronics Engineers. July 2, 2018. Archived from the original on February 7, 2022. Retrieved January 31, 2022.
  36. "360 Revolution" (PDF). Father, Son & Co. 1990. Archived (PDF) from the original on 2022-10-10. Retrieved February 5, 2022.
  37. ^ "Inside the world's long-lost first microcomputer". c/net. January 8, 2010. Archived from the original on February 1, 2022. Retrieved January 31, 2022.
  38. "Bill Gates, Microsoft and the IBM Personal Computer". InfoWorld. August 23, 1982. Archived from the original on 18 February 2023. Retrieved 1 February 2022.
  39. ^ Stroustrup, Bjarne (2013). The C++ Programming Language, Fourth Edition. Addison-Wesley. p. 10. ISBN 978-0-321-56384-2.
  40. ^ Stroustrup, Bjarne (2013). The C++ Programming Language, Fourth Edition. Addison-Wesley. p. 11. ISBN 978-0-321-56384-2.
  41. ^ Stair, Ralph M. (2003). Principles of Information Systems, Sixth Edition. Thomson. p. 159. ISBN 0-619-06489-7.
  42. ^ Linz, Peter (1990). An Introduction to Formal Languages and Automata. D. C. Heath and Company. p. 2. ISBN 978-0-669-17342-0.
  43. Weiss, Mark Allen (1994). Data Structures and Algorithm Analysis in C++. Benjamin/Cummings Publishing Company, Inc. p. 29. ISBN 0-8053-5443-3.
  44. Tanenbaum, Andrew S. (1990). Structured Computer Organization, Third Edition. Prentice Hall. p. 17. ISBN 978-0-13-854662-5.
  45. ^ Stair, Ralph M. (2003). Principles of Information Systems, Sixth Edition. Thomson. p. 160. ISBN 0-619-06489-7.
  46. ^ Tanenbaum, Andrew S. (1990). Structured Computer Organization, Third Edition. Prentice Hall. p. 399. ISBN 978-0-13-854662-5.
  47. Tanenbaum, Andrew S. (1990). Structured Computer Organization, Third Edition. Prentice Hall. p. 400. ISBN 978-0-13-854662-5.
  48. Tanenbaum, Andrew S. (1990). Structured Computer Organization, Third Edition. Prentice Hall. p. 398. ISBN 978-0-13-854662-5.
  49. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 26. ISBN 0-201-71012-9.
  50. ^ Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 37. ISBN 0-201-71012-9.
  51. Stair, Ralph M. (2003). Principles of Information Systems, Sixth Edition. Thomson. p. 160. ISBN 0-619-06489-7. With third-generation and higher-level programming languages, each statement in the language translates into several instructions in machine language.
  52. Wilson, Leslie B. (1993). Comparative Programming Languages, Second Edition. Addison-Wesley. p. 75. ISBN 978-0-201-56885-1.
  53. Stroustrup, Bjarne (2013). The C++ Programming Language, Fourth Edition. Addison-Wesley. p. 40. ISBN 978-0-321-56384-2.
  54. ^ Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 16. ISBN 0-201-71012-9.
  55. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 24. ISBN 0-201-71012-9.
  56. ^ Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 25. ISBN 0-201-71012-9.
  57. ^ Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 19. ISBN 0-201-71012-9.
  58. ^ "Memory Layout of C Programs". 12 September 2011. Archived from the original on 6 November 2021. Retrieved 6 November 2021.
  59. ^ Kernighan, Brian W.; Ritchie, Dennis M. (1988). The C Programming Language Second Edition. Prentice Hall. p. 31. ISBN 0-13-110362-8.
  60. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 128. ISBN 0-201-71012-9.
  61. ^ Kerrisk, Michael (2010). The Linux Programming Interface. No Starch Press. p. 121. ISBN 978-1-59327-220-3.
  62. Kerrisk, Michael (2010). The Linux Programming Interface. No Starch Press. p. 122. ISBN 978-1-59327-220-3.
  63. Kernighan, Brian W.; Ritchie, Dennis M. (1988). The C Programming Language Second Edition. Prentice Hall. p. 185. ISBN 0-13-110362-8.
  64. ^ Kernighan, Brian W.; Ritchie, Dennis M. (1988). The C Programming Language Second Edition. Prentice Hall. p. 187. ISBN 0-13-110362-8.
  65. ^ Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 38. ISBN 0-201-71012-9.
  66. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 193. ISBN 0-201-71012-9.
  67. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 39. ISBN 0-201-71012-9.
  68. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 35. ISBN 0-201-71012-9.
  69. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 192. ISBN 0-201-71012-9.
  70. Stroustrup, Bjarne (2013). The C++ Programming Language, Fourth Edition. Addison-Wesley. p. 22. ISBN 978-0-321-56384-2.
  71. Stroustrup, Bjarne (2013). The C++ Programming Language, Fourth Edition. Addison-Wesley. p. 21. ISBN 978-0-321-56384-2.
  72. Stroustrup, Bjarne (2013). The C++ Programming Language, Fourth Edition. Addison-Wesley. p. 49. ISBN 978-0-321-56384-2.
  73. ^ Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 218. ISBN 0-201-71012-9.
  74. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 217. ISBN 0-201-71012-9.
  75. Weiss, Mark Allen (1994). Data Structures and Algorithm Analysis in C++. Benjamin/Cummings Publishing Company, Inc. p. 103. ISBN 0-8053-5443-3. When there is a function call, all the important information needs to be saved, such as register values (corresponding to variable names) and the return address (which can be obtained from the program counter) ... When the function wants to return, it ... restores all the registers. It then makes the return jump. Clearly, all of this work can be done using a stack, and that is exactly what happens in virtually every programming language that implements recursion.
  76. ^ Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 230. ISBN 0-201-71012-9.
  77. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 240. ISBN 0-201-71012-9.
  78. ^ Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 241. ISBN 0-201-71012-9.
  79. Jones, Robin; Maynard, Clive; Stewart, Ian (December 6, 2012). The Art of Lisp Programming. Springer Science & Business Media. p. 2. ISBN 9781447117193.
  80. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 220. ISBN 0-201-71012-9.
  81. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 221. ISBN 0-201-71012-9.
  82. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 229. ISBN 0-201-71012-9.
  83. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 227. ISBN 0-201-71012-9.
  84. ^ Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 222. ISBN 0-201-71012-9.
  85. Gordon, Michael J. C. (1996). "From LCF to HOL: a short history". Archived from the original on 2016-09-05. Retrieved 2021-10-30.
  86. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 233. ISBN 0-201-71012-9.
  87. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 235. ISBN 0-201-71012-9.
  88. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 55. ISBN 0-201-71012-9.
  89. Colmerauer, A.; Roussel, P. (1992). "The birth of Prolog" (PDF). ACM SIGPLAN Notices. 28 (3). Association for Computing Machinery: 5. doi:10.1145/155360.155362.
  90. Kowalski, R., Dávila, J., Sartor, G. and Calejo, M., 2023. Logical English for law and education. In Prolog: The Next 50 Years (pp. 287-299). Cham: Springer Nature Switzerland.
  91. ^ Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 35. ISBN 0-201-71012-9. Simula was based on Algol 60 with one very important addition — the class concept. ... The basic idea was that the data (or data structure) and the operations performed on it belong together
  92. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 39. ISBN 0-201-71012-9. Originally, a large number of experimental languages were designed, many of which combined object-oriented and functional programming.
  93. Schach, Stephen R. (1990). Software Engineering. Aksen Associates Incorporated Publishers. p. 284. ISBN 0-256-08515-3. While it is true that OOD as such is not supported by the majority of popular languages, a large subset of OOD can be used.
  94. Weiss, Mark Allen (1994). Data Structures and Algorithm Analysis in C++. Benjamin/Cummings Publishing Company, Inc. p. 57. ISBN 0-8053-5443-3.
  95. Schach, Stephen R. (1990). Software Engineering. Aksen Associates Incorporated Publishers. p. 285. ISBN 0-256-08515-3.
  96. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 290. ISBN 0-201-71012-9. The syntax (or grammar) of a programming language describes the correct form in which programs may be written
  97. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 78. ISBN 0-201-71012-9. The main components of an imperative language are declarations, expressions, and statements.
  98. ^ Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 290. ISBN 0-201-71012-9.
  99. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 294. ISBN 0-201-71012-9.
  100. Rosen, Kenneth H. (1991). Discrete Mathematics and Its Applications. McGraw-Hill, Inc. p. 615. ISBN 978-0-07-053744-6.
  101. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 291. ISBN 0-201-71012-9.
  102. Rosen, Kenneth H. (1991). Discrete Mathematics and Its Applications. McGraw-Hill, Inc. p. 616. ISBN 978-0-07-053744-6.
  103. Rosen, Kenneth H. (1991). Discrete Mathematics and Its Applications. McGraw-Hill, Inc. p. 623. ISBN 978-0-07-053744-6.
  104. Rosen, Kenneth H. (1991). Discrete Mathematics and Its Applications. McGraw-Hill, Inc. p. 624. ISBN 978-0-07-053744-6.
  105. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 297. ISBN 0-201-71012-9.
  106. Schach, Stephen R. (1990). Software Engineering. Aksen Associates Incorporated Publishers. p. Preface. ISBN 0-256-08515-3.
  107. ^ Stair, Ralph M. (2003). Principles of Information Systems, Sixth Edition. Thomson. p. 507. ISBN 0-619-06489-7.
  108. ^ Stair, Ralph M. (2003). Principles of Information Systems, Sixth Edition. Thomson. p. 513. ISBN 0-619-06489-7.
  109. Stair, Ralph M. (2003). Principles of Information Systems, Sixth Edition. Thomson. p. 514. ISBN 0-619-06489-7.
  110. Stair, Ralph M. (2003). Principles of Information Systems, Sixth Edition. Thomson. p. 516. ISBN 0-619-06489-7.
  111. Schach, Stephen R. (1990). Software Engineering. Aksen Associates Incorporated Publishers. p. 8. ISBN 0-256-08515-3.
  112. Stair, Ralph M. (2003). Principles of Information Systems, Sixth Edition. Thomson. p. 517. ISBN 0-619-06489-7.
  113. Schach, Stephen R. (1990). Software Engineering. Aksen Associates Incorporated Publishers. p. 345. ISBN 0-256-08515-3.
  114. ^ Schach, Stephen R. (1990). Software Engineering. Aksen Associates Incorporated Publishers. p. 319. ISBN 0-256-08515-3.
  115. ^ Schach, Stephen R. (1990). Software Engineering. Aksen Associates Incorporated Publishers. p. 331. ISBN 0-256-08515-3.
  116. Schach, Stephen R. (1990). Software Engineering. Aksen Associates Incorporated Publishers. p. 216. ISBN 0-256-08515-3.
  117. ^ Schach, Stephen R. (1990). Software Engineering. Aksen Associates Incorporated Publishers. p. 219. ISBN 0-256-08515-3.
  118. ^ Schach, Stephen R. (1990). Software Engineering. Aksen Associates Incorporated Publishers. p. 226. ISBN 0-256-08515-3.
  119. ^ Schach, Stephen R. (1990). Software Engineering. Aksen Associates Incorporated Publishers. p. 220. ISBN 0-256-08515-3.
  120. Schach, Stephen R. (1990). Software Engineering. Aksen Associates Incorporated Publishers. p. 258. ISBN 0-256-08515-3.
  121. Schach, Stephen R. (1990). Software Engineering. Aksen Associates Incorporated Publishers. p. 259. ISBN 0-256-08515-3.
  122. ^ Silberschatz, Abraham (1994). Operating System Concepts, Fourth Edition. Addison-Wesley. p. 1. ISBN 978-0-201-50480-4.
  123. Stair, Ralph M. (2003). Principles of Information Systems, Sixth Edition. Thomson. p. 147. ISBN 0-619-06489-7. The key to unlocking the potential of any computer system is application software.
  124. ^ Stair, Ralph M. (2003). Principles of Information Systems, Sixth Edition. Thomson. p. 147. ISBN 0-619-06489-7.
  125. Stair, Ralph M. (2003). Principles of Information Systems, Sixth Edition. Thomson. p. 147. ISBN 0-619-06489-7. third-party software firm, often called a value-added software vendor, may develop or modify a software program to meet the needs of a particular industry or company.
  126. Stair, Ralph M. (2003). Principles of Information Systems, Sixth Edition. Thomson. p. 148. ISBN 0-619-06489-7. Heading: Proprietary Software; Subheading: Advantages; Quote: You can get exactly what you need in terms of features, reports, and so on.
  127. Stair, Ralph M. (2003). Principles of Information Systems, Sixth Edition. Thomson. p. 148. ISBN 0-619-06489-7. Heading: Proprietary Software; Subheading: Advantages; Quote: Being involved in the development offers a further level of control over the results.
  128. Stair, Ralph M. (2003). Principles of Information Systems, Sixth Edition. Thomson. p. 147. ISBN 0-619-06489-7. Heading: Proprietary Software; Subheading: Advantages; Quote: There is more flexibility in making modifications that may be required to counteract a new initiative by one of your competitors or to meet new supplier and/or customer requirements.
  129. ^ Stair, Ralph M. (2003). Principles of Information Systems, Sixth Edition. Thomson. p. 149. ISBN 0-619-06489-7.
  130. Tanenbaum, Andrew S. (1990). Structured Computer Organization, Third Edition. Prentice Hall. p. 11. ISBN 978-0-13-854662-5.
  131. ^ Kerrisk, Michael (2010). The Linux Programming Interface. No Starch Press. p. 21. ISBN 978-1-59327-220-3.
  132. ^ Kerrisk, Michael (2010). The Linux Programming Interface. No Starch Press. p. 22. ISBN 978-1-59327-220-3.
  133. ^ Bach, Maurice J. (1986). The Design of the UNIX Operating System. Prentice-Hall, Inc. p. 152. ISBN 0-13-201799-7.
  134. Tanenbaum, Andrew S. (2013). Structured Computer Organization, Sixth Edition. Pearson. p. 443. ISBN 978-0-13-291652-3.
  135. Lacamera, Daniele (2018). Embedded Systems Architecture. Packt. p. 8. ISBN 978-1-78883-250-2.
  136. Kerrisk, Michael (2010). The Linux Programming Interface. No Starch Press. p. 23. ISBN 978-1-59327-220-3.
  137. Kernighan, Brian W. (1984). The Unix Programming Environment. Prentice Hall. p. 201. ISBN 0-13-937699-2.
  138. Kerrisk, Michael (2010). The Linux Programming Interface. No Starch Press. p. 187. ISBN 978-1-59327-220-3.
  139. Haviland, Keith (1987). Unix System Programming. Addison-Wesley Publishing Company. p. 121. ISBN 0-201-12919-1.
  140. ^ Stair, Ralph M. (2003). Principles of Information Systems, Sixth Edition. Thomson. p. 145. ISBN 0-619-06489-7.
  141. ^ Stair, Ralph M. (2003). Principles of Information Systems, Sixth Edition. Thomson. p. 146. ISBN 0-619-06489-7.
  142. ^ Tanenbaum, Andrew S. (2013). Structured Computer Organization, Sixth Edition. Pearson. p. 6. ISBN 978-0-13-291652-3.
  143. Tanenbaum, Andrew S. (2013). Structured Computer Organization, Sixth Edition. Pearson. p. 243. ISBN 978-0-13-291652-3.
  144. Tanenbaum, Andrew S. (2013). Structured Computer Organization, Sixth Edition. Pearson. p. 147. ISBN 978-0-13-291652-3.
  145. Tanenbaum, Andrew S. (2013). Structured Computer Organization, Sixth Edition. Pearson. p. 148. ISBN 978-0-13-291652-3.
  146. Tanenbaum, Andrew S. (2013). Structured Computer Organization, Sixth Edition. Pearson. p. 253. ISBN 978-0-13-291652-3.
  147. Tanenbaum, Andrew S. (2013). Structured Computer Organization, Sixth Edition. Pearson. p. 255. ISBN 978-0-13-291652-3.
  148. Tanenbaum, Andrew S. (2013). Structured Computer Organization, Sixth Edition. Pearson. p. 161. ISBN 978-0-13-291652-3.
  149. Tanenbaum, Andrew S. (2013). Structured Computer Organization, Sixth Edition. Pearson. p. 166. ISBN 978-0-13-291652-3.
  150. Tanenbaum, Andrew S. (2013). Structured Computer Organization, Sixth Edition. Pearson. p. 249. ISBN 978-0-13-291652-3.
  151. Tanenbaum, Andrew S. (2013). Structured Computer Organization, Sixth Edition. Pearson. p. 111. ISBN 978-0-13-291652-3.