Sage-Code Laboratory

Programming Concepts

This article is very important for beginners. We start with the basic concepts used in computer science to create computer programs. All programming languages are based on these concepts. Some of these concepts are used to make new software, while others are useful for running applications to solve problems or to run a business.

Video Lecture

If you have time and bandwidth you can watch this video to learn all the basic concepts of computer programming. This video is about 35 minutes long and is very informative for beginners. It was filmed in Chicago by Elucian on August 3, 2016. Please press the thumbs-up button if you like it. Ok? Thanks!

Computer programs

Computer programs are step-by-step instructions that a computer can execute. These are grouped into statements that are stored in modules or packages. A program can have many packages. Sometimes programs are also called applications.

Computer programs consist of files, stored in folders. We call this a "project" or "code base". A project contains text files having a language-specific extension, data files and perhaps images, audio and video files. Usually one file is the main file that uses the other files. A project should also contain documentation files.

Program execution will resolve a problem and communicate the result or will have a physical effect: for example printing a message to the computer monitor or moving the arm of a robot. A program can be executed one or multiple times.

Sub-programs

Sub-programs have one role: "separation of concerns". It means one sub-program does only one small thing. The main program combines the effects of multiple sub-programs to implement the end-to-end functionality of the system. Sub-programs are contained in programs or delivered as components or libraries to be re-used in many applications.

Depending on the computer language, sub-programs are called sub-routines, procedures, functions, methods or rules. What they have in common is that they all encapsulate a specific functionality. The main program orchestrates the execution of sub-programs by transferring control from the main process into sub-processes that can be executed synchronously or asynchronously (sometimes in parallel).

Parameters and results

Sub-routines can receive parameters and can compute results. Sometimes we call these "formal parameters" or "receiver variables". Usually parameters are immutable during sub-routine execution. The result can be collected by the main program and used in subsequent computations. Depending on the programming language, the syntax for parameters and sub-routine calls can differ.

Parameters are similar to local variables. They are known only inside the sub-routine. Once the subroutine is finished, the value of the parameters is lost or can be propagated back to the caller. This depends on the declaration syntax and purpose of the parameter.

Arguments and values

Sub-programs must receive values that are assigned to parameters. These values are called "actual parameters" or "arguments". The important thing is that argument values can be constants, expressions or variables.

If an argument is a constant, you may be able to modify the corresponding parameter inside the sub-program, but this has no effect on the argument itself; the argument remains constant. If an argument is a variable, then you have a choice: propagate the changes back to the caller or protect the parameter against this side effect.

Output parameters are parameters that can propagate modified values back outside of the sub-program. This is a feature many languages have. It may be implemented using "pointers" or "references". For each computer language you have to learn how to use input/output parameters.
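Example: Python has no explicit output parameters, but a mutable argument such as a list can play the same role. This is only a minimal sketch; the function name append_total and the variable names are just illustrative.

# append_total writes its result into the mutable list received as an argument
def append_total(values, result):
    result.append(sum(values))   # this change is visible to the caller

output = []                      # mutable list used as an "output parameter"
append_total([1, 2, 3], output)
print(output)                    # [6]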

Variable number of arguments

Lately, most languages support a variable number of arguments, using a special prefix for a parameter that accepts multiple values. This symbol can be "*" or "...", depending on the language.

When you make a sub-routine call, you can pass many arguments separated by commas; they will be grouped into a collection and sent to the sub-routine. Then you can access these values by index and use each one.

Sometimes parameters are optional. In this case, the call can also have a variable number of arguments, and the optional arguments can be specified by name. You will understand these concepts better if we use an example:

Example:

# c is optional with default 0; *d collects any extra positional arguments
def add(a, b, c=0, *d):
    return a + b + c + sum(d)

print(add(1, 2))           # 3
print(add(1, 2, 3))        # 6
print(add(1, 2, c=3))      # 6
print(add(1, 2, 3, 4, 5))  # 15

Namespaces and scope

Each sub-program usually has its own "local scope". Sometimes in the local scope you can define nested sub-programs. Most languages will enable creation of variables and constants in local scope.

When you define identifiers you must know where each identifier is visible. If an identifier is visible in your entire program, it is called a "global" identifier. The area of visibility is called "scope". For example, a program can have a single "global scope".
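Example: a minimal Python sketch of global versus local scope; the names count and local_step are just illustrative.

count = 0                        # global identifier, visible in the whole module

def increment():
    local_step = 1               # local identifier, visible only inside this function
    global count                 # refer to the global name instead of creating a local one
    count = count + local_step

increment()
print(count)                     # 1
# print(local_step)              # would raise NameError: not visible outside the function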

Shadowing Effect

Shadowing is a side effect of most programming languages that support local scope. If you define a variable in a local scope with the same name as a variable defined in an outer scope, the two names collide. To avoid the collision, languages usually hide the external variable and give access only to the local one. This effect is called "shadowing". Parameters also shadow outer variables or parameters with the same name.
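Example: a short Python sketch of shadowing; the variable name x is just illustrative.

x = "outer"            # variable defined in the outer (module) scope

def demo():
    x = "inner"        # this local x shadows the outer x
    print(x)           # inner

demo()
print(x)               # outer  (the outer x was never changed)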

Dot notation

When you define a data structure, the elements of the data structure can be public or private. If the elements are public, the usual notation, called "dot notation", enables you to access a member of the structure by name.

Example:

class Person:
    name = "Barbu"
    age = 22
pass #end class
# using dot notation
print(Person.name) # Barbu
print(Person.age) # 22

Note: In the previous example we define a simple "structure class" in Python, sometimes also called a "data class".

Programming Symbols

A program is made of symbols that are used to create expressions. Most languages use expressions that are similar to the ones we learn in mathematics. Sometimes beginners have a hard time understanding the difference between "expressions" and "symbols" and their relation to "functions". We will explain these things here.

Symbols can be single-character or multi-character. Single-character symbols can be digits, letters or other ASCII and Unicode characters. We use symbols to create operators, punctuation marks, numeric literals, string literals, identifiers, enumerations and data collections.

In general a program manipulates "data", which is also made of symbols. We learn in school to count using the Arabic digit symbols: 0,1,2,3,4,5,6,7,8,9. We also learn how to read and write text using the Latin alphabet: Aa, Bb, Cc, Dd, Ee, Ff, Gg ... Zz. This is not the case in all cultures around the globe. Some cultures use other symbols to represent data.

Encoding

The way we represent data using symbols is called encoding. The most popular encoding system is known as Unicode. Before Unicode was invented we used a different encoding system known as ASCII. For different countries there are Extended ASCII tables that include diacritics and special characters. Unicode also has several variations: UTF-16, UTF-32 and UTF-8.
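Example: a minimal Python sketch showing how text maps to bytes under UTF-8; the sample strings are just illustrative.

text = "Salut!"                      # a short text with plain Latin letters

utf8_bytes = text.encode("utf-8")    # encode the text as UTF-8 bytes
print(utf8_bytes)                    # b'Salut!'
print(utf8_bytes.decode("utf-8"))    # back to the original string

# a character outside ASCII needs more than one byte in UTF-8
print("ă".encode("utf-8"))           # b'\xc4\x83'  (two bytes)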

Binary

Computers use binary representations, "1" and "0", for encoding symbols. The "1" is usually a positive electrical charge while "0" represents no charge. It is similar to a light bulb: when the light is on we can consider it logic "1"; when the light is off we consider it logic "0".

Storage

Electronic devices can store data using specialized storage components. There are several physical methods: "electro-static", "electro-magnetic", "optical", "magneto-optical" and of course "solid state" electronics. The point is that all computation is digital and is based on a binary system.

Numeric Systems

A numeric system is a way of representing numbers. In computer science a number can be represented in different systems, and you have to learn them in order to understand code and to be able to produce new code.

Decimal System

In school you learn arithmetic using the decimal system. It is based on the old Arabic symbols, now considered the best numeric representation. It uses a single symbol for each digit, except 10 which uses two symbols.

Digit Names

We have a "name" for each digit. Funny though, numbers are translated differently in different languages.

Decimal  English  Romanian  Italian  Spanish  French
0        zero     zero      zero     cero     zéro
1        one      unu       uno      uno      une
2        two      doi       due      dos      deux
3        three    trei      tre      tres     trois
4        four     patru     quattro  cuatro   quatre
5        five     cinci     cinque   cinco    cinq
6        six      şase      sei      seis     six
7        seven    şapte     sette    siete    sept
8        eight    opt       otto     ocho     huit
9        nine     nouă      nove     nueve    neuf
10       ten      zece      dieci    diez     dix

Large Numbers

For large numbers we group digits. We separate groups of 3 digits using a dot or a comma separator. This is tricky business because Europeans use the dot separator while Americans use the comma separator.

  • European number: 24.220.421 = 24000000 + 220000 + 421
  • American number: 24,220,421 = 24000000 + 220000 + 421
The first digit of a large number cannot be 0. Leading zeros are not significant. Therefore 024.220.421 is not a correctly written decimal number.

Small Numbers

For small numbers we use decimals. These are the digits that follow "." (dot), but this is the American notation. In Europe we use "," (comma) to write small numbers. So now you can be totally confused, because we are so divided even though we all live on the same planet.

  • European small number: 0,22
  • American small number: 0.22
The last digit of a small number should not be 0. Trailing zeros are not significant. Therefore 0.220 is not a correctly written decimal number.

Scientific Notation

In scientific notation we multiply a number "m" between 1 and 10 by 10 raised to the power "n" (m × 10ⁿ). This notation is used to display very large numbers in scientific papers. Some examples of large numbers:

  • European large number: 1,22 × 10¹² = 1.220.000.000.000
  • American large number: 1.22 × 10¹² = 1,220,000,000,000

Engineering Notation

In engineering notation we multiply a number "m" by 10 raised to a power "n" that is a multiple of 3 (m × 10ⁿ). This notation is also used to display very large numbers. Some examples of large numbers:

  • European large number: 12,12 × 10¹² = 12.120.000.000.000
  • American large number: 12.12 × 10¹² = 12,120,000,000,000

Exponential Notation

We use the "E" notation in computer science because we use ASCII symbols and there is no support for superscript. So we use the letters "e" and "E" to express a power of 10 without writing the number 10 explicitly. This notation looks like "###E##" or "###e##", where "#" is a digit. Fractions use a negative exponent: "###E-##".

  • Earth mass: 5.9724E24 = 5972400000000000000000000 kg.
  • An inch is: 2.54E1 mm = 25.4 mm
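Example: Python reads "E" notation directly as floating point literals, so a quick sketch can verify the values above.

earth_mass = 5.9724E24      # kilograms
inch_in_mm = 2.54E1

print(earth_mass)           # 5.9724e+24
print(inch_in_mm)           # 25.4
print(1.5E-3)               # 0.0015  (negative exponent = fraction)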

Roman System

Believe it or not, the Romans were quite bad at mathematics. They knew how to count only up to about 5000, using stick-like symbols. They did not have a representation for zero. So this numeric system is actually not used in computer science.

In this system, numbers are represented using stick-like symbols. It is a fun exercise to write a program that converts a decimal number into a Roman number; a small sketch follows the tables below. Later, we will use this exercise to learn programming languages. Here are some Roman numbers:

Symbols used:

Symbol   I   V   X    L    C     D     M
Value    1   5   10   50   100   500   1000

Count to 10:

Roman    I   II   III   IV   V   VI   VII   VIII   IX   X
Decimal  1   2    3     4    5   6    7     8      9    10

To create numbers you can use addition or subtraction. Therefore you can create correct and incorrect numbers. Sometimes this can be quite difficult, so you need a lot of practice to read large Roman numbers.

Numerals can be added together to form new numbers (e.g., III = I + I + I = 3), but no more than three of the same numeral can be added together.

In addition, to form any numbers ending with 4 or 9 or the numbers 40, 90, 400, and 900, you must subtract from the larger unit because you cannot add more than three of the same numeral. For example, IV = V − I = 5 − 1 = 4.

Correct (using subtraction)   Incorrect (using addition)   Decimal
IV                            IIII                         4
IX                            VIIII                        9
XL                            XXXX                         40
XC                            LXXXX                        90
CD                            CCCC                         400
CM                            DCCCC                        900
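Example: as promised above, here is a minimal Python sketch of the conversion. The function name to_roman is just illustrative; it uses the subtraction pairs from the table.

# decimal to Roman converter based on the value/symbol pairs above
PAIRS = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
         (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
         (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]

def to_roman(number):
    result = ""
    for value, symbol in PAIRS:
        while number >= value:       # subtract the largest value that still fits
            result += symbol
            number -= value
    return result

print(to_roman(4))     # IV
print(to_roman(1994))  # MCMXCIV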

Binary System

A single storage unit is called a bit. It can store exclusively 0 or 1. Using multiple storage units grouped together we can store combinations of 0 and 1. For example, with two bits we can represent 4 combinations: {00, 01, 10, 11}. This is called the "binary" system. It has two digits: {0,1}.

Counting from 1 to 10

It takes 4 bits to be able to count from 0 to 10. This is one of the reasons computers have memory organized in multiples of 4. Now let's learn how to count in binary:

  • 0000 = 0
  • 0001 = 1
  • 0010 = 2
  • 0011 = 3
  • 0100 = 4
  • 0101 = 5
  • 0110 = 6
  • 0111 = 7
  • 1000 = 8
  • 1001 = 9
  • 1010 = 10

Octal System

If we group 3 bits we can represent 8 digits: {0,1,2,3,4,5,6,7}. This kind of representation is called the "octal" system, and it is very rarely used.

Hexadecimal System

We can use 4 bits and increase the number of combinations to 16 digits: {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F}. The first is {0001 = 1} and the last is {1111 = F}. This representation is called "hexadecimal". One hexadecimal digit occupies 4 bits, so it matches the binary system very well.

As you can see, there is no exact match between the 16 possible combinations and the 10 decimal digits. Therefore new digits were invented: {A, B, C, D, E, F}. Sometimes these digits are written using lowercase characters: {a, b, c, d, e, f}.
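Example: a small Python sketch converting the same number between the binary, octal and hexadecimal systems.

number = 10

print(bin(number))     # 0b1010  binary
print(oct(number))     # 0o12    octal
print(hex(number))     # 0xa     hexadecimal

# and back again, parsing a string in a given base
print(int("1010", 2))  # 10
print(int("FF", 16))   # 255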

Memory Addresses

Computer science uses groups of 8 bits that can represent values between (0..255) ≡ (00..FF). This group is called a byte: 1 byte = 8 bits.

Two bytes are used to represent a word: 1 word = 16 bits ≡ (0000..FFFF). On 16 bits we can represent 2¹⁶ = 65,536 addresses, that is numbers from 0 to 65,535.

Memory Capacity

In computer science we measure memory using bytes (B), kilobytes (KB) and megabytes (MB), then gigabytes (GB), terabytes (TB), petabytes (PB) and exabytes (EB).

  1. 1 B = 1 Byte = 8 bit
  2. 1 KB = 1024 B
  3. 1 MB = 1024 KB
  4. 1 GB = 1024 MB
  5. 1 TB = 1024 GB

On 32 bits we can represent 2³² = 4,294,967,296 addresses, that is numbers from 0 to 4,294,967,295. Therefore operating systems on 32 bits have a limited memory capacity of less than 4 GB of RAM.

On 64 bits we can represent 2⁶⁴ = 18,446,744,073,709,551,616 addresses. This is a very large number, corresponding to 16 exabytes of RAM capacity. Of course, in practical applications the maximum is much lower; for example, the AMD64 standard allows 256 TB of address space.
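Example: a quick Python check of these address-space sizes.

# number of distinct addresses for common word widths
for bits in (16, 32, 64):
    print(bits, "bits ->", format(2 ** bits, ","), "addresses")

# 16 bits -> 65,536 addresses
# 32 bits -> 4,294,967,296 addresses
# 64 bits -> 18,446,744,073,709,551,616 addresses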

Endian Encoding

The internal representation of numbers and symbols differs depending on the computer type, operating system and device. For now it suffices to know that, for example, the classic "PC" encoding is different from the classic "Mac" encoding.

In computing, endianness is the order or sequence of bytes of a word of digital data in computer memory. Endianness is primarily expressed as big-endian (BE) or little-endian (LE).

BE: A big-endian system stores the most significant byte of a word at the smallest memory address and the least significant byte at the largest.

LE: A little-endian system, in contrast, stores the least-significant byte at the smallest address.

Endianness may also be used to describe the order in which the bits are transmitted over a communication channel, e.g., big-endian in a communications channel transmits the most significant bits first.
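Example: a minimal Python sketch of the two byte orders, using int.to_bytes; the value 0xBEEF is just illustrative.

value = 0xBEEF                        # the same 16-bit number

print(value.to_bytes(2, "big"))       # b'\xbe\xef'  most significant byte first
print(value.to_bytes(2, "little"))    # b'\xef\xbe'  least significant byte first

# reading the bytes back with the wrong order gives a different number
print(int.from_bytes(b"\xbe\xef", "little"))  # 61374, which is 0xEFBE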

Identifiers

When you create a program, you give names to program elements. These names are symbolic representations for data elements, structures, groups of statements or sub-programs. You need a name so you can refer to data elements more easily, without repeating the data symbols themselves.

Example:

In this example we define a vector "v" using a Python one-dimensional list and then check if an element is a member of this list. "v" and "x" are both identifiers.

v = [1, 4, 6, 12, 32]
x = int(input("check this number: "))  # input() returns a string, so convert it to int
if x in v:
    print("found")
else:
    print("not found")

Sigil:

Sometimes a language uses a prefix for variable names. This prefix is called a "sigil" and it differentiates identifiers by role. For example, in Ruby global variables use the sigil "$". That means all variables that start with "$" are global variables, while the sigil "@" marks an object attribute.

In PHP all variables start with "$", global or not global. Some developers find this rule annoying. In my languages Bee and EVE I use sigil "$" for global system constants and "@" for global system variables.

Type systems

Programming languages must deal with data. To do this, data is classified into categories. For each category there are rules of representation and manipulation. This classification, together with its rules, is called a "type system" in computer science.

Immutable Data

Data can be embedded into the program itself or it can be external. When embedded in the program, data is immutable. That means it is constant. You can change this kind of data only if you change the program.

Mutable Data

External data is usually mutable if it is stored on a device with R/W (read & write) capability, like an HDD (hard disk) or a rewritable CD. Sometimes a device stores external data in read-only mode. For example, an optical disk or ROM (Read Only Memory) can store data that becomes immutable.

Variables

A variable represents data stored in RAM (Random Access Memory). We can alter the value of this kind of data very easily, many times, without wearing out the storage. At a low level data is stored as bits, "0" or "1", but in a high-level language you can store and manipulate abstract data types, for example: "numeric", "string", "boolean", "date", "time".

Measurement Units

Most computer languages have no notion of measurement units. So a number can represent any kind of physical or abstract quantity, for example width, height or weight. Only "date" and "time" have measurement units. Computer scientists believe that a computer language becomes too complex if it deals with measurement units.

Static typing

In computer science we say a programming language uses "static typing" when the data type of a variable or parameter must be defined at the same time as the variable and cannot be changed at run-time. You can, of course, change the declaration in the source code and then the variable will have a new type, but this is a permanent change.

Dynamic typing

In computer science we say a programming language uses "dynamic typing" when the data type of a variable or parameter is not declared and can change. This does not mean a variable has no type. It means the type can change when you change the value dynamically at run-time.
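Example: Python is dynamically typed, so a short sketch is enough to see the effect; the name item is just illustrative.

item = 10              # item is bound to an integer value
print(type(item))      # <class 'int'>

item = "ten"           # rebinding the same name to a string value
print(type(item))      # <class 'str'>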

Data Literals

Data literals are symbols or groups of symbols that represent constant data. For example, 100 represents the number 100 written in decimal. There are numerous other notations for numbers, representing different data types.

A group of multiple data elements, like a list or a data set, can have a special literal created with alphanumeric symbols, delimiters and separators. Once you have learned these conventions, most languages will be easier to grasp, since most of them use similar conventions. A short Python sketch follows the table below.

Example    Description
'a'        Single-quoted string
"string"   Double-quoted string
0b10110    Binary number
1234       Integer number
0xFFFF     Hexadecimal number
U+FFFF     Unicode code point
0.05       Decimal number
1E10       Scientific notation float number
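Example: most of these literal forms work directly in Python (a Unicode code point like U+FFFF is written as the escape "\uFFFF" inside a string instead).

print(0b10110)     # 22             binary literal
print(1234)        # 1234           integer literal
print(0xFFFF)      # 65535          hexadecimal literal
print(0.05)        # 0.05           decimal (floating point) literal
print(1E10)        # 10000000000.0  scientific notation
print("string")    # double-quoted string
print('a')         # single-quoted string (still a string in Python)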

Expressions

You should be familiar with this concept from mathematics. In computer science there are 3 kinds of expression notation: infix, postfix and prefix. It is easiest to demonstrate the differences by looking at examples of operators that take two operands:

Expressions types:

+ x y : Prefix
x + y : Infix
x y + : Postfix

In these expressions x and y are operands while "+" is an operator. The simplest expressions use one single operator and one operand. For example, "-4" is an expression, while "4" is not an expression but a single constant literal. "2 + 4", however, is an expression even if there is no variable involved.

Expressions can be combined into larger expressions using operators. The order of execution can be controlled using operator precedence and also round parentheses (x+y). We will investigate this in the next examples and in the Python sketch that follows them:

Infix Expressions:

x+y+z
a + b / 2 + c * 2
(a + b) / (a * b)
x ≠ y
a ≤ 5
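Example: the same infix expressions written in Python; the variable values are just illustrative, and the symbols ≠ and ≤ become != and <=.

a, b, c = 2, 4, 3
print(a + b / 2 + c * 2)    # 10.0   ( / and * bind tighter than + )
print((a + b) / (a * b))    # 0.75   parentheses change the evaluation order

x, y = 1, 5
print(x != y)               # True   (≠ is written !=)
print(a <= 5)               # True   (≤ is written <=)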

Most computer languages use infix expressions. You will learn the details about literals and expressions in the next course: CSP: Programming. Usually we describe literals and expressions as basic language concepts in the first or second tutorial article about every computer language.

Programming languages

Programming languages are artificial languages created for humans to describe and resolve problems. A computer can read these languages and convert them into executable instructions. In general, artificial languages are much simpler than spoken languages, but more precise.

Computer programs are created using a programming language. In the computer industry there are many programming languages available, with different styles of programming, called programming paradigms.

The terms "programming language" and "computer language" are considered synonyms. However, I think "computer language" refers to languages closer to the machine, while "programming language" refers to more abstract, higher-generation languages.

Example: Believe it or not this is a complete program written in Python ...

# compute factorial of n
def factorial(n):
    if n == 0:
        return 1
    else:
        return n * factorial(n - 1)

# call factorial function and capture result
result = factorial(5)
print(result)

Language classification

Computer languages can be categorized by their level of abstraction:

A: Close to machines

  • 1GL – First Generation Language
  • 2GL – Second Generation Language

B: Close to humans

  • 3GL – Third Generation Language
  • 4GL – Fourth Generation Language
  • 5GL – Fifth Generation Language

Examples:

  • Machine code is a 1st generation language;
  • Assembly is a 2nd generation language;
  • C is a 3rd generation language;
  • Java is a 3rd generation language;
  • SQL is a 4th generation language;
  • Haskell is a 4th generation language.

Programming Paradigms

Computer languages can also be classified by programming paradigm. A paradigm is a way of thinking about programming. It represents the semantic rules of a language that you must learn before you can read or write code. These are the programming paradigms used in various languages:

  • Linear programming,
  • Structured programming,
  • Procedural programming,
  • Imperative programming,
  • Declarative programming,
  • Object oriented programming,
  • Aspect oriented programming,
  • Functional programming,
  • Logic programming.
Some computer languages are pure and implement a single paradigm, while others are hybrid and implement multiple paradigms. In my opinion, multi-paradigm languages are superior but more difficult to learn. Pure languages are easier to implement but, due to many restrictions, harder to use.

Examples:

  1. FORTRAN is a pure structured language,
  2. Algol is a pure procedural language,
  3. Python is a multi-paradigm language,
  4. Ruby is a pure object-oriented language,
  5. Lisp is the first functional language,
  6. Haskell is a pure functional language,
  7. SQL is a pure declarative language,
  8. Prolog is a pure logic programming language.

Learning a programming language

Making software applications requires you to learn a computer language. Before you start, you must understand that this is not an easy task and it will require significant effort. To learn a computer language you have to learn 3 things:

  • Logic – the reasoning process to resolve problems;
  • Syntax – the words and rules of the language;
  • Semantics – the meaning of the statements we create.

Logic Science

Logic was developed by the Greeks and formalized by Aristotle. Logic means thought or reason. Initially logic was described only by philosophy; however, in time a new mathematical branch was developed. Logic is the science of truth. The purpose of logic is wisdom.

Logical Inference

Logical inference: using valid arguments in a sentence we can derive a logical conclusion. Sound arguments will produce the same conclusion no matter who makes the arguments or who derives the conclusion.

Logical inference is the most important concept in logic. It is also known as deduction or implication. It is a conclusion we reach after we study the arguments or premises in a sentence. Inference is a way of thinking and reasoning.

Inference and reasoning do not work the same way in all cultures. For example, a Christian will not reach the same conclusion from a religious sentence as a Buddhist or a Muslim would. The words may be interpreted in different ways and lead to different conclusions.

Sentences can be classified in three categories:

  • Imperative sentences
  • Interrogative sentences
  • Declarative sentences

It is important to know that logic cannot create good inferences from imperative or interrogative sentences, only from declarative sentences. An imperative sentence is a command from a superior entity or authority like an Emperor, Queen, Fuhrer, President or God.

Info: By making inferences from imperative sentences a culture can fall into irrational beliefs, wrong laws and social injustice. Logic can only work based on facts and solid arguments, not on irrational commands.

Fundamental logic rules and principles:

Let's say we have an argument or a premise and a sentence. Following are some logic rules that can apply to these two:

  • Identity rule: An argument is identical with itself and not something else;
  • Contradiction rule: An argument cannot be both true and false in the same sentence;
  • Validity rule: An argument is valid if the truth of the premises leads to the truth of the conclusion.

Validity and Soundness

A deductive argument is said to be valid if it takes a form that makes it impossible for the premises to be true and the conclusion nevertheless to be false. Otherwise, a deductive argument is said to be invalid.

A deductive argument is sound if and only if it is both valid and all of its premises are true. Otherwise, a deductive argument is unsound. Note that an unsound argument can still be valid; it is unsound because at least one of its premises is false.

Propositional Logic

Propositional logic studies the relations between arguments and sentences. Deductive arguments can be propositions or sentences that have Boolean values: true or false. This logic does not deal with the content of propositions but only with the relations between propositions.

In these rules we use A, B and P to represent logical propositions. Part of propositional logic are the relations: IS, AND, OR, NOT. These relations have very precise rules, which makes propositional logic a reliable theory.

Fundamental rules:

There are several logical rules that are universal and cannot be broken. Nobody can contradict them in any culture or language. If you understand the rules of propositional logic, you have taken your first step toward understanding computer science.

  • If A and B are true then P = A and B is true.
  • If A and B are false then P = A and B is false.
  • If A or B is false then P = A and B is false.
  • If A or B is true then P = A or B is true.
  • If A is true then P = not A is false.
  • If B is false then P = not B is true.

In addition, the operators "or" and "and" are commutative (a verification sketch follows this list). That means:

  • A and B == B and A
  • A or B == B or A
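Example: because each proposition has only two possible values, these rules can be checked exhaustively with a short Python sketch.

# brute-force check of the commutativity of "and" and "or"
for A in (True, False):
    for B in (True, False):
        assert (A and B) == (B and A)
        assert (A or B) == (B or A)
print("and / or are commutative for all Boolean values")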

De Morgan's laws:

Sometimes you can transform one logical expression into another equivalent expression without knowing the value of the arguments. These rules are very useful in Computer Science to optimize Boolean expressions.

  • the negation of a disjunction is the conjunction of the negations;
  • the negation of a conjunction is the disjunction of the negations;

Let's explain what this means in terms of expressions (a short verification sketch follows the list):

  • not (not A) == A
  • not (A or B) == (not A) and (not B)
  • not (A and B) == (not A) or (not B)
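Example: the same kind of exhaustive check works for De Morgan's laws.

# check De Morgan's laws for every combination of truth values
for A in (True, False):
    for B in (True, False):
        assert (not (A or B)) == ((not A) and (not B))
        assert (not (A and B)) == ((not A) or (not B))
print("De Morgan's laws hold for all Boolean values")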

Boolean algebra

Boolean algebra is the branch of algebra in which the values of the variables are the truth values true and false, usually denoted 1 and 0 respectively. This contrasts with elementary algebra, which studies numbers in general, not only the values 1 and 0.

Logical Operations:

I have found these to be the most suitable symbols for logical operators, though no programming language uses all of them yet.

symbol   alternative   meaning   notes
¬        !             NOT       negation
∧        &             AND       conjunction
∨        |             OR        disjunction
⊕        ^             XOR       exclusive disjunction
↓                      NOR       p ↓ q = ¬(p ∨ q)
↑                      NAND      p ↑ q = ¬(p ∧ q)

The truth table:

The next table shows all possible combinations and the results of the logical operators. You can also reproduce the table in code, as shown after it.

p   q   ¬p   ¬q   p ⊕ q   p ∧ q   p ∨ q
1   1   0    0    0       1       1
1   0   0    1    1       0       1
0   1   1    0    1       0       1
0   0   1    1    0       0       0
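Example: a short Python sketch that generates the same table, using the operators ^, & and | on 0/1 values.

print("p q ¬p ¬q p⊕q p∧q p∨q")
for p in (1, 0):
    for q in (1, 0):
        print(p, q, int(not p), int(not q), p ^ q, p & q, p | q)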

Predicate logic

Predicate logic studies things that are considered predicates and the relations between those things. It is much more useful and powerful than propositional logic. The things can be represented using data and values that can be connected with relational operators and symbols.

For example, we can study numbers and the relations between numbers using predicate logic. A number can be greater than, equal to or less than another number. Also, a number can be different from (not equal to) another number. The next table shows the most common notation for comparison operators.

Math   CS    Description                True example   False example
=      ==    Equal                      1 == 1         1 == 0
≠      !=    Not equal                  1 != 0         1 != 1
>      >     Greater than               2 > 1          5 > 5 + 1
<      <     Less than                  0 < 1          1 < 0
≥      >=    Greater than or equal to   1 >= 0         1 >= 2
≤      <=    Less than or equal to      1 <= 1         1 <= 0

Lists & Sets

To understand predicate logic better we need to grasp sets and collections of things. A set is a group of things that are unique and not duplicated. All things in a set can be similar or can have characteristics in common. We call the members of a set "elements".

Set Examples

[Image: examples of two sets]

Sometimes we need a collection of elements that can contain duplicates. In this case we have a "list of elements", not a set, because some of the elements are not unique. In computer science we refer to a list or a set of elements using the term "collection".

Predicate logic studies the elements of collections and the relations between them. For example, an element can belong to a set or not belong to a set. Two sets can contain the same elements or different elements; then the sets are equal or not equal.

Operations

Operations between sets can produce new sets or logical results. It is also possible to make an operation between a set and a value, to check if the value belongs to the set. In the next table we use Unicode symbols for the operators between sets:

symbol   example       meaning
∩        R = A ∩ B     Intersection of two sets => new set
∪        R = A ∪ B     Union of two sets => new set
⊂        b = A ⊂ B     Is set A included in superset B => boolean
⊃        b = A ⊃ B     Does set A contain subset B => boolean
Δ        Δ = A - B     Set difference => new set
∈        b = x ∈ A     Belongs: check if an element belongs to a collection => boolean
≡        b = A ≡ B     Equivalent: check if A has the same elements as B => boolean
∀        ∀ x ∈ A       For any element: used in collection qualification => boolean
∃        ∃ x ∈ A       There exists: used in collection qualification => boolean

Intersection

The intersection of two sets A and B is a smaller set R that contains all common elements of A and B. All other elements are not included.
[Image: Set Intersection]

Union

The union of two sets A and B is a larger set R that contains all elements of A and all elements of B, but duplicate elements are included only once.
[Image: Set Union]

Difference

The difference between two sets A and B is a set C that contains all elements of A except those that are also in B. There is a second difference, D, that we can make between B and A. A Python sketch of these operations follows the figure below.
[Image: Set Difference]
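Example: Python sets support these operations directly, so a short sketch covers all three pictures above; the sets A and B are just illustrative.

A = {1, 2, 3, 4}
B = {3, 4, 5}

print(A & B)      # {3, 4}           intersection (∩)
print(A | B)      # {1, 2, 3, 4, 5}  union (∪)
print(A - B)      # {1, 2}           difference A - B
print(B - A)      # {5}              the second difference, B - A
print(3 in A)     # True             membership (∈)
print(A == B)     # False            equivalence (≡)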

Vector & Matrix

In mathematics there are two significant numeric collections: vectors and matrices. In Python we can use a "list", which can have one or more dimensions and can hold a vector or a matrix. Some languages use the term "array". Sometimes "array" and "list" are different things with similar properties.
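Example: a minimal sketch using plain nested Python lists (libraries such as NumPy offer dedicated array types, but they are not required here).

vector = [1, 2, 3]                 # one-dimensional list used as a vector
matrix = [[1, 2, 3],               # nested lists used as a 2 x 3 matrix
          [4, 5, 6]]

print(vector[0])      # 1   first element of the vector
print(matrix[1][2])   # 6   row 1, column 2 (indexes start at 0)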

Fuzzy Logic

Fuzzy logic is an attempt to incorporate nature's inherent fuzziness into technology. This logic works not only with the values 1 = true and 0 = false but also with values in between. Not everything is black or white; there is a gray area. This is related to probabilistic values, which are values between 0 and 1. Fuzzy logic is used in neural networks and machine learning.

The theory of quantum computing uses a similar concept called "quantum logic". It is based on quantum bits (qubits). The difference is that the number of states between {0,1} in quantum computing is limited, while in fuzzy logic the number of states between {0,1} is infinite.

State Machines

A "state machine", is a mathematical model of computation. It is an abstract machine that can be in exactly one of a finite number of states at any given time. We call this also a "Finite State Machine" FSM.

Moore machines

The Moore machine was invented by Edward Moore in 1956. Moore machines consist of states and transitions. States are able to produce outputs, and the output is determined solely by the current state, not by any input.

Mealy machines

Mealy machines were invented by George H. Mealy in 1955. In comparison with Moore machines, Mealy machines produce outputs only on transitions and not in states. This often results in state diagrams with fewer states because more logic can be put on transitions.

Harel statecharts

A statechart is a visual formalism for complex systems. Basically, Harel statecharts are Mealy/Moore state machines extended by further concepts that allow us to model complex systems in a practical way.

David Harel did his PhD in 1978 at the Massachusetts Institute of Technology in Cambridge. He then became a professor of computer science at the Weizmann Institute in Rehovot, Israel, in 1980.

David Harel:  A complex system cannot be beneficially described in this naive fashion, because of the unmanageable, exponentially growing multitude of states, all of which have to be arranged in a flat unstratified fashion, resulting in an unstructured, unrealistic, and chaotic state diagram.

UML statechart

UML stands for "Unified Modeling Language". UML state machines have the characteristics of both Mealy machines and Moore machines. The UML state machine is an object-based variant of the Harel statechart.

Usability

State machines are useful to analyze and reason about computation using abstract notations. In practice there are efforts to create various code generators that translate statecharts into source code.

Representation

State machines can be represented in the form of tables and diagrams. States are represented in diagrams by circles or other shapes, and transitions between states by arcs with an arrow at the end. A state machine diagram is usually oriented from top to bottom or from left to right. The entry point is usually represented by a black dot. Some states are marked as "accepting" states with a double circle. A small code sketch follows the diagram below.

[Image: state machine diagram]
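Example: a state machine can also be written directly as a transition table; here is a minimal Python sketch of the classic turnstile, where the state and event names are just illustrative.

# transition table: (current state, event) -> next state
transitions = {
    ("locked", "coin"):   "unlocked",
    ("locked", "push"):   "locked",
    ("unlocked", "push"): "locked",
    ("unlocked", "coin"): "unlocked",
}

state = "locked"                        # entry point
for event in ["push", "coin", "push"]:  # a sequence of input events
    state = transitions[(state, event)]
    print(event, "->", state)           # push -> locked, coin -> unlocked, push -> locked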

Turing Machine

The Turing machine is an abstract model of computation invented by Alan Turing in 1936. During World War II, Turing also helped break the code of the German encryption machine Enigma, using electro-mechanical machines that were forerunners of the modern computer.

Today we use the term "Turing complete" to classify a computer language that can do what a Turing machine can do. Theoretically, it can resolve any finite logical problem by splitting it into simple operations. A Turing machine can be represented as a finite state machine plus an unbounded tape, but we show instead a conceptual diagram that is easier to grasp:

[Image: Turing machine conceptual diagram]

Artificial Intelligence

Artificial intelligence is often used to describe machines (or computers) that mimic cognitive functions usually associated with the human mind, such as learning and reasoning.

Machine Learning

We have not yet achieved artificial intelligence per se, but we have instead created something called machine learning, which is largely a fancy word for statistics. It is built on the idea that we can analyze data and detect a "trend" in it. This can potentially be used to predict future data or to recognize "patterns", and therefore to simulate a primitive intelligence.

Example:

[Image: statistic chart showing a linear trend]

Neural Network

Neural nets are a means of doing machine learning, in which a computer learns to perform some task by analyzing training examples. Usually, the examples have been hand-labeled in advance.

Most of today's neural nets are organized into layers of nodes, and they are feed-forward, meaning that data moves through them in only one direction. An individual node might be connected to several nodes in the layer from which it receives data, and to several nodes in the layer to which it sends data.

[Image: conceptual representation of a neural network]

Neural networks were first proposed in 1944 by Warren McCulloch and Walter Pitts, two University of Chicago researchers who moved to MIT in 1952 as founding members of what's sometimes called the first cognitive science department.

Read next: Programming Paradigms