Tutorial: WebMasterServe's PHP Lessons: From Zero To Hero

Oldwriter

Content Writer
This is a series of PHP lessons aimed at the novice programmer, made exclusively for WebMasterServe.com with the goal of taking you from Zero to the point of creating your very own custom basic Content Management System in PHP. Enjoy!
So you want to learn programming



Good choice! In this day and age, you can't go wrong with learning the skill of the present and the foreseeable future. As long as there are computing devices around us, there will be a need, and an opportunity, to program them.

When you learn to program, you come to better understand the digital world you inevitably live in. You get to know the logic behind computing, appreciate “the beauty in the design” and enjoy the warm, gooey feeling of seeing the results you wanted brought to life as you manipulate the code, leaping from passively consuming technology to becoming an active part of the experience.

Programming is a journey that pays. Let's sift through the misconceptions, learn the foundations and ultimately unleash your creativity to build something you can be proud of. Welcome to programming!

[ You will need PHP-enabled hosting, check WebMasterServe's recommendations! ]

The big misconception



There is a common misconception floating in the air: that programming computers is one single, uniform skill.

People who think this way believe that a person who “knows how to program computers” can write any type of program for any possible field and device in existence. This is akin to expecting one doctor to have the skill set required to perform any type of surgery on any person in the world, which is not realistic.

The reality is that you end up programming specific tasks with specialized tools (i.e. a specific programming language), some of which just happen to be the right ones for the job at hand.

In computing, you have a very diverse landscape of domains for your programs: microcontrollers, gaming consoles, the traditional desktop, mobile apps, the web... Just as design has 2D and 3D fields, programming has many sub-fields of its own.

The correct approach to programming computers is learning the precise subset of tools that make you successful in the field your program is aimed at, that is, your target.

There is no right or wrong choice of programming language for getting your hands busy and beginning to program. There are only better languages for a certain target, those that are more suitable to the field. You will notice the concepts employed are roughly the same across systems; it is their specific implementation that makes the difference.

Web programming



We are going to start our programming journey with a very specific ultimate goal: creating a small custom content management system for you to use and extend.

Something to bear in mind as we begin: in web programming you have two separate fields demanding different skill sets for distinct target tasks. They are:
  • Front-end programming – Dealing with the interface, what the user sees.
  • Back-end programming – Involving all the data-processing services that the user does not see.
Your back-end programming is also going to involve database programming. We will be making use of MySQL to provide the database service. More on it later when we consider users and data.
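
To make the split concrete, here is a minimal sketch of the idea (the variable names and text are just placeholders, not part of any real project): the PHP at the top is back-end work the visitor never sees, while the HTML below it is the front-end their browser renders.

PHP:
<?php
// Back-end: prepare the data the visitor asked for.
// (Hard-coded here; later lessons will pull it from MySQL.)
$articleTitle = "Welcome to our site";
$articleBody  = "This text was prepared on the server before the page was sent.";
?>
<!-- Front-end: the markup the visitor's browser actually renders -->
<h1><?php echo $articleTitle; ?></h1>
<p><?php echo $articleBody; ?></p>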

You are choosing the lingua franca

[Image: W3Techs server-side programming language usage chart, December 2015]


Why PHP? PHP is ubiquitous. It can be considered the lingua franca or “trade language” of server-side programming.

According to some estimates, such as W3Techs, PHP's market share among websites whose server-side programming language is known is as high as 81.7%.

This is tried-and-true technology in the real world. You can see PHP featured as an integral part of popular stacks such as LAMP (Linux, Apache, MySQL, PHP), LEMP (Linux, Nginx, MariaDB/MySQL, PHP) and plenty of other bundles covering Windows, Mac, BSD and other platforms used for back-end processing.

[ Hint: All of WebMasterServe's sponsors/partners offer PHP ]

Server-side programming is regular programming



...and as such, you need to manage several concepts to properly understand why things are the way they are.

We are going to review some key foundational theory and a bit of relevant computer history to, hopefully, spark your interest and help you comprehend what you are dealing with as a programmer.

Please set aside your anxiety to hit the actual code and pay attention; as a computer programmer you can only be as good as your understanding of the facts of the field, from the lower levels up.

Words of wisdom: data words



You may have heard the terms “32-bit” or “64-bit” when referring to computing. Those numbers refer to the size of the “word” computers use internally to move and process data. Words are the actual “unit of computing” used by the system, the underlying blocks that make data processing work.

How do these computing building blocks really make things happen? How do they relate to the web and PHP? Let's see.
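
As a small teaser of how this already touches PHP, the language can report the word size of the build it is running on. This is just a sketch; the exact output depends on whether your PHP is a 32-bit or a 64-bit build.

PHP:
<?php
// PHP_INT_SIZE is the size of PHP's integer type in bytes:
// 4 on a 32-bit build, 8 on a 64-bit build.
echo (PHP_INT_SIZE * 8), "-bit integers\n";

// PHP_INT_MAX is the largest value that fits in that word.
echo "Largest integer: ", PHP_INT_MAX, "\n";
?>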

Bits and Bytes



A bit is the basic unit of information in computing. It can hold one of two states. Think of it as an “either or” entity. It can be either “A” or it can be “B”, it can be “true” or it can be “false”. Only one of the two at a time.

In computing, the term bit comes from Binary Digit; “binary” simply means consisting of two, as seen in the Modern Latin binarius. Bini is Latin for “twofold” and the ending -ary reminds us of arity, which denotes the number of parameters accepted or dealt with in the operation at hand (remember: unary = one, binary = two, ternary = three parameters, and so on).

Right now we are going to focus on learning more about binary, since it is the computer's “native language” or its natural code.

We use computers to represent and process states. We have all heard that computers only understand 0's and 1's, that is, binary code. There are practical reasons why this has been established and accepted as the way to go.

The first and most obvious reason is that it is easy to work with bits. The next step up from having no information at all is having a single unit of information, and since two states are the minimal unit, a bit can be stored in anything that can take on two differentiated states.

A bit can be stored as a hole on a punched card or paper tape, as a “bump” on the surface of a DVD, as a tiny spot of magnetism, as the presence or absence of an electrical charge in a computer chip, or transmitted as a pulse of light… it doesn't matter. It denotes the same thing: one unit of information.

For instance, let's work with a light bulb's states. When dealing with a light bulb, we have:

0 = Off
1 = On

In this context, all possible states of a light bulb can be stored within a single bit. This is useful.
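
In PHP terms, the closest thing to a single bit is a boolean value, which likewise holds exactly one of two states. A quick sketch of the light bulb:

PHP:
<?php
// One bit of information: the bulb is either off (false) or on (true).
$bulbOn = false;   // 0 = Off
$bulbOn = true;    // 1 = On

// Flipping the state is like toggling the switch.
$bulbOn = !$bulbOn;
var_dump($bulbOn); // bool(false)
?>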

But what if we wanted to store the colors of an inkjet printer's cartridge?



We have cyan, magenta, yellow, and black. Four colors. A single bit is no good for holding all of them.

In this case we could use 2 bits side by side, processing them as a single unit to indicate the information (color) we want to signal:

0 0 = Cyan
0 1 = Magenta
1 0 = Yellow
1 1 = Black

We obtained 2 * 2 = 4 possible patterns with only one (1) more bit. The growth is exponential, which is one of the reasons binary code is efficient for many domains: by adding one simple YES/NO slot, you double the number of states you can hold.
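
In PHP we can spell out that agreement ourselves; the array below is purely illustrative (the pattern-to-color mapping is our own convention, not anything built into the language):

PHP:
<?php
// Our arbitrary agreement: each 2-bit pattern names one cartridge color.
$colors = [
    '00' => 'Cyan',
    '01' => 'Magenta',
    '10' => 'Yellow',
    '11' => 'Black',
];

// decbin() converts a number to binary text; str_pad() keeps it 2 bits wide.
for ($i = 0; $i < 4; $i++) {
    $pattern = str_pad(decbin($i), 2, '0', STR_PAD_LEFT);
    echo "$pattern = {$colors[$pattern]}\n";
}
?>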

With three (3) bits we have:

0 0 0
0 0 1
0 1 0
0 1 1
1 0 0
1 0 1
1 1 0
1 1 1


2 * 2 * 2 = 8 combinations or recognizable different patterns / states.

In case you didn't catch how the patterns grow as the bit length increases: each new sequence simply prefixes every state of the preceding sequence with a 0 and then with a 1.

1 bit:

0
1



2 bits (contains the sequence in 1 bit, preceded by both 0 and 1)

0 0
0 1
1 0
1 1


3 bits (contains the sequences in 2 bits, preceded by both 0 and 1)

0 0 0
0 0 1
0 1 0
0 1 1
1 0 0
1 0 1
1 1 0
1 1 1


In the same vein, 4 bits contain the sequences in 3 bits, preceded by both 0 and 1 too.

0 0 0 0
0 0 0 1
0 0 1 0
0 0 1 1
0 1 0 0
0 1 0 1
0 1 1 0
0 1 1 1
1 0 0 0
1 0 0 1
1 0 1 0
1 0 1 1
1 1 0 0
1 1 0 1
1 1 1 0
1 1 1 1


You get the gist! That's how the patterns keep building up as bits are added.

You can keep adding bits, with every +1 increase doubling the previous number of patterns that can be held (a quick sketch after the list confirms this):

1 bit – 2 patterns
2 bits – 4
3 bits – 8
4 bits – 16
5 bits – 32
6 bits – 64
7 bits – 128
8 bits – 256
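
Here is that sketch; 2 ** $bits uses PHP's exponentiation operator (available since PHP 5.6).

PHP:
<?php
// Every extra bit doubles the number of distinct patterns.
for ($bits = 1; $bits <= 8; $bits++) {
    echo $bits, " bit(s) - ", 2 ** $bits, " patterns\n";
}
?>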

Let's pause at 8 bits for special consideration, for we have reached the length of the byte.

Biting your way up: processing bits.



One thing to bear in mind here is that the organization and meaning of these chains of 0's and 1's are entirely arbitrary. That is, they mean nothing by themselves; a long strip of 0's and 1's is just that. We humans are the ones in charge of giving these sequences their meaning and usage in our computing applications.

Once we agree on the usage of bits, the next natural “step up” on the ladder of bit-processing is their organization into logical units of data.

The logical unit of data in the computing world today is the byte.

The word byte itself is a deliberate respelling of bite. Whenever we talk about one byte, we are actually talking about the grouping of 8 distinct bits, which as you saw before, can hold 256 patterns.

Nowadays we mostly have to roll with it, but historically, systems and applications weren't universally settled on a particular number of bits to treat at once as their data unit.

People just used what they needed.

For instance, if you were going to employ numbers in the 0-15 range, you would have made use of a 4-bit length.

If you were going to use uppercase letters and some punctuation characters, you would have gone with a six-bit character encoding, or, if you needed more characters, you could have found yourself using the 7-bit ASCII standard.

The now-traditional and universally accepted length of 8 bits in one byte owes a lot to its ubiquitous implementation in computing and telephony systems in the 1960s by giants such as IBM and AT&T, and it established itself as the de facto standard with the advent of the 8-bit microprocessor in the 1970s, with Intel taking the lead.

Some curiosities: the term octet describes a group of eight bits more unambiguously than the term byte, whose size varied in early implementations. Since byte is a respelling of bite, the term nibble, conveying half a byte, is accepted in computing as 4 bits.
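
As a small PHP illustration of these sizes (the values chosen are arbitrary), bitwise operators can pull the two nibbles out of a single byte:

PHP:
<?php
// One byte: 8 bits, holding values 0-255.
$byte = 0b10110100;                  // 180 in decimal

// The high nibble is the top 4 bits, the low nibble the bottom 4.
$highNibble = ($byte >> 4) & 0x0F;   // 0b1011 = 11
$lowNibble  = $byte & 0x0F;          // 0b0100 = 4

printf("byte = %08b, high nibble = %04b, low nibble = %04b\n",
       $byte, $highNibble, $lowNibble);
?>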

Programming evolution: from 0's and 1's in the beginning to…



...to 0's and 1's ultimately (bear with me)

Talking the native language of computers has always been hard. Since computers only understand 0's and 1's, a person who wanted to communicate with the computer had to speak in 0's and 1's too. This effectively meant the first form of instructing (or programming) computers was writing 0's and 1's for the computer to interpret: writing computer instructions in binary code.

So the code for a program looked like:

0 1 0 0 0 1 0 1 1 1 0 1 0 0 1 1 0 0 1 1 0 1 0 0 1 0 0 1 1 0 0 1 1 0 1 1 0 1 0 0

Given that computers can't program themselves, in the very early days of computing, during the 1940s and 1950s, people actually programmed with binary code on punched cards.

It wasn't uncommon for computer programmers to carry a batch of punched cards holding their program.

Then came assembly programming.

The idea behind it is very simple.

If a machine code instruction has this form in binary:

1 0 0 1 1 0 0 1

and every time you want to use that operation you have to write the exact same sequence of 0's and 1's, then let's give it a more human-friendly mnemonic and have another program (the assembler) translate the instruction (assemble it) into binary form. The instruction will still ultimately be understood by the computer in the language of 0's and 1's, but humans can write the code assisted by mnemonics, using regular letters, numbers and accepted keywords/statements.

Of course, you would still have to use hexadecimal numbers, or even some binary, but it became much easier for a human to program a computer using assembly mnemonics and regular numbers rather than writing everything in raw binary code.

Assembly code example

These are some x86 instructions to illustrate what assembly code looks like:

Code:
pushf
pop ax
rol ah,1
sahf
jc bo
pushfd
pop eax
mov ecx,eax
xor eax,00200000h
push eax
popfd
pushfd
pop eax
cmp eax,ecx
je bo
xor eax,eax
inc al
cpuid
mov bx,ax
call hex
mov ah,'$'
push ax
mov ax,bx
shr al,4
call hex
xchg al,ah
call hex
push ax
mov dx,sp
mov ah,9h
int 21h
pop eax
mov al,bl
mov ah,4Ch
int 21h
and al,0Fh
cmp al,0Ah
sbb al,69h
das
ret
As you can see, these instructions are far easier for humans to enter and read than raw 0's and 1's. A true boon in its time.

Assembly's limitation: portability



While assembly worked wonders for simplifying the process of programming computers, it proved to have an Achilles heel: due to assembly's very strong correspondence between instructions and machine code, programs aren't portable, meaning they can't be taken from one computer to another unless both machines share the same internal instruction set. That is, both computers must have a compatible architecture.

Portability is BIG in computing. A portable program can be made to work in a different computing environment than the one it was originally created for. Since real-world organizations run a variety of computing systems, assembly's lack of portability proved a hindrance to the interoperability of heterogeneous systems.

Higher-level programming languages to the rescue.

High-level programming languages use a compiler, which can parse and translate the same source code into machine code for different target architectures.

There were many such programming languages which achieved this “holy grail of computing” back then.

One of the most successful in achieving widespread adoption was the C programming language.

The C programming language is more readable than assembly, and hence it's easier to create programs in, with the BIG benefit of the resulting program being portable.

Code:
#include <stdio.h>
 
int main(void) {
	printf("Hello World\n");
	return 0;
}
The PHP programming language has its roots in C.

It does away with the difficulties of C's strict, static type system and adopts dynamic typing instead.
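
A quick sketch of what dynamic typing means in practice: in C the variable's type is fixed when it is declared, while PHP happily re-types the same variable on the fly.

PHP:
<?php
// No type declaration needed; the type follows the value.
$value = 42;            // integer
var_dump($value);       // int(42)

$value = "forty-two";   // now a string, no complaint from PHP
var_dump($value);       // string(9) "forty-two"

$value = 3.14;          // now a float
var_dump($value);       // float(3.14)
?>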

Plenty of the core syntax of PHP can be mapped directly to C.

The traditional hello world in PHP looks like:

PHP:
<?php
print "Hello, World!";
?>
or
PHP:
<?php
echo "Hello, World!";
?>
Some parts of the language look as if they were taken verbatim from C:

PHP:
<?php
for ($x = 0; $x <= 10; $x++) {
	echo "The number is: $x <br>";
}
?>
Any C programmer can follow that snippet of code, since the for loop has the same structure. The same goes for other constructs.

Actually, PHP began as a series of CGI programs written in C by Rasmus Lerdorf, which he extended to work with web forms and databases. In this sense, PHP literally began as C code.
 