TOPIC FIVE - Systems and Programming
2.1 Introduction
Developing an information system is usually a large project. All projects need to be
planned and managed. The SDLC shows the main activities normally associated with
information systems development. It shows where to start, what to do next and where to
end. Having said that, it is a cycle so it never really ends. As you work through later units,
you will see that there are, in fact, a number of ways in which systems are developed,
using different methodologies and techniques. The SDLC has been around since the early
1970s and has a number of strengths and weaknesses. Most of the other approaches
were developed to overcome these weaknesses, but the SDLC is really the starting point
for all of them. It is a highly logical and structured concept and, in this course, it will be
used as a means of introducing the various methodologies and techniques.
The SDLC
There are several variants of the SDLC, but the basic principles are the same. Systems
development is divided into phases and you will see different diagrams which show
between four and over twenty phases.
Figure 2.1 shows a typical diagram of the basic SDLC, and, as you can see, the cycle
consists of 6 distinct phases:
Feasibility Study
Systems Analysis
System Design
System Construction
Systems Implementation
Systems Maintenance and Review
Feasibility Study
Before any project can start in earnest, it is essential to find out whether it is feasible or
not. There are a number of categories of feasibility and these will be explored in the
next unit, but basically a feasibility study will look at technical, personnel and cost issues
and examine different ways in which the system can be developed. For example – can
the system be developed in-house or will outsourcing be required, does the
organization have the necessary technical resources and expertise to undertake the
project, will the system be of financial benefit to the organization and is the money
available to develop it? The output from this phase is a feasibility report, which
summarizes the study and makes recommendations about the way forward.
Systems Analysis
Once the feasibility study has confirmed that the system can and should be developed,
the next phase consists of a detailed investigation of the requirements of the system,
invariably involving extensive consultation with the users of the system. If the new system
is to replace an old one, then it is normal to
study the existing system in depth so that its objectives, outputs and the exact way in
which it works can be understood, as well as identifying any problems associated with it
so that these are not repeated. At the
same time, it is necessary to find out what new features and functionality should be
included in the new system.
The output from this phase is a requirements specification. Many analysts view this
phase as the most important. It is all very well building a sophisticated system which
looks good and produces impressive output, but if these outputs are not what the users
want, the system is a waste of time.
System Design
This phase takes the requirements specification and converts it into a system design
specification. This involves the design of inputs, outputs, databases, computer programs
and user interfaces.
The design phase is normally split into logical and physical design. Logical design
concentrates on the business aspects of the system and is theoretically independent of
any hardware or software. Physical design takes the logical design specification and
applies it to the implementation environment. Most often the choice of programming
language and database is already decided and these technologies are taken into
account in physical design.
The system design specification contains all the detail required for the system builders to
construct the system.
System Construction
This phase is where the system is actually built. The system specifications are turned into
a working system by writing, testing and, in due course, documenting the programs
which will make up the whole system. Once the individual programs have been tested,
the whole system needs to be put together and tested as a whole. This whole phase
requires extensive user involvement.
The output from this phase consists of detailed program and file specifications which, in
total, describe exactly how the new system works.
Systems Implementation
The objective of this phase is to produce a fully functioning and documented system. It
involves training users, transferring data from the old system to the new and actually
putting the new system into operation – "going live". There are a number of different
approaches to this, as we shall see later in the course.
A final system evaluation will also need to be performed to make sure the system works
according to expectations.
Systems Maintenance and Review
During the life of a system, continual review and maintenance will need to be
performed in order to maintain its functionality. For example, new requirements may
need to be implemented and errors in the system need to be rectified. Such
maintenance is really a repetition of the other phases of the life cycle as a new
requirement or a fix for an error needs to be analyzed, designed and implemented.
Eventually all systems become outdated and need to be replaced, so the cycle starts
again, with the way in which the old system is operating and the requirements which
now apply forming the backdrop to a new feasibility study to examine whether a new
system should be developed.
The Computer Professionals
There are three key groups of specialist computer
professionals involved in systems development – systems analysts, systems designers and
programmers. However, the exact demarcation lines between analysis, design and
programming are not very precise and vary from organization to organization. In some
companies the programmers will design detailed file and report layouts, working from an
overall specification provided by the systems designers. As the use of programming aids
becomes more widespread, and the work involved in detailed coding is therefore
reduced, this division between different kinds of computer professionals will become
even more blurred. It is, though, a useful distinction to draw in order to consider the roles
involved at different stages in the development process.
(a) The Systems Analyst
We can say that the general role of systems analysts is continually to update the
information system of an organization. They will maintain a continual survey of information
requirements and propose changes in (or design new) systems, control the
implementation of the designs and monitor their operation. The job has a very wide
scope, and perhaps there is a need to divide it into two – an investigator with business
training and a designer with a background in computers.
A systems analyst is a problem solver rather than a technical programmer.
The special skills required of a systems analyst include:
1. Technical Knowledge and Skills: An analyst should have fundamental technology
knowledge of
Computers / peripheral devices (hardware)
Files and database systems
Input and output components and alternatives
Computer networks and protocols
Programming languages, operating systems, and utilities
Communication and collaboration technology
2. Business Knowledge and Skills: An analyst must understand:
Business functions performed by organization
Strategies, plans, traditions, and values of the organization
Organizational structure
Organization management techniques
Functional work processes
Systems analysts typically study business administration/management in
college
3. People Knowledge and Skills: Primarily, a systems analyst must be an effective
communicator. A systems analyst must be able to perform various roles, such as
negotiator, teacher, mentor, collaborator, and manager.
4. Integrity and Ethics: An analyst has access to confidential information, such as salaries,
an organization’s planned projects, security systems, and so on.
Must keep information private
Any impropriety can ruin an analyst’s career
An analyst plans the security in systems to protect confidential information
5. Analytical Skills: An analyst should be able to interpret systems by breaking them down into
components and bringing them back together.
(b) The Systems Designer
System designing can be defined as the act of analyzing and documenting existing
systems, and – more particularly – the act of creating new systems meeting defined
objectives. Systems designers can work in one of two ways: (a) converting an existing
system (usually clerical) into another (usually computerised) system; or (b) creating an
entirely new system to meet an entirely new need. It is obvious, therefore, that the
specification of requirements stage is again very important. Essentially, the activities of
system designers centre around converting what is to be done (given by the requirement
specification) into how it is to be done. They will thus have to undertake the following
tasks:
Study the requirement specification and determine the overall system in terms of
input, output, processing and storage.
Design, in detail, the layouts of all output documents, reports and displays to be
produced by the system and define their expected frequencies and volumes,
where these are not clearly expressed in the requirement specification.
Determine the processing logic of the system; this will define the programs which
are required, the functions they will carry out and the order in which they will run.
Very carefully determine all the control, validation and audit procedures needed
in the system, so that these can be incorporated into the design at the appropriate
places.
Design the input data layouts and define expected volumes, frequencies, etc.
Specify and design the secondary memory files; this will include detailed record
design, file organisation decisions, estimated volumes, update frequencies,
retention periods, security and control procedures.
Finalize the data specifications for input, output and storage to ensure nothing
has been overlooked.
Design the manual procedures associated with the new system.
Define the system test requirements which are to be used, to ensure that the
system is operating correctly.
(c) The Programmer
Programmers take the very precise design specifications and turn them into computer
code which can be understood by the computer. It is very important that they work
closely with the design and code it as written and specified. The programmer will run the
tests of the code as set down by the designer. Any problems will be reported back to the
designer for changes to be made to the design. The programmer will also be involved in
the actual implementation of the system in order to provide any necessary last minute
coding changes. A major area of work for programmers is in systems maintenance.
During the operating life of all systems, various bugs will appear in the code. A
programmer is responsible for correcting these in consultation with the designer. From
time to time, enhancements will be proposed for the system and again the programmer
will be closely involved in coding the design changes.
Users as Clients
Gone are the days when computing staff were the sole technical experts and the people who relied
on their systems were known as "users", although you will have noticed that we continue to refer
to "users" in this course. Information Technology staff now use systems engineering techniques to
develop products for the requirements of users as clients – i.e. to meet what they, the eventual
users, require. Such products may even operate on the client's own hardware or may form part of
a total facilities management service provided by the IT department or external contractor. A
feature of business today is "total quality" and this introduces concepts such as: Get it right first
time/zero defects, Internal clients (other departments in the company) and Continual
improvement. It is now, therefore, more important than ever that the client participates fully in the
design, development, implementation and operation of the product. The following are the categories
of client:
(a) Senior Management
The client should be represented at a senior level by one person who is the final decision maker on
all matters relating to the project. He or she will make the "go/no go" decision at the end of each
stage of the development process, but is unlikely to be involved in the day-to-day activities of the
project team. Senior management's responsibilities can be summarized as: agreeing terms of
reference for the project, defining critical success factors, reviewing progress on a regular basis,
"signing off" each major deliverable and agreeing budgets and schedules.
(b) Junior Management
These are the people who will have regular contact with, and may even be part of, the project team.
Working closely with the IT development staff they will need an appropriate degree of "IT
literacy" and have a good understanding of the methodologies being used. That is not to say they
should be IT experts since a good methodology will be readily understandable by most people.
They are likely to be managers/supervisors from the departments most affected by the introduction
of the new system and thus in a good position to define the detailed requirements. During the
operational phase of the system they will be key players in ensuring its success. The responsibilities
of junior management can be summarized as: Defining detailed objectives, Confirming
"deliverables" meet client requirements, Making recommendations to senior management,
Assisting in quality assurance, Participating in client acceptance, Helping to design training
procedures, Participating in training activities, Assisting the implementation process and Using
the implemented system.
(c) Client Staff
These are the people who will be using the system on a day-to-day basis. Some may be involved
in the development process as part of the project team but their main responsibilities will be to:
Assist the development process, Undertake training, Assist in client acceptance testing, Use the
implemented system, and Provide input to the post-investment appraisal. Ensuring that the system
is being designed to meet the "right" requirements is a key element of any successful project. The
client has a major responsibility in this process not only in confirming the critical success factors,
but also at a more detailed level. The various development methodologies place great emphasis on
diagrams on the basis that "a picture is worth a thousand words". The methodology can thus help
the client as well as the IT staff to confirm that the design is aimed at meeting requirements. The
client's main role is to confirm what is required; the IT developers have responsibility for
confirming this is the case and also how best to achieve those defined objectives. In terms of this
initial stage, the client will confirm acceptance of the following prior to commencement of the
next stage of the project cycle: Statement of requirements/terms of reference, Feasibility report,
Requirements definition.
Fact Finding Techniques
Asking
This is the holding of discussions, whether formal or informal, between the investigator and the
staff who operate the system. Interviewing may be one method here, although this suggests a more
formal and stereotyped approach. Often, many useful ideas and suggestions can be obtained by an
informal chat.
Before systems analysts commence this work, they should arrange, via the department head, a
convenient time to approach staff and have at least an outline (or, better still, a list) of the facts
required. It may be impracticable to visit personally all the staff involved, because they may be at
branches scattered
throughout the country or there may be too many of them. If the facts required are of a relatively
simple nature, a questionnaire could be drawn up and despatched.
Observation
Systems analysts will spend a great deal of time in the department being investigated; they will be
asking questions, examining records, etc. During this time, they will be in a position to observe what
is happening. For example, they may notice:
(i) excessive staff movement
(ii) some very busy clerks and others with much spare time
(iii) frequency of file usage
(iv) communication problems both within the department and outside.
Reading
There may have been previous investigations of the department at present under scrutiny.
Systems analysts should read these records to help them understand the current system. However,
too great a reliance should not be placed on these documents, particularly if they were prepared
some time ago.
Other documents, such as policy statements, standing instructions, memoranda and letters
pertaining to the investigation should also be read.
Measuring and counting
The number of items, documents, transactions, people and time taken for individual processes
should be measured, either to confirm facts already recorded or to establish the current position.
Sampling may be used, but care should be taken to avoid bias. The frequency of different activities
and events should also be recorded.
The organisational structure
The organisational chart of the user department will often highlight features useful to the
investigation, and individual job descriptions will identify an individual's area of responsibility.
(e) Relevance and Verification
You may think that it is obvious that only relevant facts should be gathered. However, it is not
always easy to see what is relevant and what must be discarded. It is easy to gather large volumes
of facts, but not so easy to glean the important ones. Systems analysts should constantly refer to
their terms of reference or other documents setting down the areas to be investigated. Ideally, every
single fact obtained should be checked. Practically, this is not possible, and a degree of tact and
diplomacy is often required where facts are being questioned.
Analysts can find themselves in a consultative role if user staff are able to carry out the fact finding
themselves. Provided that the facts are being obtained correctly, this method is to be encouraged.
It involves the user staff right from the start, and enables them to look at their work objectively.
They will need encouragement to suggest changes, but they will most likely accept changes that
they have initiated.
Interviewing
Personal interviewing is the most satisfactory method of obtaining information. It will yield the
best results provided it is carried out properly. The interviewers' approach, whilst being
disciplined, can be as flexible as the occasion demands, and they can pursue any lead provided by
the respondent to obtain the maximum amount of information. A good interviewer will achieve a
balance between discipline and flexibility in order to draw maximum benefit from the interview.
(a) Advantages
This method is often the only means of obtaining opinions from senior people, particularly where
the subject under discussion is highly technical. Such information is usually qualitative. Personal
interviews may also be the best means of obtaining a large amount of quantitative information
from respondents.
Only a personal visit allows the researcher to see written material which the respondent may
otherwise forget or ignore.
(b) Disadvantages
Personal interviews are, however, time-consuming and expensive. Their use may, therefore, be
limited to a few key interviews, if the remaining information can be obtained in other ways.
Questionnaires
Sending written questions to people may appear as an attractive technique, in that it saves the time
needed for interviewing. However, it is fraught with major problems.
It is extremely difficult to phrase questions which are unambiguous and make it clear
exactly what information is sought.
The replies to questions are subject to misinterpretation by the analyst, for similar reasons.
The analyst can never be sure that the questionnaire was completed seriously by the
recipient. For example, has the recipient really checked the facts written, or merely
guessed?
Many recipients of questionnaires simply throw them in their waste-paper bins.
The results may thus not be a valid sample of certain opinions or facts. Expert advice on framing
questionnaires should be sought by the analyst, and possibly a junior member of the analyst's team
should help the recipients complete them. These two actions can alleviate some of the problems.
Questionnaires are a last resort method, to be used only when other methods are too time-
consuming, uneconomic or not practicable for other reasons. They are useful when a large number
of responses is sought.
(b) Use of Questionnaires in an Interview
Use of a standard questionnaire will ensure that the same questions are asked in the same way
throughout an assignment. This is particularly important when more than one interviewer is
involved. The questionnaire need not be rigidly structured, but it should always provide a guide
for the interviewer. This will enable a standardised approach to each interview to be adopted.
To achieve a logical flow of questions
It is often important to ask questions in a particular sequence in order to obtain accurate results.
Some questions should be asked only in the middle or at the end of an interview to avoid
influencing a respondent's replies to earlier questions. The interviewer often does not want to give
too much information at the beginning of an interview, as this would probably bias the respondent's
opinion.
To avoid omissions
It is essential to ask all the questions for which answers are required at every interview. An
interviewer can easily omit important questions during an interview if he or she is not working in
a systematic way. It may be difficult and time consuming to go back to the respondent once the
interview has been terminated.
A questionnaire serves as a check-list of key questions, and thus avoids omissions.
To ensure data is collected in a form suitable for tabulation
Collecting information is not an end in itself. It must be processed, analysed and presented in a
useful form. The discipline involved in designing a questionnaire with a view to tabulating the
answers ensures a logical flow of the questions that it is essential to ask, and eliminates the
unnecessary ones.
Developer Programmer
A developer is a software professional who writes, manages and debugs the code in computer
programs. Developers typically specialize in a specific type of coding language. A developer also
manages other tasks related to software creation, modification and management, such as
software documentation, architecture, databases and user experience.
Maintenance Programmer
A programmer is a coding professional. Programmers make, test and troubleshoot the coding
languages within a software application to make sure it runs successfully. Programmers often
follow specific instructions related to the application's code while thinking innovatively about ways
to make the code functional.
1. Scope of work
Developers typically have a broader scope of work than programmers. In addition to writing and
revising code, developers often manage software projects. This may include delegating tasks to
other coding professionals, giving instructions to programmers about the type of code to develop
and designing the software while keeping in mind the customer's experience.
Programmers, however, typically focus primarily on the writing, debugging and testing of their
code. A programmer may have greater technical knowledge of specific coding languages,
techniques and troubleshooting methods than a developer.
2. Job responsibilities
Developers and programmers share some of their job responsibilities. Both professionals create,
revise, test and troubleshoot code to ensure that software programs run as effectively as possible.
In addition, developers and programmers might organize software data, use encryption or
security methods to protect their software and check the compatibility of their software with
various operating systems.
However, programmers typically have more job responsibilities related specifically to coding.
Since they have more specialized knowledge of coding, they likely spend more time than
developers working on a software's backend, meaning the code that enables programs to run
successfully for users. Programmers more frequently use tools like libraries or frameworks to build
and test their code repeatedly. Developers, meanwhile, often balance a wider range of job
responsibilities. A developer might, for example, analyze user feedback regarding software
performance and make suggestions to programmers about potential coding improvements.
Developers also may work more closely with other departments, such as marketing or design.
Sampling
Analysts will need to obtain some quantitative information about the systems they are studying.
They need to know the amount of time spent on particular aspects of the work:
How many invoices are processed each day?
What is the average number of items per invoice?
And the maximum number?
It is impossible for analysts to sit for hours on end, watching and recording every movement of the
staff. Neither do they have the time to examine many thousands of invoices in an attempt to get an
accurate count of the items of interest.
In each case, analysts will want to take a few figures, a sample, and use these figures as a basis for
calculation.
The fundamental problem is, how many figures is "a few", and how should the analyst obtain
them? How reliable are figures based on samples?
Using the theory of probability and the mathematics associated with it, it is possible to take a
sample and, from its characteristics, draw conclusions about the characteristics of the system from
which we have taken it. Since the theoretical knowledge required is fairly extensive, we cannot
describe it here, and if sampling is to be used to any extent, the analyst should take advice from a
statistician. However, we will briefly describe the ideas involved in sampling so that you are aware
of the possibilities of using these techniques in an investigation.
Sampling is a method for reducing the amount of observation which analysts have to do, as well
as giving them a "feel" for the numerical information which can be obtained from records.
The main lesson to learn is that averages are not a sound basis upon which to work.
When selecting a sample from a larger population, there are two main points to note:
The number of items in the sample must be large enough to avoid any abnormal items
having an undue effect on the average of the sample.
Every item in the population must have the same chance as any other of being in the
selected sample.
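The two points above can be illustrated with a short sketch. The following Java fragment is a
hypothetical example added for illustration only; the invoice figures, sample size and class name are
all invented. It shuffles a population of invoice line counts so that every invoice has an equal chance
of being selected, then compares the sample average with the population average.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class InvoiceSampling {
    public static void main(String[] args) {
        Random rnd = new Random();

        // Hypothetical population: items-per-invoice figures for 10,000 invoices.
        List<Integer> itemsPerInvoice = new ArrayList<>();
        for (int i = 0; i < 10_000; i++) {
            itemsPerInvoice.add(1 + rnd.nextInt(20));
        }

        // Shuffle so that every invoice has the same chance of being selected,
        // then take a sample large enough to dilute any abnormal items.
        Collections.shuffle(itemsPerInvoice, rnd);
        List<Integer> sample = itemsPerInvoice.subList(0, 200);

        System.out.printf("Sample average items per invoice: %.2f%n", average(sample));
        System.out.printf("Population average items per invoice: %.2f%n", average(itemsPerInvoice));
    }

    private static double average(List<Integer> values) {
        double total = 0;
        for (int v : values) {
            total += v;
        }
        return total / values.size();
    }
}

Running the sketch a few times shows that a reasonably sized random sample gives an average close
to the true figure, which is exactly what the analyst relies on when sampling rather than counting
every invoice.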
Programming Languages
A programming language is a set of commands, instructions, and other syntax used to create a
software program.
Each language, from C to Python, has its own distinct features, though there are often
commonalities between programming languages.
A low-level language does not require a compiler or interpreter to translate the source to machine
code; an assembler may translate source code written in a low-level language to machine code.
Programs written in low-level languages are fast and memory efficient. However, it is a nightmare
for programmers to write, debug and maintain low-level programs. They are mostly used to
develop operating systems, device drivers, databases and applications that require direct hardware
access.
Low-level languages are further classified into two categories – machine language and
assembly language.
Figure: Classification of low-level programming languages
Machine language
Machine language is the closest language to the hardware. It consists of a set of instructions that are
executed directly by the computer. These instructions are a sequence of binary bits. Each
instruction performs a very specific and small task. Instructions written in machine language are
machine dependent and vary from computer to computer.
Example: SUB AX, BX = 00001011 00000001 00100010 is an instruction to subtract the values
of the two registers AX and BX.
In the early days of programming, programs were written only in machine language – each and
every program was written as a sequence of binary digits.
A programmer must have additional knowledge about the architecture of the particular machine
before programming in machine language. Developing programs using machine language is a
tedious job, since it is very difficult to remember the sequences of binaries for different computer
architectures. Therefore, it is not much in practice nowadays.
Assembly language
Assembly language is an improvement over machine language. Similar to machine language,
assembly language also interacts directly with the hardware. Instead of using raw binary sequences
to represent an instruction set, assembly language uses mnemonics.
Mnemonics are short abbreviated English words used to specify a computer instruction. Each
instruction in binary has a specific mnemonic. They are architecture dependent, and there is a
separate list of mnemonics for different computer architectures.
Examples of mnemonics are ADD, MOV, SUB etc.
Mnemonics relieved programmers from having to remember binary sequences for specific
instructions, since English words like ADD, MOV and SUB are easier to remember than a binary
sequence such as 10001011. However, programmers still have to remember the various mnemonics
for different computer architectures.
Assembly language uses a special program called an assembler, which translates mnemonics into
the corresponding machine code.
Assembly language is still in use. It is used for developing operating systems, device drivers,
compilers and other programs that require direct hardware access.
Advantages of low-level languages:
1. Programs developed using low-level languages are fast and memory efficient.
2. Programmers can utilize the processor and memory in a better way using a low-level language.
3. There is no need for a compiler or interpreter to translate the source to machine code, which cuts
the compilation and interpretation time.
4. Low-level languages provide direct manipulation of computer registers and storage.
Disadvantages of low-level languages:
1. Programs developed using low-level languages are machine dependent and are not portable.
2. Programmers must have additional knowledge of the computer architecture of the particular
machine in order to program in a low-level language.
A high-level language provides a higher level of abstraction from machine language. High-level
languages do not interact directly with the hardware; rather, they focus more on complex arithmetic
operations, optimal program efficiency and ease of coding.
Low-level programming uses machine-friendly languages: programmers write code either in binary
or in assembly language, and writing programs in binary is a complex and cumbersome process.
Hence, to make programming more programmer friendly, programs in a high-level language are
written using English-like statements.
High-level programs require compilers or interpreters to translate the source code to machine
language. We can compile source code written in a high-level language to multiple machine
languages; thus, high-level languages are machine independent.
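To make the difference in abstraction concrete, the subtraction shown earlier as the machine
instruction SUB AX, BX could be expressed in a high-level language roughly as follows. This is a
Java sketch added purely for illustration; the variable names simply mirror the register names used
in the earlier example, and the compiler and runtime take care of producing the actual machine
instructions.

public class Subtract {
    public static void main(String[] args) {
        int ax = 42;          // value that would sit in register AX
        int bx = 17;          // value that would sit in register BX
        int result = ax - bx; // the compiler translates this into machine instructions
        System.out.println("Result: " + result);
    }
}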
Today almost all programs are developed using a high-level programming language. We can
develop a variety of applications using high-level languages: they are used to develop desktop
applications, websites, system software, utility software and many more.
High-level languages are grouped into two categories based on execution model – compiled and
interpreted languages.
Figure: Classification of high-level languages on the basis of execution model
We can also classify high-level languages into several other categories based on programming paradigm.
Advantages of high-level languages:
1. High-level languages are programmer friendly; they are easy to write, debug and maintain.
2. They are easy to learn.
3. They are less error prone, and errors are easy to find and debug.
Disadvantages of high-level languages:
1. High-level programs are comparatively slower than low-level programs.
2. Compared to low-level programs, they are generally less memory efficient.
1. Java
Java is the leading general-purpose application development language and framework. It was first
released in 1995 by Sun Microsystems as a high-level, compiled, memory-managed language.
Java’s syntax is similar to C/C++, with curly braces for closures and semicolons to end statements.
Automatic memory management is one of the features that made Java so popular, quickly after its
initial release. Before Java was introduced, languages that required manual memory management,
such as C and C++, were dominant. Manual memory allocation is tedious and error-prone, so Java
was hailed as a major step forward for application developers.
The promise of Java, beyond memory management, was its cross-platform capability. This was
marketed as “write once, run anywhere.” The Java Virtual Machine (JVM) runs Java bytecode,
which is compiled from the Java language. JVMs are available for most major operating systems,
including Linux, Mac, and Windows. It doesn’t always work perfectly, but when it does, a program
written in Java can run on any platform with a compatible JVM.
Java is used for business, web, and mobile applications. It is the native language for Google’s
Android OS. Java also powers millions of set-top boxes and embedded devices. Java development
skills are highly sought after.
If you’re considering a job in software development, you should strongly consider learning Java.
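As a minimal, hypothetical illustration of the points above – C-like syntax with curly braces and
semicolons, and automatic memory management – a complete Java program looks like this. The same
compiled class can be run on any platform with a compatible JVM, which is the "write once, run
anywhere" promise in practice.

public class HelloSdlc {
    public static void main(String[] args) {
        // Objects are allocated with 'new'; the garbage collector frees them,
        // so there is no manual memory management as in C or C++.
        StringBuilder message = new StringBuilder();
        message.append("Hello from the JVM on ").append(System.getProperty("os.name"));
        System.out.println(message);
    }
}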
2. C
Popularity: Medium
Ease of Learning: Moderate
Use Cases: General Use and Specialty
o Embedded systems
o Hardware drivers
o Local Applications
Until Java was introduced, C was the dominant high-level language. It was first introduced in
1972. The first versions of Unix, written in Assembly language, were ported to C. It was then used
in the development of other early operating systems, including IBM System/370.
C has a long history of development on older systems with slower processors and little memory.
Programs written in C had to be very efficient, so C has a reputation for high performance in cases
where speed matters.
C is still very popular due to its use in systems development, including operating systems,
embedded devices, and as firmware. The C standard library has been ported to many platforms, so
it is viable in many use cases. However, the low-level systems programming it is typically used
for is a more specialized skill than general application programming. This explains why the
second-most popular language on the TIOBE index has relatively few job postings as compared
to other languages in the top 10.
There is likely to be some overlap in the market with C++ (see the C++ listing below.)
3. Python
Python is very popular for general-purpose programming, including web applications. It has
recently become known for specialty use in artificial intelligence applications.
Python jobs are very plentiful, so it’s easy to find a job using Python.
4. C++
Popularity: High
Ease of Learning: Difficult
Use Cases: General Use, Specialty
o Local Applications
o Web Services
o Proprietary Services
C++ extends C with object-oriented features. The “double-plus” comes from the increment
operator from C. C++ was developed to bring features from older languages to faster, more
powerful platforms.
C++ occupies a similar area in the market as C, including systems programming and low-level
hardware development. Over the years, the C++ standard libraries and specification have been
expanded considerably, leading to criticism that it has become over-complicated and difficult to
learn.
5. Visual Basic.NET (VB.NET)
Popularity: Low
Ease of Learning: Moderate
Use Cases: General Use
o Web Applications
o Local Applications
Visual Basic.NET (VB.NET) is Microsoft’s implementation of the Visual Basic language that
compiles to .NET Intermediate Language. This allows developers to write .NET applications using
Visual Basic. Applications written in VB.NET are, more or less, just as capable as any other.
However,
VB.NET was never very popular for business applications. Application developers preferred C,
C++, and C#. Most applications written in VB.NET tend to be older, and are likely to be considered
to be “legacy” applications destined for decommission or redevelopment.
6. C#
Popularity: High
Ease of Learning: Moderate
Use Cases: General Use
o Web Applications
o Local Applications
o Services/Microservices
C# was developed and introduced by Microsoft in 2000, along with the overall .NET Framework.
Syntactically, C# is very similar to Java and C/C++. It is a compiled, object-oriented language that
compiles to .NET Intermediate Language. Originally, C# was used for Microsoft-focused
development of Windows Forms and web development with ASP.NET. The .NET ecosystem has
evolved recently with the introduction of the .NET Standard and .NET Core. These new
frameworks and standards are cross-platform, running on Windows, Linux, and Mac.
C# is popular for local and web application programming, often (but not necessarily) in systems
developed primarily based on Microsoft technology. Microsoft’s Xamarin framework allows
developers to write Android and iOS applications in C#. It is suitable for systems programming in
some cases, and has libraries available for embedded systems.
7. PHP
Popularity: High
Ease of Learning: Easy
Use Cases: General Use
o Web Applications
PHP originally stood for “Personal Home Page” as part of its first name, PHP/FI (Forms
Interpreter.) The official acronym is now PHP: Hypertext Preprocessor. Its primary role is as a web
application server-side scripting system. It was originally developed to extend a CGI program to
support HTML forms and database access. The code of a PHP program is mixed in with the
HTML, making it similar to Microsoft’s classic (pre-.NET) Active Server Pages. The interpreter
reads the HTML and code, and executes the code portions of the page.
PHP is popular because it’s easy to learn. It is also the basis of popular web-based applications
such as WordPress and Joomla. However, PHP also has a mixed reputation relating to software
quality. Early versions lacked security controls and features that made it difficult to develop
highly-secure applications. Recent developments in PHP frameworks and libraries have made
improvements in security.
There are plenty of PHP jobs available, for content-focused web applications like WordPress, and for
proprietary systems developed in PHP.
8. JavaScript
Popularity: Very High
Ease of Learning: Moderate
Use Cases: General Use
o Local Applications
o Web Applications
JavaScript is a high-level, dynamically typed, interpreted language. It uses Java-like syntax, hence
the name JavaScript. JavaScript was first introduced in the early days of the public Internet, 1995.
JavaScript is used to write code that runs in web browsers, on the client side. If you’ve been using
the Web long enough to remember the introduction of Google Maps, you witnessed some of the
first magic: the “infinite scrolling” in Maps is done using JavaScript.
Since its first introduction, JavaScript support has been added to all major web browsers.
JavaScript frameworks including React, Angular, and Vue offer a Model-View-Controller
application development paradigm, running entirely in the browser. JavaScript now supports the
visual, browser-run elements of most modern web applications, which is why most real user
monitoring tools cater for JavaScript.
JavaScript can also be combined with HTML to make cross-platform mobile applications. NodeJS
is a server-side JavaScript runtime, and NodeJS applications are written entirely in
JavaScript.
Given all these use cases and support, JavaScript is both popular and in high demand. It is not very
difficult to learn, though there are advanced programming techniques that take time to master. If
you are more comfortable with object-oriented languages, consider looking into TypeScript.
TypeScript “overlays” object-oriented features and syntax, and transpiles to native JavaScript.
9. SQL
Popularity: Very High
Ease of Learning: Easy to Moderate
Use Cases: Specialty
o Database Queries
SQL stands for Structured Query Language. SQL is used to query and modify data in a Relational
Database Management System (RDBMS.) Vendor-specific implementations, such as PL/SQL
(Oracle) and T-SQL (Microsoft) offer product-specific features.
SQL isn’t a general purpose language that can be used to write applications. However, it is at least
a useful, if not required skill of most developers. The term “full-stack developer” refers to a
developer with a well-rounded skill set that includes all aspects of an application. This almost
always includes accessing and saving data to a database. SQL is not hard to learn initially, though
there are advanced use cases in Big Data and data analysis that require significant experience.
SQL is very popular with both developers and Database Administrators, so jobs that require SQL skills
are plentiful. However, it is not a complete skill unto itself. SQL experience is a big plus on a
resume, but it is rarely the primary skill required for any given job.
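As a rough sketch of how a full-stack developer typically uses SQL from application code, the
fragment below runs a simple query through Java's standard JDBC API. The connection URL,
credentials, table and column names are hypothetical and would need to match a real database and
driver; it is illustrative rather than a definitive pattern.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class CustomerQuery {
    public static void main(String[] args) throws SQLException {
        // Hypothetical connection details; a real application would supply its own.
        String url = "jdbc:postgresql://localhost:5432/salesdb";

        try (Connection conn = DriverManager.getConnection(url, "app_user", "secret");
             PreparedStatement stmt = conn.prepareStatement(
                     "SELECT customer_id, name FROM customers WHERE region = ?")) {
            stmt.setString(1, "EAST");
            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getInt("customer_id") + ": " + rs.getString("name"));
                }
            }
        }
    }
}

The SQL itself is just the SELECT statement passed to prepareStatement; everything around it is the
host language plumbing.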
10. Objective-C
Popularity: High
Ease of Learning: Difficult
Use Cases: Mobile Applications
o Apple iOS devices: iPhone, iPad
Objective-C is a general purpose, compiled, object-oriented language. Its syntax is derived from
Smalltalk. Until 2014, when Apple introduced Swift, it was the primary language used by Apple
to develop applications for MacOS and iOS.
Objective-C is still relatively popular, due to the large number of applications available that were
written using it. Now that modern MacOS and iOS development is done primarily in Swift, it is
likely that its popularity will eventually fall off as the number of supported applications tapers over
time. Objective-C is not easy to learn. It uses syntax and language conventions that are not common
to other languages, so experience with other languages does not apply well to Objective-C.
If you want to focus on software development for the Apple ecosystem, it’s a good idea to pick up
both Objective-C and Swift. This will give you the ability to work on older applications written in
Objective-C, and write new applications in Swift. Between the two, jobs are very plentiful.
11. Delphi / Object Pascal
Popularity: Ultra-Niche
Ease of Learning: Moderate
Use Cases: General
o Local Applications
Delphi is a compiler and Integrated Developer Environment (IDE) for the Object Pascal language.
Object Pascal is an object-oriented derivative of Pascal, which was developed in the late 1960s.
Delphi/Object Pascal is on this list because there is a lot of software out there written in Object
Pascal, with Delphi. As we can see from the number of job postings, Object Pascal is effectively a dead
language. If you want to write software as a profession, ignore Delphi and Object Pascal. Their
days have passed.
12. Ruby
Popularity: High
Ease of Learning: Easy to Moderate
Use Cases: General
o Web Applications
o Scripting
Ruby is an interpreted, dynamically typed, object-oriented language first introduced in the mid-
1990s. It was inspired by several other languages, including Lisp, Perl, and Ada. Ruby
is very popular for web application development. The Ruby on Rails framework (now known
simply as “Rails”) is a model-view-controller server-side framework written in Ruby.
Ruby is fairly easy to learn. Its common use in web applications makes job opportunities easy to
find.
13. MATLAB
Popularity: Medium
Ease of Learning: Moderate to Difficult
Use Cases: Specialty
o Mathematical Research
MATLAB is not a programming language, per se. It is an application that is used to calculate and
model complex mathematical computations. It is used primarily in research settings, at universities
and labs. MATLAB can handle complex matrix manipulations, and supports extensions to use
complex mathematical notation. Functions written in C, C#, and FORTRAN can be called from
MATLAB.
The knowledge needed to use MATLAB is more related to the mathematical concepts and skills
than knowledge of programming. If you’re already an advanced math student working on a PhD
in mathematics, MATLAB is relatively easy to learn.
14. Assembly Language
Popularity: Low
Ease of Learning: Difficult
Use Cases: Specialty
o Systems Programming
o Hardware / Firmware development
“Assembly language” is a generic term for low-level code that closely represents the native
machine instructions for a given microprocessor. Most of the languages on this list are “high-level”
languages that are closer, syntactically, to English. High-level language code must be compiled
down to an intermediate bytecode, or directly to machine instructions. Assembly code is
assembled (hence the name), not compiled.
The intent of a line of code written in C or Ruby is relatively easy to understand, just by reading
it. Assembly, by contrast, is very difficult to understand without a careful reading of the entire
program. Each operation, including math operations and moving data in and out of registers, is a
complete statement. This means that it takes a lot more assembly code than C code to do the same
amount of work.
Assembly code is most useful when performance is the most important goal. It is used for very
low-level systems programming, or in some cases may be combined with application code for a
performance boost. Jobs that require knowledge of assembly will include systems programming and
hardware development.
15. Swift
Popularity: Medium
Ease of Learning: Moderate to Difficult
Use Cases: Apple Mobile and Desktop applications
o MacBook
o iPhone
o iPad
Apple introduced Swift in 2014 as a modern alternative to Objective-C. Its goals were to be easier
to debug than Objective-C. Swift syntax is easier to read than Objective-C, and requires less code
to do the same amount of work. However, breaking changes introduced with new versions may
have stunted its adoption.
There are a fair number of jobs available for Swift, so it is probable that Swift is here to stay. As
mentioned in the Objective-C listing, if you want to develop for the Apple ecosystem, hedge your
bets and learn both.
17. Go
Popularity: Low
Ease of Learning: Moderate
Use Cases: General
o Web Applications
o Local Applications
Go (also known as Golang) is a relatively new kid on the block. It was introduced by a small team of
Google engineers in 2009. Go syntax borrows heavily from C and Java. The design goals for Go included
cross-platform compatibility, simplicity, and support for modern processors.
Go is relatively easy to learn. It has some of the complexities of C/C++ (such as pointers) but its
syntax and conventions are simpler. While Go jobs are not plentiful, there is a rapidly growing
following in engineering and DevOps circles.
18. Perl
Popularity: High
Ease of Learning: Easy to Moderate
Use Cases: General
o Local Applications
o Web Applications
Perl was introduced in 1987 as a utilitarian scripting language, and it later became widely used for CGI scripting.
Recent releases of Perl are quite different from early releases.
Perl is fairly easy to learn, but it has its detractors. The development of Perl was somewhat
haphazard, leading to criticism that it is not well organized. This has left Perl with a reputation for
being less than robust.
Quite a lot of software has been written in Perl, and that continues to this day. Perl jobs are not hard to
find. Having said that, it would be a stretch to say that Perl is a “modern” language. Perl may be a
good language to learn early in a career, as a way to get started, but it shouldn’t be the only one.
19. R
Popularity: Low
Ease of Learning: Difficult
Use Cases: Specialty
o Statistical Computation and Analysis
R programming jobs are not hard to come by, but the number is not high due to the specialized nature
of the work. If you are a data analyst doing statistical work, there’s a good chance you’ve learned
R. If that work sounds like something you want to look into, you should strongly consider adding
R to your toolbox.
20. PL/SQL
Popularity: Low to Medium
Ease of Learning: Moderate
Use Cases: Database Queries
o Oracle Databases
PL/SQL is the vendor-specific implementation of the SQL language listed above. The syntax and
features of PL/SQL align with features of Oracle databases. All dialects of SQL are moderately
difficult to learn. Simple data querying and updating is fairly easy to learn. Joins, aggregation, and
advanced concepts such as cursors require more understanding of database theory.
Oracle is a dominant database vendor, so PL/SQL jobs are fairly plentiful. If you are an Oracle
Database Administrator, PL/SQL is a must-learn. Full-stack developers that work at the data
“layer” should consider learning PL/SQL and other dialects.
21. Visual Basic
Popularity: Low
Ease of Learning: Easy
Use Cases: General
o Local Applications
Visual Basic (VB) was introduced by Microsoft as a variant of the BASIC programming language.
It is an event-driven language and Integrated Development Environment, primarily used to develop
Windows applications. VB was designed to be easy to learn, and to rapidly produce usable
software. Visual Basic for Applications (VBA) is embedded in older versions of Microsoft Office
applications, such as Access. VBA was used to provide programmatic manipulation of Office
documents. Access databases used VBA to compose mini-applications.
Microsoft deprecated Visual Basic 6.0, the last version of Visual Basic, in 2008. It is no longer
supported. Jobs that require Visual Basic are dwindling. It is likely that any such job is focused on
maintenance and/or porting to a modern platform.
22. SAS
Popularity: Low
Ease of Learning: Difficult
Use Cases: General
o Local Applications
SAS originally stood for “Statistical Analysis System.” SAS was first developed in 1966 on
mainframe computers. It was used for statistical data analysis.
SAS jobs are not common, though there are still some available. Modern statistical analysis tools have
overtaken SAS.
23. Dart
Dart never really took off, so it is not popular and jobs are few.
24. F#
25. COBOL
Ease of Learning: Moderate to Difficult
Use Cases:
o Mainframe Application Development
COBOL is a very old language used primarily for mainframe development. It is somewhat difficult
to learn, by comparison with more modern languages.
Programmers that have been using COBOL for decades are enjoying high employability, due to
the scarcity of COBOL programmers that are working and not retired. This is not a good reason to
learn it if you don’t already know it, however. It is much better to invest in new skills for a new
generation of languages and platforms.