System Installation and Change Management

revision notes computer science

Uploaded by

nanaohnana

COMPUTER SCIENCE NOTES

Topic 1

SYSTEM FUNDAMENTALS

Planning and system installation

Reasons for a new system:


Old system is inefficient
Old system is no longer suitable for its original purpose, or is outdated
To increase productivity and quality of output
To increase efficiency and minimise costs

Extent of a new system update depends on how much time, software, hardware, people needed and
the immediate environment. May need to train employees to use the system, fire employees (e.g.
secretary not needed if salespersons input orders from home PC), get more hardware (e.g.
employees need PC and network needs to be implemented), change server location etc.

To decide whether project is worth pursuing (Compatibility issues/ strategies for mergers/ data
migration/ hosting system/ installation processes are issues to be considered in the planning
stage, once the project is deemed worth pursuing), use:

Technical feasibility: Is available technology sufficient and advanced enough to implement
the system?
Economic feasibility: Is the new system cost effective? Are funds sufficient?
Legal feasibility: Are there conflicts between the system and laws/ regulations?
Operational feasibility: Are existing organisational procedures sufficient to support
maintenance and operation of the new system?
Schedule feasibility: How long will it take to implement?

Change management: Shifting people, departments and organisations from one state to the
desired state. Need to maximise benefits and minimise impact of change on people so that
stakeholders accept the change in environment. Issues regarding planning the system must
be resolved, e.g. students informed of double-sided printing to make use of the new feature

Compatibility issues
Business mergers: Two businesses combine. Need to ensure systems are compatible.
Incompatibility can arise from...

Language differences: Communication issues and different interpretations.
Software incompatibility: Different software/ systems can't operate well on the same
computer or same network.
Legacy systems: Outdated technology, hardware/software, computer systems or programs. Some still
satisfy user needs, and their data cannot be converted to newer formats or their applications
upgraded, so the organisation continues using legacy systems even when newer technology is available.

Strategies for merging:


1. Keep both systems, develop to have same functionality (high maintenance cost)
2. Replace both systems with a new one (high initial cost)
3. Combine best systems from both companies (hard for employees to work with system from
another company)
4. Only use one company's info systems (policy issues)

Other problems to overcome (expanded on in notes further below)…


Workforce issues- might have to lay off or retrain some workers
Time frame- to merge two systems
Testing- of the combined systems/ new data
Costs- of aligning two systems
Changeover decisions- e.g. parallel running etc.

Using client hardware VS hosting systems remotely


Locally hosted system: Software is installed and operated on client's own hardware/
infrastructure. Like paying to buy the product/ software package and owning it. E.g. set up
open-source message board system on your own web server.
PROS:
Best for large, complex systems.
Only pay once, excluding maintenance (if you don't pay it, can still continue to
use the software but not updated versions).
Can control the data yourself in a secure data centre, less risk of data loss as
you yourself can use redundancy to whatever extent you want.
CONS:
Higher initial cost than remote
Harder to predict total cost (maybe more expensive in long run with
maintenance payments)
Have to maintain yourself (hire IT personnel).

Remote hosted system: Software As A Service (SaaS) solution. Hardware is elsewhere,


updated centrally. Users can access data and operate the software from the cloud and pay for
the service on a subscription basis. e.g. sign up for a message board system where others take
care of maintenance.
PROS:
Lower initial cost
Easier to predict overall cost
Best for when the organisation doesn't already have the necessary hardware
You don't have to maintain it yourself
Data secure in a data centre
CONS:
Relying on a third party = risk of data loss if the provider shuts down
Legislation in the country of the provider may be weaker than in the user's country
Performance generally lower than on-premise solutions
Remote host in a different time zone can be inconvenient for end users
Depends on internet connection

Installation processes (adv/dis)


Implementation/ conversion: Putting new system online and retiring old one. Types...
Parallel: Both systems run in parallel at first to compare outputs; once satisfied with the
new system, terminate the old one. If the new system fails, can revert to the old one =
less risk, ideal for critical systems e.g. a nuclear power station. But higher cost. Not
efficient if the systems have different inputs/ outputs/ processes. Workers may be trained
to use the new system for nothing.
Direct/ Big Bang: Set up the new system and terminate the old one at the same time. Preferred
if the system is not critical, due to higher risk as the new system might not function well.
Less costly.
Pilot: In organisations with multiple sites. The new system is introduced at one of the sites
(pilot site/group), then introduced to the others if successful. Less risk; workers trained
at the pilot site can help train the rest.
Phased: Convert one module at a time e.g. per department. Training period and
implementation take longer.

Problems with data migration


Data migration: Transferring data between formats, storage types and/or computer systems when
switching to a new system, changing, upgrading or merging. Problems/ risks...
Incompatibility with file formats in the new system- could lead to incomplete or incorrect
data transfer
Non-recognisable data structures and formats- result in a mismatch of data, e.g. in
customer records
Data lost or corrupted during transfer due to transmission faults or lack of adequate
storage- not usable at destination
Data misinterpretation due to conventions in different countries e.g. dates,
measurements, currencies
Different validation rules between companies- could lead to inconsistent/incorrect results
Might not be able to use data while transferring- a problem if it's large and takes long

Types of testing
Testing is important because it identifies problems to be fixed, areas for improvement and
determines whether system/ software fulfils requirements. If not done properly, inadequate
system= inadequate employee productivity, reduced efficiency and output, increased costs=
end-user dissatisfaction.
1. Alpha testing: Offering early development version to other developers before available
to general public, get feedback.
2. Beta testing: Provide version to select group of users outside of company (closed beta) or
to public (open beta) and receive real-world feedback. But user report is not always best
quality, and there are many reports of the same bugs.
3. User acceptance testing: Usually last stage, provided to clients as a last-minute check
that the product satisfies target audience
4. Debugging: Systematically finding and correcting bugs/errors. Some programs do
it automatically = cheaper and faster.

User focus

User documentation
Important so users can understand, use and make the most of the system. Ensures users
can quickly adapt to the software/ system with minimal costs/inefficiencies. Can include...
Requirements- identify attributes, characteristics and functions
Technical- details on how to install and configure the product
End user- manuals for end user, support staff and system administrators. Details on how to
use the product
Marketing- how to market the product, analysis of market demand

Methods of documentation include...


Help files: Easy to access, cheap, can't be misplaced, no internet needed Online
documentation: Easier to use and search through, can have email support and can
update documentation later. However, access limited by internet connection Manuals:
Printed manuals can be accessed any time (even if system not yet installed) and not
restricted by internet connection but can't be updated. Digital manuals more cost efficient
and eco-friendlier but online ones restricted by internet connection.

User training methods (adv/dis)


Self-instruction: People can use resources like manuals, websites, video tutorials etc.
to learn on their own. Easiest and cheapest, with more flexible time for the user, but
usually only used for easy/ common-use programs with sufficient documentation, as
effectiveness depends on user motivation and ability to work alone.
Formal classes: Classroom setting, free discussion. Students can exchange ideas, with
direct interaction with an expert. But may be harder for members who work better on their
own, and self-assured students may dominate discussions.
Remote/ online training: Larger variety of courses online, can access any time, easier
to set up and include new members = cheaper. But excludes those without the infrastructure/
internet or IT skills to use it; might not be as effective (especially with dependent learners)

In general, employees need to learn quickly and easily to implement new system faster to
reduce costs and minimise inefficiencies

System backup

Causes of data loss:


User error: Accidental deletion, closing before saving
Natural disasters: Fire, flood, earthquake
Malicious activities: Someone purposefully deleting/ altering/ stealing data (can
be employee or external hacking)
Computer viruses: Destruction/ corruption of data
Power failure

Consequences can be serious e.g. hospital: puts lives in danger, may have to repeat tests and
procedures. In other situations, can cut into revenue if dissatisfied customer tells others e.g.
customer makes reservation but there’s no record of it or free rooms so they have to go elsewhere.

Measures to prevent data loss


1. Regular backups: On hard disks/ magnetic tape, online or on removable media
(e.g. USB, CDs) for fast backup and storage
2. Offsite storage: Data backups stored in different geographical location
3. Firewall and antivirus: Prevent virus infections
4. Failover systems: A standby computer system that operations can switch to in case
of hardware/ software/ network failure. Often switches automatically to reduce
downtime
Software deployment

It’s important that users can install updates because otherwise they might not have fixes for bugs
and errors or be able to benefit from added features/ improvements leading to performance issues.
Especially for organisations with different locations- different sites could have different versions of
the software, leading to incompatibility.

Types of updates (i.e. reasons for updates)


Patches: Used to fix known bugs and vulnerabilities. May introduce new bugs though
Updates: Fix bugs, add minor functionalities, usually free
Upgrades: As well as bug fixes, add new major functionalities or characteristics,
often need to be bought
Releases: Final, working applications that have gone through testing

Strategies for alerting users about updates


Automatic: Cookie is placed on the machine when software is registered and installed,
communicates with the developer automatically when software is started up. If update
is available, messages and alerts are sent back to the machine
Sending an email: User registers email and other details when installing software.
Email sent to the registered user with a link to download the update

Computer components

Hardware: Physical, tangible elements of a computer system e.g. CPU, HDD
Software: Set of instructions for the CPU to perform specific operations, can be programs or data
Peripheral device: Auxiliary device that can connect to, communicate and work with the
computer, e.g. input/ output devices.
Human resources: The set of individuals who make up the workforce of an organisation,
business sector or economy. There is software that combines human resources functions (e.g.
payroll, recruiting and training, performance analysis) into one package

Roles of a computer in a networked world


Client: Piece of computer hardware/software that accesses services made available by
server, by sending requests to server
Server: Program/host computer that fulfils requests from client programs or
computers across network and shares info to clients
Email server: Message transfer agent that transfers electronic messages from one
computer to another in a network
DNS (Domain Name System) server: Server that translates web addresses written in letters
(more memorable for humans) to the numeric IP (Internet Protocol) address
Router: Connects networks together to forward data packets between networks, deciding where
to send information so it is received by one network and then sent to another until it
reaches its destination
Firewall: Controls incoming and outgoing network traffic, determining what data
packets should be allowed through, based on a rule set. Needed to protect integrity of
client computer.
Ethical/ social issues with the networked world (interconnecting computers):
(also ethical issues associated with the introduction of new IT systems)
Security: Protecting hardware, software, peripherals, data and networks from unauthorized
access
Privacy: Controlling how and to what extent data is accessed and used by others, to
protect identity e.g. GPS location services on phone, data sold to companies. But
there’s also problems with anonymity e.g. cyber bullying, hacking, terrorism etc.
Censorship: Some info may be deemed inappropriate. Network manager could make
sure no other computers can access it. e.g. China blocking sites.
People and machines: Easier communication, more information and efficiency etc.
BUT addiction, real life neglect, lack of sleep, health problems, car accidents, technical
unemployment, digital alterations (e.g. fake videos, fake news etc).
Digital divide and equality of access: Inequalities regarding use and access to
computer systems in different environments/ countries, leads to inequality in info and
education access
Surveillance: Monitoring people e.g. for law enforcement, employers, traffic control
etc. Ethics of privacy and knowledge/ consent to surveillance
Globalisation & cultural diversity: Spread info and reduce political, geographical,
financial boundaries. BUT diminishing of traditional cultures
Environment
How the system will benefit the company

Practical issues to consider when networking:


Reliability: How consistently a computer system functions according to its
specifications, with minimal system failure. Having a long mean time between failures.
Failure can = data/time/revenue loss, injury etc.
(Data) integrity & consistency: Maintenance of accuracy and consistency of data. Must
be complete, up-to-date, unaltered. Is inconsistent if there’s different versions of data
(duplication)
Standards and protocols: Rules followed in development of systems, including
proprietary standards (e.g. computers compatible with Microsoft operating system),
industry standards (formally decided, e.g. USB), de facto standards (e.g. QWERTY
keyboard)

System design and analysis

Stakeholder: Has an interest or investment in a project and is impacted by how it turns out.
System analysts have to collaborate with all stakeholders- clients and end-users.

Obtaining requirements:
Interviews: Face-to-face with verbal responses. Can be structured with the same
questions and manner for every stakeholder, or unstructured with more flexibility.
Questionnaires: Can be closed/restricted (yes/no, box checking) or open/unrestricted
(free response questions)
Direct observation of current procedures: On-site observation of different
departments to see where things can be more efficient.
Evaluating requirements
Interviews:
Talk directly to users/member of organisation and can observe non-verbal behaviour=
more reliable, valid data
Unstructured interviews can reveal more questions that otherwise wouldn’t have
been addressed= more detailed reports
Data from unstructured interviews is hard to summarise/ evaluate/ analyse
Level of detail depends on type of interview- structured interviews get less
detailed responses
Time-consuming to get detailed results
Questionnaire:
Time-saving and cost-efficient- can get info from a large group of people easily and cheaply
Closed/restricted questions= data is easier to compare
Open questions= more detailed reports
Level of detail depends on type of questions- closed questions don’t allow for clarifications,
elaborations, more details
Stakeholder could interpret question wrong= invalid answers
Observations:
Highlight aspects not detected in questionnaires/interviews= produce more detailed reports
May be more reliable than interviews- see what people actually do, instead of what they say
they do
Time consuming/expensive- might need to observe a complete business/system
cycle which could take a significant amount of time
People act differently when they know they are being watched= unreliable observations

Prototype: Early sample, version or model of a system/software/hardware, displaying the


minimum necessary features, used to test and gather feedback on a new concept or system
from clients. Clients can follow development closely and see the changes as they are made.
Iteration (iterative design): Where solutions/code/prototypes are designed, developed, tested
and evaluated in repeated cycles. With each iteration, additional features may be added until
there is a fully functional software.

This involves end user participation. Failure to involve end user in design process can lead to
software not suitable for its intended use because of lack of feedback- has adverse effect on
user productivity, efficiency etc. Need effective collaboration and communication between client,
developer and end-user.

Illustrating system requirements:


Flowcharts show flow of data through program, can show all types of processing and can refer
to hardware as well as programs, files, databases etc.

Data Flow Diagrams (DFD) show how data is stored and moved through the system, but not
type of data or storage.
Structure charts describe functions and sub-functions and relationships between modules in a
program. Involves modular design, process of designing modules individually then combining
to form solution.

Human interaction with the system


Usability: Ability to accomplish user goals. More usable= more efficient to use, easier to learn.
Accessibility: Ability of system/ device to meet needs of as many individuals as possible. Low
accessibility = barriers to certain groups e.g. disabled

Usability problems and examples:


Learnability: How easy is it to accomplish basic tasks the first time users encounter the
design? E.g. learning features of different manufacturers, accidental touches on touch
screen, right-handed mouse
Efficiency: How quickly can users perform tasks? E.g. need to locate product and details
quickly on e-commerce sites
Memorability: When returning to design after period of not using it, how easily can users
establish proficiency?
Error: How many errors do users make, how severe are they, and how easily can users
recover? e.g. inaccurate/ outdated street data, no verification/ validation, time taken
to reschedule
Satisfaction: How pleasant is it to use the design? E.g. visually appealing
website, all other factors carried out
Can be affected by...
Complexity/ simplicity: e.g. unnecessary extra apps when user just needs call & SMS,
website clearly stating what company offers and clear navigation, unclear instructions
Readability/ comprehensibility: e.g. small screen/ buttons, low quality speakers,
incomprehensible font and colours
Other stuff: Battery life, brightness outside, health problems, hardware components e.g.
camera with no flash, insufficient memory
Example- usability issues with mobile devices…
Size of screen- difficult to see/use especially in poor light
Size of keys- difficult to access functions
Battery life- may need to recharge regularly, inconvenient

Solving accessibility/ usability problems:


Usually for disability or impairment
Visual impairment: Braille keyboard/ printer, speech recognition, text-to-speech/ screen
readers, colour changers, screen magnifiers
Hearing and speech: Subtitles, visual effects
Cognitive problems: Word processors for dyslexics, special software with strong
interaction
Mobility issues: Eye typer, puff and suck switch, foot mouse, speech recognition, word
prediction software
Topic 2

COMPUTATIONAL THINKING

Thinking procedurally
This includes identifying the steps and putting them in the correct order e.g. recipes

Sub-procedure: A section of code in a program that does a specific job. Can be called by name
when needed without naming the details as these are wrapped in the procedure. It is therefore
an example of abstraction.
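A minimal Python sketch of a sub-procedure (the function name and values are hypothetical, not from the notes):

```python
# The details of the job (summing, dividing) are wrapped inside the procedure,
# so callers only need its name -- an example of abstraction.
def average(marks):
    return sum(marks) / len(marks)

result = average([5, 8, 23])
print(result)  # 12.0
```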

Thinking logically
Different actions are taken based on conditions, taking alternative procedures into account. Need
to identify conditions associated with a given decision (like an IF statement or logic gates- testing
conditions).
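A Python sketch of condition-based branching (the function and its strings are hypothetical, echoing the cooking example used later in these notes):

```python
# Different actions are taken based on a condition, like an IF statement.
def next_step(carrots_hard):
    if carrots_hard:
        return "cook them a bit longer"
    else:
        return "serve them"

print(next_step(True))   # cook them a bit longer
print(next_step(False))  # serve them
```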

Thinking ahead
Need to identify inputs and outputs required in solution before carrying it out. e.g. cooking- need
to identify the different ingredients.
Gantt Charts:
Outlining tasks, how long they will take to carry out, and when they are carried out. Can
identify and show what tasks can be carried out concurrently.

(Figure: bar chart for project schedule management, from the markscheme. Time scale on top,
list of tasks on the side. Allows easy inspection of overlapping tasks, durations etc.)

Pre-planning
Pre-fetching/ caching: Building libraries of pre-formed elements for future use, e.g. using Java
libraries to increase efficiency, making sure you have your most commonly used spices ready at
the front of your cupboard for cooking etc.
Pre-condition: Starting state before algorithm is executed, conditions that need to be fulfilled.
e.g. have to have the required ingredients, a place to cook, pre-conditions while making decisions
in cooking (“Are the carrots still hard? Cook them a bit longer”).
Post-condition: Final state after execution of algorithm, the state you are trying to achieve/
lead up to, the final result.

Will need to also consider exceptions when building pre-conditions, e.g. identifying conditions for
calculating end of year bonus when not all employees have worked for the company for the whole
year.

Thinking concurrently
Concurrent processing: Implementing parts of a solution at the same time e.g. assembly line
mass production- people carrying out task on one product then moving on to the next one while
the next person carries out another task at the same time.
In computers:
Execution of different instructions simultaneously by multiple processors. Each
processor processes different parts of a program's procedures and sub-procedures.

Needs better planning that accounts for different people working on the solution at the same time
due to the changes they make, e.g. database should only be accessed once edit has been made
otherwise the person wouldn't know someone else has erased their changes
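The coordination problem above can be sketched with a lock that serialises edits (an assumed Python illustration using the standard `threading` module, not part of the original notes):

```python
import threading

# Without coordination, two concurrent writers can interleave and lose updates.
# A lock ensures each edit completes before the next one begins.
counter = 0
lock = threading.Lock()

def add_many(n):
    global counter
    for _ in range(n):
        with lock:          # only one thread edits the shared value at a time
            counter += 1

threads = [threading.Thread(target=add_many, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000 -- no updates lost
```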

Thinking abstractly
Selecting the pieces of information that are relevant to solving the problem and leaving out other
information, to enable the ability to examine a solution at a human level of interaction.
Considering something as its relevant characteristics and qualities, separated from concrete
realities, actual objects or instances.
e.g.
Map of London only showing roads and names, not the buildings because purpose is
navigation along roads for cars
Tube map showing a simplified route, as the user is only interested in the order of stops
Virtual reality games having a smaller time scale and providing icons of items in inventories
City simulation for pilots not having details like people or windows on buildings, just
landmarks and the shape and height of buildings
School decomposed into faculties
An object in OOP is an example of abstraction because it hides the details of the
code while preserving functionality

Program design

Array algorithms:
Sequential/ linear search: Usual search, go through every value and compare to the target value.
Simple to implement, data doesn't need to be in order. Inefficient with large number of elements,
may have to go through every single one of them.
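The sequential search described above, as a short Python sketch (function name and data are illustrative):

```python
def linear_search(values, target):
    # Compare every element to the target, in order
    for i in range(len(values)):
        if values[i] == target:
            return i        # position of the first match
    return -1               # not found

print(linear_search([5, 8, 23, 77, 89, 104], 89))  # 4
```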

Binary search: Values in order. Compare search value with middle value. If smaller, compare
to middle value of sub-array to the left. If larger, compare to sub-array to the right, and so on.
Faster than sequential search. Too complicated for small number of elements. Only works on
sorted lists, difficult if data is constantly being added.

MARKS = [5,8,23,77,89,104]
TARGET = 89
MIN=0
MAX=5
FOUND = false

loop while FOUND=false AND MIN<=MAX


MID = ((MIN+MAX)div 2)
if MARKS[MID]=TARGET then
FOUND = true
POSITION = MID
else if TARGET>MARKS[MID] then
MIN = MID+1
else
MAX=MID–1
end if
end while
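The pseudocode above translates to Python roughly as follows (a sketch using the same data):

```python
def binary_search(marks, target):
    low, high = 0, len(marks) - 1       # MIN and MAX in the pseudocode
    while low <= high:
        mid = (low + high) // 2         # (MIN+MAX) div 2
        if marks[mid] == target:
            return mid                  # POSITION
        elif target > marks[mid]:
            low = mid + 1               # search the right sub-array
        else:
            high = mid - 1              # search the left sub-array
    return -1                           # not found

print(binary_search([5, 8, 23, 77, 89, 104], 89))  # 4
```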

Bubble sort: Compare adjacent values. If not in order, swap round


Simple to write, less code. Takes more time to sort, average time increases almost
exponentially as number of elements increase.

MARKS = [67,33,2,89,10,99]
TEMP = 0

loop X from 0 to 4 //4 = no. of elements-2


loop Y from 0 to 4 – X
if MARKS[Y] > MARKS[Y+1] then
TEMP = MARKS[Y]
MARKS[Y] = MARKS[Y+1]
MARKS[Y+1] = TEMP
end if
end loop
end loop
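The same bubble sort as a Python sketch, with the loop bounds generalised to any list length:

```python
def bubble_sort(marks):
    n = len(marks)
    for x in range(n - 1):
        for y in range(n - 1 - x):      # largest values "bubble" to the end each pass
            if marks[y] > marks[y + 1]:
                # swap adjacent values that are out of order
                marks[y], marks[y + 1] = marks[y + 1], marks[y]
    return marks

print(bubble_sort([67, 33, 2, 89, 10, 99]))  # [2, 10, 33, 67, 89, 99]
```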

Selection sort: Splits array into sub-arrays. First sub-array is sorted, second is unsorted. e.g. to
sort in ascending order, find the smallest value, place it in the correct position in the first sub
array by swapping it with the element that was there, move position of beginning of sub-array
forward one, loop through the rest (second sub-array) to find the smallest value again. Repeated
for all elements.
Good for small lists. Not efficient with big number of items, have to find the smallest value
many times.

MARKS = [67,33,2,89,10,99]
MIN = 0 //position of start of un-sorted sub-array
SMALLEST = 0 //position of currently smallest value found

loop MIN from 0 to 4 //loop to no. of values-2

SMALLEST = MIN //reset at the start of each pass, otherwise later passes reuse a stale position
loop X from MIN+1 to 5 //loop through sub-array, MIN moves up 1
if MARKS[X]<MARKS[SMALLEST] then //finds smallest value
SMALLEST = X
end if //by end of loop, position of smallest value found
end loop
TEMP = MARKS[SMALLEST] //swaps smallest value with value at start of sub-array
MARKS[SMALLEST] = MARKS[MIN]
MARKS[MIN] = TEMP
end loop
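The selection sort above as a Python sketch; note that the smallest-position marker must be reset at the start of every pass:

```python
def selection_sort(marks):
    n = len(marks)
    for start in range(n - 1):          # start of the unsorted sub-array (MIN)
        smallest = start                # reset each pass: assume first unsorted is smallest
        for x in range(start + 1, n):   # scan the unsorted sub-array
            if marks[x] < marks[smallest]:
                smallest = x            # remember position of the smallest value
        # swap the smallest value with the value at the start of the sub-array
        marks[start], marks[smallest] = marks[smallest], marks[start]
    return marks

print(selection_sort([67, 33, 2, 89, 10, 99]))  # [2, 10, 33, 67, 89, 99]
```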
Collection: Group of objects. No assumptions are made about the order of the collection (if any)
or whether it can contain duplicate elements. We add and retrieve data from them.

Loops
Just know how to code the different types of loop tbh
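As a quick reference, the two main loop types in Python (the values here are arbitrary):

```python
# Count-controlled loop ("loop X from 1 to 5" in pseudocode)
total = 0
for x in range(1, 6):   # x takes the values 1..5
    total += x
print(total)  # 15

# Condition-controlled loop ("loop while" in pseudocode)
n = 1
while n < 100:
    n *= 2              # repeat until the condition fails
print(n)  # 128
```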

Suitability of an algorithm
Efficiency: Amount of computer resources required to perform the algorithm's functions
Correctness: Extent to which the algorithm satisfies its specification
Reliability: Capability to maintain performance
Flexibility: Effort required to modify the algorithm for other purposes

Big O notation: Measure of efficiency of an algorithm.


O(1) – efficiency and speed are always the same, time proportional to 1. e.g. addFront, an
algorithm that adds up a fixed no. of values etc.
O(n) – time and efficiency proportional to n. e.g. linear search method (proportional to
length of array, longer array = longer time searching, loop to a non-constant value etc.)
O(n²) – proportional to n². Time required increases rapidly as n increases e.g. nested
loops in bubble sort and selection sort
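A rough way to see these growth rates is to count worst-case comparisons (an assumed illustration, not from the notes):

```python
# Worst-case comparison counts for an array of n elements
def linear_comparisons(n):
    return n                    # O(n): one comparison per element

def bubble_comparisons(n):
    return n * (n - 1) // 2     # O(n^2): nested loops over the array

print(linear_comparisons(10), bubble_comparisons(10))    # 10 45
print(linear_comparisons(100), bubble_comparisons(100))  # 100 4950
```

Doubling n doubles the linear count but roughly quadruples the quadratic one, which is why O(n²) algorithms slow down rapidly on large inputs.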

Nature of programming

Fundamental operations of a computer:


Adding (and subtracting) values
Comparing values/data
Retrieving data
Storing data
Compound operations use combinations of fundamental operations of a computer, e.g. "find
the largest"
Conventions for pseudo code
Variables in caps
Keywords in lowercase e.g. loop, if
Method names mixed e.g. getNumber
Dot notation for calling a method on an object/ collection
// for comments
Boolean operators in caps e.g. AND, OR
loop X from 1 to 2, loop while, end loop, output, input
=, >, <, >=, <=, ≠, mod (returns remainder e.g. 15 mod 7 = 1), div (how many times number
fits e.g. 15 div 7 = 2)
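The mod and div examples above map directly onto Python's `%` and `//` operators:

```python
# mod: the remainder after division
print(15 % 7)    # 1

# div: how many whole times the divisor fits
print(15 // 7)   # 2
```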

Programming languages
Machine language: Low-level language directly understood by computer, made up of
binary numbers.
Assembly language: Low-level language using symbols for instructions and memory addresses
High-level programming language: Uses elements of natural language. Easy for humans to use
and more understandable. Abstracts some areas of computing systems; it would otherwise take
too long to write our systems in machine code.

Programming languages need to have...


Fixed vocabulary- Instructions for operations do not change, e.g. "print" will always print
Unambiguous meaning- Clear instructions
Consistent grammar and syntax- The way we declare and use language features must
be the same.
A way to define basic data types and operations on those types- the ability to
write functions/procedures
It has to run on/ be able to be processed by a computer- it must have a compiler/interpreter

Higher level programming languages can differ by…


Method of translation- Whether by compiler or interpreter (or both)
Different programming paradigms- e.g. procedural or object-oriented
Purpose of the language- Specific for certain tasks (e.g. CSS for HTML websites or
language for AI) or general purpose (can build any program with any logic e.g. Python)
Compatibility with different environments- e.g. Java with virtual machine can run on
all OS while some languages can’t
Syntax differences- e.g. structure of statements

Source code: Original code/program developed using high level language. Needs to be
translated into machine code to be run/executed by the computer.
Object/ target program: Program translated into machine language. Translation methods:
Compiler: Executes the translation process only once, translating the whole program. The object
program is saved so it doesn't need to be compiled again. All errors are displayed when
the whole program is checked; compilation succeeds only once errors are fixed. Example: C++
Interpreter: Reads, translates and executes program line by line. Errors are displayed
after each line is interpreted. Goes through the process every time the program is run,
much slower than a compiler. Example: BASIC

Writing code
Variable: Used to store a data element that can be changed during program execution. Has
an identifier and a type.
Constant: Elements and quantities that don't change, e.g. final double PI = 3.14
Object: Comprised of data and methods (operations that can be performed by the object)

Use of programming languages

Advantages of breaking down into sub-programs


Breaking a complex job into simpler tasks
Distributing the program amongst different programmers
Code reuse across multiple programs
Reducing code duplication in a program

Advantages of collections
Methods are predefined algorithms, can immediately use

Software reuse
