
A User Guide on
Restoring Data Dumps
&
Importing to
Analytical Tools

Centre for Data Management and Analytics

Preface
Data analytics is a multistep process comprising the collection of data, its restoration, its connection to software for further analysis, the application of tools and techniques, the generation of insights, and so on. Contemporary data analytics software is easy to learn and use, with many features being intuitive and GUI based. However, the initial steps of preparing data and connecting a data analytics tool to dataset(s)/database(s), which come in various formats, can appear intimidating and difficult to a beginner.

CDMA has been steering the use of data analytics for audit in IAAD. Field offices often approach us with issues concerning the restoration of backup files created in an audited entity's environment and the connection of data analytics software to data files of different formats. Hence, we felt the need for a user guide that explains these processes in a step-by-step manner.

The present user guide is an endeavour to address such issues. Its content has been finalised based on the types of databases that one comes across in audit. At present, KNIME, an open-source tool, and Tableau, a proprietary tool, apart from Excel and other options, are used for data analytics in the Department. Accordingly, the relevant topics have been included in the user guide.

The user guide is practice-oriented and intends to provide hands-on experience in restoring data from various formats and connecting it to different analytical tools. Personnel with a basic grounding in data analytics tools and techniques will be able to follow and retrace the steps, which are elucidated with the help of screenshots.

As the field of data analytics is dynamic and new tools and techniques keep coming up, we
welcome any feedback or suggestions to make this guide more relevant and useful.

Table of Contents
Preface
Chapter 1: Restoration of Backup/Dumps from Different Databases
1.1 Pre-Requisite for Importing or Restoring a Dump
1.2 Restoration from Microsoft SQL Server - Database
1.3 Restoration from PostgreSQL Server – Database
1.4 Restoration from IBM DB2 Server – Database
1.5 Restoration from Oracle - Database
1.6 Restoration from MySQL – Database
Chapter 2: Tableau Connectivity with Different Databases
2.1 Oracle Connectivity with Tableau
2.2 Microsoft SQL Server Connectivity with Tableau
2.3 MySQL Connectivity with Tableau
2.4 IBM DB2 Connectivity with Tableau
2.5 PostgreSQL Connectivity with Tableau
2.6 MS-Excel Connectivity with Tableau
2.7 MS-Access Connectivity with Tableau
2.8 JSON File Connectivity with Tableau
2.9 Text File Connectivity with Tableau
2.10 PDF File Connectivity with Tableau
Chapter 3: CaseWare IDEA Connectivity with Tableau
3.1 Installation of IDEA Driver for Connectivity with Tableau
3.2 CaseWare IDEA Connectivity with Tableau
Chapter 4: KNIME Connectivity with Different Databases
4.1 Uploading Jar Files in KNIME
4.2 Microsoft SQL Server Connectivity with KNIME
4.3 MySQL Connectivity with KNIME
4.4 Oracle Connectivity with KNIME
4.5 IBM DB2 Connectivity with KNIME
4.6 Postgres Connectivity with KNIME
4.7 MS Excel Connectivity with KNIME
4.8 Text File Connectivity with KNIME
4.9 JSON Connectivity with KNIME
4.10 MS-Access Connectivity with KNIME

Chapter 1: Restoration of Backup/Dumps from Different Databases
A data dump is a large amount of data that is moved from one computer system, file, or device
to another. A database dump is a major output of data that can help users to either back up or
duplicate a database. For example, a database can perform a data dump to another computer or
server on a network, where it could then be utilized by other software applications or analysed
by a person.

Currently, a large number of auditee organisations are computerised, and it is a challenge for auditors to conduct audit in an IT-based environment. Over time, the size of each organisation's databases has increased, coupled with the use of various Relational Database Management Systems (RDBMS). As a result, getting the relevant data tables from an RDBMS platform becomes difficult, partly due to inadequate information about the data structure, data definitions, etc. of audited entities' databases. Hence, for practical reasons, or sometimes owing to the scope of the audit, databases are often backed up from servers into dump files and provided to Audit. Data dumps are then restored using the relevant tools, after which the required table(s) and fields are identified for analysis. A schematic diagram of the said process is shown above.

The methodology used to restore a dump may vary depending on the RDBMS and its version. For example, a database dump created using a newer version of an RDBMS may not be restorable using an older version, and vice versa.

The present chapter shares CDMA's experience in handling and restoring data dump/backup files.


1.1 Pre-Requisite for Importing or Restoring a Dump


Before proceeding with the task of restoring a data dump, one should have information on some parameters of the database and the environment in which it was created. These are summarized in the table below:


1.2 Restoration from Microsoft SQL Server - Database


Open SQL Server 2012.
Enter the credentials.
Click Connect.
'Object Explorer' opens.
Right-click on Databases.
Click Restore Database.
Select the 'Device' option.
Click on the tab to select backup devices.
A dialog box 'Select backup devices' opens.
Click on Add.
Choose the backup file stored in the local system or on an external storage device.
Click OK.
The backup is displayed in the 'Select backup devices' dialog box.
Click OK.
A dialog box appears showing the location where the backup will be restored; the location can be changed as per requirements.
Click OK.
The restoring process starts; after completion, a dialog box confirms that the database "NORTHWIND" was restored successfully.
The NORTHWIND database can be expanded in the 'Object Explorer' to see the list of tables under it.
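The same restore can also be run from the command line instead of the Management Studio GUI. The sketch below is illustrative only: the server name, credentials and file path are assumptions, not taken from the guide.

```shell
# Restore the NORTHWIND database from a .bak file using sqlcmd
# (server, login and backup path are illustrative assumptions)
sqlcmd -S localhost -U sa -P "<password>" ^
  -Q "RESTORE DATABASE NORTHWIND FROM DISK = N'D:\Backup\NORTHWIND.bak' WITH RECOVERY"
```

RESTORE FILELISTONLY can be issued first against the same .bak file to inspect the logical file names it contains.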


1.3 Restoration from PostgreSQL Server – Database


Right-click on postgres under Databases.
Click Restore.
A dialog box "Restore database 'postgres'" opens.
Click on the browse button as shown in the screenshot.
A dialog box 'Select backup filename' will open.
Choose the backup file to be restored.
Click Open.
The Filename field will show the path of the backup file to be restored.
Click on Restore.
Restoration starts.
The tables restored from the backup are shown in the screenshot.
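The same restoration can be performed with the pg_restore command-line utility. A minimal sketch, assuming a custom-format backup file; the host, user and file names are illustrative:

```shell
# Restore a custom-format PostgreSQL dump into the postgres database
pg_restore -h localhost -U postgres -d postgres -v "D:\Backup\entity_db.backup"
# A plain-SQL dump is loaded with psql instead:
# psql -h localhost -U postgres -d postgres -f "D:\Backup\entity_db.sql"
```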


1.4 Restoration from IBM DB2 Server – Database


Open IBM DB2.
Enter the Database Name, Location, Alias and Comment.
Click Run.
The process starts and completes successfully.
Right-click on the database.
Click on 'Backup and Restore'.
Select 'Restore'.
A window 'Restore database DGFTIEC' (the name of the database in this example) opens.
Under Restore Objects, click on 'Restore the entire database'.
Choose the method of selecting the backup image.
Enter the backup image information manually.
The status will be shown as succeeded.
Note: the backup time and date should match the backup image details.
The process starts, and the restoration completes successfully.
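The equivalent restore can also be issued from the DB2 command line processor. A sketch with an illustrative path and timestamp; as noted above, the TAKEN AT value must match the backup image details:

```shell
# Restore DGFTIEC from a backup image stored in D:\Backup
# (path and timestamp are illustrative assumptions)
db2 "RESTORE DATABASE DGFTIEC FROM D:\Backup TAKEN AT 20190401120000"
```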


1.5 Restoration from Oracle -Database


Import-export (logical functions) and backup-restore (physical functions) are, inter alia, very important functions of a database. Audited entities usually provide audit teams with a dump file of their database; we therefore need to restore that dump file, initially in the same environment, and then import the objects or tables specific to the audit requirement.

Prior to the launch of Oracle Database 11g Release 2 (11.2), the original export (exp) and import (imp) utilities were used. In that release, a new feature, Data Pump, was introduced for faster loading/unloading of data through the new export (expdp) and import (impdp) utilities. A brief overview of Data Pump is given below.

Oracle Data Pump provides a server-side infrastructure for fast data and metadata movement
between Oracle databases. Data Pump automatically manages multiple, parallel streams of
unload and load for maximum throughput. The degree of parallelism can be adjusted as per
requirement.

Data Pump Import (hereinafter referred to as Import for ease of reading) is a utility for loading an export dump file set into a target system. The dump file set is made up of one or more disk files that contain table data, database object metadata, and control information. The files are written in a proprietary, binary format. During an import operation, the Data Pump Import utility uses these files to locate each database object in the dump file set.

Import can also be used to load a target database directly from a source database with no
intervening dump files. This is known as a network import.

The Data Pump Import utility is invoked using the impdp command and can be driven through a command line, a parameter file, or an interactive-command interface.


Data Pump Import Modes: One of the most significant characteristics of an import operation
is its mode, because the mode largely determines what is imported. The specified mode applies
to the source of the operation, either a dump file set or another database if the
NETWORK_LINK parameter is specified.

When the source of the import operation is a dump file set, specifying a mode is optional. If no
mode is specified, then Import attempts to load the entire dump file set in the mode in which
the export operation was run.

The mode is specified on the command line, using the appropriate parameter. The available modes are as follows:

a) Full Import Mode
b) Schema Mode
c) Table Mode
d) Tablespace Mode
e) Transportable Tablespace Mode

The detailed explanation for each mode is as follows:


Full Import Mode

A full import is specified using the FULL parameter. In full import mode, the entire content of
the source (dump file set or another database) is loaded into the target database. This is the
default for file-based imports. You must have the IMP_FULL_DATABASE role if the source
is another database.

Cross-schema references are not imported for non-privileged users. For example, a trigger
defined on a table within the importing user's schema, but residing in another user's schema, is
not imported.


The IMP_FULL_DATABASE role is required on the target database, and the EXP_FULL_DATABASE role is required on the source database, if the NETWORK_LINK parameter is used for a full import.
Schema Mode

A schema import is specified using the SCHEMAS parameter. In a schema import, only objects
owned by the specified schemas are loaded. The source can be a full, table, tablespace, or
schema-mode export dump file set or another database. If you have the
IMP_FULL_DATABASE role, then a list of schemas can be specified and the schemas
themselves (including system privilege grants) are created in the database in addition to the
objects contained within those schemas.
Cross-schema references are not imported for non-privileged users unless the other schema is
remapped to the current schema. For example, a trigger defined on a table within the importing
user's schema, but residing in another user's schema, is not imported.
Table Mode

A table-mode import is specified using the TABLES parameter. In table mode, only the
specified set of tables, partitions, and their dependent objects are loaded. The source can be a
full, schema, tablespace, or table-mode export dump file set or another database. You must
have the IMP_FULL_DATABASE role to specify tables that are not in your own schema.
You can use the transportable option during a table-mode import by specifying the TRANSPORTABLE=ALWAYS parameter in conjunction with the TABLES parameter. Note that this also requires the NETWORK_LINK parameter.
Tablespace Mode

A tablespace-mode import is specified using the TABLESPACES parameter. In tablespace mode, all objects contained within the specified set of tablespaces are loaded, along with their dependent objects. The source can be a full, schema, tablespace, or table-mode export dump file set or another database. For unprivileged users, objects not remapped to the current schema will not be processed.
Transportable Tablespace Mode

A transportable tablespace import is specified using the TRANSPORT_TABLESPACES parameter. In transportable tablespace mode, the metadata from a transportable tablespace export dump file set or from another database is loaded. The data files, specified by the TRANSPORT_DATAFILES parameter, must be made available from the source system for use in the target database, typically by copying them over to the target system.
Encrypted columns are not supported in transportable tablespace mode.
This mode requires the IMP_FULL_DATABASE role.

Data Pump Import Modes syntax diagram

Step by step illustration of data import:

Step-1: Create Directory


Log in to the server to create a directory object. An Oracle directory is a database object pointing to an operating system directory on the database server machine, used for reading and writing files.

Syntax: SQL>CREATE DIRECTORY Data_dump_dir AS 'D:\cdma'; <Enter>


Response: Directory created.

The execution of the syntax and response thereof in the command prompt can be seen in the
image below:

Note: After the above command is given at the SQL prompt, the directory object will be created, but the directory (Data_dump_dir) will not appear as a sub-directory on disk because it is a virtual directory (only a pointer to the physical directory); as can be seen below, D:\CDMA is empty:


Step-2: Granting of reading and writing rights


Syntax: SQL> GRANT READ, WRITE ON DIRECTORY Data_dump_dir TO <User Name>;
<Enter>
Response: Grant succeeded.
The execution of the syntax and response thereof in the command prompt can be seen in the
image below:

Step-3: Granting Privilege rights


These privileges should only be granted to authorized users (database administrators); normally, database administrators should perform export and import operations.

Syntax: SQL> GRANT datapump_imp_full_database to <User Name>; <Enter>


Response: Grant succeeded.

The execution of the syntax and response thereof in the command prompt can be seen in the
image below:

Step-4: Importing database


Case-1: Importing unconditional data dump
Syntax: C:\impdp User/Password DIRECTORY=Data_dump_dir DUMPFILE=<Dump File
Name> LOGFILE= <Log File Name.log> <Enter>


The execution of the syntax and response thereof in the command prompt can be seen in the
image below:

Case-2: Importing a full database

A full import loads the entire dump file set and is specified with the FULL=Y parameter. (For schema-level imports, the OWNER parameter of the original imp utility has been replaced by the SCHEMAS parameter, which is used to specify the schemas to be imported.) The following is an example of the full import syntax:
Syntax: C:\impdp User/Password FULL=Y DIRECTORY=Data_dump_dir
DUMPFILE=<Dump File Name.dmp> LOGFILE= <Log File Name.log>; <Enter>

The execution of the syntax and response thereof in the command prompt can be seen in the
image below:

OR
Case-3: Importing selected tables from the database
The TABLES parameter is used to specify the tables that are to be imported. The following is
an example of the table import syntax:
Syntax: C:\impdp User/Password TABLES=<Table Name_1>,<Table Name_2>,<Table
Name_3> DIRECTORY=Data_dump_dir DUMPFILE=<Dump File Name> LOGFILE=
<Log File Name.log> <Enter>

The execution of the syntax and response thereof in the command prompt can be seen in the
image below:
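Putting the cases together, a hypothetical table-mode run might look like the following. The user name, table names and file names are illustrative assumptions, not values from any audited entity:

```shell
# Import two selected tables from the dump placed in the directory object Data_dump_dir
impdp audit_user/secret TABLES=IEC_MASTER,IEC_DETAILS DIRECTORY=Data_dump_dir DUMPFILE=dgft_export.dmp LOGFILE=dgft_import.log
```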


1.6 Restoration from MySQL – Database


Click on File.
Choose Catalogs.
Under the Schemata tab on the left-side pane, the list of schemas is given.
Right-click on the Schemata tab.
Select 'Create New Schema'.
Enter the name of the schema (PMSSY in this example).
Click OK.
The schema "PMSSY" opens.
Click on 'Restore'.
Click on 'Open Backup File'.
A dialogue box opens to choose the options.
Choose the backup to be restored from its location on the local desktop.
Click on the backup file.
Click Open.
Under the General tab, the path of the file to be restored can be seen.
Under the General tab, the Target Schema is to be chosen from the dropdown list.
Click on Start Restore.
The restoring process can be seen in the screenshot.
The list of tables restored from the backup will be shown.
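Where only a plain .sql dump file is available, the same restore can be done from the MySQL command line. A sketch with illustrative host, user and file names:

```shell
# Create the target schema and load the dump into it
mysql -h localhost -u root -p -e "CREATE DATABASE IF NOT EXISTS PMSSY"
mysql -h localhost -u root -p PMSSY < pmssy_backup.sql
```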

Chapter 2: Tableau Connectivity with Different Databases
Tableau can help anyone see and understand their data; it connects to almost any database and is highly user-friendly with its drag-and-drop features. In this chapter, the stepwise process of connecting Tableau to different RDBMS/DBMS and various file formats, present on the desktop or a local server, is explained with the help of screenshots:

2.1 Oracle Connectivity with Tableau


Open Tableau and go to <To a Server> at the left corner of the screen.
Click <Oracle>.
A dialog box opens.
Enter the following credentials: Server, Service, Port, Username and Password.
Click Sign In.
After the successful connection with the server:
Under the Schema tab, choose the database to be connected.
Under the Table tab, the related tables of the schema will appear.
Drag a table to the workspace for analysis.


2.2 Microsoft SQL Server Connectivity with Tableau


Open Tableau.
The screen will appear as shown in the screenshot.
Under the <To a Server> tab, choose Microsoft SQL Server.
Enter the following credentials: Server, Username, Password.
Click Sign In.
After the successful connection with the server:
Under the Database tab, choose the database to be connected.
Under the Table tab, the related tables of the database will appear.
Drag a table to the workspace for analysis.


2.3 MySQL Connectivity with Tableau


Open Tableau.
The screen will appear as shown in the screenshot.
Under the <To a Server> tab, choose MySQL.
Enter the following credentials: Server, Username, Password.
Click Sign In.
After the successful connection with the server:
Under the Database tab, choose the database to be connected.
Under the Table tab, the related tables of the database will appear.
Drag a table to the workspace for analysis.


2.4 IBM DB2 Connectivity with Tableau


Open Tableau.
The screen will appear as shown in the screenshot.
Under the <To a Server> tab, click on More and choose IBM DB2.
Enter the following credentials: Server, Database, Username, Password.
Click Sign In.
After the successful connection with the server, choose the database to be connected from the Database tab.
Under the Schema tab, the related tables of the database will appear.
Drag a table to the workspace for analysis.


2.5 PostgreSQL Connectivity with Tableau


Open Tableau.
The screen will appear as shown in the screenshot.
Under the <To a Server> tab, click on More and choose PostgreSQL.
Enter the following credentials: Server, Database, Username and Password.
Click Sign In.
After the successful connection with the server, choose the database to be connected from the Database tab.
Under the Tables tab, the related tables of the database will appear.


2.6 MS-Excel Connectivity with Tableau


Open Tableau.
Under the <To a File> tab, click on Microsoft Excel.
Choose the Excel file to be analysed in Tableau from the local system or from a storage device (pen drive, external hard disk drive).
Drag the table to the workspace for analysis.


2.7 MS-Access Connectivity with Tableau


Open Tableau.
Under the <To a File> tab, click on Microsoft Access.
A screen will appear where the credentials need to be entered.
Choose the Access file to be analysed in Tableau from the local system or from a storage device (pen drive, external hard disk drive).
Enter the path of the file to be analysed.
Click Open.
Drag the file to the workspace.


2.8 JSON File Connectivity with Tableau


Open Tableau.
Under the <To a File> tab, click on JSON File.
Choose the JSON file to be analysed in Tableau from the local system or from a storage device (pen drive, external hard disk drive).
The schema 'cwf_text.json' is chosen as shown in the screenshot.
The schema 'records' is also selected.
Click OK to proceed.
Drag the file to the workspace.


2.9 Text File Connectivity with Tableau


Open Tableau.
Under the <To a File> tab, click on Text File.
Choose the text file to be analysed in Tableau from the local system or from a storage device (pen drive, external hard disk drive).
Drag the file to the workspace.


2.10 PDF file Connectivity with Tableau


Shown in the screenshot is a PDF file to be connected using Tableau.
The data should be properly aligned, and the heading should be present on all pages.
Open Tableau and choose PDF file.
Choose the PDF file to be analysed in Tableau from the local system or from a storage device (pen drive, external hard disk drive).
The following dialog box will appear; enter the required number of pages to be scanned.
Select the pages to be analysed from the left pane of the screen.
To select multiple PDF pages/tables at once, drag any page to the workspace and drag all other PDF pages/tables over it, or perform a Union operation.
Number of records = 273.

Chapter 3: CaseWare IDEA Connectivity with Tableau
IDEA has long been used in the department as a tool for CAATs. There may be scenarios where one needs to import an IDEA file into Tableau for further analysis or visualisation. The procedure is as follows.

Note: IDEA v10 and above can be connected to Tableau.

3.1 Installation of Idea Driver for connectivity with Tableau.


Right-click on the IDEA setup.
Click on Run as Administrator.
The screenshot displays the installation processing.
Click Next.
Click on "I accept the terms in the license agreement".
Click Next.
Click Install.
The screenshot displays the installation progress.
Click Finish.
Open Control Panel.
Click on Administrative Tools.
Click on ODBC Data Sources (64-bit).
A dialog box will appear.
Click on Add.
Choose the CaseWare IDEA Driver as shown in the screenshot.
Click Finish.
Enter the credentials: the Data Source Name (Idea for Tableau) and the Data Directory (the location where the IDEA file resides).
Click OK.


3.2 CaseWare IDEA Connectivity with Tableau


The screenshot displays the sample file in IDEA to be connected with Tableau.
Under the <To a Server> tab, click on More.
Click on Other Databases (ODBC).
Click on DSN.
Choose Idea for Tableau.
Click on Connect.
Under the Driver field, the CaseWare IDEA Driver is loaded.
Click on Sign In.
In Tableau, under the Connections tab, Idea for Tableau (ODBC) is displayed.
Choose the database; the related tables are displayed.

Chapter 4: KNIME Connectivity with Different Databases
KNIME, the Konstanz Information Miner, is a free and open-source data analytics, reporting and integration platform. KNIME integrates various components for machine learning and data mining through its modular data-pipelining concept. A few examples are illustrated below of connecting KNIME to a local desktop or a local database server to import data for analytics. Before starting analytics with KNIME, we should know the format of the data, and we must have the credentials/authorisation to set up the connection with the database on the server. In addition, KNIME should be updated to its newest version so that it has the necessary connectivity utilities, such as the .jar driver files used to connect to server databases. After that, select the proper node to import data for analytics.

Note: Sometimes KNIME does not connect with the database server due to incompatibility of utilities (for example, .jar drivers) across versions. In such cases, the corresponding .jar files need to be downloaded and then registered in the preferences (File -> Preferences -> KNIME -> Databases -> Add file -> path of the downloaded file).
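Once the right driver .jar is registered, the connector node is configured with a JDBC URL that follows a fixed pattern per database. The patterns below are a reference sketch; the hostnames, ports and database names are illustrative assumptions:

```text
jdbc:sqlserver://localhost:1433;databaseName=NORTHWIND
jdbc:mysql://localhost:3306/PMSSY
jdbc:oracle:thin:@localhost:1521:orcl
jdbc:db2://localhost:50000/DGFTIEC
jdbc:postgresql://localhost:5432/postgres
```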


4.1 Uploading Jar Files in KNIME


Click on File.
Go to Preferences.
Under the KNIME category, click on Databases.
Click on Add file.
A dialog box appears.
Select the jar file from the location where it was downloaded.
Click OK.
The screenshot displays the list of jar files used.


4.2 Microsoft SQL Server Connectivity with KNIME


Choose the Microsoft SQL Server Connector node.
Right-click on the node and choose Configure.
A dialog box will appear.
Enter the following credentials: Hostname, Database Name, Username, Password.
Click Apply and then OK.
The node's signal changes to yellow.
Right-click on the node and choose Execute.
The node's signal changes to green.
Right-click on the node and choose Database JDBC Connection.
A dialog box will appear confirming the successful connection with the server; the Database Driver, Database URL, User Name and Database Type details will be displayed.
After the successful connection with Microsoft SQL Server, choose the Database Reader node and connect it to the Microsoft SQL Server Connector node.
Right-click on the Database Reader node and choose Configure.
Mention the table to be worked on:
select * from <table name>
Click Apply and OK.
The signal will change from red to yellow.
Right-click on the Database Reader node and choose Execute.
Both nodes are executed successfully; a green signal indicates successful execution.
Right-click on the Database Reader and choose Data from Database.
The table will appear as shown in the screenshot.


4.3 MySQL Connectivity with KNIME


Choose the MySQL Connector node.
Right-click on the node and choose Configure.
The configuration dialog box will appear.
Enter the following credentials: Hostname, Database Name, Username and Password.
Click Apply and OK.
The node's signal changes to yellow.
Right-click on the node and choose Execute.
The node's signal changes to green.
Right-click on the node and choose Database JDBC Connection.
A dialog box will appear confirming the successful connection with the server; the Database Driver, Database URL, User Name and Database Type details are displayed.
After the successful connection, choose the Database Reader node and connect it to the MySQL Connector node.
Right-click on the Database Reader node and choose Configure.
Mention the table to be worked on:
select * from <table name>
Click Apply and OK.
Click on Execute.
Both nodes are executed successfully; a green signal indicates successful execution.
Right-click on the Database Reader and choose Data from Database.
The table will appear as shown in the screenshot.


4.4 Oracle Connectivity with KNIME


Choose Database Connector Node for
Oracle Connection

Right Click on Node, choose Configure.

A dialog box will appear; choose the
appropriate driver from the list of
database drivers.
Enter the following credentials:
Host, Username, Password.
Click Apply and OK.

After entering Host, Username,


Password, Press Execute


Choose Database JDBC Connection to
view the connection to the server.

A dialog box appears, as shown in the
screenshot, after a successful connection
with the server.

Choose the Database Table Selector Node.

Right click on the node, choose Configure.
Mention the table to be worked on:
SELECT * FROM <table name>
Click Apply and OK.

Right click on the Database Table


Selector Node. Click Execute.

Table will appear as shown in the
screenshot.
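The driver chosen for Oracle typically produces a "thin" JDBC URL. The sketch below shows one common (SID-based) form; a service-name form also exists. All values are hypothetical placeholders:

```python
# Illustrative sketch: a common Oracle "thin" JDBC URL pattern (SID form).
# Host, port and SID are hypothetical placeholders.
def oracle_jdbc_url(host: str, sid: str, port: int = 1521) -> str:
    # Common form: jdbc:oracle:thin:@<host>:<port>:<SID>
    return f"jdbc:oracle:thin:@{host}:{port}:{sid}"

print(oracle_jdbc_url("db-server.example.local", "ORCL"))
# jdbc:oracle:thin:@db-server.example.local:1521:ORCL
```

Port 1521 is the Oracle listener default.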


4.5 IBM DB2 Connectivity with KNIME


Choose Database Connector Node

Right Click on Node, choose Configure.

A dialog box will appear; choose the
appropriate driver from the list of
database drivers.
Enter the following credentials:
Host, Username, Password.
Click Apply and OK.

Right Click on the Database Connector,


Choose Database JDBC connection.

A dialog box appears, as shown in the
screenshot, confirming a successful
connection with the server.

Connect Database Reader.


Right click on the node.
Click Configure.


Mention the name of the table whose data is
to be used.
Click Apply and OK.

Right click on the node and Click Execute.

Right click on the node.


Click on Data from database.

The screenshot displays the data from the
table mentioned.
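For DB2, the connection settings resolve to a JDBC URL of the standard form below. The host, port and database name are hypothetical placeholders:

```python
# Illustrative sketch: the standard IBM DB2 JDBC URL pattern.
# Host, port and database name are hypothetical placeholders.
def db2_jdbc_url(host: str, database: str, port: int = 50000) -> str:
    # Standard form: jdbc:db2://<host>:<port>/<database>
    return f"jdbc:db2://{host}:{port}/{database}"

print(db2_jdbc_url("db-server.example.local", "AUDITDB"))
# jdbc:db2://db-server.example.local:50000/AUDITDB
```

Port 50000 is the common DB2 default.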


4.6 Postgres Connectivity with KNIME


Right click on the Database Connector Node
and click Execute.

Right click on the Database Connector Node
and choose Database JDBC Connection.

A dialog box will appear confirming a
successful connection with the server.
The Database Driver, Database URL, Username
and Database Type are displayed.

Choose the Database Reader Node.

Right click on the node, choose Configure.


Mention the table to be worked on:

SELECT * FROM <table name>

Click Apply and OK.
Right click on the Database Reader Node and
click Execute.

Right click on the Database Reader Node and
choose Data from Database.

Table will appear as shown in the


screenshot.
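The PostgreSQL connection likewise follows a standard JDBC URL pattern. A minimal sketch with hypothetical placeholder values:

```python
# Illustrative sketch: the standard PostgreSQL JDBC URL pattern.
# Host, port and database name are hypothetical placeholders.
def postgres_jdbc_url(host: str, database: str, port: int = 5432) -> str:
    # Standard form: jdbc:postgresql://<host>:<port>/<database>
    return f"jdbc:postgresql://{host}:{port}/{database}"

print(postgres_jdbc_url("db-server.example.local", "audit_db"))
# jdbc:postgresql://db-server.example.local:5432/audit_db
```

Port 5432 is the PostgreSQL default.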


4.7 MS Excel Connectivity with KNIME


Choose Excel Reader Node.

Right Click on the Node, Click on


Configure.

A dialog box will appear.

In the input location, click Browse and
go to the location where the Excel file is
stored.

This is the location where the Excel file is
stored on the local system.


This is the preview of the file loaded in


KNIME.
Click Apply and OK.

Node is configured and ready for


execution.

Right Click on the Excel Reader Node,


click Execute.

Excel Reader node is executed.

Right Click on the Executed Node and


Choose File Table.

Table will appear as shown in the
screenshot.


4.8 Text File Connectivity with KNIME


Choose CSV Reader Node.
Right Click on the Node and Click on
Configure.

This is the location where the CSV file is
stored on the local system.

In the input location, click Browse.

Go to the location where the CSV file is
stored and click Open.

The input location now shows the complete
path where the file is stored.
Click Apply and OK.


Right Click on the CSV Reader Node,


click Execute.

Right Click on the Executed Node and


Choose File Table.

Table will appear as shown in the
screenshot.
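The steps above can be sanity-checked outside KNIME with Python's standard csv module. The sketch below is illustrative; the sample data is hypothetical, not taken from this guide:

```python
# Illustrative sketch: what the CSV Reader node does, expressed with
# Python's standard csv module. The sample data is hypothetical.
import csv
import io

sample = "id,name\n1,Alice\n2,Bob\n"   # stands in for the CSV file on disk
rows = list(csv.DictReader(io.StringIO(sample)))
print(rows)
# [{'id': '1', 'name': 'Alice'}, {'id': '2', 'name': 'Bob'}]
```

In practice the file would be opened with `open(path, newline="")` instead of the in-memory sample.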


4.9 JSON Connectivity with KNIME


The JSON Reader Node is used to read
JSON files.

In the KNIME Explorer, drag the JSON
Reader node to the workspace.

Right Click on the JSON Reader node.


Click Configure.

Insert the API link in the Input Location, as
displayed in the screenshot.
Click Apply and OK.

Right Click on the JSON Reader Node.


Click Execute.


Right Click on Node.


Click on JSON Table.

This is the output of the JSON Table.

Connect JSON Path Node to JSON


Reader Node.

Right Click on JSON Path. Click


Execute.

Right Click on JSON Path. Click Table.


This is the output of the JSON Path.

Connect JSON to Table Node to JSON


Path Node

Right Click on JSON to Table Node.


Click Execute.

Right Click on JSON to Table Node.


Click on Extracted Values.

Extra columns can be removed using the
Column Filter Node.
Click Apply and OK.


Configure Split Collection Column.

Right Click on Split Collection Column.


Click Execute.

Right Click on Split Collection Column.


Click Input data with newly appended
columns.

Output of Split Collection Column is


displayed.


Configure and Execute Transpose Node.


Right Click on Transpose Node.
Click on Transpose Table.

Output of Transpose Table is displayed.
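The JSON Reader, JSON Path and JSON to Table chain described above can be sketched outside KNIME with Python's standard json module. The document structure and the "records" key below are hypothetical assumptions:

```python
# Illustrative sketch: parse a JSON document, select records under a path,
# then flatten them into rows. The document and "records" key are hypothetical.
import json

doc = json.loads('{"records": [{"id": 1, "name": "Alice"}, {"id": 2, "name": "Bob"}]}')
records = doc["records"]                         # roughly the JSON Path step
table = [(r["id"], r["name"]) for r in records]  # roughly the JSON to Table step
print(table)
# [(1, 'Alice'), (2, 'Bob')]
```

The KNIME nodes perform the same selection and flattening through their GUI dialogs rather than code.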


4.10 MS-Access Connectivity with KNIME


Choose Database Reader Node for
connecting with MS Access Database.

Right Click on Node, choose Configure.

Choose the appropriate driver from the list
of database drivers.
Enter the following credentials:
Host, Username, Password.
Click Apply and OK.

Right Click on the node and Execute.

Database Reader Node is executed.


Right click on the Database Reader Node.


Click Data from Database.

Table will appear as shown in the


screenshot.
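MS Access files are commonly reached over JDBC through the UCanAccess driver, whose URL simply points at the .mdb/.accdb file. The driver choice and the file path below are assumptions for illustration, not details from this guide:

```python
# Illustrative sketch: the UCanAccess JDBC URL pattern commonly used for
# MS Access files. The driver choice and file path are assumptions.
def access_jdbc_url(db_path: str) -> str:
    # Common form: jdbc:ucanaccess://<path-to-.accdb-or-.mdb-file>
    return f"jdbc:ucanaccess://{db_path}"

print(access_jdbc_url("C:/data/audit.accdb"))
# jdbc:ucanaccess://C:/data/audit.accdb
```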
