A Portfolio Optimization Algorithm Using Fuzzy Granularity Based Clustering
S. M. Aqil Burney
Institute of Business Management
Korangi Creek, Karachi, Karachi City, Sindh 75190, Pakistan
Phone: +92 21 111 002 004
[email protected]
Tahseen Jilani
University of Nottingham
Nottingham NG7 2RD, UK
Phone: +44 115 951 5151
University of Karachi
Main University Rd, Karachi, Karachi City, Sindh 75270, Pakistan
Phone: +92 21 99261300
[email protected]
Humera Tariq
University of Karachi
Main University Rd, Karachi, Karachi City, Sindh 75270, Pakistan
Phone: +92 21 99261300
[email protected]
Zeeshan Asim
Virtual University, Pakistan
M. A. Jinnah Campus, Defence Road, Off Raiwand Rd, Lda Avenue Phase 1 Lda Avenue, Lahore, Punjab,
Pakistan
Phone: +92 42 111 880 880
[email protected]
Usman Amjad
University of Karachi
Main University Rd, Karachi, Karachi City, Sindh 75270, Pakistan
Phone: +92 21 99261300
[email protected]
Abstract
Clustering algorithms are applied to numerous problems across multiple domains, including historical data analysis, financial market analysis for portfolio optimization, and image processing. Recent years have witnessed a surge in the use of nature-inspired computing (NIC) techniques for data clustering to solve various real-world optimization problems. Granular Computing (GC) is an emerging technique for handling pieces of information known as information granules. In this paper, an ensemble of fuzzy clustering using Particle Swarm Optimization and Granular Computing is proposed for stock market portfolio optimization. The model is tested on stocks listed on the Hong Kong Stock Exchange. Experimental results suggest that clusters formed through Fuzzy Particle Swarm Optimization (FPSO) with Granular Computing are well suited and efficient for portfolio optimization. For comparison, we have used a benchmark index of the Hong Kong Stock Exchange, the Hang Seng Composite Index (HSCI). The results show that the proposed approach outperforms the HSCI benchmark.
Keywords: Hybrid Approach for Portfolio Selection; Fuzzy C-Means Clustering (FCM); Fuzzy
Particle Swarm Optimisation (FPSO); Granular Computing; Hang Seng Composite Index.
BRAIN – Broad Research in Artificial Intelligence and Neuroscience
Volume 10, Issue 2 (April, 2019), ISSN 2067-3957
1. Introduction
Data clustering is a mathematical method designed to identify relevant data within a
collection of data (Nerurkar et al., 2018). It can be described as a methodology for assigning data
into groups such that the data points in the same group or cluster are analogous to each other and
unrelated to the objects of other clusters or groups (Hammouda and Karray, 2000). It is used
effectively in various domains for identifying the natural groups present in large datasets. Data
clustering can be used by businesses to identify potential customers of a product by collecting and
analyzing customers' buying patterns, so as to design marketing strategies based on those behaviors
(Ravi, Pradeepkumar and Deb, 2017). Finding clusters in a large dataset is a challenging task and
usually requires a data mining tool. Clustering tools usually assign data elements to clusters based
on their similarities to the group.
Clustering has remained an area of interest for researchers over the last few decades, and
various clustering techniques have been developed. Clustering techniques can generally be divided
into two types (Suganya and Shanthi, 2011). Based on classical set theory, hard clustering
algorithms assign each data item to exactly one group at a time. A widely used hard, or crisp,
clustering algorithm is k-means. For real datasets with no definite boundaries, however, this
technique is not useful (Izakian and Abraham, 2009).
Soon after the introduction of fuzzy theory, researchers applied fuzzy set theory to
clustering algorithms (Izakian and Abraham, 2009). Real-world data has no sharp boundaries, so
fuzzy clustering algorithms have proven fruitful in such applications. Fuzzy clustering handles
real-world uncertainties efficiently by assigning a membership degree to each item; the membership
degree in a cluster depends on the proximity of the item's values to the cluster center. The most
widely used fuzzy clustering algorithm is Fuzzy C-Means (FCM), popularized by Bezdek, and it is
applied at large (Bezdek, 1984).
Swarm intelligence (SI) is an area of computational intelligence comprising algorithms
inspired by population-based natural phenomena that operate on the basis of decentralized control
and self-organization (Shandilya et al., 2017). SI can be described as the "collective behavior of
decentralized and self-organized systems" (Zhang et al., 2013). Granular Computing (GC), on the
other hand, is a computational theory for efficiently using granules such as clusters, groups and
subsets to build computational models for complicated applications that contain huge amounts of
data and information. A granule can be described as one of many small data points or particles that
combine to form a larger unit.
In this paper we use Fuzzy Particle Swarm Optimization (FPSO) together with the concept of
granular computing, dividing the information granules into different clusters to build a portfolio
that optimizes the investor's weekly returns. The experimental results using Hong Kong Stock
Exchange data indicate that the proposed method provides better returns than the benchmark index for
the Hong Kong Stock Exchange.
2. Literature Review
2.1. Fuzzy Data Clustering
Fuzzy logic is based on degrees of membership, so concepts involving imprecision are
handled better with fuzzy logic. Fuzzy logic can be used in data clustering to deal with the partial
membership of data points: fuzzy clustering algorithms assign each data object partly to more than
one cluster. FCM, proposed by Bezdek (Bezdek, 1984), divides a collection of n data objects
o = {o_1, o_2, ..., o_n} in R-dimensional space into c fuzzy clusters, where 1 < c < n, with
centroids or cluster centers Z = {z_1, z_2, ..., z_c}. A fuzzy clustering can be represented by a
fuzzy matrix \mu of dimensions n × c, where n is the number of data objects and c is the number of
clusters. The element \mu_{ij} in the ith row and jth column gives the degree of membership of
object i in cluster j. The degree of membership \mu_{ij} has the following properties:
\mu_{ij} \in [0, 1], \quad 1 \le i \le n, \; 1 \le j \le c \qquad (1)
S. M. A. Burney, T. Jilani, H. Tariq, Z. Asim, U. Amjad, S. S. Mohammad - A Portfolio Optimization Algorithm Using Fuzzy
Granularity Based Clustering
\sum_{j=1}^{c} \mu_{ij} = 1, \quad 1 \le i \le n \qquad (2)

0 < \sum_{i=1}^{n} \mu_{ij} < n, \quad 1 \le j \le c \qquad (3)
Fuzzy C-Means minimizes the following objective function:

J_m = \sum_{j=1}^{c} \sum_{i=1}^{n} \mu_{ij}^{m} \, d_{ij}^{2} \qquad (4)

where

d_{ij} = \| o_i - z_j \| \qquad (5)

Here m (m > 1) is a scalar constant called the "weighting exponent", which controls the fuzziness
of the clusters, and d_{ij} is the Euclidean distance between object o_i and the cluster center z_j.
The cluster center z_j of the jth cluster is obtained using equation (6), and the membership
degrees are updated using equation (7):

z_j = \frac{\sum_{i=1}^{n} \mu_{ij}^{m} o_i}{\sum_{i=1}^{n} \mu_{ij}^{m}} \qquad (6)

\mu_{ij} = \frac{1}{\sum_{k=1}^{c} \left( d_{ij} / d_{ik} \right)^{2/(m-1)}} \qquad (7)
Several conditions can be used to stop this iteration. One is to stop when the change in the
cluster center values becomes negligible, or when the objective function in equation (4) can no
longer be decreased. One problem with the FCM algorithm is that it is highly sensitive to its
initial values and is likely to become trapped in local optima.
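The alternating update loop described above (eqs. 4-7 with the negligible-change stopping rule) can be sketched in Python. This is an illustrative implementation, not the authors' code; the function name and default parameters are our own choices.

```python
import numpy as np

def fcm(data, c, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Fuzzy C-Means: alternate between center (eq. 6) and membership (eq. 7) updates."""
    rng = np.random.default_rng(seed)
    n = data.shape[0]
    # Random initial membership matrix with rows summing to 1 (eq. 2)
    mu = rng.random((n, c))
    mu /= mu.sum(axis=1, keepdims=True)
    for _ in range(max_iter):
        mu_m = mu ** m
        # Cluster centers (eq. 6): membership-weighted means of the data
        centers = (mu_m.T @ data) / mu_m.sum(axis=0)[:, None]
        # Euclidean distances d_ij between every object and every center (eq. 5)
        d = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-10)  # guard against division by zero
        # Membership update (eq. 7); element [i, j, k] below is d_ij / d_ik
        new_mu = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1))).sum(axis=2)
        if np.abs(new_mu - mu).max() < tol:  # stopping criterion from the text
            mu = new_mu
            break
        mu = new_mu
    return centers, mu
```

On well-separated data the memberships converge close to 0/1, while overlapping points keep genuinely fuzzy memberships.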
2.2. Fuzzy Particle Swarm Optimization for Clustering
In fuzzy PSO based clustering, the position of each individual (particle) is represented by an
n × c membership matrix:

X = \begin{bmatrix} \mu_{11} & \cdots & \mu_{1c} \\ \vdots & \ddots & \vdots \\ \mu_{n1} & \cdots & \mu_{nc} \end{bmatrix} \qquad (8)
In the matrix X above, \mu_{ij} denotes the membership value of object i in cluster j, subject
to the constraints specified in equations (9) and (10):

\mu_{ij} \in [0, 1], \quad 1 \le i \le n, \; 1 \le j \le c \qquad (9)

\sum_{j=1}^{c} \mu_{ij} = 1, \quad 1 \le i \le n \qquad (10)
The position matrix of each individual therefore has the same form as the fuzzy matrix \mu of
the Fuzzy C-Means algorithm. The velocity of each individual is specified by an n × c matrix whose
elements lie in the range [-1, 1]. Equations (11) and (12) are used to update the velocities and
positions of every particle through matrix operations:

V(t+1) = w \otimes V(t) \oplus (c_1 r_1) \otimes (pbest(t) \ominus X(t)) \oplus (c_2 r_2) \otimes (gbest(t) \ominus X(t)) \qquad (11)

X(t+1) = X(t) \oplus V(t+1) \qquad (12)
Here \oplus denotes the matrix addition and \otimes represents the matrix multiplication. It is
important to note that the constraints stated in eqs. (9) and (10) may be violated after the position
matrix is updated, so normalizing the position matrix is necessary. For normalization, all negative
values in the matrix are first set to zero; if all matrix elements turn out to be zero, the matrix is
re-initialized with random numbers in the range [0, 1]. The matrix is then transformed so that the
constraints hold:
X_{normal} = \begin{bmatrix} \mu_{11} / \sum_{j=1}^{c} \mu_{1j} & \cdots & \mu_{1c} / \sum_{j=1}^{c} \mu_{1j} \\ \vdots & \ddots & \vdots \\ \mu_{n1} / \sum_{j=1}^{c} \mu_{nj} & \cdots & \mu_{nc} / \sum_{j=1}^{c} \mu_{nj} \end{bmatrix} \qquad (13)
Like other nature-inspired algorithms, the fuzzy PSO algorithm uses a fitness function to
assess candidate solutions. The following equation is used to evaluate solutions:

f(X) = \frac{K}{J_m} \qquad (14)
In equation (14), K is a constant while J_m is the objective function of the Fuzzy C-Means
algorithm given in eq. (15):

J_m = \sum_{j=1}^{c} \sum_{i=1}^{n} \mu_{ij}^{m} \, d_{ij}^{2} \qquad (15)
The smaller the value of J_m, the better the clustering result and the higher the fitness value
f(X). The fuzzy PSO algorithm for fuzzy clustering is described as follows:
1. Instantiate the following parameters: the population size P, w, c_1, c_2 and the maximum
number of iterations.
2. Initialize a swarm of P individuals, where X, gbest, pbest and V are matrices of size n × c.
3. Instantiate the X, V and pbest values for every individual and gbest for the whole population.
4. Determine the cluster centers for every individual using eq. (16):

z_j = \frac{\sum_{i=1}^{n} \mu_{ij}^{m} o_i}{\sum_{i=1}^{n} \mu_{ij}^{m}} \qquad (16)
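The extracted text breaks off after step 4. Under the assumption that the remaining steps follow the standard fuzzy PSO scheme (evaluate fitness with eqs. 14-15, update velocities and positions with eqs. 11-12, repair the constraints with eq. 13, update pbest and gbest, and repeat), the whole procedure can be sketched as below. The function name and default parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def fpso_clustering(data, c, n_particles=10, m=2.0, w=0.7, c1=1.5, c2=1.5,
                    iters=50, K=1.0, seed=0):
    """Fuzzy PSO clustering sketch: each particle is an n-by-c membership matrix X."""
    rng = np.random.default_rng(seed)
    n = data.shape[0]

    def normalize(X):
        # Eq. 13: clip negatives to zero, re-seed all-zero rows, make rows sum to 1
        X = np.maximum(X, 0.0)
        dead = X.sum(axis=1) == 0
        if dead.any():
            X[dead] = rng.random((int(dead.sum()), c))
        return X / X.sum(axis=1, keepdims=True)

    def fitness(X):
        # Eqs. 14-15: f(X) = K / Jm, so a smaller Jm means a fitter particle
        Xm = X ** m
        centers = (Xm.T @ data) / (Xm.sum(axis=0)[:, None] + 1e-12)  # eq. 16
        d = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
        return K / max((Xm * d ** 2).sum(), 1e-12)

    swarm = [normalize(rng.random((n, c))) for _ in range(n_particles)]
    vel = [rng.uniform(-1.0, 1.0, (n, c)) for _ in range(n_particles)]
    pbest = [X.copy() for X in swarm]
    pfit = [fitness(X) for X in swarm]
    g = int(np.argmax(pfit))
    gbest, gfit = pbest[g].copy(), pfit[g]

    for _ in range(iters):
        for i, X in enumerate(swarm):
            r1, r2 = rng.random((n, c)), rng.random((n, c))
            # Eq. 11: inertia term plus cognitive and social pulls
            vel[i] = w * vel[i] + c1 * r1 * (pbest[i] - X) + c2 * r2 * (gbest - X)
            # Eq. 12, followed by the eq. 13 repair of constraints (9)-(10)
            swarm[i] = normalize(X + vel[i])
            f = fitness(swarm[i])
            if f > pfit[i]:
                pbest[i], pfit[i] = swarm[i].copy(), f
                if f > gfit:
                    gbest, gfit = swarm[i].copy(), f
    return gbest, gfit
```

The returned gbest is the best membership matrix found; cluster centers can be recovered from it with eq. (16).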
3. Methodology Used
The literature review reveals that a great deal of work has been done on data clustering and
on portfolio management, but little on clustering stock data for portfolio optimization using fuzzy
PSO. Using data clustering on stock data helps segment stocks so that all stocks with similar
characteristics are grouped together (Cheng, Chen and Jian, 2015). A method for creating efficient
portfolios with the Markowitz model by using clustering to select stocks, called clustering-based
selection, was designed by Nanda, Mahanty and Tiwari (2010). They used the Fuzzy C-Means data
clustering algorithm to classify stocks into clusters; after classification, some stocks were
selected from the clusters to build an optimized portfolio that minimizes risk through
diversification. According to them, the efficient frontier problem can be solved more efficiently by
clustering the stocks. The fuzzy c-means (FCM) algorithm is considered one of the most popular and
widely used fuzzy clustering techniques because of its efficiency, straightforwardness, and
convenience of implementation, but it is very sensitive to initialization and can easily be trapped
in local optima. Particle swarm optimization (PSO), on the other hand, is a stochastic global
optimization tool used for a variety of optimization problems. Li, Liu and Xu (2007) proposed a
fuzzy PSO based data clustering algorithm that uses the global search power of PSO to overcome
these shortcomings of Fuzzy C-Means.
Our methodology focuses on how stocks can be divided into granules, how the fuzzy PSO based
data clustering algorithm can be applied to these granules to further divide the data into small
clusters, and how to design a diversified portfolio using stocks from different clusters to maximize
portfolio returns. The fuzzy PSO algorithm is applied to create clusters within each granule. The
dataset used for the experiment contains financial ratios of companies listed on the Hong Kong
Stock Exchange. This dataset is divided into six different sub-groups known as
information granules. Information granules are collections of entities that are arranged together
due to their similarity, functional or physical adjacency, coherence, etc. A granulation criterion
deals with the question of why two objects are put into the same granule. We divided the dataset
into six partitions, or granules, based on market capitalization value.
This is followed by calculating the optimal number of clusters in each group. The fuzzy PSO
data clustering algorithm is then applied to each granule to divide it into the optimal number of
clusters calculated in the previous step. After that, 1 to 3 stocks are selected from every cluster
according to the important fields indicated by the Principal Component Analysis. The average weekly
return of each selected stock is then calculated on the basis of its market value during January
2012 to June 2012, and the stocks with positive average weekly returns are selected for portfolio
creation.
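The return-based screening step can be sketched as follows. The helper names and ticker labels are hypothetical; the input is assumed to be weekly closing prices per stock.

```python
import numpy as np

def avg_weekly_returns(prices):
    """prices: dict of ticker -> sequence of weekly closing prices.
    Returns dict of ticker -> average simple weekly return."""
    out = {}
    for ticker, p in prices.items():
        p = np.asarray(p, dtype=float)
        r = p[1:] / p[:-1] - 1.0  # simple week-over-week returns
        out[ticker] = r.mean()
    return out

def select_positive(prices):
    """Keep only the stocks whose average weekly return is positive."""
    return {t: r for t, r in avg_weekly_returns(prices).items() if r > 0}
```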
Finally, the variance-covariance matrix for the selected stocks is calculated. MATLAB is used
to construct efficient portfolios along the efficient frontier; for this purpose, we have used the
MATLAB Financial Toolbox command frontcon, which returns optimized portfolios for the provided
input parameters. We have taken 3 portfolios from the given set of portfolios and calculated the
actual weekly portfolio returns for each on the basis of market value during July 2012 to December
2012. We then calculated the Hang Seng Composite Index weekly performance from July 2012 to
December 2012 from the Bloomberg website. The Hang Seng Composite Index is a benchmark index for
the Hong Kong Stock Exchange; it is a comprehensive benchmark covering about 95% of the total
market capitalization of companies listed on the main board of the stock exchange of Hong Kong
("SEHK"), and it is used as a basis for performance benchmarks.
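frontcon is a MATLAB Financial Toolbox routine (since superseded by the Portfolio object). As an illustration of the same mean-variance machinery, the closed-form fully-invested minimum-variance portfolio can be computed from the variance-covariance matrix in a few lines of Python. This is a sketch, not the paper's procedure: unlike frontcon, it imposes no non-negativity constraint on the weights.

```python
import numpy as np

def min_variance_weights(returns):
    """returns: T-by-N matrix of weekly returns (rows = weeks, columns = stocks).
    Closed-form fully-invested minimum-variance weights: w = S^-1 1 / (1' S^-1 1),
    where S is the variance-covariance matrix (short sales allowed)."""
    cov = np.cov(returns, rowvar=False)      # variance-covariance matrix
    ones = np.ones(cov.shape[0])
    x = np.linalg.solve(cov, ones)           # S^-1 1 without forming the inverse
    w = x / (ones @ x)                       # normalize so weights sum to 1
    port_ret = returns.mean(axis=0) @ w      # expected weekly portfolio return
    port_risk = np.sqrt(w @ cov @ w)         # portfolio standard deviation
    return w, port_ret, port_risk
```

By construction the resulting risk can never exceed that of the least-risky single stock, which is the sense in which diversification lowers risk in the portfolios reported later.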
In the next step we compared these portfolio results against the HSCI; the comparison showed
that the portfolio returns are better than those of the HSCI. A flow chart of the proposed model is
shown in Figure 1.
4. Data Description
A dataset of Hong Kong stock market companies' data for the financial year 2011 was taken
from the New York University dataset page. In the data preprocessing step, instances with missing
values were removed, leaving data for 774 companies. The dataset contains companies from 77
different industry groups and represents almost all industry groups of the Hong Kong Stock
Exchange. There are 42 fields for each company, which include many different types of financial
ratios representing the company's financial position at the end of 2011. The financial ratios
include Market Capital (in US$), Total Debt (in US$), Firm Value (in US$), Cash, Enterprise Value
(in US$), Cash Firm Value, Liquidity Ratio, Book Debt to Capital Ratio, Market Debt to Capital
Ratio, Book Debt to Equity Ratio, Market Debt to Equity Ratio, Beta, Correlation with Market, PBV,
PS, Return on Equity, and Return on Capital.
A variety of data values appear in this dataset. Some fields contain very large values, such
as Market Value, Enterprise Value and Market Capitalization, while others are very small, such as
Beta and the Debt to Equity Ratio. The data is therefore pre-processed first: it was transformed to
z-scores so that the values have similar variability.
Another problem is that the 42 fields in the dataset are difficult to handle during data
clustering, so we used Principal Component Analysis (PCA). PCA uses a mathematical procedure to
transform a number of correlated variables into a smaller number of uncorrelated variables called
principal components; this process is also known as dimension reduction. The transformation is done
in such a way that the first principal component has the largest possible variance, and each
succeeding component in turn has the highest variance possible under the constraint that it is
uncorrelated with the preceding components. Before performing PCA, the data must be standardized to
remove the influence of different measurement scales and to give approximately equal weight to all
values. We have used the SPSS tool for this purpose.
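The standardize-then-reduce step (done in SPSS in the paper) can be reproduced with a plain numpy eigen-decomposition of the correlation matrix. The function below is an illustrative sketch, keeping just enough components to reach a target share of explained variance (94.7% in the paper).

```python
import numpy as np

def reduce_ratios(X, var_target=0.947):
    """Standardize the columns of X to z-scores, then keep enough principal
    components (eigenvectors of the correlation matrix, sorted by eigenvalue)
    to cover the target share of total variance."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)        # z-scores, as in the text
    vals, vecs = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
    order = np.argsort(vals)[::-1]                  # eigenvalues, descending
    vals, vecs = vals[order], vecs[:, order]
    ratio = vals / vals.sum()                       # explained-variance ratios
    k = int(np.searchsorted(np.cumsum(ratio), var_target) + 1)
    return Z @ vecs[:, :k], ratio[:k]               # component scores, ratios
```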
After performing PCA on our data, 12 principal components cover 94.7% of the variation of the
full 42-variable dataset. The identified fields, sorted by their eigenvalues, are:
1. Firm Value (in US$),
2. Book Debt to Capital Ratio,
3. Price to Sale Ratio (PS),
4. Free Cash Flow to Firm (FCFF),
5. Beta (A measure of the volatility of a portfolio in comparison to the market as a whole),
6. Liquidity Ratio,
7. Correlation with Market,
8. Return on Capital,
9. Net Profit Margin,
10. Net Debt Issued,
11. Cash Firm Value,
12. EV Invested Capital.
The first four fields account for around 62% of the dataset variation according to the PCA, so
we use these four fields for selecting stocks from the different clusters.
5. Experimental Results
5.1. Granules Formation
The dataset was divided into six different sub-groups, also known as information granules.
Information granules are groups of objects that are organized together based on their similarity,
coherence or physical adjacency. A granulation criterion describes the rules for dividing data
objects into different granules. The categorization of companies into partitions is made based on
market capitalization. The companies were divided into groups named Mega, Large, Mid, Small, Micro
and Nano. There is no formal definition of the exact cutoff values; therefore, the following market
capitalization values are used as the granulation criterion:
1. Mega Companies: over $10,000 Million
2. Large Companies: $5,000 to $10,000 Million
3. Mid Companies: $1,000 to $5,000 Million
4. Small Companies: $250 to $1,000 Million
5. Micro Companies: $50 to $250 Million
6. Nano Companies: below $50 Million
After granule formation, the number of companies in each granule is shown in Table 1. The
optimal number of clusters estimated for each granule is described in Table 2.
The number of companies selected for hybrid optimal portfolio from each cluster of every granule
is given in Table 4:
Table 4. Number of companies selected from each cluster

Granule   Cluster 1   Cluster 2   Cluster 3   Cluster 4   Cluster 5   Cluster 6
Nano          1           1           0           0           1           1
Micro         1           1           1           1           2          --
Small         3           2          --          --          --          --
Medium        1           4           3          --          --          --
Large         3           2          --          --          --          --
Mega          3           2          --          --          --          --
We used this function to generate 20 portfolios. Portfolio 1 gives a weekly return of 1.1127%
at a risk of 1.3694% and comprises 8 companies; it has the lowest risk, which is lowered by
diversifying the investment across the 8 companies. Portfolio 20 comprises only one company and
gives the highest return, but at the highest risk, 9.0925%. The efficient frontier for our values is
shown below, and the associated return and risk of each portfolio are described in Table 6.
A portfolio that offers the maximum expected return for a given level of risk, or conversely
the lowest level of risk for a given expected return, is known as an optimal portfolio, and the set
of all such efficient portfolios is called the efficient frontier. Our efficient frontier is shown
in the graph presented in Figure 2.
5.7. Portfolio Performance
To assess the efficacy of our portfolios, we measured the actual weekly performance of these
stocks from July 2012 to December 2012 and compared it with a standard index of the Hong Kong Stock
Exchange for the same period. For this we used the Hang Seng Composite Index (HSCI), a benchmark
index for the Hong Kong Stock Exchange. The HSCI offers a comprehensive Hong Kong market benchmark
that covers about 95% of the total market capitalization of companies listed on the main board of
the stock exchange of Hong Kong (SEHK). It uses a free-float-adjusted market capitalization
methodology and can be used as a basis for performance benchmarks. To compare portfolio
performance, the weekly performance of the HSCI is calculated and compared with that of the three
portfolios with the least risk for the invested capital. The details of the 3 portfolios formed are
given in Tables 7, 8 and 9.
The actual weekly performance of our portfolios and the benchmark index for July 2012 to
December 2012 is given in Table 10.
The performance of our three portfolios and the HSCI benchmark index from July 2012 to
December 2012 is shown above, and the weekly performance of our portfolios and the benchmark index
is plotted in Figure 3.
6. Conclusion
In this paper we have presented a granule-based FPSO data clustering approach for the selection
of stocks, portfolio management and the design of portfolios on the efficient frontier.
References
Bezdek, J. C., Ehrlich, R., & Full, W. (1984). FCM: The fuzzy c-means clustering algorithm.
Computers & Geosciences, 10(2-3), 191-203. doi:10.1016/0098-3004(84)90020-7.
Cheng, S., Chen, S., & Jian, W. (2015). A Novel Fuzzy Time Series Forecasting Method Based on
Fuzzy Logical Relationships and Similarity Measures. IEEE International Conference on
Systems, Man, and Cybernetics, doi:10.1109/smc.2015.393.
Hammouda, K., & Karray, F. (2000). A Comparative Study of Data Clustering Algorithms.
Retrieved from http://www.pami.uwaterloo.ca/pub/hammouda/sde625-paper.pdf.
Izakian, H., Abraham, A., & Snasel, V. (2009). Fuzzy clustering using hybrid fuzzy c-means and
fuzzy particle swarm optimization. World Congress on Nature & Biologically Inspired
Computing (NaBIC). doi:10.1109/nabic.2009.5393618.
Li, L., Liu, X., & Xu, M. (2007). A Novel Fuzzy Clustering Based on Particle Swarm Optimization.
IEEE International Symposium on Information Technologies and Applications in Education.
doi:10.1109/isitae.2007.4409243.
Li, X. Y., Sun, J. X., Gao, G. H., & Fu, J. H. (2011). Research of Hierarchical Clustering Based on
Dynamic Granular Computing. Journal of Computers, 6(12), 2526-2533.
Maciel, L., Gomide, F., & Ballini, R. (2013). Forecasting Exchange Rates with Fuzzy Granular
Evolving Modeling for Trading Strategies. Proceedings of the 8th conference of the
European Society for Fuzzy Logic and Technology. doi:10.2991/eusflat.2013.40
Nanda, S., Mahanty, B., & Tiwari, M. (2010). Clustering Indian stock market data for portfolio
management. Expert Systems with Applications, 37(12), 8793-8798.
doi:10.1016/j.eswa.2010.06.026.
Nerurkar, P., Shirke, A., Chandane, M., & Bhirud, S. (2018). Empirical Analysis of Data Clustering
Algorithms. Procedia Computer Science,125, 770-779. doi:10.1016/j.procs.2017.12.099
Östermark, R. (1996). A fuzzy control model (FCM) for dynamic portfolio management. Fuzzy
Sets and Systems. doi:10.1016/0165-0114(96)84605-7.
Rajagopal, S. (2011). Customer Data Clustering Using Data Mining Technique. International
Journal of Database Management Systems, AIRCC Publishing Corporation, 3(4).
Ravi, V., Pradeepkumar, D., & Deb, K. (2017). Financial time series prediction using hybrids of
chaos theory, multi-layer perceptron and multi-objective evolutionary algorithms. Swarm
and Evolutionary Computation, 36, 136-149. doi:10.1016/j.swevo.2017.05.003.
Shandilya, S. K., Shandilya, S., Deep, K., & Nagar, A. K. (2017). Handbook of research on soft
computing and nature-inspired algorithms. Hershey, PA: Information Science Reference.
Shi, Y., & Eberhart, R. (1998). A modified particle swarm optimizer. IEEE International
Conference on Evolutionary Computation Proceedings. IEEE World Congress on
Computational Intelligence (Cat. No.98TH8360). doi:10.1109/icec.1998.699146.
Suganya, R., & Shanthi, R. (2012). Fuzzy C-Means Algorithm - A Review. International Journal of
Scientific and Research Publications, IJSRP Inc, 2(11).
Zhang, Y., Agarwal, P., Bhatnagar, V., Balochian, S., & Yan, J. (2013). Swarm Intelligence and Its
Applications. The Scientific World Journal, 2013, 3.
Zhu, Q., & Azar, A. (Eds.). (2015). Complex System Modelling and Control Through Intelligent
Soft Computations (Vol. 319). Springer, Cham. doi: https://doi.org/10.1007/978-3-319-
12883-2.
Zhu, H., Wang, Y., Wang, K., & Chen, Y. (2011). Particle Swarm Optimization (PSO) for the
constrained portfolio optimization problem. Expert Systems with Applications, 38(8),
10161-10169. doi:10.1016/j.eswa.2011.02.075.
Professor Dr. S. M. Aqil Burney is the Head of Actuarial Sciences, Risk Management &
Mathematics at the Institute of Business Management (IoBM), Karachi. He holds an M.Sc. (Statistics)
and an M.Phil. (Risk Theory and Insurance - Statistics) from the University of Karachi (UoK) and a
Ph.D. (Mathematics) from Strathclyde University, Glasgow, UK, along with many courses in UN
population studies and computing. He has taught for more than 40 years at UoK and has lectured
extensively at other institutions and universities in Pakistan and abroad. He also has extensive
experience of academic management and organization, having served as Provost, Registrar, Project
Director for the development of the Department of Computer Science and an Institute of Information
Technology, and founding director of the Main Communication Network of the University of Karachi.
Dr. Aqil Burney was Meritorious Professor at the Department of Computer Science, University of
Karachi, prior to joining IoBM. He has published more than 135 research papers and 7 books
nationally and internationally in ICT, Mathematics, Statistics and Computer Science. He has
supervised more than 10 PhD and 5 MS/M.Phil. students in Mathematics, Computer Science and
Statistics, and is an approved HEC supervisor. Dr. Aqil Burney is Chairman (elect) of the National
ICT Committee for Standards, PSQCA, Ministry of Science & Technology, Govt. of Pakistan, a member
of the National Computing Education Accreditation Council (NCEAC), a Member of IEEE (USA) and ACM
(USA), and was a Fellow of the Royal Statistical Society (UK) for some 30 years. His fields of
interest include algorithmic analysis and design of multivariate time series, stochastic simulation
and modeling, software engineering, computer science, soft computing, risk theory and insurance,
e-health management, data sciences, and fuzzy and other logical systems.
Tahseen Jilani received the B.Sc. degree in Computer Science from Government Science Degree
College in 1998, and the M.Sc. (Statistics) and Ph.D. (Computer Science) from the University of
Karachi, Pakistan, in 2001 and 2007, respectively. He has been working as an Associate Professor
since 2014 in the Department of Computer Science, University of Karachi. Since January 2016, he has
been engaged with the School of Computer Science and School of Medicine, University of Nottingham,
UK, as a postdoctoral data scientist. His current research interests include data sciences, machine
learning in medical sciences, statistical techniques for big data analytics, and imprecise and
uncertain data modelling. Dr. Jilani is a member of the Rough Set Society (RSS) and the Association
for Professional Health Analysts (APHA). He serves as a member of technical committees and as an
active reviewer for many national and international research activities. He was the recipient of
the HEC Indigenous 5000 scholarship in 2003, a National Science Foundation grant in 2010, a
Nottingham University fellowship, and an honorary postdoc at the University of Stirling.
Usman Amjad received a BS degree in Computer Science from the University of Karachi in 2008
and recently completed his PhD in Computer Science at the University of Karachi. His research
interests include soft computing, machine learning, artificial intelligence and programming
languages. He was the recipient of the HEC Indigenous 5000 scholarship in 2013. Currently, he is
working as an AI solution architect at Datics.ai Solutions.
Humera Tariq received a B.E. (Electrical) from NED University of Engineering and Technology
in 1999. She joined the MS leading to PhD program at the University of Karachi in 2009 and
completed her PhD in 2015. Currently she is working as Assistant Professor in the Department of
Computer Science, University of Karachi. Her research interests include image processing,
biomedical imaging, modeling, simulation and machine learning.