Summary of The ML Project
One of the main efficiency drawbacks of most existing ABE schemes is that decryption is
expensive for resource-limited devices due to pairing operations, and the number of pairing
operations required to decrypt a ciphertext grows with the complexity of the access policy.
This thesis work therefore considers outsourced decryption. It is emphasized that an ABE
scheme with secure outsourced decryption does not necessarily guarantee verifiability (i.e.,
correctness of the transformation performed by the outsourcing party).
The goal of this project is to improve my knowledge of machine learning algorithms and,
through this topic, to understand the internal and external behavior of machine learning
on unsupervised datasets.
The following algorithms are considered:
KNN algorithm
Multilayer perceptron algorithm
Naive Bayes algorithm
Random forest algorithm
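To make the first of these concrete, here is a minimal sketch of a KNN classifier in pure Python; the toy dataset and the value of k are illustrative only and not taken from the project:

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, query, k=3):
    """Predict the label of `query` by majority vote among the
    k nearest training points (Euclidean distance)."""
    dists = sorted(
        (math.dist(x, query), y) for x, y in zip(train_X, train_y)
    )
    top_labels = [y for _, y in dists[:k]]
    return Counter(top_labels).most_common(1)[0][0]

# Toy 2-D dataset: two well-separated clusters labeled "a" and "b"
X = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
y = ["a", "a", "a", "b", "b", "b"]

print(knn_predict(X, y, (0.5, 0.5)))  # query near the first cluster
print(knn_predict(X, y, (5.5, 5.5)))  # query near the second cluster
```

The same interface shape (fit on labeled points, predict a label for a query) carries over to the other listed algorithms.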
Cloud network monitoring data is dynamic and distributed. Signals to monitor the cloud can
appear, disappear or change their importance and clarity over time. Machine learning (ML)
models tuned to a given data set can therefore quickly become inadequate. A model might be
highly accurate at one point in time but may lose its accuracy at a later time due to changes in
input data and their features. Distributed learning with dynamic model selection is therefore
often required. Under such selection, poorly performing models (although aggressively tuned for
the prior data) are retired or put on standby while new or standby models are brought in.
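The retire-or-standby selection described above can be sketched as follows. The `ModelPool` class and the model names are hypothetical, intended only to illustrate promotion based on windowed recent accuracy:

```python
from collections import deque

class ModelPool:
    """Keep an active model plus standby models; promote whichever model
    has the best accuracy over a sliding window of recent predictions."""
    def __init__(self, models, window=50):
        self.scores = {name: deque(maxlen=window) for name in models}
        self.active = next(iter(models))  # first model starts active

    def record(self, name, correct):
        # Record whether the named model's latest prediction was correct.
        self.scores[name].append(1.0 if correct else 0.0)

    def accuracy(self, name):
        s = self.scores[name]
        return sum(s) / len(s) if s else 0.0

    def reselect(self):
        # Retire a poorly performing active model by promoting the best one.
        self.active = max(self.scores, key=self.accuracy)
        return self.active

pool = ModelPool(["model_A", "model_B"])
for _ in range(20):
    pool.record("model_A", correct=False)  # drifting data hurts model_A
    pool.record("model_B", correct=True)   # standby model_B now fits better
print(pool.reselect())  # model_B
```

A fixed-length window is what lets a once-accurate model lose its active status when the input data change.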
The well-known method of Ensemble ML (EML) may potentially be applied to improve the
quality of model selection, but it has several drawbacks, including the need for continuous
training, excessive computational resources, the requirement for large training datasets, a
high risk of overfitting, and a time-consuming model-building process. In this paper, we
propose a novel cloud methodology for automatic ML model selection and tuning that
automates model building and selection and is competitive with existing methods. We use
unsupervised learning to better explore the data space before the generation of ML models.
We rely on a DevOps architecture for auto-tuning and selection based on container orchestration and
messaging between containers, and take advantage of a new auto-scaling method to dynamically
create and evaluate instantiations of ML algorithms. The proposed methodology and tool are
evaluated on a real cloud test bed.
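As an illustration of using unsupervised learning to explore the data space before generating models, a tiny one-dimensional k-means sketch (not the project's actual clustering code) might look like:

```python
import random

def kmeans_1d(points, k=2, iters=20, seed=0):
    """Tiny 1-D k-means: discover cluster structure in the data space
    before spawning and tuning supervised models per region."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # start from k random data points
    for _ in range(iters):
        # Assign each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # Move each center to the mean of its assigned points.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

data = [1.0, 1.2, 0.8, 9.0, 9.3, 8.7]  # two obvious groups
print(kmeans_1d(data))
```

The discovered centers (roughly 1.0 and 9.0 here) indicate regions of the data space for which separate model instantiations could then be created and evaluated.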
METHODOLOGY:
Methodology is the stage of the project when the theoretical design is turned into a working
system. It can therefore be considered the most critical stage in achieving a successful new
system and in giving the user confidence that the new system will work and be effective.
The methodology stage involves careful planning, investigation of the existing system, and
its changeover methods.
MAIN METHODOLOGY:
1. Cloud Platform:
The strength of the ML framework lies in its ability to maintain lightweight and powerful
accountability that combines aspects of access control, usage control, and authentication. By
means of the ML framework, data owners can track not only whether or not the service-level
agreements are being honored, but also how their data are being accessed and used.
Push mode:
The push mode refers to logs being periodically sent to the data owner or stakeholder.
Pull mode:
The pull mode refers to logs being retrieved by the data owner or stakeholder on demand,
whenever they are needed.
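The two delivery modes can be contrasted with a small sketch; `LogBuffer` and its methods are hypothetical names used only to illustrate push versus pull:

```python
import json
import time

class LogBuffer:
    """Minimal illustration of push vs. pull log delivery."""
    def __init__(self):
        self.records = []

    def log(self, user, action):
        # Record one access event with a timestamp.
        self.records.append({"user": user, "action": action, "t": time.time()})

    def push(self, send):
        """Push mode: periodically send the buffered logs to the owner."""
        send(json.dumps(self.records))  # deliver a batch to the owner
        self.records = []               # clear the buffer after delivery

    def pull(self):
        """Pull mode: the owner retrieves the current logs on demand."""
        return list(self.records)

buf = LogBuffer()
buf.log("userX", "view")
print(len(buf.pull()))   # 1: the owner pulls the record on demand
sent = []
buf.push(sent.append)    # the owner receives one JSON batch
print(len(buf.records))  # 0: the buffer is cleared after the push
```

In practice a periodic push would be driven by a timer, with pull available at any time in between.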
3. LOGGING AND TUNING TECHNIQUES:
1. The logging should be decentralized in order to adapt to the dynamic nature of the cloud.
More specifically, log files should be tightly coupled with the corresponding data being
controlled, and should require minimal infrastructural support from any server in the ML
algorithms.
2. Every access to the user's data should be correctly and automatically logged. This requires
integrated techniques to authenticate the entity that accesses the data and to verify and record
the actual operations on the data, as well as the times at which the data have been accessed.
3. Log files should be reliable and tamper-proof to avoid illegal insertion, deletion, and
modification by malicious parties. Recovery mechanisms are also desirable to restore damaged
log files.
4. Log files should be sent back to their data owners periodically to inform them of the current
usage of their data. More importantly, log files should be retrievable anytime by their data
owners when needed, regardless of the location where the files are stored.
5. The proposed technique should not intrusively monitor data recipients' systems, nor should
it introduce heavy communication and computation overhead, which would otherwise hinder
its practical deployment.
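Requirement 3 (tamper-proof logs) is commonly approached with hash chaining. A minimal sketch, assuming SHA-256 and JSON-serializable records, could look like this:

```python
import hashlib
import json

def append_record(chain, record):
    """Append a record whose hash also covers the previous entry's hash,
    so undetected insertion, deletion, or modification becomes hard."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256((prev + body).encode()).hexdigest()
    chain.append({"record": record, "prev": prev, "hash": digest})

def verify(chain):
    """Walk the chain and recompute every hash; any tampering breaks it."""
    prev = "0" * 64
    for entry in chain:
        body = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log = []
append_record(log, {"user": "X", "op": "view"})
append_record(log, {"user": "X", "op": "view"})
print(verify(log))                 # True: the chain is intact
log[0]["record"]["op"] = "modify"  # tamper with an earlier entry
print(verify(log))                 # False: tampering is detected
```

This only detects tampering; the recovery mechanisms mentioned above would additionally require redundant copies of the log.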
4. MAJOR COMPONENTS OF ML:
There are two major components of the ML: the first being the auto selection, and the second
being the auto tuning.
The logger is strongly coupled with the user's data (either single or multiple data items). Its
main tasks include automatically logging access to the data items that it contains, encrypting
the log records using the public key of the content owner, and periodically sending them to the
log harmonizer. It may also be configured to ensure that access and usage control policies associated
with the data are honored. For example, a data owner can specify that user X is only allowed to
view but not to modify the data. The auto selection will control the data access even after it is
downloaded by user X. The auto tuning forms the central component which allows the user
access to the log files. The auto selection is responsible for auditing.
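A highly simplified sketch of such a logger, with a "view but not modify" policy like the user X example above, might look like this. The class and names are illustrative, and public-key encryption of the records is omitted for brevity:

```python
class Logger:
    """Hypothetical sketch of the logger component: it travels with a data
    item, records every access attempt, and enforces the owner's policy."""
    def __init__(self, data, policy):
        self.data = data
        self.policy = policy   # e.g. {"userX": {"view"}}: view-only access
        self.records = []

    def access(self, user, op):
        allowed = op in self.policy.get(user, set())
        # Every attempt is logged, whether it is allowed or denied.
        self.records.append((user, op, allowed))
        if not allowed:
            raise PermissionError(f"{user} may not {op} the data")
        return self.data if op == "view" else None

log = Logger("secret report", {"userX": {"view"}})
print(log.access("userX", "view"))   # allowed: returns the data
try:
    log.access("userX", "modify")    # denied by policy, but still logged
except PermissionError:
    pass
print(len(log.records))              # 2: both attempts were recorded
```

In the described system each appended record would also be encrypted with the content owner's public key before being sent to the log harmonizer.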
Hard Disk: 80 GB
Application Server: Glassfish
TESTING TECHNOLOGIES:
Selenium with Python is used for testing the machine learning algorithms. Selenium with
Python is one of the most powerful automation testing technologies in industry, and it is the
testing technology used in this project.
A further advantage of the approach is that it does not require any dedicated authentication or
storage system to be in place, and it provides usage control for the protected data after the data
are delivered to the receiver. We conduct experiments on a real cloud test bed. The results
demonstrate the efficiency and effectiveness of the proposed approach. We also provide a
detailed security analysis and discuss the reliability and strength of our architecture.