“Yonghua joined my team at Deutsche Bank, where I found him to be a very hard-working and smart colleague. He worked on complex data science and NLP projects for me, leveraging Spark on Hadoop YARN, PyTorch and TensorFlow to train and put into production large models in a distributed manner - these models were related to complex Compliance and Anti-Financial Crime (AFC) use cases for Deutsche Bank. It was a pleasure to work with him and I would highly recommend him for any quantitative analytics / data science team in financial services.”
About
I have been building Natural Language Processing (NLP) models that process millions of…
Activity
-
We are thrilled to have been ranked 2nd in the world and 1st in the UK and Europe in this year's QS World University Rankings! 🎊🥳 Whether you're…
Liked by Yonghua Yin
-
OpenChat is currently one of the best open alternatives to OpenAI's ChatGPT. 🚀 The team behind OpenChat released the paper on how they achieved ChatGPT…
Liked by Yonghua Yin
-
It's crazy how far the ML field has come when it comes to fine-tuning LLMs. A year ago: it was challenging to fine-tune GPT-2 (1.5B) on a single GPU…
Liked by Yonghua Yin
Experience
Education
Publications
-
Deep Learning with Dense Random Neural Networks
ICMMI 2017 International Conference on Man-Machine Interactions
This paper models the soma-to-soma interactions in dense nuclei found in the brain, based on the theory of G-networks and the random neural network. The mathematical model is exploited as a multi-layer architecture for deep learning. An efficient training procedure is proposed, and three applications demonstrate that the model, equipped with this training procedure, can be trained significantly faster and achieve better performance than conventional deep learning tools such as the MLP and CNN. The paper was invited as a keynote talk at the conference (http://icmmi.polsl.pl/pages/icmmi2017-erol_gelenbe).
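For background, the classical random neural network (RNN) fixed point that this line of work builds on (the paper's dense-cluster activation is a more involved expression, so treat this as context rather than the paper's exact formula): in steady state, the excitation probability q_i of neuron i with firing rate r_i, external excitatory/inhibitory arrival rates \lambda_i^{+}, \lambda_i^{-}, and excitatory/inhibitory weights w_{ji}^{+}, w_{ji}^{-} satisfies

\[ q_i = \frac{\lambda_i^{+} + \sum_j q_j \, w_{ji}^{+}}{r_i + \lambda_i^{-} + \sum_j q_j \, w_{ji}^{-}} \]

This quasi-linear fixed point is what makes multi-layer RNN architectures analysable and cheap to evaluate compared with simulating the spiking dynamics directly.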
-
Multi-Layer Neural Networks for Quality of Service oriented Server-State Classification in Cloud Servers
International Joint Conference on Neural Networks (IJCNN) 2017
A dataset was collected in the cloud, and deep-learning-based classification systems were built on it.
-
Single-Cell Based Random Neural Network for Deep Learning
International Joint Conference on Neural Networks (IJCNN) 2017
A deep learning tool based on the spiking random neural network is proposed for classification and is demonstrated to be more efficient than four other deep learning tools.
-
Optimum Energy for Energy Packet Networks
Probability in the Engineering and Informational Sciences (Cambridge University Press)
Efficient optimization algorithms are designed to solve energy-distribution optimization problems in Energy Packet Networks. Analytic optimal solutions are also found for most situations.
-
ZD, ZG and IOL Controllers and Comparisons for Nonlinear System Output Tracking with DBZ Problem Conquered in Different Relative Degree Cases
Asian Journal of Control
An output-tracking controller for nonlinear systems is designed, with the division-by-zero (DBZ) problem solved across different relative-degree cases.
-
Deep Learning in Multi-Layer Architectures of Dense Nuclei
NIPS 2016 workshop on Brains and Bits: Neuroscience Meets Machine Learning
-
Nonnegative autoencoder with simplified random neural network
NIPS 2016 Workshop: Computing with Spikes
A deep spiking non-negative autoencoder is proposed for data-dimensionality reduction.
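As a rough, hypothetical sketch of the non-negativity idea in conventional ANN terms (the paper's actual model is a spiking random-neural-network autoencoder, so this is a stand-in with made-up names and sizes): train a small autoencoder while projecting its weights back onto the non-negative orthant after every update.

import torch
import torch.nn as nn

class NonNegAutoencoder(nn.Module):
    def __init__(self, d_in=784, d_hidden=64):
        super().__init__()
        self.enc = nn.Linear(d_in, d_hidden, bias=False)
        self.dec = nn.Linear(d_hidden, d_in, bias=False)

    def forward(self, x):
        z = torch.sigmoid(self.enc(x))      # non-negative code
        return torch.sigmoid(self.dec(z))   # reconstruction

    @torch.no_grad()
    def project_nonneg(self):
        # keep all weights non-negative, mimicking the non-negativity constraint
        self.enc.weight.clamp_(min=0.0)
        self.dec.weight.clamp_(min=0.0)

model = NonNegAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(32, 784)                     # toy batch scaled to [0, 1]
opt.zero_grad()
loss = nn.functional.mse_loss(model(x), x)
loss.backward()
opt.step()
model.project_nonneg()                      # projection step after each update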
-
Cross-validation based weights and structure determination of Chebyshev-polynomial neural networks for pattern classification
Pattern Recognition
A neural-network classifier is proposed. Tests on 12 real-world datasets demonstrate that the proposed classifier achieves better test classification accuracy with lower computational complexity than four other classifiers: the standard MLP, ELM, RBF network and SVM.
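A rough sketch of the general recipe (hypothetical function names; the paper's exact weights-and-structure-determination scheme may differ): expand each input dimension in Chebyshev polynomials up to degree k, solve for the output weights in closed form by least squares, and choose k by cross-validated accuracy. Here Y is assumed one-hot and X scaled into [-1, 1].

import numpy as np

def cheb_features(X, degree):
    # per-dimension Chebyshev expansion T_0..T_degree, concatenated
    return np.hstack([np.polynomial.chebyshev.chebvander(X[:, j], degree)
                      for j in range(X.shape[1])])

def fit_output_weights(Phi, Y):
    # the model is linear in the weights, so training is a least-squares solve
    W, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
    return W

def select_degree(X, Y, degrees, folds=5, seed=0):
    idx = np.random.default_rng(seed).permutation(len(X))
    scores = {}
    for k in degrees:
        accs = []
        for f in range(folds):
            val = idx[f::folds]
            tr = np.setdiff1d(idx, val)
            W = fit_output_weights(cheb_features(X[tr], k), Y[tr])
            pred = cheb_features(X[val], k) @ W
            accs.append(np.mean(pred.argmax(1) == Y[val].argmax(1)))
        scores[k] = np.mean(accs)
    return max(scores, key=scores.get)      # degree with best CV accuracy

Because training reduces to one linear solve per candidate structure, the cross-validation loop stays cheap, which is the source of the low computational complexity claimed above.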
Projects
-
Deep learning with spiking random neural network
-
Deep learning with artificial neural networks (ANNs) based on the von Neumann architecture has been successful. Recently, neuromorphic computing based on spiking neural networks (SNNs), which goes beyond the von Neumann architecture, has begun to attract attention. Large-scale SNNs are difficult to analyse, and deep learning techniques are not directly applicable to them. This project aims to merge the power of deep learning and SNNs. Specifically, based on queueing theory and random neural network theory, analysable and easy-to-train SNN models have been proposed or exploited, with deep learning techniques embedded. Recent work in IJCNN 2017 presented an approach that exploits the spiking random neural network as a deep learning tool more efficient than conventional ones.
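As a toy numerical illustration of the underlying idea (not the project's actual models): a feedforward pass in which each layer's output is the steady-state excitation probability of a layer of RNN neurons, q = min(1, λ⁺ / (r + λ⁻)), with excitatory and inhibitory rates driven by the previous layer.

import numpy as np

def rnn_layer(q_prev, W_plus, W_minus, r=1.0):
    # steady-state excitation probabilities of a feedforward RNN layer:
    # excitatory arrival rates come through W_plus, inhibitory through W_minus
    lam_plus = q_prev @ W_plus
    lam_minus = q_prev @ W_minus
    return np.minimum(1.0, lam_plus / (r + lam_minus))

rng = np.random.default_rng(0)
x = rng.random((4, 8))                      # toy inputs in [0, 1], used as input rates
W1p, W1m = rng.random((8, 16)), rng.random((8, 16))
W2p, W2m = rng.random((16, 3)), rng.random((16, 3))
h = rnn_layer(x, W1p, W1m)                  # hidden layer
y = rnn_layer(h, W2p, W2m)                  # outputs stay in [0, 1] by construction

Because the fixed point has this closed form, the network can be evaluated and differentiated like an ordinary ANN, which is what lets standard deep learning techniques be embedded into the spiking model.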
Honors & Awards
-
Imperial College President's PhD Scholarship
Imperial College London
This is a 3.5-year full PhD scholarship (from Oct. 2014 to Mar. 2018) offered by Imperial College London (50 offers available worldwide each year). More details can be found at https://www.imperial.ac.uk/study/pg/graduate-school/phd-scholars/meet-year-one/
Recommendations received
1 person has recommended Yonghua
More activity by Yonghua
-
Exciting news! Introducing 🤗 PEFT library, offering the latest Parameter-Efficient Fine-tuning (PEFT) techniques seamlessly integrated with 🤗…
Liked by Yonghua Yin
-
It appears that the first open-source equivalent of ChatGPT has arrived: https://lnkd.in/gEjn5-C4 It’s an implementation of RLHF (Reinforcement…
Liked by Yonghua Yin
-
Today is an exciting day for Deutsche Bank as we announce an innovation partnership with NVIDIA, a leader in accelerated computing and AI. This is…
Liked by Yonghua Yin
-
An improved version of TabNet, our interpretable deep learning architecture for tabular data, is now available in Google Cloud’s Vertex AI Tabular…
Liked by Yonghua Yin
-
🎉 OWL-ViT by Google AI is now available in Hugging Face Transformers. 🤗 OWL-ViT is a zero-shot text-conditioned object detection model that allows…
Liked by Yonghua Yin
-
One of the coolest things I've learned about in the last few years is streaming technology. It's becoming increasingly important for online feature…
Liked by Yonghua Yin
-
Continual learning tackles the problem of training a single model on changing data distributions where different classification tasks are presented…
Liked by Yonghua Yin
-
https://lnkd.in/ejtBtnrM
Liked by Yonghua Yin
-
[Microsoft] A typical pipeline for pre-training Document AI models usually starts with OCR or document layout analysis, which still heavily relies on…
Liked by Yonghua Yin
-
The Visual Transformer (ViT) has helped advance many core computer vision applications, e.g., image classification, but training can be inefficient…
Liked by Yonghua Yin
-
#RL is changing the world! The Bellman equations are the heart of #RL. Here's a 3 (3.5:P) steps concise proof 😀 #machinelearning…
Liked by Yonghua Yin