Physicists know that no instrument can be trusted without an understanding of how it works. Keane studied how deep neural networks compress and cluster their data.

As part of his final year project at the School of Physical and Mathematical Sciences (SPMS), Keane Wee Jin Yen (Year 4, Physics and Applied Physics) explored the mechanics behind dimensionality reduction in deep neural networks. His work aimed to shed light on how data is compressed and clustered in machine learning models, an area often treated as a “black box” even by experts. Without prior experience in the field, Keane picked up machine learning from scratch and eventually uncovered unexpected parallels between neural networks and mathematical concepts familiar from physics, such as manifolds.

Keane Wee Jin Yen, Year 4, Physics and Applied Physics
Supervisor: Prof Massimo Pica Ciamarra


What’s your project about – in a nutshell?

I am exploring the underlying mechanisms behind dimensionality reduction in deep neural networks. I applied dimensionality reduction techniques to the network’s internal representations to study how the data is compressed as it passes through the layers.

What sparked the idea for this project?
This project was motivated by our increasing dependence on machine learning. Despite that dependence, neural networks remain a black box: we do not fully understand the mechanisms by which they work.

How did your project evolve from idea to outcome?

Figure 1 Neural Network Model


Start – I worked through an online tutorial to get acquainted with Keras and TensorFlow, two machine learning libraries for Python. I also read a textbook covering the basic mechanics of neural networks.
Middle – I used the MNIST dataset of handwritten digits to understand how the neural network clusters the different digits.
End – I applied dimensionality reduction and clustering techniques to the network’s representations to further examine the trends and relationships it had generated (a minimal sketch of this pipeline follows below).
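
A minimal sketch of that pipeline, assuming a small dense Keras classifier: train on MNIST, extract a hidden layer’s activations, and project them with PCA. The layer sizes, epoch count, and choice of hidden layer are illustrative assumptions, not the project’s actual settings.

```python
import tensorflow as tf
from sklearn.decomposition import PCA

# Load MNIST, flatten the 28x28 images, and scale pixel values to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

# A small fully connected classifier (sizes chosen arbitrarily for this sketch).
inputs = tf.keras.Input(shape=(784,))
h1 = tf.keras.layers.Dense(128, activation="relu")(inputs)
h2 = tf.keras.layers.Dense(64, activation="relu")(h1)
outputs = tf.keras.layers.Dense(10, activation="softmax")(h2)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, batch_size=128, verbose=0)

# A second model that exposes the 64-dimensional hidden representation.
extractor = tf.keras.Model(inputs, h2)
hidden = extractor.predict(x_test, verbose=0)

# Project the hidden activations onto their first two principal components;
# plotting coords coloured by y_test shows how the digits cluster.
coords = PCA(n_components=2).fit_transform(hidden)
print(coords.shape)  # (10000, 2)
```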

What was a tough/challenging moment, and how did you work through it?
The most challenging moment was getting started, especially having to understand the theory behind neural networks from scratch, as I had no background in machine learning before starting this FYP.

What was the most fun or satisfying part of doing this project?
The most satisfying part was gaining another perspective on principal component analysis and being able to reproduce trends similar to those the research group had found (one such perspective is sketched below).
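
One perspective on PCA that tends to resonate with physicists is that it is an eigenvalue problem: diagonalise the data’s covariance matrix and project onto the leading eigenvectors, much as in a normal-mode analysis. The toy data below is arbitrary; this is a generic illustration, not the project’s own analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))   # toy data: 500 samples, 64 features

Xc = X - X.mean(axis=0)          # centre the data
cov = Xc.T @ Xc / (len(Xc) - 1)  # sample covariance matrix (64 x 64)

# Eigendecomposition of the symmetric covariance matrix;
# numpy returns the eigenvalues in ascending order.
eigvals, eigvecs = np.linalg.eigh(cov)
top2 = eigvecs[:, ::-1][:, :2]   # eigenvectors of the two largest eigenvalues

coords = Xc @ top2               # project onto the top two components
print(coords.shape)              # (500, 2)
```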

One thing you learned – about the topic, or yourself?

Figure 2 Distance heatmap for a 3-layer DNN

About the topic: As I researched dimensionality reduction techniques, I realised that some are designed for non-linear data, that is, data lying on a curved manifold rather than a flat subspace. This connection between neural networks and manifolds was something I didn’t expect (the toy example below shows the difference a manifold method makes).
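
A standard toy illustration of that point, using scikit-learn’s “swiss roll”: a two-dimensional sheet rolled up in three dimensions. Linear PCA projects straight through the roll and mixes distant parts of the sheet, while a manifold method such as Isomap approximates distances along the surface and unrolls it. The sample size and neighbour count are arbitrary choices for this sketch.

```python
from sklearn.datasets import make_swiss_roll
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap

# 1500 points on a rolled-up 2D sheet embedded in 3D; t parametrises
# position along the roll and can be used to colour the plots.
X, t = make_swiss_roll(n_samples=1500, noise=0.05, random_state=0)

# Linear projection: keeps the two directions of largest variance,
# which cut through the rolled-up sheet and overlay distant points.
X_pca = PCA(n_components=2).fit_transform(X)

# Manifold learning: builds a nearest-neighbour graph and preserves
# geodesic (along-the-surface) distances, recovering the flat sheet.
X_iso = Isomap(n_neighbors=10, n_components=2).fit_transform(X)

print(X_pca.shape, X_iso.shape)  # (1500, 2) (1500, 2)
```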

About myself: I found myself increasingly interested in the relationship between neural networks and physics. I plan to focus on the field of machine learning in the near future.

Any advice for students starting their own final year project?
Don’t be afraid to try a field or topic outside of your comfort zone.