Hands-on work with machine learning concepts including anomaly detection, time-series forecasting, and classification, with a focus on practical deployment patterns via APIs and containerization. These projects explore the full ML lifecycle: from data preparation and feature engineering through model training to serving predictions via production-ready endpoints.
Statistical and ML-based approaches to identify unusual patterns in streaming and batch data. Useful for quality control and monitoring.
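One simple statistical approach along these lines is a rolling z-score over a sliding window; this is a minimal pure-Python sketch (the window size and threshold are illustrative, not values from any specific project):

```python
from collections import deque
from math import sqrt

def rolling_zscore_anomalies(stream, window=20, threshold=3.0):
    """Flag points whose z-score against a rolling window exceeds threshold."""
    buf = deque(maxlen=window)
    flags = []
    for x in stream:
        if len(buf) == window:
            mean = sum(buf) / window
            var = sum((v - mean) ** 2 for v in buf) / window
            std = sqrt(var)
            flags.append(std > 0 and abs(x - mean) / std > threshold)
        else:
            flags.append(False)  # not enough history yet to score
        buf.append(x)
    return flags
```

Because the window is a bounded deque, the same function works unchanged on a live stream or a batch replay.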
Predicting future values from temporal data using LSTM networks and classical methods. Applied to operational metrics and demand patterns.
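On the classical side, the simplest such model is an AR(1): each value is regressed on its predecessor. A minimal sketch of fitting and forecasting it by ordinary least squares (pure Python, no library assumed):

```python
def fit_ar1(series):
    """Fit x[t] = a * x[t-1] + b by ordinary least squares."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    b = my - a * mx
    return a, b

def forecast(series, steps, a, b):
    """Roll the fitted recurrence forward from the last observed value."""
    out, last = [], series[-1]
    for _ in range(steps):
        last = a * last + b
        out.append(last)
    return out
```

An LSTM replaces the one-step linear recurrence with a learned nonlinear one, but the fit-then-roll-forward structure is the same.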
Supervised learning with Scikit-Learn and PyTorch for categorization and continuous value prediction tasks.
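A minimal Scikit-Learn classification sketch, using a synthetic dataset as a stand-in for real data (the dataset and model choice are illustrative only):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data standing in for a real problem.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)  # held-out accuracy
```

The same fit/score interface applies whether the estimator is a linear model or a gradient-boosted ensemble, which is what makes Scikit-Learn pipelines easy to swap experiments through.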
Data transformation, encoding, scaling, and feature selection pipelines that improve model performance and reproducibility.
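A reproducible way to express such pipelines is Scikit-Learn's `ColumnTransformer`, which applies scaling to numeric columns and encoding to categorical ones in a single fitted object. A minimal sketch with made-up column names:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Toy frame with one numeric and one categorical column (names are illustrative).
df = pd.DataFrame({
    "temp": [20.0, 25.0, 30.0, 22.0],
    "machine": ["A", "B", "A", "C"],
})

pre = ColumnTransformer([
    ("num", StandardScaler(), ["temp"]),          # zero-mean, unit-variance scaling
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["machine"]),  # one column per category
])

X = pre.fit_transform(df)  # 1 scaled column + 3 one-hot columns -> shape (4, 4)
```

Because the transformer is fitted once and serialized with the model, training and serving apply identical preprocessing, which is most of what reproducibility means here.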
Convolutional networks for image-based tasks; recurrent networks for sequential and time-series data processing.
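The core operation a convolutional layer applies to sequential data can be sketched in a few lines of pure Python, a valid-mode 1-D convolution (as in deep-learning libraries, technically a cross-correlation since the kernel is not flipped):

```python
def conv1d(signal, kernel):
    """Slide the kernel over the signal; each output is a local weighted sum."""
    k = len(kernel)
    return [
        sum(signal[i + j] * kernel[j] for j in range(k))
        for i in range(len(signal) - k + 1)
    ]

# A [1, 0, -1] kernel acts as a simple change detector over the sequence.
edges = conv1d([1, 2, 3, 4], [1, 0, -1])
```

A trained conv layer learns many such kernels at once; recurrent layers instead carry a hidden state forward step by step.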
Understanding of attention-based architectures for NLP tasks including text classification and generation.
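The central computation in these architectures, scaled dot-product attention, fits in a short NumPy sketch (single head, no masking, for illustration only):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V  -- each output row is a weighted mix of V rows."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # Numerically stable softmax over each row of scores.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights
```

Each row of `weights` is a probability distribution over the keys, which is why attention maps are directly inspectable.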
GANs and diffusion models for synthetic data generation and augmentation.
GNNs for structured data with relational properties applicable to network topology and design graph analysis.
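The basic GNN building block, one round of neighbor aggregation, can be sketched with an adjacency matrix in NumPy (unweighted mean aggregation, no learned parameters, for illustration):

```python
import numpy as np

def mean_aggregate(features, adjacency):
    """Replace each node's features with the mean of its neighbors' features."""
    deg = adjacency.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0  # avoid division by zero; isolated nodes aggregate to zero
    return adjacency @ features / deg
```

Stacking such rounds, with a learned linear map and nonlinearity between them, lets information propagate across a network topology, one hop per layer.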
Every model is deployed with production in mind. I focus on practical ML: not just training models, but deploying them as reliable services. The goal is always the same: data in → clean predictions out → monitored in production. This mindset directly supports my professional work in data pipeline design and automation.
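The "data in, predictions out" contract reduces to a handler like the following sketch, here with a placeholder rule standing in for a trained model and only the standard library, so the same parsing and validation logic can sit behind any web framework or container:

```python
import json

def predict(features):
    """Placeholder for a trained model: a fixed rule used only for illustration."""
    return sum(features) > 1.0

def handle_request(body: str) -> str:
    """Parse a JSON request, validate it, and return a JSON prediction or error."""
    try:
        payload = json.loads(body)
        features = [float(v) for v in payload["features"]]
    except (KeyError, TypeError, ValueError):
        return json.dumps({"error": "expected {\"features\": [numbers...]}"})
    return json.dumps({"anomaly": bool(predict(features))})
```

Keeping the handler a pure string-to-string function makes it trivially unit-testable before it is ever wrapped in an HTTP server or Docker image.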