The project aims to create a Machine Learning workflow that supports TV planners in their daily work by combining data from multiple sources.
Fixed, optimized, and improved daily data processing pipelines that suffered from reliability issues and performance bottlenecks. The changes improved data-driven decision-making and developer productivity.
Initially, the data infrastructure was unreliable, with frequent interruptions and delays in data delivery. In this project, we reworked the data infrastructure, data processing, and data storage to improve reliability and performance.
After launching a mobile application, the company extracts value from the collected data using technologies such as Spark, Airflow, and MLflow. Intelligent end-to-end solutions are delivered against stakeholder requirements.
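A minimal sketch of how such a daily pipeline might be orchestrated with Airflow and Spark; the DAG id, script paths, connection id, and schedule are illustrative assumptions, not the client's actual configuration.

```python
# Minimal Airflow DAG sketch: a daily Spark preprocessing job followed by a
# training step. All names and paths here are illustrative assumptions.
from datetime import datetime

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="daily_feature_pipeline",      # hypothetical DAG id
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Preprocess raw app events with Spark; the application path is assumed.
    preprocess = SparkSubmitOperator(
        task_id="preprocess_events",
        application="/opt/jobs/preprocess_events.py",
        conn_id="spark_default",
    )

    # Train the model (logged via MLflow inside the script, which is a placeholder).
    train = SparkSubmitOperator(
        task_id="train_model",
        application="/opt/jobs/train_model.py",
        conn_id="spark_default",
    )

    preprocess >> train
```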
Using Machine Learning and NLP techniques, we infer which products a supplier's customers are not yet buying and target these gaps with personalized sales promotions. The product assists Sales Managers in increasing basket sizes for their customers.
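A minimal sketch of one way such purchase gaps could be inferred; the item co-occurrence approach and all column names are assumptions for illustration, not the actual product logic.

```python
# Illustrative sketch: infer purchase gaps from item co-occurrence.
# The approach and column names (customer_id, item) are assumptions;
# the real product may use different models and features.
import pandas as pd

def purchase_gaps(orders: pd.DataFrame, customer_id: str, top_n: int = 5) -> list:
    """Suggest items a customer has not bought but similar baskets contain."""
    # Customer x item purchase matrix (1 = bought at least once).
    basket = pd.crosstab(orders["customer_id"], orders["item"]).clip(upper=1)

    # Item-item co-occurrence: how often two items share a basket.
    cooc = basket.T @ basket

    bought = basket.loc[customer_id]
    owned = bought[bought > 0].index

    # Score unowned items by co-occurrence with items already bought.
    scores = cooc.loc[owned].sum().drop(labels=owned)
    return scores.nlargest(top_n).index.tolist()

# Example usage with toy data.
orders = pd.DataFrame({
    "customer_id": ["a", "a", "b", "b", "b", "c"],
    "item": ["flour", "sugar", "flour", "sugar", "yeast", "yeast"],
})
print(purchase_gaps(orders, "a"))  # suggests "yeast"
```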
In stock exchange companies, the volume of collected data grows daily, and so does its market value. Given these dynamics, we created an architecture that provides historical data to users and allows them to enrich this data within their own scope.
The project aims to create a decoupled architecture to process and persist data collected into GCP on a daily basis. Within the scope of this project, we also defined the CI/CD strategy for all components of the data pipeline.
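A minimal sketch of what such decoupling could look like on GCP: a Pub/Sub subscription feeding a persistence step, so producers and consumers scale independently. The project id, topic/subscription names, and the BigQuery table are illustrative assumptions, not the project's actual resources.

```python
# Decoupled consumer sketch on GCP: Pub/Sub subscription -> BigQuery row.
# All resource names below are illustrative assumptions.
from google.cloud import bigquery, pubsub_v1

PROJECT = "my-project"                        # hypothetical project id
SUBSCRIPTION = "daily-events-sub"             # hypothetical subscription
TABLE = "my-project.analytics.daily_events"   # hypothetical table

bq = bigquery.Client()
subscriber = pubsub_v1.SubscriberClient()
sub_path = subscriber.subscription_path(PROJECT, SUBSCRIPTION)

def persist(message: pubsub_v1.subscriber.message.Message) -> None:
    # Persist each event as one row; producers never see this consumer.
    bq.insert_rows_json(TABLE, [{"payload": message.data.decode("utf-8")}])
    message.ack()

# Blocks until cancelled; consumers can be scaled independently of producers.
subscriber.subscribe(sub_path, callback=persist).result()
```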
Created a self-service framework for ingesting and preprocessing data from different sources, keeping in mind the requirements of the end consumers of these data. The framework is designed for a cloud-native environment. A set of HTTP REST APIs was made available to users, providing the necessary functionality around on-demand AWS EMR clusters.
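A minimal sketch of what one such self-service endpoint might look like; the framework's real API surface is not described here, so the route, the choice of FastAPI, and the cluster parameters are illustrative assumptions.

```python
# Self-service endpoint sketch: spin up a transient EMR cluster on demand.
# Route, framework choice, region, and EMR release are assumptions.
import boto3
from fastapi import FastAPI

app = FastAPI()
emr = boto3.client("emr", region_name="eu-central-1")  # assumed region

@app.post("/clusters")
def create_cluster(name: str, workers: int = 2) -> dict:
    """Create an on-demand EMR cluster sized by the caller."""
    response = emr.run_job_flow(
        Name=name,
        ReleaseLabel="emr-6.5.0",            # assumed EMR release
        Applications=[{"Name": "Spark"}],
        Instances={
            "InstanceGroups": [
                {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge",
                 "InstanceCount": 1},
                {"InstanceRole": "CORE", "InstanceType": "m5.xlarge",
                 "InstanceCount": workers},
            ],
            "KeepJobFlowAliveWhenNoSteps": True,
        },
        JobFlowRole="EMR_EC2_DefaultRole",
        ServiceRole="EMR_DefaultRole",
    )
    return {"cluster_id": response["JobFlowId"]}
```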
Certifications:
PROFILE
PROFESSIONAL SUMMARY
TOOLS
Machine Learning:
DBMS:
Big Data:
Cloud Technologies:
Platform:
Orchestration:
CI/CD:
Methodologies:
WORK EXPERIENCE:
03/2021 – today
Place of Work: Munich, Germany
Role: Senior Consultant, ML/Data Engineer
Customer: Freelance
12/2016 – 03/2021
Place of Work: Munich, Germany
Role: Senior Consultant, ML/Data Engineer
Customer: Data Reply
EARLIER WORK EXPERIENCES:
01/2014 – 07/2015
Role: Student Helper
Customer: RWTH Aachen University, Informatik Zentrum, Lehrstuhl für Informatik 5
Tasks:
09/2010 – 08/2013
Role: Software Developer
Customer: Helius Systems, Tirana, Albania
Tasks:
Cloud Providers:
PERSONAL PROJECTS
2023 - 2023:
cost-efficient-gpu-platform
2022 - 2022:
ab-testing-for-mlops
2021 - 2021:
kubeflow-spark
2021 - 2021:
spark-dockerfile-multi-stage
2021 - 2021:
analytics-platform-diy
2020 - 2020:
immoscout-bot