With 22 years of experience, including 19 years specializing in AI/ML, Daekeun has worked across startups, manufacturing, FSI, and cloud, gaining deep expertise in developing and deploying AI/ML products. Daekeun holds 6 first-author patents and has led AI/ML projects that contributed to the mass production of over 20 products.
As an AI/ML technical specialist, Daekeun has led 150+ AI/ML workloads, delivered 80+ seminars as the ML community tech leader, and mentored 18 ML experts. While his career was deeply rooted in computer vision, his expertise now spans all AI/ML domains, including GenAI, with a strong focus on SLM fine-tuning and SLM/LLM serving. Daekeun holds a double major in computer science and mathematics, and a master's degree in computer science with a specialization in ML.
- 🧑‍💻 Modeling & Deployment: SLM/LLM fine-tuning, model serving, evaluation-driven LLMOps, Traditional ML/Data Science, Computer Vision
- ☁️ Cloud ML Platforms: Amazon SageMaker, Azure ML, Hugging Face
- 📚 Research to Production: 6 academic papers, 6 first-author patents, 20+ products in mass production, 2 tech book translations
- 🎤 Thought Leadership: 80+ seminars, 40+ public talks, 18 ML mentees
- Daekeun Kim (2024). Fine-tune/Evaluate/Quantize SLM/LLM using the torchtune on Azure ML. Microsoft Tech Community.
- Daekeun Kim (2024). Generate Synthetic QnAs from Real-world Data on Azure. Microsoft Tech Community.
- Daekeun Kim (2024). Fine-tuning Florence-2 for VQA (Visual Question Answering) using the Azure ML Python SDK and MLflow. Microsoft Tech Community.
- Manoranjan Rajguru and Daekeun Kim (2024). Fine-tune Small Language Model (SLM) Phi-3 using Azure Machine Learning. Microsoft Tech Community.
- Daekeun Kim (2023). Quickly build high-accuracy Generative AI applications on enterprise data using Amazon Kendra, LangChain, and LLMs. AWS Korea Tech Blog.
- Daekeun Kim (2023). Build a powerful question answering bot with Amazon SageMaker, Amazon OpenSearch Service, Streamlit, and LangChain. AWS Korea Tech Blog.
- Daekeun Kim (2023). Interactively fine-tune Falcon-40B and other LLMs on Amazon SageMaker Studio notebooks using QLoRA. AWS Korea Tech Blog.
- Daekeun Kim (2023). Deploy Falcon-40B with large model inference DLCs on Amazon SageMaker. AWS Korea Tech Blog.
- Hyundoo Jin, Daekeun Kim, Daeyeol Shim, and Daehoon Oh (2023). Using Amazon SageMaker Distributed Training with KakaoStyle to Model a Category Automated Classification System. AWS Korea Tech Blog.
- Daekeun Kim and Hyeonsang Jeon (2023). Train a Large Language Model on a single Amazon SageMaker GPU with Hugging Face and LoRA. AWS Korea Tech Blog.
- Sungwon Han, Heewon Ko, Hyojung Kang, Kyungdae Cho, Sanghwa Na, and Daekeun Kim (2023). SK Telecom's Case Study of Building a ML Pipeline Using AWS Inferentia and AWS Step Functions. AWS Korea Tech Blog.
- Daekeun Kim and Daeyeol Shim (2023). Co-translated “Machine Learning Engineering in Action”, authored by Ben Wilson.
- Daekeun Kim and Youngmin Kim (2023). Co-translated “Designing Machine Learning Systems”, authored by Chip Huyen.