Johns Hopkins Public Health Collective

18 Posts

Students writing cool things in Public Health and AI.

MLOps: From Model to Production

Building a great model is only half the battle. MLOps (Machine Learning Operations) is the discipline of deploying, monitoring, and maintaining models in production reliably and efficiently.
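
As a taste of the deployment step, here is a minimal sketch, not the post's own stack: it persists a scikit-learn model with joblib and serves predictions through a hypothetical FastAPI endpoint (model, dataset, and field names are illustrative assumptions).

```python
import joblib
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Training time (normally a separate pipeline): fit and persist the model artifact.
X, y = load_iris(return_X_y=True)
joblib.dump(RandomForestClassifier(random_state=0).fit(X, y), "model.joblib")

# Serving time: load the artifact once and expose a prediction endpoint
# (run with, e.g., `uvicorn this_file:app`).
app = FastAPI()
model = joblib.load("model.joblib")

class IrisFeatures(BaseModel):
    sepal_length: float
    sepal_width: float
    petal_length: float
    petal_width: float

@app.post("/predict")
def predict(f: IrisFeatures):
    x = np.array([[f.sepal_length, f.sepal_width, f.petal_length, f.petal_width]])
    return {"predicted_class": int(model.predict(x)[0])}
```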

Cross-Validation: The Gold Standard for Model Evaluation

A simple train/test split is not always enough. Learn how K-Fold Cross-Validation provides a much more robust estimate of your model's performance on unseen data.
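
As a preview, a minimal 5-fold cross-validation sketch with scikit-learn; the dataset and model are chosen only for illustration.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# Each of the 5 folds takes one turn as the held-out test set.
scores = cross_val_score(model, X, y, cv=5)
print(scores)                                               # one accuracy score per fold
print(f"mean = {scores.mean():.3f} +/- {scores.std():.3f}")
```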

Support Vector Machines: Maximizing the Margin

An introduction to Support Vector Machines (SVMs), a powerful and versatile supervised learning algorithm capable of performing linear or non-linear classification, regression, and outlier detection.
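
A quick scikit-learn sketch of a soft-margin SVM with an RBF kernel on a toy non-linear dataset; the details are illustrative, not from the post.

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Two interleaving half-moons: not separable by a straight line.
X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# C trades margin width against violations; the RBF kernel handles the curved boundary.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_tr, y_tr)
print("test accuracy:", round(clf.score(X_te, y_te), 3))
```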

The Bias-Variance Tradeoff: A Balancing Act in Machine Learning

A fundamental concept in machine learning, the Bias-Variance Tradeoff explains the delicate balance between a model that is too simple and one that is too complex. Understanding it is key to diagnosing model performance.
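
The idea can be previewed numerically. In the illustrative sketch below (a made-up noisy sine-wave dataset), a low-degree polynomial underfits (high bias) while a high-degree one overfits (high variance), which shows up as a gap between training and validation error.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(60, 1))
y = np.sin(2 * np.pi * X.ravel()) + rng.normal(0, 0.3, size=60)   # noisy sine wave
X_tr, y_tr, X_va, y_va = X[:40], y[:40], X[40:], y[40:]

for degree in (1, 4, 15):   # too simple, about right, too flexible
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression()).fit(X_tr, y_tr)
    train_mse = mean_squared_error(y_tr, model.predict(X_tr))
    val_mse = mean_squared_error(y_va, model.predict(X_va))
    print(f"degree {degree:2d}: train MSE = {train_mse:.3f}, validation MSE = {val_mse:.3f}")
```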

Finding the Sweet Spot: An Introduction to Hyperparameter Tuning

Machine learning models have many knobs and dials called hyperparameters. Learn how to tune them effectively using techniques like Grid Search and Random Search to unlock your model's true potential.
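
As a preview, a minimal Grid Search sketch with scikit-learn; the parameter grid and dataset are made up for illustration, and RandomizedSearchCV offers the same interface for Random Search.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_breast_cancer(return_X_y=True)

# Every combination in the grid is scored with 5-fold cross-validation.
param_grid = {
    "n_estimators": [100, 200],
    "max_depth": [None, 5, 10],
    "min_samples_leaf": [1, 5],
}
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=5, n_jobs=-1)
search.fit(X, y)
print(search.best_params_)
print(round(search.best_score_, 3))
```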

The Battle Against Overfitting: An Introduction to Regularization

Learn about one of the most common pitfalls in machine learning—overfitting—and explore powerful techniques like L1 (Lasso) and L2 (Ridge) regularization to build more generalizable models.
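
A small scikit-learn sketch (dataset and penalty strength chosen only for illustration) comparing ordinary least squares with Ridge and Lasso; note how the L1 penalty drives some coefficients exactly to zero.

```python
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Lasso, LinearRegression, Ridge

X, y = load_diabetes(return_X_y=True)

for name, model in [("OLS (no penalty)", LinearRegression()),
                    ("Ridge (L2)", Ridge(alpha=1.0)),
                    ("Lasso (L1)", Lasso(alpha=1.0))]:
    coefs = model.fit(X, y).coef_
    print(f"{name:17s} sum of |coefficients| = {np.abs(coefs).sum():8.1f}, "
          f"coefficients exactly zero = {int((coefs == 0).sum())}")
```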

Model Evaluation: How Good Is Your Model, Really?

Building a model is one thing, but how do you know if it's any good? We'll explore essential evaluation metrics for classification and regression to help you measure and compare your models' performance.
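
As a preview, a minimal scikit-learn sketch that reports accuracy, a confusion matrix, and per-class precision, recall, and F1 on an illustrative dataset.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X_tr, y_tr)
y_pred = clf.predict(X_te)

print("accuracy:", round(accuracy_score(y_te, y_pred), 3))
print(confusion_matrix(y_te, y_pred))          # rows: true class, columns: predicted class
print(classification_report(y_te, y_pred))     # precision, recall, F1 per class
```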

Gradient Boosting and XGBoost: The King of Kaggle Competitions

An overview of Gradient Boosting, a powerful ensemble technique, and its most famous implementation, XGBoost, which is renowned for its performance and speed, especially on tabular data.
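
A minimal sketch using scikit-learn's GradientBoostingClassifier on an illustrative dataset; xgboost's XGBClassifier exposes a similar fit/predict interface if that package is installed.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Trees are added one at a time, each fit to the mistakes of the ensemble so far.
gbm = GradientBoostingClassifier(n_estimators=200, learning_rate=0.1,
                                 max_depth=3, random_state=0)
gbm.fit(X_tr, y_tr)
print("test accuracy:", round(gbm.score(X_te, y_te), 3))
```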

Decision Trees and Random Forests: Interpretable Machine Learning

A guide to understanding Decision Trees and their powerful ensemble extension, Random Forests. Learn how these intuitive, flowchart-like models make decisions and why they are so popular in machine learning.
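
As a preview, a small scikit-learn sketch comparing a single decision tree with a random forest on an illustrative dataset.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
# A forest averages many trees grown on bootstrapped samples and random feature subsets.
forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

print("single tree  :", round(tree.score(X_te, y_te), 3))
print("random forest:", round(forest.score(X_te, y_te), 3))
```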

Unsupervised Learning: Finding Patterns in the Noise

A look into unsupervised learning, the branch of machine learning that finds hidden patterns and structures in unlabeled data, focusing on clustering and dimensionality reduction.
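
A minimal sketch of both themes with scikit-learn: PCA for dimensionality reduction, then k-means for clustering the unlabeled points (dataset and cluster count chosen only for illustration).

```python
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X, _ = load_iris(return_X_y=True)                 # labels deliberately ignored
X_scaled = StandardScaler().fit_transform(X)

X_2d = PCA(n_components=2).fit_transform(X_scaled)               # 4 features -> 2
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_2d)
print(clusters[:15])                              # cluster assignment per sample
```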

An Introduction to Reinforcement Learning: Learning by Doing

Explore the fundamentals of Reinforcement Learning (RL), the area of machine learning where agents learn to make optimal decisions by interacting with an environment and receiving rewards.
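
As a preview, a tabular Q-learning sketch on a made-up five-state corridor; the environment and hyperparameters are invented for illustration.

```python
import numpy as np

n_states, n_actions = 5, 2                 # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))        # the Q-table the agent learns
alpha, gamma, epsilon = 0.1, 0.9, 0.1
rng = np.random.default_rng(0)

for episode in range(500):
    s = 0
    while s != n_states - 1:               # state 4 is the rewarding terminal state
        # Explore while the row is still untrained, or with probability epsilon.
        if rng.random() < epsilon or Q[s].max() == 0.0:
            a = rng.integers(n_actions)
        else:
            a = Q[s].argmax()
        s_next = max(s - 1, 0) if a == 0 else s + 1
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Q-learning update: nudge Q(s, a) toward r + gamma * max_a' Q(s', a').
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q.argmax(axis=1))   # greedy policy: states 0-3 should prefer action 1 (right)
```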

BERT and the Power of Transfer Learning in NLP

Discover how BERT (Bidirectional Encoder Representations from Transformers) revolutionized NLP by learning deep contextual relationships, and how transfer learning allows us to leverage its power for custom tasks.
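
A minimal sketch using the Hugging Face transformers package (an external dependency that downloads pretrained weights on first use); the example sentence is invented for illustration.

```python
from transformers import pipeline

# A pre-trained BERT predicts the masked word from bidirectional context.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")
for pred in fill_mask("Vaccines help prevent the spread of [MASK] diseases."):
    print(round(pred["score"], 3), pred["token_str"])

# Transfer learning: the same pretrained encoder can be loaded with a fresh
# classification head and fine-tuned on a labeled dataset, e.g. via
# AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2).
```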

The Transformer Architecture: The Model That Changed NLP Forever

An exploration of the Transformer architecture and its core component, the self-attention mechanism, which has become the foundation for modern large language models like GPT and BERT.
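
The core computation fits in a few lines of NumPy: scaled dot-product self-attention, softmax(QK^T / sqrt(d_k)) V, shown here with toy shapes chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model, d_k = 4, 8, 8                  # 4 tokens, 8-dimensional embeddings

X = rng.normal(size=(seq_len, d_model))          # toy token embeddings
W_q, W_k, W_v = (rng.normal(size=(d_model, d_k)) for _ in range(3))

Q, K, V = X @ W_q, X @ W_k, X @ W_v
scores = Q @ K.T / np.sqrt(d_k)                  # similarity between every pair of tokens
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax: each row sums to 1
output = weights @ V                             # each token becomes a weighted mix of all tokens

print(weights.round(2))                          # the attention matrix
print(output.shape)                              # (4, 8)
```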

Long Short-Term Memory (LSTM): Overcoming RNNs' Limitations

Dive into Long Short-Term Memory (LSTM) networks, a special kind of RNN that can learn long-term dependencies, revolutionizing natural language processing and time-series analysis.
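
A minimal PyTorch sketch (dimensions invented for illustration): an LSTM encodes a sequence and its final hidden state feeds a small classifier.

```python
import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    def __init__(self, input_size=16, hidden_size=32, num_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, num_classes)

    def forward(self, x):                 # x: (batch, seq_len, input_size)
        _, (h_n, _) = self.lstm(x)        # h_n: final hidden state, (1, batch, hidden)
        return self.fc(h_n[-1])           # logits: (batch, num_classes)

model = LSTMClassifier()
logits = model(torch.randn(8, 20, 16))    # a batch of 8 sequences of length 20
print(logits.shape)                       # torch.Size([8, 2])
```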

Recurrent Neural Networks (RNNs): Understanding Sequential Data

An introduction to Recurrent Neural Networks (RNNs), the models that give machines a sense of memory, making them ideal for tasks like translation, speech recognition, and more.
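
The recurrence itself fits in a few lines of NumPy, as in this illustrative sketch where the hidden state h_t carries information forward: h_t = tanh(W_xh x_t + W_hh h_{t-1} + b).

```python
import numpy as np

rng = np.random.default_rng(0)
input_size, hidden_size, seq_len = 3, 5, 7

W_xh = rng.normal(scale=0.5, size=(hidden_size, input_size))    # input -> hidden
W_hh = rng.normal(scale=0.5, size=(hidden_size, hidden_size))   # hidden -> hidden (the "memory")
b = np.zeros(hidden_size)

xs = rng.normal(size=(seq_len, input_size))      # one toy input sequence
h = np.zeros(hidden_size)                        # initial hidden state
for x_t in xs:
    h = np.tanh(W_xh @ x_t + W_hh @ h + b)       # the same weights are reused at every step

print(h.round(3))                                # final hidden state summarizes the sequence
```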

Convolutional Neural Networks (CNNs): The Eyes of Deep Learning

A deep dive into Convolutional Neural Networks (CNNs), the powerhouse behind modern computer vision. Learn how they 'see' and classify images with incredible accuracy.
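
As a preview, a minimal PyTorch CNN for 28x28 grayscale images (an MNIST-sized toy, not the post's exact architecture): convolutions learn local filters and pooling shrinks the feature maps.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),   # 1x28x28 -> 16x28x28
    nn.ReLU(),
    nn.MaxPool2d(2),                              # -> 16x14x14
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # -> 32x14x14
    nn.ReLU(),
    nn.MaxPool2d(2),                              # -> 32x7x7
    nn.Flatten(),
    nn.Linear(32 * 7 * 7, 10),                    # 10 class scores
)

logits = model(torch.randn(4, 1, 28, 28))         # a batch of 4 fake images
print(logits.shape)                               # torch.Size([4, 10])
```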

Demystifying Backpropagation: The Core of Neural Network Training

A beginner-friendly guide to understanding backpropagation, the fundamental algorithm that powers deep learning. We'll break down the concepts and provide a practical code example.
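
As a taste of that example, here is an illustrative NumPy sketch that trains a one-hidden-layer network on XOR with hand-written backpropagation (the full post goes step by step).

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)            # XOR targets

hidden = 8
W1, b1 = rng.normal(size=(2, hidden)), np.zeros(hidden)
W2, b2 = rng.normal(size=(hidden, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 1.0

for step in range(10_000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)                  # hidden activations
    y_hat = sigmoid(h @ W2 + b2)              # network output
    loss = np.mean((y_hat - y) ** 2)

    # Backward pass: apply the chain rule layer by layer
    d_yhat = 2.0 * (y_hat - y) / len(X)       # dLoss/dy_hat
    d_z2 = d_yhat * y_hat * (1.0 - y_hat)     # through the output sigmoid
    d_W2, d_b2 = h.T @ d_z2, d_z2.sum(axis=0)
    d_h = d_z2 @ W2.T                         # send the error back to the hidden layer
    d_z1 = d_h * h * (1.0 - h)                # through the hidden sigmoid
    d_W1, d_b1 = X.T @ d_z1, d_z1.sum(axis=0)

    # Gradient descent update
    W1 -= lr * d_W1; b1 -= lr * d_b1
    W2 -= lr * d_W2; b2 -= lr * d_b2

print(round(loss, 4))                         # should end up close to 0
print(y_hat.round(2).ravel())                 # should approximate [0, 1, 1, 0]
```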

Generative Adversarial Networks (GANs): The Art of AI Creativity

Explore the fascinating world of Generative Adversarial Networks (GANs), where two neural networks compete to create stunningly realistic images, music, and more.
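
A minimal PyTorch sketch of the adversarial setup on toy 1-D data (architecture and numbers invented for illustration): the generator tries to mimic samples from a Gaussian while the discriminator learns to spot fakes.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))   # noise -> fake sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))   # sample -> real/fake logit

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
batch = 64

for step in range(3000):
    real = 4.0 + 1.5 * torch.randn(batch, 1)            # "real" data: N(4, 1.5)

    # Discriminator step: label real samples 1 and generated samples 0.
    fake = G(torch.randn(batch, 8)).detach()
    d_loss = bce(D(real), torch.ones(batch, 1)) + bce(D(fake), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator call the fakes real.
    fake = G(torch.randn(batch, 8))
    g_loss = bce(D(fake), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

samples = G(torch.randn(1000, 8)).detach()
print(samples.mean().item())   # should drift toward the real mean of about 4
```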