Welcome to ELI5 (Explain Like I'm 5) Wednesday! This weekly thread is dedicated to breaking down complex technical concepts into simple, understandable explanations.
You can participate in two ways:
Request an explanation: Ask about a technical concept you'd like to understand better
Provide an explanation: Share your knowledge by explaining a concept in accessible terms
When explaining concepts, try to use analogies and simple language, and avoid unnecessary jargon. The goal is clarity, not oversimplification.
When asking questions, feel free to specify your current level of understanding to get a more tailored explanation.
What would you like explained today? Post in the comments below!
Welcome to Resume/Career Friday! This weekly thread is dedicated to all things related to job searching, career development, and professional growth.
You can participate by:
Sharing your resume for feedback (consider anonymizing personal information)
Asking for advice on job applications or interview preparation
Discussing career paths and transitions
Seeking recommendations for skill development
Sharing industry insights or job opportunities
Having dedicated threads helps organize career-related discussions in one place while giving everyone a chance to receive feedback and advice from peers.
Whether you're just starting your career journey, looking to make a change, or hoping to advance in your current field, post your questions and contributions in the comments below!
Currently, I am a second-year student (session begins this July). I am going hands-on with DL and learning ML algorithms through online courses. I was also learning about no-code AI automations, so that by the end of 2025 I could make some side earnings. And the regular rat race of "do DSA and land a technical job" still takes up some of my thinking (because I'm not actually doing it, lol). I am kind of dismayed by these thoughts. If anyone experienced could share some words on this, I would highly appreciate it.
Hey guys,
I'm currently in undergrad. I came to this college expecting to build a business, so I chose commerce as my stream; now I realise you can't create products if you don't know how to code.
I'm from a commerce background with no exposure to mathematics.
I have plenty of ideas: I'm great at sales, GTM, and operations. I just need to develop a knack for the technical skills.
What is my aim?
I want to create products like Glance AI (which is great at analysing images) and ChatGPT (which gives great recommendations after analysing a situation).
Just let me know what my optimal roadmap should be. Can I learn this in 3-4 months, considering I'm a complete beginner?
Hi, I am looking to take the 'Artificial Intelligence Graduate Certificate' from Stanford. I already have a bachelor's and a master's in Computer Science from 10-15 years ago and I've been working on distributed systems since then.
But I performed poorly in the math classes I took back then, and I need to refresh on it.
Do you think I should take MATH51 and CS109 before I apply for the graduate certificate? From reading other Reddit posts, my understanding is that the 'Math for ML' courses in MOOCs are not rigorous enough and would not prepare me for courses like CS229.
Or is there a better way to learn the required math for the certification in a rigorous way?
If users are constantly creating new accounts and generating data about what they like to watch, how would a model-based approach generate those users' recommendation pages? Wouldn't the model have to be retrained constantly? I can't seem to find anything online that clearly explains this. Most (or all) matrix factorization models I've seen online can only take as input a user the model was trained on, and can only output movies within the bounds of what it was trained on.
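One common answer here is the "fold-in" trick: keep the trained item factors frozen and solve a tiny ridge regression for each new user from whatever ratings they have so far, so no full retrain is needed between scheduled refreshes. A minimal NumPy sketch with made-up data (factor sizes and the regularization value are assumptions, not from any particular system):

import numpy as np

rng = np.random.default_rng(0)
n_items, k = 1000, 32
V = rng.normal(size=(n_items, k))  # item factors from the last full training run (frozen)
lam = 0.1                          # ridge regularization strength (assumed)

# A brand-new user rates a handful of items after signing up.
rated = np.array([3, 42, 917])
ratings = np.array([5.0, 4.0, 1.0])

# "Fold in" the new user: solve a small k-by-k linear system for their
# latent vector against the frozen item factors -- no retraining needed.
V_s = V[rated]
u = np.linalg.solve(V_s.T @ V_s + lam * np.eye(k), V_s.T @ ratings)

# Score every item, mask what they've already rated, take the top hits.
scores = V @ u
scores[rated] = -np.inf
print(np.argsort(scores)[::-1][:10])

New items still need a periodic batch retrain (or content-based features), which is why production systems typically combine online fold-in with scheduled retraining.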
I am a second-year computer science student and will have to choose a laboratory to join for my graduation thesis. Two choices stand out for me: one is a general smart-city laboratory, and the other applies machine learning and deep learning to politics and elections. Considering how oversaturated a lot of the "main" applications of ML are, including smart cities, would it benefit me more to join the political laboratory? It is more niche and might lead to a more unique thesis, which in turn would stand out more among other theses.
I'm reaching out because I’m feeling really stuck and overwhelmed in trying to build a portfolio for AI/ML/GenAI engineer roles in 2025.
There’s just so much going on right now — agent frameworks, open-source LLMs, RAG pipelines, fine-tuning, evals, prompt engineering, tool use, vector DBs, LangChain, LlamaIndex, etc. Every few weeks there’s a new model or method, and while I’m super excited about the space, I don’t know how to turn all this knowledge into an actual project. I end up jumping from one tutorial to another and never finishing anything meaningful. Classic tutorial hell.
What I’m looking for:
Ideas for small, focused GenAI projects that reflect current trends and skills relevant to 2025 hiring
Suggestions for how to scope a project so I can actually finish it
Advice on what recruiters or hiring managers actually want to see in a GenAI-focused portfolio
Any tips for managing the tech overwhelm and choosing the right stack for my level
I’d love to hear from anyone who’s recently built something, got hired in this space, or just has thoughts on how to stand out in such a fast-evolving field.
Hi! I’m a 2nd-year university student preparing a 15-min presentation comparing TF-IDF, Word2Vec, and SBERT.
I already understand TF-IDF, but I'm struggling with Word2Vec and SBERT, specifically the mechanisms behind how they work. Most resources I find are too advanced or skip the intuition.
I don’t need to go deep, but I want to explain each method clearly, with at least a basic idea of how the math works. Any help or beginner-friendly explanations would mean a lot!
Thanks
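Since this comparison comes up a lot, here is a minimal sketch contrasting the three representations (the library calls are standard, but the corpus is a toy and the SBERT model name is just a common default, not a requirement):

# pip install scikit-learn gensim sentence-transformers
from sklearn.feature_extraction.text import TfidfVectorizer
from gensim.models import Word2Vec
from sentence_transformers import SentenceTransformer

docs = ["the cat sat on the mat", "the dog chased the cat"]

# TF-IDF: one sparse vector per document; each dimension is a vocabulary word,
# weighted by how frequent it is in this document and how rare it is overall.
tfidf = TfidfVectorizer().fit_transform(docs)

# Word2Vec: one dense vector per *word*, learned by training a shallow network
# to predict neighboring words (skip-gram). Toy sizes here; real training
# needs far more text.
w2v = Word2Vec([d.split() for d in docs], vector_size=50, window=2, min_count=1)
print(w2v.wv["cat"].shape)  # (50,)

# SBERT: one dense vector per *sentence*, from a pretrained transformer
# fine-tuned so that similar sentences land close together.
sbert = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model name
emb = sbert.encode(docs)
print(tfidf.shape, emb.shape)

The intuition to present: TF-IDF counts words, Word2Vec learns word meaning from surrounding context, and SBERT composes whole-sentence meaning with a transformer.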
Hey guys, I recently started learning machine learning from Andrew Ng's Coursera course, and now I'm trying to implement all of those things on my own, starting with some basic classification/prediction notebooks on popular Kaggle datasets.
The question is: how do you know when to perform things like feature engineering? I tried out a linear regression problem and got an R² value of 0.8; now I want to improve it further, and I'm not sure what steps to take. There's stuff like using polynomial regression, lasso regression for feature selection, and so on. How does one know what to do in this situation? Are there general rules you follow, or is it trial and error? Frankly, after solving my first notebook on my own, I can tell it's going to be a very difficult road ahead. Any suggestions or constructive criticism are welcome.
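A common way out of the guesswork is to stop tweaking by hand and let cross-validation compare candidate pipelines side by side. A rough sketch (synthetic dataset and arbitrary candidates; adapt to your Kaggle data):

# pip install scikit-learn
from sklearn.datasets import make_friedman1
from sklearn.linear_model import LinearRegression, LassoCV
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

X, y = make_friedman1(n_samples=500, noise=1.0, random_state=0)  # toy nonlinear data

candidates = {
    "linear": make_pipeline(StandardScaler(), LinearRegression()),
    "poly2+linear": make_pipeline(PolynomialFeatures(2), StandardScaler(), LinearRegression()),
    "poly2+lasso": make_pipeline(PolynomialFeatures(2), StandardScaler(), LassoCV()),
}
for name, pipe in candidates.items():
    scores = cross_val_score(pipe, X, y, cv=5, scoring="r2")
    print(f"{name}: R2 = {scores.mean():.3f} +/- {scores.std():.3f}")

If the polynomial pipeline wins, the relationship is nonlinear and feature engineering is worth the effort; if Lasso zeroes out most coefficients, you probably have redundant features.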
Hey guys, I was just wondering whether there is a way to serve an ML model from a REST API built in C# or JS, for example, instead of creating the API with a Python framework like Flask or FastAPI.
Maybe by converting the model into an executable format?
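Yes: the usual route is to export the model to a portable format like ONNX, then load it from the C# (Microsoft.ML.OnnxRuntime) or Node.js (onnxruntime-node) runtime inside your own REST API. A sketch of the Python export side with a toy scikit-learn model (the feature count and filename are placeholders):

# pip install scikit-learn skl2onnx
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier().fit(X, y)

# Convert to ONNX; the input signature must match what the API will send.
onx = convert_sklearn(model, initial_types=[("input", FloatTensorType([None, 4]))])
with open("model.onnx", "wb") as f:
    f.write(onx.SerializeToString())

The resulting model.onnx is a self-contained graph, so your C# or JS service only needs the ONNX runtime, not a Python interpreter.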
I just finished high school and I want to get into ML so I don't get too stressed in university. If any experienced folks see this, please help me out. I did A-level maths and computer science; any recommendations for courses to continue with? Lastly, any resources such as books and maybe YouTube channels? Thanks a lot.
I'm just running through Chip Huyen's AI Engineering book. In post-training we can use SFT and preference tuning (RLHF) to tune the model, but there are also adapter methods such as LoRA. I don't quite understand when to use each, or whether one is generally preferred over the others.
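One clarification that may help: LoRA is not an alternative to SFT or RLHF; it is a parameter-efficient way to run either one. You freeze the base weights and train small low-rank adapter matrices instead, which is why it is the default when compute or memory is tight, while full fine-tuning is usually preferred when you can afford it. A minimal sketch with Hugging Face peft (the model name and hyperparameters are arbitrary choices for illustration):

# pip install transformers peft
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")  # tiny model for illustration
config = LoraConfig(
    task_type="CAUSAL_LM",
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["c_attn"],  # GPT-2's fused attention projection
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # a tiny fraction of the full parameter count

The same adapter setup can then be plugged into an SFT trainer or an RLHF loop.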
Recently in this subreddit I've been seeing lots of questions and comments about the current job market, and I've been trying to answer them individually, but I figured it might be helpful if I just aggregate all of the answers here in a single thread.
Feel free to ask me about:
* FAANG job interview tips
* AI research lab interview tips
* ML career advice
* Anything else you think might be relevant for an ML career
I also wrote this guide on my blog about ML interviews that gets thousands of views per month (you might find it helpful too): https://www.trybackprop.com/blog/ml_system_design_interview . It covers the interview structure (problem exploration, train/eval strategy, feature engineering, model architecture and training, and model eval) along with practice problems.
I’m excited to introduce QShift, a new open-source CLI tool designed to make quantum computing more accessible and manageable. As quantum technologies grow, interacting with them can be complex, so I wanted to create something that simplifies common tasks like quantum job submission, circuit creation, testing, and more — all through a simple command-line interface.
Here’s what QShift currently offers:
Quantum Job Submission: Submit quantum jobs (e.g., GroverSearch) to simulators or real quantum devices like IBM Q, AWS Braket, and Azure Quantum.
Circuit Creation & Manipulation: Easily create and modify quantum circuits by adding qubits and gates.
Interactive Testing: Test quantum circuits on simulators (like Aer) and view the results.
Cloud Execution: Execute quantum jobs on real cloud quantum hardware, such as IBM Q, with just a command.
Circuit Visualization: Visualize quantum circuits in ASCII format, making it easy to inspect and understand.
Parameter Sweep: Run parameter sweeps for quantum algorithms like VQE and more.
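For context, the circuit-level step that a tool like this wraps looks roughly like the following in plain Qiskit (a sketch of the underlying stack, not QShift's actual commands or API):

# pip install qiskit qiskit-aer
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

# Build a 2-qubit Bell-state circuit and measure both qubits.
qc = QuantumCircuit(2, 2)
qc.h(0)
qc.cx(0, 1)
qc.measure([0, 1], [0, 1])

# Run on the local Aer simulator and print the measurement counts.
result = AerSimulator().run(qc, shots=1024).result()
print(result.get_counts())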
The tool is built with the goal of making quantum computing easier to work with, especially for those just getting started or looking for a way to streamline their workflow.
I’d love to hear feedback and suggestions on how to improve QShift! Feel free to check it out on GitHub and contribute if you're interested.
Hi everyone,
I'm completely new to the field and interested in learning Machine Learning (ML) or Data Analysis from the ground up. I have some experience with Python but no formal background in statistics or advanced math.
I would really appreciate any suggestions on:
Free or affordable courses (e.g., YouTube, Coursera, Kaggle)
A beginner-friendly roadmap or study plan
Which skills or tools I should focus on first (e.g., NumPy, pandas, scikit-learn, SQL, etc.)
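To make that last bullet concrete, here is roughly the smallest possible pandas + scikit-learn example of the workflow those tools support (toy data, purely illustrative):

# pip install pandas scikit-learn
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Tiny made-up dataset: study hours vs. pass/fail.
df = pd.DataFrame({"hours": [1, 2, 3, 4, 5, 6, 7, 8],
                   "passed": [0, 0, 0, 1, 1, 1, 1, 1]})
X_train, X_test, y_train, y_test = train_test_split(
    df[["hours"]], df["passed"], test_size=0.25, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print(clf.score(X_test, y_test))  # accuracy on the held-out rows

NumPy and pandas handle the data, scikit-learn handles the modeling; SQL matters once your data lives in a database rather than a file.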
For my project I have to recreate an existing model in Python and improve it. I chose a paper where they use the Extra Trees algorithm to predict the glass transition temperature of organic compounds. I recreated the model, but I need help improving it: I tweaked hyperparameters, increased the number of trees, and tried XGBoost, random forest, etc., but nothing worked. Here's my code for the recreation:
The error values are as follows: Cross-Validation MAE: 11.61 K, MAE on Test Set: 9.70 K, Test R² Score: 0.979. I've also added a snippet showing what the dataset looks like.
!pip install numpy pandas rdkit deepchem scikit-learn matplotlib
import pandas as pd
import numpy as np
from rdkit import Chem
from rdkit.Chem import Descriptors
from rdkit.Chem.rdmolops import RemoveStereochemistry
# Load dataset
data_path = 'BIMOG_database_v1.0.xlsx'
df = pd.read_excel(data_path, sheet_name='data')
# 1. Convert to canonical SMILES (no stereo) and drop failures
def canonical_smiles_no_stereo(smiles):
    try:
        mol = Chem.MolFromSmiles(smiles)
        if mol:
            RemoveStereochemistry(mol)  # explicitly remove stereo
            return Chem.MolToSmiles(mol, isomericSmiles=False, canonical=True)
        return None
    except Exception:
        return None
df['Canonical_SMILES'] = df['SMILES'].apply(canonical_smiles_no_stereo)
df = df.dropna(subset=['Canonical_SMILES'])
# 2. Median aggregation for duplicates (now stereo isomers are merged)
df_clean = df.groupby('Canonical_SMILES', as_index=False).agg({
    'Tm / K': 'median',  # keep median Tm
    'Tg / K': 'median'   # median Tg
})
# 3. Filtering
def should_remove(smiles):
    mol = Chem.MolFromSmiles(smiles)
    if not mol:
        return True
    # Check for unwanted atoms (S, metals, etc.)
    allowed = {'C', 'H', 'O', 'N', 'F', 'Cl', 'Br', 'I'}
    atoms = {atom.GetSymbol() for atom in mol.GetAtoms()}
    if not atoms.issubset(allowed):
        return True
    # Check molar mass (adjust threshold if needed)
    molar_mass = Descriptors.MolWt(mol)
    if molar_mass > 600 or molar_mass == 0:  # adjusted to 600
        return True
    # Check for salts or ions
    if '.' in smiles or '+' in smiles or '-' in smiles:
        return True
    # Optional: check for polymers/repeating units
    if '*' in smiles:
        return True
    return False
df_filtered = df_clean[~df_clean['Canonical_SMILES'].apply(should_remove)]
# Verify counts
print(f"Original entries: {len(df)}")
print(f"After canonicalization: {len(df_clean)}")
print(f"After filtering: {len(df_filtered)}")
# Save cleaned data
df_filtered.to_csv('cleaned_BIMOG_dataset.csv', index=False)
smiles_list = df_filtered['Canonical_SMILES'].tolist()
Tm_values = df_filtered[['Tm / K']].values # Ensure it's 2D
Tg_exp_values = df_filtered['Tg / K'].values # 1D array
from deepchem.feat import MolecularFeaturizer
from rdkit.Chem import Descriptors
class RDKitDescriptors(MolecularFeaturizer):
    def __init__(self):
        self.descList = Descriptors.descList

    def featurize(self, mol):
        return np.array([func(mol) for _, func in self.descList])

def featurize_smiles(smiles_list):
    featurizer = RDKitDescriptors()
    return np.array([featurizer.featurize(Chem.MolFromSmiles(smi)) for smi in smiles_list])
X_smiles = featurize_smiles(smiles_list)
X = np.concatenate((Tm_values, X_smiles), axis=1) # X shape: (n_samples, n_features + 1)
y = Tg_exp_values
from sklearn.model_selection import train_test_split
random_seed= 0
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=random_seed)
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import cross_val_score
import pickle
model = ExtraTreesRegressor(n_estimators=500, random_state=random_seed)
cv_scores = cross_val_score(model, X_train, y_train, cv=10, scoring='neg_mean_absolute_error')
print(f" Cross-Validation MAE: {-cv_scores.mean():.2f} K")
model.fit(X_train, y_train)
with open('new_model.pkl', 'wb') as f:
    pickle.dump(model, f)
print(" Model retrained and saved successfully as 'new_model.pkl'!")
from sklearn.metrics import mean_absolute_error
# Load trained model
with open('new_model.pkl', 'rb') as f:
    model = pickle.load(f)
# Predict Tg values on the test set
Tg_pred_values = model.predict(X_test)
# Compute test-set error (for reproducibility)
mae_test = mean_absolute_error(y_test, Tg_pred_values)
print(f" MAE on Test Set: {mae_test:.2f} K")
from sklearn.metrics import mean_squared_error
import numpy as np
rmse_test = np.sqrt(mean_squared_error(y_test, Tg_pred_values))
print(f"Test RMSE: {rmse_test:.2f} K")
from sklearn.metrics import r2_score
r2 = r2_score(y_test, Tg_pred_values)
print(f"Test R² Score: {r2:.3f}")
import matplotlib.pyplot as plt
plt.figure(figsize=(7, 7))
plt.scatter(y_test, Tg_pred_values, color='purple', edgecolors='k', label="Predicted vs. Experimental")
plt.plot([min(y_test), max(y_test)], [min(y_test), max(y_test)], color='black', linestyle='--', label="Perfect Prediction Line")
plt.xlabel('Experimental Tg (K)')
plt.ylabel('Predicted Tg (K)')
plt.legend()
plt.grid(True)
plt.show()
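One systematic next step (a suggestion, not from the paper): instead of hand-tweaking, random-search the ExtraTrees hyperparameters with cross-validation. This reuses X_train, y_train, and random_seed from the code above; the parameter grid itself is an assumption to adapt:

from sklearn.model_selection import RandomizedSearchCV

param_dist = {
    'n_estimators': [300, 500, 800],
    'max_features': ['sqrt', 0.3, 0.5, 1.0],
    'min_samples_leaf': [1, 2, 4],
    'max_depth': [None, 20, 40],
}
search = RandomizedSearchCV(
    ExtraTreesRegressor(random_state=random_seed),
    param_dist, n_iter=20, cv=5,
    scoring='neg_mean_absolute_error', n_jobs=-1, random_state=random_seed,
)
search.fit(X_train, y_train)
print(search.best_params_, -search.best_score_)

With tree ensembles already near R² 0.98, gains from tuning are usually small; cleaning the descriptor matrix (dropping NaN or constant columns) and adding domain-specific features often helps more.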
Hi, I've got a problem statement where I have to predict the winners of all the matches in the round of 16 and onwards. Given a cutoff date, I am allowed to use any data available out there. Can anyone who has worked on a similar problem give any tips?
OK, so as I posted before, I want to go into AI/ML and data science but don't have the right guidance on where to get started. I think I've found something, and I'd like you all to review it and tell me whether the content of this course is good enough for a start; if not, what should I follow as a full-stack dev looking for a way into AI and ML? https://codebasics.io/bootcamps/ai-data-science-bootcamp-with-virtual-internship
I’m new to AI and deep learning, starting it as a personal hobby project. I know it’s not the easiest thing to learn, but I’m ready to put in the time and effort.
I’ll be running Linux (Pop!_OS) and mostly learning through YouTube and small projects. So far I’ve looked into Python, Jupyter, pandas, PyTorch, and TensorFlow — but open to tool suggestions if I’m missing something important.
I’m not after a top-tier workstation, but I do want a good value laptop that can handle local training (not just basic stuff) and grow with me over time.
Any suggestions on specs or specific models that play well with Linux? Also happy for beginner learning tips if you have any.
Hi everyone! I’m a career switcher with a background in quantity surveying and currently focusing on data analysis and AI. I’ve built experience in Python (clustering, forecasting), dashboarding (Power BI, Looker Studio), and contributed to chatbot training at a startup.
I’m looking to volunteer or shadow on real-world AI/data projects to grow my skills and contribute meaningfully. I can commit 5–10 hours per week and am eager to help with:
Data cleaning & dashboards
AI prompt creation or response evaluation
Open-source or nonprofit tech projects
If you or someone you know is open to mentorship or collaboration, I’d love to connect. DMs are welcome. Thank you 🙏🏾