Work on Trustworthy AI with DSPLab

Join a collaborative research environment focused on high-impact problems at the intersection of AI security, privacy, and safety.

What We Look For
  • Strong foundations in machine learning, security, systems, or programming languages.
  • Curiosity about robust and privacy-preserving AI.
  • Ability to build and evaluate systems rigorously.
  • Evidence of initiative (projects, publications, open-source, competitions, or internships).

How to Apply

Send an email with:

  • CV
  • Brief statement of research interests
  • Links to relevant code, papers, or technical work
  • (Optional) transcript and references

FAQ

Can undergraduates apply?
Yes. Motivated undergraduates with strong technical foundations are encouraged to apply.

Do I need prior publications or research experience?
No. Prior research output is helpful but not required. Demonstrated technical depth and curiosity matter most.

What topics does the lab work on?
Topics include trustworthy AI, adversarial ML, AI provenance, privacy-preserving ML, and explainable AI security.

Does DSPLab collaborate with external partners?
Yes. We welcome collaborations that align with our research agenda and impact goals.

Start the Conversation

Reach out if your interests align with DSPLab’s mission.

Connect

We're always open to discussing new projects, opportunities, or just having a chat.

Student Applications

We review applications on a rolling basis for available openings.

Contact DSPLab