
Hi, I'm Veylan 👋

I'm an independent researcher and engineer transitioning into full-time work in AI safety and alignment. I bring an interdisciplinary background in computational biology, computer science, philosophy, and systems engineering—with professional experience across data engineering, software development, and site reliability at scale.


🔭 Currently working on

  • Prototype alignment environments for testing deceptive or misgeneralizing agents
  • GaiaLogos: a research testbed for ecologically bounded agent development
  • Compression-based evaluation frameworks inspired by historiographical modeling

🌱 Currently learning

  • Red/blue team evaluation techniques (Inspect, alignment-faking games)
  • Recursive, epistemically updating reward shaping
  • Multi-objective reinforcement learning and scalable oversight design

👯 Looking to collaborate on

  • Interpretability tools, deception detection, or oversight systems
  • Evaluation frameworks that combine philosophical grounding and engineering fluency
  • Alignment projects that treat AI as embedded in an ecological and civilizational context

🤔 Open to help with

  • Understanding large-scale interpretability workflows (SAEs, tracing tools)
  • Publishing or productionizing early-stage alignment tools
  • Connecting with mentors or institutions working on practical safety architectures

💬 Ask me about

  • Deceptive alignment, narrative compression, AI persuasion risks
  • How ecology, ethics, and epistemology can be computationally encoded
  • Philosophical perspectives on reward shaping and value alignment

📫 Reach me at
Twitter · LinkedIn · Email · Substack

😄 Pronouns: he/they
Fun fact: I'm reading my way through the top 100 books of all time; I'm currently at 48/100.

Pinned repositories

  1. ai-safety-researcher-compiler

    A comprehensive, interactive curriculum for systematically developing AI safety research capabilities, from foundations to advanced technical contributions

    TypeScript

  2. Epishape

    How can epistemic signals—like belief, uncertainty, confidence, or knowledge gaps—be integrated into reinforcement learning reward structures or agent decision-making?

  3. mind-map

    Interactive goal management system with hierarchical organization, calendar tracking, and multiple visualization modes

    TypeScript