InstructFX2FX
CNMAT Lab, UC Berkeley · Research Assistant
- Working on instruction-guided audio effect transfer using language-conditioned models
- Developing methods for controllable audio transformation through natural language instructions
An engineer & a musician
CS @ UC Berkeley · ML Researcher · Pianist (24 solo concerts)
I'm a Computer Science student at UC Berkeley (on exchange from National Tsing Hua University), passionate about the intersection of machine learning and music technology. My goal is to build Audio Language Models for music. Inspired by the rapid progress of Vision-Language Models, I believe similar breakthroughs are possible in the audio domain.
Beyond engineering, I'm a classical pianist with 24 solo concerts and over 70 TV appearances across Asia. This dual perspective as both an engineer and a musician drives my research and creative work.
I'm also a 4× hackathon winner. One of my projects, ParkFlow, was integrated into Taipei's TownPass super-app, reaching over 3 million downloads.
My research focuses on Audio Language Models and music information retrieval, aiming to bring the power of modern deep learning to music understanding and generation.
CNMAT Lab, UC Berkeley · Research Assistant
AHG Music Lab, NTHU · Research Assistant
ML Pod Lead & Full-Stack Engineer
Machine Learning Intern
An open-source tool that gained widespread attention, with 126K+ views on X. It enables remote development workflows with Claude.
View on GitHub →
A VST plugin for voice timbre replacement, enabling real-time vocal transformation in music production environments.
I'm always open to discussing research, music, or collaboration opportunities.
vaclis@berkeley.edu