San Francisco-based Goodfire has released Silico, its latest tool to help researchers adjust parameters during the training of large language models (LLMs) such as those behind ChatGPT and Gemini. The aim is to give model makers greater control over these complex systems.
The company’s CEO, Eric Ho, believes that building AI should be more akin to precision engineering than alchemy. ‘We want to remove the trial and error and turn training models into precision engineering,’ he says. Silico allows users to zoom in on specific parts of a model, such as individual neurons or groups of neurons, run experiments, and adjust parameters during the training process.
One case study involved tweaking Qwen 3, an open-source model, by adjusting a neuron associated with the trolley problem; amplifying it caused the model to frame its outputs as explicit moral dilemmas. Goodfire also found that boosting certain ethical-reasoning circuits could change how the model responds to commercial risk assessments, making its answers more transparent.
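Goodfire has not published the internals of this intervention, but the underlying idea of "boosting" a neuron can be illustrated with a toy sketch: scale one hidden unit's activation during the forward pass and the output shifts along that neuron's output direction. The network weights and the choice of neuron here are entirely invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer block standing in for one MLP layer of a transformer.
W1 = rng.normal(size=(8, 4))   # input -> hidden
W2 = rng.normal(size=(4, 3))   # hidden -> output
x = rng.normal(size=8)         # a stand-in input activation

def forward(x, neuron=None, gain=1.0):
    """Run the block, optionally scaling one hidden neuron's activation."""
    h = np.maximum(W1.T @ x, 0.0)   # ReLU hidden activations
    if neuron is not None:
        h[neuron] *= gain            # amplify (or suppress) one neuron
    return W2.T @ h

baseline = forward(x)
boosted = forward(x, neuron=2, gain=3.0)  # "turn up" hypothetical neuron 2

# Tripling neuron 2 adds exactly 2 * h[2] * W2[2] to the output;
# the rest of the computation is untouched.
delta = 2.0 * np.maximum(W1.T @ x, 0.0)[2] * W2[2]
print(np.allclose(boosted - baseline, delta))
```

The point of the sketch is that the intervention is surgical: only the targeted neuron's downstream contribution changes, which is what makes this kind of edit attractive for steering behaviour without retraining everything.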
Silico also supports training-data filtering, letting developers screen out data points that might otherwise lead to unwanted behaviors. Models can likewise be retrained to avoid routing numerical tasks through neurons associated with, for instance, religious texts or code repositories.
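Again, the published details are sparse, but "avoiding" a neuron for a given task can be sketched as zero-ablation: clamp the neuron's activation to zero so it contributes nothing downstream. As before, the weights and the ablated index are hypothetical, not Goodfire's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy hidden layer; the "neuron to avoid" (index 3) is purely illustrative.
W1 = rng.normal(size=(6, 5))
W2 = rng.normal(size=(5, 2))
x = rng.normal(size=6)

ABLATE = {3}  # hypothetical neurons taken offline for numerical tasks

def forward(x, ablate=frozenset()):
    h = np.maximum(W1.T @ x, 0.0)
    for i in ablate:
        h[i] = 0.0          # zero-ablation: the neuron contributes nothing
    return W2.T @ h

pruned = forward(x, ABLATE)

# Zeroing the activation is equivalent to deleting that neuron's
# output weights entirely.
W2_cut = W2.copy()
W2_cut[3] = 0.0
print(np.allclose(pruned, W2_cut.T @ np.maximum(W1.T @ x, 0.0)))
```

In practice one would retrain or fine-tune so the remaining neurons compensate; the sketch only shows why ablating an activation and pruning the corresponding weights are the same operation.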
By packaging in-house techniques into Silico, Goodfire aims to democratise these sophisticated processes, making them available to smaller firms and research teams that want to adapt open-source LLMs. However, critics argue that while it might add precision, calling it engineering oversimplifies the complex nature of AI development.