AMD has come out with a stance in line with the OSD

With its AMD OLMo 1B release, based on the Allen Institute's truly open OLMo, AMD has released its first finetuned models and come out swinging with a stance that “open means open,” countering the ongoing OSI definitional drama.

I appreciate that they are using the definition properly while showcasing their unique commercialization opportunities: the performance of training and finetuning on AMD chipsets, as well as the software suite that helps run the models on AMD hardware.

From the AMD press release:

The AMD in-house trained series of language models (LMs), AMD OLMo, are 1 billion parameter LMs trained from scratch using trillions of tokens on a cluster of AMD Instinct™ MI250 GPUs. Aligned with the goal of advancing accessible AI research, AMD has open-sourced its complete training details and released the checkpoints for the first series of AMD OLMo models. This initiative empowers a diverse community of users, developers, and researchers to explore, utilize, and train state-of-the-art large language models. By demonstrating the capabilities of AMD Instinct™ GPUs in demanding AI workloads, AMD aims to highlight its potential for running large-scale multi-node LM training jobs with trillions of tokens to achieve improved reasoning and instruction-following performance compared to other fully open similar size LMs. In addition, the community can run such models on AMD Ryzen™ AI PCs that are equipped with Neural Processing Units (NPUs) utilizing the AMD Ryzen™ AI Software to enable easier local access without privacy concerns, efficient AI inference, and lower power consumption.
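
Since the checkpoints are public, trying the release locally is straightforward. Here is a minimal sketch using Hugging Face transformers; note that the model ID `amd/AMD-OLMo-1B-SFT` is my assumption for the instruction-tuned checkpoint, so confirm the exact name and license on AMD's Hugging Face organization page before running it.

```python
# Minimal sketch: loading an AMD OLMo checkpoint with Hugging Face transformers.
# The model ID below is an assumption; check AMD's Hugging Face org for the exact name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "amd/AMD-OLMo-1B-SFT"  # assumed checkpoint name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

prompt = "Explain in one sentence what 'open source AI' means."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```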
