30 Latest AI Open-Source Projects of the Week (2025.1.13-2025.1.19)
Discover 30 cutting-edge AI open-source models and frameworks, including Sky-T1-32B, LlamaV-o1, and more, featuring advanced reasoning, multimodal tasks, and automation tools.
I’m sharing some interesting AI open-source models and frameworks from this week (2025.1.13-2025.1.19).
There are 30 AI open-source projects in total.
Project: Sky-T1-32B-Preview
Sky-T1-32B-Preview is a 32B-parameter reasoning model developed by the NovaSky team at UC Berkeley’s Sky Computing Lab.
The model is fine-tuned from Qwen2.5-32B-Instruct on 17K training examples, and its performance on mathematical and coding tasks is comparable to o1-preview.
https://github.com/NovaSky-AI/SkyThought
https://huggingface.co/NovaSky-AI/Sky-T1-32B-Preview
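If you want to try the model locally, a minimal sketch using the standard Hugging Face transformers chat interface looks roughly like this (the prompt and generation settings are illustrative; check the model card for the recommended template and parameters, and note that a 32B model needs substantial GPU memory):

```python
# Minimal sketch: load Sky-T1-32B-Preview via the standard transformers API.
# Prompt and generation settings are illustrative, not official recommendations.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NovaSky-AI/Sky-T1-32B-Preview"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

messages = [{"role": "user", "content": "Prove that the sum of two even integers is even."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```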
Project: LlamaV-o1
LlamaV-o1 is a large multimodal model focused on visual reasoning tasks.
The project introduces VRC-Bench, a new benchmark for evaluating multimodal multi-step reasoning.
By combining multi-step curriculum learning with beam search, LlamaV-o1 significantly improves reasoning accuracy and efficiency and excels on multiple challenging multimodal benchmarks.
https://github.com/mbzuai-oryx/LlamaV-o1
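For readers unfamiliar with inference-time beam search over reasoning steps, here is a generic, illustrative sketch of the idea; it is not LlamaV-o1’s actual decoding code, and `propose_steps` / `score` are hypothetical stand-ins for a model that generates and ranks candidate steps:

```python
# Generic beam search over candidate reasoning steps (illustrative only).
from typing import Callable, List, Tuple

def beam_search_reasoning(question: str,
                          propose_steps: Callable[[str, List[str]], List[str]],
                          score: Callable[[str, List[str]], float],
                          beam_width: int = 3,
                          max_steps: int = 5) -> List[str]:
    """Keep the `beam_width` highest-scoring partial reasoning chains at each step."""
    beams: List[Tuple[float, List[str]]] = [(0.0, [])]
    for _ in range(max_steps):
        candidates: List[Tuple[float, List[str]]] = []
        for _, chain in beams:
            for step in propose_steps(question, chain):
                new_chain = chain + [step]
                candidates.append((score(question, new_chain), new_chain))
        if not candidates:
            break
        candidates.sort(key=lambda c: c[0], reverse=True)
        beams = candidates[:beam_width]
    return beams[0][1]  # best-scoring reasoning chain
```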
Project: Riona
Riona-AI-Agent is an AI-based automation tool designed to interact with platforms such as Instagram, Twitter, and GitHub.
It leverages advanced AI models to generate engaging content, automate interactions, and efficiently manage social media accounts. Users can train the agent by uploading YouTube video links, audio files, portfolios, or website links.
https://github.com/David-patrick-chuks/Riona-AI-Agent
Project: Awesome-Agent4SE
The Awesome-Agent4SE project explores the application of large language models (LLMs) in software engineering, particularly the use of agent technologies to optimize various tasks.
Based on a review of 115 relevant papers, it proposes an LLM agent framework comprising three key modules (perception, memory, and action) and summarizes the challenges and future opportunities in integrating LLMs with software engineering.
https://github.com/DeepSoftwareAnalytics/Awesome-Agent4SE
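To make the perception/memory/action framing concrete, here is a minimal illustrative sketch; all class and function names are assumptions for illustration, not from any specific surveyed system:

```python
# Illustrative perception-memory-action agent skeleton (names are hypothetical).
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Memory:
    """Stores past observations and actions the agent can recall later."""
    records: List[str] = field(default_factory=list)

    def remember(self, entry: str) -> None:
        self.records.append(entry)

    def recall(self, k: int = 5) -> List[str]:
        return self.records[-k:]

def perceive(raw_input: str) -> str:
    """Perception: turn a raw software-engineering artifact (issue, diff, log)
    into a textual observation the LLM can reason over."""
    return raw_input.strip()

def act(observation: str, memory: Memory, llm: Callable[[str], str]) -> str:
    """Action: prompt the LLM with the observation plus recalled context and
    return its proposed action (e.g., a patch or a test)."""
    context = "\n".join(memory.recall())
    response = llm(f"Context:\n{context}\n\nObservation:\n{observation}\n\nNext action:")
    memory.remember(f"obs: {observation} -> act: {response}")
    return response
```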
Project: FedCFA
FedCFA is a novel federated learning framework designed to mitigate Simpson’s paradox in model aggregation through counterfactual learning.
The method generates counterfactual samples by replacing key factors of local data with globally averaged data, aligning local data distributions with the global distribution and alleviating the impact of Simpson’s paradox.
Additionally, a Factor Decorrelation (FDC) loss is introduced to reduce correlations between features and enhance the independence of the extracted factors.
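As a rough illustration of the two ideas above (counterfactual samples built from globally averaged factors, plus a decorrelation penalty), here is a loose sketch; the function names and the exact replacement and decorrelation details are assumptions, not the authors’ implementation:

```python
# Loose, illustrative sketch of counterfactual construction and a factor
# decorrelation penalty -- NOT FedCFA's actual code.
import torch

def make_counterfactual(local_factors: torch.Tensor,
                        global_mean_factors: torch.Tensor,
                        replace_idx: torch.Tensor) -> torch.Tensor:
    """Replace selected factors of local samples with global average factors."""
    cf = local_factors.clone()
    cf[:, replace_idx] = global_mean_factors[replace_idx]
    return cf

def fdc_loss(factors: torch.Tensor) -> torch.Tensor:
    """Penalize off-diagonal entries of the factor correlation matrix
    to encourage independence between extracted factors."""
    z = factors - factors.mean(dim=0, keepdim=True)
    cov = (z.T @ z) / (factors.shape[0] - 1)
    std = torch.sqrt(torch.diag(cov) + 1e-8)
    corr = cov / (std[:, None] * std[None, :])
    off_diag = corr - torch.diag(torch.diag(corr))
    return (off_diag ** 2).sum()
```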