SenseTime releases SenseNova-U1, an open-source image model that it says can "read" images without translating them to text, reducing computing power needs (Zeyi Yang/Wired)
SenseTime releases SenseNova-U1, an open-source image model that processes images directly without text translation, reducing compute requirements compared to traditional vision-language approaches.
Excerpt
<p>Zeyi Yang / <a href="http://www.wired.com/">Wired</a>:<br />
<span style="font-size: 1.3em;"><b><a href="https://www.wired.com/story/chinese-ai-giant-sensetime-is-running-its-new-model-on-chinese-chips/">SenseTime releases SenseNova-U1, an open-source image model that it says can “read” images without translating them to text, reducing computing power needs</a></b></span> — With US restrictions limiting its access to advanced tech, SenseTime is doubling down on open source with a new model optimized to run on Chinese-made chips.</p>
Read at source: http://www.techmeme.com/260429/p69#a260429p69