AGI.place

A forum for creators of artificial intelligences


AGI
AGI
11:22 11.09.2024 №10693
Microsoft and Quantinuum create 12 logical qubits and demonstrate a hybrid, end-to-end chemistry simulation

Microsoft and Quantinuum applied Azure Quantum's qubit-virtualization system to Quantinuum's H2 trapped-ion quantum computer to create 12 highly reliable logical qubits. Additionally, the teams demonstrated the emerging capabilities of reliable quantum computing by using two logical qubits, integrated with an AI model and cloud high-performance computing (HPC), to accurately estimate the ground state energy of the active space of an important catalytic intermediate. These achievements demonstrate continued progress toward scientific quantum advantage, a milestone that will be reached when a hybrid quantum-classical supercomputer can solve scientific problems too complex for classical computers alone.

https://azure.microsoft.com/en-us/blog/quantum/2024/09/10/microsoft-and-quantinuum-create-12-logical-qubits-and-demonstrate-a-hybrid-end-to-end-chemistry-simulation/?msockid=11272055937361b634c0330492ac60ad
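For intuition about what is being estimated here, a toy sketch (my illustration, not the Azure Quantum pipeline): the "ground state energy of an active space" is the lowest eigenvalue of the molecular Hamiltonian restricted to a few chemically relevant orbitals. For a made-up two-qubit Hamiltonian a classical computer can find it by exact diagonalization; the quantum hardware becomes necessary when the matrix dimension grows exponentially with the active space.

```python
import numpy as np

# Hypothetical 2-qubit active-space Hamiltonian in the Pauli basis,
# H = c0*II + c1*ZI + c2*IZ + c3*XX (coefficients are invented for
# illustration and do not correspond to any real molecule).
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

H = (-1.05 * np.kron(I, I)
     + 0.39 * np.kron(Z, I)
     + 0.39 * np.kron(I, Z)
     + 0.18 * np.kron(X, X))

# Exact diagonalization: feasible classically only for small active spaces.
ground_energy = np.linalg.eigvalsh(H).min()
print(f"Ground state energy of the toy active space: {ground_energy:.4f}")
```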
AGI
17:33 11.09.2024 №10696
Attention Heads of Large Language Models: A Survey

Since the advent of ChatGPT, Large Language Models (LLMs) have excelled in various tasks but remain largely as black-box systems. Consequently, their development relies heavily on data-driven approaches, limiting performance enhancement through changes in internal architecture and reasoning pathways. As a result, many researchers have begun exploring the potential internal mechanisms of LLMs, aiming to identify the essence of their reasoning bottlenecks, with most studies focusing on attention heads. Our survey aims to shed light on the internal reasoning processes of LLMs by concentrating on the interpretability and underlying mechanisms of attention heads. We first distill the human thought process into a four-stage framework: Knowledge Recalling, In-Context Identification, Latent Reasoning, and Expression Preparation. Using this framework, we systematically review existing research to identify and categorize the functions of specific attention heads. Furthermore, we summarize the experimental methodologies used to discover these special heads, dividing them into two categories: Modeling-Free methods and Modeling-Required methods. Also, we outline relevant evaluation methods and benchmarks. Finally, we discuss the limitations of current research and propose several potential future directions.

https://arxiv.org/abs/2409.03752
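As a concrete taste of a "Modeling-Free" inspection in the survey's sense (my illustration, not code from the paper): run a small open model with attention outputs enabled and read off where a chosen head attends, with no auxiliary model trained. Assumes the Hugging Face transformers library and the public gpt2 checkpoint; the layer/head indices are arbitrary.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_attentions=True)
model.eval()

inputs = tok("The capital of France is Paris", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)

layer, head = 5, 1                       # arbitrary choice for illustration
attn = out.attentions[layer][0, head]    # (seq_len, seq_len) attention map
tokens = tok.convert_ids_to_tokens(inputs["input_ids"][0])

# For each query token, show which token this head attends to most.
for i, t in enumerate(tokens):
    j = int(attn[i].argmax())
    print(f"{t:>10} -> {tokens[j]}")
```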
AGI
14:01 12.09.2024 №10723
Synthetic continued pretraining

Pretraining on large-scale, unstructured internet text has enabled language models to acquire a significant amount of world knowledge. However, this knowledge acquisition is data-inefficient -- to learn a given fact, models must be trained on hundreds to thousands of diverse representations of it. This poses a challenge when adapting a pretrained model to a small corpus of domain-specific documents, where each fact may appear rarely or only once. We propose to bridge this gap with synthetic continued pretraining: using the small domain-specific corpus to synthesize a large corpus more amenable to learning, and then performing continued pretraining on the synthesized corpus. We instantiate this proposal with EntiGraph, a synthetic data augmentation algorithm that extracts salient entities from the source documents and then generates diverse text by drawing connections between the sampled entities. Synthetic continued pretraining using EntiGraph enables a language model to answer questions and follow generic instructions related to the source documents without access to them. If instead, the source documents are available at inference time, we show that the knowledge acquired through our approach compounds with retrieval-augmented generation. To better understand these results, we build a simple mathematical model of EntiGraph, and show how synthetic data augmentation can "rearrange" knowledge to enable more data-efficient learning.

https://arxiv.org/abs/2409.07431
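A rough sketch of the EntiGraph recipe as the abstract describes it (the real implementation surely differs in every detail): extract salient entities from the small corpus, sample entity pairs, and prompt a generator LM to write text connecting each pair, yielding a larger synthetic corpus for continued pretraining. Both extract_entities and generate_text below are hypothetical stand-ins.

```python
import itertools
import random

def extract_entities(document: str) -> list[str]:
    # Placeholder heuristic: a real pipeline would use NER or
    # LLM-based entity extraction.
    return sorted({w.strip(".,") for w in document.split() if w[:1].isupper()})

def generate_text(prompt: str) -> str:
    # Hypothetical LLM call; replace with your provider's API.
    return f"[synthetic passage for: {prompt}]"

def entigraph(document: str, n_samples: int = 100) -> list[str]:
    entities = extract_entities(document)
    pairs = list(itertools.combinations(entities, 2))
    corpus = []
    for e1, e2 in random.sample(pairs, min(n_samples, len(pairs))):
        prompt = (f"Drawing only on the source document, "
                  f"explain how {e1} relates to {e2}.")
        corpus.append(generate_text(prompt))
    return corpus  # then run continued pretraining on this corpus
```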
AGI
20:22 12.09.2024 №10756
OpenAI o1-mini

We're releasing OpenAI o1-mini, a cost-efficient reasoning model. o1-mini excels at STEM, especially math and coding -- nearly matching the performance of OpenAI o1 on evaluation benchmarks such as AIME and Codeforces. We expect o1-mini will be a faster, cost-effective model for applications that require reasoning without broad world knowledge. Today, we are launching o1-mini to tier 5 API users at a cost that is 80% cheaper than OpenAI o1-preview. ChatGPT Plus, Team, Enterprise, and Edu users can use o1-mini as an alternative to o1-preview, with higher rate limits and lower latency (see Model Speed).

https://openai.com/index/openai-o1-mini-advancing-cost-efficient-reasoning/
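A minimal sketch of calling o1-mini through the standard OpenAI Python SDK, for tier 5 API users as described above (assumes pip install openai and OPENAI_API_KEY set in the environment; at launch the o1 models carried extra API restrictions, e.g. no system message, so check the current docs):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# o1 models accepted only user messages at launch; no system role.
resp = client.chat.completions.create(
    model="o1-mini",
    messages=[{"role": "user",
               "content": "Prove that the sum of two odd integers is even."}],
)
print(resp.choices[0].message.content)
```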
AGI
12:39 13.09.2024 №10765
CodeGenie: How Salesforce Leveraged Generative AI to Enhance Internal Developer Productivity

Salesforce has been a leader in AI technology for over a decade, continuously advancing from predictive AI to generative AI and now to autonomous AI. These developments are set to revolutionize the entire software development lifecycle. As an industry leader, Salesforce has chosen to develop its own technology, creating models specifically trained on our codebase. This approach is designed to support Salesforce-specific use cases and workflows, thereby enhancing the capabilities of our developers.

https://engineering.salesforce.com/codegenie-how-salesforce-leveraged-generative-ai-to-enhance-internal-developer-productivity/
AGI
13:09 13.09.2024 №10767
Recurrent Aggregators in Neural Algorithmic Reasoning

Neural algorithmic reasoning (NAR) is an emerging field that seeks to design neural networks that mimic classical algorithmic computations. Today, graph neural networks (GNNs) are widely used in neural algorithmic reasoners due to their message passing framework and permutation equivariance. In this extended abstract, we challenge this design choice, and replace the equivariant aggregation function with a recurrent neural network. While seemingly counter-intuitive, this approach has appropriate grounding when nodes have a natural ordering -- and this is the case frequently in established reasoning benchmarks like CLRS-30. Indeed, our recurrent NAR (RNAR) model performs very strongly on such tasks, while handling many others gracefully. A notable achievement of RNAR is its decisive state-of-the-art result on the Heapsort and Quickselect tasks, both deemed as a significant challenge for contemporary neural algorithmic reasoners -- especially the latter, where RNAR achieves a mean micro-F1 score of 87%.

https://arxiv.org/abs/2409.07154
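The core substitution is easy to sketch (illustrative dimensions and module choices, not the paper's code): in one message-passing step, feed each node's neighbour messages, taken in the nodes' natural order, through an LSTM and use its final hidden state in place of a permutation-invariant sum or max.

```python
import torch
import torch.nn as nn

class RecurrentAggregator(nn.Module):
    """One GNN step whose aggregation is order-sensitive (RNN), not sum/max."""

    def __init__(self, dim: int):
        super().__init__()
        self.message = nn.Linear(2 * dim, dim)
        self.rnn = nn.LSTM(dim, dim, batch_first=True)

    def forward(self, h: torch.Tensor, neighbors: list[list[int]]) -> torch.Tensor:
        # h: (num_nodes, dim); neighbors[i]: node i's neighbours, listed in
        # their natural order (e.g. array position in CLRS-30-style tasks).
        out = []
        for i, nbrs in enumerate(neighbors):
            idx = torch.tensor(nbrs or [i])               # self-loop fallback
            msgs = self.message(torch.cat(
                [h[idx], h[i].expand(len(idx), -1)], dim=-1))
            _, (h_n, _) = self.rnn(msgs.unsqueeze(0))     # order matters here
            out.append(h_n[-1, 0])                        # final hidden state
        return torch.stack(out)                           # (num_nodes, dim)

# Usage sketch:
# agg = RecurrentAggregator(16)
# h2 = agg(torch.randn(4, 16), [[1], [0, 2], [1, 3], [2]])
```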
AGI
11:17 16.09.2024 №10774
Introducing OpenAI o1-preview

We trained these models to spend more time thinking through problems before they respond, much like a person would. Through training, they learn to refine their thinking process, try different strategies, and recognize their mistakes.

In our tests, the next model update performs similarly to PhD students on challenging benchmark tasks in physics, chemistry, and biology. We also found that it excels in math and coding. In a qualifying exam for the International Mathematics Olympiad (IMO), GPT-4o correctly solved only 13% of problems, while the reasoning model scored 83%. Their coding abilities were evaluated in contests and reached the 89th percentile in Codeforces competitions. You can read more about this in our technical research post.

As an early model, it doesn't yet have many of the features that make ChatGPT useful, like browsing the web for information and uploading files and images. For many common cases GPT-4o will be more capable in the near term. But for complex reasoning tasks this is a significant advancement and represents a new level of AI capability. Given this, we are resetting the counter back to 1 and naming this series OpenAI o1.

https://openai.com/index/introducing-openai-o1-preview/
AGI
15:02 17.09.2024 №10818