LIVIVO - The Search Portal for Life Sciences


Search results

Results 1–10 of 10


  1. Book ; Online: Accelerating Large Language Model Decoding with Speculative Sampling

    Chen, Charlie / Borgeaud, Sebastian / Irving, Geoffrey / Lespiau, Jean-Baptiste / Sifre, Laurent / Jumper, John

    2023  

    Abstract We present speculative sampling, an algorithm for accelerating transformer decoding by enabling the generation of multiple tokens from each transformer call. Our algorithm relies on the observation that the latency of parallel scoring of short continuations, generated by a faster but less powerful draft model, is comparable to that of sampling a single token from the larger target model. This is combined with a novel modified rejection sampling scheme which preserves the distribution of the target model within hardware numerics. We benchmark speculative sampling with Chinchilla, a 70 billion parameter language model, achieving a 2-2.5x decoding speedup in a distributed setup, without compromising the sample quality or making modifications to the model itself.
    Keywords Computer Science - Computation and Language
    Publishing date 2023-02-02
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
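
    The acceptance rule this abstract describes can be sketched in a few lines. The following is a toy illustration over a static three-token vocabulary with hypothetical draft and target distributions, not the paper's implementation (in the real algorithm the distributions are per-position model outputs): a draft token x is accepted with probability min(1, p(x)/q(x)), and on rejection a replacement is drawn from the renormalized residual max(0, p − q), which preserves the target distribution p.

```python
import random

def speculative_step(p_target, q_draft, k, rng=random.Random(0)):
    """One speculative-sampling step over a toy fixed vocabulary.

    p_target, q_draft: dicts mapping token -> probability.
    k: number of draft tokens proposed per target-model call.
    """
    tokens = list(q_draft)
    accepted = []
    for _ in range(k):
        # Draft model proposes a token cheaply.
        x = rng.choices(tokens, weights=[q_draft[t] for t in tokens])[0]
        # Accept with probability min(1, p(x) / q(x)).
        if rng.random() < min(1.0, p_target[x] / q_draft[x]):
            accepted.append(x)
        else:
            # On rejection, resample from the residual max(0, p - q), renormalized.
            resid = {t: max(0.0, p_target[t] - q_draft[t]) for t in tokens}
            z = sum(resid.values())
            accepted.append(rng.choices(tokens, weights=[resid[t] / z for t in tokens])[0])
            break  # remaining draft tokens are discarded after a rejection
    return accepted

p = {"a": 0.6, "b": 0.3, "c": 0.1}   # hypothetical target distribution
q = {"a": 0.3, "b": 0.3, "c": 0.4}   # hypothetical draft distribution
print(speculative_step(p, q, k=4))
```

    Each step emits between 1 and k tokens per target-model call, which is where the decoding speedup comes from.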

  2. Book ; Online: Human-Agent Cooperation in Bridge Bidding

    Lockhart, Edward / Burch, Neil / Bard, Nolan / Borgeaud, Sebastian / Eccles, Tom / Smaira, Lucas / Smith, Ray

    2020  

    Abstract We introduce a human-compatible reinforcement-learning approach to a cooperative game, making use of a third-party hand-coded human-compatible bot to generate initial training data and to perform initial evaluation. Our learning approach consists of imitation learning, search, and policy iteration. Our trained agents achieve a new state-of-the-art for bridge bidding in three settings: an agent playing in partnership with a copy of itself; an agent partnering a pre-existing bot; and an agent partnering a human player.
    Keywords Computer Science - Artificial Intelligence
    Publishing date 2020-11-28
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  3. Book ; Online: Unsupervised Learning of Object Keypoints for Perception and Control

    Kulkarni, Tejas / Gupta, Ankush / Ionescu, Catalin / Borgeaud, Sebastian / Reynolds, Malcolm / Zisserman, Andrew / Mnih, Volodymyr

    2019  

    Abstract The study of object representations in computer vision has primarily focused on developing representations that are useful for image classification, object detection, or semantic segmentation as downstream tasks. In this work we aim to learn object representations that are useful for control and reinforcement learning (RL). To this end, we introduce Transporter, a neural network architecture for discovering concise geometric object representations in terms of keypoints or image-space coordinates. Our method learns from raw video frames in a fully unsupervised manner, by transporting learnt image features between video frames using a keypoint bottleneck. The discovered keypoints track objects and object parts across long time-horizons more accurately than recent similar methods. Furthermore, consistent long-term tracking enables two notable results in control domains -- (1) using the keypoint co-ordinates and corresponding image features as inputs enables highly sample-efficient reinforcement learning; (2) learning to explore by controlling keypoint locations drastically reduces the search space, enabling deep exploration (leading to states unreachable through random action exploration) without any extrinsic rewards.

    Comment: supplementary videos at https://www.youtube.com/playlist?list=PL3LT3tVQRpbvGt5fgp_bKGvW23jF11Vi2
    Keywords Computer Science - Computer Vision and Pattern Recognition ; Computer Science - Machine Learning
    Publishing date 2019-06-19
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  4. Book ; Online: General-purpose, long-context autoregressive modeling with Perceiver AR

    Hawthorne, Curtis / Jaegle, Andrew / Cangea, Cătălina / Borgeaud, Sebastian / Nash, Charlie / Malinowski, Mateusz / Dieleman, Sander / Vinyals, Oriol / Botvinick, Matthew / Simon, Ian / Sheahan, Hannah / Zeghidour, Neil / Alayrac, Jean-Baptiste / Carreira, João / Engel, Jesse

    2022  

    Abstract Real-world data is high-dimensional: a book, image, or musical performance can easily contain hundreds of thousands of elements even after compression. However, the most commonly used autoregressive models, Transformers, are prohibitively expensive to scale to the number of inputs and layers needed to capture this long-range structure. We develop Perceiver AR, an autoregressive, modality-agnostic architecture which uses cross-attention to map long-range inputs to a small number of latents while also maintaining end-to-end causal masking. Perceiver AR can directly attend to over a hundred thousand tokens, enabling practical long-context density estimation without the need for hand-crafted sparsity patterns or memory mechanisms. When trained on images or music, Perceiver AR generates outputs with clear long-term coherence and structure. Our architecture also obtains state-of-the-art likelihood on long-sequence benchmarks, including 64 x 64 ImageNet images and PG-19 books.

    Comment: ICML 2022
    Keywords Computer Science - Machine Learning ; Computer Science - Artificial Intelligence ; Computer Science - Computer Vision and Pattern Recognition ; Computer Science - Sound ; Electrical Engineering and Systems Science - Audio and Speech Processing
    Subject code 006
    Publishing date 2022-02-15
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  5. Book ; Online: Emergent Abilities of Large Language Models

    Wei, Jason / Tay, Yi / Bommasani, Rishi / Raffel, Colin / Zoph, Barret / Borgeaud, Sebastian / Yogatama, Dani / Bosma, Maarten / Zhou, Denny / Metzler, Donald / Chi, Ed H. / Hashimoto, Tatsunori / Vinyals, Oriol / Liang, Percy / Dean, Jeff / Fedus, William

    2022  

    Abstract Scaling up language models has been shown to predictably improve performance and sample efficiency on a wide range of downstream tasks. This paper instead discusses an unpredictable phenomenon that we refer to as emergent abilities of large language models. We consider an ability to be emergent if it is not present in smaller models but is present in larger models. Thus, emergent abilities cannot be predicted simply by extrapolating the performance of smaller models. The existence of such emergence implies that additional scaling could further expand the range of capabilities of language models.
    Keywords Computer Science - Computation and Language
    Publishing date 2022-06-15
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  6. Book ; Online: Perceiver IO

    Jaegle, Andrew / Borgeaud, Sebastian / Alayrac, Jean-Baptiste / Doersch, Carl / Ionescu, Catalin / Ding, David / Koppula, Skanda / Zoran, Daniel / Brock, Andrew / Shelhamer, Evan / Hénaff, Olivier / Botvinick, Matthew M. / Zisserman, Andrew / Vinyals, Oriol / Carreira, João

    A General Architecture for Structured Inputs & Outputs

    2021  

    Abstract A central goal of machine learning is the development of systems that can solve many problems in as many data domains as possible. Current architectures, however, cannot be applied beyond a small set of stereotyped settings, as they bake in domain & task assumptions or scale poorly to large inputs or outputs. In this work, we propose Perceiver IO, a general-purpose architecture that handles data from arbitrary settings while scaling linearly with the size of inputs and outputs. Our model augments the Perceiver with a flexible querying mechanism that enables outputs of various sizes and semantics, doing away with the need for task-specific architecture engineering. The same architecture achieves strong results on tasks spanning natural language and visual understanding, multi-task and multi-modal reasoning, and StarCraft II. As highlights, Perceiver IO outperforms a Transformer-based BERT baseline on the GLUE language benchmark despite removing input tokenization and achieves state-of-the-art performance on Sintel optical flow estimation with no explicit mechanisms for multiscale correspondence.

    Comment: ICLR 2022 camera ready. Code: https://dpmd.ai/perceiver-code
    Keywords Computer Science - Machine Learning ; Computer Science - Computation and Language ; Computer Science - Computer Vision and Pattern Recognition ; Computer Science - Sound ; Electrical Engineering and Systems Science - Audio and Speech Processing
    Subject code 004
    Publishing date 2021-07-30
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
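
    The querying mechanism this abstract describes rests on cross-attention from a small latent array to the full input, which is what makes the cost linear in input size. A minimal sketch (projections are identity for brevity; a real block learns Q/K/V matrices, and the shapes here are illustrative):

```python
import numpy as np

def cross_attend(inputs, latents):
    """Single cross-attention read: latents (M, d) query inputs (N, d).

    Cost is O(N * M): linear in the input length N for a fixed number
    of latents M, the scaling property the abstract describes.
    """
    d = inputs.shape[-1]
    scores = latents @ inputs.T / np.sqrt(d)           # (M, N)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over inputs
    return weights @ inputs                            # (M, d)

rng = np.random.default_rng(0)
x = rng.normal(size=(100_000, 8))   # long input sequence
z = rng.normal(size=(64, 8))        # small latent array
print(cross_attend(x, z).shape)     # latent size is independent of N
```

    Output queries of arbitrary size can then read the processed latents with a second cross-attention, which is how the architecture decouples output structure from input structure.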

  7. Book ; Online: Training Compute-Optimal Large Language Models

    Hoffmann, Jordan / Borgeaud, Sebastian / Mensch, Arthur / Buchatskaya, Elena / Cai, Trevor / Rutherford, Eliza / Casas, Diego de Las / Hendricks, Lisa Anne / Welbl, Johannes / Clark, Aidan / Hennigan, Tom / Noland, Eric / Millican, Katie / Driessche, George van den / Damoc, Bogdan / Guy, Aurelia / Osindero, Simon / Simonyan, Karen / Elsen, Erich /
    Rae, Jack W. / Vinyals, Oriol / Sifre, Laurent

    2022  

    Abstract We investigate the optimal model size and number of tokens for training a transformer language model under a given compute budget. We find that current large language models are significantly undertrained, a consequence of the recent focus on scaling language models whilst keeping the amount of training data constant. By training over 400 language models ranging from 70 million to over 16 billion parameters on 5 to 500 billion tokens, we find that for compute-optimal training, the model size and the number of training tokens should be scaled equally: for every doubling of model size the number of training tokens should also be doubled. We test this hypothesis by training a predicted compute-optimal model, Chinchilla, that uses the same compute budget as Gopher but with 70B parameters and 4x more data. Chinchilla uniformly and significantly outperforms Gopher (280B), GPT-3 (175B), Jurassic-1 (178B), and Megatron-Turing NLG (530B) on a large range of downstream evaluation tasks. This also means that Chinchilla uses substantially less compute for fine-tuning and inference, greatly facilitating downstream usage. As a highlight, Chinchilla reaches a state-of-the-art average accuracy of 67.5% on the MMLU benchmark, greater than a 7% improvement over Gopher.
    Keywords Computer Science - Computation and Language ; Computer Science - Machine Learning
    Subject code 006
    Publishing date 2022-03-29
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
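
    The equal-scaling rule in this abstract can be worked through numerically. The sketch below assumes the common C ≈ 6·N·D FLOP approximation and a fixed tokens-per-parameter ratio (roughly 20 for Chinchilla: 70B parameters on about 1.4T tokens); both are assumptions added here for illustration, not figures from this abstract.

```python
def compute_optimal(c_flops, tokens_per_param=20.0):
    """Split a compute budget C ~= 6*N*D under the equal-scaling rule.

    With D = r*N fixed, C = 6*r*N**2, so N = sqrt(C / (6*r)) and both
    N and D grow as sqrt(C): quadrupling compute doubles each, which is
    the "scale params and tokens equally" prescription.
    """
    n = (c_flops / (6.0 * tokens_per_param)) ** 0.5
    return n, tokens_per_param * n

n1, d1 = compute_optimal(1e21)
n2, d2 = compute_optimal(4e21)  # 4x the compute budget...
print(f"{n2 / n1:.2f}x params, {d2 / d1:.2f}x tokens")  # ...doubles both
```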

  8. Book ; Online: Flamingo

    Alayrac, Jean-Baptiste / Donahue, Jeff / Luc, Pauline / Miech, Antoine / Barr, Iain / Hasson, Yana / Lenc, Karel / Mensch, Arthur / Millican, Katie / Reynolds, Malcolm / Ring, Roman / Rutherford, Eliza / Cabi, Serkan / Han, Tengda / Gong, Zhitao / Samangooei, Sina / Monteiro, Marianne / Menick, Jacob / Borgeaud, Sebastian /
    Brock, Andrew / Nematzadeh, Aida / Sharifzadeh, Sahand / Binkowski, Mikolaj / Barreira, Ricardo / Vinyals, Oriol / Zisserman, Andrew / Simonyan, Karen

    a Visual Language Model for Few-Shot Learning

    2022  

    Abstract Building models that can be rapidly adapted to numerous tasks using only a handful of annotated examples is an open challenge for multimodal machine learning research. We introduce Flamingo, a family of Visual Language Models (VLM) with this ability. Flamingo models include key architectural innovations to: (i) bridge powerful pretrained vision-only and language-only models, (ii) handle sequences of arbitrarily interleaved visual and textual data, and (iii) seamlessly ingest images or videos as inputs. Thanks to their flexibility, Flamingo models can be trained on large-scale multimodal web corpora containing arbitrarily interleaved text and images, which is key to endow them with in-context few-shot learning capabilities. We perform a thorough evaluation of the proposed Flamingo models, exploring and measuring their ability to rapidly adapt to a variety of image and video understanding benchmarks. These include open-ended tasks such as visual question-answering, where the model is prompted with a question which it has to answer, captioning tasks, which evaluate the ability to describe a scene or an event, and close-ended tasks such as multiple choice visual question-answering. For tasks lying anywhere on this spectrum, we demonstrate that a single Flamingo model can achieve a new state of the art for few-shot learning, simply by prompting the model with task-specific examples. On many of these benchmarks, Flamingo actually surpasses the performance of models that are fine-tuned on thousands of times more task-specific data.
    Keywords Computer Science - Computer Vision and Pattern Recognition ; Computer Science - Artificial Intelligence ; Computer Science - Machine Learning
    Subject code 004
    Publishing date 2022-04-29
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  9. Book ; Online: OpenSpiel

    Lanctot, Marc / Lockhart, Edward / Lespiau, Jean-Baptiste / Zambaldi, Vinicius / Upadhyay, Satyaki / Pérolat, Julien / Srinivasan, Sriram / Timbers, Finbarr / Tuyls, Karl / Omidshafiei, Shayegan / Hennes, Daniel / Morrill, Dustin / Muller, Paul / Ewalds, Timo / Faulkner, Ryan / Kramár, János / De Vylder, Bart / Saeta, Brennan / Bradbury, James /
    Ding, David / Borgeaud, Sebastian / Lai, Matthew / Schrittwieser, Julian / Anthony, Thomas / Hughes, Edward / Danihelka, Ivo / Ryan-Davis, Jonah

    A Framework for Reinforcement Learning in Games

    2019  

    Abstract OpenSpiel is a collection of environments and algorithms for research in general reinforcement learning and search/planning in games. OpenSpiel supports n-player (single- and multi- agent) zero-sum, cooperative and general-sum, one-shot and sequential, strictly turn-taking and simultaneous-move, perfect and imperfect information games, as well as traditional multiagent environments such as (partially- and fully- observable) grid worlds and social dilemmas. OpenSpiel also includes tools to analyze learning dynamics and other common evaluation metrics. This document serves both as an overview of the code base and an introduction to the terminology, core concepts, and algorithms across the fields of reinforcement learning, computational game theory, and search.
    Keywords Computer Science - Machine Learning ; Computer Science - Artificial Intelligence ; Computer Science - Computer Science and Game Theory ; Computer Science - Multiagent Systems
    Publishing date 2019-08-25
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  10. Book ; Online: Gemini

    Gemini Team / Anil, Rohan / Borgeaud, Sebastian / Wu, Yonghui / Alayrac, Jean-Baptiste / Yu, Jiahui / Soricut, Radu / Schalkwyk, Johan / Dai, Andrew M. / Hauth, Anja / Millican, Katie / Silver, David / Petrov, Slav / Johnson, Melvin / Antonoglou, Ioannis / Schrittwieser, Julian / Glaese, Amelia / Chen, Jilin / Pitler, Emily /
    Lillicrap, Timothy / Lazaridou, Angeliki / Firat, Orhan / Molloy, James / Isard, Michael / Barham, Paul R. / Hennigan, Tom / Lee, Benjamin / Viola, Fabio / Reynolds, Malcolm / Xu, Yuanzhong / Doherty, Ryan / Collins, Eli / Meyer, Clemens / Rutherford, Eliza / Moreira, Erica / Ayoub, Kareem / Goel, Megha / Tucker, George / Piqueras, Enrique / Krikun, Maxim / Barr, Iain / Savinov, Nikolay / Danihelka, Ivo / Roelofs, Becca / White, Anaïs / Andreassen, Anders / von Glehn, Tamara / Yagati, Lakshman / Kazemi, Mehran / Gonzalez, Lucas / Khalman, Misha / Sygnowski, Jakub / Frechette, Alexandre / Smith, Charlotte / Culp, Laura / Proleev, Lev / Luan, Yi / Chen, Xi / Lottes, James / Schucher, Nathan / Lebron, Federico / Rrustemi, Alban / Clay, Natalie / Crone, Phil / Kocisky, Tomas / Zhao, Jeffrey / Perz, Bartek / Yu, Dian / Howard, Heidi / Bloniarz, Adam / Rae, Jack W. / Lu, Han / Sifre, Laurent / Maggioni, Marcello / Alcober, Fred / Garrette, Dan / Barnes, Megan / Thakoor, Shantanu / Austin, Jacob / Barth-Maron, Gabriel / Wong, William / Joshi, Rishabh / Chaabouni, Rahma / Fatiha, Deeni / Ahuja, Arun / Liu, Ruibo / Li, Yunxuan / Cogan, Sarah / Chen, Jeremy / Jia, Chao / Gu, Chenjie / Zhang, Qiao / Grimstad, Jordan / Hartman, Ale Jakse / Chadwick, Martin / Tomar, Gaurav Singh / Garcia, Xavier / Senter, Evan / Taropa, Emanuel / Pillai, Thanumalayan Sankaranarayana / Devlin, Jacob / Laskin, Michael / Casas, Diego de Las / Valter, Dasha / Tao, Connie / Blanco, Lorenzo / Badia, Adrià Puigdomènech / Reitter, David / Chen, Mianna / Brennan, Jenny / Rivera, Clara / Brin, Sergey / Iqbal, Shariq / Surita, Gabriela / Labanowski, Jane / Rao, Abhi / Winkler, Stephanie / Parisotto, Emilio / Gu, Yiming / Olszewska, Kate / Zhang, Yujing / Addanki, Ravi / Miech, Antoine / Louis, Annie / Shafey, Laurent El / Teplyashin, Denis / Brown, Geoff / Catt, Elliot / Attaluri, Nithya / Balaguer, Jan / Xiang, Jackie / Wang, Pidong / Ashwood, Zoe / Briukhov, Anton / Webson, Albert / Ganapathy, 
Sanjay / Sanghavi, Smit / Kannan, Ajay / Chang, Ming-Wei / Stjerngren, Axel / Djolonga, Josip / Sun, Yuting / Bapna, Ankur / Aitchison, Matthew / Pejman, Pedram / Michalewski, Henryk / Yu, Tianhe / Wang, Cindy / Love, Juliette / Ahn, Junwhan / Bloxwich, Dawn / Han, Kehang / Humphreys, Peter / Sellam, Thibault / Bradbury, James / Godbole, Varun / Samangooei, Sina / Damoc, Bogdan / Kaskasoli, Alex / Arnold, Sébastien M. R. / Vasudevan, Vijay / Agrawal, Shubham / Riesa, Jason / Lepikhin, Dmitry / Tanburn, Richard / Srinivasan, Srivatsan / Lim, Hyeontaek / Hodkinson, Sarah / Shyam, Pranav / Ferret, Johan / Hand, Steven / Garg, Ankush / Paine, Tom Le / Li, Jian / Li, Yujia / Giang, Minh / Neitz, Alexander / Abbas, Zaheer / York, Sarah / Reid, Machel / Cole, Elizabeth / Chowdhery, Aakanksha / Das, Dipanjan / Rogozińska, Dominika / Nikolaev, Vitaly / Sprechmann, Pablo / Nado, Zachary / Zilka, Lukas / Prost, Flavien / He, Luheng / Monteiro, Marianne / Mishra, Gaurav / Welty, Chris / Newlan, Josh / Jia, Dawei / Allamanis, Miltiadis / Hu, Clara Huiyi / de Liedekerke, Raoul / Gilmer, Justin / Saroufim, Carl / Rijhwani, Shruti / Hou, Shaobo / Shrivastava, Disha / Baddepudi, Anirudh / Goldin, Alex / Ozturel, Adnan / Cassirer, Albin / Xu, Yunhan / Sohn, Daniel / Sachan, Devendra / Amplayo, Reinald Kim / Swanson, Craig / Petrova, Dessie / Narayan, Shashi / Guez, Arthur / Brahma, Siddhartha / Landon, Jessica / Patel, Miteyan / Zhao, Ruizhe / Villela, Kevin / Wang, Luyu / Jia, Wenhao / Rahtz, Matthew / Giménez, Mai / Yeung, Legg / Lin, Hanzhao / Keeling, James / Georgiev, Petko / Mincu, Diana / Wu, Boxi / Haykal, Salem / Saputro, Rachel / Vodrahalli, Kiran / Qin, James / Cankara, Zeynep / Sharma, Abhanshu / Fernando, Nick / Hawkins, Will / Neyshabur, Behnam / Kim, Solomon / Hutter, Adrian / Agrawal, Priyanka / Castro-Ros, Alex / Driessche, George van den / Wang, Tao / Yang, Fan / Chang, Shuo-yiin / Komarek, Paul / McIlroy, Ross / Lučić, Mario / Zhang, Guodong / Farhan, Wael / 
Sharman, Michael / Natsev, Paul / Michel, Paul / Cheng, Yong / Bansal, Yamini / Qiao, Siyuan / Cao, Kris / Shakeri, Siamak / Butterfield, Christina / Chung, Justin / Rubenstein, Paul Kishan / Agrawal, Shivani / Mensch, Arthur / Soparkar, Kedar / Lenc, Karel / Chung, Timothy / Pope, Aedan / Maggiore, Loren / Kay, Jackie / Jhakra, Priya / Wang, Shibo / Maynez, Joshua / Phuong, Mary / Tobin, Taylor / Tacchetti, Andrea / Trebacz, Maja / Robinson, Kevin / Katariya, Yash / Riedel, Sebastian / Bailey, Paige / Xiao, Kefan / Ghelani, Nimesh / Aroyo, Lora / Slone, Ambrose / Houlsby, Neil / Xiong, Xuehan / Yang, Zhen / Gribovskaya, Elena / Adler, Jonas / Wirth, Mateo / Lee, Lisa / Li, Music / Kagohara, Thais / Pavagadhi, Jay / Bridgers, Sophie / Bortsova, Anna / Ghemawat, Sanjay / Ahmed, Zafarali / Liu, Tianqi / Powell, Richard / Bolina, Vijay / Iinuma, Mariko / Zablotskaia, Polina / Besley, James / Chung, Da-Woon / Dozat, Timothy / Comanescu, Ramona / Si, Xiance / Greer, Jeremy / Su, Guolong / Polacek, Martin / Kaufman, Raphaël Lopez / Tokumine, Simon / Hu, Hexiang / Buchatskaya, Elena / Miao, Yingjie / Elhawaty, Mohamed / Siddhant, Aditya / Tomasev, Nenad / Xing, Jinwei / Greer, Christina / Miller, Helen / Ashraf, Shereen / Roy, Aurko / Zhang, Zizhao / Ma, Ada / Filos, Angelos / Besta, Milos / Blevins, Rory / Klimenko, Ted / Yeh, Chih-Kuan / Changpinyo, Soravit / Mu, Jiaqi / Chang, Oscar / Pajarskas, Mantas / Muir, Carrie / Cohen, Vered / Lan, Charline Le / Haridasan, Krishna / Marathe, Amit / Hansen, Steven / Douglas, Sholto / Samuel, Rajkumar / Wang, Mingqiu / Austin, Sophia / Lan, Chang / Jiang, Jiepu / Chiu, Justin / Lorenzo, Jaime Alonso / Sjösund, Lars Lowe / Cevey, Sébastien / Gleicher, Zach / Avrahami, Thi / Boral, Anudhyan / Srinivasan, Hansa / Selo, Vittorio / May, Rhys / Aisopos, Konstantinos / Hussenot, Léonard / Soares, Livio Baldini / Baumli, Kate / Chang, Michael B. 
/ Recasens, Adrià / Caine, Ben / Pritzel, Alexander / Pavetic, Filip / Pardo, Fabio / Gergely, Anita / Frye, Justin / Ramasesh, Vinay / Horgan, Dan / Badola, Kartikeya / Kassner, Nora / Roy, Subhrajit / Dyer, Ethan / Campos, Víctor / Tomala, Alex / Tang, Yunhao / Badawy, Dalia El / White, Elspeth / Mustafa, Basil / Lang, Oran / Jindal, Abhishek / Vikram, Sharad / Gong, Zhitao / Caelles, Sergi / Hemsley, Ross / Thornton, Gregory / Feng, Fangxiaoyu / Stokowiec, Wojciech / Zheng, Ce / Thacker, Phoebe / Ünlü, Çağlar / Zhang, Zhishuai / Saleh, Mohammad / Svensson, James / Bileschi, Max / Patil, Piyush / Anand, Ankesh / Ring, Roman / Tsihlas, Katerina / Vezer, Arpi / Selvi, Marco / Shevlane, Toby / Rodriguez, Mikel / Kwiatkowski, Tom / Daruki, Samira / Rong, Keran / Dafoe, Allan / FitzGerald, Nicholas / Gu-Lemberg, Keren / Khan, Mina / Hendricks, Lisa Anne / Pellat, Marie / Feinberg, Vladimir / Cobon-Kerr, James / Sainath, Tara / Rauh, Maribeth / Hashemi, Sayed Hadi / Ives, Richard / Hasson, Yana / Li, YaGuang / Noland, Eric / Cao, Yuan / Byrd, Nathan / Hou, Le / Wang, Qingze / Sottiaux, Thibault / Paganini, Michela / Lespiau, Jean-Baptiste / Moufarek, Alexandre / Hassan, Samer / Shivakumar, Kaushik / van Amersfoort, Joost / Mandhane, Amol / Joshi, Pratik / Goyal, Anirudh / Tung, Matthew / Brock, Andrew / Sheahan, Hannah / Misra, Vedant / Li, Cheng / Rakićević, Nemanja / Dehghani, Mostafa / Liu, Fangyu / Mittal, Sid / Oh, Junhyuk / Noury, Seb / Sezener, Eren / Huot, Fantine / Lamm, Matthew / De Cao, Nicola / Chen, Charlie / Elsayed, Gamaleldin / Chi, Ed / Mahdieh, Mahdis / Tenney, Ian / Hua, Nan / Petrychenko, Ivan / Kane, Patrick / Scandinaro, Dylan / Jain, Rishub / Uesato, Jonathan / Datta, Romina / Sadovsky, Adam / Bunyan, Oskar / Rabiej, Dominik / Wu, Shimu / Zhang, John / Vasudevan, Gautam / Leurent, Edouard / Alnahlawi, Mahmoud / Georgescu, Ionut / Wei, Nan / Zheng, Ivy / Chan, Betty / Rabinovitch, Pam G / Stanczyk, Piotr / Zhang, Ye / Steiner, David / Naskar, 
Subhajit / Azzam, Michael / Johnson, Matthew / Paszke, Adam / Chiu, Chung-Cheng / Elias, Jaume Sanchez / Mohiuddin, Afroz / Muhammad, Faizan / Miao, Jin / Lee, Andrew / Vieillard, Nino / Potluri, Sahitya / Park, Jane / Davoodi, Elnaz / Zhang, Jiageng / Stanway, Jeff / Garmon, Drew / Karmarkar, Abhijit / Dong, Zhe / Lee, Jong / Kumar, Aviral / Zhou, Luowei / Evens, Jonathan / Isaac, William / Chen, Zhe / Jia, Johnson / Levskaya, Anselm / Zhu, Zhenkai / Gorgolewski, Chris / Grabowski, Peter / Mao, Yu / Magni, Alberto / Yao, Kaisheng / Snaider, Javier / Casagrande, Norman / Suganthan, Paul / Palmer, Evan / Irving, Geoffrey / Loper, Edward / Faruqui, Manaal / Arkatkar, Isha / Chen, Nanxin / Shafran, Izhak / Fink, Michael / Castaño, Alfonso / Giannoumis, Irene / Kim, Wooyeol / Rybiński, Mikołaj / Sreevatsa, Ashwin / Prendki, Jennifer / Soergel, David / Goedeckemeyer, Adrian / Gierke, Willi / Jafari, Mohsen / Gaba, Meenu / Wiesner, Jeremy / Wright, Diana Gage / Wei, Yawen / Vashisht, Harsha / Kulizhskaya, Yana / Hoover, Jay / Le, Maigo / Li, Lu / Iwuanyanwu, Chimezie / Liu, Lu / Ramirez, Kevin / Khorlin, Andrey / Cui, Albert / LIN, Tian / Georgiev, Marin / Wu, Marcus / Aguilar, Ricardo / Pallo, Keith / Chakladar, Abhishek / Repina, Alena / Wu, Xihui / van der Weide, Tom / Ponnapalli, Priya / Kaplan, Caroline / Simsa, Jiri / Li, Shuangfeng / Dousse, Olivier / Piper, Jeff / Ie, Nathan / Lui, Minnie / Pasumarthi, Rama / Lintz, Nathan / Vijayakumar, Anitha / Thiet, Lam Nguyen / Andor, Daniel / Valenzuela, Pedro / Paduraru, Cosmin / Peng, Daiyi / Lee, Katherine / Zhang, Shuyuan / Greene, Somer / Nguyen, Duc Dung / Kurylowicz, Paula / Velury, Sarmishta / Krause, Sebastian / Hardin, Cassidy / Dixon, Lucas / Janzer, Lili / Choo, Kiam / Feng, Ziqiang / Zhang, Biao / Singhal, Achintya / Latkar, Tejasi / Zhang, Mingyang / Le, Quoc / Abellan, Elena Allica / Du, Dayou / McKinnon, Dan / Antropova, Natasha / Bolukbasi, Tolga / Keller, Orgad / Reid, David / Finchelstein, Daniel / Raad, 
Maria Abi / Crocker, Remi / Hawkins, Peter / Dadashi, Robert / Gaffney, Colin / Lall, Sid / Franko, Ken / Filonov, Egor / Bulanova, Anna / Leblond, Rémi / Yadav, Vikas / Chung, Shirley / Askham, Harry / Cobo, Luis C. / Xu, Kelvin / Fischer, Felix / Xu, Jun / Sorokin, Christina / Alberti, Chris / Lin, Chu-Cheng / Evans, Colin / Zhou, Hao / Dimitriev, Alek / Forbes, Hannah / Banarse, Dylan / Tung, Zora / Liu, Jeremiah / Omernick, Mark / Bishop, Colton / Kumar, Chintu / Sterneck, Rachel / Foley, Ryan / Jain, Rohan / Mishra, Swaroop / Xia, Jiawei / Bos, Taylor / Cideron, Geoffrey / Amid, Ehsan / Piccinno, Francesco / Wang, Xingyu / Banzal, Praseem / Gurita, Petru / Noga, Hila / Shah, Premal / Mankowitz, Daniel J. / Polozov, Alex / Kushman, Nate / Krakovna, Victoria / Brown, Sasha / Bateni, MohammadHossein / Duan, Dennis / Firoiu, Vlad / Thotakuri, Meghana / Natan, Tom / Mohananey, Anhad / Geist, Matthieu / Mudgal, Sidharth / Girgin, Sertan / Li, Hui / Ye, Jiayu / Roval, Ofir / Tojo, Reiko / Kwong, Michael / Lee-Thorp, James / Yew, Christopher / Yuan, Quan / Bagri, Sumit / Sinopalnikov, Danila / Ramos, Sabela / Mellor, John / Sharma, Abhishek / Severyn, Aliaksei / Lai, Jonathan / Wu, Kathy / Cheng, Heng-Tze / Miller, David / Sonnerat, Nicolas / Vnukov, Denis / Greig, Rory / Beattie, Jennifer / Caveness, Emily / Bai, Libin / Eisenschlos, Julian / Korchemniy, Alex / Tsai, Tomy / Jasarevic, Mimi / Kong, Weize / Dao, Phuong / Zheng, Zeyu / Liu, Frederick / Zhu, Rui / Geller, Mark / Teh, Tian Huey / Sanmiya, Jason / Gladchenko, Evgeny / Trdin, Nejc / Sozanschi, Andrei / Toyama, Daniel / Rosen, Evan / Tavakkol, Sasan / Xue, Linting / Elkind, Chen / Woodman, Oliver / Carpenter, John / Papamakarios, George / Kemp, Rupert / Kafle, Sushant / Grunina, Tanya / Sinha, Rishika / Talbert, Alice / Goyal, Abhimanyu / Wu, Diane / Owusu-Afriyie, Denese / Du, Cosmo / Thornton, Chloe / Pont-Tuset, Jordi / Narayana, Pradyumna / Li, Jing / Fatehi, Sabaer / Wieting, John / Ajmeri, Omar / Uria, 
Benigno / Zhu, Tao / Ko, Yeongil / Knight, Laura / Héliou, Amélie / Niu, Ning / Gu, Shane / Pang, Chenxi / Tran, Dustin / Li, Yeqing / Levine, Nir / Stolovich, Ariel / Kalb, Norbert / Santamaria-Fernandez, Rebeca / Goenka, Sonam / Yustalim, Wenny / Strudel, Robin / Elqursh, Ali / Lakshminarayanan, Balaji / Deck, Charlie / Upadhyay, Shyam / Lee, Hyo / Dusenberry, Mike / Li, Zonglin / Wang, Xuezhi / Levin, Kyle / Hoffmann, Raphael / Holtmann-Rice, Dan / Bachem, Olivier / Yue, Summer / Arora, Sho / Malmi, Eric / Mirylenka, Daniil / Tan, Qijun / Koh, Christy / Yeganeh, Soheil Hassas / Põder, Siim / Zheng, Steven / Pongetti, Francesco / Tariq, Mukarram / Sun, Yanhua / Ionita, Lucian / Seyedhosseini, Mojtaba / Tafti, Pouya / Kotikalapudi, Ragha / Liu, Zhiyu / Gulati, Anmol / Liu, Jasmine / Ye, Xinyu / Chrzaszcz, Bart / Wang, Lily / Sethi, Nikhil / Li, Tianrun / Brown, Ben / Singh, Shreya / Fan, Wei / Parisi, Aaron / Stanton, Joe / Kuang, Chenkai / Koverkathu, Vinod / Choquette-Choo, Christopher A. 
/ Li, Yunjie / Lu, TJ / Ittycheriah, Abe / Shroff, Prakash / Sun, Pei / Varadarajan, Mani / Bahargam, Sanaz / Willoughby, Rob / Gaddy, David / Dasgupta, Ishita / Desjardins, Guillaume / Cornero, Marco / Robenek, Brona / Mittal, Bhavishya / Albrecht, Ben / Shenoy, Ashish / Moiseev, Fedor / Jacobsson, Henrik / Ghaffarkhah, Alireza / Rivière, Morgane / Walton, Alanna / Crepy, Clément / Parrish, Alicia / Liu, Yuan / Zhou, Zongwei / Farabet, Clement / Radebaugh, Carey / Srinivasan, Praveen / van der Salm, Claudia / Fidjeland, Andreas / Scellato, Salvatore / Latorre-Chimoto, Eri / Klimczak-Plucińska, Hanna / Bridson, David / de Cesare, Dario / Hudson, Tom / Mendolicchio, Piermaria / Walker, Lexi / Morris, Alex / Penchev, Ivo / Mauger, Matthew / Guseynov, Alexey / Reid, Alison / Odoom, Seth / Loher, Lucia / Cotruta, Victor / Yenugula, Madhavi / Grewe, Dominik / Petrushkina, Anastasia / Duerig, Tom / Sanchez, Antonio / Yadlowsky, Steve / Shen, Amy / Globerson, Amir / Kurzrok, Adam / Webb, Lynette / Dua, Sahil / Li, Dong / Lahoti, Preethi / Bhupatiraju, Surya / Hurt, Dan / Qureshi, Haroon / Agarwal, Ananth / Shani, Tomer / Eyal, Matan / Khare, Anuj / Belle, Shreyas Rammohan / Wang, Lei / Tekur, Chetan / Kale, Mihir Sanjay / Wei, Jinliang / Sang, Ruoxin / Saeta, Brennan / Liechty, Tyler / Sun, Yi / Zhao, Yao / Lee, Stephan / Nayak, Pandu / Fritz, Doug / Vuyyuru, Manish Reddy / Aslanides, John / Vyas, Nidhi / Wicke, Martin / Ma, Xiao / Bilal, Taylan / Eltyshev, Evgenii / Balle, Daniel / Martin, Nina / Cate, Hardie / Manyika, James / Amiri, Keyvan / Kim, Yelin / Xiong, Xi / Kang, Kai / Luisier, Florian / Tripuraneni, Nilesh / Madras, David / Guo, Mandy / Waters, Austin / Wang, Oliver / Ainslie, Joshua / Baldridge, Jason / Zhang, Han / Pruthi, Garima / Bauer, Jakob / Yang, Feng / Mansour, Riham / Gelman, Jason / Xu, Yang / Polovets, George / Liu, Ji / Cai, Honglong / Chen, Warren / Sheng, XiangHai / Xue, Emily / Ozair, Sherjil / Yu, Adams / Angermueller, Christof / Li, Xiaowei 
/ Wang, Weiren / Wiesinger, Julia / Koukoumidis, Emmanouil / Tian, Yuan / Iyer, Anand / Gurumurthy, Madhu / Goldenson, Mark / Shah, Parashar / Blake, MK / Yu, Hongkun / Urbanowicz, Anthony / Palomaki, Jennimaria / Fernando, Chrisantha / Brooks, Kevin / Durden, Ken / Mehta, Harsh / Momchev, Nikola / Rahimtoroghi, Elahe / Georgaki, Maria / Raul, Amit / Ruder, Sebastian / Redshaw, Morgan / Lee, Jinhyuk / Jalan, Komal / Li, Dinghua / Perng, Ginger / Hechtman, Blake / Schuh, Parker / Nasr, Milad / Chen, Mia / Milan, Kieran / Mikulik, Vladimir / Strohman, Trevor / Franco, Juliana / Green, Tim / Hassabis, Demis / Kavukcuoglu, Koray / Dean, Jeffrey / Vinyals, Oriol

    A Family of Highly Capable Multimodal Models

    2023  

    Abstract This report introduces a new family of multimodal models, Gemini, that exhibit remarkable capabilities across image, audio, video, and text understanding. The Gemini family consists of Ultra, Pro, and Nano sizes, suitable for applications ranging from complex reasoning tasks to on-device memory-constrained use-cases. Evaluation on a broad range of benchmarks shows that our most-capable Gemini Ultra model advances the state of the art in 30 of 32 of these benchmarks - notably being the first model to achieve human-expert performance on the well-studied exam benchmark MMLU, and improving the state of the art in every one of the 20 multimodal benchmarks we examined. We believe that the new capabilities of Gemini models in cross-modal reasoning and language understanding will enable a wide variety of use cases and we discuss our approach toward deploying them responsibly to users.
    Keywords Computer Science - Computation and Language ; Computer Science - Artificial Intelligence ; Computer Science - Computer Vision and Pattern Recognition
    Subject code 004
    Publishing date 2023-12-18
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
