LIVIVO - The Search Portal for Life Sciences

Search results

Results 1 - 9 of 9

  1. Book ; Online: What is the best recipe for character-level encoder-only modelling?

    Cao, Kris

    2023  

    Abstract This paper aims to benchmark recent progress in language understanding models that output contextualised representations at the character level. Many such modelling architectures and methods to train those architectures have been proposed, but it is currently unclear what the relative contributions of the architecture vs. the pretraining objective are to final model performance. We explore the design space of such models, comparing architectural innovations and a variety of different pretraining objectives on a suite of evaluation tasks with a fixed training procedure in order to find the currently optimal way to build and train character-level BERT-like models. We find that our best performing character-level model exceeds the performance of a token-based model trained with the same settings on the same data, suggesting that character-level models are ready for more widespread adoption. Unfortunately, the best method to train character-level models still relies on a subword-level tokeniser during pretraining, and final model performance is highly dependent on tokeniser quality. We believe our results demonstrate the readiness of character-level models for multilingual language representation, and encourage NLP practitioners to try them as drop-in replacements for token-based models.

    Comment: accepted at ACL 2023
    Keywords Computer Science - Computation and Language
    Subject code 006
    Publishing date 2023-05-09
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
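
    A minimal sketch of the contrast the abstract draws, assumed for illustration and not the paper's code: a character-level model needs no trained subword vocabulary, since each character can map directly to an ID.

        # Minimal sketch (assumed, not the paper's code): character-level input
        # needs no trained vocabulary, unlike a subword tokeniser.
        def char_encode(text: str, max_len: int = 512) -> list[int]:
            # Unicode code points serve as token IDs; a real model would cap or
            # hash them into a fixed-size embedding table.
            return [ord(c) for c in text[:max_len]]

        print(char_encode("tokenisation"))  # one ID per character, never out-of-vocabulary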

  2. Book ; Online: You should evaluate your language model on marginal likelihood over tokenisations

    Cao, Kris / Rimell, Laura

    2021  

    Abstract Neural language models typically tokenise input text into sub-word units to achieve an open vocabulary. The standard approach is to use a single canonical tokenisation at both train and test time. We suggest that this approach is unsatisfactory and may bottleneck our evaluation of language model performance. Using only the one-best tokenisation ignores tokeniser uncertainty over alternative tokenisations, which may hurt model out-of-domain performance. In this paper, we argue that instead, language models should be evaluated on their marginal likelihood over tokenisations. We compare different estimators for the marginal likelihood based on sampling, and show that it is feasible to estimate the marginal likelihood with a manageable number of samples. We then evaluate pretrained English and German language models on both the one-best-tokenisation and marginal perplexities, and show that the marginal perplexity can be significantly better than the one best, especially on out-of-domain data. We link this difference in perplexity to the tokeniser uncertainty as measured by tokeniser entropy. We discuss some implications of our results for language model training and evaluation, particularly with regard to tokenisation robustness.

    Comment: accepted at EMNLP 2021
    Keywords Computer Science - Computation and Language
    Publishing date 2021-09-06
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
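
    A sketch of the sampling estimator the abstract describes, under assumed interfaces (sample_tokenisation and log_p_joint are illustrative callables, not the paper's API): importance-sample tokenisations from a proposal q and average the weighted joint scores in log space.

        import math

        def marginal_log_likelihood(sentence, sample_tokenisation, log_p_joint,
                                    n_samples=32):
            # Importance-sampling estimate of
            #   log p(sentence) = log E_{t ~ q}[ p(sentence, t) / q(t) ].
            # sample_tokenisation(sentence) -> (tokenisation, log q(t | sentence))
            # log_p_joint(sentence, t)      -> log p(sentence, t) under the LM
            log_w = []
            for _ in range(n_samples):
                tok, log_q = sample_tokenisation(sentence)
                log_w.append(log_p_joint(sentence, tok) - log_q)
            m = max(log_w)  # log-sum-exp for numerical stability
            return m + math.log(sum(math.exp(w - m) for w in log_w)) \
                     - math.log(n_samples)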

  3. Book ; Online: Modelling Latent Skills for Multitask Language Generation

    Cao, Kris / Yogatama, Dani

    2020  

    Abstract We present a generative model for multitask conditional language generation. Our guiding hypothesis is that a shared set of latent skills underlies many disparate language generation tasks, and that explicitly modelling these skills in a task embedding space can help with both positive transfer across tasks and with efficient adaptation to new tasks. We instantiate this task embedding space as a latent variable in a latent variable sequence-to-sequence model. We evaluate this hypothesis by curating a series of monolingual text-to-text language generation datasets - covering a broad range of tasks and domains - and comparing the performance of models both in the multitask and few-shot regimes. We show that our latent task variable model outperforms other sequence-to-sequence baselines on average across tasks in the multitask setting. In the few-shot learning setting on an unseen test dataset (i.e., a new task), we demonstrate that model adaptation based on inference in the latent task space is more robust than standard fine-tuning based parameter adaptation and performs comparably in terms of overall performance. Finally, we examine the latent task representations learnt by our model and show that they cluster tasks in a natural way.
    Keywords Computer Science - Computation and Language
    Subject code 004 ; 006
    Publishing date 2020-02-21
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
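
    A minimal sketch of the latent task variable idea, with illustrative names only: each task owns a Gaussian in a shared embedding space, generation conditions on a reparameterised sample from it, and adapting to a new task means inferring that Gaussian rather than fine-tuning all model weights.

        import numpy as np

        rng = np.random.default_rng(0)

        def sample_task_embedding(mu: np.ndarray, log_var: np.ndarray) -> np.ndarray:
            # Reparameterisation trick: z = mu + sigma * eps with eps ~ N(0, I).
            eps = rng.standard_normal(mu.shape)
            return mu + np.exp(0.5 * log_var) * eps

        z = sample_task_embedding(np.zeros(16), np.zeros(16))  # one task sample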

  4. Book ; Online: Control Prefixes for Parameter-Efficient Text Generation

    Clive, Jordan / Cao, Kris / Rei, Marek

    2021  

    Abstract Prefix-tuning is a powerful lightweight technique for adapting a large pre-trained language model to a downstream application. However, it uses the same dataset-level tuned prompt for all examples in the dataset. We extend this idea and propose a dynamic method, Control Prefixes, which allows for the inclusion of conditional input-dependent information, combining the benefits of prompt tuning and controlled generation. The method incorporates attribute-level learnable representations into different layers of a pre-trained transformer, allowing for the generated text to be guided in a particular direction. We provide a systematic evaluation of the technique and apply it to five datasets from the GEM benchmark for natural language generation (NLG). Although the aim is to develop a parameter-efficient model, we show Control Prefixes can even outperform full fine-tuning methods. We present state-of-the-art results on several data-to-text datasets, including WebNLG.
    Keywords Computer Science - Computation and Language ; Computer Science - Artificial Intelligence ; Computer Science - Machine Learning
    Subject code 004
    Publishing date 2021-10-15
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
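
    The core mechanism can be sketched as follows (shapes and names are illustrative assumptions, not the paper's code): attribute-specific learnable vectors are prepended to each attention layer's keys and values, so the frozen transformer attends to them like extra virtual tokens.

        import numpy as np

        def apply_control_prefix(keys, values, prefix_k, prefix_v):
            # keys/values: (seq_len, dim) at one attention layer;
            # prefix_k/prefix_v: (prefix_len, dim), learned per attribute.
            # Only the prefixes train; the base model stays frozen.
            return (np.concatenate([prefix_k, keys], axis=0),
                    np.concatenate([prefix_v, values], axis=0))

        k = v = np.zeros((10, 64))
        pk = pv = np.ones((4, 64))
        new_k, new_v = apply_control_prefix(k, v, pk, pv)  # both (14, 64)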

  5. Book ; Online: Factorising AMR generation through syntax

    Cao, Kris / Clark, Stephen

    2018  

    Abstract Generating from Abstract Meaning Representation (AMR) is an underspecified problem, as many syntactic decisions are not constrained by the semantic graph. To explicitly account for this underspecification, we break down generating from AMR into two steps: first generate a syntactic structure, and then generate the surface form. We show that decomposing the generation process this way leads to state-of-the-art single model performance generating from AMR without additional unlabelled data. We also demonstrate that we can generate meaning-preserving syntactic paraphrases of the same AMR graph, as judged by humans.

    Comment: Camera ready; accepted at NAACL-HLT 2019
    Keywords Computer Science - Computation and Language
    Publishing date 2018-04-20
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
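
    The paper factorises generation into p(syntax | AMR) followed by p(text | AMR, syntax); a skeletal pipeline with assumed model interfaces, not the authors' code:

        def generate_from_amr(amr_graph, syntax_model, surface_model):
            # Step 1: predict a syntactic structure the semantics leave open.
            syntax = syntax_model(amr_graph)
            # Step 2: realise the surface string given both graph and syntax.
            return surface_model(amr_graph, syntax)

    Sampling different syntax values in step 1 is what yields the meaning-preserving paraphrases the abstract mentions.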

  6. Article ; Online: Multiagent off-screen behavior prediction in football.

    Omidshafiei, Shayegan / Hennes, Daniel / Garnelo, Marta / Wang, Zhe / Recasens, Adria / Tarassov, Eugene / Yang, Yi / Elie, Romuald / Connor, Jerome T / Muller, Paul / Mackraz, Natalie / Cao, Kris / Moreno, Pol / Sprechmann, Pablo / Hassabis, Demis / Graham, Ian / Spearman, William / Heess, Nicolas / Tuyls, Karl

    Scientific reports

    2022  Volume 12, Issue 1, Page(s) 8638

    Abstract In multiagent worlds, several decision-making individuals interact while adhering to the dynamics constraints imposed by the environment. These interactions, combined with the potential stochasticity of the agents' dynamic behaviors, make such systems complex and interesting to study from a decision-making perspective. Significant research has been conducted on learning models for forward-direction estimation of agent behaviors, for example, pedestrian predictions used for collision-avoidance in self-driving cars. In many settings, only sporadic observations of agents may be available in a given trajectory sequence. In football, subsets of players may come in and out of view of broadcast video footage, while unobserved players continue to interact off-screen. In this paper, we study the problem of multiagent time-series imputation in the context of human football play, where available past and future observations of subsets of agents are used to estimate missing observations for other agents. Our approach, called the Graph Imputer, uses past and future information in combination with graph networks and variational autoencoders to enable learning of a distribution of imputed trajectories. We demonstrate our approach on multiagent settings involving players that are partially-observable, using the Graph Imputer to predict the behaviors of off-screen players. To quantitatively evaluate the approach, we conduct experiments on football matches with ground truth trajectory data, using a camera module to simulate the off-screen player state estimation setting. We subsequently use our approach for downstream football analytics under partial observability using the well-established framework of pitch control, which traditionally relies on fully observed data. We illustrate that our method outperforms several state-of-the-art approaches, including those hand-crafted for football, across all considered metrics.
    MeSH term(s) Football ; Humans ; Learning ; Soccer
    Language English
    Publishing date 2022-05-23
    Publishing country England
    Document type Journal Article
    ZDB-ID 2615211-3
    ISSN 2045-2322
    ISSN (online) 2045-2322
    DOI 10.1038/s41598-022-12547-0
    Database MEDical Literature Analysis and Retrieval System OnLINE
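
    The imputation setting can be illustrated with a deliberately naive stand-in, not the Graph Imputer itself: fill a player's off-screen positions by interpolating linearly between the nearest observed past and future frames, using information from both directions just as the paper's setup does.

        def interpolate_track(obs):
            # obs: list of (x, y) tuples, with None for off-screen frames.
            # Gaps bounded by observations on both sides are filled linearly;
            # leading/trailing gaps stay None.
            filled = list(obs)
            known = [i for i, p in enumerate(obs) if p is not None]
            for a, b in zip(known, known[1:]):
                for i in range(a + 1, b):
                    w = (i - a) / (b - a)
                    filled[i] = tuple((1 - w) * pa + w * pb
                                      for pa, pb in zip(obs[a], obs[b]))
            return filled

        print(interpolate_track([(0.0, 0.0), None, None, (3.0, 3.0)]))
        # [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0), (3.0, 3.0)]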

  7. Book ; Online: Mind the Gap

    Lazaridou, Angeliki / Kuncoro, Adhiguna / Gribovskaya, Elena / Agrawal, Devang / Liska, Adam / Terzi, Tayfun / Gimenez, Mai / d'Autume, Cyprien de Masson / Kocisky, Tomas / Ruder, Sebastian / Yogatama, Dani / Cao, Kris / Young, Susannah / Blunsom, Phil

    Assessing Temporal Generalization in Neural Language Models

    2021  

    Abstract Our world is open-ended, non-stationary, and constantly evolving; thus what we talk about and how we talk about it change over time. This inherent dynamic nature of language contrasts with the current static language modelling paradigm, which trains and evaluates models on utterances from overlapping time periods. Despite impressive recent progress, we demonstrate that Transformer-XL language models perform worse in the realistic setup of predicting future utterances from beyond their training period, and that model performance becomes increasingly worse with time. We find that, while increasing model size alone -- a key driver behind recent progress -- does not solve this problem, having models that continually update their knowledge with new information can indeed mitigate this performance degradation over time. Hence, given the compilation of ever-larger language modelling datasets, combined with the growing list of language-model-based NLP applications that require up-to-date factual knowledge about the world, we argue that now is the right time to rethink the static way in which we currently train and evaluate our language models, and develop adaptive language models that can remain up-to-date with respect to our ever-changing and non-stationary world. We publicly release our dynamic, streaming language modelling benchmarks for WMT and arXiv to facilitate language model evaluation that takes temporal dynamics into account.

    Comment: To appear as a Spotlight at NeurIPS 2021
    Keywords Computer Science - Computation and Language ; Computer Science - Artificial Intelligence
    Subject code 121
    Publishing date 2021-02-03
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
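
    The degradation the abstract reports is straightforward to surface by stratifying held-out perplexity by time period; a minimal sketch with assumed inputs (docs and log_prob are illustrative interfaces, not the paper's benchmark code):

        import math
        from collections import defaultdict

        def perplexity_by_year(docs, log_prob):
            # docs: iterable of (year, token_list); log_prob scores one token
            # sequence under the language model.
            nll, n_tokens = defaultdict(float), defaultdict(int)
            for year, tokens in docs:
                nll[year] -= log_prob(tokens)   # accumulate negative log-likelihood
                n_tokens[year] += len(tokens)
            # Rising values for years past the training cut-off reveal the
            # temporal degradation the paper measures.
            return {y: math.exp(nll[y] / n_tokens[y]) for y in nll}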

  8. Book ; Online: Game Plan

    Tuyls, Karl / Omidshafiei, Shayegan / Muller, Paul / Wang, Zhe / Connor, Jerome / Hennes, Daniel / Graham, Ian / Spearman, William / Waskett, Tim / Steele, Dafydd / Luc, Pauline / Recasens, Adria / Galashov, Alexandre / Thornton, Gregory / Elie, Romuald / Sprechmann, Pablo / Moreno, Pol / Cao, Kris / Garnelo, Marta /
    Dutta, Praneet / Valko, Michal / Heess, Nicolas / Bridgland, Alex / Perolat, Julien / De Vylder, Bart / Eslami, Ali / Rowland, Mark / Jaegle, Andrew / Munos, Remi / Back, Trevor / Ahamed, Razia / Bouton, Simon / Beauguerlange, Nathalie / Broshear, Jackson / Graepel, Thore / Hassabis, Demis

    What AI can do for Football, and What Football can do for AI

    2020  

    Abstract The rapid progress in artificial intelligence (AI) and machine learning has opened unprecedented analytics possibilities in various team and individual sports, including baseball, basketball, and tennis. More recently, AI techniques have been applied to football, due to a huge increase in data collection by professional teams, increased computational power, and advances in machine learning, with the goal of better addressing new scientific challenges involved in the analysis of both individual players' and coordinated teams' behaviors. The research challenges associated with predictive and prescriptive football analytics require new developments and progress at the intersection of statistical learning, game theory, and computer vision. In this paper, we provide an overarching perspective highlighting how the combination of these fields, in particular, forms a unique microcosm for AI research, while offering mutual benefits for professional teams, spectators, and broadcasters in the years to come. We illustrate that this duality makes football analytics a game changer of tremendous value, in terms of not only changing the game of football itself, but also in terms of what this domain can mean for the field of AI. We review the state-of-the-art and exemplify the types of analysis enabled by combining the aforementioned fields, including illustrative examples of counterfactual analysis using predictive models, and the combination of game-theoretic analysis of penalty kicks with statistical learning of player attributes. We conclude by highlighting envisioned downstream impacts, including possibilities for extensions to other sports (real and virtual).
    Keywords Computer Science - Artificial Intelligence ; Computer Science - Computer Science and Game Theory ; Computer Science - Multiagent Systems
    Subject code 796
    Publishing date 2020-11-18
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  9. Book ; Online: Gemini

    Gemini Team / Anil, Rohan / Borgeaud, Sebastian / Wu, Yonghui / Alayrac, Jean-Baptiste / Yu, Jiahui / Soricut, Radu / Schalkwyk, Johan / Dai, Andrew M. / Hauth, Anja / Millican, Katie / Silver, David / Petrov, Slav / Johnson, Melvin / Antonoglou, Ioannis / Schrittwieser, Julian / Glaese, Amelia / Chen, Jilin / Pitler, Emily /
    Lillicrap, Timothy / Lazaridou, Angeliki / Firat, Orhan / Molloy, James / Isard, Michael / Barham, Paul R. / Hennigan, Tom / Lee, Benjamin / Viola, Fabio / Reynolds, Malcolm / Xu, Yuanzhong / Doherty, Ryan / Collins, Eli / Meyer, Clemens / Rutherford, Eliza / Moreira, Erica / Ayoub, Kareem / Goel, Megha / Tucker, George / Piqueras, Enrique / Krikun, Maxim / Barr, Iain / Savinov, Nikolay / Danihelka, Ivo / Roelofs, Becca / White, Anaïs / Andreassen, Anders / von Glehn, Tamara / Yagati, Lakshman / Kazemi, Mehran / Gonzalez, Lucas / Khalman, Misha / Sygnowski, Jakub / Frechette, Alexandre / Smith, Charlotte / Culp, Laura / Proleev, Lev / Luan, Yi / Chen, Xi / Lottes, James / Schucher, Nathan / Lebron, Federico / Rrustemi, Alban / Clay, Natalie / Crone, Phil / Kocisky, Tomas / Zhao, Jeffrey / Perz, Bartek / Yu, Dian / Howard, Heidi / Bloniarz, Adam / Rae, Jack W. / Lu, Han / Sifre, Laurent / Maggioni, Marcello / Alcober, Fred / Garrette, Dan / Barnes, Megan / Thakoor, Shantanu / Austin, Jacob / Barth-Maron, Gabriel / Wong, William / Joshi, Rishabh / Chaabouni, Rahma / Fatiha, Deeni / Ahuja, Arun / Liu, Ruibo / Li, Yunxuan / Cogan, Sarah / Chen, Jeremy / Jia, Chao / Gu, Chenjie / Zhang, Qiao / Grimstad, Jordan / Hartman, Ale Jakse / Chadwick, Martin / Tomar, Gaurav Singh / Garcia, Xavier / Senter, Evan / Taropa, Emanuel / Pillai, Thanumalayan Sankaranarayana / Devlin, Jacob / Laskin, Michael / Casas, Diego de Las / Valter, Dasha / Tao, Connie / Blanco, Lorenzo / Badia, Adrià Puigdomènech / Reitter, David / Chen, Mianna / Brennan, Jenny / Rivera, Clara / Brin, Sergey / Iqbal, Shariq / Surita, Gabriela / Labanowski, Jane / Rao, Abhi / Winkler, Stephanie / Parisotto, Emilio / Gu, Yiming / Olszewska, Kate / Zhang, Yujing / Addanki, Ravi / Miech, Antoine / Louis, Annie / Shafey, Laurent El / Teplyashin, Denis / Brown, Geoff / Catt, Elliot / Attaluri, Nithya / Balaguer, Jan / Xiang, Jackie / Wang, Pidong / Ashwood, Zoe / Briukhov, Anton / Webson, Albert / Ganapathy, Sanjay / Sanghavi, Smit / Kannan, Ajay / Chang, Ming-Wei / Stjerngren, Axel / Djolonga, Josip / Sun, Yuting / Bapna, Ankur / Aitchison, Matthew / Pejman, Pedram / Michalewski, Henryk / Yu, Tianhe / Wang, Cindy / Love, Juliette / Ahn, Junwhan / Bloxwich, Dawn / Han, Kehang / Humphreys, Peter / Sellam, Thibault / Bradbury, James / Godbole, Varun / Samangooei, Sina / Damoc, Bogdan / Kaskasoli, Alex / Arnold, Sébastien M. R. 
/ Vasudevan, Vijay / Agrawal, Shubham / Riesa, Jason / Lepikhin, Dmitry / Tanburn, Richard / Srinivasan, Srivatsan / Lim, Hyeontaek / Hodkinson, Sarah / Shyam, Pranav / Ferret, Johan / Hand, Steven / Garg, Ankush / Paine, Tom Le / Li, Jian / Li, Yujia / Giang, Minh / Neitz, Alexander / Abbas, Zaheer / York, Sarah / Reid, Machel / Cole, Elizabeth / Chowdhery, Aakanksha / Das, Dipanjan / Rogozińska, Dominika / Nikolaev, Vitaly / Sprechmann, Pablo / Nado, Zachary / Zilka, Lukas / Prost, Flavien / He, Luheng / Monteiro, Marianne / Mishra, Gaurav / Welty, Chris / Newlan, Josh / Jia, Dawei / Allamanis, Miltiadis / Hu, Clara Huiyi / de Liedekerke, Raoul / Gilmer, Justin / Saroufim, Carl / Rijhwani, Shruti / Hou, Shaobo / Shrivastava, Disha / Baddepudi, Anirudh / Goldin, Alex / Ozturel, Adnan / Cassirer, Albin / Xu, Yunhan / Sohn, Daniel / Sachan, Devendra / Amplayo, Reinald Kim / Swanson, Craig / Petrova, Dessie / Narayan, Shashi / Guez, Arthur / Brahma, Siddhartha / Landon, Jessica / Patel, Miteyan / Zhao, Ruizhe / Villela, Kevin / Wang, Luyu / Jia, Wenhao / Rahtz, Matthew / Giménez, Mai / Yeung, Legg / Lin, Hanzhao / Keeling, James / Georgiev, Petko / Mincu, Diana / Wu, Boxi / Haykal, Salem / Saputro, Rachel / Vodrahalli, Kiran / Qin, James / Cankara, Zeynep / Sharma, Abhanshu / Fernando, Nick / Hawkins, Will / Neyshabur, Behnam / Kim, Solomon / Hutter, Adrian / Agrawal, Priyanka / Castro-Ros, Alex / Driessche, George van den / Wang, Tao / Yang, Fan / Chang, Shuo-yiin / Komarek, Paul / McIlroy, Ross / Lučić, Mario / Zhang, Guodong / Farhan, Wael / Sharman, Michael / Natsev, Paul / Michel, Paul / Cheng, Yong / Bansal, Yamini / Qiao, Siyuan / Cao, Kris / Shakeri, Siamak / Butterfield, Christina / Chung, Justin / Rubenstein, Paul Kishan / Agrawal, Shivani / Mensch, Arthur / Soparkar, Kedar / Lenc, Karel / Chung, Timothy / Pope, Aedan / Maggiore, Loren / Kay, Jackie / Jhakra, Priya / Wang, Shibo / Maynez, Joshua / Phuong, Mary / Tobin, Taylor / Tacchetti, Andrea / Trebacz, Maja / Robinson, Kevin / Katariya, Yash / Riedel, Sebastian / Bailey, Paige / Xiao, Kefan / Ghelani, Nimesh / Aroyo, Lora / Slone, Ambrose / Houlsby, Neil / Xiong, Xuehan / Yang, Zhen / Gribovskaya, Elena / Adler, Jonas / Wirth, Mateo / Lee, Lisa / Li, Music / Kagohara, Thais / Pavagadhi, Jay / Bridgers, Sophie / Bortsova, Anna / Ghemawat, Sanjay / Ahmed, Zafarali / Liu, Tianqi / Powell, Richard / Bolina, Vijay / Iinuma, Mariko / Zablotskaia, Polina / Besley, James / Chung, Da-Woon / Dozat, Timothy / Comanescu, Ramona / Si, Xiance / Greer, Jeremy / Su, Guolong / Polacek, Martin / Kaufman, Raphaël Lopez / Tokumine, Simon / Hu, Hexiang / Buchatskaya, Elena / Miao, Yingjie / Elhawaty, Mohamed / Siddhant, Aditya / Tomasev, Nenad / Xing, Jinwei / Greer, Christina / Miller, Helen / Ashraf, Shereen / Roy, Aurko / Zhang, Zizhao / Ma, Ada / Filos, Angelos / Besta, Milos / Blevins, Rory / Klimenko, Ted / Yeh, Chih-Kuan / Changpinyo, Soravit / Mu, Jiaqi / Chang, Oscar / Pajarskas, Mantas / Muir, Carrie / Cohen, Vered / Lan, Charline Le / Haridasan, Krishna / Marathe, Amit / Hansen, Steven / Douglas, Sholto / Samuel, Rajkumar / Wang, Mingqiu / Austin, Sophia / Lan, Chang / Jiang, Jiepu / Chiu, Justin / Lorenzo, Jaime Alonso / Sjösund, Lars Lowe / Cevey, Sébastien / Gleicher, Zach / Avrahami, Thi / Boral, Anudhyan / Srinivasan, Hansa / Selo, Vittorio / May, Rhys / Aisopos, Konstantinos / Hussenot, Léonard / Soares, Livio Baldini / Baumli, Kate / Chang, Michael B. 
/ Recasens, Adrià / Caine, Ben / Pritzel, Alexander / Pavetic, Filip / Pardo, Fabio / Gergely, Anita / Frye, Justin / Ramasesh, Vinay / Horgan, Dan / Badola, Kartikeya / Kassner, Nora / Roy, Subhrajit / Dyer, Ethan / Campos, Víctor / Tomala, Alex / Tang, Yunhao / Badawy, Dalia El / White, Elspeth / Mustafa, Basil / Lang, Oran / Jindal, Abhishek / Vikram, Sharad / Gong, Zhitao / Caelles, Sergi / Hemsley, Ross / Thornton, Gregory / Feng, Fangxiaoyu / Stokowiec, Wojciech / Zheng, Ce / Thacker, Phoebe / Ünlü, Çağlar / Zhang, Zhishuai / Saleh, Mohammad / Svensson, James / Bileschi, Max / Patil, Piyush / Anand, Ankesh / Ring, Roman / Tsihlas, Katerina / Vezer, Arpi / Selvi, Marco / Shevlane, Toby / Rodriguez, Mikel / Kwiatkowski, Tom / Daruki, Samira / Rong, Keran / Dafoe, Allan / FitzGerald, Nicholas / Gu-Lemberg, Keren / Khan, Mina / Hendricks, Lisa Anne / Pellat, Marie / Feinberg, Vladimir / Cobon-Kerr, James / Sainath, Tara / Rauh, Maribeth / Hashemi, Sayed Hadi / Ives, Richard / Hasson, Yana / Li, YaGuang / Noland, Eric / Cao, Yuan / Byrd, Nathan / Hou, Le / Wang, Qingze / Sottiaux, Thibault / Paganini, Michela / Lespiau, Jean-Baptiste / Moufarek, Alexandre / Hassan, Samer / Shivakumar, Kaushik / van Amersfoort, Joost / Mandhane, Amol / Joshi, Pratik / Goyal, Anirudh / Tung, Matthew / Brock, Andrew / Sheahan, Hannah / Misra, Vedant / Li, Cheng / Rakićević, Nemanja / Dehghani, Mostafa / Liu, Fangyu / Mittal, Sid / Oh, Junhyuk / Noury, Seb / Sezener, Eren / Huot, Fantine / Lamm, Matthew / De Cao, Nicola / Chen, Charlie / Elsayed, Gamaleldin / Chi, Ed / Mahdieh, Mahdis / Tenney, Ian / Hua, Nan / Petrychenko, Ivan / Kane, Patrick / Scandinaro, Dylan / Jain, Rishub / Uesato, Jonathan / Datta, Romina / Sadovsky, Adam / Bunyan, Oskar / Rabiej, Dominik / Wu, Shimu / Zhang, John / Vasudevan, Gautam / Leurent, Edouard / Alnahlawi, Mahmoud / Georgescu, Ionut / Wei, Nan / Zheng, Ivy / Chan, Betty / Rabinovitch, Pam G / Stanczyk, Piotr / Zhang, Ye / Steiner, David / Naskar, Subhajit / Azzam, Michael / Johnson, Matthew / Paszke, Adam / Chiu, Chung-Cheng / Elias, Jaume Sanchez / Mohiuddin, Afroz / Muhammad, Faizan / Miao, Jin / Lee, Andrew / Vieillard, Nino / Potluri, Sahitya / Park, Jane / Davoodi, Elnaz / Zhang, Jiageng / Stanway, Jeff / Garmon, Drew / Karmarkar, Abhijit / Dong, Zhe / Lee, Jong / Kumar, Aviral / Zhou, Luowei / Evens, Jonathan / Isaac, William / Chen, Zhe / Jia, Johnson / Levskaya, Anselm / Zhu, Zhenkai / Gorgolewski, Chris / Grabowski, Peter / Mao, Yu / Magni, Alberto / Yao, Kaisheng / Snaider, Javier / Casagrande, Norman / Suganthan, Paul / Palmer, Evan / Irving, Geoffrey / Loper, Edward / Faruqui, Manaal / Arkatkar, Isha / Chen, Nanxin / Shafran, Izhak / Fink, Michael / Castaño, Alfonso / Giannoumis, Irene / Kim, Wooyeol / Rybiński, Mikołaj / Sreevatsa, Ashwin / Prendki, Jennifer / Soergel, David / Goedeckemeyer, Adrian / Gierke, Willi / Jafari, Mohsen / Gaba, Meenu / Wiesner, Jeremy / Wright, Diana Gage / Wei, Yawen / Vashisht, Harsha / Kulizhskaya, Yana / Hoover, Jay / Le, Maigo / Li, Lu / Iwuanyanwu, Chimezie / Liu, Lu / Ramirez, Kevin / Khorlin, Andrey / Cui, Albert / LIN, Tian / Georgiev, Marin / Wu, Marcus / Aguilar, Ricardo / Pallo, Keith / Chakladar, Abhishek / Repina, Alena / Wu, Xihui / van der Weide, Tom / Ponnapalli, Priya / Kaplan, Caroline / Simsa, Jiri / Li, Shuangfeng / Dousse, Olivier / Piper, Jeff / Ie, Nathan / Lui, Minnie / Pasumarthi, Rama / Lintz, Nathan / Vijayakumar, Anitha / Thiet, Lam Nguyen / Andor, Daniel / Valenzuela, Pedro / Paduraru, Cosmin / Peng, Daiyi 
/ Lee, Katherine / Zhang, Shuyuan / Greene, Somer / Nguyen, Duc Dung / Kurylowicz, Paula / Velury, Sarmishta / Krause, Sebastian / Hardin, Cassidy / Dixon, Lucas / Janzer, Lili / Choo, Kiam / Feng, Ziqiang / Zhang, Biao / Singhal, Achintya / Latkar, Tejasi / Zhang, Mingyang / Le, Quoc / Abellan, Elena Allica / Du, Dayou / McKinnon, Dan / Antropova, Natasha / Bolukbasi, Tolga / Keller, Orgad / Reid, David / Finchelstein, Daniel / Raad, Maria Abi / Crocker, Remi / Hawkins, Peter / Dadashi, Robert / Gaffney, Colin / Lall, Sid / Franko, Ken / Filonov, Egor / Bulanova, Anna / Leblond, Rémi / Yadav, Vikas / Chung, Shirley / Askham, Harry / Cobo, Luis C. / Xu, Kelvin / Fischer, Felix / Xu, Jun / Sorokin, Christina / Alberti, Chris / Lin, Chu-Cheng / Evans, Colin / Zhou, Hao / Dimitriev, Alek / Forbes, Hannah / Banarse, Dylan / Tung, Zora / Liu, Jeremiah / Omernick, Mark / Bishop, Colton / Kumar, Chintu / Sterneck, Rachel / Foley, Ryan / Jain, Rohan / Mishra, Swaroop / Xia, Jiawei / Bos, Taylor / Cideron, Geoffrey / Amid, Ehsan / Piccinno, Francesco / Wang, Xingyu / Banzal, Praseem / Gurita, Petru / Noga, Hila / Shah, Premal / Mankowitz, Daniel J. / Polozov, Alex / Kushman, Nate / Krakovna, Victoria / Brown, Sasha / Bateni, MohammadHossein / Duan, Dennis / Firoiu, Vlad / Thotakuri, Meghana / Natan, Tom / Mohananey, Anhad / Geist, Matthieu / Mudgal, Sidharth / Girgin, Sertan / Li, Hui / Ye, Jiayu / Roval, Ofir / Tojo, Reiko / Kwong, Michael / Lee-Thorp, James / Yew, Christopher / Yuan, Quan / Bagri, Sumit / Sinopalnikov, Danila / Ramos, Sabela / Mellor, John / Sharma, Abhishek / Severyn, Aliaksei / Lai, Jonathan / Wu, Kathy / Cheng, Heng-Tze / Miller, David / Sonnerat, Nicolas / Vnukov, Denis / Greig, Rory / Beattie, Jennifer / Caveness, Emily / Bai, Libin / Eisenschlos, Julian / Korchemniy, Alex / Tsai, Tomy / Jasarevic, Mimi / Kong, Weize / Dao, Phuong / Zheng, Zeyu / Liu, Frederick / Zhu, Rui / Geller, Mark / Teh, Tian Huey / Sanmiya, Jason / Gladchenko, Evgeny / Trdin, Nejc / Sozanschi, Andrei / Toyama, Daniel / Rosen, Evan / Tavakkol, Sasan / Xue, Linting / Elkind, Chen / Woodman, Oliver / Carpenter, John / Papamakarios, George / Kemp, Rupert / Kafle, Sushant / Grunina, Tanya / Sinha, Rishika / Talbert, Alice / Goyal, Abhimanyu / Wu, Diane / Owusu-Afriyie, Denese / Du, Cosmo / Thornton, Chloe / Pont-Tuset, Jordi / Narayana, Pradyumna / Li, Jing / Fatehi, Sabaer / Wieting, John / Ajmeri, Omar / Uria, Benigno / Zhu, Tao / Ko, Yeongil / Knight, Laura / Héliou, Amélie / Niu, Ning / Gu, Shane / Pang, Chenxi / Tran, Dustin / Li, Yeqing / Levine, Nir / Stolovich, Ariel / Kalb, Norbert / Santamaria-Fernandez, Rebeca / Goenka, Sonam / Yustalim, Wenny / Strudel, Robin / Elqursh, Ali / Lakshminarayanan, Balaji / Deck, Charlie / Upadhyay, Shyam / Lee, Hyo / Dusenberry, Mike / Li, Zonglin / Wang, Xuezhi / Levin, Kyle / Hoffmann, Raphael / Holtmann-Rice, Dan / Bachem, Olivier / Yue, Summer / Arora, Sho / Malmi, Eric / Mirylenka, Daniil / Tan, Qijun / Koh, Christy / Yeganeh, Soheil Hassas / Põder, Siim / Zheng, Steven / Pongetti, Francesco / Tariq, Mukarram / Sun, Yanhua / Ionita, Lucian / Seyedhosseini, Mojtaba / Tafti, Pouya / Kotikalapudi, Ragha / Liu, Zhiyu / Gulati, Anmol / Liu, Jasmine / Ye, Xinyu / Chrzaszcz, Bart / Wang, Lily / Sethi, Nikhil / Li, Tianrun / Brown, Ben / Singh, Shreya / Fan, Wei / Parisi, Aaron / Stanton, Joe / Kuang, Chenkai / Koverkathu, Vinod / Choquette-Choo, Christopher A. 
/ Li, Yunjie / Lu, TJ / Ittycheriah, Abe / Shroff, Prakash / Sun, Pei / Varadarajan, Mani / Bahargam, Sanaz / Willoughby, Rob / Gaddy, David / Dasgupta, Ishita / Desjardins, Guillaume / Cornero, Marco / Robenek, Brona / Mittal, Bhavishya / Albrecht, Ben / Shenoy, Ashish / Moiseev, Fedor / Jacobsson, Henrik / Ghaffarkhah, Alireza / Rivière, Morgane / Walton, Alanna / Crepy, Clément / Parrish, Alicia / Liu, Yuan / Zhou, Zongwei / Farabet, Clement / Radebaugh, Carey / Srinivasan, Praveen / van der Salm, Claudia / Fidjeland, Andreas / Scellato, Salvatore / Latorre-Chimoto, Eri / Klimczak-Plucińska, Hanna / Bridson, David / de Cesare, Dario / Hudson, Tom / Mendolicchio, Piermaria / Walker, Lexi / Morris, Alex / Penchev, Ivo / Mauger, Matthew / Guseynov, Alexey / Reid, Alison / Odoom, Seth / Loher, Lucia / Cotruta, Victor / Yenugula, Madhavi / Grewe, Dominik / Petrushkina, Anastasia / Duerig, Tom / Sanchez, Antonio / Yadlowsky, Steve / Shen, Amy / Globerson, Amir / Kurzrok, Adam / Webb, Lynette / Dua, Sahil / Li, Dong / Lahoti, Preethi / Bhupatiraju, Surya / Hurt, Dan / Qureshi, Haroon / Agarwal, Ananth / Shani, Tomer / Eyal, Matan / Khare, Anuj / Belle, Shreyas Rammohan / Wang, Lei / Tekur, Chetan / Kale, Mihir Sanjay / Wei, Jinliang / Sang, Ruoxin / Saeta, Brennan / Liechty, Tyler / Sun, Yi / Zhao, Yao / Lee, Stephan / Nayak, Pandu / Fritz, Doug / Vuyyuru, Manish Reddy / Aslanides, John / Vyas, Nidhi / Wicke, Martin / Ma, Xiao / Bilal, Taylan / Eltyshev, Evgenii / Balle, Daniel / Martin, Nina / Cate, Hardie / Manyika, James / Amiri, Keyvan / Kim, Yelin / Xiong, Xi / Kang, Kai / Luisier, Florian / Tripuraneni, Nilesh / Madras, David / Guo, Mandy / Waters, Austin / Wang, Oliver / Ainslie, Joshua / Baldridge, Jason / Zhang, Han / Pruthi, Garima / Bauer, Jakob / Yang, Feng / Mansour, Riham / Gelman, Jason / Xu, Yang / Polovets, George / Liu, Ji / Cai, Honglong / Chen, Warren / Sheng, XiangHai / Xue, Emily / Ozair, Sherjil / Yu, Adams / Angermueller, Christof / Li, Xiaowei / Wang, Weiren / Wiesinger, Julia / Koukoumidis, Emmanouil / Tian, Yuan / Iyer, Anand / Gurumurthy, Madhu / Goldenson, Mark / Shah, Parashar / Blake, MK / Yu, Hongkun / Urbanowicz, Anthony / Palomaki, Jennimaria / Fernando, Chrisantha / Brooks, Kevin / Durden, Ken / Mehta, Harsh / Momchev, Nikola / Rahimtoroghi, Elahe / Georgaki, Maria / Raul, Amit / Ruder, Sebastian / Redshaw, Morgan / Lee, Jinhyuk / Jalan, Komal / Li, Dinghua / Perng, Ginger / Hechtman, Blake / Schuh, Parker / Nasr, Milad / Chen, Mia / Milan, Kieran / Mikulik, Vladimir / Strohman, Trevor / Franco, Juliana / Green, Tim / Hassabis, Demis / Kavukcuoglu, Koray / Dean, Jeffrey / Vinyals, Oriol

    A Family of Highly Capable Multimodal Models

    2023  

    Abstract This report introduces a new family of multimodal models, Gemini, that exhibit remarkable capabilities across image, audio, video, and text understanding. The Gemini family consists of Ultra, Pro, and Nano sizes, suitable for applications ranging from complex reasoning tasks to on-device memory-constrained use-cases. Evaluation on a broad range of benchmarks shows that our most-capable Gemini Ultra model advances the state of the art in 30 of 32 of these benchmarks - notably being the first model to achieve human-expert performance on the well-studied exam benchmark MMLU, and improving the state of the art in every one of the 20 multimodal benchmarks we examined. We believe that the new capabilities of Gemini models in cross-modal reasoning and language understanding will enable a wide variety of use cases and we discuss our approach toward deploying them responsibly to users.
    Keywords Computer Science - Computation and Language ; Computer Science - Artificial Intelligence ; Computer Science - Computer Vision and Pattern Recognition
    Subject code 004
    Publishing date 2023-12-18
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
