LIVIVO - The Search Portal for Life Sciences


Search results

Results 1-10 of 34


  1. Article: Making sense in the flood. How to cope with the massive flow of digital information in medical ethics.

    Spitale, Giovanni

    Heliyon

    2020  Volume 6, Issue 7, Page(s) e04426

    Abstract Scientific publications have become the currency of Academia, hence the concept of 'publish or perish'. But there are consequences: the amount of existing literature and its proliferation rate have reached the point where keeping pace is just impossible. If this is true in general, it becomes a huge issue in interdisciplinary fields such as bioethics where knowing the state of the art in more than one single discipline is a concrete necessity. If we accept the idea of building new science on an exhaustive comprehension of existing knowledge, a radical change is needed. Smart iterative search strategies, frequency analysis and text mining, techniques described in this paper, can't be a long run solution. But they might serve as a useful coping strategy.
    Language English
    Publishing date 2020-07-24
    Publishing country England
    Document type Journal Article
    ZDB-ID 2835763-2
    ISSN 2405-8440
    DOI 10.1016/j.heliyon.2020.e04426
    Database MEDical Literature Analysis and Retrieval System OnLINE


  2. Article ; Online: COVID-19 and the ethics of quarantine: a lesson from the Eyam plague.

    Spitale, Giovanni

    Medicine, health care, and philosophy

    2020  Volume 23, Issue 4, Page(s) 603–609

    Abstract The recent outbreak of the SARS-CoV-2 coronavirus is posing many different challenges to local communities, directly affected by the pandemic, and to the global community, trying to find how to respond to this threat in a larger scale. The history of the Eyam Plague, read in light of Ross Upshur's Four Principles for the Justification of Public Health Intervention, and of the Siracusa Principles on the Limitation and Derogation Provisions in the International Covenant on Civil and Political Rights, could provide useful guidance in navigating the complex ethical issues that arise when quarantine measures need to be put in place.
    MeSH term(s) COVID-19 ; Coronavirus Infections/prevention & control ; England/epidemiology ; History, 17th Century ; Humans ; Infection Control/methods ; London/epidemiology ; Pandemics/prevention & control ; Plague/history ; Plague/prevention & control ; Pneumonia, Viral/prevention & control ; Public Health/ethics ; Quarantine/ethics ; Quarantine/history
    Keywords covid19
    Language English
    Publishing date 2020-08-05
    Publishing country Netherlands
    Document type Historical Article ; Journal Article
    ZDB-ID 1440052-2
    ISSN (online) 1572-8633
    ISSN 1386-7423
    DOI 10.1007/s11019-020-09971-2
    Database MEDical Literature Analysis and Retrieval System OnLINE


  3. Article ; Online: Making sense in the flood. How to cope with the massive flow of digital information in medical ethics

    Giovanni Spitale

    Heliyon, Vol 6, Iss 7, Pp e04426 (2020)

    2020  

    Abstract Scientific publications have become the currency of Academia, hence the concept of ‘publish or perish’. But there are consequences: the amount of existing literature and its proliferation rate have reached the point where keeping pace is just impossible. If this is true in general, it becomes a huge issue in interdisciplinary fields such as bioethics where knowing the state of the art in more than one single discipline is a concrete necessity. If we accept the idea of building new science on an exhaustive comprehension of existing knowledge, a radical change is needed. Smart iterative search strategies, frequency analysis and text mining, techniques described in this paper, can't be a long run solution. But they might serve as a useful coping strategy.
    Keywords Publications' proliferation ; Text mining ; Search strategies ; Information extraction ; Topic tracking ; Information science ; Science (General) ; Q1-390 ; Social sciences (General) ; H1-99
    Language English
    Publishing date 2020-07-01T00:00:00Z
    Publisher Elsevier
    Document type Article ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)


  4. Book ; Online: Factiva parser and NLP pipeline for news articles related to COVID-19

    Giovanni Spitale

    2020  

    Abstract The COVID-19 pandemic generated (and keeps generating) a huge corpus of news articles, easily retrievable in Factiva with very targeted queries. The aim of this software is to provide the means to analyze this material rapidly. Data are retrieved from Factiva and downloaded by hand in RTF format. The RTF files are then converted to TXT with unoconv in a Unix environment.

    Parser: the parser takes as input files numerically ordered in a folder. This ordering is not fundamental (in case of multiple retrievals from Factiva), because the parser orders the articles by date using the date field contained in each article. Nevertheless, it is important to reduce duplicates (they increase the computational time needed for processing the corpus), so before adding new articles to the folder, be sure to retrieve them from a timepoint that does not overlap with the articles already retrieved. In any case, in the last phase the dataframe is checked for duplicates, which are counted and removed; the duplicate articles are still processed by the parser, however, and this takes computational time. The parser removes search summaries, segments the text, and cleans it using regex rules. The resulting text is exported as a complete dataframe in a CSV file; a subset containing only title and text is exported as TXT, ready to be fed to the NLP pipeline. The parser is language agnostic; just change the path to the folder containing the documents to parse. Important: there is a regex rule mentioning languages ("header_leftover"). It lists EN, DE, FR and IT. If you need to work with another language, remember to correct that rule.

    NLP pipeline: the NLP pipeline imports the files generated by the parser (divided by month, to put less load on memory) and analyses them. It is not language agnostic: correct linguistic settings must be specified in "setting up", "NLP" and "additional rules". First, some additional rules for NER are defined. Some are general, some are language-specific, as specified in the relevant section. The files are opened and preprocessed, then lemma frequency and named-entity frequency are calculated for each month and for the whole corpus. Important: in case of empty months (i.e., when analyzing less than one year of data), remember to exclude them from the mean, otherwise the mean will be distorted by the empty months. All the dataframes are exported as CSV files for further analysis or data visualization. This code is optimized for English, German, French and Italian; nevertheless, being based on spaCy, which provides several other models (https://spacy.io/models), it could easily be adapted to other languages. The whole software is structured in JupyterLab notebooks, heavily commented for future reference.
    Keywords natural language processing ; NLP ; media analysis ; factiva ; covid19
    Subject code 410
    Language English
    Publishing date 2020-08-19
    Publishing country eu
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

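The parsing and deduplication steps described in this record can be sketched roughly as follows. This is a hypothetical reconstruction, not the actual software: the file layout, the `Date:` field format, and the regex rules shown here are assumptions for illustration (the real parser's "header_leftover" rule and search-summary cleanup are not published in this record).

```python
import re
from pathlib import Path

import pandas as pd

def parse_factiva_txt(folder: str) -> pd.DataFrame:
    """Parse Factiva TXT exports into a date-ordered, deduplicated dataframe."""
    records = []
    for txt_file in sorted(Path(folder).glob("*.txt")):
        raw = txt_file.read_text(encoding="utf-8")
        # Stand-in regex cleaning: drop search summaries and header leftovers.
        # The real "header_leftover" rule lists EN, DE, FR and IT; extend it
        # when working with another language.
        raw = re.sub(r"^Search Summary.*$", "", raw, flags=re.MULTILINE)
        raw = re.sub(r"^(EN|DE|FR|IT)\s*$", "", raw, flags=re.MULTILINE)
        # Assumed article header: first non-empty line is the title, and a
        # "Date:" field carries the publication date used for ordering.
        match = re.search(r"Date:\s*(\d{4}-\d{2}-\d{2})", raw)
        lines = [line for line in raw.splitlines() if line.strip()]
        records.append({
            "date": match.group(1) if match else None,
            "title": lines[0] if lines else "",
            "text": raw.strip(),
        })

    df = pd.DataFrame(records).sort_values("date")
    # Duplicates are counted and removed in the final phase, as the record
    # notes; they still cost parsing time before this point.
    duplicates = int(df.duplicated(subset=["title", "text"]).sum())
    df = df.drop_duplicates(subset=["title", "text"]).reset_index(drop=True)
    df.attrs["duplicates_removed"] = duplicates
    return df
```

Exporting the full dataframe with `df.to_csv(...)` and a title-plus-text subset as TXT would mirror the two outputs the description mentions.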

  5. Book ; Online: Factiva parser and NLP pipeline for news articles related to COVID-19

    Giovanni Spitale

    2020  

    Abstract The COVID-19 pandemic generated (and keeps generating) a huge corpus of news articles, easily retrievable in Factiva with very targeted queries. The aim of this software is to provide the means to analyze this material rapidly. Data are retrieved from Factiva and downloaded by hand in RTF format. The RTF files are then converted to TXT with unoconv in a Unix environment.

    Parser: the parser takes as input files numerically ordered in a folder. This ordering is not fundamental (in case of multiple retrievals from Factiva), because the parser orders the articles by date using the date field contained in each article. Nevertheless, it is important to reduce duplicates (they increase the computational time needed for processing the corpus), so before adding new articles to the folder, be sure to retrieve them from a timepoint that does not overlap with the articles already retrieved. In any case, in the last phase the dataframe is checked for duplicates, which are counted and removed; the duplicate articles are still processed by the parser, however, and this takes computational time. The parser removes search summaries, segments the text, and cleans it using regex rules. The resulting text is exported as a complete dataframe in a CSV file; a subset containing only title and text is exported as TXT, ready to be fed to the NLP pipeline. The parser is language agnostic; just change the path to the folder containing the documents to parse. Important: there is a regex rule mentioning languages ("header_leftover"). It lists EN, DE, FR and IT. If you need to work with another language, remember to correct that rule.

    NLP pipeline: the NLP pipeline imports the files generated by the parser (divided by month, to put less load on memory) and analyses them. It is not language agnostic: correct linguistic settings must be specified in "setting up", "NLP" and "additional rules". First, some additional rules for NER are defined. Some are general, some are language-specific, as specified in the relevant section. The files are opened and preprocessed, then lemma frequency and named-entity frequency are calculated for each month and for the whole corpus. Important: in case of empty months (i.e., when analyzing less than one year of data), remember to exclude them from the mean, otherwise the mean will be distorted by the empty months. All the dataframes are exported as CSV files for further analysis or data visualization. This code is optimized for English, German, French and Italian; nevertheless, being based on spaCy, which provides several other models (https://spacy.io/models), it could easily be adapted to other languages. The whole software is structured in JupyterLab notebooks, heavily commented for future reference.

    This work is part of the PubliCo research project, supported by the Swiss National Science Foundation (SNF). Project no. 31CA30_195905
    Keywords natural language processing ; NLP ; media analysis ; factiva ; covid19
    Subject code 410
    Language English
    Publishing date 2020-08-19
    Publishing country eu
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

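The per-month frequency computation (and the empty-month caveat) described in the pipeline can be sketched like this. A plain `Counter` over pre-extracted tokens stands in for the spaCy step; in the actual pipeline the tokens would be lemmas read from `token.lemma_` and named entities from `doc.ents` of a loaded spaCy model.

```python
from collections import Counter

def monthly_frequencies(articles_by_month):
    """Count token frequencies per month and over the whole corpus.

    articles_by_month maps a month label to a list of token lists
    (stand-ins for spaCy lemmas or named entities).
    """
    per_month = {}
    corpus_total = Counter()
    for month, token_lists in articles_by_month.items():
        counts = Counter(t for tokens in token_lists for t in tokens)
        per_month[month] = counts
        corpus_total.update(counts)
    return per_month, corpus_total

def mean_monthly_frequency(per_month, token):
    """Mean monthly frequency of one token, excluding empty months so they
    do not distort the mean (the caveat noted in the record)."""
    non_empty = [c for c in per_month.values() if sum(c.values()) > 0]
    if not non_empty:
        return 0.0
    return sum(c[token] for c in non_empty) / len(non_empty)
```

With less than a year of data, the difference between including and excluding empty months is exactly the distortion the description warns about.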

  6. Book ; Online: Lemmas and Named Entities analysis in major media outlets regarding Switzerland and Covid-19

    Giovanni Spitale

    2020  

    Abstract The COVID-19 pandemic generated (and keeps generating) a huge corpus of news articles, easily retrievable in Factiva with very targeted queries. This dataset, generated with an ad-hoc parser and NLP pipeline, analyzes the frequency of lemmas and named entities in news articles (in German, French, Italian and English) regarding Switzerland and COVID-19. The analysis of large bodies of grey literature via text mining and computational linguistics is an increasingly frequent approach to understanding the large-scale trends of specific topics. We used Factiva, a news monitoring and search engine developed and owned by Dow Jones, to gather and download all the news articles published between January and July 2020 on COVID-19 and Switzerland. Due to Factiva's copyright policy, it is not possible to share the original dataset with the exports of the articles' text; however, we can share the results of our work on the corpus. All the information relevant to reproduce the results is provided. Factiva allows a very granular definition of the queries, and moreover has access to full-text articles published by the major media outlets of the world. The query has been defined as follows (each clause followed by its explanation):

        ((coronavirus or Wuhan virus or corvid19 or corvid 19 or covid19 or covid 19 or ncov or novel coronavirus or sars) and (atleast3 coronavirus or atleast3 wuhan or atleast3 corvid* or atleast3 covid* or atleast3 ncov or atleast3 novel or atleast3 corona*)) -- keywords for COVID-19; must appear at least 3 times in the text
        and ns=(gsars or gout) -- subject is "novel coronaviruses" or "outbreaks and epidemics" and "general news"
        and la=X -- language is X (DE, FR, IT, EN)
        and rst=tmnb -- restrict to TMNB (major news and business publications)
        and wc>300 -- at least 300 words
        and date from 20191001 to 20200801 -- date interval
        and re=SWITZ -- region is Switzerland

    It is important to specify some details that characterize the query. The query is not limited to articles published by Swiss media, but to articles regarding Switzerland. The reason is simple: a Swiss user googling for "Schweiz Coronavirus" or for "Coronavirus Ticino" can easily find and read articles published by foreign media outlets (namely, German or Italian) on that topic. If the objective is capturing and describing the information trends to which people are exposed, this approach makes much more sense than limiting the analysis to articles published by Swiss media. Factiva's field "NS" is a descriptor for the content of the article. "gsars" is defined in Factiva's documentation as "All news on Severe Acute Respiratory Syndrome", and "gout" as "The widespread occurrence of an infectious disease affecting many people or animals in a given population at the same time"; however, the way these descriptors are assigned to articles is not specified in the documentation. Finally, the query has been restricted to major news and business publications of at least 300 words. Duplicate checking is performed by Factiva. Given the incredibly large number of articles published on COVID-19, this (absolutely arbitrary) restriction allows retrieving a corpus that is both meaningful and manageable. metadata.xlsx contains information about the articles retrieved (strategy, amount). The PDF files document the execution of the Jupyter notebooks. The zip file contains the lemma and NE frequency data, divided by language. The "Lemmas" folder contains a CSV file per month and a general timeseries; the "Entities" folder contains a CSV file per month, a general timeseries, plus subsets that are category-specific. For a comprehensive explanation of the categories, you can check the PDF files.

    This work is part of the PubliCo research project, supported by the Swiss National Science Foundation (SNF). Project no. 31CA30_195905
    Keywords natural language processing ; NLP ; media analysis ; factiva ; covid19
    Subject code 070
    Language English
    Publishing date 2020-09-18
    Publishing country eu
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

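Assuming the layout the record describes (one frequency CSV per month inside the "Lemmas" folder, plus a general timeseries), the monthly files could be stacked into one long table for analysis as below. The column names "lemma" and "count" are guesses, not documented in the record.

```python
import pandas as pd

def load_lemma_timeseries(paths: dict) -> pd.DataFrame:
    """Stack per-month lemma-frequency CSVs into one long dataframe.

    paths maps a month label to the CSV path for that month. Column
    names ("lemma", "count") are assumed for illustration.
    """
    frames = []
    for month, path in paths.items():
        monthly = pd.read_csv(path)
        monthly["month"] = month
        frames.append(monthly)
    return pd.concat(frames, ignore_index=True)
```

From the stacked frame, a per-lemma timeseries is a simple pivot: `df.pivot_table(index="lemma", columns="month", values="count")`.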

  7. Article ; Online: COVID-19 and the ethics of quarantine: a lesson from the Eyam plague

    Spitale, Giovanni

    Medicine, Health Care and Philosophy ; ISSN 1386-7423 ; 1572-8633

    2020  

    Keywords Health Policy ; Education ; Health(social science) ; covid19
    Language English
    Publisher Springer Science and Business Media LLC
    Publishing country us
    Document type Article ; Online
    DOI 10.1007/s11019-020-09971-2
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)


  8. Article ; Online: COVID-19 and the ethics of quarantine: a lesson from the Eyam plague

    Spitale, Giovanni

    Spitale, Giovanni (2020). COVID-19 and the ethics of quarantine: a lesson from the Eyam plague. Medicine, Health Care and Philosophy, 23(4):603-609.

    2020  

    Abstract The recent outbreak of the SARS-CoV-2 coronavirus is posing many different challenges to local communities, directly affected by the pandemic, and to the global community, trying to find how to respond to this threat in a larger scale. The history of the Eyam Plague, read in light of Ross Upshur’s Four Principles for the Justification of Public Health Intervention, and of the Siracusa Principles on the Limitation and Derogation Provisions in the International Covenant on Civil and Political Rights, could provide useful guidance in navigating the complex ethical issues that arise when quarantine measures need to be put in place.
    Keywords Institute of Biomedical Ethics and History of Medicine ; 610 Medicine & health ; Health Policy ; Education ; Health(social science) ; covid19
    Language English
    Publishing date 2020-12-01
    Publisher Springer
    Publishing country ch
    Document type Article ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)


  9. Article: COVID-19 and the ethics of quarantine: a lesson from the Eyam plague

    Spitale, Giovanni

    Med Health Care Philos

    Abstract The recent outbreak of the SARS-CoV-2 coronavirus is posing many different challenges to local communities, directly affected by the pandemic, and to the global community, trying to find how to respond to this threat in a larger scale. The history of the Eyam Plague, read in light of Ross Upshur's Four Principles for the Justification of Public Health Intervention, and of the Siracusa Principles on the Limitation and Derogation Provisions in the International Covenant on Civil and Political Rights, could provide useful guidance in navigating the complex ethical issues that arise when quarantine measures need to be put in place.
    Keywords covid19
    Publisher WHO
    Document type Article
    Note WHO #Covidence: #696732
    Database COVID19


  10. Article: Making sense in the flood. How to cope with the massive flow of digital information in medical ethics

    Spitale, Giovanni

    Heliyon. 2020 July, v. 6, no. 7

    2020  

    Abstract Scientific publications have become the currency of Academia, hence the concept of ‘publish or perish’. But there are consequences: the amount of existing literature and its proliferation rate have reached the point where keeping pace is just impossible. If this is true in general, it becomes a huge issue in interdisciplinary fields such as bioethics where knowing the state of the art in more than one single discipline is a concrete necessity. If we accept the idea of building new science on an exhaustive comprehension of existing knowledge, a radical change is needed. Smart iterative search strategies, frequency analysis and text mining, techniques described in this paper, can't be a long run solution. But they might serve as a useful coping strategy.
    Keywords bioethics ; concrete ; coping strategies ; floods ; solutions
    Language English
    Dates of publication 2020-07
    Publishing place Elsevier Ltd
    Document type Article
    Note NALT-AP-4-rerunAP2-fuzzy
    ZDB-ID 2835763-2
    ISSN 2405-8440
    DOI 10.1016/j.heliyon.2020.e04426
    Database NAL-Catalogue (AGRICOLA)

