LIVIVO - The Search Portal for Life Sciences


Search results

Results 1 - 10 of 35 in total


  1. Article: Making sense in the flood. How to cope with the massive flow of digital information in medical ethics.

    Spitale, Giovanni

    Heliyon

    2020, Volume 6, Issue 7, Page(s) e04426


    Abstract Scientific publications have become the currency of academia, hence the concept of 'publish or perish'. But there are consequences: the amount of existing literature and its proliferation rate have reached the point where keeping pace is simply impossible. If this is true in general, it becomes a huge issue in interdisciplinary fields such as bioethics, where knowing the state of the art in more than a single discipline is a concrete necessity. If we accept the idea of building new science on an exhaustive comprehension of existing knowledge, a radical change is needed. Smart iterative search strategies, frequency analysis and text mining, the techniques described in this paper, cannot be a long-run solution. But they might serve as a useful coping strategy.
    Language English
    Publication date 2020-07-24
    Country of publication England
    Document type Journal Article
    ZDB-ID 2835763-2
    ISSN 2405-8440
    DOI 10.1016/j.heliyon.2020.e04426
    Data source MEDLINE (MEDical Literature Analysis and Retrieval System OnLINE)


  2. Article ; Online: COVID-19 and the ethics of quarantine: a lesson from the Eyam plague.

    Spitale, Giovanni

    Medicine, health care, and philosophy

    2020, Volume 23, Issue 4, Page(s) 603–609


    Abstract The recent outbreak of the SARS-CoV-2 coronavirus is posing many different challenges to local communities, directly affected by the pandemic, and to the global community, trying to work out how to respond to this threat on a larger scale. The history of the Eyam Plague, read in light of Ross Upshur's Four Principles for the Justification of Public Health Intervention, and of the Siracusa Principles on the Limitation and Derogation Provisions in the International Covenant on Civil and Political Rights, could provide useful guidance in navigating the complex ethical issues that arise when quarantine measures need to be put in place.
    MeSH term(s) COVID-19 ; Coronavirus Infections/prevention & control ; England/epidemiology ; History, 17th Century ; Humans ; Infection Control/methods ; London/epidemiology ; Pandemics/prevention & control ; Plague/history ; Plague/prevention & control ; Pneumonia, Viral/prevention & control ; Public Health/ethics ; Quarantine/ethics ; Quarantine/history
    Keywords covid19
    Language English
    Publication date 2020-08-05
    Country of publication Netherlands
    Document type Historical Article ; Journal Article
    ZDB-ID 1440052-2
    ISSN (online) 1572-8633
    ISSN 1386-7423
    DOI 10.1007/s11019-020-09971-2
    Data source MEDLINE (MEDical Literature Analysis and Retrieval System OnLINE)


  3. Article ; Online: Beyond Trade-Offs: Autonomy, Effectiveness, Fairness, and Normativity in Risk and Crisis Communication.

    Germani, Federico / Spitale, Giovanni / Biller-Andorno, Nikola

    The American journal of bioethics : AJOB

    2024, Page(s) 1–4


    Abstract This paper addresses the critiques based on trade-offs and normativity presented in response to our target article proposing the Public Health Emergency Risk and Crisis Communication (PHERCC) framework. These critiques highlight the ethical dilemmas in crisis communication, particularly the balance between promoting public autonomy through transparent information and the potential stigmatization of specific population groups, as illustrated by the discussion of the mpox outbreak among men who have sex with men. This critique underscores the inherent tension between communication effectiveness and autonomy versus fairness and equity. In response, our paper reiterates the adaptability of the PHERCC framework, emphasizing its capacity to tailor messages to diverse audiences, thereby reducing potential stigmatization and misinformation. Through community engagement and feedback integration, the PHERCC framework aims to optimize the effectiveness of communication strategies while addressing ethical concerns. Furthermore, by involving affected communities in the communication strategy from the onset, the framework seeks to minimize ethical trade-offs and enhance the acceptance and effectiveness of public health messages.
    Language English
    Publication date 2024-05-20
    Country of publication United States
    Document type Journal Article ; Comment
    ZDB-ID 2060433-6
    ISSN (online) 1536-0075
    ISSN 1526-5161
    DOI 10.1080/15265161.2024.2353826
    Data source MEDLINE (MEDical Literature Analysis and Retrieval System OnLINE)


  4. Article ; Online: Making sense in the flood. How to cope with the massive flow of digital information in medical ethics

    Giovanni Spitale

    Heliyon, Vol 6, Iss 7, Pp e04426 (2020)

    2020  


    Abstract Scientific publications have become the currency of academia, hence the concept of 'publish or perish'. But there are consequences: the amount of existing literature and its proliferation rate have reached the point where keeping pace is simply impossible. If this is true in general, it becomes a huge issue in interdisciplinary fields such as bioethics, where knowing the state of the art in more than a single discipline is a concrete necessity. If we accept the idea of building new science on an exhaustive comprehension of existing knowledge, a radical change is needed. Smart iterative search strategies, frequency analysis and text mining, the techniques described in this paper, cannot be a long-run solution. But they might serve as a useful coping strategy.
    Keywords Publications' proliferation ; Text mining ; Search strategies ; Information extraction ; Topic tracking ; Information science ; Science (General) ; Q1-390 ; Social sciences (General) ; H1-99
    Language English
    Publication date 2020-07-01
    Publisher Elsevier
    Document type Article ; Online
    Data source BASE - Bielefeld Academic Search Engine (life sciences selection)


  5. Book ; Online: Factiva parser and NLP pipeline for news articles related to COVID-19

    Giovanni Spitale

    2020  


    Abstract The COVID-19 pandemic generated (and keeps generating) a huge corpus of news articles, easily retrievable in Factiva with very targeted queries. The aim of this software is to provide the means to analyze this material rapidly. Data are retrieved from Factiva and downloaded by hand in RTF format. The RTF files are then converted to TXT with unoconv in a Unix environment.

    Parser: takes as input files numerically ordered in a folder. This ordering is not essential (in case of multiple retrievals from Factiva), because the parser orders the articles by date using the date field contained in each article. Nevertheless, it is important to reduce duplicates (they increase the computational time needed for processing the corpus), so before adding new articles to the folder, be sure to retrieve them from a time point that does not overlap with the articles already retrieved. In the last phase the dataframe is checked for duplicates anyway, and these are counted and removed; however, duplicate articles are still processed by the parser, which costs computational time. The parser removes search summaries, segments the text, and cleans it using regex rules. The resulting text is exported as a complete dataframe in a CSV file; a subset containing only title and text is exported as TXT, ready to be fed to the NLP pipeline. The parser is language agnostic; just change the path to the folder containing the documents to parse. Important: there is a regex rule mentioning languages ("header_leftover") that lists EN, DE, FR and IT. If you need to work with another language, remember to adjust that rule. (A minimal Python sketch of this parser step follows this record.)

    NLP pipeline: imports the files generated by the parser (divided by month to put less load on memory) and analyzes them. It is not language agnostic: correct linguistic settings must be specified in "setting up", "NLP" and "additional rules". First, some additional rules for NER are defined; some are general, some are language-specific, as noted in the relevant section. The files are opened and preprocessed, then lemma frequency and named-entity frequency are calculated for each month and for the whole corpus. Important: in case of empty months (that is, when analyzing less than one year of data), remember to exclude them from the mean, otherwise the mean will be distorted by the empty months. All the dataframes are exported as CSV files for further analysis or data visualization.

    This code is optimized for English, German, French and Italian. Nevertheless, being based on spaCy, which provides several other models (https://spacy.io/models), it could easily be adapted to other languages. The whole software is structured in JupyterLab notebooks, heavily commented for future reference.
    Keywords natural language processing ; NLP ; media analysis ; factiva ; covid19
    Subject/section (code) 410
    Language English
    Publication date 2020-08-19
    Country of publication eu
    Document type Book ; Online
    Data source BASE - Bielefeld Academic Search Engine (life sciences selection)

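    The parser step described in the abstract above can be illustrated with a minimal Python sketch. This is a reconstruction under assumptions, not the published notebook: the folder name ("articles"), the date-field pattern, and the cleaning rules are hypothetical placeholders, and the real Factiva export layout may differ.

        # Minimal sketch of the parser step: read the TXT exports, order the
        # articles by their embedded date field, count and drop duplicates,
        # clean the text with regex rules, and export a full CSV plus a
        # title+text subset. Names and patterns below are assumptions.
        import glob
        import re

        import pandas as pd

        records = []
        for path in sorted(glob.glob("articles/*.txt")):
            with open(path, encoding="utf-8") as fh:
                raw = fh.read()
            # Hypothetical date field such as "PD 18 September 2020"; adjust
            # the pattern to the actual field layout of the export.
            match = re.search(r"^PD\s+(.+)$", raw, flags=re.MULTILINE)
            date = pd.to_datetime(match.group(1), errors="coerce") if match else pd.NaT
            # Placeholder cleaning: drop header leftovers and search summaries.
            body = re.sub(r"^(Search Summary|SN|LP)\b.*$", "", raw,
                          flags=re.MULTILINE).strip()
            title = body.splitlines()[0] if body else ""
            records.append({"date": date, "title": title, "text": body})

        df = pd.DataFrame(records).sort_values("date")  # order by date, not file name
        dupes = int(df.duplicated(subset=["title", "text"]).sum())
        df = df.drop_duplicates(subset=["title", "text"])
        print(f"removed {dupes} duplicates, kept {len(df)} articles")

        df.to_csv("corpus_full.csv", index=False)  # complete dataframe
        # Title + text subset, ready to be fed to the NLP pipeline.
        df[["title", "text"]].to_csv("corpus_title_text.txt", sep="\t", index=False)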

  6. Book ; Online: Factiva parser and NLP pipeline for news articles related to COVID-19

    Giovanni Spitale

    2020  


    Abstract The COVID-19 pandemic generated (and keeps generating) a huge corpus of news articles, easily retrievable in Factiva with very targeted queries. The aim of this software is to provide the means to analyze this material rapidly. Data are retrieved from Factiva and downloaded by hand in RTF format. The RTF files are then converted to TXT with unoconv in a Unix environment.

    Parser: takes as input files numerically ordered in a folder. This ordering is not essential (in case of multiple retrievals from Factiva), because the parser orders the articles by date using the date field contained in each article. Nevertheless, it is important to reduce duplicates (they increase the computational time needed for processing the corpus), so before adding new articles to the folder, be sure to retrieve them from a time point that does not overlap with the articles already retrieved. In the last phase the dataframe is checked for duplicates anyway, and these are counted and removed; however, duplicate articles are still processed by the parser, which costs computational time. The parser removes search summaries, segments the text, and cleans it using regex rules. The resulting text is exported as a complete dataframe in a CSV file; a subset containing only title and text is exported as TXT, ready to be fed to the NLP pipeline. The parser is language agnostic; just change the path to the folder containing the documents to parse. Important: there is a regex rule mentioning languages ("header_leftover") that lists EN, DE, FR and IT. If you need to work with another language, remember to adjust that rule.

    NLP pipeline: imports the files generated by the parser (divided by month to put less load on memory) and analyzes them. It is not language agnostic: correct linguistic settings must be specified in "setting up", "NLP" and "additional rules". First, some additional rules for NER are defined; some are general, some are language-specific, as noted in the relevant section. The files are opened and preprocessed, then lemma frequency and named-entity frequency are calculated for each month and for the whole corpus. Important: in case of empty months (that is, when analyzing less than one year of data), remember to exclude them from the mean, otherwise the mean will be distorted by the empty months. All the dataframes are exported as CSV files for further analysis or data visualization. (A minimal Python sketch of this pipeline step follows this record.)

    This code is optimized for English, German, French and Italian. Nevertheless, being based on spaCy, which provides several other models (https://spacy.io/models), it could easily be adapted to other languages. The whole software is structured in JupyterLab notebooks, heavily commented for future reference. This work is part of the PubliCo research project.

    This work is part of the PubliCo research project, supported by the Swiss National Science Foundation (SNF). Project no. 31CA30_195905
    Keywords natural language processing ; NLP ; media analysis ; factiva ; covid19
    Subject/section (code) 410
    Language English
    Publication date 2020-08-19
    Country of publication eu
    Document type Book ; Online
    Data source BASE - Bielefeld Academic Search Engine (life sciences selection)

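    As a companion to the NLP pipeline described above, here is a minimal sketch of per-month lemma and named-entity frequency counting with spaCy. The model name, the input file (taken from the parser sketch under record 5), and the column layout are assumptions; the published notebooks also define language-specific NER rules that are not reproduced here.

        # Minimal sketch of the NLP step: lemma and named-entity frequencies
        # per month, exported as one CSV per month; empty months are excluded
        # when averaging so they do not distort the mean. Model and file
        # names are assumptions for illustration.
        from collections import Counter

        import pandas as pd
        import spacy

        nlp = spacy.load("en_core_web_sm")  # swap in de/fr/it models as needed
        df = pd.read_csv("corpus_full.csv", parse_dates=["date"])

        lemma_freq = {}
        entity_freq = {}
        for month, group in df.groupby(df["date"].dt.to_period("M")):
            lemmas, entities = Counter(), Counter()
            for doc in nlp.pipe(group["text"].fillna("").tolist()):
                lemmas.update(t.lemma_.lower() for t in doc
                              if t.is_alpha and not t.is_stop)
                entities.update(f"{ent.text}|{ent.label_}" for ent in doc.ents)
            lemma_freq[str(month)] = lemmas
            entity_freq[str(month)] = entities
            # One CSV per month, as the abstract describes.
            pd.Series(lemmas, name="count").sort_values(
                ascending=False).to_csv(f"lemmas_{month}.csv")

        # Mean lemma tokens per month, over non-empty months only.
        totals = [sum(c.values()) for c in lemma_freq.values() if c]
        mean_per_month = sum(totals) / len(totals) if totals else 0.0
        print(f"mean lemma tokens per non-empty month: {mean_per_month:.1f}")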

  7. Book ; Online: Lemmas and Named Entities analysis in major media outlets regarding Switzerland and Covid-19

    Giovanni Spitale

    2020  


    Abstract The COVID-19 pandemic generated (and keeps generating) a huge corpus of news articles, easily retrievable in Factiva with very targeted queries. This dataset, generated with an ad-hoc parser and NLP pipeline, analyzes the frequency of lemmas and named entities in news articles (in German, French, Italian and English) regarding Switzerland and COVID-19. The analysis of large bodies of grey literature via text mining and computational linguistics is an increasingly frequent approach to understanding the large-scale trends of specific topics. We used Factiva, a news monitoring and search engine developed and owned by Dow Jones, to gather and download all the news articles published between January and July 2020 on COVID-19 and Switzerland. Due to Factiva's copyright policy, it is not possible to share the original dataset with the exports of the articles' text; however, we can share the results of our work on the corpus. All the information relevant to reproduce the results is provided.

    Factiva allows a very granular definition of the queries, and moreover has access to the full text of articles published by the major media outlets of the world. The query has been defined as follows (syntax first, explanation in parentheses):

    ((coronavirus or Wuhan virus or corvid19 or corvid 19 or covid19 or covid 19 or ncov or novel coronavirus or sars) and (atleast3 coronavirus or atleast3 wuhan or atleast3 corvid* or atleast3 covid* or atleast3 ncov or atleast3 novel or atleast3 corona*)) (keywords for COVID-19; must appear at least 3 times in the text)
    and ns=(gsars or gout) (subject is "novel coronaviruses" or "outbreaks and epidemics" and "general news")
    and la=X (language is X: DE, FR, IT or EN)
    and rst=tmnb (restrict to TMNB, major news and business publications)
    and wc>300 (at least 300 words)
    and date from 20191001 to 20200801 (date interval)
    and re=SWITZ (region is Switzerland)

    It is important to specify some details that characterize the query. The query is not limited to articles published by Swiss media; it covers articles regarding Switzerland. The reason is simple: a Swiss user googling for "Schweiz Coronavirus" or for "Coronavirus Ticino" can easily find and read articles published by foreign media outlets (namely, German or Italian) on that topic. If the objective is capturing and describing the information trends to which people are exposed, this approach makes much more sense than limiting the analysis to articles published by Swiss media. Factiva's field "NS" is a descriptor for the content of the article. "gsars" is defined in Factiva's documentation as "All news on Severe Acute Respiratory Syndrome", and "gout" as "The widespread occurrence of an infectious disease affecting many people or animals in a given population at the same time"; however, the way these descriptors are assigned to articles is not specified in the documentation. Finally, the query has been restricted to major news and business publications of at least 300 words. Duplicate checking is performed by Factiva. Given the incredibly large number of articles published on COVID-19, this (absolutely arbitrary) restriction allows retrieving a corpus that is both meaningful and manageable.

    metadata.xlsx contains information about the articles retrieved (strategy, amount). The PDF files document the execution of the Jupyter notebooks. The zip file contains the lemma and NE frequency data, divided by language. The "Lemmas" folder contains a CSV file per month and a general timeseries; the "Entities" folder contains a CSV file per month, a general timeseries, plus subsets that are category-specific. For a comprehensive explanation of the categories, see the PDF files. (A minimal Python sketch for loading these CSVs follows this record.) This work is part of the PubliCo research project.

    This work is part of the PubliCo research project, supported by the Swiss National Science Foundation (SNF). Project no. 31CA30_195905
    Keywords natural language processing ; NLP ; media analysis ; factiva ; covid19
    Subject/section (code) 070
    Language English
    Publication date 2020-09-18
    Country of publication eu
    Document type Book ; Online
    Data source BASE - Bielefeld Academic Search Engine (life sciences selection)

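    For readers downloading the dataset, a minimal sketch of loading the monthly lemma CSVs into one table. The folder layout follows the abstract ("Lemmas", one CSV per month), but the file and column names are assumptions, so adjust them to the actual archive.

        # Minimal sketch: collect the per-month lemma CSVs from the "Lemmas"
        # folder into one long table and show the top 10 lemmas per month.
        # File and column names are assumptions; check the archive layout.
        from pathlib import Path

        import pandas as pd

        frames = []
        for csv_path in sorted(Path("Lemmas").glob("*.csv")):
            monthly = pd.read_csv(csv_path, names=["lemma", "count"], header=0)
            monthly["month"] = csv_path.stem  # assumes names like 2020-03.csv
            frames.append(monthly)

        lemmas = pd.concat(frames, ignore_index=True)
        top10 = (lemmas.sort_values("count", ascending=False)
                       .groupby("month", sort=True)
                       .head(10))
        print(top10.to_string(index=False))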

  8. Article ; Online: COVID-19 and the ethics of quarantine

    Spitale, Giovanni

    Medicine, Health Care and Philosophy ; ISSN 1386-7423 ; 1572-8633

    a lesson from the Eyam plague

    2020  

    Keywords Health Policy ; Education ; Health (social science) ; covid19
    Language English
    Publisher Springer Science and Business Media LLC
    Country of publication us
    Document type Article ; Online
    DOI 10.1007/s11019-020-09971-2
    Data source BASE - Bielefeld Academic Search Engine (life sciences selection)


  9. Article ; Online: COVID-19 and the ethics of quarantine

    Spitale, Giovanni

    Spitale, Giovanni (2020). COVID-19 and the ethics of quarantine: a lesson from the Eyam plague. Medicine, Health Care and Philosophy, 23(4):603-609.

    a lesson from the Eyam plague

    2020  


    Abstract The recent outbreak of the SARS-CoV-2 coronavirus is posing many different challenges to local communities, directly affected by the pandemic, and to the global community, trying to work out how to respond to this threat on a larger scale. The history of the Eyam Plague, read in light of Ross Upshur's Four Principles for the Justification of Public Health Intervention, and of the Siracusa Principles on the Limitation and Derogation Provisions in the International Covenant on Civil and Political Rights, could provide useful guidance in navigating the complex ethical issues that arise when quarantine measures need to be put in place.
    Keywords Institute of Biomedical Ethics and History of Medicine ; 610 Medicine & health ; Health Policy ; Education ; Health (social science) ; covid19
    Language English
    Publication date 2020-12-01
    Publisher Springer
    Country of publication ch
    Document type Article ; Online
    Data source BASE - Bielefeld Academic Search Engine (life sciences selection)


  10. Article: COVID-19 and the ethics of quarantine: a lesson from the Eyam plague

    Spitale, Giovanni

    Med Health Care Philos


    Abstract The recent outbreak of the SARS-CoV-2 coronavirus is posing many different challenges to local communities, directly affected by the pandemic, and to the global community, trying to work out how to respond to this threat on a larger scale. The history of the Eyam Plague, read in light of Ross Upshur's Four Principles for the Justification of Public Health Intervention, and of the Siracusa Principles on the Limitation and Derogation Provisions in the International Covenant on Civil and Political Rights, could provide useful guidance in navigating the complex ethical issues that arise when quarantine measures need to be put in place.
    Keywords covid19
    Publisher WHO
    Document type Article
    Note WHO #Covidence: #696732
    Data source COVID19

