
Attribut:Erhebungsverfahren


This is a property of type Text.

Pages using the property "Erhebungsverfahren"

Showing 6 pages using this property:


B
Barnstars - research data: Our first step was to build a parser to extract barnstars from the user and user talk pages. From the November 2006 English Wikipedia database dump, we extracted 14,573 barnstars given to 4,880 unique users. (A minimal dump-parsing sketch follows after this list.)
F
Forschungsdaten Deutungsmuster von Eltern und Lehrern: Expert interview (Expert*inneninterview) following Gläser/Laudel.
Forschungsdaten Medienpädagogische Deutungsmuster: Problem-centred interview (cf. Witzel 1982, 2000), combined with Uwe Flick's basic ideas and procedures for the episodic interview.
R
Research data for Hardware Companions?: 2010–2011
Research data for Voice Conversational Agents: Data was collected from the Amazon product page using a Python script. Only verified purchase reviews or reviews written by Amazon customers who purchased the item directly from Amazon were analyzed. A sample of 200 reviews from August 10, 2016 to February 6, 2017 was used for pilot data. The remainder of the data, from June 19, 2015 to August 9, 2016, was analyzed as the formal data and included 25,010 posts. More than 25,000 reviews currently fit this criterion, but for the scope of this class we selected the first 101 reviews that were verified purchase reviews and only considered 1-, 2-, 4- and 5-star reviews. (A sketch of the filtering step follows after this list.)
Research data talk before you type: For the data creation, featured articles were taken from 2007 Wikipedia dumps. The talk pages of the featured articles were scraped using a Python script, and a sample of 10% of the featured articles was coded. (A sketch of the sampling step follows after this list.)
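
The barnstar extraction described under "Barnstars - research data" could look roughly like the following. This is a minimal sketch, not the parser used in that study: the dump file name is a placeholder, and matching any "{{...barnstar...}}" template is an assumption about how barnstars appear in the wikitext.

 # Minimal sketch: count barnstar templates on User and User talk pages
 # in a MediaWiki XML dump. Not the original parser; the dump file name
 # and the simple template pattern are illustrative assumptions.
 import re
 import xml.etree.ElementTree as ET
 
 DUMP = "enwiki-pages-meta-current.xml"  # placeholder for a local dump file
 BARNSTAR = re.compile(r"\{\{[^{}]*barnstar[^{}]*\}\}", re.IGNORECASE)
 
 barnstars, recipients = 0, set()
 for _, elem in ET.iterparse(DUMP, events=("end",)):
     if elem.tag.rsplit("}", 1)[-1] != "page":
         continue
     title = elem.findtext("{*}title") or ""
     text = elem.findtext("{*}revision/{*}text") or ""
     if title.startswith(("User:", "User talk:")):
         hits = BARNSTAR.findall(text)
         if hits:
             barnstars += len(hits)
             # "User talk:Name/Subpage" -> "Name"
             recipients.add(title.split(":", 1)[1].split("/", 1)[0])
     elem.clear()  # free memory while streaming through the dump
 
 print(f"{barnstars} barnstar templates on pages of {len(recipients)} users")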
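For "Research data for Voice Conversational Agents", the filtering step could be sketched as follows. This is not the original scraper: it assumes the scraped reviews already sit in a CSV file with hypothetical columns "date" (ISO format), "stars", "verified" and "text".

 # Minimal sketch of the review filtering, assuming a pre-scraped CSV;
 # the file name and column names are hypothetical.
 import csv
 from datetime import date
 
 PILOT_START, PILOT_END = date(2016, 8, 10), date(2017, 2, 6)
 KEEP_STARS = {1, 2, 4, 5}  # only 1-, 2-, 4- and 5-star reviews are kept
 
 pilot, formal = [], []
 with open("echo_reviews.csv", newline="", encoding="utf-8") as fh:
     for row in csv.DictReader(fh):
         if row["verified"].strip().lower() != "true":
             continue  # verified purchase reviews only
         if int(row["stars"]) not in KEEP_STARS:
             continue
         d = date.fromisoformat(row["date"])
         if PILOT_START <= d <= PILOT_END:
             pilot.append(row)  # pilot window: Aug 10, 2016 to Feb 6, 2017
         else:
             formal.append(row)  # remainder used for the formal analysis
 
 print(f"{len(pilot)} pilot reviews, {len(formal)} reviews for the formal data")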
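For "Research data talk before you type", the dump scan can reuse the same streaming pattern as the barnstar sketch above; the 10% sampling step could then look like this. The input file of featured-article talk page titles is a hypothetical intermediate output of the scraping step.

 # Minimal sketch of drawing a 10% sample of talk pages for coding;
 # the input and output file names are assumptions.
 import random
 
 with open("featured_talk_pages.txt", encoding="utf-8") as fh:
     talk_titles = [line.strip() for line in fh if line.strip()]
 
 random.seed(0)  # fix the seed so the sample can be reproduced
 sample = random.sample(talk_titles, k=max(1, len(talk_titles) // 10))
 
 with open("coding_sample.txt", "w", encoding="utf-8") as out:
     out.write("\n".join(sample))
 print(f"Selected {len(sample)} of {len(talk_titles)} talk pages for coding")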