Browsing by Subject "crowdsourcing"
Now showing 1 - 6 of 6
Item: Annotating named entities in Twitter data with crowdsourcing (Association for Computational Linguistics, 2010-06-06)
Finin, Tim; Murnane, Will; Karandikar, Anand; Keller, Nicholas; Martineau, Justin; Dredze, Mark
We describe our experience using both Amazon Mechanical Turk (MTurk) and CrowdFlower to collect simple named entity annotations for Twitter status updates. Unlike most genres that have traditionally been the focus of named entity experiments, Twitter is far more informal and abbreviated. The collected annotations and annotation techniques will provide a first step towards the full study of named entity recognition in domains like Facebook and Twitter. We also briefly describe how to use MTurk to collect judgements on the quality of “word clouds.”

Item: Crowdsourced Alternative Healthcare: Mobile Applications for User-Generated Natural Remedies (2018-12)
Uzoma, Ada; Holman, Lucy; Walsh, Greg
University of Baltimore. Yale Gordon College of Arts and Sciences; Master of Science in Interaction Design and Information Architecture
Smartphone applications have the ability to improve population health, mostly because of their widespread use; their rapid technological advancements and updates; and their use of features such as geolocation, video and audio recording, and internet access. In recent years, the internet has given rise to a phenomenon known as user-generated content (UGC). This paper explores the usage of natural remedies, user-generated content, and user reviews, and how the three concepts can be leveraged in the creation of a mobile application.

Item: Ensuring data integrity and user retention within BANDIT (2018)
Turner, Brandon; Walsh, Greg; Blodgett, Bridget
Yale Gordon College of Arts and Sciences; Master of Science in Interaction Design and Information Architecture
Data collection is a vital service completed by the government to help ensure accuracy on a range of issues.
Currently the Bird Banding Lab (BBL) is tasked with the collection of bird migratory data across North America. Data collection by the BBL is vital to the continued implementation of policies and regulations that affect far-reaching sectors such as the environment, economy, and healthcare. To collect data sets of the magnitude the BBL is mandated to gather, a large-scale citizen science program has been created, crowdsourcing data from a large group of users who voluntarily submit it. While crowdsourcing has proven to be a powerful tool, it does not come without its own set of issues, in particular keeping participants engaged and keeping data accurate. This paper uses user research to examine the issues plaguing the BANDIT system and attempts to provide solutions on how the organization should address them.

Item: Integrating Machine Learning into the UX Design Process (2018-12)
Ahmed, Tauhid; Walsh, Greg; Blodgett, Bridget
University of Baltimore. School of Information Arts and Technologies; Master of Science in Interaction Design and Information Architecture
The following paper discusses how machine learning is becoming the new user experience tool for designers. Throughout the last decade, machine learning has vastly improved the user experience and human-machine interfacing by permitting machines to learn who we are and how we would like our systems to communicate with us. Machine learning opens the door for user interface and user experience design opportunities that could further meet users’ needs. To explore this phenomenon, Coupon Buddy was designed using a prototyping strategy to explore how machine learning could classify comments and adapt to user interaction and feedback. More specifically, the application functioned as a research channel to observe how UX designers could improve design processes for better user experiences through the accumulation of machine learning.
Coupon Buddy was designed to allow users to save all their coupons in one place and use them for their shopping needs. Not only did the creation of Coupon Buddy prototypes allow us to investigate how much knowledge of machine learning our participants already had, but it also facilitated ideas for how machine learning corresponds to a stronger UX design approach.

Item: Mobile Technology for Nonprofits: Harnessing the Power of Crowdsourcing (2011-12)
Chin, Michelle Toyo; Salter, Anastasia
University of Baltimore. School of Information Arts and Technologies; University of Baltimore. Master of Science in Information Design and Information Architecture
A mobile app is one of the easiest ways a nonprofit can increase awareness, raise funds, and promote events. The portability and popularity of smartphones can strengthen the connection between users and the nonprofit they support. By creating an app that empowers users to easily participate and engage, more people might be willing to take action and spread the word.

Item: Recovered Memories: Bringing the Air Force Archive Into the Digital Age (2023-05-31)
Byrd, David Allan; Kohl, Deborah; Blodgett, Bridget; Wolf, Richard
University of Baltimore. Yale Gordon College of Arts and Sciences; University of Baltimore. Doctor of Science in Information and Interaction Design
Recruiting unpaid volunteers through a “crowdsourcing” technique has become a near-ubiquitous tactic of libraries, archives, and other institutions seeking to textually digitize their analog holdings. Little research has examined which of the following correlate with higher performance in that task: 1) certain demographic characteristics of the volunteers, 2) their familiarity with the topic, 3) their motivation, and 4) the process they use. Recovered Memories investigates nine such variables, both individually and combined.
Optical character recognition (OCR) technology is one automated method for converting text, but it has proven unsatisfactory for creating web content or e-books, mining data, creating data for artificial intelligence (AI) and machine learning (ML) software, and even some search functions. This paper theorized that some of the variables studied would correlate with higher performance. The research project examined the efficacy of a custom-built application to gather data (www.airforcehistory.net). One hundred twelve historic documents from the Air Force Historical Research Agency’s archive were used in the examination to measure participants’ performance. Despite a relatively small sample size (n=50) and the lack of control endemic to field research, the participant variables of “familiarity with U.S. history in Vietnam” (PV5), “process choice” (PV8), “age” (PV2), “group affiliation” (PV7), and “familiarity with U.S. Air Force operations” (PV6) were related. Multiple regression showed three factors correlated with better performance: gender, familiarity with Vietnam, and process selection. The author argues that OCR correction, rather than copying/transcription (i.e., process choice), yields the best performance and might be generalizable. Given the interest at the federal level in textually digitizing the holdings of military archives, this study has strong implications for policy and practice.
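The multiple-regression analysis described in the Recovered Memories abstract can be illustrated with a minimal sketch. The variable names, synthetic data, and effect sizes below are hypothetical stand-ins, not the study's actual data or coefficients; only the general technique (ordinary least squares on participant variables versus a performance score) follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical predictors for n=50 participants (stand-ins for variables
# such as the study's PV2, PV5, PV8; the real data are not reproduced here).
n = 50
gender = rng.integers(0, 2, n)        # binary indicator
familiarity = rng.integers(1, 6, n)   # 1-5 self-rated familiarity score
process = rng.integers(0, 2, n)       # 0 = transcription, 1 = OCR correction

# Synthetic performance score, loosely echoing the paper's finding that
# choosing OCR correction is associated with better performance.
performance = 50 + 3 * familiarity + 8 * process + 2 * gender + rng.normal(0, 4, n)

# Ordinary least squares with an intercept column in the design matrix.
X = np.column_stack([np.ones(n), gender, familiarity, process])
coef, *_ = np.linalg.lstsq(X, performance, rcond=None)

print(dict(zip(["intercept", "gender", "familiarity", "process"], coef.round(2))))
```

With enough participants relative to predictors, the fitted coefficients recover the direction of each synthetic effect; a real analysis would also report significance tests for each coefficient, as the study does.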