Crowdsourcing points by clicking on a web map, with moderation?

I want to build a really lightweight web map interface where a user can click anywhere on a map, create a point, type in a comment -- ideally all via Facebook, but I don't want to get ahead of myself -- and have that comment pop up next to the point on mouse-over. Eventually, a large "crowdsourced" smattering of points with comments will be formed… kind of like this New York Times map.

I have access to everything I need to build something with the ArcGIS API for JavaScript, and some basic knowledge of JavaScript / that API, but am stymied by the need for the following features, in order of importance (1 & 2 are about equal):

  1. A moderator of some sort (me) can go in & remove any inappropriate points/comments.
  2. User X cannot edit any other users' points.
  3. User X who makes a point on the map can go back and edit any of their own points.

It seems the ArcGIS API can help you make a web map that allows a point-editing free-for-all, but nothing like the above features, as far as I know. And I don't know much about the server/database side of web technologies. What kind of technology would even be necessary to:

  • Recognize User X as User X & not User Y?
  • Recognize points a, b, and c as belonging to user X?
  • Let user X edit those points and no other points?

I imagine there would need to be some kind of account creation / logging-in situation, tied to map editing capability? Where would one even begin crafting something like that?
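The core of that ownership model is small enough to sketch in plain JavaScript, independent of any mapping API. Everything here (the user objects, the point fields) is hypothetical, but it shows the checks a server would run to enforce requirements 1 to 3:

```javascript
// Minimal sketch of the ownership rules behind requirements 1-3.
// The user IDs and point shape are made up; in a real app they would
// come from your login system and your geo-enabled database.

function canEdit(user, point) {
  // Requirements 2 & 3: only the creator may edit their own points,
  // while a moderator may edit or remove anything (requirement 1).
  return user.isModerator || point.ownerId === user.id;
}

function removePoint(points, user, pointId) {
  // Returns a new list with the point removed, or the list unchanged
  // if the user is not allowed to touch it.
  return points.filter(function (p) {
    return p.id !== pointId || !canEdit(user, p);
  });
}

// Example: a moderator can remove anyone's point; user "x" cannot
// remove user "y"'s point.
var points = [
  { id: 1, ownerId: "x", comment: "pothole here" },
  { id: 2, ownerId: "y", comment: "nice view" }
];
var moderator = { id: "mod", isModerator: true };
var userX = { id: "x", isModerator: false };

console.log(removePoint(points, userX, 2).length);     // 2 (blocked)
console.log(removePoint(points, moderator, 2).length); // 1 (removed)
```

The point is that "recognizing points a, b, and c as belonging to user X" is just a stored `ownerId` column plus a check like `canEdit` run on the server before any edit is accepted.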

For functionality, Crowdmap (like this example here) does exactly what I'm looking for in terms of the key requirements 1 & 2, but I want something a lot simpler… just one click on the map, type your comment in a box, and you're done; no clicking through to "report submission" forms with tons of fields and sending it off somewhere to get processed.

For aesthetics & some functionality, I really like the look & simplicity of this cool map, which also fulfills 1 & 2, but I have no idea how user-submitted polygons are being read and presumably appended to a geo-enabled table/database. In fact, that very part seems crucially incomplete, because the things you draw on the map don't seem to actually get saved.

Would love to get pointed in the right direction, as I would be thrilled to know how to do this! Or even what I need to learn more broadly.

As I understand it, Esri's IdentityManager is designed for managing users:

This class provides the framework to implement a solution for managing user credentials for (1) ArcGIS Server resources secured using token-based authentication and (2) Secured resources (i.e. web maps).

Your users would need to register for an Esri global account - this is fine for staff members within your organisation, but not so good if you want to allow random users to sign in via Facebook, Google, or your own login system.

You may need to write a "wrapper" which handles the user management aspects of your application.

As a suggestion, you could create an editable feature layer. To incorporate your user management, you'd need functionality which allows edits via the Editor dijit when the user is properly authorised (that's the bit you'd have to write, or perhaps find a plugin which does this).
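The wrapper logic itself can be a small function that maps the signed-in user to a set of allowed edit operations. The user object and the capability names below are assumptions; in a real app the result would decide whether (and with which layerInfos) the Editor dijit gets created at all:

```javascript
// Sketch of the authorisation gate around map editing: compute the
// allowed operations from the signed-in user *before* any editing
// widget is instantiated. Capability names are illustrative only.

function editCapabilitiesFor(user) {
  if (!user) {
    return [];                                     // anonymous: read-only map
  }
  if (user.isModerator) {
    return ["create", "update", "delete"];         // moderator: full control
  }
  return ["create", "update-own", "delete-own"];   // normal user
}

// In the real page you would only build the Editor dijit when this
// list is non-empty, and filter the editable layers accordingly.
console.log(editCapabilitiesFor(null));                  // []
console.log(editCapabilitiesFor({ isModerator: true })); // full list
```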

For example, you could wrap your entire application in a WordPress page, and use something like this user access manager to control who can do what. Or write something in Rails as in this example. (These are both untested - I just mean to find something which already handles common user actions like Register, Sign In, Forgot Password, etc so you don't have to write it yourself.)

In terms of the approval queue, you could write functionality which flagged new edits for you to handle - again there's nothing out-of-the-box for this, to my knowledge.

I recently accomplished almost exactly what you describe using the ArcGIS API for JavaScript, ArcGIS Server 10.2 (this controls the administration and edit tracking), and PostgreSQL. I originally used this (outdated) Flex API example as a template. In this example, the user logs in and creates some features with related attribute information. These features are then stored in the database along with information related to the specific user.

Do you have an ArcGIS Server license? This is a key component as it hosts your feature services and acts as the bridge between your database and the front-end web mapping interface.

GeoServer is a free alternative to ArcGIS Server, but the learning curve is a bit steeper.

Well, I'm going to "answer" my own question -- while the below isn't an ideal solution, it is pretty darn close and meets all 3 of the requirements I outlined in the question. Note that the limitations were: no immediately available access to our own map server (as it turned out); no know-how to write my own user registration system in only a couple of days; and no readily available database back end to store the data or user account info anyway.

First, I set up a Google spreadsheet and created a form in HTML that would submit its values into this spreadsheet. That would be kind of like my "database", that I'd have full moderator access to. Upon submission, Google has built-in functionality to let you go back and edit your own submission, as long as you don't leave the site. You can't mess with anyone else's submissions.

Then, I used the Google Maps JavaScript API to embed a map above the form. When the user clicks the map (within a restricted area delineated by a polygon I threw in there), it automatically writes the coordinates of their click to two fields in the form (and puts a marker there, to let them know that's where they clicked).
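The click-restriction part can be sketched without the Maps API at all: a plain ray-casting point-in-polygon test plus a handler that only fills the form's coordinate fields when the click lands inside the allowed area. (In the real page the API's own geometry library can do this; the polygon and field names below are made up.)

```javascript
// Ray-casting point-in-polygon check, used to keep clicks inside a
// restricted area before writing coordinates into the form.

function insidePolygon(point, polygon) {
  // point: {lat, lng}; polygon: array of {lat, lng} vertices.
  var inside = false;
  for (var i = 0, j = polygon.length - 1; i < polygon.length; j = i++) {
    var a = polygon[i], b = polygon[j];
    var crosses = (a.lat > point.lat) !== (b.lat > point.lat) &&
      point.lng < (b.lng - a.lng) * (point.lat - a.lat) / (b.lat - a.lat) + a.lng;
    if (crosses) inside = !inside;
  }
  return inside;
}

// On click: only fill the form's coordinate fields when in bounds.
function handleClick(click, area, form) {
  if (!insidePolygon(click, area)) return false;
  form.lat = click.lat.toFixed(6);
  form.lng = click.lng.toFixed(6);
  return true;
}

var area = [{lat: 0, lng: 0}, {lat: 0, lng: 10}, {lat: 10, lng: 10}, {lat: 10, lng: 0}];
var form = {};
console.log(handleClick({lat: 5, lng: 5}, area, form));   // true
console.log(handleClick({lat: 20, lng: 5}, area, form));  // false
```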

When they submit, it's written to the spreadsheet, which I can then pull down and map however I want. So, collecting the crowdsourced points is decoupled from actually mapping them, but that could probably be easily remedied as well.
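Closing that loop could be as simple as publishing the sheet as CSV and parsing the coordinate columns back into point objects before plotting them. A naive sketch (column names are assumptions, and a real comment containing commas would need a proper CSV parser):

```javascript
// Turn the spreadsheet's CSV export back into mappable point objects.
// Naive split on commas: fine for coordinates, but comments containing
// commas would need a real CSV parser.

function csvToPoints(csv) {
  var lines = csv.trim().split("\n");
  var header = lines[0].split(",");
  var latCol = header.indexOf("lat");
  var lngCol = header.indexOf("lng");
  var commentCol = header.indexOf("comment");
  return lines.slice(1).map(function (line) {
    var cells = line.split(",");
    return {
      lat: parseFloat(cells[latCol]),
      lng: parseFloat(cells[lngCol]),
      comment: cells[commentCol]
    };
  });
}

var csv = "timestamp,lat,lng,comment\n2013-01-01,40.1,-88.2,pothole";
console.log(csvToPoints(csv));
// one point: { lat: 40.1, lng: -88.2, comment: "pothole" }
```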

I wanted something that could potentially take advantage of my own database and ArcGIS Server, so Stephen's answer above did a great job of pointing the way for what I should learn next, but I think this was a great solution given the time and tech constraints. Hope someone finds it helpful.

ArcGIS Online for Organizations allows you to track edits and restrict edits to feature owners; the documentation states:

You can have ArcGIS Online keep track of who created the features in the published feature layer and restrict access accordingly. To track the edits, follow the steps to edit web layer details and check Keep track of who created and last updated features.

In some scenarios, you may want to allow someone to delete or modify the features they created but not delete or modify others' features. This might be the case with volunteered geographic information (VGI) apps in which you want to limit the control each contributor has over the data. To restrict feature modification to just the person who created the feature, check Editors can only update and delete the features they add.

Edits are not tracked if you choose to make the service public.

As the administrator of your organization, you still maintain full editing control over the feature layer. The only downside is that, per the terms of service, you will need an AGOL named-user login credential for each contributor.
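For reference, those two checkboxes correspond to flags in the hosted feature layer's service definition, roughly like the following (property names quoted from memory; check your own service's REST admin page before relying on them):

```json
{
  "editorTrackingInfo": {
    "enableEditorTracking": true,
    "enableOwnershipAccessControlForEditors": true,
    "allowOthersToUpdate": false,
    "allowOthersToDelete": false
  }
}
```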

Crowdsourced incident reporting—a feature already available in Google Maps and Waze—is coming to Apple Maps: the beta release of iOS 14.5 enables users to report accidents, road hazards and speed checks, with Siri and CarPlay integration. More at CNET’s Roadshow and MacRumors, among others. The final, public release of iOS 14.5 should come out some time in the spring, I think.

In the summer of 2019, a research project spearheaded by Monument Lab asked St. Louis residents and visitors to draw personal maps of the city’s monuments and important sites. “Some maps celebrate famous sites like the St. Louis Zoo and the statue of St. Louis himself atop Art Hill in Forest Park. Others point to things that have been removed from the landscape, like the mounds built by native Mississippians,” St. Louis Public Radio reports. “Another shows a street map of downtown St. Louis with notations for ‘incidents of racism, from microaggression to racial violence.’” A total of 750 people contributed maps, which you can see at this Flickr gallery as well as on the project website, which has accompanying data and analysis. [Osher]

Great examples of Crowdsourcing

Crowdsourcing has grown in popularity over the last few years, possibly because companies have realised that the B2C relationship has changed and consumers now hold the power. Instead of fighting or denying that shift, many have chosen to partner with their customers and use it to their advantage. It’s been a wise move for many, and is something you should seriously consider for your business. Take a look at these great examples of Crowdsourcing for a little inspiration.

#1. Waze

One of the most successful crowd-powered start-ups is Waze. It’s an app that allows users to report traffic jams and automatically gives directions for the best route to take. Waze crowdsources information by measuring drivers’ speeds to determine traffic jams and by asking users to report road closures.

It’s a great app that proves a dedicated crowd is sometimes all a company needs. It also attracted some big-name investors and suitors.

#2. McDonalds Burger builder

In 2014, McDonalds decided to give their customers free rein to submit ideas for the types of burgers they’d like to see in store. They could create their perfect burgers online, and the rest of the country could vote for the best ones. In Germany, creators were also encouraged to create their own campaigns, which included viral videos and other valuable content marketing, which of course cost McDonalds nothing.

Once the winners were crowned, McDonalds released the burgers weekly, along with the picture and short bio of the creator.

#3. My Starbucks idea

Starbucks has a strong presence on multiple social networks, and regularly encourages consumers to submit, view and discuss submitted ideas along with employees from various Starbucks departments. They even have a website dedicated to this very purpose, which includes a leader board to track which customers are most active.

Experimentation and social media, together with customer engagement and market research, make a cocktail that has helped the brand excel.

#4. Lego

Toy company Lego is responsible for probably one of the best examples of Crowdsourcing we’ve seen. The company allows users to design new products and, at the same time, test the demand. Any user can submit a design that other users are able to vote for. The idea with the most votes gets moved to production, and the creator receives a 1% royalty on the net revenue.

Lego has been successful in increasing the number of product ideas while also improving customer engagement. And this specific kind of engagement generates a certain buzz that’s difficult to recreate by any other method. Just like McDonalds, creators take it upon themselves to promote their idea, and in doing so promote Lego as a company, too.

#5. Samsung

Even the big guys such as Samsung realise the value of Crowdsourcing. Samsung has the largest Crowdsourcing facility in Palo Alto. What they seek from others is innovative solutions for existing electronic products and technologies. They also seek collaboration with other firms and interested individuals.

In 2013, Samsung partnered with product development platform Marbler to crowdsource ideas on how it could utilise newly discovered patents from NASA. They offered users the chance to help create the company’s next product and earn a share of revenue along the way.

#6. Lays

The chip manufacturer certainly reaped the rewards of its ten-month-long ‘Do Us A Flavour’ Crowdsourcing campaign. It encouraged consumers to create their very own chip flavour and, just like the others, let people vote for their favourite.

#7. Pebble

Pebble only exists because of Crowdfunding: it used Kickstarter to raise funds for the development of the technology that soon became the Pebble Smartwatch, and then the Pebble Time Steel. The company grew from a simple idea on the popular fundraising website Kickstarter to a brand that has crowd-produced a product that rivals the likes of Apple and Samsung.

The Pebble Smartwatch is Kickstarter’s biggest crowdfunding success to date, but not only did Pebble crowdfund this project, they also crowdsourced it by encouraging people to get on board and share their knowledge and talents. Of course ‘backers’ (that’s funders to you and me) receive rewards once the product is produced: they either get it at a discounted price, or they get it before anyone else.

#8. Greenpeace

One of the easiest and most popular ways of Crowdsourcing is to crowdsource ads. Greenpeace turned heads in 2012, when they crowdsourced environmental activist quotes for their Shell Oil “Let’s Go” advertisements.

They ran a contest to get controversial, sarcastic and satirical quotes from their followers and then used them on advertisements targeting oil company Shell. An example for you: “The ice caps won’t melt themselves, people. Let’s Go.”

#9. Airbnb

You could say that Airbnb’s whole business model is based on Crowdsourcing – it’s essentially a travel website that allows individuals to let out their homes all over the world. If it weren’t for its users, there’d be no site.

More recently, however, they teamed up with eYeka on a Crowdsourcing project that asked filmmakers from all over the world to create fresh, authentic video content about the places they call home. The videos had to be 60 seconds long, and the winners shared €20,000. But this isn’t the first time they’ve crowdsourced content: in 2013 they asked users to submit scripted shots from all over the world in the form of Vines, via Twitter. They then put the clips together, named it Hollywood & Vines, and used it as a TV ad.

From these campaigns, Airbnb has not only acquired millions of pieces of unique content that add quality and authentic value to the brand, it has also saved itself a substantial amount of money.

There are even more great examples of Crowdsourcing if you look into it, and lots of ways you can incorporate Crowdsourcing into your business model, whether you’re a new company just starting out, a fairly established company looking for extra marketing, or a company looking to engage more with your customers. You can go as small or as large as you like with Crowdsourcing, and it’s almost always going to benefit you in some way.

What’s your experience with Crowdsourcing? Is it something you’d consider?

Tweak Your Biz is a global business publication and online community, part of the Small Biz Trends stable of websites.

Using Crowdsourcing to Counter the Spread of False Rumors on Social Media During Crises

My new colleague Professor Yasuaki Sakamoto at the Stevens Institute of Technology (SIT) has been carrying out intriguing research on the spread of rumors via social media, particularly on Twitter and during crises. In his latest research, “Toward a Social-Technological System that Inactivates False Rumors through the Critical Thinking of Crowds,” Yasu uses behavioral psychology to understand why exposure to public criticism changes rumor-spreading behavior on Twitter during disasters. This fascinating research builds very nicely on the excellent work carried out by my QCRI colleague ChaTo, who used this “criticism dynamic” to show that the credibility of tweets can be predicted (by topic) without analyzing their content. Yasu’s study also seeks to find the psychological basis for Twitter’s self-correcting behavior identified by ChaTo and also John Herman, who described Twitter as a “Truth Machine” during Hurricane Sandy.

Twitter is still a relatively new platform, but the existence and spread of false rumors is certainly not. In fact, a very interesting study from 1950 found that “in the past 1,000 years the same types of rumors related to earthquakes appear again and again in different locations.” Early academic studies on the spread of rumors revealed that “psychological factors, such as accuracy, anxiety, and importance of rumors, affect rumor transmission.” One such study proposed that the spread of a rumor “will vary with the importance of the subject to the individuals concerned times the ambiguity of the evidence pertaining to the topic at issue.” Later studies added “anxiety as another key element in rumormongering,” since “the likelihood of sharing a rumor was related to how anxious the rumor made people feel.” At the same time, however, the literature also reveals that countermeasures do exist. Critical thinking, for example, decreases the spread of rumors. The literature defines critical thinking as “reasonable reflective thinking focused on deciding what to believe or do.”

“Given the growing use and participatory nature of social media, critical thinking is considered an important element of media literacy that individuals in a society should possess.” Indeed, while social media can “help people make sense of their situation during a disaster, social media can also become a rumor mill and create social problems.” As discussed above, psychological factors can influence rumor spreading, particularly when experiencing stress and mental pressure following a disaster. Recent studies have also corroborated this finding, confirming that “differences in people’s critical thinking ability […] contributed to the rumor behavior.” So Yasu and his team ask the following interesting question: can critical thinking be crowdsourced?

“Not everyone needs to be a critical thinker all the time,” writes Yasu et al. As long as some individuals are good critical thinkers in a specific domain, their timely criticisms can result in an emergent critical thinking social system that can mitigate the spread of false information. This goes to the heart of the self-correcting behavior often observed on social media and Twitter in particular. Yasu’s insight also provides a basis for a bounded crowdsourcing approach to disaster response. More on this here, here and here.

“Related to critical thinking, a number of studies have paid attention to the role of denial or rebuttal messages in impeding the transmission of rumor.” This is the more “visible” dynamic behind the self-correcting behavior observed on Twitter during disasters. So while some may spread false rumors, others often try to counter this spread by posting tweets criticizing rumor-tweets directly. The following questions thus naturally arise: “Are criticisms on Twitter effective in mitigating the spread of false rumors? Can exposure to criticisms minimize the spread of rumors?”

Yasu and his colleagues set out to test the following hypotheses: exposure to criticisms reduces people’s intent to spread rumors, and exposure to criticisms lowers the perceived accuracy, anxiety, and importance of rumors. They tested these hypotheses on 87 Japanese undergraduate and graduate students by using 20 rumor-tweets related to the 2011 Japan Earthquake and 10 criticism-tweets that criticized the corresponding rumor-tweets. For example:

Rumor-tweet: “Air drop of supplies is not allowed in Japan! I though it has already been done by the Self- Defense Forces. Without it, the isolated people will die! I’m trembling with anger. Please retweet!”

Criticism-tweet: “Air drop of supplies is not prohibited by the law. Please don’t spread rumor. Please see 4-(1)-4-.”

The researchers found that “exposing people to criticisms can reduce their intent to spread rumors that are associated with the criticisms, providing support for the system.” In fact, “Exposure to criticisms increased the proportion of people who stop the spread of rumor-tweets approximately 1.5 times [150%]. This result indicates that whether a receiver is exposed to rumor or criticism first makes a difference in her decision to spread the rumor. Another interpretation of the result is that, even if a receiver is exposed to a number of criticisms, she will benefit less from this exposure when she sees rumors first than when she sees criticisms before rumors.”

Findings also revealed three psychological factors that were related to the differences in the spread of rumor-tweets: one’s own perception of the tweet’s accuracy, the anxiety caused by the tweet, and the tweet’s perceived importance. The results also indicate that “exposure to criticisms reduces the perceived accuracy of the succeeding rumor-tweets, paralleling the findings by previous research that refutations or denials decrease the degree of belief in rumor.” In addition, the perceived accuracy of criticism-tweets by those exposed to rumors first was significantly higher than in the criticism-first group. The results were similar vis-à-vis anxiety. “Seeing criticisms before rumors reduced anxiety associated with rumor-tweets relative to seeing rumors first. This result is also consistent with previous research findings that denial messages reduce anxiety about rumors. Participants in the criticism-first group also perceived rumor-tweets to be less important than those in the rumor-first group.” The same was true vis-à-vis the perceived importance of a tweet. That said, “When the rumor-tweets are perceived as more accurate, the intent to spread the rumor-tweets is stronger; when rumor-tweets cause more anxiety, the intent to spread the rumor-tweets is stronger; when the rumor-tweets are perceived as more important, the intent to spread the rumor-tweets is also stronger.”

So how do we use these findings to enhance the critical thinking of crowds and design crowdsourced verification platforms such as Verily? Ideally, such a platform would connect rumor-tweets with criticism-tweets directly. “By this design, the information system itself can enhance the critical thinking of the crowds.” That said, the findings clearly show that sequencing matters, that is, being exposed to rumor-tweets first vs criticism-tweets first makes a big difference vis-à-vis rumor contagion. The purpose of a platform like Verily is to act as a repository for crowdsourced criticisms and rebuttals, that is, crowdsourced critical thinking. Thus, the majority of Verily users would first be exposed to questions about rumors, such as: “Has the Vincent Thomas Bridge in Los Angeles been destroyed by the Earthquake?” Users would then be exposed to the crowdsourced criticisms and rebuttals.

In conclusion, the spread of false rumors during disasters will never go away. “It is human nature to transmit rumors under uncertainty.” But social-technological platforms like Verily can provide a repository of critical thinking and educate users on critical thinking processes themselves. In this way, we may be able to enhance the critical thinking of crowds.

  • Wiki on Truthiness resources (Link)
  • How to Verify and Counter Rumors in Social Media (Link)
  • Social Media and Life Cycle of Rumors during Crises (Link)
  • How to Verify Crowdsourced Information from Social Media (Link)
  • Analyzing the Veracity of Tweets During a Crisis (Link)
  • Crowdsourcing for Human Rights: Challenges and Opportunities for Information Collection & Verification (Link)
  • The Crowdsourcing Detective: Crisis, Deception and Intrigue in the Twittersphere (Link)

A Mobile Mapping Roundup

Rerouting. Lifehacker talks about how to prevent mapping apps from rerouting you on the fly, and lists some options. [R. E. Sieber]

Traffic. Traffic congestion is a key feature of mobile mapping, and predicting it involves looking at historical data. CityLab reports on a recent study suggesting that time-of-day electricity usage patterns can be used to predict traffic congestion patterns. (A household that starts using power earlier in the morning gets up earlier and presumably will go to work earlier.) It’s another variable that can be put to use in traffic modelling.

Trail difficulty. OpenStreetMap doesn’t differentiate between “walk-in-the-park” trails and mountaineering routes, and that may have had something to do with hikers needing to be rescued from the side of a British Columbia mountain recently. The hikers apparently used OSM on a mobile phone app, and in OSM trail difficulty is an optional tag. The wisdom of using OSM in safety-critical environments notwithstanding, this is something that OSM editors need to get on. [Ian Dees]

Social media, students and digital footprints (PTAS research findings)

Thursday, 22nd October 2015, 2 – 3.30pm, IAD Resources Room, 7 Bristo Square, George Square, Edinburgh.

“This short information and interactive session will present findings from the PTAS Digital Footprint research

In order to understand how students are curating their digital presence, key findings from two student surveys (1457 responses) as well as data from 16 in-depth interviews with six students will be presented. This unique dataset provides an opportunity for us to critically reflect on the changing internet landscape and take stock of how students are currently using social media, how they are presenting themselves online, and what challenges they face, such as cyberbullying, viewing inappropriate content, or whether they have the digital skills to successfully navigate online spaces.

The session will also introduce the next phase of the Digital Footprint research: social media in a learning & teaching context. There will be an opportunity to discuss e-professionalism and social media guidelines for inclusion in handbooks/VLEs, as well as other areas.”

I am also really excited about this event, at which Louise Connelly, Sian Bayne, and I will be talking about the early findings from our Managing Your Digital Footprints project, and some of the outputs from the research and campaign (find these at:

Although this event is open to University staff and students only (register via the Online Bookings system, here), we are disseminating this work at a variety of events, publications, etc. Our recent ECSM 2015 paper is the best overview of the work to date, but expect to see more here in the near future about how we are taking this work forward. Do also get in touch with Louise or me if you have any questions about the project or would be interested in hearing more about it, some of the associated training, or the research findings as they emerge.


Within all those published cases (section 2) and detected usage patterns (section 3), different role patterns have been identified. Research regarding types of users active on social media began by identifying individual roles and proceeded with the development of role typologies. In their literature review, Eismann et al. (2016) state that different actor types make use of social media in similar ways, but perceive different conditions and restrictions for social media usage in disaster situations. These roles and role typologies take either a citizens’ (public) or authorities’ (organizational) perspective and are related to either the real or virtual realm. Based on the analysis of existing roles, this section proposes a role typology matrix for individual and collective roles.

4.1 Citizens, or public perspective

Citizens might be classified in various roles. Hughes and Palen (2009) initially identified information brokers who collect information from different sources to help affected citizens. For Starbird and Palen (2011), the second step was to recognize the actions of remote operators as digital volunteers who progress from simple Internet-based activities like retweeting or translating tweets to more complex ones, for example, verifying or routing information. To further differentiate potential user roles, Reuter et al. (2013) distinguish between activities in the “real” world as opposed to the “virtual” world: real emergent groups (Stallings & Quarantelli, 1985), whose involvement usually takes the form of neighbourly help and work on-site, and virtual digital volunteers (Starbird & Palen, 2011), who originate from the Internet and work mainly online. Ludwig, Reuter, Siebigteroth, and Pipek (2015) build on this and address these groups by enabling the detection of physical and digital activities and the assignment of specific tasks to citizens. Based on a timeline and qualitative analysis of information and help activities during the 2011 Super Outbreak, Reuter et al. (2013) suggest a more specific classification of Twitter users into different roles: helper, reporter, retweeter, repeater and reader. Kaufhold and Reuter (2014) additionally suggested the role of the moderator.

Furthermore, according to Blum et al. (2014), three roles contribute to collective sensemaking in social media: the inspectors, who define the boundaries of events; the contributors, who provide media and witness statements and construct rich but agnostic grounded evidence; and the investigators, who conduct sensemaking activities to arrive at a broad consensus of event understanding and promote situation awareness. Table 2 presents terms that authors have used to describe different (overlapping) social media users in crisis from the public perspective.

References | Role | Description
Stallings and Quarantelli (1985) | Emergent groups | “Private citizens who work together in pursuit of collective goals relevant to actual or potential disasters […]” – not actually a social media role, but still important.
Gorp (2014) | V&TC | Virtual & Technical Communities with expertise in data processing and technologies development; have potential to inform aid organizations.
Starbird and Palen (2011) | Digital volunteers | Element of the phenomenon popularly known as crowdsourcing during crises. In the Twittersphere, they are called Voluntweeters.
Wu, Hofman, Mason, and Watts (2011) | Celebrities | Celebrities are among the most followed elite users.
Reuter et al. (2013) | Helper | Provide emotional assistance and recommendations for action, offer and encourage help; are involved in virtual and real activities.
Reuter et al. (2013) | Reporter | Integrate external sources of information, thus providing generative and synthetic information as a news channel or eyewitness.
Reuter et al. (2013) | Retweeter | Distribute important derivative information to followers or users; correspond with the information broker (Hughes & Palen, 2009).
Reuter et al. (2013) | Repeater | Generate, synthesize, repeat and distribute a certain message to concrete recipients.
Reuter et al. (2013) | Reader | Passive information-catching participants who are interested in or affected by the situation.
Kaufhold and Reuter (2014) | Moderator | Establishes supportive platforms, mediates offers and requests, mobilizes resources and integrates information.

4.2 Authorities, or organizational perspective

While the previous role descriptions and models address the public use of social media, Bergstrand, Landgren, and Green (2013) examined the utilization of Twitter by authorities and suggest an account typology containing high-level formal organizational accounts, accounts for formal functions and roles, formal personal accounts, and affiliated personal accounts. Furthermore, Reuter, Marx, and Pipek (2011) proposed community scouts as amateur “first informers” to overcome the perceived unreliability of social media information for authorities, and St. Denis and Hughes (2012) describe the use of trusted digital volunteers in virtual teams during the 2011 Shadow Lake fire to inform a Type I incident management team about social media activities. On a higher level, Ehnis, Mirbabaie, Bunker, and Stieglitz (2014) distinguish media organizations, emergency management agencies (EMAs), commercial organizations, political groups, unions and individuals.

From an emergency services’ perspective, the German Red Cross contributed the definition of unbound helpers: nonaffected citizens who mobilize and coordinate their relief activities autonomously and event-related, especially via social media (DRK, 2013). Accordingly, Kircher (2014) groups helpers by their form of organization as well as their spatial and social affectedness by the catastrophic event into four categories: self-helpers and neighbourhood helpers (I); unbound helpers, ad hoc helpers and spontaneous helpers (II); preregistered helpers and first responders (III); and honorary office and full-time helpers in disaster management (IV). Detjen, Volkert, and Geisler (2016) further specify the characteristics of these helper groups. Hence, unbound helpers (I, II) conduct reactive and (partially) bound helpers (III, IV) proactive activities. From I to IV, prosocial behaviour evolves from spontaneous to sustainable characteristics, the helping process grows in terms of long-term, continuous, plannable, involved, professional and formal engagement, and the helper properties increase in awareness, commitment, experience and professionalism. Table 3 presents terms that authors have used to describe different (overlapping) social media users in crises from the organizational perspective.

References | Role | Description
Olteanu et al. (2015) | Media organizations | Traditional or Internet media have a large presence on Twitter, in many cases accounting for more than 30% of the tweets.
Ehnis et al. (2014) | Commercial organizations | Publish a rather small number of messages, for example containing humorous marketing messages.
Olteanu et al. (2015) | Government | A relatively small fraction of tweets comes from government officials and agencies, because they must verify information.
Reuter et al. (2011) | Community scouts | Proposed as amateur "first informers" to overcome the perceived unreliability of social media information for authorities.
St. Denis and Hughes (2012) | Trusted digital volunteers | Used during the 2011 Shadow Lake fire in virtual teams to inform a Type I incident management team about social media activities.
Bergstrand et al. (2013) | High-level formal organizational accounts | Used to formally inform the public about ongoing events in a unidirectional way of communication.
Bergstrand et al. (2013) | Accounts for formal functions and roles | Distribute information about certain entities, retweet other civil security actors, and maintain bidirectional communication.
Bergstrand et al. (2013) | Formal personal accounts | Disseminate role-specific information and references to official work or current topics.
Bergstrand et al. (2013) | Affiliated personal accounts | Used for an expressive dissemination of information, personal opinions, reflections, and social conversation.
Kircher (2014) | Self-helpers and neighbourhood helpers | Directly affected by the event and work on overcoming it with or without organizational forces.
Kircher (2014) | Unbound, ad hoc, and spontaneous helpers | Come from areas that are not directly affected, are motivated by news and media, and work self-organized or within an organization.
Kircher (2014) | Preregistered helpers and first responders | Have registered themselves before the event and contribute personal but no special disaster-control qualifications.
Kircher (2014) | Honorary office and full-time helpers | Trained in specific tasks for disaster control.

4.3 Towards a classification of roles related to social media use

The literature review on roles and role typologies reveals two constant dimensions along which a classification of roles seems suitable. Identified roles either (a) affiliate with the citizens’ (public) or authorities’ domain (Reuter et al., 2012) or (b) perform their activities in the real (Stallings & Quarantelli, 1985) or virtual realm (Reuter et al., 2013). Adopting a matrix layout, four different role patterns may be distinguished by considering the realm of the role's action (x-axis) and the affiliation of the role (y-axis). The idea of the role typology matrix (Figure 2) is to provide an overview, to encourage systematic analysis and development of role patterns, and to promote the successful implementation of roles in the public and organizational domains. However, there are further criteria to be considered in the classification of role patterns. In the literature, roles are often defined according to the research interest or unit of analysis, for example, collective sensemaking (Blum et al., 2014) or self-help activities (Reuter et al., 2013). Further criteria are types of activities (e.g., information processing), the status of the user (elite or ordinary), administrative autonomy (unbound or (partially) bound), coordination (instructed or self-coordinated), and personal skills (none, personal, or disaster-specific).

Emergent groups, which include people “whose organization has not yet become institutionalized” (Stallings & Quarantelli, 1985), represent the public-real response. Typical roles of this pattern are affected citizens, self-helpers and neighbourhood helpers. The public-virtual response, in turn, is best characterized by Virtual and Technical Communities (V&TCs), which “provide disaster support with expertise in geographic information systems, database management, social media, and online campaigns” (Gorp, 2014). Roles like celebrities, digital volunteers, readers, repeaters and retweeters fit this pattern. However, because emergent groups and V&TCs potentially collaborate (horizontally) in the course of an emergency (Kaufhold & Reuter, 2016), there are roles performing activities in both realms, for example, different types of helpers, media or reporters. Additionally, moderators even seek direct collaboration with authorities.

Regarding the real-authority response, Incident Management Teams perform on-the-ground operations aiming “to save human lives, mitigate the effect of accidents, prevent damages, and restore the situation to the normal order” (Chrpa & Thórisson, 2013). To integrate the virtual-authority response, emergency services deploy Virtual Operations Support Teams (VOST) adapting “to the need for emergency management participation in social media channels during a crisis, while also having that activity support but not interfere with on-the-ground operations” (St. Denis & Hughes, 2012). For this activity, official personnel or roles like community scouts or trusted digital volunteers are considered. Furthermore, to cover both the real and virtual realms in authorities, horizontal collaboration is required. For instance, incident managers are required to synthesize real and virtual information in the decision-making process. Besides that, different kinds of vertical collaboration take place during emergencies. During the 2013 European floods, for instance, emergent groups and incident teams worked together to overcome the emergency (Kaufhold & Reuter, 2016). However, because virtual communities on Facebook and Twitter influenced the work of emergent groups, a collaboration between authorities and citizens became necessary to coordinate relief efforts. Therefore, moderators closely collaborated with authorities to eventually fulfil the role of trusted digital volunteers.

Another Geolocation Horror Show, This Time from South Africa

Remember the farm in Kansas that, thanks to an error in MaxMind’s geolocation database, became the default physical location for any IP address in the United States that couldn’t be resolved? It’s happened again, this time to a couple in Pretoria, South Africa, who received online and physical threats and visits from the police because IP addresses that were from Pretoria, but whose precise location couldn’t be resolved any further, defaulted to their front yard. Kashmir Hill, who covered the Kansas incident, has the story for Gizmodo. It’s a fascinating long read that burrows into the sources of geolocation data and the problematic ways in which it’s used.

In this case the problem was traced to the National Geospatial-Intelligence Agency, which assigned the lat/long coordinates for Pretoria to this family’s front yard. The end result: one home becomes the location for one million IP addresses in Pretoria. (The NGA has since changed it.)

The problem here is twofold. First, a failure to account for accuracy radius: a city or a country is represented by a single, precise point at its centre. That’s a real problem when the data point being geotagged can’t be more specific than “Pretoria” or “United States,” because the geotagging is made artificially precise: it’s not “somewhere in Pretoria,” it’s this specific address. Second is the misuse of IP location data. It’s one thing to use a web visitor’s IP address to serve them local ads or to enforce geographical restrictions on content, quite another to use that data for official or vigilante justice. The data, Hill points out, isn’t good enough for that. [MetaFilter]
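The accuracy-radius failure has a simple programmatic guard. As a minimal sketch (the function name, fields, and the one-kilometre threshold are illustrative assumptions, not part of any real geolocation API), a consumer of IP-location data could refuse to treat a city- or country-level fix as a street address:

```python
def usable_for_address_level(lat, lon, accuracy_radius_km, max_radius_km=1.0):
    """Treat (lat, lon) as a street-level location only when the provider's
    stated accuracy radius is tight enough to justify that precision."""
    return accuracy_radius_km <= max_radius_km

# A city-level fix ("somewhere in Pretoria", tens of km of uncertainty)
# should never be rendered as one family's front yard:
assert not usable_for_address_level(-25.75, 28.19, accuracy_radius_km=25)
# A genuinely tight fix may be:
assert usable_for_address_level(-25.75, 28.19, accuracy_radius_km=0.5)
```

In other words, any pipeline that drops the radius and keeps only the point is manufacturing the false precision Hill describes.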

6. Discussion & Future Work

The hidden Markov model is a general framework that is widely used for modeling sequence data in areas such as natural language processing (Manning and Schütze, 1999), speech recognition (Jelinek, 1997; Rabiner and Juang, 1993), and biological sequencing (Durbin et al., 1998; Sonnhammer et al., 1998). However, we demonstrate its utility for modeling interest from interaction with a visualization system. There are many possible variations for the model, the implementation, and parameter settings. Examples include choices for the diffusion parameters, the number of particles for the particle filter, and prediction set sizes. A designer may tune these parameters or customize them based on the visualization or task. We see this as a strength of the approach, which can seed many opportunities for future work.
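Since the implementation details are not reproduced here, the following is only a schematic sketch of the kind of bootstrap particle filter such a model might use; the two-dimensional state space, diffusion value, and Gaussian observation model are all illustrative assumptions rather than the paper's actual settings.

```python
import math
import random

def particle_filter_step(particles, observation, diffusion=0.05, obs_sigma=0.1):
    """One bootstrap-filter step: diffuse particles (transition model),
    weight them by how well they explain the observed interaction point
    (observation model), then resample in proportion to the weights."""
    # 1. Diffusion: jitter each particle with Gaussian noise.
    moved = [(x + random.gauss(0, diffusion), y + random.gauss(0, diffusion))
             for (x, y) in particles]
    # 2. Weighting: Gaussian likelihood of the observed point.
    ox, oy = observation
    weights = [math.exp(-((x - ox) ** 2 + (y - oy) ** 2) / (2 * obs_sigma ** 2))
               for (x, y) in moved]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # 3. Resampling: draw a new particle set proportional to the weights.
    return random.choices(moved, weights=weights, k=len(moved))

# Particles concentrate near repeated observations after a few steps.
particles = [(random.random(), random.random()) for _ in range(500)]
for _ in range(10):
    particles = particle_filter_step(particles, observation=(0.3, 0.7))
mean_x = sum(p[0] for p in particles) / len(particles)
mean_y = sum(p[1] for p in particles) / len(particles)
assert abs(mean_x - 0.3) < 0.15 and abs(mean_y - 0.7) < 0.15
```

Tuning the diffusion and observation parameters trades responsiveness against stability, which is exactly the kind of designer-facing choice described above.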

Although the evaluation uses a single interface, we posit that the approach in this paper is generalizable under transparent assumptions. We leverage data mapping principles and the notion that we can represent a visualization as a set of primitive visual marks and channels. Designers can apply the approach to any visualization that can be specified in this manner. The model assumes that the visual marks are perceptually differentiable, and it relies heavily on good design practices. To specify a user’s evolving attention, we must first carefully define the mark space, M. One way to improve this process is to automatically extract the visual marks and channels from the visualization’s code; however, this is beyond the scope of this paper.
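To make the idea of a mark space concrete, here is a minimal sketch (the field names and example data are hypothetical, not taken from the paper's system) of representing a view as a set of primitive marks with channels, from which an attended subspace can be selected:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Mark:
    """One primitive visual mark and the channels that encode its data."""
    x: float        # position channels
    y: float
    color: str      # categorical channel (e.g., crime type)
    shape: str      # categorical channel

# The mark space M is the set of all marks the view renders; the model's
# hidden state is then a subspace of M (e.g., "the red marks").
M = {
    Mark(0.2, 0.4, color="red", shape="circle"),
    Mark(0.6, 0.1, color="blue", shape="circle"),
    Mark(0.9, 0.8, color="red", shape="square"),
}

attended = {m for m in M if m.color == "red"}
assert len(attended) == 2
```

Any visualization whose marks can be enumerated this way, whatever the rendering library, satisfies the specification requirement described above.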

Modeling attention can be a rich signal for inferring goals, intention and interest (Horvitz, 1999a; Horvitz et al., 2003), and information about users’ current and future attention can be useful for allocating computational resources (Horvitz et al., 2003) or for supporting data exploration. For example, the system can perform pre-computation or pre-fetching based on its predictions. For large datasets that may have overlapping points, a straightforward approach is to redraw the points in the prediction set. Doing so can make it easier for users to interact with points that match their interests but may initially have been occluded by other visual marks. For more passive adaptations, designers can use the approach in this paper to inform techniques for target assistance (Bateman et al., 2011). The bubble cursor technique, for example, does not change the visual appearance of the interface but increases the click radius of a given target, thereby making it more accessible (Grossman and Balakrishnan, 2005). Another possibility is target gravity, which attracts the mouse to the target (Bateman et al., 2011). Future work can explore how these techniques can be utilized to support the user during data exploration and analysis tasks.
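As a rough illustration of the bubble-cursor idea (the coordinates and target list are made up), the selection logic simply lets the activation area grow until it contains the nearest target, with no change to the visuals:

```python
import math

def bubble_cursor_target(cursor, targets):
    """Bubble-cursor selection: the activation area expands dynamically, so
    the target nearest the cursor is always selected, however far away it is,
    while the interface itself looks unchanged."""
    cx, cy = cursor
    return min(targets, key=lambda t: math.hypot(t[0] - cx, t[1] - cy))

targets = [(10, 10), (40, 35), (80, 90)]
# A click at (38, 30) is closest to (40, 35), so that target is acquired
# even though the cursor is not directly on it.
assert bubble_cursor_target((38, 30), targets) == (40, 35)
```

Restricting `targets` to the model's prediction set would bias acquisition toward the marks the user is predicted to care about.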

The general idea of mixed-initiative systems (Allen et al., 1999; Horvitz, 1999a, 1999b, 2007), or of tailoring an interface based on users’ skills or needs, has existed for many years in HCI (Gajos and Weld, 2004). Researchers have explored the tradeoff between providing support and minimizing disruptions (Afergan et al., 2013; Peck, 2014; Solovey et al., 2011; Treacy Solovey et al., 2015). The work in this paper aligns well with this broader research agenda. We believe that the proposed approach is a significant step toward creating tools that can automatically learn and anticipate future actions, and it opens possibilities for future work.

6.1. Future Work

One possible path for future work is to investigate the model’s performance on more complex tasks. In our experiment, we controlled the tasks by instructing participants either to search for a specific reported crime or to identify a pattern in the dataset. While these tasks were designed based on realistic scenarios, they assume that the user has a specific and unchanging goal when interacting with the visualization. As a result, the search patterns we observed may not generalize to open-ended scenarios, or to cases where the user’s interests change while interacting with the data. It is also possible that in some scenarios the user’s attention cannot be represented as a subspace of the visualization’s marks (e.g., attending to negative space). Future work can evaluate the approach with open-ended tasks.

The combination of visual marks and channels is an essential factor when defining the hidden state space for our probabilistic model. The map used in our experiment was simplistic compared to other real-world visual analytics systems. Future work can test the model using different combinations of visual marks and channels on a single map, or on an entirely different type of visualization. It is also common for designers to aggregate the data based on the zoom level of the interface. It is therefore essential to validate the technique by changing and increasing the size of the dataset, which can result in drastic changes in the appearance and number of visual marks.

