News Messenger

RU professor’s work helps identify hazardous online user-generated content

by Mountain Media
February 3, 2021
in Local Stories

Since the dawn of the Internet, web users have raised questions about who is responsible for a website’s third-party content.

RU professor Richard Gruss’ work helps ID online third-party product liability.

It began in the early 1990s with sites publishing user-generated content on bulletin boards. It continues today with social media platforms such as Twitter and Facebook, and online marketplaces such as eBay and Amazon.

It’s not only an issue of right and wrong but also a legal conundrum.

In the late 1990s, “legal scholars began to wonder whether Amazon.com was responsible for products from third-party sellers that were shoddy, illegal, unsafe or misrepresented in the product descriptions,” said Radford University Assistant Professor of Management Richard Gruss.

“Opinion showed signs of converging on ‘yes’ just last year,” Gruss said, “when a California appeals court ruled that Amazon.com was legally liable for defective products sold on its site by third-party sellers.”

That ruling presented a tremendous challenge for Amazon and others that relied on third-party, user-generated content. How could they hire enough people to monitor the enormous amount of content?

Each day, Amazon sells more than 12 million products, each with descriptive text that may range from a couple of sentences to multiple paragraphs. Social media platforms face the same issue, most recently brought to light with Twitter’s rejection of political extremists on its platform. Twitter users send more than 500 million messages a day – roughly 200 billion tweets a year – making it impossible, it seems, for a team of readers to catch all hazardous or problematic content.

The solution, Gruss said, is a solid working relationship between humans and machines.

“It’s just not feasible to hire a team of readers given the workload, so we need reliable automated methods of discovering critical information hidden within mountains of text,” said Gruss, whose educational background in language, computer science and text analytics makes him uniquely qualified for this research.

“But when the potential damage from a false negative is high, it’s not a good idea to rely entirely on automated methods,” Gruss said. “Some optimal combination of machine pre-processing and human judgment is called for.”

For nearly a decade, Gruss has been collaborating with scholars from Loyola Marymount University in Los Angeles, San Diego State University and Virginia Tech to find a solution.

“We applied methods from natural language processing, information theory and supervised machine learning to develop models for identifying safety hazards in Amazon.com reviews,” Gruss said. “We went on to demonstrate their efficacy in finding hazards in children’s toys, baby cribs, dishwashers and over-the-counter medicines.”
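The article does not publish the researchers’ models, but the supervised-learning idea can be illustrated with a deliberately tiny sketch: a bag-of-words Naive Bayes classifier that labels a review snippet as “hazard” or “ok.” Everything here is invented for illustration – the training snippets, labels, and tokenizer are not from Gruss’ work, whose actual models are far more sophisticated.

```python
# Illustrative sketch only: a toy Naive Bayes text classifier for flagging
# hazard language in reviews. Training data below is invented.
import math
from collections import Counter, defaultdict

def tokenize(text):
    return text.lower().split()

def train(examples):
    """examples: list of (text, label) pairs. Returns per-class word counts
    and per-class document counts."""
    word_counts = defaultdict(Counter)
    class_counts = Counter()
    for text, label in examples:
        class_counts[label] += 1
        word_counts[label].update(tokenize(text))
    return word_counts, class_counts

def predict(word_counts, class_counts, text):
    """Pick the class with the highest log-probability for the text."""
    vocab = {w for counts in word_counts.values() for w in counts}
    total_docs = sum(class_counts.values())
    scores = {}
    for label, doc_count in class_counts.items():
        score = math.log(doc_count / total_docs)  # class prior
        total_words = sum(word_counts[label].values())
        for word in tokenize(text):
            # Laplace smoothing so unseen words do not zero out the score.
            score += math.log(
                (word_counts[label][word] + 1) / (total_words + len(vocab))
            )
        scores[label] = score
    return max(scores, key=scores.get)

# Invented toy reviews standing in for real Amazon review data.
training = [
    ("the crib rail broke and my baby got hurt", "hazard"),
    ("small parts came loose a choking danger", "hazard"),
    ("toy overheated and started to smoke", "hazard"),
    ("great toy my kids love it", "ok"),
    ("sturdy crib easy to assemble", "ok"),
    ("works fine good value", "ok"),
]
wc, cc = train(training)
print(predict(wc, cc, "the rail broke loose"))  # prints "hazard"
```

A production system would use richer features and far more data, but the core workflow – label examples, learn word statistics per class, score new text – is the same shape.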

Gruss and his fellow researchers recently began an initiative to augment these back-end statistical methods with a browser extension that alerts shoppers to suspicious language within the reviews for products in which they may be interested.

“This new sequence of experiments is designed to determine the optimal way to present information to the user,” Gruss said. “We hope to zero in on the ideal collaboration between computer algorithms and human judgment, and in the process, we hope to promote public safety.”

This system, Gruss said, can have broader application for any hosted content.

“For example, our models could be used to identify inflammatory misinformation that should be automatically removed,” he said. “For borderline cases, language that might be problematic could be highlighted, summarized or aggregated for the user in real time, and they can use their judgment, be better informed and more wary as to the possible dangers.”
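The triage Gruss describes – automatic removal for clear cases, highlighting borderline cases for human judgment – can be sketched as a simple routing step on a model’s hazard probability. The thresholds and the triage function below are hypothetical, not taken from the researchers’ system.

```python
# Hypothetical sketch: route content by a model's hazard probability.
# Thresholds are invented for illustration.
AUTO_REMOVE = 0.95   # near-certain violations are removed automatically
HUMAN_REVIEW = 0.50  # borderline cases go to a human reviewer

def triage(hazard_probability):
    """Map a model's hazard probability to a moderation action."""
    if hazard_probability >= AUTO_REMOVE:
        return "remove"
    if hazard_probability >= HUMAN_REVIEW:
        return "highlight for review"
    return "allow"

print(triage(0.98))  # prints "remove"
print(triage(0.70))  # prints "highlight for review"
print(triage(0.10))  # prints "allow"
```

The design point is the one Gruss makes: when a false negative is costly, the machine narrows the mountain of text to a short, high-risk queue, and humans make the final call on the ambiguous middle.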

The identification and removal of erroneous information, “especially those hazardous to physical and mental health, that are posted on Internet sites, is undoubtedly one of the challenging problems of the 21st century,” said Radford University Davis College of Business and Economics Dean Joy Bhadury. “Dr. Gruss’s research, based on using natural language processing and artificial intelligence tools, represents a feasible and pragmatic approach to tackling this immense problem. As one of the most active researchers within the Davis College, Dr. Gruss’s work underscores both the need for and the significant societal impact of the scholarly efforts of our faculty.”

 

Chad Osborne

Radford University

 

© 2020 Mountain Media, LLC.
