How cities are reinventing the public-private partnership: Four lessons from around the globe

The Ruta N partnership in Medellín, Colombia, generated thousands of jobs. Jorge Calle/Anadolu Agency via Getty Images


Cities tackle a vast array of responsibilities – from building transit networks to running schools – and sometimes they can use a little help. That’s why local governments have long teamed up with businesses in so-called public-private partnerships. Historically, these arrangements have helped cities fund big infrastructure projects such as bridges and hospitals.

However, our analysis and research show an emerging trend in how local governments collaborate with the private sector – what we have come to describe as “community-centered, public-private partnerships,” or CP3s. Unlike traditional public-private partnerships, CP3s aren’t just about financial investments; they leverage relationships and trust. And they’re about more than building infrastructure; they’re about building resilient and inclusive communities.

As the founding executive director of the Partnership for Inclusive Innovation, based out of the Georgia Institute of Technology, I’m fascinated with CP3s. And while not all CP3s are successful, when done right they offer local governments a powerful tool to navigate the complexities of modern urban life.

Together with international climate finance expert Andrea Fernández of the urban climate leadership group C40, we analyzed community-centered, public-private partnerships across the world and put together eight case studies. Together, they offer valuable insights into how cities can harness the power of CP3s.

READ THE FULL ARTICLE >>
(The Conversation, Dec 16, 2024)

News Contact

Walter Rich

Atlanta Veterans Affairs Medical Center Info Session

The Atlanta Veterans Affairs Medical Center (VAMC) Research office will hold an in-person info session for Georgia Tech faculty, research scientists, and postdocs to learn about grant and collaboration opportunities.

Speakers/Panelists

Membrane Biosensor Wins Convergence Innovation Competition in Asia

Team Membrane Biosensor

Winning check presented to Team Membrane Biosensor. Pictured left to right: Michael Best, three students from Team Membrane Biosensor, and Shelton Chan.

Team Membrane Biosensor from Yuan Ze University, Taiwan, won the Georgia Tech Institute for People and Technology’s (IPaT) annual Convergence Innovation Competition (CIC), held for the first time in Taipei, Taiwan, on December 7, 2024.

The winning team members were Jia-Wei Chen, Hsu-Hung Kuo, Ngoc-Ngan Dao, and Ngo-My-Uyen Nguyen. The team won $2,000, and each member received an Acer laptop and other prizes. The team’s faculty sponsor was Alex Wei, dean of the Industrial Academy at Yuan Ze University.

Their innovative membrane biosensor platform offers a rapid, accurate, and cost-effective approach to disease detection, streamlining diagnostic systems and enabling early intervention for improved patient outcomes and better control of disease outbreaks.

CIC is a competition recognizing student innovation and entrepreneurship responding to today’s global challenges and opportunities. Founded in 2007 in Atlanta, Georgia, CIC is organized by IPaT at the Georgia Institute of Technology. 

This year, the competition expanded to Asia to forge new partnerships and foster more collaborations with universities across the continent. IPaT’s CIC Asia Faculty Fellows helped cultivate team projects and mentor the students so they could showcase their innovative ideas in the competition.

“The CIC students, the competition finale, and the forum all far exceeded my expectations,” said IPaT executive director Michael Best. “All four of the student finalist projects represented the very best in people-centered technologies responding to global challenges.”

CIC Asia is distinct in how it brings teams from multiple countries together to interact and network; most innovation competitions are limited to a single university or country.

The three remaining finalist teams each received $1,000 in prize money. The CIC Asia finalist projects and team members are listed below:

  • BurnUp was a project from the students at Fulbright University Vietnam. Their project aimed to create a product that protects motorbikes' engines from water penetrating through the exhaust pipe during heavy rain and small floods. 

    Team members included: Võ Ngọc Đan Khuê, Trần Thanh Tùng, Trương Công Gia Hiếu, Phan Xuân Quang, and Trần Nam Anh. The team’s faculty sponsor was Lan Phan, head of the Center for Entrepreneurship & Innovation at Fulbright University Vietnam.
     
  • GLU@U is a project from a student team at Universiti Putra Malaysia. It is a smart management system for people with abnormal sugar metabolism (i.e., diabetes). The system integrates three modules: smart hardware; an intelligent data management, analysis, and decision-making system; and medical-passport care management. Using technologies such as real-time continuous glucose monitoring (rtCGM), AI, cloud computing, and the Internet of Things, it links the collection and analysis of user data, a hospital-side SaaS system, and a personal health management app into a closed loop of digital health monitoring and management inside and outside the hospital. Together, these systems support GLU@U’s “artificial intelligence + chronic disease” approach to intelligent monitoring and digital health management.

    Team members included: Jiao Fenglei, Zhang Hua, Jiang Anqi. The team’s faculty sponsor was Iskandar Ishak, associate professor of Computer Science at Universiti Putra Malaysia.
     
  • Guardian Crossing is a project from a student team at Universiti Tenaga Nasional. Guardian Crossing is a safety device that leverages deep learning to enhance crossing indicators, aimed at reducing accident risk for pedestrians with limited ability when crossing the road.

    Team members included: Nur Zafirah binti Mohd Zaini, Wan Qistina binti Wan Izahan Zameree, Syabil Fikri bin Sabri, and Muhammad Danial bin Noor Shamsudin. The team’s faculty sponsor was Nur Laila Ab Ghani, lecturer of informatics at Universiti Tenaga Nasional.


Global Technology Strategy and Workforce Development Forum

The CIC event took place alongside the Global Technology Strategy and Workforce Development Forum, which was also organized by IPaT. The forum featured panel discussions on innovation and entrepreneurship, talent development, artificial intelligence (AI), and sustainable business practices. Close to 200 leaders from industry, academia, civil society, and government across Asia attended the forum and witnessed the CIC students’ presentations and award ceremony.

Prominent figures from Taiwan’s industry, government, academia, and research sectors participating in the forum included Liu Cheng, vice president of Tunghai University; Chang Ruey-Shiong, former president of National Taipei University of Business; Cai Qiyan, CIO of Taiwan Mobile; Albert Weng, Chairman and CEO Assistant of Qisda Corporation; Nicole Chan, chairwoman of the Artificial Intelligence Foundation; and Kai Hua, Chief Technology Officer of Microsoft Taiwan.

The event was also co-hosted by the Lee Kuan Yew Technology Development Foundation and the Southeast Asia Impact Alliance, according to Shelton Chan, managing director for international development, Asia region, at the Georgia Institute of Technology.

The forum consisted mainly of three panels: one on AI and sustainability, one on workforce development, and one on innovation and entrepreneurship. Panelists were a diverse group of university leaders, industry leaders, and policy innovators, including Georgia Tech faculty and alumni.

“CIC Asia and the Global Technology Strategy and Workforce Development Forum event illustrate ways that IPaT continues to grow Georgia Tech’s global influence,” said Best. “The audience was made up of high-level movers and shakers in the Asian technology ecosystem and I think we really impressed them.”

Pictures of CIC Asia and the Forum can be viewed here.

###

CIC Asia 2024 Group Picture

Group picture of participating students, faculty and some attendees to CIC Asia 2024 in Taipei, Taiwan.

News Contact

Walter Rich

New Dataset Takes Aim at Subjective Misinformation in Earnings Calls and Other Public Hearings

CSE NeurIPS 2024

Georgia Tech researchers have created a dataset that trains computer models to understand nuances in human speech during financial earnings calls. The dataset provides a new resource to study how public correspondence affects businesses and markets. 

SubjECTive-QA is the first human-curated dataset on question-answer pairs from earnings call transcripts (ECTs). The dataset teaches models to identify subjective features in ECTs, like clarity and cautiousness.   

The dataset lays the foundation for a new approach to identifying disinformation and misinformation caused by nuances in speech. While ECT responses can be technically true, unclear or irrelevant information can misinform stakeholders and affect their decision-making. 

Tests on White House press briefings showed that the dataset applies to other sectors with frequent question-and-answer encounters, notably politics, journalism, and sports. This increases the odds of effectively informing audiences and improving transparency across public spheres.   

The intersecting work between natural language processing and finance earned the paper acceptance to NeurIPS 2024, the 38th Annual Conference on Neural Information Processing Systems. NeurIPS is one of the world’s most prestigious conferences on artificial intelligence (AI) and machine learning (ML) research.

"SubjECTive-QA has the potential to revolutionize nowcasting predictions with enhanced clarity and relevance,” said Agam Shah, the project’s lead researcher. 

[MICROSITE: Georgia Tech at NeurIPS 2024]

“Its nuanced analysis of qualities in executive responses, like optimism and cautiousness, deepens our understanding of economic forecasts and financial transparency."

SubjECTive-QA offers a new means to evaluate financial discourse by characterizing language's subjective and multifaceted nature. This improves on traditional datasets that quantify sentiment or verify claims from financial statements.

The dataset consists of 2,747 Q&A pairs taken from 120 ECTs from companies listed on the New York Stock Exchange from 2007 to 2021. The Georgia Tech researchers annotated each response by hand based on six features for a total of 49,446 annotations.

The group evaluated answers on:

  • Relevance: the speaker answered the question with appropriate details.
  • Clarity: the speaker was transparent in the answer and the message conveyed.
  • Optimism: the speaker answered with a positive outlook regarding future outcomes.
  • Specificity: the speaker included sufficient and technical details in their answer.
  • Cautiousness: the speaker answered using a conservative, risk-averse approach.
  • Assertiveness: the speaker answered with certainty about the company’s events and outcomes.
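To make the dataset's structure concrete, an annotated pair can be pictured as a question, an answer, and one label per feature. The field names and the numeric label scale below are illustrative assumptions, not the published SubjECTive-QA schema:

```python
# Illustrative sketch of a SubjECTive-QA-style record: one earnings-call
# question-answer pair labeled on each of the six subjective features.
# Field names and the 0/1/2 label scale are assumptions, not the actual schema.

FEATURES = ["relevance", "clarity", "optimism",
            "specificity", "cautiousness", "assertiveness"]

record = {
    "question": "Can you walk us through the margin outlook for next year?",
    "answer": "We expect gradual improvement, though input costs remain a risk.",
    "labels": {"relevance": 2, "clarity": 2, "optimism": 1,
               "specificity": 1, "cautiousness": 2, "assertiveness": 1},
}

def mean_score(records, feature):
    """Average one subjective feature across annotated Q&A pairs."""
    return sum(r["labels"][feature] for r in records) / len(records)

print(mean_score([record], "cautiousness"))  # 2.0
```

Aggregating such per-feature labels across thousands of pairs is what lets models learn to score qualities like cautiousness rather than simple sentiment.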

The Georgia Tech group validated their dataset by training eight computer models to detect and score these six features. The test models comprised three BERT-based pre-trained language models (PLMs) and five popular large language models (LLMs), including Llama and ChatGPT.

All eight models scored the highest on the relevance and clarity features. This is attributed to domain-specific pretraining that enables the models to identify pertinent and understandable material.

The PLMs achieved higher scores on the clarity, optimism, specificity, and cautiousness features. The LLMs scored higher on assertiveness and relevance.

In another experiment to test transferability, a PLM trained with SubjECTive-QA evaluated 65 Q&A pairs from White House press briefings and gaggles. Scores across all six features indicated models trained on the dataset could succeed in other fields outside of finance. 
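Feature-by-feature comparisons like these reduce to scoring each model's predictions against the human annotations separately per feature. A minimal sketch of that computation, using invented toy labels rather than the paper's results:

```python
# Per-feature accuracy: compare predicted feature labels against human
# annotations, one feature at a time. The gold labels and predictions
# below are invented toy data, not results from the paper.

FEATURES = ["relevance", "clarity", "optimism",
            "specificity", "cautiousness", "assertiveness"]

def per_feature_accuracy(gold, pred):
    """gold, pred: parallel lists of dicts mapping feature -> label."""
    return {
        f: sum(g[f] == p[f] for g, p in zip(gold, pred)) / len(gold)
        for f in FEATURES
    }

gold = [{f: 1 for f in FEATURES}, {f: 2 for f in FEATURES}]
pred = [{f: 1 for f in FEATURES}, {f: 1 for f in FEATURES}]
print(per_feature_accuracy(gold, pred)["relevance"])  # 0.5
```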

"Building on these promising results, the next step for SubjECTive-QA is to enhance customer service technologies, like chatbots,” said Shah, a Ph.D. candidate studying machine learning. 

“We want to make these platforms more responsive and accurate by integrating our analysis techniques from SubjECTive-QA."

SubjECTive-QA culminated from two semesters of work through Georgia Tech’s Vertically Integrated Projects (VIP) Program. The VIP Program is an approach to higher education where undergraduate and graduate students work together on long-term project teams led by faculty. 

Undergraduate students earn academic credit and receive hands-on experience through VIP projects. The extra help advances ongoing research and gives graduate students mentorship experience.

Computer science major Huzaifa Pardawala and mathematics major Siddhant Sukhani co-led the SubjECTive-QA project with Shah. 

Fellow collaborators included Veer Kejriwal, Abhishek Pillai, Rohan Bhasin, Andrew DiBiasio, Tarun Mandapati, and Dhruv Adha. All six researchers are undergraduate students studying computer science. 

Sudheer Chava co-advises Shah and is the faculty lead of SubjECTive-QA. Chava is a professor in the Scheller College of Business and director of the M.S. in Quantitative and Computational Finance (QCF) program.

Chava is also an adjunct faculty member in the College of Computing’s School of Computational Science and Engineering (CSE).

"Leading undergraduate students through the VIP Program taught me the powerful impact of balancing freedom with guidance,” Shah said. 

“Allowing students to take the helm not only fosters their leadership skills but also enhances my own approach to mentoring, thus creating a mutually enriching educational experience.”

Presenting SubjECTive-QA at NeurIPS 2024 opens the dataset to further use and refinement. NeurIPS is one of three primary international conferences on high-impact research in AI and ML. The conference occurs Dec. 10-15.

The SubjECTive-QA team is among the 162 Georgia Tech researchers presenting over 80 papers at NeurIPS 2024. The Georgia Tech contingent includes 46 faculty members, like Chava. These faculty represent Georgia Tech’s Colleges of Business, Computing, Engineering, and Sciences, underscoring the pertinence of AI research across domains. 

"Presenting SubjECTive-QA at prestigious venues like NeurIPS propels our research into the spotlight, drawing the attention of key players in finance and tech,” Shah said.

“The feedback we receive from this community of experts validates our approach and opens new avenues for future innovation, setting the stage for transformative applications in industry and academia.”

News Contact

Bryant Wine, Communications Officer
bryant.wine@cc.gatech.edu

Researchers Say AI Copyright Cases Could Have Negative Impact on Academic Research

Deven Desai and Mark Riedl

Deven Desai and Mark Riedl have seen the signs for a while. 

Two years since OpenAI introduced ChatGPT, dozens of lawsuits have been filed alleging technology companies have infringed copyright by using published works to train artificial intelligence (AI) models.

Academic AI research efforts could be significantly hindered if courts rule in the plaintiffs' favor. 

Desai and Riedl are Georgia Tech researchers raising awareness about how these court rulings could force academic researchers to construct new AI models with limited training data. The two collaborated on a benchmark academic paper that examines the landscape of the ethical issues surrounding AI and copyright in industry and academic spaces.

“There are scenarios where courts may overreact to having a book corpus on your computer, and you didn’t pay for it,” Riedl said. “If you trained a model for an academic paper, as my students often do, that’s not a problem right now. The courts could deem training is not fair use. That would have huge implications for academia.

“We want academics to be free to do their research without fear of repercussions in the marketplace because they’re not competing in the marketplace,” Riedl said. 

Desai is the Sue and John Stanton Professor of Business Law and Ethics at the Scheller College of Business. He researches how business interests and new technology shape privacy, intellectual property, and competition law. Riedl is a professor at the College of Computing’s School of Interactive Computing, researching human-centered AI, generative AI, explainable AI, and gaming AI. 

Their paper, Between Copyright and Computer Science: The Law and Ethics of Generative AI, was published in the Northwestern Journal of Technology and Intellectual Property on Monday.

Desai and Riedl say they want to offer solutions that balance the interests of various stakeholders. But that requires compromise from all sides.

Researchers should accept they may have to pay for the data they use to train AI models. Content creators, on the other hand, should receive compensation, but they may need to accept less money to ensure data remains affordable for academic researchers to acquire.

Who Benefits?

The doctrine of fair use is at the center of every copyright debate. According to the U.S. Copyright Office, fair use permits the unlicensed use of copyright-protected works in certain circumstances, such as distributing information for the public good, including teaching and research.

Fair use is often challenged when one or more parties profit from published works without compensating the authors.

Any original published content, including a personal website on the internet, is protected by copyright. However, copyrighted material is republished on websites or posted on social media innumerable times every day without the consent of the original authors. 

In most cases, it’s unlikely copyright violators gained financially from their infringement.

But Desai said business-to-business cases are different. The New York Times is one of many daily newspapers and media companies that have sued OpenAI for using its content as training data. Microsoft is also a defendant in The New York Times’ suit because it invested billions of dollars into OpenAI’s development of AI tools like ChatGPT.

“You can take a copyrighted photo and put it in your Twitter post or whatever you want,” Desai said. “That’s probably annoying to the owner. Economically, they probably wanted to be paid. But that’s not business to business. What’s happening with OpenAI and The New York Times is business to business. That’s big money.”

OpenAI started as a nonprofit dedicated to the safe development of artificial general intelligence (AGI) — AI that, in theory, can rival human thinking and possess autonomy.

These AI models would require massive amounts of data and expensive supercomputers to process that data. OpenAI could not raise enough money to afford such resources, so it created a for-profit arm controlled by its parent nonprofit.

Desai, Riedl, and many others argue that OpenAI ceased its research mission for the public good and began developing consumer products. 

“If you’re doing basic research that you’re not releasing to the world, it doesn’t matter if every so often it plagiarizes The New York Times,” Riedl said. “No one is economically benefitting from that. When they became a for-profit and produced a product, now they were making money from plagiarized text.”

OpenAI’s for-profit arm is valued at $80 billion, but content creators have not received a dime since the company has scraped massive amounts of copyrighted material as training data.

The New York Times has posted warnings on its sites that its content cannot be used to train AI models. Many other websites publish a robots.txt file that contains instructions for bots about which pages can and cannot be accessed. 

Neither of these measures is legally binding, and both are often ignored.
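For reference, a robots.txt file is just a plain-text list of directives. A minimal fragment that turns away an AI training crawler while admitting other bots might look like the following; the crawler name here is a made-up example, and compliance with these directives is voluntary:

```text
# Example robots.txt: block a hypothetical AI training crawler,
# allow all other bots. Crawlers may choose to ignore these directives.
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
```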

Solutions

Desai and Riedl offer a few options for companies to show good faith in rectifying the situation.

  • Spend the money. Desai says OpenAI and Microsoft could have afforded their training data and avoided the hassle of legal consequences.

    “If you do the math on the costs to buy the books and copy them, they could have paid for them,” he said. “It would’ve been a multi-million dollar investment, but they’re a multi-billion dollar company.”
     
  • Be selective. Models can be trained on randomly selected texts from published works, allowing the model to understand the writing style without plagiarizing. 

    “I don’t need the entire text of War and Peace,” Desai said. “To capture the way authors express themselves, I might only need a hundred pages. I’ve also reduced the chance that my model will cough up entire texts.”
     
  • Leverage libraries. The authors agree libraries could serve as an ideal middle ground as a place to store published works and compensate authors for access to those works, though the amount may be less than desired.

    “Most of the objections you could raise are taken care of,” Desai said. “They are legitimate access copies that are secure. You get access to only as much as you need. Libraries at universities have already become schools of information.”
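The “be selective” option can be pictured as keeping only a small random fraction of a work's pages for style-focused training instead of ingesting the full text. This toy sketch is an illustration of the idea, not a method from the paper; the 10% fraction and page-level granularity are arbitrary choices:

```python
import random

# Toy illustration of "be selective": sample a small, reproducible random
# subset of a work's pages rather than copying the entire text.

def sample_excerpts(pages, fraction=0.1, seed=0):
    """Return sorted indices of a reproducible random subset of pages."""
    k = max(1, int(len(pages) * fraction))
    return sorted(random.Random(seed).sample(range(len(pages)), k))

book = [f"page {i}" for i in range(1000)]  # stand-in for a full text
print(len(sample_excerpts(book)))  # 100
```

Fixing the random seed makes the excerpt selection auditable, which matters if a company ever needs to demonstrate exactly which portions of a work it used.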

Desai and Riedl hope the legal action taken by publications like The New York Times will send a message to companies that develop AI tools to pump the brakes. If they don’t, researchers uninterested in profit could pay the steepest price.

The authors say it’s not a new problem but is reaching a boiling point.

“In the history of copyright, there are ways that society has dealt with the problem of compensating creators and technology that copies or reduces your ability to extract money from your creation,” Desai said. “We wanted to point out there’s a way to get there.”

News Contact

Nathan Deen

 

Communications Officer

 

School of Interactive Computing

Mothbox Science Workshop

Mothbox

Mothbox under construction.

Georgia Tech hosted a two-day Mothbox science workshop on October 28-29, 2024. The workshop was sponsored by the Agile Systems Lab (run by Simon Sponberg in the School of Physics) through the Multidisciplinary Research Initiative with support from the Institute for People and Technology (IPaT). This hands-on workshop was spearheaded by Yash Sondhi, a postdoctoral researcher from the Agile Systems Lab at Georgia Tech.

IPaT’s lab spaces (the Craft Lab and Prototyping Lab) provided both space and technical assistance for the workshop participants. IPaT faculty member Tim Trent manages both labs and provided generous assistance throughout the workshop to build the traps.

The Mothbox is an automated light trap that attracts and photographs moths and other nocturnal insects. A Raspberry Pi (a miniature computer) controls a super-high-resolution camera and lights, so the Mothbox can be deployed and programmed to collect data on a pre-defined schedule. A computer vision model then processes the images and automatically identifies the insects captured by the trap. Insect censuses are valuable tools for assessing the state of an ecosystem, given insects’ vast numbers, short lifespans, and proximity to the base of the food chain. 
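The pre-defined schedule amounts to computing a list of capture times for the controller to act on. A hardware-free sketch of that logic follows; the start time, duration, and interval are invented examples, not the actual Mothbox settings:

```python
from datetime import datetime, timedelta

# Hardware-free sketch of a Mothbox-style capture schedule: the Raspberry Pi
# would trigger the camera and lights at each timestamp. Start time,
# duration, and interval are invented examples, not real Mothbox settings.

def capture_schedule(start, hours, every_minutes):
    """Return capture timestamps from `start`, one every `every_minutes`."""
    times, t = [], start
    end = start + timedelta(hours=hours)
    while t < end:
        times.append(t)
        t += timedelta(minutes=every_minutes)
    return times

# An 8-hour night of captures every 30 minutes yields 16 trigger times.
night = capture_schedule(datetime(2024, 10, 28, 20, 0), hours=8, every_minutes=30)
print(len(night))  # 16
```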

Mothbox was selected as a 2024 WILDLABS Awards winner.

A detailed review of the workshop, discussing the construction of the moth boxes, was posted by WILDLABS.NET; participants gained hands-on experience building and testing the traps.

Read the full workshop article here >>

Watch Andy Quitmeyer's Mothbox lecture >> delivered as a keynote at Georgia Tech on Oct. 29, 2024, in support of the workshop. Andy Quitmeyer, Ph.D., designs new ways to interact with the natural world. His transdisciplinary work spans scientific and design processes, from material exploration and natural experimentation to artistic outreach.

News Contact

Walter Rich

Foley Scholars 2024 Winners and Finalists

Foley Scholars for 2024-2025: Vanessa Oguamanam, Charles Ramey, and Momin Siddiqui. Jiawei Zhou, bottom right, was unable to attend.


The Foley Scholar Awards recognize the achievements of top graduate students whose vision and research are shaping the future of how people interact with and value technology. 

Winners and finalists for the 2024 Foley Scholar Awards were celebrated at Georgia Tech's hotel and convention center on November 12, 2024. The event was hosted by the Institute for People and Technology, with its executive director, Michael Best, serving as master of ceremonies as each finalist was recognized for their innovative research. James Foley, the professor emeritus for whom the awards are named, once again delivered inspiring and valuable insight at the conclusion of the evening's festivities celebrating the achievements of all finalists.

"Congratulations to the awardees and finalists who represent the finest that Georgia Tech has to offer," said Michael Best. "Our judges had a difficult task of selecting winners this year because each finalist was so outstanding," said Best.

Congratulations to the 2024 Foley Scholars:

  • Momin Siddiqui, M.S. student in computer science, awarded $1,000.
  • Vanessa Oguamanam, Ph.D. student in computer science, awarded $5,000.
  • Charles Ramey, Ph.D. student in computer science, awarded $5,000.
  • Jiawei Zhou, Ph.D. student in human centered computing, awarded $5,000.

The finalists in the master's category were Jordan Brown, Jared Lim, Da Hee (Stephanie) Kim, and Momin Siddiqui.

The finalists in the Ph.D. category were Beatriz Palacios Abad, Adam Coscia, Eric Greenlee, Alexandra Teixeira Riggs, Vishal Sharma, Vanessa Oguamanam, Charles Ramey, and Jiawei Zhou.

A short description of each finalist's research, along with their Georgia Tech faculty advisor, is listed below:

Jordan Brown is a master's student in human computer interaction advised by Andrea Parker. Her research vision is to design innovative technology that empowers underrepresented groups, specifically Black women, and promotes their emotional and physical wellbeing.

Jared Lim is a master's student in computer science advised by Judith Uchidiuno. His primary research interest is providing computer science opportunities for children from low-resource communities through informal settings or settings outside the traditional classroom.

Da Hee (Stephanie) Kim is a master's student in human computer interaction advised by Mengyao Li. Her research is focused on leveraging robot-mediated intimacy to help couples in long-distance relationships maintain and deepen their emotional intimacy, using an interdisciplinary approach between philosophical, psychological, and human-robot interaction methods and theories.

Momin Siddiqui is a master's student in computer science advised by Chris MacLellan. His research seeks to understand how to leverage artificial intelligence to build education technologies that foster creative, adaptive, and constructionist learning experiences for students.

Beatriz Palacios Abad is a Ph.D. student in computer science advised by Ellen Zegura. Her research lies at the intersection of networking, policy, and human centered computing, focusing on mobile broadband mapping. Her overall research vision is to inform policy and technological efforts in the pursuit of digital inclusion, specifically by supporting localized, community organizing efforts around broadband.

Adam Coscia is a Ph.D. student in human centered computing advised by Alex Endert. His research vision is to develop and deploy responsible and trustworthy AI in education. The advent of generalizable and scalable AI models, namely large language models (LLMs) such as ChatGPT, has catalyzed educational communities to begin integrating LLMs into novel adaptive learning tools, such as chatbots for answering questions about course material, or interactive conversational aids for learning and feedback. Yet LLMs have also been shown to introduce potential pedagogical risks and harms, such as responding with misinformation and discriminatory language and biasing scores when used for grading.

Eric Greenlee is a Ph.D. student in computer science advised by Josiah Hester and Ellen Zegura. His research aims to build relationships with historically marginalized communities and to co-design environmental sensing systems that promote their sovereignty and self-advocacy. He also develops novel electronic cyberinfrastructure that provides information about the environment in a socially and environmentally sustainable manner.

Alexandra Teixeira Riggs is a Ph.D. student in digital media advised by Anne Sullivan. Their overarching research vision is to develop a set of design recommendations and approaches for queering, or critically reorienting, the design of tangible embodied interactive experiences that explore queer history. They are drawing from several prior projects to conceptualize a body of work, looking to how they have each involved archival ephemera, critical human computer interaction, and tangible making, towards reframing histories and empowering queer communities today.

Vishal Sharma is a Ph.D. student in human centered computing advised by Neha Kumar. As a sociotechnical researcher, he studies the design and use of digital technologies in supporting climate justice. He aims to expand the human-computer interaction scholarship on climate justice, paving the way for a future where technology actively supports sustainable development for all. 

Vanessa Oguamanam is a Ph.D. student in computer science advised by Andrea Parker. Her research contributes to the fields of human-computer interaction, digital health equity, and mobile and ubiquitous computing. She conducts empirical research examining the utilization and perceptions of consumer digital health technologies to support mental health among perinatal Black women, assessing the extent to which these tools satisfy their needs. Her insights underscore the importance of nuanced approaches to digital interventions that can accommodate women's unique needs and perspectives with particular intersectional experiences and identities. 

Charles Ramey is a Ph.D. student in computer science advised by Thad Starner and Melody Jackson. His research utilizes wearable and embedded computers, along with AI, to enable humans to communicate with, better care for, and work with non-human animals. He believes that advances gained in understanding the sensory, cognitive, and communicative abilities of non-human animals will create a world more empathetic to all species with whom we share our planet.

Jiawei Zhou is a Ph.D. student in human centered computing advised by Munmun De Choudhury. According to Zhou, information is integral to every aspect of our lives, from personal decisions to professional activities. Careful and mindful approaches to meeting informational needs are vital to navigating the abundance of available information, critically consuming content, and protecting ourselves from misinformation and manipulation. Her goal is to pursue a research agenda on the role of technologies in shaping individual wellbeing and social ecologies, as well as responsible communication and public education of technological capabilities and limitations.


About the James D. Foley Endowment
The James D. Foley Endowment, established in 2007, is named for James D. Foley, professor and founder of the GVU Center (integrated with IPaT as of January 2023) at Georgia Tech. The award was established by Foley's colleagues and IPaT/GVU alumni to honor his significant contributions to the field of computing, his influence on the work of others, and his dedication to the development of new research directions.

Funds from the Foley Endowment are used to support the students and research activities of the Institute for People and Technology (IPaT), including the Foley Scholars Fellowships, awarded annually to two graduate students on the basis of personal vision, brilliance, and potential impact. Foley Scholars are selected by an advisory board comprised of alumni, current faculty, and industry partners during the fall semester.

Group picture of Michael Best and Jim Foley with the Foley 2024 finalists and their faculty mentors.

Thriving Hive: Faculty and Staff Forum to Support Student Mental Health

Thriving Hive is a biweekly virtual forum where Georgia Tech faculty and staff can meet with mental health professionals to discuss ways to support the mental health of students. Attendees can ask general questions, request consultation about a specific concern, and get additional information about mental health topics and resources. Sessions are intended for education and consultation — if direct action is needed for a student issue, session facilitators will make a referral to the appropriate support office.

Future of AI and Policy Among Key Topics at Inaugural School of Interactive Computing Summit

School of IC's Josiah Hester (left) and Cindy Lin discuss AI's future impact on sustainability. Photo by Terence Rushin/College of Computing.

This month, the future of artificial intelligence (AI) was spotlighted as more than 120 academic and industry researchers participated in the Georgia Tech School of Interactive Computing’s inaugural Summit on Responsible Computing, AI, and Society.

With looming questions about AI's growing roles and consequences in nearly every facet of modern life, School of IC organizers felt the time was right to diverge from traditional conferences that focus on past work and published research.

“Presenting papers is about disseminating work that has already been completed. Who gets to be in the room is determined by whose paper gets accepted,” said Mark Riedl, School of IC professor.

“Instead, we wanted the summit talks to speculate on future directions and what challenges we as a community should be thinking about going forward.”

The two-day summit, held at Tech’s Global Learning Center Oct. 28-30, convened to discuss consequential questions like:

  • Is society ready to accept more responsibility as greater advancements in technologies like AI are made?
  • Should society stop to think about potential consequences before these advancements are implemented on its behalf, and what could those consequences be?
  • What policies should be enacted for these technologies to mitigate harms and augment societal benefits?

A highlight of the summit’s opening day was Meredith Ringel Morris's keynote address. As director of human-AI interaction research at Google DeepMind, she presented a possible future in which humans could use AI to create a digital afterlife.

In her remarks, Morris discussed AI clones, which are AI avatars of specific human beings with high autonomy and task-performing capabilities. Someone could leave such an agent behind as a memory for loved ones to enjoy once they are gone, and future generations could access it to learn more about an ancestor.

On the other hand, such an agent could prolong grief for loved ones who struggle to move on after losing a family member.

These AI capabilities are in development and will soon be publicly available. As industry and academic researchers continue to develop them, the public needs to learn about their imminent impacts.

“There’s a lot that needs to be done in educating people,” Morris said. “It’s hard for well-intentioned and thoughtful system designers to anticipate all the harm. We must be prepared that some people are going to use AI in ways we don’t anticipate, and some of those ways are going to be undesirable. What legal and education structures can we create that will help?”

In addition to Morris’s keynote, the summit’s first day included 20 talks about future and emerging technologies in AI, sustainability, healthcare, and other fields. 

The second day featured eight talks on translating interventions and safeguards into policy.

Day-two speakers included: 

  • Orly Lobel, Warren Distinguished Professor of Law and director of the Center for Employment and Labor Policy at the University of San Diego. Lobel worked on President Obama’s policy team on innovation and labor market competition, and she advises the Federal Trade Commission (FTC). 
  • Sorelle Friedler, Shibulal Family Professor of Computer Science at Haverford College. She worked in the Office of Science and Technology Policy (OSTP) under the Biden-Harris Administration and helped draft the AI Bill of Rights. 
  • Jake Metcalf, researcher and program director for AI on the Ground at the think tank Data & Society. The organization produces reports on data science and equity for the US Government. 
  • Divyansh Kaushik, Vice President of Beacon Global Strategies, has given testimony to the US Senate on AI research and development.

Kaushik earned a Ph.D. in machine learning from Carnegie Mellon University before beginning his career in policy. He highlighted the importance of policymakers fostering relationships with academic researchers.

“Policymakers think about what could go wrong,” Kaushik said. “Academia can offer evidence-based answers.”

The summit also hosted a doctoral consortium, which allowed advanced Ph.D. students to present their research to experts and receive feedback and mentoring.

“Being an interdisciplinary researcher is challenging,” said Shaowen Bardzell, School of IC chair.

“We wanted the next generation to be in the room listening to the experts share their visions and also to provide our own experiences when possible on how to navigate the challenges and rewards of doing work in the intersection of AI, healthcare, sustainability, and policy.”

Meredith Ringel Morris, director of human-AI interaction research at Google DeepMind, gives the keynote talk Oct. 29 at the School of Interactive Computing's Summit on Responsible Computing, AI, and Society.
News Contact

Nathan Deen
Georgia Tech School of Interactive Computing
Communications Officer
nathan.deen@cc.gatech.edu

Pinar Keskinocak Named Chair of H. Milton Stewart School of Industrial and Systems Engineering

Pinar Keskinocak has been selected as the next leader of Georgia Tech’s H. Milton Stewart School of Industrial and Systems Engineering (ISyE). She will serve as the H. Milton and Carolyn J. Stewart School Chair beginning January 1.

Keskinocak is the William W. George Chair and Professor and serves as ISyE’s associate chair for faculty development. 

She will be ISyE’s ninth permanent chair, leading a school renowned for its top-ranked graduate and undergraduate industrial engineering programs. U.S. News & World Report has consistently ranked ISyE as the nation's best since the mid-1990s.

“Pinar is a proven and respected leader both on campus and within her academic and research community,” said Raheem Beyah, dean of the College of Engineering and Southern Company Chair. “She is well-positioned to continue advancing ISyE’s national prominence and accelerate the School’s trajectory.” 

Keskinocak is the cofounder and director of Georgia Tech’s Center for Health and Humanitarian Systems, an interdisciplinary research center focused on education, outreach, and developing innovative solutions via advanced modeling, analytics, and systems engineering.

Keskinocak’s research has had broad societal impact. This includes policies and practices for improved emergency preparedness and response, disease prevention and public health, healthcare access, resource allocation, and supply chain management. 

Keskinocak has collaborated with the Centers for Disease Control and Prevention, The Carter Center, and other governmental and nongovernmental organizations to translate research into real-world solutions that benefit people and communities.

“I am honored to have the privilege of serving our School in this important leadership role,” Keskinocak said. “As ISyE continues to expand our core activities in education and research, we will strive to advance our excellence and leadership and grow our impact. I look forward to collaborating with our faculty, staff, students, and alumni, as well as with the leadership of the College, Georgia Tech, and our broader community and partners.”

A highly regarded researcher, Keskinocak has published extensively in top-tier academic journals. She has served in various leadership roles within professional societies, including as the 2020 president and a two-time board member of INFORMS (the Institute for Operations Research and the Management Sciences). She is the cofounder, and has served as president, of multiple INFORMS subdivisions. She also has served on several National Academies of Sciences, Engineering, and Medicine committees. 

Keskinocak is a fellow of INFORMS and a recipient of the society’s George E. Kimball Medal, President’s Award, and Daniel H. Wagner Prize. At Georgia Tech, she has been recognized with the Outstanding Achievement in Research Program Development Award, the Class of 1934 Outstanding Service Award, the Outstanding Professional Education Award, and the Denning Award for Global Engagement.

A dedicated mentor, educator, and advocate for broadening participation in STEM fields, Keskinocak served as the College’s ADVANCE Professor from 2014 to 2020. She was recognized with the INFORMS Women in OR/MS Award and the Georgia Tech Women in Engineering Excellence Teaching Faculty Award. 

Keskinocak replaces Edwin Romeijn, who will return to the ISyE faculty after 10 years as chair. 

“I am thankful to Edwin for his very successful tenure, during which ISyE enrollment grew from 1,800 students to more than 8,000,” Beyah said. “I’m also grateful to our search committee and chair Arijit Raychowdhury. This group of students, faculty, and staff diligently worked to help identify a national, diverse pool of strong candidates.”

News Contact

Jason Maderer