
Artificial Intelligence: How Worried Should We Be?

April 4, 2023
By David F. Sand

It has been said that a lot of science fiction is really a documentary from the future. If that is the case, we all know where the latest boom in Artificial Intelligence (AI) is taking us: sentient machines and software will gain the upper hand in our daily lives, telling us what we can and cannot do and — insidiously — what we can and cannot think or believe. George Orwell thought it might happen in 1984; Stanley Kubrick picked 2001. Both crafted their tales of machine dominance by choosing dates far in the future. Those years have come and gone. It’s 2023 and, while we have recently seen truly phenomenal breakthroughs in the availability and application of AI, our futures remain under our control, at least for now.

The release and accessibility of ChatGPT and other AI-powered bots and chat applications — get used to hearing the term “Large Language Models” (LLMs) — have propelled us into a new era. If anyone reading this blog has not yet used ChatGPT, I urge you to stop right now and go play around with the free service. Nothing anyone can write or say illustrates the potential and the problems posed by these new capabilities as much as asking a few questions and then seeing the nearly instantaneous responses. Generally cogent and well-reasoned, ChatGPT’s answers sound authoritative and are likely to become the first and last stop for many inquiries. And, like HAL in “2001,” ChatGPT refers to itself in the first person.

There is no question that some advances in AI, both online and offline, have already had beneficial consequences for us and the planet. My smart thermostat allows me to turn up the heat at home when I land at the airport during a snowstorm, and it lowers overall heating and cooling demand in ways that are good for my wallet and the environment. I am ready for my AI oven that will keep me from burning my dinner. However, prudence and experience tell us that we must be mindful of, and vigilant about, the changes occurring in front of us and beneath the hood as we slide into this new AI-powered era.

The legal and regulatory challenges of AI are subjects for others. Our concern here is the effect that AI will have on people and institutions fighting for a greener planet with greater social and racial justice. Investors, foundations, consultants, and advisors need to begin considering the true impacts their programs and policies will have in a world where AI plays a larger and larger role. Additionally, for those in the glass-half-full camp, new analytic tools may pave the way for a stronger case for reparative investments addressing prior inequities that have long harmed people, communities, and the world.

In recent weeks, well over $30 billion in new investment in AI-related businesses has been announced in the US. Probably at least as much has been ponied up in China, Russia, and around the world. Google has declared that everything it does will soon have an AI component. No question that AI and related data manipulations are here to stay.

As we worry about the future, we already see ample signs of present-day problems attributable to our dependence on “smart” machines. Think about people who have driven off the road when the GPS tells them to turn right even when there is no road. It’s funny, but it’s a fair example of how dependent we already are on the results we get from computers, machine learning, and artificial intelligence. The New York Times ran an article in February about IRS audits by race. The Biden Administration had told the IRS to study the incidence of audits, and the study found that Black Americans were three to five times more likely to be audited regardless of income or any other non-race-related variables. We all know that people of color are not the main issue when it comes to tax fraud and avoidance.

Now, I’m not here to make us all feel sorry for IRS agents… but here’s the thing: agents don’t see the racial identity of the people who are selected for audit. The subjects of audits are chosen by… wait for it… algorithms. So, there it is. Structural bias built into a system and largely hidden from view produces unnecessary and unfair harm to people of color.

Progressives must be on the lookout for AI and related algorithmic instances of bias, but there are more challenges ahead. ChatGPT and other AI bots are going to be scrutinized by the same radical extremists who are banning books and perverting school curriculums. Currently, ChatGPT provides responses that reference “structural racism,” “redlining,” “racial bias,” and other terms that are accepted as fact by most but are incendiary to a few. It’s only a matter of time before the anti-woke crusaders come after the AI bots. When they do, they will face the same challenge as the rest of us: attempting to modify and control a system too complex to understand. When a biased algorithm is detected at the IRS, the assumptions and parameters that went into that algorithm can be inspected and corrected by human beings, who can separate whatever is useful about the audit detector from what reflects bias. With a system like ChatGPT, it is far more difficult to separate the baby from the bathwater, and it may not produce the same output if asked the same question twice. The risk is that companies and individuals wanting the benefits of handy AI-powered tools may be willing to use them regardless of persistent and embedded bias.

At Community Capital Management, we do a lot of work in affordable homeownership investing, especially for people of color. Many of the loans we buy are produced using Desktop Underwriter (DU), a tool used throughout the industry. A consistent challenge for many prospective lower-income and minority homebuyers is lack of credit history. For years, housing policy advocates have urged that payment of rent be included in consideration of mortgage eligibility. Intuitively, it makes sense that a history of paying rent on time is an indicator to consider when thinking about a potential borrower’s future ability to make mortgage payments. But rental payment records are highly distributed and not readily available electronically. Ergo, not in the DU algorithm. Can’t track it, can’t measure it… might as well not exist. After years of advocacy, guess what: rent payments are now allowed to be a factor in mortgage eligibility. Thousands of potential borrowers, including many people of color, are now eligible to own a home. Once again, algorithms built by humans and run by machines have real consequences for real people.

We dwell in a time when absolutes are never right. The technology enthusiasts who tell us that the AI-powered future will be all roses need to rewatch The Matrix movies (all three, as punishment). The 21st-century Luddites who would have us turn away from progress fail to see that processing power can be an extension of the human brain, with positive results. The same community of thought leaders that called attention to the importance of climate change and gender and racial inequality must now lead the way in searching for the responsible application of AI, with a shared goal of doing more good than harm.

 

David F. Sand is Chief Impact Strategist at Community Capital Management LLC. He has been working in responsible investing since 1980.

Community Capital Management, LLC (CCM) is an investment adviser registered with the Securities and Exchange Commission under the Investment Advisers Act of 1940. A full list of regulatory disclosures for Community Capital Management, LLC is available by visiting https://www.ccminvests.com/regulatory-disclosures/.

Disclaimer: Confluence blogs may contain external links to other resources and comments or statements by individuals who do not represent Confluence Philanthropy, Inc. Confluence Philanthropy, Inc. makes no representation whatsoever regarding the content that you may access as a result of our blog, nor the statements of any third parties whose comments may be expressed therein.