
Michael

Michael is a software engineer and startup growth expert with 10+ years of software engineering and machine learning experience.

Accelerating Progress: Embracing the Potential of AI Advancements

The conversation around Artificial Intelligence has started to take on a binary quality, rather prematurely, as if we were debating the two sides of a coin rather than a more complex shape: “let builders build as is” vs. “regulate.” Ironically, both positions stem from acknowledging the incredible early power and promise of the tipping point we’ve reached, yet neither incorporates the ambiguity. Fortunately, there is some legal precedent here that might help, and we only have to go back to the earlier days of the Internet and the concept of safe harbor.

What is Safe Harbor?

‘Safe harbor’ is a regulatory framework which provides that certain conduct won’t break a rule so long as specific conditions are met. It’s used to provide clarity in an otherwise complex situation, or to give a party the benefit of the doubt so long as it abides by generally accepted, reasonable standards. Perhaps the best-known example in our industry is the 1998 Digital Millennium Copyright Act (DMCA), which provided safe harbor to Internet businesses for copyright infringement committed by their end users, so long as several preconditions were met (such as not profiting directly from the infringement, not having knowledge of the infringing material, and removing it promptly when notified).

The Importance of Safe Harbor

The DMCA allowed for billions of people globally to express themselves online, prompted new business model experiments, and created guardrails for any entrepreneur to stay legal. It’s not perfect, and it can be abused, but it met the reality of the moment in a meaningful way. And it made my career possible, working with user generated content (UGC) at Second Life, AdSense, and YouTube.

The Rise of AI

Now, 25 years later, we have a new tidal wave, but it’s not UGC; it’s AI and, uh, User Generated Computer Content (UGCC), or something like that. And from my point of view it’s a potential shift in capabilities as significant as anything I’ve experienced so far in my life. It’s the evolution of what I hoped for: not software eating the world, but software enabling it. And it’s moving very, very quickly.

The Need for AI Safe Harbor

So much so that it’s perfectly reasonable to suggest the industry slow itself down, specifically by pausing the training of new models while we all digest the impact of the change. But that’s not what I’d advocate. Instead, let’s speed up creating a temporary safe harbor for AI, so our best engineers and companies can continue their innovation while being incentivized to support guardrails and openness.

What Would an AI Safe Harbor Look Like?

Start with something like: “For the next 12 months, any developer of AI models would be protected from legal liability so long as they abide by certain evolving standards.” For example, model owners would need to provide:

Transparency: the ability to query, for a given publicly available URL or submitted piece of media, whether that top-level domain is included in the model’s training set. Simple visibility is the first step; all of the ‘do not train on my data’ machinery (a robots.txt for AI, so to speak) is going to take more thinking and tradeoffs from a regulatory perspective. A rough sketch of what such a lookup could look like follows this list of conditions.

Prompt Logs for Research: providing a statistically significant sample of prompt/input logs (no information about the originator of the prompt, just the prompt itself) on a regular basis for researchers to understand and analyze; a sketch of that kind of anonymized sampling also follows below. So long as you’re not knowingly, willfully, and exclusively targeting and exploiting particular copyrighted sources, you will have infringement safe harbor.

Responsibility: documented Trust and Safety protocols that allow for escalation around violations of your Terms of Service, plus some form of aggregate transparency statistics on these issues.
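
To make the transparency condition concrete, here is a minimal sketch, assuming a hypothetical training-set manifest maintained by the model owner. The names TrainingManifest and was_domain_trained_on are illustrative assumptions, not any existing API.

```python
# Hypothetical sketch of the transparency condition: the model owner exposes a
# simple lookup that reports whether a given top-level domain appears in a
# model's training-set manifest. All names here are illustrative, not a real API.
from urllib.parse import urlparse


class TrainingManifest:
    """Index of domains included in a model's training data."""

    def __init__(self, domains: set[str]):
        # Normalize to lowercase for case-insensitive lookups.
        self._domains = {d.lower() for d in domains}

    def was_domain_trained_on(self, url: str) -> bool:
        """Return True if the URL's domain is in the training set."""
        host = (urlparse(url).hostname or "").lower()
        # Naive normalization: strip a leading "www." and compare directly.
        # A production system would need proper registered-domain (eTLD+1) handling.
        return host.removeprefix("www.") in self._domains


# Example usage with a toy manifest.
manifest = TrainingManifest({"example.com", "news.example.org"})
print(manifest.was_domain_trained_on("https://www.example.com/article/123"))  # True
print(manifest.was_domain_trained_on("https://unrelated.net/post"))           # False
```

In practice the lookup would sit behind a public, authenticated endpoint, but the core idea is just a queryable index over what the model was trained on.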
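
Similarly, here is a minimal sketch of the prompt-log condition, assuming internal log records shaped like simple dictionaries with user_id, timestamp, and prompt fields; those field names and the function sample_prompts_for_research are hypothetical, chosen only to illustrate stripping originator metadata before sharing a random sample with researchers.

```python
# Hypothetical sketch of the "prompt logs for research" condition: release a
# random sample of raw prompt text with all originator metadata stripped.
import random


def sample_prompts_for_research(log_records: list[dict], sample_size: int, seed: int = 0) -> list[str]:
    """Return a random sample of prompt text only, dropping user metadata."""
    rng = random.Random(seed)
    # Keep just the prompt string; user_id, timestamp, etc. are discarded.
    prompts = [record["prompt"] for record in log_records if record.get("prompt")]
    return rng.sample(prompts, min(sample_size, len(prompts)))


# Example usage with toy log records.
logs = [
    {"user_id": "u-123", "timestamp": "2023-04-01T12:00:00Z", "prompt": "Summarize this article"},
    {"user_id": "u-456", "timestamp": "2023-04-01T12:01:00Z", "prompt": "Write a haiku about tides"},
    {"user_id": "u-789", "timestamp": "2023-04-01T12:02:00Z", "prompt": "Explain safe harbor in one line"},
]
print(sample_prompts_for_research(logs, sample_size=2, seed=42))
```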

Conclusion

In conclusion, the conversation around AI development has been reduced to a binary argument of “let builders build as is” vs. “regulate,” without considering the complex nature of the field. A regulatory framework modeled on the safe harbor concept can provide clarity and a reasonable standard for developers while allowing them to continue innovating. The safe harbor can include specific conditions, such as transparency, prompt logs for research, and responsibility, to ensure that AI models are developed safely and ethically. By creating a temporary safe harbor for AI development, the industry can continue to innovate while adhering to common standards and promoting the common good. With proper guidelines and guardrails, AI has the potential to enable the world in ways we can’t even imagine yet.