Showing results for tags 'ethics'.

Found 4 results

  1. Copyright is something of a minefield right now when it comes to AI, and a new report claims that Apple's generative AI - specifically its 'Ajax' large language model (LLM) - may be one of the only ones to have been trained both legally and ethically. Apple is reportedly trying to uphold privacy and legality standards by adopting innovative training methods.

     Copyright law in the age of generative AI is difficult to navigate, and it's becoming increasingly important as AI tools become more commonplace. One of the most glaring issues that comes up, again and again, is that many companies train their large language models (LLMs) on copyrighted works, typically without disclosing whether they license that training material. Sometimes, the outputs of these models include entire sections of copyright-protected works.

     The justification some of these companies offer for using copyrighted material so widely to train their LLMs is that, not unlike humans, these models need a substantial amount of information (called training data) to learn and generate coherent and convincing responses - and as far as these companies are concerned, copyrighted materials are fair game.

     Many critics of generative AI consider it copyright infringement when tech companies use works in the training and output of LLMs without explicit agreements with copyright holders or their representatives. Still, this criticism hasn't put tech companies off doing exactly that - it's assumed to be the case for most AI tools - and resentment towards the companies in the generative AI space is growing.

     The forest of legal battles and ethical dilemmas in generative AI

     A growing number of legal challenges have been mounted in these tech companies' direction. OpenAI and Microsoft were sued by the New York Times for copyright infringement back in December 2023, with the publisher accusing the two companies of training their LLMs on millions of New York Times articles. In September 2023, OpenAI and Microsoft were also sued by a number of prominent authors, including George R. R. Martin, Michael Connelly, and Jonathan Franzen. In July 2023, over 15,000 authors signed an open letter directed at companies such as Microsoft, OpenAI, Meta, Alphabet, and others, calling on leaders of the tech industry to protect writers and to properly credit and compensate authors when their works are used to train generative AI models.

     In April of this year, The Register reported that Amazon was hit with a lawsuit by an ex-employee alleging she faced mistreatment, discrimination, and harassment, and in the process she testified about her experience with copyright infringement. She alleges that she was told to deliberately ignore and violate copyright law to make Amazon's products more competitive, and that her supervisor told her "everyone else is doing it" when it came to copyright violations. Apple Insider echoes this claim, stating that this seems to be an accepted industry standard.

     As we've seen with many other novel technologies, legislation and ethical frameworks always arrive after an initial delay, but this is becoming a more problematic aspect of generative AI that the companies responsible will have to answer for.

     The Apple approach to ethical AI training (that we know of so far)

     It looks like at least one major tech player may be taking the more careful and considered route to avoid as many legal (and moral!) challenges as possible - and somewhat surprisingly, it's Apple. According to Apple Insider, Apple has been diligently pursuing licenses for major news publications' works when looking for AI training material. Back in December, Apple petitioned to license the archives of several major publishers as training material for its own LLM, known internally as Ajax.

     It's speculated that Ajax will provide basic on-device functionality for future Apple products, and that Apple might instead license software like Google's Gemini for more advanced features, such as those requiring an internet connection. Apple Insider writes that this allows Apple to avoid certain copyright infringement liabilities, since Apple wouldn't be responsible for copyright infringement by, say, Google Gemini.

     A paper published in March detailed how Apple intends to train its in-house LLM: on a carefully chosen selection of image, image-text, and text-based input. In its methods, Apple prioritized better image captioning and multi-step reasoning while also paying attention to preserving privacy. The last of these is made possible by the Ajax LLM running entirely on-device and therefore not requiring an internet connection. There is a trade-off: this means Ajax won't be able to check for copyrighted content and plagiarism itself, as it can't connect to online databases that store copyrighted material.

     There is one other caveat that Apple Insider reveals, citing sources familiar with Apple's AI testing environments: there don't currently seem to be many, if any, restrictions on users themselves using copyrighted material as input in on-device test environments.

     It's also worth noting that Apple isn't technically the only company taking a rights-first approach: Adobe's Firefly AI art tool is also claimed to be completely copyright-compliant, so hopefully more AI startups will be wise enough to follow Apple and Adobe's lead.

     I personally welcome this approach from Apple, as I think human creativity is one of the most incredible capabilities we have, and it should be rewarded and celebrated - not fed to an AI. We'll have to wait to learn more about what Apple's rules for copyright and AI training look like, but I agree with Apple Insider's assessment that this sounds like an improvement - especially since some AIs have been documented regurgitating copyrighted material word for word. We can look forward to learning more about Apple's generative AI efforts very soon, as they are expected to be a key part of its developer-focused software conference, WWDC 2024. View the full article
  2. As a business leader, you know that artificial intelligence (AI) is no longer just a buzzword - it's a transformative force that is reshaping every industry, redefining customer experiences, and unlocking unprecedented efficiencies. In his groundbreaking book, Adaptive Ethics for Digital Transformation, Mark Schwartz shines a light on the moral challenges that arise as companies race to harness the power of AI. He argues that our traditional, rule-based approaches to business ethics are woefully inadequate in the face of the complexity, uncertainty, and rapid change of the digital age.

     So, as a leader, how can you ensure that your organization is wielding AI in a way that is not only effective but also ethical? Here are some key takeaways from Schwartz's book that can help you navigate this new terrain.

     Cultivate a Culture of Ethical Awareness and Accountability

     Too often, discussions about AI ethics are siloed within technical teams or relegated to an afterthought. Schwartz stresses that ethical considerations must be woven into the fabric of your organization's culture. This means actively encouraging all employees, from data scientists to business leaders, to raise ethical questions and concerns. Foster an environment where it's not only acceptable but expected to hit the pause button on an AI initiative if something doesn't feel right. Celebrate those who have the courage to speak up, even if it means slowing down progress in the short term. By making ethics everyone's responsibility, you can catch potential issues early, before they spiral out of control.

     Embrace Humility and Adaptability

     One of the most dangerous traps in the realm of AI is overconfidence. We may be tempted to believe that we can anticipate and control every possible outcome of the intelligent systems we create. But as Schwartz points out, the reality is that we are often venturing into uncharted territory. Instead of clinging to a false sense of certainty, Schwartz advises embracing humility and adaptability. Approach AI initiatives as ongoing ethical experiments: put forward your best hypotheses for how to encode human values into machine intelligence, but be prepared to continuously test, learn, and iterate. This means building mechanisms for regular ethical review and course correction. It means being willing to slow down, or even shut down, an AI system if unintended consequences emerge. In a world of constant change, agility is not just a technical imperative but an ethical one.

     Make Transparency and Interpretability a Priority

     One of the biggest risks of AI is the "black box" problem - the tendency for the decision-making logic of machine learning models to be opaque and inscrutable. When we can't understand how an AI system arrives at its conclusions, it becomes nearly impossible to verify that it is operating in an ethical manner. Schwartz emphasizes the importance of algorithmic transparency and interpretability. Strive to make the underlying logic of your AI systems as clear and understandable as possible. This may require investing in tools and techniques for explaining complex models (one such technique is sketched below), or even sacrificing some degree of performance for the sake of transparency. The goal is to create AI systems that are not just high-performing, but also accountable and auditable. By shining a light into the black box, you can build trust with stakeholders and ensure that your AI is aligned with your organization's values.
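     To make the "shining a light into the black box" idea concrete, here is a minimal sketch of one common interpretability technique, permutation importance, using scikit-learn. Schwartz's book does not prescribe this (or any) particular tool, and the model and dataset below are synthetic placeholders chosen purely for illustration.

     ```python
     # Illustrative sketch (not from Schwartz's book): probe a "black box"
     # classifier by shuffling one feature at a time and measuring how much
     # test accuracy drops. A large drop means the model relies on that feature.
     from sklearn.datasets import make_classification
     from sklearn.ensemble import RandomForestClassifier
     from sklearn.inspection import permutation_importance
     from sklearn.model_selection import train_test_split

     # Synthetic placeholder data; in practice this would be your own dataset.
     X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
     X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

     model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

     result = permutation_importance(model, X_test, y_test,
                                     n_repeats=10, random_state=0)
     for i, score in enumerate(result.importances_mean):
         print(f"feature_{i}: importance {score:+.3f}")
     ```

     A report like this is only a starting point, but even a simple ranking of which inputs drive a model's decisions gives stakeholders something auditable to discuss.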
     Keep Humans in the Loop

     Another key ethical principle that Schwartz stresses is the importance of human oversight and accountability. Even as AI becomes more sophisticated, it is critical that we resist the temptation to fully abdicate decision-making to machines. Establish clear protocols for human involvement in AI-assisted decisions, especially in high-stakes domains like healthcare, criminal justice, and financial services. Create mechanisms for human review and override of AI recommendations. Importantly, Schwartz cautions against using AI as a scapegoat for difficult decisions. We must be careful not to simply "blame the algorithm" when thorny ethical trade-offs arise. At the end of the day, it is human leaders who bear the responsibility for the AI systems they choose to deploy and the outcomes they generate.

     Use AI as a Mirror to Examine Societal Biases

     One of the most powerful ideas in Schwartz's book is the notion of using AI as a tool for ethical introspection. Because AI models are trained on historical data, they often reflect and amplify the biases and inequities that are embedded in our society. Rather than seeing this as a flaw to be ignored or minimized, Schwartz encourages leaders to seize it as an opportunity. By proactively auditing your AI systems for bias (a minimal audit sketch follows this excerpt), you can surface uncomfortable truths about the way your organization and society operate. This can spark much-needed conversations about fairness, inclusion, and social responsibility. In this way, AI can serve as a catalyst for positive change. By holding up a mirror to our collective blind spots, AI can challenge us to confront long-standing injustices and build a more equitable future.

     Conclusion

     As you embark on your own digital transformation journey, the insights from Adaptive Ethics for Digital Transformation provide an invaluable roadmap for navigating the ethical challenges of AI. By cultivating a culture of ethical awareness, embracing humility and adaptability, prioritizing transparency and human oversight, and using AI as a tool for introspection, you can harness the power of this transformative technology in a way that upholds your values and benefits society as a whole. The path forward won't always be clear or easy. But with the right ethical framework and a commitment to ongoing learning and adaptation, you can lead your organization confidently into the age of AI - and create a future that you can be proud of.

     The post Navigating the Ethical Minefield of AI appeared first on IT Revolution. View the full article
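     As a follow-up to the bias-audit point in the excerpt above, here is a minimal, hypothetical sketch of what such an audit could look like in practice: compare the rate of favourable model outcomes across demographic groups. The decision data and the 0.8 disparity threshold are illustrative assumptions for this example, not figures from Schwartz's book.

     ```python
     # Illustrative sketch (not from Schwartz's book): a simple disparity check
     # on model decisions, broken down by demographic group.
     import pandas as pd

     # Placeholder audit log; "approved" holds the model's binary decisions.
     decisions = pd.DataFrame({
         "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
         "approved": [1,   1,   1,   0,   1,   0,   0,   0],
     })

     # Positive-outcome rate per group, and the ratio of the worst-off group
     # to the best-off group (a common rule-of-thumb disparity measure).
     rates = decisions.groupby("group")["approved"].mean()
     disparity = rates.min() / rates.max()

     print(rates.to_string())
     print(f"disparity ratio: {disparity:.2f}")
     if disparity < 0.8:  # illustrative threshold, not a legal standard
         print("Potential disparate impact - route to human review.")
     ```

     A check this simple will not settle questions of fairness on its own, but it is the kind of recurring, auditable signal that can surface the uncomfortable truths the excerpt describes and trigger the human review it calls for.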
  3. In a recent panel discussion, a thought-provoking question was posed to us, one that delves into the murky waters of cyber security and governmental responsibility. The query centered on the obligation of governments regarding the vulnerabilities they discover and utilize for intelligence and espionage, especially in the context of public safety. This conversation took us on a […] The post Ethics of Cyber Security: To Disclose or Not? appeared first on VERITI. The post Ethics of Cyber Security: To Disclose or Not? appeared first on Security Boulevard. View the full article
  4. Some 74% are testing generative AI, and principles should be adopted to foster trust, according to the firm's 2023 State of Ethics and Trust in Technology report. View the full article