AI- The Wild West, A Minefield or Man’s Salvation?

omega man

Fortis Fortuna Adiuvat
Staff member
Yeah, I know some here who have posted in the ChatGPT thread think it’s really good, but I wonder…..
This came up on CBS This Morning. It seemed to be a “complete” report…


I can see how it could happen with the “addiction” whether due to an algorithm or “just because”.
Whose version is the best? As it’s being installed on new products, how will a user know if it’s on or off? Will one platform learn, for the good or the bad, from another?
I can see that product reviews are already being condensed by AI on many retail sites. I don’t know if this is going to help the consumer or just one retailer trying to keep up with their competitors.
There are many trying to advance AI and, it would seem, just as many pushing for some sort of protections.
A picture used to be worth a thousand words…..and then there was Photoshop. 🤔
Just a few questions I wonder about.
OM
 
Establishing a "relationship" with an unthinking automaton that basically uses an odds-analysis device to figure out what it should say doesn't seem like a wise idea.

It's bad enough that it happened; it's worse that it CAN happen. If anyone wants to talk about how the technology behind chatbots works, I'm happy to take you down the rabbit hole. I will say that how the device answers depends on what is used to teach it what conversation looks like.
 
I guess that’s part of the problem. Some people need more guidance than others.
Reminds me of a George Carlin quote-

"Put two things together which have never been put together before, and some schmuck will buy it."

OM
 
As someone who's worked in and around software for 40+ years, I've learned one thing: if you make a tool, people will use it in ways you never expected, often with destructive or detrimental results. As an example, I've supported a tool that identifies "nearly duplicate documents." If you have a few versions of a document that's gone through various drafts, it lets a lawyer see all the variations one right after another and understand where text was added or removed. The documents aren't exactly alike, but they're pretty close. People try to use that tool to get rid of documents, which is a bad idea: if you have two ten-page docs and the only text difference is that one says "we did it" and the other says "we did not do it," that's a 99% match, even though the docs mean two completely different things.
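A toy sketch of that failure mode, using Python's standard difflib and invented text rather than the actual product: two documents that differ only in one meaning-reversing sentence still score as a near-perfect match.

```python
# Two "documents" sharing pages of identical boilerplate, differing only in
# one sentence that reverses the meaning. A character-level similarity
# score still rates them a near-perfect match.
from difflib import SequenceMatcher

boilerplate = "This agreement sets forth the obligations of the parties. " * 200
doc_a = boilerplate + "Regarding the allegation: we did it."
doc_b = boilerplate + "Regarding the allegation: we did not do it."

score = SequenceMatcher(None, doc_a, doc_b).ratio()
print(f"similarity: {score:.4f}")  # well above 0.99, yet the docs contradict each other
```

Which is exactly why a similarity score is useful for review grouping but dangerous as a deletion criterion.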

AI is only a tool, but we've already got people doing dumb things with it.

Sorry for the young man, but robots are not your friend and will lie right to your face if they're fed inaccurate information. They have no empathy, no feeling and no understanding of circumstance; all tools we employ as humans when we interact.
 
Or as one Linux computer magazine wrote, AI is very, very good at producing authoritative-sounding bullshit.
Again, it depends on what you feed it. I work with a GPT engine that is fed documents from a single matter's discovery. I can ask it questions like "how did the Raptor fund work" and it will provide citations to documents in the record set it's used to form that response. If I have longer documents, I can use the very same GPT engine to generate a summary of the doc so I can figure out whether I even need to read it.
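As a rough sketch of that cite-your-sources pattern (document IDs, contents, and the word-overlap scoring are all invented for illustration; a real engine would use an LLM over a vector index):

```python
# Minimal sketch of answering a question against a curated record set and
# returning the supporting citation, using simple word overlap as a
# stand-in for real retrieval.
records = {
    "EX-0042": "The Raptor fund hedged equity positions using forward contracts.",
    "EX-0107": "Quarterly travel expense summaries for the sales organization.",
}

def answer_with_citation(question: str) -> tuple[str, str]:
    q_words = set(question.lower().split())
    # score each record by how many question words it shares
    best_id = max(records, key=lambda rid: len(q_words & set(records[rid].lower().split())))
    return records[best_id], best_id

text, cite = answer_with_citation("How did the Raptor fund work?")
print(f"{text}  [source: {cite}]")
```

The point is the shape of the output: every answer carries a pointer back into the record set, so it can be checked.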

The biggest problem with general GPT engines like OpenAI's is that they're trained on the web, and we all know the web is about 50% bullpucky. Garbage in, garbage out, just like I learned in the late 70s, is still 100% true. A curated Large Language Model, derived from known-good data sources, is stunningly capable of delivering solid and accurate output.

AI is more nuanced than traditional programmers and developers seem able to grok. If an engine is trained on the data being interrogated, and the part that generates text is tuned to only offer responses with a high confidence level, AI can assess large record sets accurately and compose narratives with cited sources with very high reliability.
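That confidence-level tuning might be sketched like this (the threshold value and the scores are made up; a real engine would derive them from its retrieval model):

```python
# Only answer when the best supporting passage scores above a threshold;
# otherwise decline rather than risk a hallucination.
CONFIDENCE_THRESHOLD = 0.80

def respond(candidates: list[tuple[str, float]]) -> str:
    """candidates: (passage text, retrieval confidence score) pairs."""
    best_text, best_score = max(candidates, key=lambda c: c[1])
    if best_score < CONFIDENCE_THRESHOLD:
        return "No sufficiently supported answer in the record set."
    return best_text

# Well-supported question: the high-scoring passage is returned.
print(respond([("The fund used forward contracts.", 0.93), ("Travel summary.", 0.21)]))
# Out-of-scope question: the engine declines instead of guessing.
print(respond([("Travel summary.", 0.21), ("Org chart.", 0.15)]))
```

Declining to answer is the whole trick: a tuned engine prefers silence over a confident-sounding guess.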

While I'm sure the Linux mag thinks they dropped the mic there, if anything, it underscores their lack of understanding of the factors that can make AI great or, for that matter, awful and unreliable.

I hope that's helpful. I've been working with Google BERT, Text2Vec, FastText and other neural-network-based engines for a few years, and GPT engines for about four years now, but have been through text-based concept clustering, linguistic pattern analysis and a bunch of other intermediary tools over the last 25-ish years.

Happy to answer questions if folks have them.
 
I guess that is my point. If you know what you are doing with AI, running a specific AI “engine” for a specific task, it’s probably a great tool. Not knowing what you are doing, and/or stumbling into an AI-based site, could really skew the outcome, or worse.
OM
 
AI Overview


You'll likely know AI is being used on your phone when features like your camera automatically adjust settings based on the scene, your virtual assistant understands natural language, your keyboard predicts the next word you're typing, or when your phone optimizes battery usage based on your usage patterns; essentially, any time your phone seems to "intelligently" react to your actions without explicit instructions is a good indicator of AI at work behind the scenes.


Key signs of AI on your phone:


  • Enhanced camera features: Automatic scene detection, portrait mode, low-light enhancements, real-time beautification filters, and object recognition while taking photos.


  • Smart virtual assistants: Responding to natural language queries, providing relevant information based on your context, setting reminders, and controlling smart home devices.


  • Predictive text input: Suggesting the next word you might want to type based on your previous words and writing style.


  • Personalized recommendations: App suggestions, music playlists, or content tailored to your interests based on your usage history.


  • Optimized battery life: Adaptive charging based on your usage patterns to extend battery life.


  • Improved voice recognition: Accurate voice commands even in noisy environments.


  • Image search by content: Instantly finding similar images based on a picture you took.

To check specifically which AI features are active on your phone:


  • Go to your phone's settings:
    Most phone manufacturers have a dedicated section within settings labeled "AI" or "Advanced Features" where you can view and manage AI functionalities.


  • Read app descriptions:
    Many apps will explicitly mention if they use AI features in their descriptions.



Generative AI is experimental.

🤷‍♂️

OM
 
Point of accuracy: Generative AI is not experimental. If you're using Grammarly, which has been on the market for a couple of years, you're using generative AI, specifically Convolutional Neural Network (CNN) AI. The company I work for is selling four products right now, and one has been on the market nearly a year with thousands of daily users. We are not alone in our market space.

There are a series of AI "engines"; some more mature than others.

If folks would like a discussion of the different types of AI, I'm happy to put together a primer on how they work and how they're applied.
 
Before this gets too far off track, my point is/was: “Will people that haven’t subscribed to a particular AI service/platform/program (like Grammarly) know they are part of an AI program?”
The summary feature(s) an AI program can provide remind me of CliffsNotes. Fairly complete, but bearing the “mark” of those that paraphrased the product.
I think of polls, especially lately :eek . Paraphrasing a subject has to be super neutral and complete to hold my interest…….. and I need to know if I was involved.
Maybe embedded AI will have an ICON to allow selective usage….. you know, like HAL 9000:
“HAL: I am putting myself to the fullest possible use, which is all I think that any conscious entity can ever hope to do.”
OM
 
Depends on whether we decide to provide any consumer protections around it, in my humble opinion. Here in California, if I'm being tracked by a site, I get notified and I can decide to disallow cookies, for example. Or, if I'm curious about what they're collecting about me, I can file a records request. I think we need something like that or like the EU's GDPR for all American consumers.

That's really more a regulatory question than a question about the functionality of AI, I think.
 
A user has to know enough, and care enough, to watch tracking and cookies, and to decide how much cookie information to allow. Apple does most of this through Safari, along with warning about sites that have no security certificate. A similar warning on AI would be a good idea. :thumb
Europe does seem to be considerably further ahead than the US on internet security/privacy concerns.
OM
 
I respond to security questionnaires for our data structures at work. Yes, the GDPR in the EU is significantly better than anything we have in the US. By a wide margin. We just got approval to use our Gen AI in the EU and UK because the rules are significantly tighter there than here. We've had our AI tools on the US market for about a year now.

Mostly, Europeans have "the right to be forgotten," which means you can request that all your data be removed from structures that hold it. You can issue a DSAR and request an accounting of your data before you have it taken down. But you have that right.

FWIW, a VPN can be a great way to ensure your communications are secured. Some browsers, notably Opera, include one as part of their base functionality. Apple's Private Relay is also a helpful set of tools for folks looking to be more secure.
 
Yep. This is what you get when you train both the LLM (the index) and the Writer (the part that generates the response) on the public internet without curation of the training material.

“Writers”, which generate the textual response, work like a prediction machine. “In a sentence about airhead carbs, this kind of text goes in this kind of order usually”. If you train it on data with racial epithets, it will use them when it generates text.
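That prediction behavior can be illustrated with a toy word-level model (a deliberately tiny stand-in for a real LLM, with an invented training sentence): count which word follows which in the training text, then always emit the most common follower.

```python
# Count follower words in a tiny training text, then generate by repeatedly
# choosing the most frequent next word. The model can only ever echo
# patterns present in its training data.
from collections import Counter, defaultdict

training_text = "the carbs need balancing and the carbs need cleaning"
words = training_text.split()

followers = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    followers[prev][nxt] += 1

def generate(start: str, length: int) -> str:
    out = [start]
    for _ in range(length):
        nxt_counts = followers.get(out[-1])
        if not nxt_counts:
            break
        out.append(nxt_counts.most_common(1)[0][0])
    return " ".join(out)

print(generate("the", 2))  # "the carbs need" -- it repeats what it was fed
```

A real writer predicts over billions of parameters instead of a word table, but the principle is the same: train it on epithets and it will emit epithets.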

You ever hear a little kid say something like “my mom says you’re too lazy”? That’s what the kid heard. Generative AI is just like that, and it’s increasingly apparent that OpenAI’s indexing is crap and they have not tuned their product to squelch hallucinations.

FWIW, a “confidence level” can be applied so the responses are less likely to be hallucinations. My job indexes documents for litigation, so hallucinations are really, really bad to have. We’ve been successful doing so.

Imho, OpenAI is sloppy and lazy. I hope their crappy policies don’t sour the public on the promise of truly revolutionary knowledge processing.

Wait until you hear about quantum computing…
 
As far as Apple and knowing whether the AI features are on- or off-

AI Overview

On an Apple iPhone 16, a user will likely know if the AI (referred to as "Apple Intelligence") is in use by seeing a subtle visual cue, like a small indicator icon appearing in the interface when AI is actively processing information, or by noticing contextual suggestions and automatic actions happening based on their usage patterns, particularly within apps like Messages, Mail, and Siri.


Key points about how to identify Apple Intelligence in use:
  • Visual indicators:
    A small icon may appear on the screen, potentially in the status bar, to signify when AI is actively working in the background.
  • Contextual suggestions:
    The phone might proactively provide relevant suggestions or auto-complete options in text fields based on your past interactions and current context.
  • Enhanced Siri functionality:
    Siri may provide more personalized responses or complete tasks with greater understanding due to AI integration.
  • Automatic actions:
    Certain actions like calendar event creation or reminder setting might be automatically initiated based on your routine and data analysis.
Important note: Apple prioritizes user privacy, so the exact visual cues and level of AI interaction may be subtle and designed to not be overly intrusive.

—————————————————

Sounds like it can be switched on and off-

Can Apple AI be turned off?


I am assuming that you have an iPhone that is Apple Intelligence capable and running iOS 18. Yes, you can turn Apple Intelligence OFF: open Settings ➡️ Apple Intelligence & Siri ➡️ set the Apple Intelligence switch to OFF.

More here about what the expected features will do-


OM
 
Sounds like Apple is really serious about its security protocols.

Apple’s $1 Million Bug Bounty

Apple’s bug bounty for PCC is pretty generous. For major holes, which it categorizes as allowing “remote attack on request data,” it is offering $1 million for arbitrary code execution flaws. Meanwhile, access to a user's request data or sensitive information outside the trust boundary offers a still rather generous $250,000 reward.



For attacks requiring a “privileged position” — access to someone’s iPhone — Apple is offering $150,000 for flaws allowing access to a user's request data or other sensitive information about the user outside the trust boundary.

“Because we care deeply about any compromise to user privacy or security, we will consider any security issue that has a significant impact to PCC for an Apple Security Bounty reward, even if it doesn’t match a published category,” Apple said.

Full story here- https://www.forbes.com/sites/kateof...fers-1-million-to-hack-private-cloud-compute/

:bow

OM
 