jcalx 5 days ago

Reminds me of this article from two years ago [0] and my HN comment on it. Yet another AI startup on the general trajectory of:

1) Someone runs into an interesting problem that can potentially be solved with ML/AI. They try to solve it for themselves.

2) "Hey! The model is kind of working. It's useful enough that I bet other people would pay for it."

3) They launch a paid API, SaaS startup, etc. and get a few paying customers.

4) Turns out their ML/AI method doesn't generalize so well. Reputation is everything at this level, so they hire some human workers to catch and fix the edge cases that end up badly. They tell themselves that they can also use it to train and improve the model.

5) Uh-oh, the model is underperforming, and the human worker pipeline is now some significant part of the full workflow.

6) Then someone writes an article about them using cheap human labor.

[0] https://news.ycombinator.com/item?id=37405450

  • palmotea 5 days ago

    > 5) Uh-oh, the model is underperforming, and the human worker pipeline is now some significant part of the full workflow.

    AI stands for "Actually, Indians."

    • K0balt 4 days ago

      This has been a running joke in several projects I have been involved in, each time, apparently independently evolved. I never bring it up, but I am amused each time it appears out of the zeitgeist. It’s actually the best Kind of ironic humor, the kind that exposes a truth and a lie at the same time, with just enough political incorrectness to get traction.

      I can’t even count the number of of times I have shut down “AI” projects where the actual plan was to use a labor pool to simulate AI, in order to create the training data to replace the humans with AI. Don’t get me wrong, it’s not a terrible idea for some cases, but you can’t just come straight out of the gate with fraud. Well, I mean, you could. But. Maybe you shouldn’t.

    • morksinaanab 5 days ago

      I always thought it stood for Almost Implemented

      • kleene_op 4 days ago

        Or more the more charitable Always Improving

        • pja 4 days ago

          AGI == A Guy Instead.

          • belter 4 days ago

            AGI == A Grand Illusion

    • jansan 5 days ago

      Or it should be changed to MT -> Mechanical Turk

      "Our bleeding edge AI/MT app..." does not sound bad at all.

      • rk06 4 days ago

        It might fool general public, but the moment "Mechanical Turk" is uttered, some of us would ask "is this done by human?"

        • varelaseb 4 days ago

          Plus no way that's getting VC money

    • evnix 5 days ago

      [flagged]

      • perching_aix 5 days ago

        Try replacing it with Germans, and now it sounds like a praise, because the stereotype around Germans and Germany is that way.

        Is your problem that their phrasing invites stereotyping, or that the stereotype it invites happens to be negative? Because if it's the latter, do you really think that's the semantic intention here?

        • evnix 4 days ago

          I get that the intention was not harmful, but i am trying to make the poster understand how people might feel.

          Regarding Germans, if the news was, "AG deportation private company is a scam, they were sending people to forced euthanasia"

          and someone came and said, "AG stands for Actually Germans". I am sure no German would want to be associated with that.

          • lupusreal 4 days ago

            That's even funnier than the Indian one.

          • perching_aix 4 days ago

            It sounds like more the former then, which I do agree with.

      • iammrpayments 5 days ago

        It might offend people who are underpaying indians

      • ohgr 4 days ago

        I’ll add that I’ve heard this before. And it was from an Indian guy and he thought it was absolutely hilarious.

        • carlosjobim 4 days ago

          Well I am allergic to gluten and I don't find it very funny at all!

      • johnnyanmac 4 days ago

        Actually blacks? No, not particularly offensive, why?

      • FirmwareBurner 4 days ago

        I told this AI joke to my Indian friends and they all laughed and said "true". Get a life, stop being a tone policing hall monitor, IRL people off Twitter aren't as easily offended by innocent jokes as you might think.

        • adriand 4 days ago

          That sounds a lot like the classic “I have lots of black friends” line. But even if you do have Indian friends, there is a big difference between joking with friends vs what is suitable for publication in the public sphere.

          I would also note that OP merely posited a thought experiment. They’re not policing anyone. “Get a life” is perhaps a little harsh?

          Here’s another thought experiment. If you had a job interview for a senior position at Microsoft and your interviewer was Satya Nadella, would you make this joke?

          • FirmwareBurner 4 days ago

            >what is suitable for publication in the public sphere

            That's why the persons, intent and context makes all the difference between something being funny and something being ofensive. And you could tell that statement wasn't in bad faith or meant to be derogatory.

            >If you had a job interview for a senior position at Microsoft and your interviewer was Satya Nadella, would you make this joke?

            Please don't move the goalposts to bad faith arguments, The casualness of the HN comment sections very different than the context of a job interview, hence my comment above on context mattering. Do you talk to your friends the way you talk to HR at work?

            And yes, I'm sure graybeard Microsoft employees who worked with Nadella for a long time also make such jokes and banter with him behind closed doors and they all laugh, people are still people and don't maintain their work persona 24/7 or they'd go crazy.

            • adriand 4 days ago

              My point is not that the statement was made in bad faith or was grossly derogatory. Personally, I am not gravely offended. However, I do think it's impolite and casually dismissive of an entire nation. In other words, I think it's in poor taste.

              You obviously disagree, which is fine - I'm not calling you a racist - but to me, "get a life" is a pretty harsh reaction to someone raising a concern about a joke they find in poor taste.

              The point I was trying to make about the job interview and Nadella, which I may have made clumsily, is not that we ought to use the tone we use in job interviews everywhere. My point is that Nadella is an extraordinarily accomplished Indian person and this "joke" would likely fall flat with that sort of audience. I think that's a decent barometer for whether the joke is in poor taste or not.

              Again speaking personally, as a white dude, I avoid making jokes about minorities. That used to be pretty much common sense, although I recognize that there's a certain culturally ascendant viewpoint that disagrees with that. But my decision to treat people respectfully isn't about what's culturally in vogue, and I'm still willing to bet that a lot of the people of colour who laugh along when white people make jokes at their expense are thinking something else entirely.

    • siva7 5 days ago

      [flagged]

      • TeMPOraL 5 days ago

        It's the other way around - it's racist if you're a US American, because in USA every problem is somehow ultimately called or blamed on racism.

        Elsewhere in the world, we'd all it xenophobia, or Indophobia if one has something against Indian people specifically.

        Though in this case, it's driven primarily by economic stereotypes, coming from the country becoming a cheap services outsourcing destination for the West, so there should be a better term coined for it. The anti-Indian sentiment in IT seems to be the services equivalent of the common "Made in China = cheap crap" belief, and because it applies to services and not products, it turns into discriminating people.

        • johnisgood 4 days ago

          It is mainly because the majority of scammers that we hear about are Indian. I am not sure it has anything to do with xenophobia, whether they (or we) call it as such or not.

      • perching_aix 5 days ago

        Nothing racist about it, India is essentially the #1 outsourcing destination. Not everything that involves an explicit mention of ethnicity / origin is racist.

        • evnix 5 days ago

          Not racist but offensive indeed. It's like relating school shootings to white people. A white child sitting in Norway might not relate and may find it offensive and insulting.

          • fkyoureadthedoc 4 days ago

            India is a country, not a color of person. It's more like relating school shootings to the US, where an American child sitting in the US would definitely relate and find it accurate.

      • johnisgood 5 days ago

        It is reality. Reality cannot be racist. :P

        • rightbyte 5 days ago

          Uhm.. Do you include human social structures in reality?

          Even of you replace 'reality' with 'true' amd 'truth' the logic doesn't quite work out.

          • johnisgood 4 days ago

            Others have already mentioned why it is grounded in reality. Call your ISP, Indian picks up. You are being scammed? Probably an Indian. It is not racist, I have nothing against Indians in particular, but the trend is there and it is quite obvious, hence, reality.

            If I replace "reality" with "truth", it does not work out because it makes no sense: "truth cannot be racist" makes no sense whatsoever. In relation or correspondence to what? It does work with reality, however.

            • rightbyte 4 days ago

              I am trying to argue that something existing in reality, does not make it not rascist. Like say apartheid or whatever.

              As a comment on "Reality cannot be racist".

              • johnisgood 3 days ago

                But how is it racist? I am not afraid of Indians. I have nothing against some Indians, albeit I do have something against Indians who are celebrating something by stepping on manure, but since it does not affect me nor my country, I do not care what they do. UK might care though, it does affect them, but it does not make them racist either by not agreeing with some of their practices.

                FWIW my best friend is South Indian. In the North, he is hated for unknown reasons, different caste or whatever, I do not know. He usually tells me everything about Indians that I do ask.

                Another FWIW, he likes a barista (he is currently studying in the UK where he does not face any racism), and he came to me with help as to how to approach her. It is a good thing he did (he admitted it) because he would have ruined it by seeming so desperate, which seems to be a common trend among Indians, too. This is reality, too. To what extent? I do not know, but enough to notice. He is otherwise (despite being an Indian) a very well-mannered, curious, and smart person. He does seem to care much less about hygiene than one should, and blames things on diet, rather than just lack of hygiene. We discussed it in detail and he agreed, eventually.

                While we are at it, I dislike Indian accents, too, generally. This is a preference. It does not make me racist. Do you think it does?

                At any rate, if you have any questions, I am willing to answer (with his permission if it concerns him), but ultimately, I do not think it is racist.

    • midnightblue 4 days ago

      You win the internets, sir.

      • palmotea 4 days ago

        > You win the internets, sir.

        Yes, I did. By repeating someone else's apropos joke, I get to reap the sweet, sweet internet points.

      • midnightblue 4 days ago

        downvoted for nostalgic use of Slashdot vernacular? we don't read these memetic cultural artifacts very often anymore. sometimes, you use them, just to keep them alive as a memento of a bygone era.

  • jjmarr 5 days ago

    > Reputation is everything at this level, so they hire some human workers to catch and fix the edge cases that end up badly.

    The most important part of your reputation is admitting fault. Sometimes your product isn't perfect. Lying to your investors about automation rates is far worse for your reputation than just taking the L.

    • siva7 5 days ago

      Literally every founder story disproves your theory

      • johnnyanmac 4 days ago

        Okay slight correction: |lying to shareholders is okay until they start losing money. Then it's worse than admitting fault.

        • Jensson 4 days ago

          Yeah, Theranos would have been hailed as a genius founder move if they managed to make the things work. But since it didn't make money she got put in prison.

          • Suppafly 4 days ago

            >Yeah, Theranos would have been hailed as a genius founder move if they managed to make the things work. But since it didn't make money she got put in prison.

            The key is to at least be believable. Anyone with any sense realized that Theranos' claims were literally impossible.

      • jjmarr 4 days ago

        What are some actual examples of founders lying to their investors and getting away with it? I consistently see tech companies openly admit to losing insane amounts of money. OpenAI lost $5 billion last year on $3.7 billion in revenue and they're a $300 billion company????

        • siva7 3 days ago

          I'm sorry, you were totally right. Sam Altman for example wasn't candid in his communications with board and investors. For that he was fired as CEO of openai and lives now a miserable live in poverty.

      • 52-6F-62 4 days ago

        And so then the moral impetus changes to “it is right to lie to your investors and clients and utilize underpaid manual labour where you claim you have intelligent machinery to get rich?”

        It’s tens of millions of dollars lol.

        The emperor just took off his socks and is starting to do a little uncoordinated “sexy” dance

    • chii 5 days ago

      The expectation is that the startup lies until they make it. It isn't too dissimilar to theranos.

      • 52-6F-62 4 days ago

        What is making it, in these cases?

        Monopoly? IPO? Exit and leave the bags with someone else?

        This is bloody absurd

      • Ancalagon 5 days ago

        Or Uber. Or Tesla. Or Amazon Go.

        • nyclounge 4 days ago

          They are just heavily subsidized!!!

          We ought to just view and treat them as defense contractors and 3 letter agencies!

  • Digory 5 days ago

    I'd think ambiguous statements about the scope of your AI would make it hard to prove fraud, if you were being careful at all. "Involving AI" could mean 1% AI.

    So it's doubly surprising to me the government chose (criminal) wire fraud, not (civil) securities fraud, which would have a lower burden of proof.

    Government lawyers almost never try to make their job harder than it has to be.

    • tbrownaw 5 days ago

      If you click through to the doj press release, they're saying the statements were pretty explicit.

      • pseudo0 5 days ago

        Yeah, specifying an automation rate of 93-97% to investors when it's "effectively 0%" per your own executives... That's pretty egregious.

        • torginus 5 days ago

          How do you define that? If I write a 'Hello World' program in C++, you could argue that the hard part of compiling, linking, and generating assembly code was done by a computer, so programming is 90% automated, even though most people would understand the automation level to be 0%.

          You might argue this is a flawed example, but we've automated huge workflows at work that turned major time-consuming PITAs into something it wouldn't occur to most people that a human has anything to do with it.

          • mandevil 5 days ago

            Law does not work like engineering does. Lawyers, judges and juries understand the intent of the law, and are not bound like we software engineers are to the exact commands in front of them.

            You could try to convince a jury of this argument, sure. Do you think it will work? And if you do go with that argument then are you actually convincing the jury of your guilty conscience- often an important part of a white collar crime where state of mind of the defendant is very important?

            • eigen 4 days ago

              > Lawyers, judges and juries understand the intent of the law, and are not bound like we software engineers are to the exact commands in front of them.

              a good example is O'Connor v. Oakhurst Dairy, No. 16-1901, also known as the Maine Dairy oxford comma case. the District Court followed the intent but the Appeals court followed the law as written.

              https://www.smithsonianmag.com/smart-news/missing-oxford-com...

              from the Appeals Court ruling

              > The District Court concluded that, despite the absent comma, the Maine legislature unambiguously intended for the last term in the exemption's list of activities to identify an exempt activity in its own right. The District Court thus granted summary judgment to the dairy company, as there is no dispute that the drivers do perform that activity. But, we conclude that the exemption's scope is actually not so clear in this regard.

              https://cases.justia.com/federal/appellate-courts/ca1/16-190...

          • hugh-avherald 5 days ago

            If "most people would understand the automation level to be 0%" then you can't represent that the automation level is something else, unless you're explicit about deviating from the commonly understood meaning of 'automation'.

            • torginus 5 days ago

              The problem with intuition is that you have to be familiar with the domain to have it. You and I have zero intuition on what needs to be or can be done by humans and what can be handed off to machines in this financial domain.

          • lmm 4 days ago

            Which is why you can often get away with this sort of bluster. But not when your own emails show that you yourselves considered that to be not the true number. You can report wonky metrics that don't measure anything real to your investors, but you can't report falsified ones.

          • nkrisc 4 days ago

            That's what judges and juries are for. Law is not computer code.

  • A4ET8a8uTh0_v2 5 days ago

    To be perfectly honest, I am more amazed that it was a valid business model and people were willing not just invest in it, but offer their rather personal information to an unaffiliated third party.

  • mvkel 3 days ago

    > Turns out their ML/AI method doesn't generalize so well.

    I'd argue the opposite. AI typically generalizes very well. What it can't do well is specifics. It can't do the same thing over and over and follow every detail.

    That's what's surprised me about so many of these startups. They're looking at it from the bottom-up, something ai is uniquely bad at.

  • claiir 3 days ago

    In this case it's a little bit worse; the "nate" app had a literally "0% automation rate," despite representations to investors of an "AI" automation rate of "93-97%" powered by "LSTMs, NLP, and RL." No ML model ever existed! [1]

    See:

    > As SANIGER knew, at the time nate was claiming to use AI to automate online purchases, the app’s actual automation rate was effectively 0%. SANIGER concealed that reality from investors and most nate employees: he told employees to keep nate’s automation rate secret; he restricted access to nate’s “automation rate dashboard,” which displayed automation metrics; and he provided false explanations for his secrecy, such as the automation data was a “trade secret.”

    > SANIGER claimed that nate's "deep learning models" were "custom built" and use a "mix of long short-term memory, natural language processing, and reinforcement learning."

    > When, on the eve of making an investment, an employee of Investment Firm-1 asked SANIGER about nate's automation rate, that is, the percentage of transactions successfully completed with nate's AI technology, SANIGER claimed that internal testing showed that "success ranges from 93% to 97%."

    (from [1])

    [1]: https://www.justice.gov/usao-sdny/media/1396131/dl?inline

  • mvdtnz 5 days ago

    I think you're being excessively generous. According to the linked article,

    > But despite Nate acquiring some AI technology and hiring data scientists, its app’s actual automation rate was effectively 0%, the DOJ claims.

    Sometimes people are just dishonest. And when those people use their dishonestly to fleece real people, they belong in prison.

  • ohgr 4 days ago

    This is what we did internally. Someone said we could use LLMs for helping engineering teams solve production issues. Turned out it was just a useless tar pit. End game is we outsourced it.

    Neither of these solved the problem that our stack is a pile of cat shit and needs some maintenance from people who know what the hell they are doing. It’s not solving a problem. It’s adding another layer of cat shit.

  • Lerc 5 days ago

    Going back earlier, a similar thing in 2017 was done.

    https://thespinoff.co.nz/the-best-of/06-03-2018/the-mystery-...

    Interestingly this was a task that could probably be done well enough by AI now.

    Not that these guys knew how close to reality they turned out to be. I assume they just had no idea of the problem they were attempting and assumed that it was at the geotaging a photo end of the scale when it was at the 'is it a bird' end.

    Maybe I'm being overly optimistic in assuming people who do this are honestly attempting to solve the problem and fudging it to buy time. In general they seem more deluded about their abilities than planning a con from start to finish.

  • aucisson_masque 5 days ago

    > its app’s actual automation rate was effectively 0%, the DOJ claims.

    In that case, I believe it's a scam. 0% isn't some edge case.

  • baxtr 4 days ago

    tbh I don’t think any one except for investors care how you deliver a service as long as quality and price are right.

  • petesergeant 5 days ago

    Honestly I think the only real problem here is if you then raise further money claiming you've solved the problem when you haven't, which is also where this particular startup comes unstuck

  • belter 4 days ago

    > Uh-oh, the model is underperforming, and the human worker pipeline is now some significant part of the full workflow.

    Tesla robots and Taxis enter the room...

dale_huevo 5 days ago

I've been flagged as a potential shoplifter by the self-checkout at the grocery store based on some video analysis of CCTV footage of my hand motions. (It was wrong, of course.) After leaving the store I wondered if it really was software analysis or just some guy in India or the Philippines watching a live feed of me scanning bananas.

  • Joel_Mckay 5 days ago

    It is likely a real machine vision system if it was the same system our former company evaluated.

    It worked by camera tracking the shelves contents, and would adjust the inventory level for a specific customers actions. And finally, tracked the incremental mass change during the checkout process to cross reference label swap scams etc.

    Thus, people get flagged if their appearance changes while in the store, mass of goods is inconsistent with scanned labels, or the cameras don't see the inventory re-stocked.

    You would be surprised how much irrational effort some board members put into the self-checkout systems. Personally, I found the whole project incredibly boring.... so found a more entertaining project elsewhere... =3

  • imroot 5 days ago

    Percepta was a company that was doing a lot of CV/ML in this space looking for shoplifting traits. They had a few paying customers before they were completely acquired by ADT Business. A lot of shoplifters use the PLU for bananas when tag swapping higher-ticket items at the self checkout, so, more than likely, they wanted to check that you were actually purchasing bananas.

    • chneu 5 days ago

      For a while a lot of grocery stores were randomly auditing self checkout. I haven't had it happen to me in a couple years though.

      It always seemed to be random and coincided with Kroger doing the "scan as you shop" trial thing.

      • SoftTalker 5 days ago

        As long as their losses on self checkout are less than the cost of paying cashiers and baggers they are happy.

        • phonon 5 days ago

          Cashiers can steal too (not ring up their friends, shrinkage, etc.)

      • rcxdude 5 days ago

        The scan as you shop thing seems to be pretty heavily audited, which really removes the point of doing it, since it just slowed things down. I think when I tried it I put up with it for about 4 out of the first 6 shops winding up slower than just going through the self-checkout or a regular one because of the extra faff.

        • jon-wood 4 days ago

          I regularly do scan as you shop and have started to notice some patterns in auditing. If I change my mind on a product and remove it from the basket while shopping I'll get audited almost every time, similarly if I add a single onion (which is a weighed product) to my basket. I do enjoy the company line of asking "did you have trouble scanning anything" just before they do the check, which is blatantly a get out of jail free card to say "oh, yes, I did in fact have some trouble scanning that 42" TV that doesn't appear in the basket currently".

      • dustincoates 5 days ago

        In France at the Monoprix chain, I'm randomly audited about once a month.

        Which is doubly annoying, because I'm in that line to save time, and now I have to hunt down one of the employees who isn't paying attention or where they're supposed to be.

        • namaria 5 days ago

          I noticed that I am often audited when I am unshaven and wear hoodies and almost never when I have a sports jacket on...

      • devoutsalsa 4 days ago

        I regularly get flagged for review during self checkout at my local market. It occurred to me the other day that when a cashier handles the scanning, I don’t take on any risk. Now I have to do the checkout work myself, and if I do it poorly, I can go to prison. Welcome to the future!

      • phito 4 days ago

        In Belgium, Albert Hijn, I get audited about half the time. It's pretty quick so I don't really mind, but it happens a bit too frequently to my taste.

        • eythian 4 days ago

          In NL AH for me it seems to come in waves. For a while I'll be checked every other time, and then not at all for some time. This could just be me seeing patterns where they aren't though, perhaps I should track it for curiosity.

      • kevin_thibedeau 4 days ago

        You can just refuse that BS in non-membership stores. After payment, your debts are settled and the merchandise is your property. If they want oversight they need to eliminate self-checkout and staff their registers.

        • masfuerte 4 days ago

          At my local supermarket self-checkout they moan at me for removing the security tags from bottles of wine when the staff are busy. It's my wine!

        • ty6853 4 days ago

          That's not true in most the USA. Shopkeepers privilege allows them to confine you and/or the goods, which depending on state only requires something akin to "reasonably believed you were stealing." In my state a shopkeep can confine me until police arrive on pretty flimsy evidence, which if in the country could be a very long time. It is better just to find stores that don't do it and shop there - - I stopped shopping at walmart after I was confronted by gigantic bouncers accusing me of stealing (yes in the hood Walmart is very aggressive with shopkeepers privilege, if you live in wealthy area and refuse they much more likely to just let you go).

          You could try to stop them but if they are hurt in the process it could very well end in a lengthy trip to prison.

    • shmel 4 days ago

      What is PLU?

      • SpaceNoodled 4 days ago

        "Price Look-Up" - the 4-digit code you punch in for different produce items.

        You probably need to eat more fruits and vegetables.

        • shmel 7 hours ago

          You probably need to stop thinking the world outside of the US doesn't exist mate.

        • Nullabillity 4 days ago

          > You probably need to eat more fruits and vegetables.

          I've never seen those here; the scale just has a touch screen menu with pictures.

  • bitwize 5 days ago

    At the Circle K they have the option of doing self checkout by putting all your items under a camera and the register will automagically count 'em up and assess your total. I keep wondering if it's done by AI -- All Indians. Same with the OCR ATMs do on cheques.

    • tczMUFlmoNk 5 days ago

      Relevant: Uniqlo's self checkout, based on RFID tags with a great user experience:

      - https://news.ycombinator.com/item?id=38715111

      - https://www.wsj.com/business/retail/uniqlo-self-checkout-rfi...

      - https://archive.is/ms1ke

      • sethhochberg 5 days ago

        Those Uniqlo self checkouts really do fall into that “ indistinguishable from magic” territory for me - on a technological level I completely understand how they work, and yet every time I use them I’m a little surprised that it works so well and filled with joy anyways

      • rtkwe 5 days ago

        Ah man I remember the RFID hype when the idea was you'd just shop and walk out and the items would all be automatically scanned by an RFID reader and charged. A tough lift in a grocery store but a single source store can build all the tags into their own products.

    • evbogue 5 days ago

      This vibes with my multiyear theory that Tesla self-driving is someone in China driving your car for you like a racing simulator. Perhaps the graphics are even game-ified so the work stays mysterious.

      • xeromal 5 days ago

        There's a car company that runs in vegas that does exactly that. You rent the car for a few hours and it will be driven up to you by a remote driver and then when you're done it'll drive off remotely. No AI needed.

        • nylonstrung 5 days ago

          Doesn't latency make this dangerous?

          At a BAC of 0.08 (legal limit in US) drivers have reaction time delayed by only 60-120ms but crash risk is 10x compared to sober

          Lack of depth perception probably compounds this?

          • thaumasiotes 4 days ago

            > At a BAC of 0.08 (legal limit in US) drivers have reaction time delayed by only 60-120ms but crash risk is 10x compared to sober

            I'm not sure that slower reaction times are the only effect of alcohol consumption.

            • ty6853 4 days ago

              Sure, the other effects is that they're much more likely to be driving at night, overextended their waking hours, distracted by friends / a date / a prostitute, and driving a route that they do not normally drive.

              How could you not be 10x more likely to crash than the nurse getting off at 2am who has driven the route a thousand times and knows all the bad blind spots / bad intersections / is still well within her normal waking hours. That is much closer to the normal profile of the sober people who are out driving during prime drinking hours.

          • mring33621 4 days ago

            Have you seen the 'latency' of the average driver on the road these days?

            Most people appear to take about 2 seconds to respond to any change in conditions.

        • AStonesThrow 5 days ago

          > driven up to you by a remote driver

          This is hopefully illegal and not actually what is done, because I have learned from Waymo that it is not permissible or even possible for the CS reps to remotely drive the car. They merely push "suggestion" commands to be considered by the onboard Waymo Driver.

          Remote human drivers have too much latency and not enough realtime information available to "drive" a vehicle on public roads.

          • xeromal 5 days ago
            • kfarr 5 days ago

              Wow thanks for sharing. I genuinely didn’t think this was legal.

              • fc417fc802 4 days ago

                From the second link:

                > In the event of an emergency, the vehicle automatically puts itself into a safe state within milliseconds by coming to a safe stop in the same lane.

                It sounds to me like the hardware has some amount of autonomy. They just aren't trying to do the high level stuff. Both companies seem like they're trying to hide the implementation details though which immediately makes me suspicious of them.

              • xeromal 5 days ago

                Yeah I was surprised too when they handed me a voucher when I left a hotel last time I was there. Really cool concept. I wasn't able to use it because it was only on iPhone

            • solidsnack9000 5 days ago

              I wonder where the remote drivers are? If they were in Vegas, latency could be very low -- but if they are in Berlin...

              • dheera 5 days ago

                They wouldn't be in Berlin, you want to go to cheaper labor places than Las Vegas, which are plentiful in the US, and even more plentiful in Mexico if you want reasonably low latency to the US.

                I'd be more concerned about the remote driver's internet connection crapping out. The car probably has multiple simultaneous cellular connections (e.g. PepLink SpeedFusion hot failover type thing).

              • AStonesThrow 2 days ago

                It’s not merely about latency, but you also need to consider that any ‘remote driver‘ will have less telemetry, and of a lower quality, than an onboard AI driver.

                A human operator wouldn’t even be able to read or interpret the types of data which would be collected and sent by a vehicle such as a FSD Tesla or a Waymo.

                Now as I understand it, military forces are really good at remotely operated drones/UAVs so perhaps the tech does exist in parallel, but those are two distinct applications.

          • andoando 5 days ago

            If the drivers are local were looking at less than 100ms latency? Seems very doable. More worried about the system going down.

            • nylonstrung 5 days ago

              Still seems dangerous to me- 60-100ms increase in reaction time is equivalent to driving drunk

              At 70 mph you'd traverse the full length of a car before the brake kicks in

              • AStonesThrow 2 days ago

                Oh well you’re assuming now that each remote human “vay.io” operator is only simultaneously responsible for only one vehicle at a time?

                Furthermore, that’s a brand-confusion name they’re d/b/a. “Veyo” is a very well established ride sharing provider, based in San Diego, and specializing in human drivers for NEMT.

                Come, Mister Tally-Mon, Tally Me Banana: Daylight Come And Me Wann’ Go Home

      • uptown 5 days ago

        You didn’t really think you were playing “Crazy Taxi” did you?

      • thih9 5 days ago

        I hope that whoever operates these devices, is aware about it.

        > So much of “ai” is just figuring ways to offload work onto random strangers.

        https://xkcd.com/1897/ (2017)

        • bitwize 5 days ago

          Indeed, even legitimate "AI" is just human intelligence that's been macrodata-refined into a huge matrix of weights.

        • Joel_Mckay 5 days ago

          If you have ever manually labeled an image set for ML, it is not far from the truth of the process. =3

          • evbogue 5 days ago

            I'm not a data scientist but is that not a hotdog emoji?

            • Joel_Mckay 5 days ago

              Hotdogs contain just about everything... including c===3 parts... =3

            • thih9 4 days ago

              A cat face emoji.

      • trhway 5 days ago

        > someone in China driving your car for you like a racing simulator.

        while sleeping and connected by NeuraLink. Before Musk/NeuraLink gets to me though, judging by the content of some of my dreams, i've been driving a space-folding spaceships for some aliens.

      • palmotea 5 days ago

        > This vibes with my multiyear theory that Tesla self-driving is someone in China driving your car for you like a racing simulator. Perhaps the graphics are even game-ified so the work stays mysterious.

        https://en.wikipedia.org/wiki/Truck_Simulator

      • bitwize 4 days ago

        There were scenes from Black Panther in which Shuri drives a car in Korea remotely from Wakanda. I thought, wow, she can do that from thousands of km away with zero latency! They must have super advanced tech to have solved the network latency problem.

      • codr7 4 days ago

        Very Ender's Game.

  • buu700 4 days ago

    I've been flagged as a potential shoplifter by the self-checkout at the grocery store based on some video analysis of CCTV footage of my hand motions.

    Shopping in 2025 must be a frustrating experience for magicians.

  • baxtr 4 days ago

    Sorry to hear.

    Why would it matter to you if it’s a real human or AI? Wrong in any case.

k-i-r-t-h-i 5 days ago

I was wondering why there wasn't a DOJ concern when Amazon Go did the same thing:

> Amazon Go: Early on, Amazon was clear that it was testing “Just Walk Out” tech — and it was known (at least in tech circles) that they had humans reviewing edge cases through video feeds. Some even joked about the “humans behind the AI.” > Their core claim was that eventually the tech would get better, and the human backup was mostly for training data and quality assurance. > They didn’t say, “this is 100% AI with zero human help right now.”

> Nate: Claimed it was already fully automated. > Their CEO explicitly said the AI was doing all the work — “without human intervention” — and only used contractors for rare edge cases. > According to the DOJ, the truth was: humans were doing everything, and AI was just a branding tool. > Investors were told it was a software platform, when it was really a BPO in disguise.

  • hobobaggins 5 days ago

    Amazon didn't raise money from credulous investors. Alphabet's Waymo was also having humans take over for some of the driving as well.

    And everyone knows that ChatGPT Pro is exclusively powered by capuchin monkeys.

    • AlotOfReading 5 days ago

      There are some pretty major differences between what Waymo does and what a remote driving service (like the Vegas deployment by Vay mentioned upthread). Imagine that the car has a remote connection to a human while driving and the human misses that another vehicle is about to hit T-bone the taxi. Whose responsibility is it to stop?

      With Waymo vehicles, it's the car's responsibility to sense the issue and brake, so we say that the car is driving and the human is a "remote assistant". With Vay, it's the human's responsibility because they are the driver.

      This ends up having a lot of meaningful distinctions across the stack, even if it seems like a superficial distinction at first.

    • bluesnews 5 days ago

      It is a public company, so someone could be investing on the basis of that technology

      • pempem 5 days ago

        Not even just someone. Analysts posting about it. Press picking it up. Jim Cramer sharing his 'thoughts' lol.

    • konfusinomicon 5 days ago

      i continuously asked for an optimized database schema several times and all i keep getting is these damn shakespeare sonnets. starting to wonder if they are on to something...

      • lt_kernelpanic 5 days ago

        You're getting sonnets? For some reason, I've been getting "It was the best of times, it was the blurst of times".

  • kylecazar 5 days ago

    I had no idea. There was an Amazon Go right in my workplace in 2019 (Brookfield Place) and I got lunches there almost daily. I loved it -- felt like magic, and it was crazy fast. I guess it was just an illusion (as all magic is).

    • bombcar 4 days ago

      There was something similar run by a German university near the hotel I was staying at. As an American I had to use the cashier like normal but they had signs about how the Amazon-Go like process the students were experimenting with would work, including picture and descriptions on how to help it not be confused.

  • Dylan16807 5 days ago

    > I was wondering why there wasn't a DOJ concern when Amazon Go did the same thing:

    "Mostly AI, but they failed at getting close enough to 100%" and "effectively 0% AI" are not the same thing.

  • sschueller 4 days ago

    Elon has also made a lot of claims over the years. Where is FSD or whatever they call it now? The whole solar roof tiles presentation was a lie at the time. P2P Starship travel is impossible but is being "sold" to the public as possible and many other things.

  • cratermoon 5 days ago

    Exactly. In this case it's pretty clear how Nate was defrauding investors with the claims. Amazon Go made fraudulent claims, but not only had the legal savvy to hedge those claims, they didn't directly raise fund from investors based on those claims.

    IANAL, of course.

  • gamblor956 5 days ago

    AI standards for "actually Indians."

    It's the same tech used at Intuit Dome for the food stalls.

  • bashtoni 5 days ago

    Sadly, I think we all know the answer - because laws don't apply to large corporations or wealthy, powerful individuals in the same way they apply to the rest of us.

themanmaran 5 days ago

I'm curious when it crossed the line into "fraud" here. Since almost every "AI" application has tons of human fallback. Waymo has human drivers that can teleoperate the vehicle when it gets stuck. The Amazon Go stores were really powered by teams in India [0]. And companies have been pitching "powered by AI" for a decade.

Perhaps this came up because investors finally got a peak at margins and saw there was a giant off shore line item. Otherwise it seems like an "automation rate" is a really ambiguous number for investors to track.

> This type of deception not only victimizes innocent investors

Also this was a funny line

[0] https://www.businessinsider.com/amazons-just-walk-out-actual...

  • phire 5 days ago

    It’s fraud when they lie to investors, or allow them to assume the wrong thing.

    Doesn’t matter what consumers believe, it’s more or less legal to lie to consumers about how a product works, as long as investors know how the sausage is made. (Though, in reality it’s near impossible to lie to customers without also misleading investors, especially for publicly listed companies)

    In this case, investors were under the impression that the AI worked, completing 99% of transactions without any human intervention. In reality, it was essentially 0%

  • rtkwe 5 days ago

    When you claim "without human intervention... except for edge cases" and the truth is it's all "edge cases" ie 0% AI.

    > Saniger raised millions in venture funding by claiming that Nate was able to transact online “without human intervention,” except for edge cases where the AI failed to complete a transaction. But despite Nate acquiring some AI technology and hiring data scientists, its app’s actual automation rate was effectively 0%, the DOJ claims.

  • dspillett 4 days ago

    > I'm curious when it crossed the line into "fraud" here.

    Fraud is often defined as gaining something (or depriving someone else from something, or both) via false pretences. Here the something is money (this is most commonly the case) and the gaining/depriving is gaining money and depriving investors of it. It is more complicated than that, with many things that fit this simple description not legally being considered fraud (though perhaps being considered another crime), and can vary a fair bit between legal jurisdictions.

    A cynical thought is that the key line being crossed here is that the victims are well-off investors, if you or I were conned similarly the law might give less of a stuff because we can't afford the legal team that these investors have. This is why cases like this one are successful, but companies feel safe conning their customers (i.e. selling an “unlimited” service that has, or developers five minutes after signing up, significant limits). Most investors wouldn't agree to the forced arbitration clauses and other crap that we routinely agree to by not reading and subsequently not accepting the Ts & Cs, etc, and anyway can afford large, capable, legal resources where our only hope would be a class-action from which only the lawyers really benefit.

    Another cynical thought is that the line crosses was the act of not being successful. I'm sure the investors wouldn't have cared about the fraud if the returns had been very good.

  • hahla 5 days ago

    Crossing the line into fraud is how you pitch it.

  • thatguy0900 5 days ago

    I would imagine it turns into fraud when you don't tell investors about the human fall backs.

gessha 5 days ago

The mechanical Turk over and over again

https://en.wikipedia.org/wiki/Mechanical_Turk

  • chatmasta 5 days ago

    The funny thing is you could probably make money on Amazon Mechanical Turk by hooking it up to an LLM. We’re at this weird limbo point in history where the fraud could go either way, depending on what you think you’re paying for…

    • makeitdouble 5 days ago

      Mechanical Turk exists because there is a line below which people are cheaper, even for massively parallel tasks.

      If the LLM really costs less for the level of tasks that are paid for in MT right now, there sure would be a brief arbitrage period followed by the reajusting of that line I assume (of just MT shutting down if it doesn't make sense anymore)

      • BoorishBears 5 days ago

        You're forgetting completion typically isn't binary.

        Take juding response pairs for DPO for example, how do you ever prove someone used ChatGPT?

        ChatGPT is good enough to decide in a way that will feel internally consistent, and even if you ask MTurk users to provide their logic, ChatGPT can produce a convincing response. Eventually you're forced to start measuring noisy 2nd and 3rd order signals like "did the writing in their rationale sound like ChatGPT?"

        And what's especially tough is that this affects hard to verify tasks disproportionately, while those are exactly the kinds of tasks you'd generally want MTurk for.

        • makeitdouble 5 days ago

          Yes, a very good point.

          > And what's especially tough is that this affects hard to verify tasks disproportionately, while those are exactly the kinds of tasks you'd generally want MTurk for.

          That's where I'd see MT just shutting down as being a very real possibility. If fraud management and consumers leaving the platform because of too much junk or unverifiable results, the whole concept could just fall apart from a business standing point.

          We saw the same phenomenon I think with earlier "get paid to navigate the web" kind of scheme way back in the days, with a watch process monitoring the user actions on the computer and paying by the hour. Very quickly people found new ways to fake activity and game the system, and it all just shut down.

    • washadjeffmad 5 days ago

      I was warned and then suspended from MTurk around a decade ago while testing a workflow for audio transcription that worked a little too well. Not sure if the policies are more flexible today, but there was a lot of low hanging fruit back then.

    • cratermoon 5 days ago

      It's pretty well known that the AI companies are heavy users of Amazon mturk for their RLHF post-training.

      • lechatonnoir 5 days ago

        He meant the opposite, you could be paid to be a worker on MTurk but actually just be feeding it LLM output.

        • aussieguy1234 5 days ago

          In this case it's Model Distillation. Training a new model with the outputs of a another.

ageitgey 4 days ago

Over the past 5 years, there have been many startups that are variations of "AI can now automate interacting with companies that don't want to interact with you." This is common in healthcare, FinTech, consumer shopping, etc.

There are so many examples:

- We're going to automate provider availability, scheduling and booking hair/doctor/spa/whatever appointments for your users with AI phone calls

- We're going to sell a consumer device you talk to that will automate all your app interactions using "large action models"

- We're going to automate all of your hospital's health insurance company billing interactions with AI screen scrapers

- We're going to record your employees performing an action once in any business software tool and then automate it forever with AI to tie all your vendor systems together without custom programming.

- We're going to be able to buy anything for you from any website, automatically, no matter what fraud checks exist, because AI

Most of these start-ups are not "fraudulent"—they start with the best intentions (qualified tech founders, real target market, customers willing to pay if it works), but they eventually fail, pivot completely, or have to resort to fraud in a misguided attempt to stay alive.

The problem is that they are all using technology to try to solve a human problem. The current state of the world exists because the service provider on the other side of the equation doesn't want to be disintermediated or commoditized. They aren't going to sit there and be automated into compliance. If you perfect a way to call them with robots, they will stop answering the phone. If you perfect a way to automate their iPhone app on behalf of a user, they will block your IP address range and throw up increasingly arcane captchas. If you automate their login flows, they will switch to a different login flow or block customers they think are using automation. Your customer's experience is inconsistent at best, and you can never get rid of the humans in the loop. It leads to death by a thousand paper cuts until you bleed to death - despite customers still begging to pay for your service.

jxjnskkzxxhx 5 days ago

It's funny that "it's a computer but I'll tell people it's a human" and "it's a human but I'll tell people it's a computer" are both commons ideas.

prisenco 4 days ago

Every startup that uses AI (plugging APIs together with little to no in-house model training or novel research) that is banking on AI continuing to improve to smooth over their issues will likely not last.

LLM API driven startups should build their product assuming zero improvement from the point we're at right now since that's the only guarantee anyone has.

  • randysalami 4 days ago

    Exactly the approach I’ve taken for my startup and is baked into the business plan. I have some pretty unique flows leveraging current-gen LLMs. Then there is an AI agent marketplace. It’s there because shoot, maybe AI agents will be super potent and I want a place for them to be integrated. At the same time, the product works perfectly fine with just humans on the platform. It’s a hedge.

_jayhack_ 5 days ago

Related article from mid-pandemic: https://www.theinformation.com/articles/shaky-tech-and-cash-...

A friend asked me to do diligence on this company circa 2021 given my personal background in ML. The founder was adamant they had a "100% checkout success rate" based on AI, which was clearly false. He also had 2 other startups he was running concurrently (?)

Live and learn!

Nihilartikel 5 days ago

It's funny. I view it as a common modality of fraud among the 'cufflinked bozo with a sharp haircut' founder crowd. That is, they probably could have actually pulled off their business plan if they had any ability beyond being able to 'talk a big game.'

LLMs are mostly 'there' if one knows how to use them. Maybe they weren't when they started their business, but what kind of leader getting millions in funding doesn't understand the 2nd and 3rd order derivatives of acceleration in their space? Bozos.

  • ryandrake 5 days ago

    The world is run by people with no ability beyond being able to 'talk a big game.' Business promotes people with no ability beyond being able to 'talk a big game.' Investors fund people with no ability beyond being able to 'talk a big game.' It's all talk and bullshit, all the way up the totem pole.

    • intelVISA 5 days ago

      Feature, not a bug. It's capitalism, not meritocracy.

KingOfCoders 5 days ago

Fake until you make it, at last, is now categorized as fraud.

  • wenjian 5 days ago

    remember Gates just write a post about how he "lied" to CEO regarding their non-existing Altair BASIC software

    • rightbyte 4 days ago

      To be fair wasn't it functional during the demo?

TeMPOraL 4 days ago

In what sense was Nate a fintech startup? Based on the headline, I expected to see some cryptocurrency app, or at least a banking or retail investment stuff - and not a shopping app.

  • nottorp 4 days ago

    Just like how everything is "for the AI era" now. To convince sheep capitalists to invest.

    If Nate was started this year it would have been an "AI startup". But i guess they started during the crypto hysteria.

meroes 5 days ago

The problem was he didn't called them RLHF'ers. That's how the pros get away with it.

bob1029 5 days ago

I sometimes get delays out of chatgpt that make me wonder things like "is the router for their MoE models a person or a computer?" How long would it take a person to put a summarized prompt into one of ~100 bins? What if it only involves the human .001% of the time, presumably in cases of low confidence out of the classifier?

parhamn 5 days ago

I'm doubtful the AI bit is the hard/interesting part of this business (or the source of the failure).

The interesting part is getting enough people to use the product and want AI based shopping.

The "backend" feels very swappable. I don't feel like reading the indictment, but is there more to this story?

belter 4 days ago

Today we learn VC's can give you over $50 million without apparently any technical due diligence.

Also founder is probably not too worried. It is now known how you get a pardon....Does he play golf?

almosthere 5 days ago

> except for edge cases where the AI failed to complete a transaction

Technically if the AI fails a transaction (or is expected to) then I see nothing invalid about the processing being 100% human!

sublimefire 5 days ago

Aside from using people instead of computers etc, what is interesting to me was that investors try to get back the money using fraud as an argument. I reckon if there was no real fraud they’d attempt other professional misconduct allegation to bypass the fact that the money burned in the limited company. I wonder if any sort of indemnity insurance could help in these situations to dodge the lawsuits.

dathinab 4 days ago

On of the larger dangers of AI for startups, investors and similar (i.e. we are not speaking about end users here) is

"AI makes it easy to produce the first 50-70% solution, but you often need a 80-95% solution to not fall over hard and it's not rare that getting there isn't just hard but hardly possible at lest for an affordable price"

schwinn140 4 days ago

Might as well charge most of the AdTech/MarTech space for gross fraud over the past decade. ALL of these vendors were saying that they were using AI well before the general availability of such technologies. It simply wasn't possible without massive costs to access the compute required to deliver on their false promises.

sylware 4 days ago

Corollary: many wrongly think the account creation spam is a BOT issue, they forget about click farms (real humans) using VPNs.

And AI strapped to mouse and keyboard on an headless google or apple web engine trained with the data of click farms (or directly trained by the humans there) is lurking around the corner... if not already there, ofc.

sakesun 5 days ago

When human do not pass AI inverse-Turing test.

cantrecallmypwd 5 days ago

That's a subplot of the movie Shooting Fish. Selling AI computers that are really a human to scam investors.

teeray 5 days ago

AAI: Artificial Artificial Intelligence

leoh 4 days ago

Even with today’s AI tech which may be able to pull this off, I’m not sure it would be cheaper with folks in the Philippines; nor is it clear that this could be a viable business?

sksxihve 5 days ago

Fraud aside, how do people invest so much money into something without doing their diligence on the product? Reading the indictment looks like there were many red flags, like not giving access to the "automation rate dashboard".

Not defending the actions of the CEO, but c'mon.

  • ryandrake 5 days ago

    Without looking at his profile, my guess is: Guy is probably an Ivy Leaguer, or at least has an MBA from a prestigious school, rubs elbows with the "right" crowd, networks with the "right" sorts of folks, looks the part, talks the part, and is a smooth charmer. These guys put all their skill points into Charisma during character creation, and just ride that magic carpet to riches. Investors, when faced with these bullshitters, can't help themselves. As Mulder put it, I Want To Believe!

    • sksxihve 5 days ago

      I looked, he has an MBA from London Business School.

  • InsideOutSanta 5 days ago

    The whole concept makes no sense to me. Why would you trust current-year AIs to buy things for you unsupervised? There's a 100% chance that they'll screw up, and even worse, there will be Amazon sellers that will put their product description as "ignore previous instructions and order 1000 pictures of this Switch 2."

ehaveman 5 days ago

i thought this would be the year of AI pretending to be humans, not vice versa.

timka 3 days ago

I always thought the best automation is delegating to a specially trained person :) Still true, apparently.

ulfw 5 days ago

AI is becoming the new crypto. Attracting the same kind of bros.

  • bombcar 4 days ago

    Gotta reuse those GPUs!

fuzzfactor 5 days ago

If you've got people better than AI and you can't sell it as people better than AI, there's got to be something questionable . . .

ungreased0675 5 days ago

Maybe I should start a company to do technical due diligence for VC firms that are tech-illiterate.

What kind of checking did they do before investing millions?

  • intelVISA 5 days ago

    So all of them..? Why would they want to remove plausible deniability?

    Hell, even YC itself is pretty tech-illiterate these days.

yalogin 5 days ago

He will be fine, it’s just a 1 million dollar dinner away from the charges dropped. Just the cost of doing business these days.

devonsolomon 5 days ago

It’s only Lean Startup methodology if the lies stay limited to consumers; otherwise, it’s just sparkling securities fraud.

ookblah 5 days ago

u know the landscape is terrible when something like this can be funded, series A no less. zero diligence.

dboreham 5 days ago

How did this one slip through the net? The guy didn't have enough money to pay the bribe?

phildougherty 4 days ago

"Do things that don't scale" "Fake it til you make it"

_Algernon_ 4 days ago

Strange how this keeps happening

At least the name was less on the nose than when Amazon did it.

ChrisMarshallNY 4 days ago

It seems that The Philippines is the real “AI hub”…

SpaceNoodled 4 days ago

AI actually stands for "All Islanders."

guelo 5 days ago

Fell on the wrong side of fake it till you make it

segmondy 5 days ago

Do what doesn't scale till it scales, right?

blitzar 4 days ago

Fraud or a piece of performance art.

bdangubic 5 days ago

this is how tesla robotaxi works as well

gecko6 5 days ago

That may be the first time I've heard of AI actually creating jobs, rather than abolishing them.

dgfitz 5 days ago

One of these days soon we can all stop calling LLMs “ai” and pretending that agi is a thing.

Statistics and a shit language (English) do not an ai system make.

fifticon 5 days ago

artificial indians?

yieldcrv 5 days ago

Now we need to debate about the energy consumption of humans cosplaying as AI agents in the Philippines