7 comments

  • Zambyte 2 hours ago
    > The WiFi-DensePose project represents a framework/prototype rather than a functional WiFi-based pose detection system. While the architecture is excellent and deployment-ready, the core functionality requiring WiFi signal processing and pose estimation is largely unimplemented.

    > Current State: Sophisticated mock system with professional infrastructure
    > Required Work: Significant development to implement actual WiFi-based pose detection
    > Estimated Effort: Major development effort required for core functionality

    > The codebase provides an excellent foundation for building a WiFi-based pose detection system, but substantial additional work is needed to implement the core signal processing and machine learning components.

    https://github.com/ruvnet/wifi-densepose/tree/main/docs/revi...

    Over 1k stars. Has a single person tried running it? Even the author?

    • reed1234 2 hours ago
      From the author’s GitHub bio:

      ‘One of the most captivating aspects of AI models like GPT-4 is their ability to "hallucinate" – generating completely new ideas and concepts that go beyond mere data processing. This capability underscores AI's potential to create, not just analyze.’

      ‘My projects represent this space, a space of infinite possibilities only one step removed from reality.’

      Rather honest I suppose.

  • heavyset_go 2 hours ago
    The dense AI docs say a lot to convey little; both the user guide and the deployment guide do little to explain what's needed on the router side.

    For example, their diagram has several CSI sources. Does the user need 3 or more CSI sources?

    I'm capable of pointing an LLM at a GitHub repository, what I want is real documentation written by a human to address users' needs, not emoji-filled docs that read like ad copy.

    • amluto 2 hours ago
      It’s right there, clear as mud:

          from wifi_densepose import WiFiDensePose

          # Initialize with default configuration
          system = WiFiDensePose()
      
          # Start pose estimation
          system.start()
      
          # Get latest pose data
          poses = system.get_latest_poses()
          print(f"Detected {len(poses)} persons")
      
          # Stop the system
          system.stop()
      
      AI solves the Emperor’s Nose problem: you have no data whatsoever going in and you estimate the result!

      After a bit more browsing, I found:

          # Hardware Settings
          WIFI_INTERFACE=wlan0
          CSI_BUFFER_SIZE=1000
          HARDWARE_POLLING_INTERVAL=0.1
      
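      Purely as an illustration, settings like these suggest a polling loop feeding a bounded sample buffer. A hypothetical sketch (the names mirror the .env values above; `read_csi_sample` is a stand-in, since real CSI capture needs driver/firmware support the repo never implements):

```python
import collections
import time

# Hypothetical values mirroring the repo's example settings
WIFI_INTERFACE = "wlan0"
CSI_BUFFER_SIZE = 1000
HARDWARE_POLLING_INTERVAL = 0.1  # seconds

def read_csi_sample(interface):
    # Placeholder: actual CSI extraction requires specialized
    # driver/firmware support; here we just return a dummy sample.
    return {"interface": interface, "timestamp": time.time()}

def poll_csi(n_samples, interval=0.0):
    # Bounded ring buffer: once full, the oldest samples are dropped,
    # which is presumably what a CSI_BUFFER_SIZE setting would control.
    buffer = collections.deque(maxlen=CSI_BUFFER_SIZE)
    for _ in range(n_samples):
        buffer.append(read_csi_sample(WIFI_INTERFACE))
        time.sleep(interval)
    return buffer

samples = poll_csi(5)
print(len(samples))
```

      None of this addresses the hard part, of course: turning raw CSI into poses.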
      So maybe it uses one WiFi interface to collect CSI from multiple BSSIDs? Does 802.11 support this well? (I assume you can get one-way CSI data, single-in-multiple-out, from a beacon if you really want to. [1]) Does commodity hardware support this? Do the drivers support this?

      But I’d be rather impressed if that’s all that’s needed to get poses without any calibration for the actual positions of all involved devices, especially if the CSI available is all of this form. This whole repo smells a bit like it’s almost 100% vibes and no content.

      Wasn’t 802.11bf supposed to make real channel state information available for vendor-neutral use? What happened to it?

      [1] Yes, I know, reciprocity. One-way and two-way data ought to be the same. But those nice access points almost all have at least two transmit/receive chains these days, possibly more, and they support multiple frequencies, and unless you can convince them to cooperate with you by sending known test patterns that let you disambiguate between the two antennas or at least collect vector or matrix data in a consistent basis, you don’t get to take advantage of it, and as far as I know Wi-Fi beacons don’t do this. APs do try to get something like this data for downlink MU-MIMO purposes, and stations that are receiving data with a multiple stream code get vector data fairly directly, but I’m not sure any of this works without being associated. I do wonder whether appropriate hardware can passively listen to a transmission intended for someone else and decode enough of it to extract the full CSI matrix from the transmitter to yourself.

  • joshchaney 2 hours ago
    I've learned that if the project describes itself as "Production-ready", it was definitely vibe-coded.
    • cwmoore 2 hours ago
      Who let the arms races reign?
    • N_Lens 2 hours ago
      And the green checkmarks
  • archermarks 2 hours ago
    Putting "privacy first" as the first bullet point on something like this sure is rich.
    • N_Lens 2 hours ago
      (Violation of)
    • jd172 2 hours ago
      For real, this is straight up dystopian
  • Aurornis 2 hours ago
    After trying to click through some of the docs and realizing most of those sections don’t exist, I checked the commit log. I can confidently say there is a lot of AI slop in here. Anyone who has watched one of the AI coding tools add imaginary sounds-good features to a project and draw useless diagrams in README files will recognize it.

    So now the question is: Does this repo actually contain anything useful at all? Or is it just one big AI vibecoding project that amassed 1.3K stars based on sounding really amazing from the README? I’m leaning toward the latter.

    There are no usable instructions for actually trying this out, as far as I can see. It does claim to have a section for deploying and scaling with Kubernetes, which is hilarious for something that is supposedly working with WiFi routers.

    I’m continually amazed at how much leverage people are getting out of letting vibecoding tools run absolutely wild and then posting it to GitHub. I wouldn’t be surprised if the author was leveraging this in job interviews based on the almost certainly correct assumption that many interviewers will assume it’s real without checking anything. This kind of trick won’t work at a real company or with a serious hiring manager, but if you can impress a recruiter and get in front of a checked out hiring manager who just wants to build their empire this kind of thing can work. For a while.

    EDIT: This has 123 forks!? Now I’m going down the rabbit hole of exploring all of the other vibecoding and spam accounts that are forking this. This is a weird chapter in GitHub development.

  • ralsei 2 hours ago
    The Docker repository and PyPI packages in the README link to nowhere. There are only 3 issues. Is this legitimate?
    • heavyset_go 2 hours ago
      All signs point to someone letting an LLM run wild.
  • luketaylor 2 hours ago
    This whole repository is a bunch of vibe-coded boilerplate that includes almost none of the core functionality it claims to provide. The README is generic slop, and the “performance metrics” (“Pose Detection Accuracy”; “Person Tracking Accuracy”) appear to be completely invented / hallucinated. In other words, it isn’t real.