It auto-discovers semantic relationships between datasets (e.g., drugs ↔ genes ↔ categories) with 99.999% accuracy — without deep learning or lossy compression.
It supports CSV/TSV/Excel data, runs via Docker, and outputs semantic clusters, property stats, similarity scores, and visualizations.
No setup needed. Works on any data. Fully explainable.
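For readers wondering what a pairwise "similarity score" between dataset entities looks like in practice, here is a minimal sketch using plain cosine similarity over property vectors. This is an illustration only, not MatrixTransformer's actual method; the `drug`/`gene` vectors are hypothetical.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two property vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical property vectors for two related entities (e.g., a drug and a gene).
drug = np.array([0.9, 0.1, 0.4])
gene = np.array([0.8, 0.2, 0.5])

score = cosine_similarity(drug, gene)  # close to 1.0 for similar entities
```

A real pipeline would derive such vectors from the CSV/TSV/Excel columns themselves; the point here is only that the output scores are plain, inspectable numbers rather than opaque model activations.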
GitHub: https://github.com/fikayoAy/MatrixTransformer
Docker: fikayomiayodele/hyperdimensional-connection
Happy to answer any questions!
We've been exploring the inverse direction: letting semantic tension stretch and bend across interaction histories, then measuring the ΔS divergence between the projection layers.
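A minimal sketch of what measuring a "ΔS divergence between projection layers" could mean, assuming ΔS is a cosine-based divergence per turn (the post does not define ΔS, and `proj_a`/`proj_b` are stand-ins for the two projection layers, not WFGY's actual implementation):

```python
import numpy as np

def delta_s(u: np.ndarray, v: np.ndarray) -> float:
    """ΔS as cosine divergence: 0 when aligned, up to 2 when opposed (an assumption)."""
    return 1.0 - float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical embeddings of one interaction history: 8 turns, 16-dim each.
rng = np.random.default_rng(0)
history = rng.normal(size=(8, 16))

# Two stand-in projection layers mapping the history into a shared 4-dim space.
proj_a = history @ rng.normal(size=(16, 4))
proj_b = history @ rng.normal(size=(16, 4))

# Mean per-turn divergence between the two projection spaces.
divergence = float(np.mean([delta_s(a, b) for a, b in zip(proj_a, proj_b)]))
```

Under this reading, a rising `divergence` across turns would signal the projections drifting apart, which is one way "semantic tension" could be quantified.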
We benchmarked it as an external semantic resonance layer that wraps around Claude, GPT, etc., and it boosted multi-turn coherence by 42% in our evals.
Would love to see if your static-matrix pipeline could snap in upstream for symbolic grounding.
Drop by our playground if curious (PDF only, no setup): https://github.com/onestardao/WFGY