How I Made A Ghost Hunting App In Just One Day
I thought I’d have a go at making a ghost hunting app – how hard can it be?
There’s a big disconnect between what a lot of ghost hunting apps claim and what they actually do. For instance, you’ll often see claims like “this app uses the phone’s in-built sensors to allow spirits to communicate words.” While it’s true that an app can use the phone’s array of sensors, and the app might indeed spew out words, there’s almost always a lack of detail about what actually happens between the sensor input and the word output.
It would have been easy to develop an app that randomly generated words, leaving users to link those words to their current situation or environment. However, I wanted to create something with at least a theory behind it. That’s why I decided to base my app on a Random Number Generator (RNG), a tool often used in parapsychology to test psychokinesis, the ability to influence physical objects with the mind.
Research institutes, like Princeton Engineering Anomalies Research (PEAR), have tested whether individuals can use their minds to influence random processes, such as the output of RNGs or the results of dice rolls and coin flips. Some small anomalies have been reported in these experiments, though the results are widely disputed and often chalked up to statistical errors or flaws in the experiment’s design. Some also believe that supernatural entities might be able to affect random selection in the same way.
Rather than simply picking a single random number between 1 and 100, which would give a different and meaningless result every time, I decided to use random numbers in bulk across multiple iterations. So, I programmed the app to generate 1,000 random numbers between 1 and 100. The app includes a speed setting that changes the rate at which these 1,000 numbers are generated, the idea being that varying the time might give any supernatural forces more opportunity to influence the outcome.
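The generation loop can be sketched in a few lines of Python. This is my own illustration of the idea, not the app’s actual code, and the function name and parameters are hypothetical:

```python
import random
import time

def generate_numbers(iterations=1000, delay=0.0):
    """Generate `iterations` random integers between 1 and 100,
    pausing `delay` seconds between draws (the speed setting)."""
    numbers = []
    for _ in range(iterations):
        numbers.append(random.randint(1, 100))  # inclusive on both ends
        if delay:
            time.sleep(delay)
    return numbers
```

A slower speed setting simply means a larger `delay`, stretching the session out over more wall-clock time without changing how the numbers are drawn.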
Once the app generates the series of random numbers, it analyses the average of all the numbers. Normally, this average should be close to a predictable value known as the Mean Chance Expectation (MCE). Since the app is generating random integers between 1 and 100, the MCE, the average of all possible outcomes, works out to (1 + 100) / 2 = 50.5, or roughly 50. To determine whether anything has interacted with this random generation process, the app then looks for any significant deviation from the expected average.
If the average is significantly lower than 50, this is considered a “negative” result, and the app displays a red light. Conversely, if the average is significantly higher than 50, it’s considered a “positive” result, and the app displays a green light.
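The red/green decision described above amounts to a simple comparison of the sample average against the MCE. A minimal Python sketch, with hypothetical names, might look like this:

```python
def classify(numbers, mce=50.0, threshold=3.0):
    """Return 'green' (positive), 'red' (negative), or 'none'
    based on how far the sample average deviates from the MCE."""
    avg = sum(numbers) / len(numbers)
    if avg > mce + threshold:
        return "green"   # significantly above MCE: positive result
    if avg < mce - threshold:
        return "red"     # significantly below MCE: negative result
    return "none"        # within normal random variation
```

The `threshold` parameter here corresponds to the third slider discussed below: it controls how far from the MCE the average must stray before a light is shown.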
I’ve deliberately left the interpretation of these results open. A negative result could be seen as indicating negative energy or a malevolent entity, while a positive result could indicate positive energy or a benign spirit. You might decide that green represents a positive response and red a negative one, or you could assign different meanings based on the specific questions you’re asking. This method allows for flexible communication depending on the context of your investigation.
I also included an option to switch the app to “yes/no mode,” so you can use it for direct yes/no questions, such as “Are you here with us?” The app offers a few tweakable settings controlled by three sliders. First, as mentioned, there’s the option to change the speed at which the app generates the random numbers. Second, there’s a slider to control the number of random numbers generated during each session, with 1,000 as the default. A higher number of iterations provides a larger data set to analyse, so the average settles closer to the MCE and chance alone is less likely to produce a false trigger. However, it might take longer to complete the random generation process.
The third setting is the threshold, which determines how far the result must deviate from the MCE to be considered significant. I was surprised to find that over a large number of iterations, like 1,000, the result is usually very close to the MCE, so the threshold for a significant change doesn’t need to be large. By default, this is set to 3, but you can adjust it to any value between 0.1 and 10.
I’m not sure what the exact threshold should be to indicate a supernatural reaction, but if I were using this app in a robust and sceptical way, I’d set the iterations slider to 5,000 to get the largest data set possible, make the threshold 10 to ensure only the most significant deviations trigger a result, and set the speed to a rate that allows the cycle to complete in about 10-20 seconds. This way, if anything supernatural is focusing on the app, it has a reasonable amount of time to influence the randomness, but not so long that it might stop partway through or lose interest, or only manage to influence some of the iterations.
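To get a feel for how rarely chance alone crosses the default threshold, here is a small Python simulation (my own sketch, not part of the app) that repeatedly runs a 1,000-iteration session and counts how often the average strays more than 3 from 50:

```python
import random

random.seed(7)  # fixed seed so the results are repeatable

def false_trigger_rate(trials=2000, iterations=1000, mce=50.0, threshold=3.0):
    """Estimate how often pure chance pushes the session average
    past the threshold (a 'false trigger')."""
    triggers = 0
    for _ in range(trials):
        avg = sum(random.randint(1, 100) for _ in range(iterations)) / iterations
        if abs(avg - mce) > threshold:
            triggers += 1
    return triggers / trials

print(false_trigger_rate())  # typically well under 1%
```

Raising the iterations slider or the threshold slider drives this rate down further, which is why the sceptical settings described above make accidental triggers very unlikely.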
The big takeaway for me with all of this is how close the final average always is to the MCE. This seemed really odd to me at first: after all, we’re generating thousands of numbers, so surely the numbers could be anything and the average could be all over the place. It turns out, the more iterations the app runs, the closer it gets to the MCE of 50.
At first, this is surprising because it seems like randomness should produce a more varied result. However, true randomness over a large enough sample size is actually quite predictable in terms of averages. The closer you get to infinity in terms of sample size, the closer your average will get to the MCE.
This unexpected outcome is rooted in the law of large numbers, a fundamental concept in probability theory that states that as you increase the number of trials or iterations in a random process, the average of the results will converge towards the expected value.
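The law of large numbers is easy to demonstrate in a few lines of Python. Note that the exact expected value of a uniform integer draw from 1 to 100 is (1 + 100) / 2 = 50.5, which rounds to the MCE of 50 used here:

```python
import random

random.seed(1)  # fixed seed so the demo is repeatable

# As the sample size grows, the running average converges
# towards the expected value of 50.5.
for n in (10, 100, 1_000, 10_000, 100_000):
    avg = sum(random.randint(1, 100) for _ in range(n)) / n
    print(f"{n:>7} draws: average = {avg:.2f}")
```

Running this, the small samples bounce around noticeably while the large ones land within a fraction of a point of the expected value, which is exactly the convergence the law describes.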
The app is generating random numbers uniformly between 1 and 100. The MCE for this range is the average of all possible outcomes, which is roughly 50. Because it’s also generating a lot of these numbers, 1,000 by default or more depending on the user’s settings, the average of these numbers will tend to converge towards the MCE. Small fluctuations or deviations in individual numbers are “averaged out” over the large sample size, resulting in an overall average close to the expected value.
This behaviour is why deviations from the MCE in my app are treated as significant, because it’s statistically unlikely for a large random sample to deviate significantly from the MCE. When it does, it could be interpreted as a sign of an external influence.
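How unlikely is a given deviation? Assuming the app draws uniform integers from 1 to 100, the standard error of the mean can be computed directly. This sketch (my own, with a hypothetical function name) shows that at 1,000 iterations the default threshold of 3 already sits more than three standard errors from the MCE:

```python
import math

def standard_error(n, low=1, high=100):
    """Standard error of the mean of n uniform integer draws."""
    k = high - low + 1
    variance = (k * k - 1) / 12.0  # variance of a discrete uniform distribution
    return math.sqrt(variance / n)

se = standard_error(1000)
print(se)      # roughly 0.91
print(3 / se)  # a threshold of 3 is about 3.3 standard errors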
I’ve named the app ParaNomaly, combining “para” from “paranormal” and “nomaly” from “anomaly.” Beyond the admittedly weak theory that ghosts can affect random numbers, I have no reason to believe this app should work. However, at least I can explain how it *might* work, which is more than can be said for many other ghost hunting apps out there. There is still some disconnect, though: it’s not really clear how a ghost could affect a random number being generated in software on a phone.