
# Lossless Algorithms

August 12th, 2018 | Posted by pftq in Ideas
The following is an algorithm I wrote for generating support/resist in trading with no thresholds or parameters. On each price move, the price range traveled loses a point in score. The resulting score of any price range is its support/resist strength, which declines the more the range is traveled across (zero being strongest and untraveled). Visually, the price line looks like an eraser scrubbing away ranges on the chart; the ranges least scrubbed thin out into support/resist lines.  The reason I call it a "lossless" algorithm is that it doesn't estimate anything or use any seeded values/thresholds.  It is analogous to lossless audio/image file formats.  There is no sampling or use of statistics.  There are no knobs to tweak and no assumptions to make.  It is a perfect 1-to-1 map of where price moves most and least freely.  It is also extremely light in both computation and space, because all you're doing is a single subtraction per datapoint, and at most one new range to keep score of is created per datapoint.
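To make the mechanics concrete, here is a minimal sketch of how this could be coded. This is my own illustration, not the Tech Trader implementation; the class and method names (`RangeScores`, `move`, `score`) are made up for the example. The only state is a sorted list of breakpoints and one score per range between adjacent breakpoints; each price move just decrements the ranges it crosses.

```python
from bisect import bisect_left

class RangeScores:
    """Tracks the score of every price range ever traveled.
    Scores start at 0 (strongest / untraveled) and only decrease:
    each traversal of a range costs it one point."""

    def __init__(self):
        self.edges = []   # sorted breakpoints between adjacent ranges
        self.scores = []  # scores[k] is the score of (edges[k], edges[k+1])

    def _split(self, p):
        """Ensure p is a breakpoint, splitting an existing range if needed."""
        i = bisect_left(self.edges, p)
        if i < len(self.edges) and self.edges[i] == p:
            return  # already a breakpoint
        if not self.edges:
            self.edges.append(p)
        elif i == 0:
            self.edges.insert(0, p)
            self.scores.insert(0, 0)      # new range below is untraveled
        elif i == len(self.edges):
            self.edges.append(p)
            self.scores.append(0)         # new range above is untraveled
        else:
            self.edges.insert(i, p)
            self.scores.insert(i, self.scores[i - 1])  # split keeps the score

    def move(self, a, b):
        """Record one price move: every range it crosses loses a point."""
        lo, hi = (a, b) if a < b else (b, a)
        if lo == hi:
            return
        self._split(lo)
        self._split(hi)
        i = bisect_left(self.edges, lo)
        j = bisect_left(self.edges, hi)
        for k in range(i, j):
            self.scores[k] -= 1           # the single subtraction per range

    def score(self, p):
        """Score of the range containing price p (0 if untraveled)."""
        i = bisect_left(self.edges, p)
        if 0 < i < len(self.edges):
            return self.scores[i - 1]
        return 0
```

Note there is nothing to configure: no lookback window, no granularity, no threshold. Breakpoints appear only where prices actually turned, so the resolution is exactly the resolution of the data itself.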

Those familiar with other algorithms I've written will note I habitually design my algorithms this way (to not rely on any seeded values, parameters, or numbers in general).  It's especially hard to explain to those who only think in math or stats-based formulas, as the algorithm is more like a set of instructions on how to draw or perform a task.  It's like pouring water over a surface to detect cracks rather than laboriously examining it; I wouldn't know where to start to describe that as a math equation, let alone prove it with empirical data (not that data actually proves anything to begin with, see Inductive vs Deductive Reasoning).  The trendlines algorithm I wrote years earlier for Tech Trader is designed similarly, as sticks falling toward the price line and "landing" the only physical way possible.  This is in contrast to most quants/data-scientists, who instead calculate backwards from some arbitrary lookback window or threshold to define things.  Not only is their approach computationally expensive (by constantly re-sampling the last x datapoints), but the result fundamentally can only ever be an estimate, one that changes drastically depending on the assumptions used.  Annoyingly, I've had some insist that all this still uses numbers somewhere, missing the point that nothing is actually pre-configured or adjustable.  People always ask me what numbers I use for the lookback window, granularity, thresholds, assumptions, etc., but for an algorithm like the one here, those things literally don't exist.  It's like asking for the number of pixels in a vector image; there's no such thing.  It might as well be infinite.

I don't go out of my way to avoid having parameters in my algorithms; it's just not how I think.  I try to imagine how I would do a task by hand, and that is what becomes the algorithm.  I never see myself crunching statistics or running some equation in my head, and I code my algorithms to go through the same logical processes.  I've always hated calculating numbers anyway, so I guess it makes sense I end up coding without them.

=======================

20210315 Added Note: Since a lot of people, even after reading this, get confused and still try to code this with set parameters, ranges, or other seeded information, here's an example walking through what the code would actually do, so you can check your work:

The important part to realize is there are no actual line objects coded - support and resistance are just interpretations by the person looking at the chart.  The code itself just shades a canvas white based on how often any particular range has been visited by the price, and the remaining least shaded areas are what we perceive as lines (even though they don't "exist" in the code).

If it crosses from \$9-10, that's a score of -1, and the shading of that range is white with opacity of 10%.
If it crosses back down from \$10 to \$9, the score is -2 and the opacity increases to 20%.
It crosses from \$9 to \$8.15 for the first time, and that new range is a score of -1, white opacity at 10%.
It crosses back from \$8.15 to \$10 and the \$8.15-9 range score goes to -2, \$9-10 score now goes to -3.
Any remaining untraveled areas have a score of 0, leaving the "support" and "resistance" areas dark.

The actual percentage of opacity/shading is arbitrary and purely aesthetic.  The important part is the score, which is unbounded and does not require any fitting or adjustment.
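The walkthrough above can be replayed with a literal score table. This sketch is mine, not the original code, and it pre-splits at the example's breakpoints of \$8.15, \$9, and \$10 purely for brevity; a real implementation would create breakpoints dynamically as new prices appear rather than seed them.

```python
# Score table for the example, keyed by (low, high) range endpoints.
# Pre-split at the walkthrough's breakpoints purely to keep the replay short.
scores = {(8.15, 9.0): 0, (9.0, 10.0): 0}

def cross(a, b):
    """One price move: every sub-range it fully covers loses a point."""
    lo, hi = min(a, b), max(a, b)
    for rng in scores:
        if lo <= rng[0] and rng[1] <= hi:
            scores[rng] -= 1

def opacity(rng):
    """Arbitrary aesthetic mapping: 10% white per point lost, capped at 100%."""
    return min(-scores[rng] * 0.10, 1.0)

cross(9, 10)      # $9-10 range drops to -1
cross(10, 9)      # $9-10 range drops to -2
cross(9, 8.15)    # $8.15-9 range drops to -1
cross(8.15, 10)   # $8.15-9 goes to -2, $9-10 goes to -3
```

After the four moves, the table matches the walkthrough: \$8.15-9 sits at -2 and \$9-10 at -3, with any price range not in the table implicitly still at 0.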

================