No algorithm really has an O(1/n) running time, but that notation might be used informally for an algorithm whose running time is really O(m/n), where m is large and not expected to vary. Here's an example of such an algorithm:

Suppose we have an unordered set (e.g. a hash table) containing n values from the integer range 0 ≤ x < m (for some m ∈ ℕ*). (Note that n ≤ m, as a set can't contain any duplicate values.)

We want to find the minimum value in the set as efficiently as possible.

If n is much smaller than m, then a linear scan through the values in the set is the most efficient solution (O(n)).

But if n is greater than √m, it will often be more efficient to iterate over the potential values starting with 0 and going up, testing each one to see if it's in the set. We can of course stop as soon as we find one, since it will be the smallest one.
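As a sketch (the function name and signature are my own, not from any library), the count-up strategy might look like:

```python
def min_by_counting_up(values: set[int], m: int) -> int:
    """Find min(values) for a set of integers in the range 0 <= x < m
    by testing candidates 0, 1, 2, ... for membership.

    Each membership test is O(1) expected for a hash-based set, and we
    stop at the first hit, which is necessarily the minimum.
    """
    for candidate in range(m):
        if candidate in values:
            return candidate
    raise ValueError("empty set")
```

When n is close to m, the loop terminates after only a few probes; when n is tiny relative to m, a plain linear scan (`min(values)`) wins.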

The expected number of tests (assuming the data is randomly distributed over the range) is m/n. If we consider m to be a constant (as in real-world applications it will often be determined by something we can't easily change, like the size of our integer data type), this seems like it would be O(1/n) (with a large constant factor of m). And that does give an accurate description of how the performance of the algorithm changes as n goes from √m towards m while m remains the same.

But the notation is not strictly correct. Asymptotic notation is intended to describe behavior as its variable terms go towards infinity. If an O(1/n) algorithm could run for n approaching infinity, its running time would tend towards zero, but the 1/n term would be dominated by constant terms (and so O(1) would be a better description of its performance). For an algorithm that only makes sense on a bounded n, it's not correct to use asymptotic notation. For a strict asymptotic analysis, you need to consider m a variable too so that both m and n can increase towards infinity.

I suspect that if you ever see O(1/n), it's being used as an informal shorthand for O(m/n).

Most of the rest of the answers interpret big-O to be exclusively about the running time of an algorithm. But since the question didn't mention it, I thought it's worth mentioning the other application of big-O in numerical analysis, which is about error.

Many algorithms can be O(h^p) or O(n^{-p}) depending on whether you're talking about the step size (h) or the number of divisions (n). For example, in [Euler's method](https://en.wikipedia.org/wiki/Euler_method), you look for an estimate of y(h) given that you know y(0) and dy/dx (the derivative of y). Your estimate of y(h) is more accurate the closer h is to 0. So in order to find y(x) for some arbitrary x, one takes the interval 0 to x, splits it into n pieces, and runs Euler's method at each point, to get from y(0) to y(x/n) to y(2x/n), and so on.

So Euler's method is then an O(h) or O(1/n) algorithm, where h is typically interpreted as a step size and n is interpreted as the number of times you divide an interval.
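A minimal sketch (the naming is my own, assuming y' = f(x, y) with y(0) known) makes the O(1/n) error behaviour observable: increasing n by a factor of ten shrinks the error by roughly the same factor.

```python
import math

def euler(f, y0: float, x_end: float, n: int) -> float:
    """Estimate y(x_end) for y' = f(x, y), y(0) = y0,
    using n Euler steps of size h = x_end / n."""
    h = x_end / n
    x, y = 0.0, y0
    for _ in range(n):
        y += h * f(x, y)
        x += h
    return y

# y' = y with y(0) = 1 has the exact solution y(x) = e^x, so y(1) = e.
err_coarse = abs(euler(lambda x, y: y, 1.0, 1.0, 10) - math.e)   # n = 10
err_fine = abs(euler(lambda x, y: y, 1.0, 1.0, 100) - math.e)    # n = 100
# err_fine comes out roughly a tenth of err_coarse, consistent with O(1/n).
```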

You can also have O(1/h) in real numerical analysis applications, because of [floating point rounding errors](https://en.wikipedia.org/wiki/Floating-point_arithmetic#Accuracy_problems). The smaller you make your interval, the more cancellation occurs for the implementation of certain algorithms, more loss of significant digits, and therefore more error, which gets propagated through the algorithm.

For Euler's method with floating point, if you use a small enough step, cancellation occurs: you're adding a small number to a big number, leaving the big number unchanged. For algorithms that calculate the derivative by subtracting two values of a function evaluated at two very close positions, approximating y'(x) with ((y(x+h) - y(x)) / h), in smooth functions y(x+h) gets close to y(x), resulting in large cancellation and an estimate of the derivative with fewer significant figures. This will in turn propagate to whatever algorithm you require the derivative for (e.g., a boundary value problem).
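The cancellation effect is easy to demonstrate with a forward difference (a sketch; the names are mine): past a point, shrinking h makes the derivative estimate worse, because the O(ε/h) rounding error comes to dominate the O(h) truncation error.

```python
import math

def forward_diff(f, x: float, h: float) -> float:
    """Approximate f'(x) with the forward difference (f(x + h) - f(x)) / h."""
    return (f(x + h) - f(x)) / h

# f(x) = sin(x), so f'(1) = cos(1) exactly.
exact = math.cos(1.0)
err_moderate = abs(forward_diff(math.sin, 1.0, 1e-8) - exact)   # near-optimal h
err_tiny = abs(forward_diff(math.sin, 1.0, 1e-15) - exact)      # cancellation dominates
# err_tiny is orders of magnitude larger than err_moderate: sin(1 + 1e-15)
# and sin(1) agree in almost all of their significant digits, so their
# difference retains very few of them.
```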
