Chebyshev's inequality versus Confidence Intervals

Can someone help me understand the relationship here? Chebyshev's inequality states that, for any distribution, the proportion of observations lying within k standard deviations of the mean is at least 1 - 1/k^2. Compare that with confidence intervals for normal distributions, where for any normal distribution about 68% of outcomes lie within 1 standard deviation and about 95% within 2 standard deviations. If I plug 1 standard deviation into Chebyshev's formula I get 0, and if I plug in 2 I get 75%. Can someone provide some insight into this?
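For what it's worth, here is a minimal sketch (Python is my choice, not something from the thread; the standard-library math.erf gives the exact normal coverage) putting the two statements side by side for k = 1, 2, 3:

```python
import math

# Chebyshev's distribution-free lower bound vs. exact coverage for a normal distribution.
for k in (1, 2, 3):
    chebyshev = 1 - 1 / k**2             # holds for ANY distribution with finite variance
    normal = math.erf(k / math.sqrt(2))  # P(|X - mu| <= k*sigma) when X is normal
    print(f"k = {k}:  Chebyshev guarantees >= {chebyshev:.3f},  normal gives {normal:.3f}")

# k = 1:  Chebyshev guarantees >= 0.000,  normal gives 0.683
# k = 2:  Chebyshev guarantees >= 0.750,  normal gives 0.954
# k = 3:  Chebyshev guarantees >= 0.889,  normal gives 0.997
```

The normal-distribution percentages are always at least as large as Chebyshev's floor, so the two statements never contradict each other.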

Chebyshev's gives a weaker conclusion, but that's the price you pay for a bound that works for any distribution (e.g., a barbell-shaped one with all its mass at the extremes).
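To make that "price" concrete, here is one distribution I made up (not from the thread) for which the 75% at k = 2 is attained exactly, which is why no distribution-free bound can promise more than Chebyshev does:

```python
# Three-point distribution: P(X = -4) = 1/8, P(X = 0) = 3/4, P(X = +4) = 1/8.
values, probs = [-4.0, 0.0, 4.0], [1 / 8, 3 / 4, 1 / 8]

mean = sum(p * v for p, v in zip(probs, values))                     # 0.0
sd = sum(p * (v - mean) ** 2 for p, v in zip(probs, values)) ** 0.5  # 2.0

k = 2
within = sum(p for p, v in zip(probs, values) if abs(v - mean) < k * sd)
print(within, 1 - 1 / k**2)  # 0.75 0.75 -- exactly Chebyshev's bound, far below the normal's 95%
```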

Chebyshev won't tell you anything for k = 1; k has to be greater than 1 for the bound 1 - 1/k^2 to be positive and therefore meaningful.

You forget that:
- Chebyshev's inequality works BY DEFINITION only for k > 1, while
- the proportion of observations within k standard deviations of the mean is AT LEAST EQUAL to 1 - 1/k^2 (a lower bound, not an exact value).

Chebyshev's inequality returns the MINIMUM % of observations that will fall within k standard deviations of the mean.

Chebyshev works just fine for k = 1; the bound is 0, which just means it is possible that no observations lie strictly within 1 sd of the mean.
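The extreme case is easy to write down. A sketch (my example, a fair coin paying out -1 or +1) where nothing falls strictly within 1 sd of the mean, so Chebyshev's k = 1 bound of 0 cannot be improved:

```python
# Fair coin paying -1 or +1: mean = 0, variance = 1, so sd = 1,
# yet no outcome lies strictly within 1 sd of the mean.
values, probs = [-1.0, 1.0], [0.5, 0.5]

mean = sum(p * v for p, v in zip(probs, values))                     # 0.0
sd = sum(p * (v - mean) ** 2 for p, v in zip(probs, values)) ** 0.5  # 1.0

within_1sd = sum(p for p, v in zip(probs, values) if abs(v - mean) < sd)
print(mean, sd, within_1sd)  # 0.0 1.0 0 -- matches Chebyshev's guarantee of 1 - 1/1**2 = 0
```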