Standard Deviation (often abbreviated as "Std Dev" or "SD") provides an indication of how far the individual responses to a question vary or "deviate" from the mean.
SD tells the researcher how spread out the responses are: are they concentrated around the mean, or scattered far and wide?
Standard Deviation and Standard Error are perhaps the two least understood statistics commonly shown in data tables.
This article is intended to explain their meaning and to offer additional insight into how they are used in data analysis.
Did all of your respondents rate your product in the middle of your scale, or did some love it and some hate it?
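These two situations can be told apart by the Standard Deviation even when the means are identical. A minimal sketch, using Python's standard `statistics` module and hypothetical ratings invented for illustration:

```python
from statistics import mean, stdev

# Hypothetical 5-point ratings: both sets have the same mean (3),
# but very different spreads.
concentrated = [3, 3, 3, 2, 4, 3, 3, 3]  # responses clustered near the mean
polarized = [1, 5, 1, 5, 5, 1, 5, 1]     # some loved it, some hated it

print(mean(concentrated), round(stdev(concentrated), 2))  # low SD
print(mean(polarized), round(stdev(polarized), 2))        # high SD
```

The means are equal, yet the polarized set's SD is roughly four times larger, which is exactly the distinction the mean alone cannot show.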
Let's say you've asked respondents to rate your product on a series of attributes on a 5-point scale. Two attributes might earn the same mean rating yet have very different Standard Deviations. A higher SD for reliability, for example, could indicate (as shown in the distribution below) that responses were very polarized: most respondents had no reliability issues and rated the attribute a "5," but a smaller, yet important, segment of respondents had a reliability problem and rated the attribute a "1." Looking at the mean alone tells only part of the story, yet all too often this is what researchers focus on.

The Standard Deviation is not simply the average distance of each response from the mean. Instead, it is "standardized," a somewhat complex method of computing the value using the sum of the squares. For practical purposes, the computation is not important. Suppose the mean rating for an attribute is 3.2: a Standard Deviation of 1.15 shows that the individual responses, on average, were a little over 1 point away from the mean. Another way of looking at Standard Deviation is to plot the distribution as a histogram of responses.

The Standard Error (SE) indicates how close our sample mean is likely to be to the true mean of the overall population. The SE of 0.13, being relatively small, tells us that our mean is likely quite close to that true mean. The margin of error (at 95% confidence) for our mean is roughly twice that value (+/- 0.26), telling us that the true mean is most likely between 2.94 and 3.46.

Most tabulation programs, spreadsheets, or other data management tools will calculate the SD for you. More important is to understand what the statistics convey.
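The margin-of-error arithmetic can be sketched in Python. The mean of 3.2 is the midpoint of the quoted interval (2.94 to 3.46), and the sample size `n` below is a hypothetical value chosen only so that an SD of 1.15 yields an SE near 0.13; the source does not state the actual number of respondents.

```python
from math import sqrt

mean_rating = 3.2  # midpoint of the interval quoted in the text
sd = 1.15          # Standard Deviation from the example
n = 80             # hypothetical sample size (not given in the source)

se = sd / sqrt(n)  # Standard Error of the mean: SD / sqrt(n)
margin = 2 * se    # roughly two SEs gives the 95% margin of error
low, high = mean_rating - margin, mean_rating + margin

print(round(se, 2), round(low, 2), round(high, 2))
```

With these assumptions the script reproduces the article's figures: an SE near 0.13 and a likely range for the true mean of about 2.94 to 3.46.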