Statistical Methods


Statistical methods: Methods of collecting, summarizing, analyzing, and interpreting variable numerical data. Statistical methods can be contrasted with deterministic methods, which are appropriate where observations are exactly reproducible or are assumed to be so. While statistical methods are widely used in the life sciences, in economics, and in agricultural science, they also have an important role in the physical sciences in the study of measurement errors, of random phenomena such as radioactivity or meteorological events, and in obtaining approximate results where deterministic solutions are hard to apply.


Experimental and observational studies

A common goal for a statistical research project is to investigate causality, and in particular to draw a conclusion about the effect of changes in the values of predictors or independent variables on a response or dependent variable. There are two major types of causal statistical studies: experimental studies and observational studies. In both types of studies, the effect of differences of an independent variable (or variables) on the behavior of the dependent variable is observed. The difference between the two types lies in how the study is actually conducted. Each can be very effective.

An experimental study involves taking measurements of the system under study, manipulating the system, and then taking additional measurements using the same procedure to determine if the manipulation has modified the values of the measurements. In contrast, an observational study does not involve experimental manipulation. Instead, data are gathered and correlations between predictors and response are investigated.

An example of an experimental study is the famous Hawthorne study, which tested the effect of changes to the working environment at the Hawthorne plant of the Western Electric Company. The researchers were interested in determining whether increased illumination would increase the productivity of the assembly line workers. The researchers first measured the productivity in the plant, then modified the illumination in an area of the plant and checked whether the changes in illumination affected productivity. It turned out that productivity indeed improved (under the experimental conditions). (See Hawthorne effect.) However, the study is heavily criticized today for errors in experimental procedures, specifically for the lack of a control group and blindness.

An example of an observational study is a study which explores the correlation between smoking and lung cancer. This type of study typically uses a survey to collect observations about the area of interest and then performs statistical analysis. In this case, the researchers would collect observations of both smokers and non-smokers, perhaps through a case-control study, and then look for the number of cases of lung cancer in each group.
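
To make the observational example concrete: a case-control comparison like the one above is often analyzed with a chi-square test on the resulting contingency table. Below is a minimal sketch in Python using SciPy; the counts are invented purely for illustration.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 contingency table (all counts are invented).
# Rows: smokers, non-smokers; columns: lung cancer cases, controls.
table = [[80, 120],
         [20, 180]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p-value = {p:.3g}")
```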

The basic steps of an experiment are:
– Planning the research, including determining information sources, research subject selection, and ethical considerations for the proposed research and method.
– Design of experiments, concentrating on the system model and the interaction of independent and dependent variables.
– Summarizing a collection of observations to feature their commonality by suppressing details. (Descriptive statistics)
– Reaching consensus about what the observations tell us about the world being observed. (Statistical inference; see the sketch after this list.)
– Documenting and presenting the results of the study.
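
The descriptive and inferential steps can be made concrete with a small example. The sketch below, in Python with SciPy, summarizes invented before/after measurements (in the spirit of the illumination study above) and applies a paired t-test; all numbers are hypothetical.

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical productivity measurements before and after a change
# to the working environment (all values are invented).
before = np.array([52.1, 48.3, 50.7, 49.9, 51.4, 47.8, 50.2, 49.5])
after = np.array([53.0, 49.1, 51.9, 50.4, 52.6, 48.9, 51.1, 50.8])

# Descriptive statistics: summarize each collection of observations.
print(f"before: mean = {before.mean():.2f}, sd = {before.std(ddof=1):.2f}")
print(f"after:  mean = {after.mean():.2f}, sd = {after.std(ddof=1):.2f}")

# Statistical inference: did the manipulation change the measurements?
t, p = ttest_rel(after, before)
print(f"paired t = {t:.2f}, p-value = {p:.3g}")
```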

Levels of measurement

There are four levels of measurement, or measurement scales, used in statistics: nominal, ordinal, interval, and ratio. They have different degrees of usefulness in statistical research. Ratio measurements have both a defined zero value and defined distances between measurements; they permit the greatest flexibility in the statistical methods that can be used for analyzing the data. Interval measurements have meaningful distances between measurements, but no meaningful zero value (as is the case with IQ measurements or with temperature measurements in Fahrenheit). Ordinal measurements have imprecise differences between consecutive values, but a meaningful order to those values. Nominal measurements have no meaningful rank order among values.

Since variables conforming only to nominal or ordinal measurements cannot be reasonably measured numerically, they are sometimes grouped together as categorical variables, whereas ratio and interval measurements are grouped together as quantitative or continuous variables because of their numerical nature.
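
As a rough illustration of why the distinction matters, each level supports different summary statistics: the mode is the only meaningful "average" for nominal data, the median needs only an ordering, and means, differences, and ratios require interval or ratio scales. A minimal sketch in Python, with invented example values:

```python
import statistics

nominal = ["red", "blue", "red", "green"]  # labels only; no order
ordinal = [1, 3, 2, 2, 1]                  # e.g. survey ranks; order only
interval = [98.6, 99.1, 97.8]              # e.g. degrees Fahrenheit; no true zero
ratio = [1.2, 0.5, 3.4]                    # e.g. weights in kg; true zero exists

print(statistics.mode(nominal))    # mode: the only meaningful "average" here
print(statistics.median(ordinal))  # median: uses only the ordering
print(statistics.mean(interval))   # mean: differences are meaningful
print(ratio[2] / ratio[0])         # ratios ("2.8 times as much") are meaningful
```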

Statistical techniques

Some well known statistical tests and procedures are:

* Analysis of variance (ANOVA)
* Chi-square test
* Correlation
* Factor analysis
* Mann–Whitney U test
* Mean squared weighted deviation (MSWD)
* Pearson product-moment correlation coefficient
* Regression analysis
* Spearman’s rank correlation coefficient
* Student’s t-test
* Time series analysis
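
To show how several of these procedures look in practice, the sketch below applies a few of them to two simulated samples using SciPy; the sample sizes, means, and seed are arbitrary choices for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(loc=5.0, scale=1.0, size=30)  # simulated group A
b = rng.normal(loc=5.5, scale=1.0, size=30)  # simulated group B

t, p_t = stats.ttest_ind(a, b)      # Student's t-test
u, p_u = stats.mannwhitneyu(a, b)   # Mann-Whitney U test
r, p_r = stats.pearsonr(a, b)       # Pearson product-moment correlation
rho, p_s = stats.spearmanr(a, b)    # Spearman's rank correlation
f, p_f = stats.f_oneway(a, b)       # one-way ANOVA (two groups here)

print(f"t = {t:.2f} (p = {p_t:.3g}), U = {u:.0f} (p = {p_u:.3g})")
print(f"r = {r:.2f}, rho = {rho:.2f}, F = {f:.2f}")
```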

Specialized disciplines

Some fields of inquiry use applied statistics so extensively that they have specialized terminology. These disciplines include:

* Actuarial science
* Applied information economics
* Biostatistics
* Bootstrap and jackknife resampling
* Business statistics
* Chemometrics (for analysis of data from chemistry)
* Data analysis
* Data mining (applying statistics and pattern recognition to discover knowledge from data)
* Demography
* Economic statistics (Econometrics)
* Energy statistics
* Engineering statistics
* Epidemiology
* Geography and Geographic Information Systems, specifically in Spatial analysis
* Image processing
* Psychological statistics
* Quality control
* Reliability engineering
* Social statistics
* Statistical literacy
* Statistical modeling
* Statistical surveys
* Structured data analysis (statistics)
* Survival analysis
* Statistics in various sports, particularly baseball and cricket

Statistics is a key tool in business and manufacturing as well. It is used to understand variability in measurement systems, to control processes (as in statistical process control, or SPC), to summarize data, and to make data-driven decisions. In these roles it is a key tool, and perhaps the only reliable one.
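
For instance, a basic Shewhart-style control check flags measurements that fall outside three-sigma limits estimated from in-control baseline data. A minimal sketch in Python, with all measurements invented for illustration:

```python
import numpy as np

# Phase I: estimate the process from in-control baseline data (invented).
baseline = np.array([10.1, 9.9, 10.2, 10.0, 9.8, 10.1, 10.0, 9.9, 10.2, 10.1])
center = baseline.mean()
sigma = baseline.std(ddof=1)
ucl, lcl = center + 3 * sigma, center - 3 * sigma  # three-sigma control limits

# Phase II: monitor new measurements against those limits.
new = [10.0, 10.3, 9.7, 13.5, 10.1]
for i, v in enumerate(new):
    status = "OUT OF CONTROL" if not (lcl <= v <= ucl) else "ok"
    print(f"sample {i}: {v:5.1f}  {status}  (limits {lcl:.2f}..{ucl:.2f})")
```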

Statistical computing

The rapid and sustained increases in computing power starting from the second half of the 20th century have had a substantial impact on the practice of statistical science. Early statistical models were almost always from the class of linear models, but powerful computers, coupled with suitable numerical algorithms, led to increased interest in nonlinear models (especially neural networks) as well as the creation of new types, such as generalized linear models and multilevel models.
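
As an illustration of the kind of model this made practical, the sketch below fits a Poisson generalized linear model with the statsmodels package; the data are simulated, and the coefficients (0.5 and 1.2) are arbitrary choices.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.uniform(0, 2, size=200)
y = rng.poisson(lam=np.exp(0.5 + 1.2 * x))  # counts with a log-linear mean

X = sm.add_constant(x)  # design matrix: intercept plus slope
result = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print(result.params)  # estimates should be near (0.5, 1.2)
```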

Increased computing power has also led to the growing popularity of computationally intensive methods based on resampling, such as permutation tests and the bootstrap, while techniques such as Gibbs sampling have made Bayesian models more feasible to use. The computer revolution has implications for the future of statistics, with a new emphasis on “experimental” and “empirical” statistics. A large number of both general- and special-purpose statistical software packages are now available; R and gretl are examples of open source statistical packages.
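
Both resampling ideas can be sketched in a few lines of NumPy; the two simulated samples and the effect size below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
a = rng.normal(5.0, 1.0, size=40)
b = rng.normal(5.6, 1.0, size=40)
observed = b.mean() - a.mean()

# Bootstrap: resample each group with replacement to get an
# approximate confidence interval for the difference in means.
boot = [rng.choice(b, b.size).mean() - rng.choice(a, a.size).mean()
        for _ in range(5000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"difference = {observed:.2f}, 95% bootstrap CI = ({lo:.2f}, {hi:.2f})")

# Permutation test: shuffle group labels to simulate the null
# hypothesis that the two groups are exchangeable.
pooled = np.concatenate([a, b])
count = 0
for _ in range(5000):
    perm = rng.permutation(pooled)
    if abs(perm[:a.size].mean() - perm[a.size:].mean()) >= abs(observed):
        count += 1
print(f"permutation p-value = {count / 5000:.4f}")
```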
