Generalizability
Husserl distinguishes the procedures of generalization and specialization from the procedures of formalization and deformalization. Generalization is the ascent from a species to higher genera on the tree of generic entities, while formalization is a "contentful generalization into pure logical form." For example, the transition from "red" to "sensory quality" is a generalization, while the transition from "red" to "adjective", or from "chalk" to "being" or "existent", is a formalization. Conversely, deformalization is the filling of an empty logical form with meaningful content, and specialization is a descent down the tree of genera.
Examples of both are found in mathematics. An example of formalization: the transition from the intuitive picture of a topological space as a kind of soft, stretchable but untearable surface, to a formal definition that bears little resemblance to the original picture: a topological space is a set in which certain subsets are distinguished so that
1. The empty set and the whole set are distinguished.
2. Finite intersections of distinguished sets are distinguished.
3. Arbitrary unions of distinguished sets are distinguished.
The intuitive idea of a continuous map of topological spaces as a map that does not tear the space is likewise transformed into a formal definition: a map from one topological space to another is called continuous if the preimages of distinguished sets are distinguished. Without the intuitive idea we would, of course, still be able to prove theorems about topological spaces, but we would lose the guiding thread: which theorems do we actually need? Mathematical discoveries can be made prior to formalization, as the entire history of mathematics shows, in particular the Pythagoreans' discovery of the incommensurability of the diagonal of a square with its side.
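The formal definition above is concrete enough to be checked mechanically on finite examples. The sketch below (function names and the example space are our own illustration) verifies the topology axioms and the continuity criterion:

```python
from itertools import chain, combinations

def is_topology(X, opens):
    """Check the topology axioms for a family `opens` of frozensets on a finite set X."""
    X = frozenset(X)
    # Axiom 1: the empty set and the whole set are distinguished.
    if frozenset() not in opens or X not in opens:
        return False
    # Axiom 2: finite intersections (pairwise closure suffices for a finite family).
    for a, b in combinations(opens, 2):
        if a & b not in opens:
            return False
    # Axiom 3: arbitrary unions; for a finite family, check every subfamily.
    families = chain.from_iterable(
        combinations(list(opens), r) for r in range(1, len(opens) + 1)
    )
    for family in families:
        if frozenset(chain.from_iterable(family)) not in opens:
            return False
    return True

def is_continuous(f, opens_X, opens_Y):
    """f: X -> Y (given as a dict) is continuous iff preimages of open sets are open."""
    for v in opens_Y:
        preimage = frozenset(x for x in f if f[x] in v)
        if preimage not in opens_X:
            return False
    return True

# Example: the Sierpinski space on X = {0, 1}.
T = {frozenset(), frozenset({1}), frozenset({0, 1})}
```

For instance, `is_topology({0, 1}, T)` holds, the identity map is continuous, while the swap `{0: 1, 1: 0}` is not, since the preimage of `{1}` is `{0}`, which is not open.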
Other examples: Hilbert's formalization of Euclidean geometry (a straight line is whatever satisfies the axioms), Tarski's formalization of the concept of the limit of a sequence, Cauchy's formalization of mathematical analysis, Kolmogorov's formalization of probability theory, the formalization of arithmetic (to the extent possible), the formalization of naive set theory and of logic, the formalization of the concept of a real number as the length of a segment, etc. Generalization occurs constantly in mathematics (any example from the table).
Formalization can also open new opportunities for generalization, and generalization can hint at the possibility of a good formalization. For example, Klein's Erlangen program can be viewed both as a formalization of the concept of a geometry and as a generalization of the geometries known at that time.
Type 1 Error and Type 2 Error
Conclusions based on statistical data can often contain errors.
Errors in testing statistical hypotheses come in two kinds. A type I error is the rejection of the null hypothesis when it is in fact true. A type II error is the acceptance of the null hypothesis when the alternative hypothesis is in fact true.
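The type I error rate can be seen directly in a small simulation. The sketch below (our own illustration, assuming a two-sided z-test with known variance at significance level 0.05) generates data for which the null hypothesis is true and counts how often it is nevertheless rejected:

```python
import random

def z_test_rejects(sample, mu0, sigma):
    """Two-sided z-test for the mean with known sigma: reject H0: mu = mu0?"""
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / n ** 0.5)
    return abs(z) > 1.96  # critical value for alpha = 0.05

random.seed(0)
# H0 is actually true here: the data come from N(0, 1).
trials = 2000
rejections = sum(
    z_test_rejects([random.gauss(0, 1) for _ in range(20)], mu0=0, sigma=1)
    for _ in range(trials)
)
type1_rate = rejections / trials  # should be close to alpha = 0.05
```

By construction, every rejection counted here is a type I error, so `type1_rate` estimates the significance level itself.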
Statistical Power
Statistical power, in mathematical statistics, is the probability of rejecting the main (null) hypothesis in a significance test when the competing (alternative) hypothesis is correct. The higher the power of a statistical test, the less likely a type II error. Power is also used to calculate the sample size required to detect an effect of the desired magnitude.
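The relation between power, effect size, and sample size can be made explicit for a one-sided z-test. A minimal sketch (our own illustration, with the significance level fixed at 0.05):

```python
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def power_one_sided_z(effect, sigma, n):
    """Power of a one-sided z-test of H0: mu = 0 vs H1: mu = effect > 0, alpha = 0.05."""
    z_alpha = 1.6449  # upper 0.05 quantile of the standard normal
    return normal_cdf(effect / sigma * sqrt(n) - z_alpha)

def required_n(effect, sigma, target_power=0.8):
    """Smallest sample size giving at least the target power (simple search)."""
    n = 1
    while power_one_sided_z(effect, sigma, n) < target_power:
        n += 1
    return n
```

For example, detecting a half-standard-deviation effect (`effect=0.5`, `sigma=1`) with 80% power requires a sample of 25 under these assumptions, and with n = 30 the power is roughly 0.86.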
Hypothesis
A statistical hypothesis is an assumption about the properties of the probability distribution underlying the observed random phenomenon. The observations are usually represented as realizations of a set of random variables, finite or infinite. The joint distribution of these random variables is not fully known, and a statistical hypothesis asserts that it belongs to some particular class of distributions. The task is then to test the statistical hypothesis.
Purposeful Sample
Purposeful (purposive) sampling is a type of non-random selection of observations aimed at a better understanding of the phenomenon. The method involves selecting from the total sample a particular group of cases that exhibit the properties under study. Further statistical analyses are then conducted on the selected group, with higher statistical power of the tests. An example of purposive sampling is a study of the effectiveness of social programs for the poor: among all available observations one selects the poorest families, and the statistical procedures are then carried out on this new, specially selected sample.
Simple Random Sampling
Simple random sampling (SRS) is a probabilistic sampling method in which each element of the population has a known and equal probability of selection. Each element is selected independently of every other element, and the sample is drawn at random from a sampling frame.
With simple random sampling, the researcher first compiles the sampling frame, in which each element is assigned a unique identification number. Random numbers are then generated to determine which elements to include in the sample. These random numbers may be generated by a computer program.
Simple random sampling has obvious advantages. The method is extremely easy to understand, and the results of the study can be extended to the target population. Most approaches to deriving statistical conclusions assume that the data were collected by simple random sampling. However, the method has at least four important limitations. First, it is often difficult to construct a sampling frame that would allow drawing a simple random sample.
Second, simple random sampling can yield a sample spread over a large geographical area, which significantly increases the time and cost of data collection. Third, the results of a simple random sample are often characterized by lower precision and a higher standard error than the results of other probabilistic methods. Fourth, SRS can produce an unrepresentative sample. Although samples obtained by simple random sampling adequately represent the general population on average, some of them represent the target population extremely poorly; this is especially likely when the sample size is small. For these reasons simple random sampling is not used very often in marketing research; systematic sampling is more popular.
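The frame-and-random-numbers procedure described above can be sketched as follows (the frame and the seed are illustrative):

```python
import random

def simple_random_sample(frame, n, seed=None):
    """Draw a simple random sample of size n, without replacement, from a sampling frame."""
    rng = random.Random(seed)          # the "table of random numbers"
    return rng.sample(frame, n)        # each element has equal selection probability

# Sampling frame: every element carries a unique identification number.
frame = list(range(1, 101))
sample = simple_random_sample(frame, 10, seed=42)
```

Every subset of size 10 is equally likely to be drawn, which is exactly the defining property of SRS.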
Stratified Random Sample
Stratified sampling is a two-phase process in which the population is first divided into subgroups (strata). The strata must be mutually exclusive and collectively exhaustive, so that each element of the population belongs to one and only one stratum and no element is left out. Elements are then selected at random from each stratum; the method generally used is simple random sampling. Technically, the selection within each stratum should be carried out only by SRS, but in practice systematic sampling and other probabilistic sampling methods are sometimes used. Stratified sampling differs from quota sampling in that its elements are chosen at random rather than on the basis of convenience or the researcher's judgment. The main task of stratified sampling is to increase precision without increasing cost.
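The two-phase scheme can be sketched as follows (the population, the stratum labels, and the helper names are our own illustration):

```python
import random
from collections import defaultdict

def stratified_sample(population, stratum_of, per_stratum, seed=None):
    """Phase 1: split the population into strata; phase 2: draw an SRS within each.

    `stratum_of` maps an element to its stratum label, so the strata are
    mutually exclusive and collectively exhaustive by construction.
    """
    rng = random.Random(seed)
    strata = defaultdict(list)
    for x in population:
        strata[stratum_of(x)].append(x)
    sample = []
    for members in strata.values():
        sample.extend(rng.sample(members, min(per_stratum, len(members))))
    return sample

# Hypothetical population: (person_id, grade-group) pairs.
people = [(i, grade) for grade in ("1-5", "6-9", "10-11") for i in range(30)]
s = stratified_sample(people, stratum_of=lambda p: p[1], per_stratum=5, seed=1)
```

Each stratum contributes exactly its allotted number of elements, which is what drives the gain in precision.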
Cluster Sampling
First, the target population is divided into mutually exclusive and collectively exhaustive subgroups called clusters. Then clusters are selected using a probabilistic sampling method, such as simple random sampling. Either all the elements of each selected cluster are included in the sample, or elements are drawn from each selected cluster probabilistically.
The main difference between cluster and stratified sampling is that in cluster sampling only the selected subgroups (clusters) are used, whereas in stratified sampling all the subgroups (strata) are used for further selection. The two methods also have different goals. The purpose of cluster sampling is to increase sampling efficiency by reducing its cost; the aim of stratified sampling is to increase precision. The criteria for forming clusters are directly opposite to the criteria for forming strata: the members of a cluster should be as heterogeneous as possible, while the clusters themselves should be as homogeneous as possible. Ideally, each cluster is a small-scale model of the whole population. In cluster sampling, a sampling frame is needed only for the clusters included in the sample.
A common form of cluster sampling is area sampling, in which the clusters are geographical areas such as districts, residential areas, or blocks. If the selection of the basic elements is performed in a single stage (for example, the researcher chooses several blocks and then includes in the sample all the families living in those blocks), the method is called one-stage area sampling. If the selection is carried out in two (or more) stages (the researcher selects blocks, then within each selected block selects the families to be included), the method is called two-stage (or multistage) area sampling. The distinguishing feature of one-stage area sampling is that all the families in the selected blocks (or geographical areas) are included in the sample.
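The one-stage variant can be sketched as follows (the block and family names are our own illustration):

```python
import random

def one_stage_cluster_sample(clusters, n_clusters, seed=None):
    """One-stage cluster sampling: pick clusters by SRS, then keep ALL their elements."""
    rng = random.Random(seed)
    chosen = rng.sample(list(clusters), n_clusters)
    return [element for name in chosen for element in clusters[name]]

# Hypothetical city blocks mapped to the families living in them.
blocks = {f"block{i}": [f"family{i}_{j}" for j in range(4)] for i in range(10)}
sample = one_stage_cluster_sample(blocks, n_clusters=3, seed=7)
```

Note that a sampling frame is needed only at the cluster level; the families are enumerated only inside the blocks that were actually drawn.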
Statistical Significance
In statistics, a result is called statistically significant if the probability of its occurring by chance, or of the occurrence of even more extreme values, is small. Here "extreme" refers to the degree of deviation of the test statistic from what the null hypothesis predicts.
A difference is called statistically significant if the appearance of the observed data (or of even more extreme data) would be unlikely under the assumption that the difference does not actually exist; this does not mean that the difference must be large, important, or significant in the everyday sense of the word.
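A concrete illustration: the exact one-sided p-value of observing k or more successes in n Bernoulli trials, applied to 9 heads in 10 tosses of a supposedly fair coin (the example numbers are our own):

```python
from math import comb

def binomial_p_value(k, n, p=0.5):
    """One-sided p-value: probability of k or more successes in n trials
    under the null hypothesis that the success probability is p."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k, n + 1))

# 9 heads in 10 tosses of a coin assumed fair:
p_val = binomial_p_value(9, 10)  # = 11/1024, about 0.011
```

Since 0.011 < 0.05, the result counts as statistically significant at the conventional level, even though ten tosses say nothing about whether the bias is large or important.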
Independent and Dependent Variables
The independent variable is the variable in an experiment that is intentionally manipulated or selected by the experimenter in order to determine its effect on the dependent variable.
The dependent variable is the variable measured in a scientific experiment whose changes are associated with changes in the independent variable.
In a psychological experiment, for example, the intensity of a stimulus can serve as the independent variable, and the subject's ability to perceive that stimulus as the dependent variable.
Intervening Variable
This kind of variable serves to explain the association between the independent and dependent variables when that association is not direct. Examples of such variables are intelligence, motivation, expectation, intention, etc.
- Compare and contrast type 1 error and type 2 error.
Can a doctor reduce the frequency (probability) of type I errors to zero? Yes, by always accepting the null hypothesis: in that case every patient will be sent to surgery. To eliminate type II errors, no operations should be performed at all. Clearly, both options are unacceptable extremes, although there are exceptions. For example, if we regard vaccination as an operation (it is only an injection, but still a small operation), it turns out that doctors follow the first scenario: they perform a small "preventive" operation on everyone in order to reduce the future type I error to zero.
"The consequences of the two kinds of errors can be different": what does this statement mean in the context of this example? If the patient does not need surgery but it is performed (a type II error), that is very unpleasant. If surgery is needed but is not performed (a type I error), the patient may even die.
Thus, the type I error is the more dangerous of the two, but we cannot completely avoid it.
- Compare and contrast independent, dependent and intervening variables.
Let me give an example from psychology. In the simplest case, the experimenter randomly varies one factor (variable), leaving the other factors unchanged.
A variable is called independent if it is changed by the experimenter. The variable whose changes are observed in the experiment is called the dependent variable. As a rule, one group of subjects is exposed to the independent variable, while the other group, called the control group, is not.
Since all other experimental conditions are identical for both groups, differences in the values of the dependent variable are most likely explained by changes in the independent variable. In this way the psychologist hopes to ascertain the effect of one variable on another and to identify a causal relationship, or a functional dependence of behavior.
Some authors describe this process in terms of "control". This captures the idea of a well-conducted study: the experimenter tries to control the variables, deliberately changing them or holding them constant. Such control makes it possible to identify with greater certainty the factors that determine the subject's behavior.
As already mentioned, in a simple experiment only one variable is manipulated, after which observations are made of how its changes affect the other variable. The freely varied variable is called the independent variable, and the observed variable the dependent variable.
In experimental psychology, the dependent variable often takes the form of direct answers to the experimenter's questions. Such dependent responses have been used in experimental psychology for a very long time and are still used today. At the same time, two types of independent variables are used in psychology.
The first type occurs when the experimenter systematically manipulates a variable, for example changing the size of the reinforcement an animal receives for a correct response. The second type of independent variable is based on selecting subjects who have a certain feature or features of interest.
- Compare and contrast purposeful, stratified random, simple random and cluster samplings.
We have already discussed the definitions of these types of sampling; now it is time to compare them briefly.
Simple random sampling is an approach in which each member of the population has the same known probability of being chosen. The process of forming such a sample is simple: write the names on tennis balls, put them in a large bowl, mix them thoroughly, and draw the desired number of elements. In practice the bowl is replaced by a table of random numbers. Simple random sampling is the surest way to achieve a representative sample.
When stratified sampling is used, efficiency improves because the precision of the research grows faster than its cost. For example, suppose a market study needs information on the smoking behavior of school students. It is clear that the answers of the youngest students will be nearly unanimous, while among older students the variability of responses may increase. In this case it makes more sense to divide all the students into, say, three groups (grades 1-5, grades 6-9, and grades 10-11) and to form a simple random sample within each group.
With cluster sampling, efficiency increases because cost decreases faster than precision. As with stratified sampling, forming a cluster sample is a two-stage process, but the population is divided not into "strata" but into "clusters". The clusters are then selected at random, and all members of the selected clusters are studied. For example, to find out the attitude of the city's high-school students toward smoking, one can form a random sample from the set of all graduating classes of the city and then interview all the students of the selected classes.
Non-random (purposeful) sampling is a selection method for which the probability of each element's inclusion in the sample cannot be calculated in advance, which of course makes it impossible to calculate how representative the sample is. Such sampling is used when the purpose of the study is an in-depth qualitative description of some particular social phenomenon.