Found 20 similar documents; search took 0 ms
1.
Computer processing of electroencephalographic data (total citations: 1; self-citations: 0; citations by others: 1)
D D Elsberry 《American journal of veterinary research》1972,33(1):235-241
2.
Quality assurance of the data-generating processes in epidemiologic studies is a prerequisite for the internal validity of study results. This paper presents practical aspects of such a quality assurance system pertaining to the planning, data gathering, data entry and data processing phases of a study. It is concerned with data obtained in the framework of a project rather than with data accumulating continuously in private practices, research institutes or veterinary faculties. During the planning phase of a project, standard operating protocols should be developed that assure reliable performance of observation, coding and data entry. The database structure, consisting of tables, input validation rules and queries, should be predefined and well documented. A data safety concept will provide the necessary integrity, physical safety and availability of the data. The paper presents technical solutions to common data processing problems, with emphasis on re-coding and relational database facilities (Microsoft Access), using a hypothetical study on risk factors for mastitis.
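The predefined input validation rules mentioned above can be sketched in a language-neutral way. This is a minimal illustration only; the field names, value ranges and rule set are invented for a hypothetical mastitis risk-factor study and are not taken from the paper:

```python
# Hypothetical sketch: predefined input validation rules for a
# mastitis risk-factor study; field names and ranges are invented.

VALIDATION_RULES = {
    "parity":         lambda v: isinstance(v, int) and 1 <= v <= 15,
    "scc_cells_ml":   lambda v: isinstance(v, (int, float)) and v >= 0,
    "milking_system": lambda v: v in {"parlour", "robot", "tie-stall"},
}

def validate_record(record):
    """Return the names of fields that are missing or violate their rule."""
    return [
        field for field, rule in VALIDATION_RULES.items()
        if field not in record or not rule(record[field])
    ]

record = {"parity": 3, "scc_cells_ml": 180_000, "milking_system": "robot"}
print(validate_record(record))  # an empty list means the record passes
```

Documenting rules as data, as in the dictionary above, keeps the validation logic inspectable alongside the table definitions, which is in the spirit of the predefined and well-documented database structure the abstract calls for.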
4.
Breeding and Genetics Symposium: really big data: processing and analysis of very large data sets (total citations: 1; self-citations: 0; citations by others: 1)
Modern animal breeding data sets are large and getting larger, due in part to the recent availability of high-density SNP arrays and cheap sequencing technology. High-performance computing methods for efficient data warehousing and analysis are under development. Financial and security considerations are important when using shared clusters. Sound software engineering practices are needed, and it is better to use existing solutions when possible. Storage requirements for genotypes are modest, although full-sequence data will require greater storage capacity. Storage requirements for intermediate and results files for genetic evaluations are much greater, particularly when multiple runs must be stored for research and validation studies. The greatest gains in accuracy from genomic selection have been realized for traits of low heritability, and there is increasing interest in new health and management traits. The collection of sufficient phenotypes to produce accurate evaluations may take many years, and high-reliability proofs for older bulls are needed to estimate marker effects. Data mining algorithms applied to large data sets may help identify unexpected relationships in the data, and improved visualization tools will provide insights. Genomic selection using large data requires a lot of computing power, particularly when large fractions of the population are genotyped. Theoretical improvements have made possible the inversion of large numerator relationship matrices, permitted the solving of large systems of equations, and produced fast algorithms for variance component estimation. Recent work shows that single-step approaches combining BLUP with a genomic relationship (G) matrix have computational requirements similar to those of traditional BLUP, and the limiting factor is the construction and inversion of G for many genotypes. A naïve algorithm for creating G for 14,000 individuals required almost 24 h to run, but custom libraries and parallel computing reduced that to 15 min. Large data sets also create challenges for the delivery of genetic evaluations that must be overcome in a way that does not disrupt the transition from conventional to genomic evaluations. Processing time is important, especially as real-time systems for on-farm decisions are developed. The ultimate value of these systems is to decrease time-to-results in research, increase accuracy in genomic evaluations, and accelerate rates of genetic improvement.
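The construction of G described above can be sketched with NumPy. This follows one common formulation (centring allele counts by observed frequencies and scaling by the summed heterozygosity), not necessarily the exact algorithm benchmarked in the abstract, and the simulated dimensions are illustrative rather than the 14,000-animal case:

```python
import numpy as np

# Hedged sketch of building a genomic relationship matrix (G) from
# SNP genotype counts; dimensions and data are simulated.

rng = np.random.default_rng(0)
n_animals, n_snps = 200, 1_000
M = rng.integers(0, 3, size=(n_animals, n_snps)).astype(float)  # 0/1/2 allele counts

p = M.mean(axis=0) / 2.0                  # observed allele frequencies
Z = M - 2.0 * p                           # centre each SNP column
G = (Z @ Z.T) / (2.0 * np.sum(p * (1.0 - p)))

print(G.shape)  # (200, 200)
```

The matrix product `Z @ Z.T` dispatches to an optimized, multithreaded BLAS routine, which is exactly the kind of library and parallelism gain that cut the abstract's 24 h naïve run down to minutes.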
5.
The data involved, and the steps taken, in establishing a fertility control programme are discussed. The incorporation of this fertility control programme into a computer programme is described. This involves the processing of data on reproduction (calving, oestrus, insemination and pregnancy diagnosis) by a computer system. The programme is designed to provide dairy farmers with information to facilitate herd management. Thus, each month the farmer is sent an advisory list stating which cows should be inseminated, which should be dried off, and those expected to calve. In addition, he receives a bi-annual report on the fertility status of his herd. Among others, the following terms are listed: the pregnancy rate after first insemination, the interval between parturition and first insemination, the interval between parturition and conception, the number of inseminations per conception and the number of cows culled through failure to conceive. The initial results and advantages of operating such a programme are given.
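The fertility figures listed above are straightforward to compute from per-cow reproduction records. The sketch below is illustrative only; the record layout and the three example cows are invented, not drawn from the programme described:

```python
from datetime import date

# Hypothetical per-cow reproduction records (calving date, service
# dates, conception outcome); field names are assumptions.
cows = [
    {"calved": date(2024, 1, 10), "inseminations": [date(2024, 3, 5), date(2024, 3, 28)], "conceived": True},
    {"calved": date(2024, 2, 2),  "inseminations": [date(2024, 4, 1)],                    "conceived": True},
    {"calved": date(2024, 1, 20), "inseminations": [date(2024, 3, 15)],                   "conceived": False},
]

# Mean interval between parturition and first insemination.
mean_calving_to_first = sum(
    (c["inseminations"][0] - c["calved"]).days for c in cows
) / len(cows)

# Pregnancy rate after first insemination: conceived to a single service.
pregnancy_rate_first = sum(
    1 for c in cows if c["conceived"] and len(c["inseminations"]) == 1
) / len(cows)

# Number of inseminations per conception, over cows that conceived.
conceived = [c for c in cows if c["conceived"]]
ins_per_conception = sum(len(c["inseminations"]) for c in conceived) / len(conceived)

print(mean_calving_to_first, pregnancy_rate_first, ins_per_conception)
```

With records in this shape, the monthly advisory list reduces to filtering the same table by dates and status, so one data structure can drive both the action lists and the bi-annual fertility report.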
20.
The application of MUMPS in a computerised recording system for herd health and production control on dairy farms is reviewed. MUMPS is an interactive multi-user database management system that is both an operating system and a high-level computer language. In this system, coding of veterinary and management events prior to data entry is not needed. Programmes and data structures can easily be adapted and extended owing to the features of MUMPS, and this flexibility allows the dairy-farm system to support epidemiological analyses. The programme is used by farmers and veterinary surgeons by means of terminals linked to a central computer. The system provides action lists for farmers and veterinary surgeons; the information on these lists is presented in a multidisciplinary way. Several herd reports and analyses, including frequency distributions and graphs, are given. These reports enable the investigation of cross-relations between farm aspects, and aid in the detection of problem areas.
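The action lists described above can be sketched in a language-neutral way (Python here, not MUMPS). The decision thresholds, field names and example herd are assumptions for illustration, not the reviewed system's actual rules:

```python
from datetime import date

# Illustrative sketch of deriving an action list from uncoded event
# records; thresholds (60 and 245 days) and fields are assumptions.

TODAY = date(2024, 6, 1)
herd = [
    {"cow": 101, "calved": date(2024, 3, 1), "last_insemination": None},
    {"cow": 102, "calved": date(2023, 9, 1), "last_insemination": date(2023, 11, 20)},
]

def action_list(herd, today=TODAY):
    actions = []
    for c in herd:
        days_since_calving = (today - c["calved"]).days
        if c["last_insemination"] is None and days_since_calving >= 60:
            actions.append((c["cow"], "inseminate"))
        elif c["last_insemination"] is not None and days_since_calving >= 245:
            actions.append((c["cow"], "dry off"))
    return actions

print(action_list(herd))  # [(101, 'inseminate'), (102, 'dry off')]
```

Because the source events are stored uncoded, such rules can be rewritten or extended without re-entering data, which is the adaptability the abstract attributes to MUMPS.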