1.
The study was conducted to develop methodology for least-cost strategies for using polymerase chain reaction (PCR)/probe testing of pooled blood samples to identify animals in a herd persistently infected with bovine viral diarrhea virus (BVDV). Cost was estimated for 5 protocols using Monte Carlo simulations for herd prevalences of BVDV persistent infection (BVDV-PI) ranging from 0.5% to 3%, assuming a cost for a PCR/probe test of $20. The protocol associated with the least cost per cow involved an initial testing of pools followed by repooling and testing of positive pools. For a herd prevalence of 1%, the least cost per cow was $2.64 (95% prediction interval = $1.72, $3.68), where pool sizes for the initial and repooled testing were 20 and 5 blood samples per pool, respectively. Optimization of the least cost for pooled-sample testing depended on how well a presumed prevalence of BVDV-PI approximated the true prevalence of BVDV infection in the herd. As prevalence increased beyond 3%, the least cost increased, thereby diminishing the competitive benefit of pooled testing. The protocols presented for sample pooling have general application to screening or surveillance using a sensitive diagnostic test to detect very low prevalence diseases or pathogens in flocks or herds.
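The two-stage protocol described above (test pools, repool the positives, then test individuals from positive repools) is straightforward to simulate. The sketch below is a minimal Monte Carlo version, not the authors' model: it assumes a perfect PCR/probe test, independent infection status per animal, and the $20 test cost quoted above, so it only roughly approximates the published per-cow estimates.

```python
import random

def cost_per_cow(herd_size=1000, prevalence=0.01, pool1=20, pool2=5,
                 test_cost=20.0, n_sims=500, seed=1):
    """Monte Carlo estimate of per-cow cost for two-stage pooled testing:
    pools of `pool1` samples, positive pools repooled into pools of `pool2`,
    and individuals in positive repools tested one by one."""
    rng = random.Random(seed)
    costs = []
    for _ in range(n_sims):
        status = [rng.random() < prevalence for _ in range(herd_size)]
        tests = 0
        stage2 = []
        for i in range(0, herd_size, pool1):    # first-stage pools
            pool = status[i:i + pool1]
            tests += 1
            if any(pool):                       # perfect test assumed
                stage2.extend(pool)
        singles = 0
        for i in range(0, len(stage2), pool2):  # repool the positive pools
            pool = stage2[i:i + pool2]
            tests += 1
            if any(pool):
                singles += len(pool)
        tests += singles                        # confirmatory individual tests
        costs.append(tests * test_cost / herd_size)
    return sum(costs) / len(costs)
```

With the paper's settings (1% prevalence, pools of 20 then 5), this toy model lands in the neighbourhood of the reported $2.64 per cow; it will diverge once imperfect test sensitivity or clustering of infection within the herd is added.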

2.
Testing of composite fecal (environmental) samples from high traffic areas in dairy herds has been shown to be a cost-effective and sensitive method for classification of herd status for Mycobacterium avium subsp. paratuberculosis (MAP). In the National Animal Health Monitoring System's (NAHMS) Dairy 2007 study, the apparent herd-level prevalence of MAP was 70.4% (369/524 herds had ≥1 culture-positive composite fecal sample out of 6 tested). Based on these data, the true herd-level prevalence (HP) of MAP infection was estimated using Bayesian methods adjusting for the herd sensitivity (HSe) and herd specificity (HSp) of the test method. The Bayesian prior for HSe of composite fecal cultures was based on data from the NAHMS Dairy 2002 study and the prior for HSp was based on expert opinion. The posterior median HP (base model) was 91.1% (95% probability interval, 81.6 to 99.3%) and estimates were most sensitive to the prior for HSe. The HP was higher than estimated from the NAHMS Dairy 1996 and 2002 studies but estimates are not directly comparable with those of prior NAHMS studies because of the different testing methods and criteria used for herd classification.
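The study above used a full Bayesian adjustment, but the core of the calculation can be illustrated with the frequentist Rogan-Gladen estimator, which applies the same HSe/HSp correction to the apparent prevalence. The Se/Sp values below are illustrative assumptions, not the study's actual priors.

```python
def rogan_gladen(apparent_prev, se, sp):
    """True prevalence from apparent prevalence, correcting for an
    imperfect test with sensitivity `se` and specificity `sp`."""
    return (apparent_prev + sp - 1.0) / (se + sp - 1.0)

# Apparent herd-level prevalence from the NAHMS Dairy 2007 data above.
ap = 369 / 524
# Herd-level Se/Sp here are assumed values for illustration only.
true_prev = rogan_gladen(ap, se=0.78, sp=0.99)
```

With these assumed inputs the point estimate comes out near the reported posterior median of 91.1%, and the structure of the formula explains the study's sensitivity analysis: when HSp is near 1, the correction is dominated almost entirely by HSe.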

3.
We developed a stochastic simulation model to compare the herd sensitivity (HSe) of five testing strategies for detection of Mycobacterium avium subsp. paratuberculosis (Map) in Midwestern US dairies. Testing strategies were ELISA serologic testing by two commercial assays (EA and EB), ELISA testing with follow-up of positive samples with individual fecal culture (EAIFC and EBIFC), individual fecal culture (IFC), pooled fecal culture (PFC), and culture of fecal slurry samples from the environment (ENV). We assumed that these dairies had no prior paratuberculosis-related testing and culling. We used cost-effectiveness (CE) analysis to compare the cost to HSe of testing strategies for different within-herd prevalences. HSe was strongly associated with within-herd prevalence, number of Map organisms shed in feces by infected cows, and number of samples tested. Among evaluated testing methods with 100% herd specificity (HSp), ENV was the most cost-effective method for herds with a low (5%), moderate (16%) or high (35%) Map prevalence. The PFC, IFC, EAIFC and EBIFC were increasingly more costly detection methods. Culture of six environmental samples per herd yielded ≥99% HSe in herds with ≥16% within-herd prevalence, but was not sufficient to achieve 95% HSe in low-prevalence herds (5%). Testing all cows using EAIFC or EBIFC, as is commonly done in paratuberculosis-screening programs, was less likely to achieve a HSe of 95% in low- than in high-prevalence herds. ELISA alone was a sensitive and low-cost testing method; however, without confirmatory fecal culture, testing 30 cows in non-infected herds yielded HSp of 21% and 91% for EA and EB, respectively.

4.
OBJECTIVE: To estimate prevalence of Salmonella spp in Ohio dairy farms and to identify potential risk factors for fecal shedding of salmonellae. DESIGN: Cross-sectional study. SAMPLE POPULATION: 105 Ohio dairy farms. PROCEDURE: Individual fecal samples from all mature cows in study herds were tested for Salmonella spp by use of standard bacteriologic culture procedures. Herds were identified as infected if at least 1 cow was shedding Salmonella spp. Information regarding herd characteristics, management practices, and health history was collected. Potential risk factors for herd-level Salmonella infection were identified. RESULTS: In 31% of the study herds (95% confidence interval, 22 to 40%), at least 1 cow was shedding Salmonella spp. Six percent of 7,776 fecal samples contained Salmonella organisms; prevalence within infected herds ranged from < 1 to 97%. Herd size, use of free stalls for lactating and nonlactating cows, and use of straw bedding in nonlactating cows were significantly associated with fecal shedding of Salmonella spp, as determined by use of univariate analysis. By use of multivariate analysis, large herds were more likely to be infected than smaller herds; however, no other factors were associated with Salmonella infection after adjustment for herd size. CONCLUSIONS AND CLINICAL RELEVANCE: Subclinical shedding of Salmonella spp is common in Ohio dairy herds, although we could not identify specific interventions that may influence the prevalence of Salmonella spp on dairy farms. It appears that large herd size and intensive management may provide an environment conducive to Salmonella shedding and chronic dairy herd infection.

5.
Two tests are used on a regular basis to detect Mycobacterium avium subsp. paratuberculosis (Map): ELISA and fecal culture. Fecal culture is considered more sensitive and specific but is costly and requires 3-4 months for results. Pooling of fecal samples of individual animals may reduce the high costs of fecal culture. The objective of the study was to investigate the diagnostic validity and costs for pooling of fecal samples in dairy farms relative to culture or an ELISA on individual samples to determine the cow- or herd-status for Map. Fifty fecal and blood samples per herd were collected in 12 Chilean dairy herds. The sensitivity of pooling was estimated given the pool-size, amount of shedding in the pool and the prevalence in the herd. The sensitivity of the pools relative to individual fecal culture was 46% (95% CI 29-63%) and 48% (28-68%) for pools of 5 and 10 cows, respectively. The sensitivity of the pools was lower in pools with low shedders (26 and 24% for pools of 5 and 10, respectively) than in pools with moderate or heavy shedders (>75% sensitivity). Pools of 10 cows are the better option to determine or monitor the herd status. A whole-herd ELISA is the least expensive way to determine the status of individual cows but has a lower Se and Sp than individual culture.

6.
Fifty dairy herds in Alberta were tested for the presence of Mycobacterium paratuberculosis by fecal culture and serum enzyme-linked immunosorbent assay (ELISA). Individual sera (1500) were tested for antibodies to M. paratuberculosis by ELISA. Fecal samples were combined in pools of 3 (10 pools/herd) for a total of 500 pools that were cultured for M. paratuberculosis. Thirty cultures, including all 10 pools from 1 herd, were not readable due to fungal contamination. The remaining 470 cultures, representing 49 herds, yielded 16 positive pools (3.4% ± 2.1%) from 10 herds (20.4% ± 11.3%). The ELISA of each of the 1500 sera detected 105 (7.0% ± 2.4%) positive sera and 20 (40.0% ± 13.6%) positive herds, based on 2 or more individual positive sera in the herd. The true herd-level prevalence, as determined by ELISA, was 26.8% ± 9.6%. The true herd-level prevalence, as determined by M. paratuberculosis fecal culture, ranged from 27.6% ± 6.5% to 57.1% ± 8.3%, depending on whether 1, 2, or all 3 individual fecal samples in the positive fecal pool were culture positive.

7.
Epidemiologic investigations of Salmonella infections in dairy cattle often rely on testing fecal samples from individual animals or samples from other farm sources to determine herd infection status. The objectives of this project were to evaluate the effect of sampling frequency on Salmonella isolation and to compare Salmonella isolation and serogroup classification among sample sources on 12 US dairy farms sampled weekly for 7-8 weeks. Three herds per state were enrolled from Michigan, Minnesota, New York and Wisconsin based upon predefined herd-size criteria. Weekly samples were obtained from cattle, bulk tank milk, milk filters, water and feed sources and environmental sites. Samples were submitted to a central laboratory for isolation of Salmonella using standard laboratory procedures. The herd average number of cattle fecal samples collected ranged from 26 to 58 per week. Salmonella was isolated from 9.3% of 4049 fecal samples collected from cattle and 12.9% of 811 samples from other sources. Serogroup C1 was found in more than half of the samples and multiple serogroups were identified among isolates from the same samples and farms. The percentage of herd visits with at least one Salmonella isolate from cattle fecal samples increased with overall herd prevalence of fecal shedding. Only the three herds with an average fecal shedding prevalence of more than 15% had over 85% of weekly visits with at least one positive fecal sample. The prevalence of fecal shedding from different groups of cattle varied widely among herds, showing that herds with infected cattle may be classified incorrectly if only one age group is tested. Testing environmental sample sources was more efficient for identifying infected premises than using individual cattle fecal samples.

8.
A stochastic spreadsheet model was developed to obtain estimates of the costs of whole herd testing on dairy farms for Mycobacterium avium subsp. paratuberculosis (Map) with pooled fecal samples. The optimal pool size was investigated for 2 prevalence scenarios (a low-prevalence herd [≤5%] and a high-prevalence herd [>5%]) and for different herd sizes (100-, 250-, 500- and 1,000-cow herds). All adult animals in the herd were sampled, and the samples of the individuals were divided into equal sized pools. When a pool tested positive, the manure samples of the animals in the pool were tested individually. The individual samples from a negative pool were assumed negative and not tested individually. Distributions were used to model the uncertainty about the sensitivity of the fecal culture at farm level and Map prevalence. The model randomly allocated a disease status to the cows (not shedding, low Map shedder, moderate Map shedder, and heavy Map shedder) on the basis of the expected prevalence in the herd. Pooling was not efficient in 100-cow and 250-cow herds with low prevalence because the probability of detecting a Map infection in these herds became poor (53% and 88%) when samples were pooled. When samples were pooled in larger herds, the probability of detecting at least 1 (moderate to heavy) shedder was > 90%. The cost reduction as a result of pooling varied from 43% in a 100-cow herd with a high prevalence to 71% in a 1,000-cow herd with a low prevalence. The optimal pool size increased with increasing herd size and varied from 3 for a 500-cow herd with a low prevalence to 5 for a 1,000-cow herd with a high prevalence.

9.

Background

Bovine viral diarrhoea (BVD) is an infectious disease of cattle with a worldwide distribution. Herd-level prevalence varies among European Union (EU) member states, and prevalence information facilitates decision-making and monitoring of progress in control and eradication programmes. The primary objective of the present study was to address significant knowledge gaps regarding herd BVD seroprevalence (based on pooled sera) and control on Irish farms, including vaccine usage.

Methods

Preliminary validation of an indirect BVD antibody ELISA test (Svanova Biotech AB, Uppsala, Sweden) using pooled sera was a novel and important aspect of the present study. Serum pools were constructed from serum samples of known seropositivity and pools were analysed using the same test in laboratory replicates. The output from this indirect ELISA was expressed as a percentage positivity (PP) value. Results were used to guide selection of a proposed cut-off (PCO) PP. This indirect ELISA was applied to randomly constructed within-herd serum pools, in a cross-sectional study of a stratified random sample of 1,171 Irish dairy and beef cow herds in 2009, for which vaccination status was determined by telephone survey. The herd-level prevalence of BVD in Ireland (percentage positive herds) was estimated in non-vaccinating herds, where herds were classified positive when herd pool result exceeded PCO PP. Vaccinated herds were excluded because of the potential impact of vaccination on herd classification status. Comparison of herd-level classification was conducted in a subset of 111 non-vaccinating dairy herds using the same ELISA on bulk milk tank (BMT) samples. Associations between possible risk factors (herd size, in quartiles) and herd-level prevalence were determined using chi-squared analysis.

Results

Receiver Operating Characteristic analysis of replicate results in the preliminary validation study yielded an optimal proposed cut-off percentage positivity (PCO PP) of 7.58%. This PCO PP gave a relative sensitivity (Se) and specificity (Sp) of 98.57% and 100%, respectively, relative to the use of the ELISA on individual sera, and was chosen as the optimal cut-off since it maximized the prevalence-independent Youden's Index. The herd-level BVD prevalence in non-vaccinating herds was 98.7% (95% CI 98.3-99.5%) in the cross-sectional study, with no significant difference between dairy and beef herds (98.3% vs 98.8%, respectively, p = 0.595). An agreement of 95.4% was found on kappa analysis of herd serological classification when bulk milk and serum pool results were compared in non-vaccinating herds. BVDV vaccine was used by 19.2% of farmers; 81% of vaccinated herds were dairy. A significant association was found between seroprevalence (quartiles) and herd size (quartiles) (p < 0.01), though no association was found between herd size (quartiles) and herd-level classification based on the PCO PP (p = 0.548).

Conclusions

The results from this study indicate that the true herd-level seroprevalence to bovine viral diarrhoea (BVD) virus in Ireland is approaching 100%. These findings will assist with national policy development, particularly with respect to the national BVD eradication programme, which commenced recently.

10.
Surveillance of porcine reproductive and respiratory syndrome (PRRS) in negative sow farms is usually performed by testing for the presence of antibodies against PRRS virus in serum with a commercial ELISA test. The objective of this study was to evaluate the feasibility of pooling serum samples for detection of PRRS virus antibodies by ELISA. The effect of pool size on the sensitivity and specificity of the ELISA test was evaluated by testing true positive samples and false positive samples, respectively, diluted in negative sera. The effect of three different cut-off values for the interpretation of the diagnostic test (0.4, 0.3 and 0.2) was evaluated as well. Furthermore, the obtained sensitivity and specificity estimates were used to calculate the herd sensitivity and herd specificity of surveillance protocols in different scenarios. The results showed that pooling serum samples to detect PRRSV antibodies resulted in a decrease in sensitivity and an increase in specificity, compared to testing individual samples, while the reduction of the s/p cut-off value recommended by the manufacturer (0.4) had the opposite effect. We describe an approach that can increase the herd sensitivity of a surveillance protocol for breeding herds, while maintaining high herd specificity and low testing costs. This can be achieved by sampling a larger number of animals and running the samples in pools. Therefore, the conventional monitoring protocols based on ELISA on individual samples can be improved by using pooled-sample testing.
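The herd-level trade-off this abstract describes (pooling lowers per-sample sensitivity but raises specificity, and sampling more animals recovers herd sensitivity) can be made concrete with a simple calculation. This is a generic sketch under the assumptions of known, constant pool-level Se/Sp and independently sampled animals, not the paper's actual scenario model.

```python
def surveillance_hse_hsp(n_animals, pool_size, prev, pool_se, pool_sp):
    """HSe/HSp of a protocol that samples `n_animals`, tests them in pools
    of `pool_size`, and declares the herd positive on >=1 positive pool."""
    n_pools = n_animals // pool_size
    # P(a pool contains at least one truly positive animal)
    p_pool_infected = 1 - (1 - prev) ** pool_size
    # P(a pool tests positive) in an infected herd
    p_pool_pos = p_pool_infected * pool_se + (1 - p_pool_infected) * (1 - pool_sp)
    hse = 1 - (1 - p_pool_pos) ** n_pools
    hsp = pool_sp ** n_pools   # herd negative iff every pool tests negative
    return hse, hsp
```

Increasing n_animals while keeping pool_size fixed raises HSe with only a modest increase in tests run, which is the "sample more animals, run them in pools" strategy the abstract recommends.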

11.

Background

The major indication for antibiotic use in Danish pigs is treatment of intestinal diseases post weaning. Clinical decisions on antibiotic batch medication are often based on inspection of diarrhoeic pools on the pen floor. In some of these treated diarrhoea outbreaks, intestinal pathogens can only be demonstrated in a small number of pigs within the treated group (low pathogen diarrhoea). Termination of antibiotic batch medication in herds suffering from such diarrhoea could potentially reduce the consumption of antibiotics in the pig industry. The objective of the present pilot study was to suggest criteria for herd diagnosis of low pathogen diarrhoea in growing pigs. Data previously collected from 20 Danish herds were used to create a case series of clinical diarrhoea outbreaks normally subjected to antibiotic treatment. In the present study, these diarrhoea outbreaks were classified as low pathogen (<15% of the pigs having bacterial intestinal disease) (n = 5 outbreaks) or high pathogen (≥15% of the pigs having bacterial intestinal disease) (n = 15 outbreaks). Based on the case series, different diagnostic procedures were explored, and criteria for herd diagnosis of low pathogen diarrhoea were suggested. The effect of sampling variation was explored by simulation.

Results

The diagnostic procedure with the highest combined herd-level sensitivity and specificity was qPCR testing of a pooled sample containing 20 randomly selected faecal samples. The criteria for a positive test result (high pathogen diarrhoea outbreak) were an average of 1.5 diarrhoeic faecal pools on the floor of each pen in the room under investigation and a pathogenic bacterial load ≥35,000 per gram in the faecal pool tested by qPCR. The bacterial load was the sum of Lawsonia intracellularis, Brachyspira pilosicoli and Escherichia coli F4 and F18 bacteria per gram faeces. The herd-diagnostic performance was (herd-level) diagnostic sensitivity = 0.99, diagnostic specificity = 0.80, positive predictive value = 0.94 and negative predictive value = 0.96.

Conclusions

The pilot study suggests criteria for herd diagnosis of low pathogen diarrhoea in growing pigs. The suggested criteria should now be evaluated, and the effect of terminating antibiotic batch medication in herds identified as suffering from low pathogen diarrhoea should be explored.

Electronic supplementary material

The online version of this article (doi:10.1186/2046-0481-67-24) contains supplementary material, which is available to authorized users.

12.
OBJECTIVE: To evaluate sensitivity of microbial culture of pooled fecal samples for detection of Mycobacterium avium subsp. paratuberculosis (MAP) in large dairy herds and assess the use of the method for estimation of MAP prevalence. ANIMALS: 1,740 lactating cows from 29 dairy herds in California. PROCEDURE: Serum from each cow was tested by use of a commercial ELISA kit. Individual fecal samples were cultured and used to create pooled fecal samples (10 randomly selected fecal samples/pool; 6 pooled samples/herd). Sensitivity of MAP detection was compared between Herrold's egg yolk (HEY) agar and a new liquid culture method. Bayesian methods were used to estimate true prevalence of MAP-infected cows and herd sensitivity. RESULTS: Estimated sensitivity for pooled fecal samples among all herds was 0.69 (25 culture-positive pools/36 pools that were MAP positive). Sensitivity increased as the number of culture-positive samples in a pool increased. The HEY agar method detected more infected cows than the liquid culture method but had lower sensitivity for pooled fecal samples. Prevalence of MAP-infected cows was estimated to be 4% (95% probability interval, 2% to 6%) on the basis of culture of pooled fecal samples. Herd-level sensitivity estimate ranged from 90% to 100% and was dependent on prevalence in the population and the sensitivity for culture of pooled fecal samples. CONCLUSIONS AND CLINICAL RELEVANCE: Use of pooled fecal samples from 10 cows was a cost-effective tool for herd screening and may provide a good estimate of the percentage of MAP-infected cows in dairy herds with a low prevalence of MAP.

13.
We propose a herd-level sample-size formula based on a common adjustment for prevalence estimates when diagnostic tests are imperfect. The formula depends on estimates of herd-level sensitivity and specificity. With Monte Carlo simulations, we explored the effects of different intracluster correlations on herd-level sensitivity and specificity. At low prevalence (e.g. 1% of animals infected), herd-level sensitivity increased with increasing intracluster correlation and many herds were classified as positive based only on false-positive test results. Herd-level sensitivity was less affected at higher prevalence (e.g. 20% of animals infected). A real-life example was developed for estimating ovine progressive pneumonia prevalence in sheep. The approach allows researchers to balance the number of herds and the total number of animals sampled by manipulating herd-level test characteristics (such as the number of animals sampled within a herd).
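The baseline that the simulations above perturb is the standard binomial model of herd-level testing, in which n animals are sampled independently (i.e. zero intracluster correlation) and the herd is declared positive at or above a reactor cut-off. A sketch of that baseline, with assumed inputs:

```python
from math import comb

def herd_se_sp(n, prev, se, sp, cutpoint=1):
    """Herd-level Se/Sp when a herd is declared positive at >= `cutpoint`
    reactors among `n` independently sampled animals (binomial model,
    ignoring intracluster correlation)."""
    p_pos_infected = prev * se + (1 - prev) * (1 - sp)  # P(test+) per animal, infected herd
    p_pos_free = 1 - sp                                 # P(test+) per animal, free herd

    def p_at_least(p):
        return 1 - sum(comb(n, k) * p**k * (1 - p)**(n - k)
                       for k in range(cutpoint))

    return p_at_least(p_pos_infected), 1 - p_at_least(p_pos_free)
```

Even with an individual specificity of 0.98, sampling 30 animals with a cut-off of one reactor gives HSp = 0.98^30, roughly 0.55, so nearly half of disease-free herds test positive on false positives alone, which is the low-prevalence phenomenon the abstract reports; raising the cut-off or sampling fewer animals per herd trades HSe against HSp.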

14.
The Danish government and cattle industry instituted a Salmonella surveillance program in October 2002 to help reduce Salmonella enterica subsp. enterica serotype Dublin (S. Dublin) infections. All dairy herds are tested by measuring antibodies in bulk tank milk at 3-month intervals. The program is based on a well-established ELISA, but the overall test program accuracy and misclassification was not previously investigated. We developed a model to simulate repeated bulk tank milk antibody measurements for dairy herds conditional on true infection status. The distributions of bulk tank milk antibody measurements for infected and noninfected herds were determined from field study data. Herd infection was defined as having either ≥1 Salmonella culture-positive fecal sample or ≥5% within-herd prevalence based on antibody measurements in serum or milk from individual animals. No distinction was made between Dublin and other Salmonella serotypes, which cross-react in the ELISA. The simulation model was used to estimate the accuracy of herd classification for true herd-level prevalence values ranging from 0.02 to 0.5. Test program sensitivity was 0.95 across the range of prevalence values evaluated. Specificity was inversely related to prevalence and ranged from 0.83 to 0.98. For a true herd-level infection prevalence of 15%, the estimate for specificity (Sp) was 0.96. Also at the 15% herd-level prevalence, approximately 99% of herds classified as negative in the program would be truly noninfected and 80% of herds classified as positive would be infected. The predictive values were consistent with the primary goal of the surveillance program which was to have confidence that herds classified negative would be free of Salmonella infection.
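The predictive values quoted above follow directly from the program's sensitivity, specificity and the true herd-level prevalence via Bayes' rule; the helper below reproduces them from the figures in the abstract (15% prevalence, Se 0.95, Sp 0.96).

```python
def predictive_values(prev, hse, hsp):
    """Herd-level positive/negative predictive values via Bayes' rule."""
    ppv = prev * hse / (prev * hse + (1 - prev) * (1 - hsp))
    npv = (1 - prev) * hsp / ((1 - prev) * hsp + prev * (1 - hse))
    return ppv, npv

ppv, npv = predictive_values(prev=0.15, hse=0.95, hsp=0.96)
# ppv is about 0.81 and npv about 0.99, matching the ~80% and ~99% above
```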

15.
We investigated characteristics of Yersinia enterocolitica infection in Ontario finisher pig herds. Our specific objectives were to estimate or test: prevalence of Y. enterocolitica shedding in finisher pigs, bioserotype distribution, agreement between the herd-level tests based on sampling pig and pooled fecal samples, whether bioserotypes cluster by farms, and whether Y. enterocolitica-positive herds cluster spatially. In total, 3747 fecal samples were collected from 100 farms over the years 2001, 2002, and 2004 (250 total herd visits). Fecal samples were tested by culture and positive isolates were biotyped and serotyped. Apparent pig-level prevalence of Y. enterocolitica was 1.8%, 3.2%, and 12.5% in 2001, 2002, and 2004, respectively. Estimated true pig-level prevalence of Y. enterocolitica was 5.1%, 9.1%, and 35.1% in 2001, 2002, and 2004, respectively. Herd-level prevalence was 16.3%, 17.9%, and 37.5% in 2001, 2002, and 2004, respectively. In all years, the most common bioserotype was 4, O:3, followed by bioserotype 2, O:5,27. Kappa between herd-level status based on pig and pooled samples ranged between 0.51 and 0.68 for biotype 1A and bioserotype 4, O:3, respectively. For 4, O:3, a significant bias in discordant pairs was detected, indicating that pig samples were more sensitive than pooled samples in declaring a herd as positive. Farms tended to be repeatedly positive with the same bioserotype, but positive study farms did not cluster spatially (suggesting lack of between herd transmission and lack of a common geographic risk factor).

16.
An epidemiological study of Fasciola hepatica in cattle was implemented in the north central region of Portugal. Both an enzyme-linked immunosorbent assay and an egg shedding quantification technique were used in the follow-up of seven herds. Two of these herds were negative and the other five were positive for F. hepatica. A herd cut-off value of 0.425 optical density was calculated and herd sensitivity (HSe) and herd specificity (HSp) were defined. Three seroprevalence studies were also implemented in the region with stratification by county sub-regions for a period of 18 months. Overall mean herd prevalence in Vagos of 11, 23 and 48% was progressively found for the three studies, respectively.

17.
Aggregate testing for the evaluation of Johne's disease herd status
This paper examines methods for evaluating herd Johne's disease status that could be used in a survey of the cattle industry. Emphasis is placed on aggregate testing, a process whereby a random sample of cattle from a herd is assessed using an imperfect test, such as an ELISA for detecting antibody in serum. Important aggregate test parameters discussed include: sample size, herd-level sensitivity, herd-level specificity, the number of reactors used for declaring a positive herd result, and the expected within-herd prevalence of disease. Aggregate testing may be useful for several livestock diseases. However, problems arise when it is applied to Johne's disease because of the poor sensitivity of the available diagnostic tests, the low within-herd prevalence of infection, and clustering of false positives within a herd.

18.
When foot-and-mouth-disease (FMD) was identified in Miyazaki prefecture in March 2000, Japan conducted an intensive serological and clinical survey in the areas surrounding the index herd. As a result of the survey during the 21 days of the movement-restriction period, two infected herds were detected and destroyed; there were no other cases in the months that followed. To evaluate the survey used for screening the disease-control area and surveillance area, we estimated the herd-level sensitivity of the survey (HSe) through a spreadsheet model using Monte-Carlo methods. The Reed-Frost model was incorporated to simulate the spread of FMD within an infected herd. In the simulations, 4, 8 and 12 effective-contact scenarios during the 5-day period were examined. The estimated HSes of serological tests (HSeE) were 71.0, 75.3 and 76.3% under the 4, 8 and 12 contact scenarios, respectively. The sensitivity analysis showed that increasing the number of contacts beyond 12 did not improve HSeE, but increasing the number of sampled animals and delaying the dates of sampling did raise HSeEs. Small herd size in the outbreak area (>80% of herds have <20 animals) seems to have helped in maintaining HSeE relatively high, although the serological inspection was carried out before sero-positive animals had a chance to increase in infected herds. The estimated herd-level specificity of serological tests (HSpE) was 98.6%. This HSpE predicted 224 false-positive herds (5th percentile estimate was 200 and 95th percentile was 249), which proved close to the 232 false-positive herds actually observed. The combined-test herd-level sensitivity (serological and clinical inspections combined; CTHSe), averaged 85.5, 87.6 and 88.1% for the 4, 8 and 12 contact scenarios, respectively. Using these CTHSes, the calculated probability that no infected herd was overlooked by the survey was ≥62.5% under the most-conservative, four-contact scenario. The probability that no more than one infected herd was overlooked was ≥89.7%.

19.
A practical approach to calculate sample size for herd prevalence surveys
When designing a herd-level prevalence study that will use an imperfect diagnostic test, it is necessary to consider the test sensitivity and specificity. A new approach was developed to take into account the imperfections of the test. We present an adapted formula that, when combined with an existing piece of software, allows improved planning. Bovine paratuberculosis is included as an example infection because it originally stimulated the work. Examples are provided of the trade-off between the benefit (low number of herds) and the disadvantage (large number of animals per herd and exclusion of small herds) that are associated with achieving high herd-level sensitivity and specificity. We demonstrate the bias in the estimate of prevalence and the underestimate of the confidence range that would arise if we did not account for test sensitivity and specificity.
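The kind of adjustment described can be sketched with a common normal-approximation sample-size formula: the survey observes the apparent prevalence, and the Rogan-Gladen correction inflates the required number of herds by a factor of 1/(HSe + HSp - 1)^2. This is a generic sketch of that idea, not necessarily the exact formula the authors combined with their software.

```python
from math import ceil

def herds_needed(true_prev, hse, hsp, precision, z=1.96):
    """Herds to sample so a Rogan-Gladen-adjusted prevalence estimate has
    half-width `precision` at ~95% confidence (normal approximation)."""
    ap = true_prev * hse + (1 - true_prev) * (1 - hsp)  # expected apparent prevalence
    return ceil(z**2 * ap * (1 - ap) / (precision**2 * (hse + hsp - 1)**2))
```

A perfect test (HSe = HSp = 1) at 20% prevalence and 5% precision recovers the textbook 246 herds; dropping to HSe = 0.9 and HSp = 0.95 pushes the requirement to roughly 365 herds, illustrating the trade-off the abstract demonstrates for bovine paratuberculosis.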

20.
The results of a commercial bulk-milk enzyme-linked immunosorbent assay (ELISA) test for herd-level bovine leukemia virus (BLV) status were compared to results obtained from individual agar-gel immunodiffusion (AGID) testing on sampled cattle. A positive herd was defined as a herd having one or more AGID-positive animals. The estimated true herd status was based on the sensitivity and specificity of the AGID test and the number of cattle sampled per herd. Ninety-seven herds were used, with a mean of 13 cows sampled per herd. The AGID test indicated an apparent herd prevalence of 70.1%. After accounting for the number of cows sampled and the sensitivity and specificity of the AGID test, the estimated true herd prevalence of BLV was 52.3%. The ELISA test identified 79.4% of herds as positive for BLV, and had an apparent sensitivity and specificity of 0.97 and 0.62, respectively. However, after accounting for the sensitivity and specificity of the AGID test in individual animals, the specificity of the ELISA test was 0.44. The ELISA test was useful for identifying BLV-negative herds (i.e., ruling out the presence of BLV infection in test negative herds). With the moderately low specificity, herds identified as positive by the ELISA test would require further testing at the individual or herd level to definitively establish their BLV status.
