Similar Literature
20 similar documents retrieved.
1.
A semi-quantitative model is presented to rank freshwater rainbow trout farms within a country or region with regard to the risk of becoming infected with and spreading a specified pathogen. The model was developed to support a risk-based surveillance scheme for notifiable salmonid pathogens. Routes of pathogen introduction and spread were identified through a process of expert consultation in a series of workshops. The routes were combined into themes (e.g. exposure via water, mechanical transmission). Themes were weighted based on expert opinion. Risk factors for each route were scored and combined into a theme score, which was adjusted by the weight. The number of sources and consignments was used to assess introduction via live fish movements onto the farm. Biosecurity measures were scored to assess introduction on fomites. Upstream farms, wild fish and processing plants were included in assessing the likelihood of introduction by water. The scores for each theme were combined to give separate risk scores for introduction and spread. A matrix was used to combine these into an overall risk score. A case study for viral haemorrhagic septicaemia is presented. Nine farms that represent a range of farming practices of rainbow trout farms in England and Wales are used as worked examples of the model. The model is suited to risk-ranking freshwater salmonid farms that are declared free of the pathogen(s) under consideration. The score allocated to a farm does not equate to a quantitative estimate of the probability that the farm will become infected or spread infection. Nevertheless, the method provides a transparent approach to ranking farms with regard to pathogen transmission risks. The output of the model at a regional or national level allows the allocation of surveillance effort to be risk based. It also provides fish farms with information on how they can reduce their risk score by improving biosecurity. The framework of the model can be applied to different production systems, which may have other routes of disease spread. Further work is recommended to validate the allocated scores. Expert opinion was obtained through workshops, where the outputs from groups were single point estimates for the relative weights of risks. More formal expert opinion elicitation methods could be used to capture variation in the experts' estimates and their uncertainty, and would provide data with which to simulate the model stochastically. The model can be downloaded (in Microsoft Excel format) from the Internet at: http://www.cefas.defra.gov.uk/6701.aspx.
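A minimal sketch of how weighted theme scores and a risk matrix of this kind can be combined; every weight, factor score and matrix cell below is an illustrative assumption, not a value elicited in the paper's workshops:

```r
# Illustrative only: all weights, scores and matrix cells are assumed.
theme_weights <- c(live_fish = 0.5, water = 0.3, fomites = 0.2)

# Risk-factor scores for one farm, scaled 0-1 (assumed)
farm <- list(
  live_fish = c(n_sources = 0.8, n_consignments = 0.6),
  water     = c(upstream_farms = 0.4, wild_fish = 0.5, processing = 0.2),
  fomites   = c(biosecurity = 0.3)
)

# Weighted theme scores, summed into an introduction score
introduction_score <- sum(sapply(names(theme_weights), function(th)
  mean(farm[[th]]) * theme_weights[th]))
spread_score <- 0.35  # placeholder: computed the same way from spread themes

# Combine introduction and spread via a categorical risk matrix
cut3 <- function(x) as.character(cut(x, c(-Inf, 0.33, 0.66, Inf),
                                     labels = c("low", "medium", "high")))
risk_matrix <- matrix(c("low",    "low",    "medium",
                        "low",    "medium", "high",
                        "medium", "high",   "high"),
                      nrow = 3, byrow = TRUE,
                      dimnames = list(intro  = c("low", "medium", "high"),
                                      spread = c("low", "medium", "high")))
risk_matrix[cut3(introduction_score), cut3(spread_score)]  # overall risk
```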

2.
Certification that a country, region or state is "free" from a pathogen or has a prevalence less than a threshold value has implications for trade in animals and animal products. We develop a Bayesian model for assessment of (i) the probability that a country is "free" of or has an animal pathogen, (ii) the proportion of infected herds in an infected country, and (iii) the within-herd prevalence in infected herds. The model uses test results from animals sampled in a two-stage cluster sample of herds within a country. Model parameters are estimated using modern Markov-chain Monte Carlo methods. We demonstrate our approach using published data from surveys of Newcastle disease and porcine reproductive and respiratory syndrome in Switzerland, and for three simulated data sets.
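The country-level part of such a model reduces to Bayes' rule once the herd-level prevalence, within-herd prevalence and test characteristics are fixed; a sketch under assumed parameter values (the paper's full model instead places priors on these quantities and estimates them by MCMC):

```r
# Posterior P(country free | all sampled animals test negative), with the
# herd-level prevalence (tau), within-herd prevalence (theta), test
# characteristics (se, sp) and prior all fixed at assumed values.
prior_inf <- 0.5  # prior P(country infected)
tau   <- 0.05     # proportion of herds infected, if country infected
theta <- 0.10     # within-herd prevalence in infected herds
se <- 0.95; sp <- 0.99
n  <- rep(20, 30) # two-stage sample: 30 herds, 20 animals each

p_neg_animal_inf <- theta * (1 - se) + (1 - theta) * sp  # animal from infected herd
p_neg_herd_inf   <- tau * p_neg_animal_inf^n + (1 - tau) * sp^n
p_neg_herd_free  <- sp^n

lik_inf  <- prod(p_neg_herd_inf)   # P(data | country infected)
lik_free <- prod(p_neg_herd_free)  # P(data | country free)

post_free <- (1 - prior_inf) * lik_free /
             ((1 - prior_inf) * lik_free + prior_inf * lik_inf)
post_free  # posterior probability the country is free
```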

3.
Samples from livestock or food items are often submitted to microbiological analysis to determine whether or not the group (herd, flock or consignment) is shedding or is contaminated with a bacterial pathogen. This process is known as 'herd testing' and has traditionally involved subjecting each sample to a test on an individual basis. Alternatively, one or more pools can be formed by combining and mixing samples from individuals (animals or items), and each pool is then subjected to a test for the pathogen. I constructed a model to simulate the herd-level sensitivity of the individual-sample approach (HSe) and of the pooled-sample approach (HPSe) for detecting a pathogen. The two approaches are compared by calculating the relative sensitivity (RelHSe = HPSe/HSe). The model assumes that the microbiological procedures have 100% specificity. The new model accounts for the potential for HPSe and RelHSe to be reduced by the dilution of pathogen that occurs when contaminated samples are blended with pathogen-free samples. Key inputs include a probability distribution describing the concentration of the pathogen of interest in samples, characteristics of the pooled-test protocol, and a 'test-dose-response curve' that quantifies the relationship between the concentration of pathogen in the pool and the probability of detecting the target organism. The model also compares the per-herd cost of the pooled-sample and individual-sample approaches to herd testing. When applied to the example of Salmonella spp. in cattle feces, the model showed that a reduction in the assumed prevalence of shedding can cause a substantial fall in HPSe and RelHSe. However, these outputs are much less sensitive to changes in prevalence when the number of samples per pool is high, or when the number of pools per herd-test is high, or both. By manipulating the number of pools per herd and the number of samples per pool, HPSe can be optimized to suit the range of values of true prevalence of shedding of Salmonella that are likely to be encountered in the field.
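A Monte Carlo sketch of the dilution effect under assumed inputs (a lognormal concentration distribution and a logistic test-dose-response curve); the paper's actual distributions and protocol parameters differ:

```r
# All distributions and parameters below are assumed for illustration.
set.seed(1)
n_sim <- 5000
n_samples <- 20   # samples collected per herd test
pool_size <- 5    # samples per pool
prev <- 0.05      # true prevalence of shedding

# Assumed test-dose-response: P(detect) vs pathogen concentration
p_detect <- function(conc) plogis(2 * (log10(conc + 1e-12) - 2))

one_herd <- function() {
  shedding <- rbinom(n_samples, 1, prev)
  while (sum(shedding) == 0)            # condition on an infected herd
    shedding <- rbinom(n_samples, 1, prev)
  conc <- ifelse(shedding == 1, 10^rnorm(n_samples, mean = 3, sd = 1), 0)
  ind_pos <- any(rbinom(n_samples, 1, p_detect(conc)) == 1)
  pools <- split(conc, ceiling(seq_along(conc) / pool_size))
  pool_pos <- any(sapply(pools, function(x)
    rbinom(1, 1, p_detect(mean(x))) == 1))  # pooling dilutes: mean concentration
  c(ind = ind_pos, pool = pool_pos)
}
res <- replicate(n_sim, one_herd())
HSe <- mean(res["ind", ]); HPSe <- mean(res["pool", ])
c(HSe = HSe, HPSe = HPSe, RelHSe = HPSe / HSe)
```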

4.
Abstract

We provide the computer program code to estimate pathogen prevalence and calculate confidence intervals for estimates based on maximum likelihood methods for the open-source statistical and graphics package R and a commercially licensed statistical package, the Statistical Analysis System (SAS). We correct a previously published SAS program to allow use of newer versions of the SAS software and provide a second SAS program that will work in either version of SAS. All of the programs allow users to calculate prevalence from any number of pooled samples representing different numbers of individuals, and two of these programs allow users to make estimates from data pools that are entirely test-positive or test-negative.
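For the special case of equal-sized pools and a perfect test, the maximum likelihood estimate has a closed form; the sketch below (not the programs distributed with the paper) adds a profile likelihood-ratio confidence interval:

```r
# MLE of prevalence p from m pools of k individuals each, x pools positive,
# assuming a perfect test: P(pool positive) = 1 - (1 - p)^k, so the MLE is
# p_hat = 1 - (1 - x/m)^(1/k).
pool_mle <- function(x, m, k) 1 - (1 - x / m)^(1 / k)

# Profile likelihood-ratio confidence interval
pool_ci <- function(x, m, k, level = 0.95) {
  loglik <- function(p) {
    q <- pmin(pmax(1 - (1 - p)^k, 1e-12), 1 - 1e-12)  # guard log(0)
    dbinom(x, m, q, log = TRUE)
  }
  mle <- pool_mle(x, m, k)
  inside <- function(p) loglik(p) - (loglik(mle) - qchisq(level, 1) / 2)
  lo <- if (x == 0) 0 else uniroot(inside, c(1e-9, mle))$root
  hi <- if (x == m) 1 else uniroot(inside, c(mle, 1 - 1e-9))$root
  c(lower = lo, upper = hi)
}

pool_mle(3, 10, 5)  # 3 of 10 pools of 5 positive: p_hat ~ 0.069
pool_ci(3, 10, 5)
```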

5.
We used a simulation model to study the spatio-temporal dynamics of a potential rabies outbreak in an immunized fox population after the termination of a long-term, large-scale vaccination program with two campaigns per year, one in spring and one in autumn. The 'worst-case' scenario of rabies resurgence occurs if rabies has persisted at a low prevalence despite control and has remained undetected by a customary surveillance program, or if infected individuals invade the control area. Even though terminating a vaccination program entails such a risk of a subsequent new outbreak, prolonged vaccination of a wild host population is expensive, and the declining cost-benefit ratio over time eventually makes it uneconomic. Based on the knowledge of the spatio-temporal dynamics of a potential new outbreak gained from our modelling study, we suggest "terminating but observing" as an appropriate strategy. Simulating the decline of population immunity without revaccination, we found that a new outbreak of rabies should be detected by customary surveillance programs within two years after the termination of control. The time until detection does not depend on whether vaccination was terminated in the fourth, fifth or sixth year of repeated biannual campaigns, but detection is faster if the program was completed with an autumn campaign (because next-year dispersal then occurs after a noticeable decrease in population immunity). Finally, if a rabid fox is detected after terminating vaccination, we determine a rule for defining a circular hazard area based on the simulated spatial spread of rabies; the radius of this area should be increased with the time since the last vaccination campaign. We also characterized the trade-off between the number of foxes potentially missed by the emergency treatment and the cost of the emergency measures in an enlarged hazard area.

6.
We determined the impact of eliminating routine screening for Aeromonas salmonicida and Yersinia ruckeri on the efficacy of the Ontario Ministry of Natural Resources (OMNR) fish disease monitoring program, using Monte Carlo simulation. Because the main purpose of the program is to prevent transferring infected fish among OMNR hatcheries or to wild fish populations through stocking waterways, the hatchery-level negative predictive value (HNPV) was used as an indicator of monitoring efficacy. The present program (which includes both routine screening of asymptomatic hatchery fish and diagnostic testing of hatchery mortalities and clinically diseased fish) was confirmed to have a high median HNPV (0.999) for both study pathogens. Simulations suggested that if only diagnostic testing were continued (i.e. if no asymptomatic lots were screened) and all diseased lots tested negative for A. salmonicida and Y. ruckeri, the median probability that a hatchery would be pathogen-free would be 0.994 for both pathogens (with <5% probability that HNPV would be less than 0.953 and 0.957, respectively), indicating acceptable monitoring efficacy. However, limitations of the theoretical monitoring model must be considered.
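The HNPV itself is a one-line Bayes calculation once a prior probability of hatchery infection and a hatchery-level sensitivity are fixed; a sketch with assumed inputs (not OMNR program values):

```r
# HNPV = P(hatchery truly pathogen-free | all monitoring results negative).
# All inputs are assumed for illustration, not OMNR program values.
p_inf <- 0.05  # prior P(hatchery infected)
h_se  <- 0.90  # P(monitoring scheme detects | hatchery infected)
h_sp  <- 1.00  # bacterial culture assumed perfectly specific

HNPV <- ((1 - p_inf) * h_sp) /
        ((1 - p_inf) * h_sp + p_inf * (1 - h_se))
HNPV  # ~0.995 with these inputs
```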

7.
Abstract

Intense infections of the gill pathogen Dermocystidium salmonis were associated with mortality of prespawning chinook salmon Oncorhynchus tshawytscha in several Oregon rivers in 1988. The occurrence of the pathogen in returning adult chinook salmon was monitored in several coastal Oregon stocks from 1989 to 1993. Although the prevalence of the pathogen was high in these fish (up to 66.6%), infection intensities were generally low, and no mortality attributable to D. salmonis was observed. In 1988, the pathogen was associated with a lethal epizootic among juvenile chinook salmon smolts at the Trask State Fish Hatchery near Tillamook, Oregon. Histological examination of gills from heavily infected fish revealed hyperplasia of the gill epithelium and fusion of gill lamellae. When naturally infected smolts were transferred from fresh water to salt water, the most heavily infected fish died within 10 d, while in previously infected fish that survived, the number of D. salmonis cysts declined and disappeared after 21–42 d.

8.
Long-term Salmonella Dublin carrier animals harbor the pathogen in lymph nodes and internal organs, can periodically shed bacteria through feces or milk, and contribute to transmission of the pathogen within infected herds. Thus, it is of great interest to reduce the number of new carrier animals in cattle herds. An observational field study was performed to evaluate factors affecting the risk that dairy cattle become carrier animals after infection with Salmonella Dublin. Based on repeated sampling, cattle in 12 Danish dairy herds were categorized according to course of infection as either carriers (n = 157) or transiently infected (n = 87). The infection date for each animal was estimated from fecal excretion and antibody responses. The relationship between the course of infection (carrier versus transiently infected) and risk factors was analyzed using a random-effect multilevel, multivariable logistic regression model. The animals with the highest risk of becoming carriers were heifers infected between the age of 1 year and first calving, and cows infected around the time of calving. The risk was higher in the first two quarters of the year (late winter to spring), and when the prevalence of potential shedders in the herd was low. The risk also varied between herds; the herds with the highest risk of carrier development were those with clinical disease outbreaks during the study period. These findings are useful for future control strategies against Salmonella Dublin, because they show the importance of optimized calving management and management of heifers, and because they show that even when the herd prevalence is low, carriers are still being produced. The results raise new questions about the development of the carrier state in cattle after infection with low doses of Salmonella Dublin.
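The analysis described is a random-intercept logistic regression; a hedged sketch with lme4, where the data frame and all variable names (dat, carrier, age_group, quarter, shedder_prev, herd) are invented stand-ins for the study's covariates:

```r
library(lme4)
set.seed(7)
# Simulated stand-in data for the study's 244 infected cattle (157 + 87)
n <- 244
dat <- data.frame(
  herd         = factor(sample(1:12, n, replace = TRUE)),
  age_group    = factor(sample(c("calf", "young_stock", "heifer", "cow"),
                               n, replace = TRUE)),
  quarter      = factor(sample(1:4, n, replace = TRUE)),
  shedder_prev = runif(n, 0, 0.3)   # herd prevalence of potential shedders
)
dat$carrier <- rbinom(n, 1, plogis(-1 - 3 * dat$shedder_prev))

# Random-intercept logistic regression: herd as the grouping level
fit <- glmer(carrier ~ age_group + quarter + shedder_prev + (1 | herd),
             data = dat, family = binomial)
exp(fixef(fit))  # odds ratios for the fixed effects
```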

9.
A spreadsheet program was written to perform decision tree analysis for control of paratuberculosis (Johne's disease) when testing all adults in a herd and culling all animals with positive test results. The program incorporated diagnostic test sensitivity, specificity, and test cost, together with the cost or value of each of the 4 possible outcomes: true-positive, true-negative, false-positive, and false-negative test results. The program was designed to repeat the analysis for the independent variable, pretest paratuberculosis prevalence (0 to 100%). Model output was graphed as profit or loss in dollars vs pretest prevalence. The threshold was defined as the pretest prevalence at which benefit-cost equaled zero. Reed-Frost disease modeling techniques were used to predict the number of Mycobacterium paratuberculosis-infected replacement heifers resulting from infected cows during a control program. Sensitivity analysis was performed on the variables of the decision tree model: test sensitivity, specificity, test cost, and factors affecting the cost of paratuberculosis to a commercial dairy. A test-and-cull program was profitable when paratuberculosis caused a decrease in milk production of 6% or more, provided the pretest prevalence was greater than 6%, test sensitivity was 50%, test specificity was 98%, and the testing cost was $4/cow. Test specificities greater than 98% did not markedly affect the threshold for tests with 50% sensitivity costing $4/cow, and test sensitivity had minimal effect on the threshold. Using a diagnostic test with 50% sensitivity and 98% specificity as an example, test cost was shown to affect the threshold prevalence at which the test-and-cull program became profitable.
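The decision-tree arithmetic can be sketched in a few lines. Sensitivity (50%), specificity (98%) and test cost ($4/cow) are taken from the abstract's example; the four outcome values are assumptions, chosen here so that the break-even threshold lands near the reported 6% pretest prevalence:

```r
# Expected per-cow profit of a test-and-cull program vs pretest prevalence.
se <- 0.50; sp <- 0.98; test_cost <- 4  # from the abstract's example
v_tp <- 400   # value of culling a truly infected cow (assumed)
v_tn <- 0     # true negative: no action (assumed)
v_fp <- -300  # cost of culling a healthy cow (assumed)
v_fn <- -100  # cost of retaining an undetected infected cow (assumed)

profit <- function(prev)
  prev * se * v_tp + prev * (1 - se) * v_fn +
  (1 - prev) * sp * v_tn + (1 - prev) * (1 - sp) * v_fp - test_cost

prev_grid <- seq(0, 0.5, by = 0.01)
plot(prev_grid, profit(prev_grid), type = "l",
     xlab = "Pretest prevalence", ylab = "Profit per cow (USD)")
uniroot(profit, c(0.001, 0.999))$root  # break-even threshold, ~0.064 here
```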

10.
Tick-borne diseases are of increasing concern in many countries, particularly as a consequence of changes in land use and climate. Ticks are vectors of numerous pathogens (viruses, bacteria, protozoa) that can be harmful to humans and animals. In the context of animal health, bovine babesiosis poses a recurrent threat to cattle herds. In this study, we use a modeling approach to investigate the spread of babesiosis and evaluate control measures. A previously developed tick population dynamics model (here, Ixodes ricinus) is coupled with a pathogen spread model (here, the protozoan Babesia divergens), which describes pathogen spread in a dairy herd through the following processes: transmission, acquisition, transovarial transmission, transstadial persistence, and clearance of the pathogen. An assessment of the simulated B. divergens prevalence levels in ticks and cattle in the context of existing knowledge and data suggested that the model provides a realistic representation of pathogen spread. The model was then used to evaluate the influence of host density and the effect of acaricides on B. divergens prevalence in cattle. Increasing deer density results in an increase in prevalence in cattle, whereas increasing the cattle stocking rate results in a slight decrease. A potential increase in deer density would thus have an amplifying effect on disease spread due to the increase in the number of infected ticks. Regular use of acaricides produces a reduction in pathogen prevalence in cattle. This model could be adapted to other tick-borne diseases.

11.
International trade of livestock and livestock products poses a significant potential threat for the spread of diseases, and importing countries therefore often require that imported animals and products are free from certain pathogens. However, absolute freedom from infection cannot be documented, since all test protocols are imperfect and can lead to false-negative results. It is possible instead to estimate the "probability of freedom from infection" and its opposite, the probability of infection despite a negative test result. These probabilities can be estimated based on a pre-defined target prevalence, known surveillance efforts in the target population and known test characteristics of any pre-export test. Here, calculations are demonstrated using the example of bovine herpesvirus-1 (BoHV-1). In a population that recently became free of BoHV-1 without using vaccination, the probability that an animal randomly selected for trade is infected is 800 per 1 million; this is reduced to 64 (95% probability interval [PI] 6-161) per 1 million when the animal tests negative prior to export with a gB-ELISA. In a population that recently became free of BoHV-1 using vaccination, the probability that an animal randomly selected for trade is infected is 200 per 1 million, and this can be reduced to 63 (95% PI 42-87) per 1 million when the animal tests negative prior to export with a gE-ELISA. Similar estimations can be made at the herd level when assumptions are made about the herd size and the intensity of the surveillance efforts. Subsequently, the overall probability for an importing country of importing at least 1 infected animal can be assessed by taking into account the trade volume. Definition of the acceptable level of risk, including the probability of false-negative results occurring, is part of risk management. Internationally harmonized target prevalence levels for the declaration of freedom from infection from selected pathogens would contribute significantly to the facilitation of international trade of livestock and livestock products by allowing exporting countries to design tailor-made, output-based surveillance programs while providing equivalent guarantees regarding the probability of freedom from infection of the population. Combining this with an approach to assess the overall probability of introducing at least 1 infected animal into an importing country during a defined time interval will help importing countries to achieve their desired level of acceptable risk and will help to assess the equivalence of animal health and food safety standards between trading partners.
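The per-animal figures quoted above follow from Bayes' rule. The sketch below reproduces the unvaccinated-population example; the 800-per-million design prevalence is from the abstract, while the gB-ELISA sensitivity and specificity are assumed illustrative values:

```r
# P(infected | negative pre-export test) via Bayes' rule.
p  <- 800e-6  # P(animal selected for trade is infected), from the abstract
se <- 0.92    # assumed test sensitivity
sp <- 1.00    # assumed test specificity

p_inf_neg <- p * (1 - se) / (p * (1 - se) + (1 - p) * sp)
p_inf_neg * 1e6  # ~64 per million, matching the abstract's point estimate
```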

12.
Abstract

We used maximum likelihood methods to estimate the observed or apparent prevalence of a pathogen in pooled samples of fish, and here provide the program code for such calculations using commonly available statistical software. To illustrate the characteristics and variability of prevalence estimates from pooled samples, we explored the relationships among pathogen prevalence, sample size, and method of pooling samples. We calculated the average width of confidence intervals and the mean square error of the prevalence estimator for samples from populations with pathogen prevalence ranging from 1% to 90%, using several pooling strategies for samples of 30 and 60 fish. As an illustration, we calculated the confidence interval and apparent prevalence of Myxobolus cerebralis in samples of fish from Utah screened with pooled sampling strategies. When all pools were positive, the apparent prevalence was 100%, but the bounds of the confidence interval ranged from 8% to 100%. Interpretations of data sets based only on the results for positive pools may be misleading: when any single pool scores negative, the percentage of pools that is positive is higher than the maximum likelihood estimate of apparent prevalence. The confidence intervals bounding estimates were generally smaller when larger numbers of pools were used with few fish per pool. In populations with higher prevalence, the use of pooled samples significantly enlarges the confidence interval of estimates.
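One simple alternative to the maximum likelihood programs (not what the paper distributes) is an exact binomial confidence interval on the pool-level positive probability, back-transformed to the fish level; it illustrates the same all-pools-positive behaviour:

```r
# With m pools of k fish and x positive pools (perfect test assumed),
# P(pool positive) = 1 - (1 - p)^k, so p = 1 - (1 - q)^(1/k).
# An exact (Clopper-Pearson) CI on q back-transforms to a CI on p.
pooled_prev_ci <- function(x, m, k, conf = 0.95) {
  q_ci <- binom.test(x, m, conf.level = conf)$conf.int
  1 - (1 - q_ci)^(1 / k)
}
pooled_prev_ci(6, 6, 5)  # all pools positive: point estimate is 100%,
                         # yet the lower bound is only ~14%
```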

13.
Testing of pooled samples has been proposed as a low-cost alternative for diagnostic screening and surveillance for infectious agents in situations where the prevalence of infection is low and most samples can be expected to test negative. The present study extends our previous work in pooled-sample testing (PST) to evaluate the effects of the following factors on the overall PST sensitivity (SEk) and specificity (SPk): dilution (pool size), cross-contamination, and cross-reaction. A probabilistic model, in conjunction with Monte Carlo simulations, was used to calculate SEk and SPk, as applied to the detection of animals persistently infected (PI) with bovine viral diarrhea virus (BVDV) using RT-PCR. For an average BVDV PI prevalence of 0.01 and a viremia in each animal of between 10^2 and 10^7 virus particles/mL, the pool size associated with the lowest number of tests, and lowest cost, corresponded to eight samples/pool. However, the least-cost pool size (lowest number of tests) was associated with a SEk of 0.90 (0.75–1), which corresponded to a decrease of 0.04 relative to the assay sensitivity for a single sample. The SPk for the same pool size, considering the effect of detection of BVDV acutely infected animals and cross-contamination as sources of false-positive results, was 0.90 (0.85–0.95). The effect of a hypothetical cross-reacting agent was to markedly decrease SPk, especially as the prevalence of the cross-reacting agent increased. For a pool size of eight samples and a cross-reacting agent prevalence of 0.3, SPk ranged from 0.67 to 0.86, depending on the probability that the assay would detect the cross-reacting agent. The methods presented offer a means of evaluating and understanding the various factors that can influence the overall accuracy of PST procedures.
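The cost side of choosing a pool size can be sketched with the classic two-stage Dorfman calculation (test pools, then retest members of positive pools). Note that the paper's model additionally penalizes sensitivity for dilution, which is omitted here, and the least-cost pool size below need not match the paper's figure of eight:

```r
# Expected tests per herd under two-stage (Dorfman) pooling, ignoring
# dilution: test n/k pools of size k, then retest all k members of each
# positive pool individually. p is the animal-level prevalence.
expected_tests <- function(n, k, p) {
  m <- n / k
  m + m * (1 - (1 - p)^k) * k
}
n <- 96; p <- 0.01
ks <- c(2, 4, 6, 8, 12, 16)
setNames(round(sapply(ks, expected_tests, n = n, p = p), 1), ks)
# versus n = 96 tests for the individual-sample approach
```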

14.
Chronic wasting disease (CWD) is a fatal disease of North American cervids that was first detected in a wild, hunter-shot deer in Saskatchewan along the border with Alberta in Canada in 2000. Spatially explicit models for assessing factors affecting disease detection are needed to guide surveillance and control programs. Spatio-temporal patterns in CWD prevalence can be complicated by variation in individual infection probability and sampling biases. We assessed hunter harvest data of mule deer (Odocoileus hemionus) and white-tailed deer (Odocoileus virginianus) during the early phases of an outbreak in Saskatchewan (i.e., 2002-2007) for targeting the detection of CWD by defining (1) where to look, and (2) how much effort to use. First, we accounted for known demographic heterogeneities in infection to model the probability, P(E), that a harvested deer was infected with CWD given characteristics of the harvest location. Second, in areas where infected deer were harvested, we modelled the probability, P(D), of the hunter harvest re-detecting CWD within sample units of varying size (9-54 km²) given the demographics of harvested deer and the time since first detection in the study area. Heterogeneities in host infection were consistent with those reported elsewhere: infection probability was 3.7 times higher in mule deer than in white-tailed deer, 1.8 times higher in males than in females, and quadratically related to age in both sexes. P(E) increased with the number of years since the first detection in our study area (2002) and with proximity to known disease sources, and also varied with distance to the South Saskatchewan River and small creek drainages, terrain ruggedness, and the extent of agricultural land within a 3 km radius of the harvest. The majority (75%) of new CWD-positive deer in our sample were found within 20 km of infected deer harvested in the previous year, while approximately 10% were found more than 40 km away. P(D) modelled at 18 km² was best supported, but at all scales P(D) depended on the number of harvested deer and the time since the first infected deer was harvested. Within an 18 km² sampling unit, there was an 80% probability of detecting a CWD-positive deer with 16 harvested deer five years after the initial infected harvest. Identifying where and how much to sample can improve hunter-harvest-based targeted surveillance programs early in a CWD outbreak.
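The sample-size side of P(D) behaves like a simple binomial detection calculation. Assuming each harvested deer independently yields a detected positive with probability q (an assumption, not the paper's spatial model), a per-deer q of roughly 0.10 reproduces the abstract's figure of 80% detection with 16 harvested deer:

```r
# P(>=1 detected positive among n harvested deer), each independently
# positive-and-detected with probability q (independence assumed).
p_detect <- function(n, q) 1 - (1 - q)^n
p_detect(16, 0.096)                     # ~0.80, matching the abstract
ceiling(log(1 - 0.8) / log(1 - 0.096))  # harvest needed for 80%: 16 deer
```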

15.
We developed the BSurvE spreadsheet model to estimate the true prevalence of bovine spongiform encephalopathy (BSE) in a national cattle population, and to evaluate national BSE surveillance programs. BSurvE uses BSE surveillance data and demographic information about the national cattle population. The proportion of each cohort infected with BSE is found by equating the observed number of infected animals with the number expected, following a series of probability calculations and assuming a binomial distribution for the number of infected animals detected in each surveillance stream. BSurvE has been used in a series of international workshops, where analysis of national datasets demonstrated patterns of cohort infection that were consistent with infection-control activities within each country. The results also reflected the timing of known events that carried a high risk of introducing the infectious agent.

16.
Balanced autosomal translocations are a known cause of repeated early embryonic loss (REEL) in horses. In most cases, carriers of such translocations are phenotypically normal, but the chromosomal aberration negatively affects gametogenesis, giving rise to both genetically balanced and unbalanced gametes. The latter, if involved in fertilization, result in REEL, whereas gametes with the balanced form of the translocation will pass the defect to the next generation. Therefore, in order to reduce the incidence of REEL, identification of translocation carriers is critical. Here, we report on a phenotypically normal 3-year-old Arabian mare that had repeated resorption of conceptuses prior to day 45 of gestation and was diagnosed with REEL. Conventional and molecular cytogenetic analyses revealed that the mare had a normal chromosome number (64,XX) but carried a non-mosaic and non-reciprocal autosomal translocation t(4;10)(q21;p15). This is a novel translocation in horses with REEL and the first such report in Arabians; previous cases of REEL due to autosomal translocations have exclusively involved Thoroughbreds. The findings underscore the importance of routine cytogenetic screening of breeding animals.

17.

Background

In recent years, the occurrence and relevance of Mycoplasma hyopneumoniae infections in suckling pigs have been examined in several studies. Whereas most of these studies focused solely on prevalence estimation within different age groups, follow-up of infected piglets or assessment of pathological findings, none included a detailed analysis of individual and environmental risk factors. Therefore, the aim of the present study was to investigate the frequency of M. hyopneumoniae infections in suckling pigs of endemically infected herds and to identify individual risk factors potentially influencing the infection status of suckling pigs at the age of weaning.

Results

The animal-level prevalence of M. hyopneumoniae infections in suckling pigs examined in three conventional pig breeding herds was 3.6% (41/1127) at the time of weaning. A prevalence of 1.2% was found in the same pigs at the end of their nursery period. In a multivariable Poisson regression model, the incidence rate ratio (IRR) for suckling pigs was significantly lower than 1 when teeth grinding was conducted (IRR: 0.10). Moreover, high temperatures in the piglet nest during the first two weeks of life (occasionally >40°C) were associated with a decrease in the probability of infection (IRR: 0.23-0.40). In contrast, the application of PCV2 vaccines to piglets was associated with an increased infection risk (IRR: 9.72).

Conclusions

Since single infected piglets are thought to initiate transmission of this pathogen among nursery and fattening pigs, eliminating the risk factors described in this study should help to reduce the incidence rate of M. hyopneumoniae infections and might thereby reduce the probability of high prevalences in older pigs.
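A hedged sketch of the kind of Poisson regression that produces such IRRs, on simulated stand-in data; the data frame and variable names (piglets, infected, teeth_grinding, nest_temp_high, pcv2_vacc) are invented, and the coefficients are chosen only to echo the magnitudes reported above:

```r
set.seed(3)
# Simulated stand-in data: 1127 piglets, binary covariates
n <- 1127
piglets <- data.frame(
  teeth_grinding = rbinom(n, 1, 0.5),
  nest_temp_high = rbinom(n, 1, 0.5),
  pcv2_vacc      = rbinom(n, 1, 0.5)
)
lp <- with(piglets, log(0.03) - 2.3 * teeth_grinding -
                    1.2 * nest_temp_high + 2.3 * pcv2_vacc)
piglets$infected <- rbinom(n, 1, pmin(exp(lp), 1))

# Poisson regression on a binary outcome estimates rate ratios directly as
# exponentiated coefficients (robust SEs are often added in practice).
fit <- glm(infected ~ teeth_grinding + nest_temp_high + pcv2_vacc,
           data = piglets, family = poisson)
round(exp(coef(fit)), 2)  # IRRs near 0.10, 0.30 and 10 by construction
```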

18.
The national control programme for Salmonella in Danish swine herds introduced in 1993 has led to a large decrease in pork-associated human cases of salmonellosis. The pork industry is increasingly focused on the cost-effectiveness of surveillance while maintaining consumer confidence in the pork food supply. Using national control programme data from 2003 and 2004, we developed a zero-inflated binomial model to predict which farms were most at risk of Salmonella. We preferentially sampled these high-risk farms using two sampling schemes based on model predictions resulting from a farm's covariate pattern and its random effect. Zero-inflated binomial modelling allows assessment of similarities and differences between factors that affect herd infection status (introduction) and those that affect the seroprevalence in infected herds (persistence and spread). Both large herds (producing more than 5000 pigs per annum) and small herds (producing fewer than 2000 pigs per annum) were at significantly higher risk of infection and subsequent seroprevalence than medium-sized herds (producing between 2000 and 5000 pigs per annum). Compared with herds located elsewhere, location in the south of Jutland significantly decreased the risk of herd infection but increased the risk of a pig from an infected herd being seropositive. The model suggested that many of the herds where Salmonella was not detected were infected, but at a low prevalence. Using cost and sensitivity, we compared the results of our model-based sampling schemes with those under the standard sampling scheme, based on herd size, and the recently introduced risk-based approach. Model-based results were less sensitive but showed significant cost savings. Further model refinements, sampling schemes and the methods to evaluate their performance are important areas for future work, and these should continue to be developed in direct consultation with Danish authorities.
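A minimal zero-inflated binomial likelihood, fitted by direct optimization on simulated data; the covariates and random effects of the paper's model are omitted, and all names and values below are illustrative assumptions:

```r
# Zero-inflated binomial: with probability pi a herd is structurally
# Salmonella-free (always zero seropositives); otherwise the y positives
# among n sampled pigs follow Binomial(n, p).
zib_nll <- function(par, y, n) {
  pi_ <- plogis(par[1]); p <- plogis(par[2])
  ll <- ifelse(y == 0,
               log(pi_ + (1 - pi_) * (1 - p)^n),
               log(1 - pi_) + dbinom(y, n, p, log = TRUE))
  -sum(ll)
}

# Simulated stand-in data: 200 herds, 10 pigs sampled per herd
set.seed(42)
n <- rep(10, 200)
infected <- rbinom(200, 1, 0.6)              # 40% of herds never infected
y <- rbinom(200, n, prob = infected * 0.15)  # 15% seroprevalence if infected

fit <- optim(c(0, 0), zib_nll, y = y, n = n)
plogis(fit$par)  # estimates of (pi, p): close to (0.4, 0.15)
```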

19.
Certification-and-monitoring programs for paratuberculosis are based on repetitive herd testing to establish a herd's health status. The available tests have poor sensitivity, so infected but undetected herds may remain among certified "paratuberculosis-free" herds. The objective was to determine whether truly free herds acquire a certified status and keep it over time when infected but undetected herds remain. The Dutch program was used as a basis to construct a mechanistic deterministic model of the evolution, over 25 years, of the number of herds per health status. Three health states for herds were defined: not detected as infected in the certification process to obtain a free status; not detected as infected by any of the repetitive tests for monitoring the certified free status; and detected as infected. Among undetected herds, two types were defined: truly free versus undetected but infected. Transitions between states were due to the purchase of an infected animal, infection via the environment, clearance via culling or sales, detection of an infected animal, and certification. A sensitivity analysis was carried out. We showed that, for a 100% specific test only, most of the truly free herds at the beginning of the program obtained a certified free status and kept it over time. Most infected herds were either detected as infected or cleared. The number of certified truly free herds increased with a decrease in the animal-level prevalence or in the risk of purchasing an infected animal, for example by restricting purchases to cattle from herds at the highest level of certification.
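The mechanistic deterministic core of such a model is an annual state-transition iteration; the sketch below uses invented transition probabilities and initial counts (with no false positives, consistent with a 100% specific test), not the Dutch program's parameters:

```r
# Annual state-transition iteration over herd counts. All transition
# probabilities and initial counts are assumed placeholders; the
# free -> detected probability is 0 because the test is 100% specific.
states <- c("certified_free", "infected_undetected", "detected")
P <- matrix(c(0.97, 0.03, 0.00,   # free: stays free or acquires infection
              0.10, 0.60, 0.30,   # infected: cleared, persists, or detected
              0.50, 0.00, 0.50),  # detected: cleaned up and recertified, or not
            nrow = 3, byrow = TRUE, dimnames = list(states, states))
stopifnot(all(abs(rowSums(P) - 1) < 1e-9))

herds <- c(9000, 800, 200)  # initial herd counts per state (assumed)
for (year in 1:25) herds <- as.vector(herds %*% P)
round(setNames(herds, states))  # herd counts per state after 25 years
```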

20.
Heifers (n=136) from 5 herds were treated with a commercially available beta-lactam intramammary (IMM) antibiotic preparation containing cephapirin sodium at 10-21 d prior to anticipated parturition, to evaluate the risk of antibiotic residues occurring in milk postpartum and to determine factors associated with antibiotic residues and IMM pathogen presence in milk postpartum. Mammary secretions collected from quarters before antibiotic administration and during weeks 1, 2 and 3 postpartum were analyzed for mastitis pathogens. Composite milk was collected at milkings 3, 6 and 10 postpartum and analyzed for beta-lactam residues using a microbial inhibition antibiotic residue screening test. Antibiotic residues were confirmed by beta-lactamase treatment and re-testing. Residues were detected in 28.0%, 8.82% and 3.68% of milk samples obtained at the third, sixth, and tenth milking postpartum, respectively. A longer interval between prepartum antibiotic therapy and parturition and a longer postpartum interval to sampling were associated with a decreased risk of antibiotic residues. The presence of antibiotic residues in milk at the third milking was associated with a reduced risk of IMM pathogen presence in the first 21 d postpartum. Lower somatic cell counts, higher mean milk yield over 200 days in milk and reduced IMM pathogen prevalence were associated with the presence of an antibiotic in milk postpartum. Screening for antibiotic residues in milk postpartum following prepartum antibiotic therapy in heifers is recommended to reduce the risk of antibiotic residue contamination of milk.

