Similar Literature (20 documents)
1.
Mixed discrete and continuous outcomes are commonly measured on each experimental unit in dose-response studies in toxicology. The dose-response relationships for these outcomes often have dose thresholds and nonlinear patterns. In addition, the endpoints are typically correlated, and a statistical analysis that incorporates the association may result in improved precision. We propose an extension of the generalized estimating equation (GEE) methodology to simultaneously analyze binary, count, and continuous outcomes with nonlinear threshold models that incorporates the intra-subject correlation. The methodology uses a quasi-likelihood framework and a working correlation matrix, and is appropriate when the marginal expectation of each outcome is of primary interest and the correlation between endpoints is a nuisance parameter. Because the derivatives of threshold models are not continuous at each point of the parameter space, we describe the necessary modifications that result in asymptotically normal and consistent estimators. Using dose-response data from a neurotoxicity experiment, the methodology is illustrated by analyzing five outcomes of mixed type with nonlinear threshold models. In this example, the incorporation of the intra-subject correlation resulted in decreased standard errors for the threshold parameters.
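The GEE machinery this abstract relies on can be sketched for a single binary endpoint. The following is a minimal hand-rolled illustration on simulated litter data, not the authors' multi-outcome threshold implementation; the dose-response coefficients, cluster sizes, and exchangeable working correlation are all assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated clustered dose-response data: 40 litters of 5 binary outcomes,
# with intra-litter correlation induced by a shared random effect.
n_clusters, m = 40, 5
beta_true = np.array([-1.0, 0.8])
dose = rng.uniform(0, 4, n_clusters)
X = [np.column_stack([np.ones(m), np.full(m, d)]) for d in dose]
Y = []
for d in dose:
    b = rng.normal(0, 0.5)  # shared litter effect
    p = 1 / (1 + np.exp(-(beta_true[0] + beta_true[1] * d + b)))
    Y.append(rng.binomial(1, p, m))

def gee_logistic(X, Y, n_iter=25):
    """GEE for a binary endpoint with an exchangeable working correlation.

    The marginal mean is the target; the correlation is a nuisance
    parameter estimated by moments, as in the quasi-likelihood setup.
    """
    beta = np.zeros(X[0].shape[1])
    alpha = 0.0
    for _ in range(n_iter):
        # moment estimate of the exchangeable correlation from Pearson residuals
        prods = []
        for Xi, yi in zip(X, Y):
            mu = 1 / (1 + np.exp(-Xi @ beta))
            r = (yi - mu) / np.sqrt(mu * (1 - mu))
            prods.extend(r[j] * r[k] for j in range(m) for k in range(j + 1, m))
        alpha = float(np.clip(np.mean(prods), 0.0, 0.9))
        R = np.full((m, m), alpha) + (1 - alpha) * np.eye(m)
        U = np.zeros_like(beta)
        H = np.zeros((beta.size, beta.size))
        for Xi, yi in zip(X, Y):
            mu = 1 / (1 + np.exp(-Xi @ beta))
            A = np.diag(mu * (1 - mu))
            D = A @ Xi  # derivative of the mean w.r.t. beta
            V_inv = np.linalg.inv(np.sqrt(A) @ R @ np.sqrt(A))
            U += D.T @ V_inv @ (yi - mu)
            H += D.T @ V_inv @ D
        beta = beta + np.linalg.solve(H, U)  # Fisher-scoring update
    return beta, alpha

beta_hat, alpha_hat = gee_logistic(X, Y)
```

Extending this to mixed binary/count/continuous endpoints stacks one such estimating equation per outcome with an enlarged working correlation matrix.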

2.
Measurements of both continuous and discrete outcomes are encountered in many statistical problems. Here we consider the particular context of teratology studies, where quantitative risk assessment is aimed at determining the effect of dose on the probability that an individual fetus is malformed or of low birth weight, both being important measures of teratogenicity. We will introduce two different joint marginal mean models for outcomes of a mixed nature. First, we propose the Plackett-Dale approach, where for each binary outcome it is assumed that there exists an underlying latent variable. The latent malformation outcomes are then assumed to follow a Plackett distribution. The second approach we consider is a probit approach. Here it is assumed that there exists an underlying continuous variable for each binary outcome, so the joint distribution for weight and malformation can be assumed to follow a multivariate normal distribution. In both cases, specification of the full distribution will be avoided using pseudolikelihood and generalized estimating equations methodology, respectively. Quantitative risk assessment is illustrated using data from two developmental toxicology experiments.
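The probit branch of this approach treats each binary endpoint as a thresholded latent normal, so joint risks reduce to bivariate normal probabilities. A rough sketch, with hypothetical dose coefficients, cutoffs, and latent correlation:

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

def joint_risk(dose, w_cut=-1.0, z_cut=1.0, rho=0.4):
    """Probit-style joint risk for a continuous endpoint (fetal weight)
    and a binary endpoint (malformation) via correlated latent normals.

    The dose-effect coefficients, cutoffs, and correlation are hypothetical.
    """
    mu_w = -0.3 * dose            # mean standardized fetal weight
    mu_z = -1.0 + 0.6 * dose      # mean latent malformation propensity
    p_low = norm.cdf(w_cut, loc=mu_w)        # P(low birth weight)
    p_mal = 1 - norm.cdf(z_cut, loc=mu_z)    # P(malformed)
    joint = multivariate_normal(mean=[mu_w, mu_z],
                                cov=[[1.0, rho], [rho, 1.0]])
    # P(W < w_cut, Z > z_cut) = P(W < w_cut) - P(W < w_cut, Z < z_cut)
    p_both = p_low - joint.cdf([w_cut, z_cut])
    return p_low, p_mal, float(p_both)
```

A safe-dose calculation would invert curves like these to find the dose at which the joint risk stays below an acceptable level.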

3.
Classical optimal design theory may produce experimental designs that are biologically or characteristically inappropriate. Often, there is a particular study goal along with many practical experimental concerns that a researcher may wish to include in the optimal design process. This article provides a technique that allows a researcher to incorporate desired experimental characteristics into an adjusted optimal design criterion. This technique uses a weighted overall desirability function to penalize the optimal design criterion. A researcher may define an overall desirability function using any number of individual desirability functions to influence the properties of an optimal experimental design. The methodology is illustrated with two dose-response examples.

4.
Many disciplines conduct studies in which the primary objectives depend on inference based on a nonlinear relationship between the treatment and response. In particular, interest often focuses on calibration—that is, estimating the best treatment level to achieve a particular result. Often, data for such calibration come from experiments with split-plots or other features that result in multiple error terms or other nontrivial error structures. One such example is the time-of-weed-removal study in weed science, designed to estimate the critical period of weed control. Calibration, or inverse prediction, is not a trivial problem with simple linear regression, and the complexities of experiments such as the time-of-weed-removal study further complicate the procedure. In this article, we extend existing calibration techniques to nonlinear mixed effects models, and illustrate the procedure using data from a time-of-weed-removal study.
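Calibration (inverse prediction) in a nonlinear model can be sketched as: fit the curve, then numerically invert the fitted function at the target response. The logistic yield-loss curve, parameter values, and noise level below are illustrative assumptions, not the mixed-model formulation of the article:

```python
import numpy as np
from scipy.optimize import brentq, curve_fit

def logistic(t, a, b, c):
    """Hypothetical yield loss (%) vs. time of weed removal (weeks)."""
    return a / (1 + np.exp(-b * (t - c)))

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 30)                       # weeks after crop emergence
y = logistic(t, 100.0, 1.2, 5.0) + rng.normal(0, 2.0, t.size)

popt, pcov = curve_fit(logistic, t, y, p0=[90.0, 1.0, 4.0])

def calibrate(target):
    """Inverse prediction: the time at which predicted loss equals `target`."""
    return brentq(lambda x: logistic(x, *popt) - target, t.min(), t.max())

critical_time = calibrate(5.0)   # e.g., a 5% acceptable yield-loss threshold
```

The mixed-model extension in the article additionally propagates the multiple error terms of split-plot designs into the uncertainty of `critical_time`.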

5.
Aboveground biomass estimation in short-rotation forestry plantations is an essential step in the development of crop management strategies, as well as allowing the economic viability of the crop to be determined prior to harvesting. Hence, it is important to develop new methodologies that improve the accuracy of predictions using only a minimum set of easily obtainable information, i.e., diameter and height. Many existing models base their predictions only on diameter (mainly due to the complexity of including further covariates), or rely on complicated equations to obtain biomass predictions. However, in tree species it is important to include height when estimating aboveground biomass, because this will vary from one genotype to another. This work proposes a more flexible and easy-to-implement model for predicting aboveground biomass (stem, branches, and total) as a smooth function of height and diameter, using smooth additive mixed models that preserve the additive property necessary to model the relationship within wood fractions and allow the inclusion of random effects and interaction terms. The model is applied to the analysis of three trials carried out in Spain, where nine clones at three different sites are compared. Also, an analysis of slash pine data is carried out in order to compare with the approach proposed by Parresol (Can J For Res 31:865–878, 2001). Supplementary materials accompanying this paper appear on-line.

6.
The literature was reviewed regarding laboratory incubation studies where C mineralization was measured. Experiments were selected in which the same substrate was incubated at least at two different temperatures and where time-series were available with at least four measurements for each substrate and temperature. A first-order one-component model and a parallel first-order two-component model were fitted to the CO2–C evolution data in each experiment using a least-squares procedure. After normalising for a reference temperature, the temperature coefficient (Q10) function and three other temperature response functions were fitted to the estimated rate constants. The two-component model could describe the dynamics of the 25 experiments much more adequately than the one-component model (higher R², adjusted for the number of parameters), even when the rate constants for both were assumed to be equally affected by temperature. The goodness-of-fit did not differ between the temperature response models, but was affected by the choice of the reference temperature. For the whole data set, a Q10 of 2 was found to be adequate for describing the temperature dependence of decomposition in the intermediate temperature range (about 5–35 °C). However, for individual experiments, Q10 values deviated greatly from 2. At least at temperatures below 5 °C, functions not based on Q10 are probably more adequate. However, due to the paucity of data from low-temperature incubations, this conclusion is only tentative, and more experimental work is called for. Received: 1 December 1997
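The parallel two-pool model and the Q10 response can be written down directly. The sketch below simulates incubations at two temperatures from assumed pool sizes and rates, fits the two-component model at each temperature, and recovers Q10 from the ratio of the fast-pool rate constants; all numbers are illustrative, not values from the review.

```python
import numpy as np
from scipy.optimize import curve_fit

def q10_rate(k_ref, q10, T, T_ref=25.0):
    """First-order rate constant at temperature T under a Q10 response."""
    return k_ref * q10 ** ((T - T_ref) / 10.0)

def two_pool(t, C1, k1, C2, k2):
    """Cumulative CO2-C evolved from two parallel first-order pools."""
    return C1 * (1 - np.exp(-k1 * t)) + C2 * (1 - np.exp(-k2 * t))

# simulate incubations at two temperatures; both pools share the same Q10
t = np.linspace(0, 120, 25)  # days
true = dict(C1=50.0, k1_ref=0.05, C2=450.0, k2_ref=0.002, q10=2.0)
rng = np.random.default_rng(2)
data = {}
for T in (15.0, 25.0):
    k1 = q10_rate(true["k1_ref"], true["q10"], T)
    k2 = q10_rate(true["k2_ref"], true["q10"], T)
    data[T] = two_pool(t, true["C1"], k1, true["C2"], k2) + rng.normal(0, 1.0, t.size)

# fit each temperature separately, then recover Q10 from the fast-pool rates
k1_hat = {}
for T, y in data.items():
    popt, _ = curve_fit(two_pool, t, y, p0=[40, 0.03, 400, 0.001], maxfev=20000)
    k1_hat[T] = popt[1]
q10_hat = (k1_hat[25.0] / k1_hat[15.0]) ** (10.0 / (25.0 - 15.0))
```

In the review the fit is done jointly across temperatures with a chosen reference temperature; the per-temperature fit here just makes the Q10 recovery explicit.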

7.
Mixed models have become important in analyzing the results of experiments, particularly those that require more complicated models (e.g., those that involve longitudinal data). This article describes a method for deriving the terms in a mixed model. Our approach extends an earlier method by Brien and Bailey to explicitly identify terms for which autocorrelation and smooth trend arising from longitudinal observations need to be incorporated in the model. At the same time we retain the principle that the model used should include, at least, all the terms that are justified by the randomization. This is done by dividing the factors into sets, called tiers, based on the randomization and determining the crossing and nesting relationships between factors. The method is applied to formulate mixed models for a wide range of examples. We also describe the mixed model analysis of data from a three-phase experiment to investigate the effect of time of refinement on Eucalyptus pulp from four different sources. Cubic smoothing splines are used to describe differences in the trend over time and unstructured covariance matrices between times are found to be necessary.

8.
To further improve the accuracy of discrete-element studies of the interaction between no-till implements and soil, this study calibrated the parameters of a discrete element method (DEM) simulation model for long-term no-till field soil, taking the no-till loam of the wheat-maize double-cropping region of North China as the research object and using the Edinburgh Elasto-Plastic Adhesion (EEPA) nonlinear elasto-plastic contact model available as an extension of DEM software. A Plackett-Burman sensitivity-analysis experiment identified the key parameters with a significant effect on plate sinkage in a soil compression test: the soil-particle restitution coefficient, the particle-particle static friction coefficient, the particle surface energy, and the particle-particle plastic deformation ratio. With plate sinkage as the evaluation index, a Box-Behnken experiment was used to build a quadratic polynomial regression model relating sinkage to the four significant parameters. Optimizing the significant parameters against the measured sinkage of 6.2 mm from the physical test gave the optimal combination: restitution coefficient 0.47, particle-particle static friction coefficient 0.62, particle surface energy 6.12 J/m², and plastic deformation ratio 0.41. Finally, a particle stress-transmission test with the calibrated parameters showed that the simulated contact force transmission differed little from the stress transmission measured during actual soil compaction, with errors within 9.21%; the coefficient of determination R² between the simulated and measured stress-transmission curves was 0.91, indicating close agreement and confirming that the calibrated no-till soil parameters are accurate and reliable and that the calibrated DEM no-till soil model has high precision. The results provide technical support for rapidly building DEM soil models under no-till conditions and for optimizing no-till machinery.
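The calibration loop described above (design runs, then a quadratic response surface, then a search for the target sinkage) can be sketched as follows. The stand-in simulator, parameter ranges, and coefficients are invented for illustration; a real calibration would evaluate EEPA contact-model runs in DEM software instead.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# Stand-in for the DEM plate-sinkage simulation (mm); the real response
# would come from EEPA contact-model runs, not this formula.
def dem_sinkage(e, mu, gamma, lam):
    return 2.0 + 4.0*e + 1.5*mu + 0.25*gamma + 3.0*lam - 3.0*e**2 - 0.02*gamma**2

def features(X):
    """Full second-order (quadratic + interaction) model matrix."""
    e, mu, g, l = X.T
    return np.column_stack([np.ones_like(e), e, mu, g, l,
                            e**2, mu**2, g**2, l**2,
                            e*mu, e*g, e*l, mu*g, mu*l, g*l])

# "design runs": sample the 4 contact parameters within calibration ranges
lo = np.array([0.2, 0.3, 2.0, 0.2])
hi = np.array([0.7, 0.9, 10.0, 0.6])
X = lo + (hi - lo) * rng.random((60, 4))
y = dem_sinkage(*X.T) + rng.normal(0, 0.05, len(X))

# fit the quadratic response surface, then drive it to the measured target
coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)
surface = lambda p: features(p.reshape(1, -1)) @ coef

target = 6.2  # measured sinkage (mm) in the physical plate test
res = minimize(lambda p: float((surface(p) - target) ** 2),
               x0=(lo + hi) / 2, bounds=list(zip(lo, hi)))
best = res.x
```

The random design here is a placeholder for the Box-Behnken layout used in the study; the surface-fit-then-optimize step is the same.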

9.
Environmental data routinely are collected at irregularly spaced monitoring stations and at intermittent times, times which may differ by location. This article introduces a class of continuous-time, continuous-space statistical models that can accommodate many of these more complex environmental processes. This class of models incorporates temporal and spatial variability in a cohesive manner and is broad enough to include temporal processes that are assumed to be generated by stochastic differential equations with possibly temporally and spatially correlated errors. A wide range of ARIMA temporal models and geostatistical spatial models are included in the class of models investigated. Techniques for identifying the structure of the temporal and spatial components of this class of models are detailed. Point estimates of model parameters, asymptotic distributions, and Kalman-filter prediction methods are discussed.

10.
Measurements of both continuous and categorical outcomes appear in many statistical problems. One such example is the study of teratology and developmental toxicity, where both the probability that a live fetus is malformed (ordinal) and the probability that it is of low birth weight (continuous) are important measures in the context of teratogenicity. Although multivariate methods for the analysis of continuous outcomes are well understood, methods for joint continuous and discrete outcomes are less familiar. We propose a likelihood-based method that is an extension of the Plackett-Dale approach. Specification of the full likelihood will be avoided using pseudo-likelihood methodology. The estimation of safe dose levels as part of quantitative risk assessment will be illustrated based on a developmental toxicity experiment of diethylene glycol dimethyl ether in mice.

11.
An important trait in crop cultivar evaluation is stability of performance across environments. There are many different measures of stability, most of which are related to variance components of a mixed model. We believe that stability measures assessing yield risk are of particular relevance, because they integrate location and scale parameters in a meaningful way. A prerequisite for obtaining valid risk estimates is an appropriate model for the distribution of yield across environments. Multienvironment trials (MET) are often analyzed by mixed linear models, assuming that environments are a random sample from a target population, and that random terms in the model are normally distributed. The normality assumption may not always be tenable, and consequently, risk estimates may be biased. In this article, we suggest a transformation approach based on the Johnson system to cope with nonnormality in mixed models. The methods are exemplified using an international wheat yield trial. The importance of accounting for nonnormality in risk analyses based on MET is emphasized. We suggest that transformations should be routinely considered in analyses to assess risk.
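The Johnson-system transformation step can be sketched with SciPy's Johnson SU distribution: fit it to skewed yield data, map observations to the normal scale, and read risk (the probability of falling below a threshold yield) from the fitted law. The gamma-distributed "yields" and the threshold are assumptions for the demo, and SU is only one of the Johnson families the article's approach would choose among.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# skewed "yield across environments" data (hypothetical, gamma-distributed)
yields = stats.gamma(a=2.0, scale=1.5).rvs(size=300, random_state=rng)

# fit a Johnson SU distribution and map the data to the normal scale
a, b, loc, scale = stats.johnsonsu.fit(yields)
u = np.clip(stats.johnsonsu.cdf(yields, a, b, loc, scale), 1e-12, 1 - 1e-12)
z = stats.norm.ppf(u)  # should be close to standard normal if the fit is good

# yield risk: probability of falling below a threshold, from the fitted law
risk = float(stats.johnsonsu.cdf(1.0, a, b, loc, scale))
```

Working on the transformed scale `z` makes the normality assumption of the mixed linear model tenable before risk is back-translated to the yield scale.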

12.
为设计和优化面粉输送设备,应用离散元法对面粉进行准确地工程建模和分析,需要对其接触参数进行必要的标定。该研究依据颗粒缩放理论,用“Hertz-Mindlin with Johnson-Kendall-Roberts”接触模型表征面粉颗粒间黏性的影响,提出了一种基于静/动态休止角的接触参数标定方法。运用正交试验方法,对接触参数的敏感性和方差分析,表明面粉颗粒间的滚动摩擦系数、面粉颗粒与不锈钢表面间的静摩擦系数、表面能对静态休止角的影响极显著(P<0.01),并且多组接触参数都可以模拟出与试验相同的静态休止角。进一步研究表明,面粉颗粒与不锈钢表面间的静摩擦系数的合理取值范围为0.2~0.4。通过2种填充率、4种转速下基于动态休止角的参数标定,将其中与试验最为吻合的一组参数作为标定结果,其值如下:面粉颗粒之间恢复系数为0.6、面粉颗粒之间静摩擦系数为0.2、面粉颗粒之间滚动摩擦系数为0.1、面粉颗粒与不锈钢容器表面之间恢复系数为0.6、面粉颗粒与不锈钢容器表面之间静摩擦系数为0.6、面粉颗粒与不锈钢容器表面之间滚动摩擦系数为0.5、表面能为0.12 J/m2。使用该组参数对矩形容器中物料自由坍塌试验进行仿真,其结果与试验结果相符,验证了该标定方法的有效性。该研究提出的标定方法简单、易执行,对粉料输送设备的设计及优化具有一定的工程应用价值。  相似文献   

13.
Mixed-pixel decomposition for remote-sensing identification of cotton
To further improve the accuracy of cotton identification from remote sensing, Manas County, Xinjiang was taken as the study area and a linear spectral mixture model (LSMM) was applied to study techniques and methods for decomposing mixed pixels in TM imagery. Endmember spectra of four typical cover types (cotton, maize, tomato, and soil) were substituted into the linear model, and under unconstrained conditions the mixing coefficients were estimated by least squares, yielding an abundance map for each cover type and an RMS error image. The decomposition was evaluated against field-measured cotton planting areas. The results show that the linear spectral mixture model is simple to construct and computationally cheap, and that the accuracy of mixed-pixel decomposition for cotton exceeds 90%, so the method can be used for remote-sensing identification of cotton in Xinjiang.
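Unconstrained least-squares unmixing under the linear spectral mixture model is a one-line solve. The endmember spectra and abundances below are hypothetical, and a noise-free pixel is used so the recovery is exact; real TM pixels add noise and an RMS error image as in the study.

```python
import numpy as np

# Hypothetical endmember spectra for six TM bands
# (columns: cotton, maize, tomato, soil).
E = np.array([
    [0.05, 0.04, 0.06, 0.18],
    [0.08, 0.07, 0.09, 0.22],
    [0.06, 0.05, 0.07, 0.25],
    [0.45, 0.55, 0.40, 0.30],
    [0.22, 0.28, 0.20, 0.35],
    [0.12, 0.15, 0.11, 0.28],
])

true_frac = np.array([0.6, 0.2, 0.1, 0.1])  # abundances in one mixed pixel
pixel = E @ true_frac                       # noise-free mixed spectrum

# unconstrained least-squares estimate of the mixing coefficients (LSMM)
frac, *_ = np.linalg.lstsq(E, pixel, rcond=None)
rms = float(np.sqrt(np.mean((E @ frac - pixel) ** 2)))
```

Mapping `frac` per pixel over the scene gives the per-class abundance images; `rms` per pixel gives the RMS error image.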

14.
Conservation planning and management decisions often present trade-offs among habitats and species, generating uncertainty about the composition and configuration of habitat that will best meet management goals. The public acquisition of 5471 ha of salt ponds in San Francisco Bay for tidal-marsh restoration presents just such a challenge. Because the existing ponds support large numbers of waterbirds, restoring the entire area to tidal marsh could cause undesirable local declines for many species. To identify management strategies that simultaneously maximize abundances of marsh- and pond-associated species, we applied an integer programming approach to maximize avian abundance, comparing across two objectives, two models, and five species weightings (20 runs total). For each pond, we asked: should it be restored to a tidal marsh or kept as a managed pond, and with what salinity and depth? We used habitat relationship models as inputs to non-linear integer programs to find optimal or near-optimal solutions. We found that a simple linear objective, based on maximizing a weighted sum of standardized species’ abundance, led to homogeneous solutions (all-pond or all-marsh). Maximizing a log-linear objective yielded more heterogeneous configurations that benefit more species. Including landscape terms in the models resulted in slightly greater habitat aggregation, but generally favored pond-associated species. It also led to the placement of certain habitats near the bay’s edge. Using the log-linear objective, optimal restoration configurations ranged from 9% to 60% tidal marsh, depending on the species weighting, highlighting the importance of thoughtful a priori consideration of priority species.
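For a handful of ponds, the marsh-versus-pond decision can be solved by enumerating configurations and scoring a log-linear objective (a weighted sum of log species totals); problems at the scale of the study need actual integer-programming solvers. The abundances and weights below are random placeholders, not habitat-model outputs.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(6)
n_ponds, n_species = 6, 4

# Hypothetical expected abundance of each species in each pond under
# decision 0 = "managed pond" vs. decision 1 = "tidal marsh".
abundance = rng.uniform(0, 10, size=(2, n_ponds, n_species))
weights = np.ones(n_species)

def log_linear_objective(config):
    """Weighted sum of log species totals across ponds for one configuration."""
    totals = np.array([abundance[c, i, :] for i, c in enumerate(config)]).sum(axis=0)
    return float(weights @ np.log(totals + 1e-9))

best = max(product((0, 1), repeat=n_ponds), key=log_linear_objective)
```

The concavity of the log is what pushes the optimum toward heterogeneous pond/marsh mixes, matching the article's contrast with the linear objective that tips to all-pond or all-marsh.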

15.
Comparison and improvement of air-temperature-based methods for estimating reference crop evapotranspiration
To improve the accuracy of models that estimate reference crop evapotranspiration (ET0) from air-temperature data, this study compared the temperature-based Penman-Monteith (PMT) model, the Hargreaves-Samani (HS) model, and an improved HS model, and improved the PMT model using recent advances in estimating actual vapor pressure and solar radiation from temperature data. The results show that the improved HS model estimated ET0 more accurately than the traditional HS model from semi-arid to humid zones. The PMT model and the improved HS model had similar mean correlation coefficients (r) across climate zones, but compared with the improved HS model, the PMT model improved ET0 estimation accuracy in all climate zones except the humid and sub-humid arid zones, reducing mean root mean square error (RMSE) and relative root mean square error (RRMSE) by 0.01–0.15 mm/d and 0–0.05, respectively, and raising mean model efficiency (EF) by 0.01–0.06. The improved PMT model proposed in this paper further improved accuracy over the PMT model in all climate zones except the arid and semi-arid zones, reducing mean RMSE and RRMSE by 0.04–0.12 mm/d and 0.02–0.04, respectively, with mean r and EF closer to 1. The improved PMT model also achieved good Global Performance Index (GPI) values across stations, ranking first at 90% of the stations. Therefore, when only air-temperature data are available, the improved PMT model is recommended for estimating ET0. The results can inform regional agricultural water-resource management.
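For reference, the classical Hargreaves-Samani equation, one of the temperature-based models compared above, needs only the daily temperature extremes plus extraterrestrial radiation Ra (computable from latitude and day of year; supplied directly in this sketch). The example inputs are illustrative.

```python
import math

def hargreaves_samani(tmax, tmin, ra_mj):
    """Reference evapotranspiration ET0 (mm/day), classical HS form.

    tmax, tmin: daily max/min air temperature (deg C)
    ra_mj: extraterrestrial radiation Ra (MJ m-2 day-1)
    """
    tmean = (tmax + tmin) / 2.0
    ra_mm = 0.408 * ra_mj  # convert Ra to its evaporation equivalent (mm/day)
    return 0.0023 * (tmean + 17.8) * math.sqrt(tmax - tmin) * ra_mm

et0 = hargreaves_samani(30.0, 15.0, 35.0)  # a warm, clear summer day
```

The "improved HS" and PMT variants recalibrate or replace pieces of this formula (e.g., the radiation and vapor-pressure terms), which is what the comparison in the abstract evaluates.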

16.
The study of individual animal movement in relation to objects in a landscape is important in many areas of ecology and conservation biology. Yet, many of the models used by ecologists do not account for landscape features and thus may not be conducive to analysis of animal movement data. This article develops a set of nonlinear regression models for both move angles and move distances in relation to a single object in the landscape. Our models incorporate the concept of perceptual range from theories of animal movement behavior. We describe numerical methods for obtaining the maximum likelihood estimates of the model parameters. For illustration, we show results from both computer simulated data and real movement data collected for a red diamond rattlesnake (Crotalus ruber) via radio telemetry field techniques.

17.
Clustered data, either as an explicit part of the study design or due to the natural distribution of habitats, populations, and so on, are frequently encountered by biologists. Mixed effect models provide a framework that can handle clustered data by estimating cluster-specific random effects and introducing correlated residual structures. General parametric models have been shown not to suit all biological problems, resulting in an increased popularity for local regression procedures, such as LOESS and splines. To evaluate similar biological problems for clustered data with cluster-specific random effects and potential dependencies between within-cluster residuals, we suggest a local linear mixed model (LLMM). The LLMM approach is a local version of a linear mixed-effect model (LME), and the LLMM approach produces: (1) local shared predictions, (2) local cluster-specific predictions, and (3) estimates of cluster-specific random effects conditioned on the covariates. Thus, in addition to the local estimates of the expected response, we obtain information about how the cluster-specific random variability depends on the values of the covariate. Ovary data are used to illustrate the flexibility and potential of this procedure in biological contexts.

18.
This article considers the analysis of experiments with missing data from various experimental designs frequently used in agricultural research (randomized complete blocks, split plots, strip plots). We investigate the small sample properties of REML-based Wald-type F tests using linear mixed models. Several methods for approximating the denominator degrees of freedom are employed, all of which are available with the MIXED procedure of the SAS System (8.02). The simulation results show that the Kenward-Roger method provides the best control of the Type I error rate and is not inferior to other methods in terms of power.

19.
Monitoring natural resources in Alaskan national parks is challenging because of their remoteness, limited accessibility, and high sampling costs. We describe an iterative, three-phased process for developing sampling designs based on our efforts to establish a vegetation monitoring program in southwest Alaska. In the first phase, we defined a sampling frame based on land ownership and specific vegetated habitats within the park boundaries and used Path Distance analysis tools to create a GIS layer that delineated portions of each park that could be feasibly accessed for ground sampling. In the second phase, we used simulations based on landcover maps to identify size and configuration of the ground sampling units (single plots or grids of plots) and to refine areas to be potentially sampled. In the third phase, we used a second set of simulations to estimate sample size and sampling frequency required to have a reasonable chance of detecting a minimum trend in vegetation cover for a specified time period and level of statistical confidence. Results of the first set of simulations indicated that a spatially balanced random sample of single plots from the most common landcover types yielded the most efficient sampling scheme. Results of the second set of simulations were compared with field data and indicated that we should be able to detect at least a 25% change in vegetation attributes over 31 years by sampling 8 or more plots per year every five years in focal landcover types. This approach would be especially useful in situations where ground sampling is restricted by access.
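The trend-detection simulation in the third phase can be sketched as: simulate repeated surveys under an assumed decline, test the regression slope each time, and count rejections. The plot counts, noise level, and decline scenario below are illustrative placeholders, not the study's landcover-based inputs.

```python
import numpy as np
from scipy import stats

def detection_power(n_plots, n_surveys, interval, total_change, sd,
                    n_sim=500, alpha=0.05):
    """Monte Carlo probability of detecting a linear trend in mean cover
    via the t-test on the regression slope of survey-year means."""
    rng = np.random.default_rng(7)
    years = np.arange(n_surveys) * interval
    slope = total_change / years[-1]
    hits = 0
    for _ in range(n_sim):
        # plot-level noise averaged within each survey year
        y = 50 + slope * years + rng.normal(0, sd / np.sqrt(n_plots), n_surveys)
        res = stats.linregress(years, y)
        hits += res.pvalue < alpha
    return hits / n_sim

# e.g., a 12.5-point decline in percent cover over 30 years,
# surveying 8 plots every five years
power = detection_power(n_plots=8, n_surveys=7, interval=5,
                        total_change=-12.5, sd=10.0)
```

Sweeping `n_plots`, `interval`, and the change scenario over a grid is what yields design recommendations like "8 or more plots every five years."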

20.
Selecting a survey design to detect change through time in an ecological resource requires balancing the speed with which a given level of change can be detected against the cost of monitoring. Planning studies allow one to assess these tradeoffs and identify the optimal design choices for a specific scenario of change. However, such studies seldom are conducted. Even worse, they seem least likely to be undertaken when they offer the most insight – when survey methods and monitoring designs are complex and not well captured by simple statistical models. This may be due to limited technical capacity within management agencies. Without such planning, managers risk a potentially severe waste of monitoring resources on ineffective and inefficient monitoring, and institutions will remain ignorant of the true costs of information and the potential efficiency gains afforded by a moderate increase in technical capacity. We discuss the importance of planning studies, outline their main components, and illustrate the process through an investigation of competing designs for monitoring for declining brown bear (Ursus arctos) densities in southwestern Alaska. The results provide guidance on how long monitoring must be sustained before any change is likely to be detected (under a scenario of rather strong true decline), the optimal designs for detecting a change, and a tradeoff where accepting a delay of 2 years in detecting the change could reduce the monitoring cost by almost 50%. This report emphasizes the importance of planning studies for guiding monitoring decisions.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号