Land cover data for landscape ecological studies are frequently obtained by field survey. In the United Kingdom, temporally separated field surveys have been used to identify the locations and magnitudes of recent changes in land cover. However, such map data contain errors which may seriously hinder the identification of land cover change and of the extent and locations of rare landscape features. This paper investigates the extent of the differences between two sets of maps derived from field surveys within the Northumberland National Park in 1991 and 1992. The method used in each survey was the Phase 1 approach of the Nature Conservancy Council of Great Britain. Differences between maps were greatest for the land cover types with the smallest areas. Overall spatial correspondence between maps was found to be only 44.4%. A maximum of 14.4% of the total area surveyed was found to have undergone genuine land cover change. The remaining discrepancies, equivalent to 41.2% of the total survey area, were attributed primarily to differences of land cover interpretation between surveyors (classification error). Differences in boundary locations (positional error) were also noted, but were found to be a relatively minor source of error. The implications for the detection of land cover change and habitat mapping are discussed.
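The three reported percentages form a complete partition of the surveyed area, which is worth verifying: correspondence plus the upper bound on genuine change plus classification-error discrepancies should sum to 100%. A quick check using only the figures quoted above:

```python
# Error budget for the 1991/1992 Phase 1 survey comparison
# (all percentages taken directly from the abstract).
correspondence = 44.4        # area where the two maps agree
genuine_change_max = 14.4    # upper bound on real land cover change
classification_error = 41.2  # discrepancies from surveyor interpretation

total = correspondence + genuine_change_max + classification_error
print(round(total, 1))  # 100.0
```

Because positional (boundary) error was judged minor, it is not a separate term here; the abstract folds it into the residual discrepancies.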
The cover, number, size, shape, spatial arrangement and orientation of vegetation patches are attributes that have been used to indicate how well landscapes function to retain, not ‘leak’, vital system resources such as rainwater and soil. We derived and tested a directional leakiness index (DLI) for this resource retention function. We used simulated landscape maps where resource flows over map surfaces were directional and where landscape patch attributes were known. Although DLI was most strongly related to patch cover, it also logically related to patch number, size, shape, arrangement and orientation. If the direction of resource flow is multi-directional, a variant of DLI, the multi-directional leakiness index (MDLI), can be used. The utility of DLI and MDLI was demonstrated by applying these indices to three Australian savanna landscapes differing in their remotely sensed vegetation patch attributes. These leakiness indices clearly positioned these three landscapes along a function-dysfunction continuum, where dysfunctional landscapes are leaky (poorly retain resources).
This revised version was published online in July 2006 with corrections to the Cover Date.
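The abstract does not give the DLI or MDLI formulas, so the following is only a toy illustration of the underlying idea, not the published indices: on a binary patch map with flow running down the columns, a flow line "leaks" when no vegetated cell lies on it to intercept moving resources, and a multi-directional variant averages the same measure over two cardinal flow directions. All names and the example grid are hypothetical.

```python
# Toy sketch only -- NOT the published DLI/MDLI. Grid cells: 1 = vegetated
# patch, 0 = bare soil. Flow is assumed to run top-to-bottom along columns.

def directional_leakiness(grid):
    """Fraction of flow lines (columns) with no vegetated cell to intercept flow."""
    n_cols = len(grid[0])
    leaking = sum(1 for c in range(n_cols)
                  if not any(row[c] for row in grid))
    return leaking / n_cols

def multidirectional_leakiness(grid):
    """Average the toy index over column-wise and row-wise flow directions."""
    transposed = [list(col) for col in zip(*grid)]
    return (directional_leakiness(grid) + directional_leakiness(transposed)) / 2

landscape = [
    [0, 1, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 1],
]
print(directional_leakiness(landscape))       # 0.5 (columns 0 and 2 leak)
print(multidirectional_leakiness(landscape))  # 0.25
```

Even this crude version reproduces the abstract's main finding qualitatively: adding patch cover removes leaking flow lines, so leakiness responds most strongly to cover, while patch arrangement and orientation determine which lines are intercepted.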
Testing of soil samples in greenhouse assays for suppressiveness to soilborne plant pathogens requires a considerable investment in time and effort as well as large numbers of soil samples. To make it possible to process large numbers of samples efficiently, we compared an in vitro growth assay with a damping-off assay using Pythium aphanidermatum as the test organism on tomato seedlings. The in vitro test compares the radial growth or relative growth of the fungus in soil to that in autoclaved soil and reflects suppressiveness of soils to the pathogen. We used soils from a field experiment that had been farmed either organically or conventionally and into which a cover crop (oats and vetch in mixture) had been incorporated 0, 10, 21, and 35 days previously. We obtained a significant, positive correlation between damping-off severities of tomato seedlings in damping-off assays and both relative and radial growth in vitro. In addition, radial and relative growth of P. aphanidermatum in the in vitro assay were positively correlated with several carbon and nitrogen variables measured for soil and incorporated debris. We did not find differences between the two farming systems for either growth measures of P. aphanidermatum or disease severities on tomato at different stages of cover crop decomposition. The in vitro assay shows potential for use with any fungus that exhibits rapid saprophytic growth, and is most suitable for routine application in suppressiveness testing.
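The in vitro measure described above reduces to a simple ratio: growth of the fungus in natural soil relative to growth in the same soil after autoclaving, with lower values indicating stronger suppression. A minimal sketch; the function name and the example radii are illustrative, not values from the study:

```python
def relative_growth(radius_natural_mm, radius_autoclaved_mm):
    """Radial growth of the test fungus in natural soil relative to the same
    soil autoclaved. Lower values indicate a more suppressive soil."""
    return radius_natural_mm / radius_autoclaved_mm

# Hypothetical colony radii after a fixed incubation period:
print(relative_growth(12.0, 30.0))  # 0.4
```

Normalising against autoclaved soil controls for the soil's physical and nutritional properties, so the ratio isolates the biological component of suppressiveness.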
1. The objective was to compare three whole grain (WG) inclusion levels (7.5, 15 and 30%) offered to broiler chickens by three modes of WG incorporation: (i) pre-pellet WG inclusion, (ii) post-pellet WG inclusion as a blend of WG and pelleted concentrate and (iii) post-pellet WG inclusion where WG and pelleted concentrate were provided in separate feed trays against a ground-grain, wheat-based control diet.
2. Ten dietary treatments were offered to 6 replicate cages (6 birds per cage) of male Ross 308 chickens from 7 to 28 d post-hatch. The effects of treatment on relative gizzard weights, gizzard contents, pancreatic weights and pH of gizzard digesta were monitored. Parameters of growth performance, nutrient utilisation (apparent metabolisable energy [AME], metabolisable to gross energy [ME:GE] ratios, nitrogen [N] retention and N-corrected AME [AMEn]), apparent starch and protein (N) digestibility coefficients and disappearance rates in small intestinal segments and concentrations of free amino acids in plasma taken from the anterior mesenteric vein were determined.
3. Whole grain feeding (WGF) did not influence weight gain, but 30% post-pellet blended and 15 and 30% post-pellet separated treatments significantly depressed (P < 0.05) feed intakes while the 30% post-pellet separated treatment improved (P < 0.01) feed conversion ratios (FCR). WGF regimes significantly increased relative gizzard weights.
4. Overall, WGF generated profound responses in AME, ME:GE ratios, N retention and AMEn that were highly correlated with relative gizzard weights. In general, WGF improved starch and protein (N) digestibilities and again there were some correlations between these outcomes and relative gizzard weights.
5. Post-pellet WG inclusions where WG and pelleted concentrate were offered separately provided chickens with the opportunity to choice-feed. Birds showed a preference for the relatively high-protein pelleted concentrate and at 30% WG, this resulted in an improvement in FCR of 7.69% (1.260 versus 1.365; P < 0.001) relative to the ground-grain control diet.
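The reported 7.69% improvement follows directly from the two feed conversion ratios quoted in the abstract (a lower FCR means less feed per unit of gain, so the reduction is expressed relative to the control):

```python
fcr_separated_30 = 1.260  # 30% post-pellet separated WG treatment
fcr_control = 1.365       # ground-grain, wheat-based control diet

improvement_pct = (fcr_control - fcr_separated_30) / fcr_control * 100
print(round(improvement_pct, 2))  # 7.69
```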