Similar Documents: 20 results found (search time: 31 ms)
1.
The basic method of UPEN (uniform penalty inversion of multiexponential decay data) is given in an earlier publication (Borgia et al., J. Magn. Reson. 132, 65-77 (1998)), which also discusses the effects of noise, constraints, and smoothing on the resolution or apparent resolution of features of a computed distribution of relaxation times. UPEN applies negative feedback to a regularization penalty, allowing stronger smoothing for a broad feature than for a sharp line. This avoids unnecessarily broadening the sharp line and/or breaking the wide peak or tail into several peaks that the relaxation data do not demand to be separate. The experimental and artificial data presented earlier were T(1) data, and all had fixed data spacings, uniform in log-time. However, for T(2) data, usually spaced uniformly in linear time, or for data spaced in any manner, we have found that the data spacing does not enter explicitly into the computation. The present work shows the extension of UPEN to T(2) data, including the averaging of data in windows and the use of the corresponding weighting factors in the computation. Measures are implemented to control portions of computed distributions extending beyond the data range. The input smoothing parameters in UPEN are normally fixed, rather than data dependent. A major problem arises, especially at high signal-to-noise ratios, when UPEN is applied to data sets with systematic errors due to instrumental nonidealities or adjustment problems. For instance, a relaxation curve for a wide line can be narrowed by an artificial downward bending of the relaxation curve. Diagnostic parameters are generated to help identify data problems, and the diagnostics are applied in several examples, with particular attention to the meaningful resolution of two closely spaced peaks in a distribution of relaxation times. 
Where feasible, processing with UPEN in nearly real time should help identify data problems while further instrument adjustments can still be made. The need for the nonnegative constraint is greatly reduced in UPEN, and preliminary processing without this constraint helps identify data sets for which application of the nonnegative constraint is too expensive in terms of error of fit for the data set to represent sums of decaying positive exponentials plus random noise.
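The uniform-penalty idea above can be contrasted with the simplest fixed-penalty approach. The sketch below is not UPEN itself: it inverts a synthetic multiexponential decay with a single, uniform Tikhonov smoothing weight and a nonnegativity constraint, and the grid sizes, noise level, and penalty weight are illustrative assumptions.

```python
# Fixed-penalty analogue of multiexponential inversion (NOT the adaptive
# UPEN algorithm): Tikhonov-smoothed, nonnegative least squares.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Synthetic T2 decay: two components at 10 ms and 200 ms.
t = np.linspace(0.5e-3, 1.0, 400)              # acquisition times (s)
y = 0.6 * np.exp(-t / 0.010) + 0.4 * np.exp(-t / 0.200)
y += rng.normal(0, 1e-3, t.size)               # additive noise

# Log-spaced grid of candidate relaxation times and the decay kernel.
T = np.logspace(-3, 0.5, 80)
K = np.exp(-t[:, None] / T[None, :])

# Second-difference smoothing operator with one uniform penalty weight.
L = np.diff(np.eye(T.size), n=2, axis=0)
lam = 0.1                                      # fixed regularization weight

# Stack data rows and penalty rows, solve under the nonnegative constraint.
A = np.vstack([K, lam * L])
b = np.concatenate([y, np.zeros(L.shape[0])])
f, _ = nnls(A, b)
# f is the recovered (discretized) distribution of relaxation times;
# its mass should concentrate near the two true components.
```

UPEN differs precisely in making the penalty weight vary along the distribution via negative feedback, so a sharp line is smoothed less than a broad feature.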

2.
The multiexponential inversion program UPEN by the authors [J. Magn. Reson. 1998; 132: 65-77; Ibid. 2000; 147: 273-85] employs negative feedback to a regularization penalty to implement variable smoothing when both sharp and broad features appear on a single distribution of relaxation times. This allows a good fit to relaxation data that correspond to a sum of decaying exponentials plus random noise, but it usually does not give a good fit to data that are distorted by systematic errors from instrument problems, which can cause erroneous "resolution" or erroneous non-resolution of peaks. UPEN provides a series of diagnostic parameters to help identify such data problems that can lead to interpretation errors, and, in particular, to warn when a close call on the resolution or non-resolution of nearby peaks might be questionable. Examples are given from a series of T(2) data sets from desiccated bone samples, with examples where the presence of two peaks is required by good data, examples where the presence of two peaks is negated by good data, and examples where the resolution or non-resolution of peaks cannot be trusted because of instrumental distortions revealed by UPEN diagnostic parameters. It is suggested that processing relaxation data with UPEN in nearly real time could permit retaking data while a sample is still available if the diagnostic parameters show instrumental problems.

3.
The basic method of UPEN (uniform penalty inversion of multiexponential decay data) is given in an earlier publication (Borgia et al., J. Magn. Reson. 132, 65–77 (1998)), which also discusses the effects of noise, constraints, and smoothing on the resolution or apparent resolution of features of a computed distribution of relaxation times. UPEN applies negative feedback to a regularization penalty, allowing stronger smoothing for a broad feature than for a sharp line. This avoids unnecessarily broadening the sharp line and/or breaking the wide peak or tail into several peaks that the relaxation data do not demand to be separate. The experimental and artificial data presented earlier were T1 data, and all had fixed data spacings, uniform in log-time. However, for T2 data, usually spaced uniformly in linear time, or for data spaced in any manner, we have found that the data spacing does not enter explicitly into the computation. The present work shows the extension of UPEN to T2 data, including the averaging of data in windows and the use of the corresponding weighting factors in the computation. Measures are implemented to control portions of computed distributions extending beyond the data range. The input smoothing parameters in UPEN are normally fixed, rather than data dependent. A major problem arises, especially at high signal-to-noise ratios, when UPEN is applied to data sets with systematic errors due to instrumental nonidealities or adjustment problems. For instance, a relaxation curve for a wide line can be narrowed by an artificial downward bending of the relaxation curve. Diagnostic parameters are generated to help identify data problems, and the diagnostics are applied in several examples, with particular attention to the meaningful resolution of two closely spaced peaks in a distribution of relaxation times. Where feasible, processing with UPEN in nearly real time should help identify data problems while further instrument adjustments can still be made. 
The need for the nonnegative constraint is greatly reduced in UPEN, and preliminary processing without this constraint helps identify data sets for which application of the nonnegative constraint is too expensive in terms of error of fit for the data set to represent sums of decaying positive exponentials plus random noise.

4.
Provencher's constrained regularization method of inverting the Laplace transform was tested on simulated quasielastic light scattering (QELS) data spanning 7 decades. The standard method with integration and a logarithmic grid was shown to seriously undersmooth the G(Γ) distribution in the region of large Γ (small relaxation time τ). The regularization can be considerably improved by switching the integration off. Then, smooth distributions of relaxation time τ of the generalized exponential type are reproduced essentially correctly, with a tendency to replace asymmetric peaks by more symmetric ones with shoulders (in the Gaussian distribution of τ) or side peaks (in the Gaussian distribution of 1/τ) on the slowly decreasing sides. In distributions with singularities, such as edges of histogram bins or delta functions, the coarse shape of the distribution is recovered essentially correctly, but smoothing of the singularities distorts wide regions of the relaxation spectrum, usually in the form of sinusoidal waves. The bias introduced by taking the square root of the g2 function was shown to sometimes worsen the CONTIN results considerably. Thus, the use of Provencher's CONTIN program with a logarithmic grid and integration switched off is recommended for the analysis of very wide QELS autocorrelation curves.

5.
NMR relaxometry is a very useful tool for understanding various chemical and physical phenomena in complex multiphase systems. A Carr-Purcell-Meiboom-Gill (CPMG) [P.T. Callaghan, Principles of Nuclear Magnetic Resonance Microscopy, Clarendon Press, Oxford, 1991] experiment is an easy and quick way to obtain the transverse relaxation constant (T2) in low field. Most samples have a distribution of T2 values. Extraction of this distribution of T2s from the noisy decay data is essentially an ill-posed inverse problem. Various inversion approaches have been used to solve this problem to date. A major issue in using an inversion algorithm is determining how accurate the computed distribution is. A systematic analysis of an inversion algorithm, UPEN [G.C. Borgia, R.J.S. Brown, P. Fantazzini, Uniform-penalty inversion of multiexponential decay data, Journal of Magnetic Resonance 132 (1998) 65–77; G.C. Borgia, R.J.S. Brown, P. Fantazzini, Uniform-penalty inversion of multiexponential decay data II. Data spacing, T2 data, systematic data errors, and diagnostics, Journal of Magnetic Resonance 147 (2000) 273–285], was performed by means of simulated CPMG data generation. Through our simulation technique and statistical analyses, the effects of various experimental parameters on the computed distribution were evaluated. We converged to the true distribution by matching up the inversion results from a series of true decay data and noisy simulated data. In addition to the simulation studies, the same approach was also applied to real experimental data to support the simulation results.

6.
Cramer-Rao theory and computer simulations were used to show that the errors involved in calculating the magnetization and relaxation parameters of a two-component system decrease with: (1) increasing SNR, (2) increasing number of echoes used in the fitting procedure, and (3) increasing ratio of the relaxation times of the two components, T(22)/T(21). Images of bi-compartmental phantoms of known T(2) values were acquired using an optimized imaging sequence, and an optimized fitting algorithm was used to calculate the T(2) values of the two components by fitting the resulting images to a bi-exponential decay model. Accuracy better than 6% was achieved in the calculations of the T(2) values of the two components, and region fitting provided better accuracy than pixel-by-pixel fitting. The procedures were used to calculate the T(2) and M(0) values of equine carpal bones with known degree of radiographic bone sclerosis. Although the T(2) and M(0) values of both water and fat components all decreased with the degree of radiographic bone sclerosis, the transverse relaxation of the water component, T(2W), showed a greater decrease with advanced stages of bone sclerosis.
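The bi-exponential fitting step described above can be sketched with a standard nonlinear least-squares routine; the echo times, component amplitudes, T(2) values, and noise level below are illustrative assumptions, not the study's acquisition parameters.

```python
# Sketch of a bi-exponential T2 fit on synthetic echo-train amplitudes.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

def biexp(te, m1, t21, m2, t22):
    # Two-compartment decay model: M1*exp(-TE/T21) + M2*exp(-TE/T22).
    return m1 * np.exp(-te / t21) + m2 * np.exp(-te / t22)

te = np.linspace(10e-3, 310e-3, 31)           # 31 echo times (s), assumed
truth = (0.7, 40e-3, 0.3, 200e-3)             # M1, T21, M2, T22 (assumed)
y = biexp(te, *truth) + rng.normal(0, 0.002, te.size)

p0 = (0.5, 30e-3, 0.5, 150e-3)                # rough initial guess
popt, pcov = curve_fit(biexp, te, y, p0=p0)
perr = np.sqrt(np.diag(pcov))                 # 1-sigma parameter errors
```

The diagonal of the returned covariance matrix gives the parameter variances, which is also a convenient way to explore numerically the Cramer-Rao trends summarized above (SNR, echo count, and the T(22)/T(21) ratio).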

7.
Despite significant differences between bone tissues and other porous media such as oilfield rocks, there are common features as well as differences in the response of NMR relaxation measurements to the internal structures of the materials. Internal surfaces contribute to both transverse (T2) and longitudinal (T1) relaxation of pore fluids, and in both cases the effects depend on, among other things, local surface-to-volume ratio (S/V). In both cases variations in local S/V can lead to distributions of relaxation times, sometimes over decades. As in rocks, it is useful to take bone data under different conditions of cleaning, saturation, and desaturation. T1 and T2 distributions are computed using UPEN. In trabecular bone it is easy to see differences in dimensions of intertrabecular spaces in samples that have been de-fatted and saturated with water, with longer T1 and T2 for larger pores. Both T1 and T2 distributions for these water-saturated samples are bimodal, separating or partly separating inter- and intratrabecular water. The T1 peak times have a ratio of from 10 to 30, depending on pore size, but for the smaller separations the distributions may not have deep minima. The T2 peak times have ratios of over 1000, with intratrabecular water represented by large peaks at a fraction of a ms, which we can observe only by single spin echoes. CPMG data show peaks at about a second, tapering down to small amplitudes by a ms. In all samples the free induction decay (FID) from an inversion-recovery (IR) T1 measurement shows an approximately Gaussian (solid-like) component, exp[-(1/2)(t/TGC)^2], with TGC approximately 11.7+/-0.7 micros (GC for "Gaussian Component"), and a liquid-like component (LLC) with initially simple-exponential decay at the rate-average time T(2-FID) for the first 100 micros. Averaging and smoothing procedures are adopted to derive T(2-FID) as a function of IR time and to get T1 distributions for both the GC and the LLC.
It appears that contact with the GC, which is presumed to be 1H on collagen, leads to the T2 reduction of at least part of the LLC, which is presumed to be water. Progressive drying of the cleaned and water-saturated samples confirms that the long T1 and T2 components were in the large intertrabecular spaces, since the corresponding peaks are lost. Further drying leads to further shortening of T2 for the remaining water but eventually leads to lengthening of T1 for both the collagen and the water. After the intertrabecular water is lost by drying, T1 is the same for GC and LLC. T(2-FID) is found to be roughly 320/alpha micros, where alpha is the ratio of the extrapolated GC to LLC, appearing to indicate a time tau of about 320 micros for 1H transverse magnetization in GC to exchange with that of LLC. This holds for all samples and under all conditions investigated. The role of the collagen in relaxation is confirmed by treatment to remove the mineral component, observing that the GC remains and has the same TGC and has the same effect on the relaxation times of the associated water. Measurements on cortical bone show the same collagen-related effects but do not have the long T1 and T2 components.

8.
A solution for discrete multi-exponential analysis of T(2) relaxation decay curves obtained in current multi-echo imaging protocol conditions is described. We propose a preprocessing step to improve the signal-to-noise ratio and thus lower the signal-to-noise ratio threshold above which a high percentage of true multi-exponential decays is detected. It consists of a multispectral nonlinear edge-preserving filter that takes into account the signal-dependent Rician distribution of noise affecting magnitude MR images. Discrete multi-exponential decomposition, which requires no a priori knowledge, is performed by a non-linear least-squares procedure initialized with estimates obtained from a total least-squares linear prediction algorithm. This approach was validated and optimized experimentally on simulated data sets of normal human brains.

9.
Light-scattering study of the fractal properties of aqueous gelatin solutions
The fractal properties of aqueous gelatin solutions at different temperatures and concentrations were studied by dynamic light scattering, combined with static light scattering and viscosity measurements. For T > Tgel (where Tgel is the critical gelation temperature), water is a good solvent for gelatin: the gelatin molecules in solution follow the self-avoiding random-walk model, with fractal dimension df = 5/3. For T < Tgel, the solution undergoes a sol-gel transition. During gelation, the dynamic light-scattering relaxation spectrum shows two successive modes, a simple-exponential decay followed by a stretched-exponential one, and the stretching exponent β of the latter decreases gradually from 0.8 to 0.67 at the gel point; correspondingly, the fractal dimension increases from 5/3 to 2.0. At the gel point, static light scattering and viscosity measurements both give a fractal dimension of 2.0.

10.
Nuclear magnetic resonance (NMR) experiments with multiple echo spacings (TE) were performed on copper sulfate solutions of known transverse relaxation time (T2), and variable-TE numerical simulations were run on 32 models with different relaxation components, to quantify how TE affects the NMR porosity of low-porosity, low-permeability reservoirs such as tight oil and gas and shale gas. The results show that as TE increases, the NMR porosity of each T2 component first remains near 100%, then decays rapidly, and finally approaches zero once TE exceeds a certain value; the TE values at which this rapid decay begins, and at which the porosity reaches zero, differ markedly between T2 components. Based on the relationship between the NMR porosity of each T2 component and TE, the NMR measurement is divided into four regions: a lossless-measurement region, a rapid-decay region, an invalid-parameter region, and an instrument blind zone. For a given relaxation component, the signal loss in the rapid-decay region varies logarithmically with TE; a correction formula and procedure for the NMR porosity in this region are also given.

11.
We present a fractional-order extension of the Bloch equations to describe anomalous NMR relaxation phenomena (T(1) and T(2)). The model has solutions in the form of Mittag-Leffler and stretched exponential functions that generalize conventional exponential relaxation. Such functions have been shown by others to be useful for describing dielectric and viscoelastic relaxation in complex, heterogeneous materials. Here, we apply these fractional-order T(1) and T(2) relaxation models to experiments performed at 9.4 and 11.7 Tesla on type I collagen gels, chondroitin sulfate mixtures, and to bovine nasal cartilage (BNC), a largely isotropic and homogeneous form of cartilage. The results show that the fractional-order analysis captures important features of NMR relaxation that are typically described by multi-exponential decay models. We find that the T(2) relaxation of BNC can be described in a unique way by a single fractional-order parameter (α), in contrast to the lack of uniqueness of multi-exponential fits in the realistic setting of a finite signal-to-noise ratio. No anomalous behavior of T(1) was observed in BNC. In the single-component gels, for T(2) measurements, increasing the concentration of the largest components of cartilage matrix, collagen and chondroitin sulfate, results in a decrease in α, reflecting a more restricted aqueous environment. The quality of the curve fits obtained using Mittag-Leffler and stretched exponential functions is in some cases superior to that obtained using mono- and bi-exponential models. In both gels and BNC, α appears to account for micro-structural complexity in the setting of an altered distribution of relaxation times. This work suggests the utility of fractional-order models to describe T(2) NMR relaxation processes in biological tissues.
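A stretched-exponential fit, one of the solution forms named above, can be sketched as follows; the signal model, parameter values, and noise level are illustrative assumptions rather than the paper's data.

```python
# Fit a stretched-exponential relaxation model M0*exp[-(t/T2)^alpha]
# to synthetic decay data.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)

def stretched(t, m0, t2, alpha):
    return m0 * np.exp(-((t / t2) ** alpha))

t = np.linspace(1e-3, 0.5, 200)               # sampling times (s), assumed
y = stretched(t, 1.0, 0.05, 0.7) + rng.normal(0, 0.003, t.size)

popt, _ = curve_fit(stretched, t, y, p0=(0.9, 0.04, 0.9))
m0, t2, alpha = popt
# alpha < 1 signals a distribution of relaxation times rather than a
# single exponential; alpha -> 1 recovers ordinary exponential decay.
```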

12.
The T2 behavior of parotid gland tissue was investigated in 11 patients affected by pleomorphic adenoma. A protocol that was previously set up to define acquisition and post-processing procedures, reaching an accuracy of 2.5% in phantoms and an in vivo long-term reproducibility of 0.9-8.5%, was used for the evaluations. The measurements were carried out on a whole-body superconducting imager, using a neck coil as a receiver. Reference gel samples were imaged together with the patient and used to correct the T2 results. The sequence protocol was a multispin-echo with 16 echoes. Signals were fitted with mono- and biexponential decay models, and the best model was chosen automatically by comparing the two chisquared values. Two T2 maps (T2 monoexponential or short T2 component, and long T2 component) and chisquared maps were then produced. Pathologic and normal tissues showed a dominant monoexponential decay with a good level of biexponentiality (16%-27% of total fitted pixels) due to partial volume effects from the liquid content. Concerning the biexponentiality, no significant differences were found between the fitted pixel fractions of normal and pathologic tissue, because the long T2 component of the lesion was related both to the edema and to the saliva content, but probably the increase in the first compensated the decrease in the second. Chisquared maps showed that most of the lesions presented a monoexponential core surrounded by a biexponential border, probably due to a solid component similar to normal tissue with partial volume effects from the saliva content. Ninety-five percent confidence intervals for normal tissue were 69.40-87.80 ms (monoexponential relaxation), and 38.19-44.67 ms and 285.84-691.28 ms (short and long components of biexponential relaxation). For pathologic tissue, the corresponding intervals were 172.17-275.83 ms, 53.86-89.98 ms and 442.10-814.58 ms.
The monoexponential component, mostly present in the core of the lesion, was the parameter that best characterized pathologic tissue. A comparison was performed between normal tissue of patients and normal tissue of volunteers, whose statistics were collected in a previous study with the same evaluation protocol. Results showed no significant differences in either the biexponential fitted fractions or the relaxation times. We conclude that, for tissue characterization, a multiexponential analysis should be carried out in order to improve accuracy and to obtain more reliable results. Moreover, beyond relaxation calculations, a topographical analysis of the relaxation distribution, using for instance the chisquared maps, might in the future give more useful information on tissue structure.

13.
We investigate the relaxation dynamics of nonequilibrium carriers in the organic conductors κ-(BEDT-TTF)(2)Cu[N(CN)(2)]X (X = Br and Cl) using ultrafast time-resolved optical spectroscopy. The dynamics for both salts show similar temperature dependences, which are well characterized by carrier relaxation across a pseudogap (PG) of magnitude Δ(PG) ≈ 16 meV for the Br salt and 7.0 meV for the Cl salt. On the other hand, only the Br salt shows an abrupt increase of the decay time at low temperature, indicating an additional decay component associated with the superconducting (SC) gap below T(c). The fluence-dependent dynamics at low temperature evidence the superposition of the SC component onto the PG component. These results indicate a metallic-insulating phase separation in the Br salt triggered by photoexcited nonequilibrium carriers.

14.
We analyze problems of processing the results of radiophysical observations, which can be described using the generalized Likhter model. Consistent estimates of the parameters of a signal and of interference are found. We describe algorithms for detecting pulsed processes, as well as a detector of a harmonic signal against a background of interference having a chaotic pulsed component.

15.
In this paper, the reconstruction of particle size distributions (PSDs) from dynamic light scattering (DLS) data using particle swarm optimization (PSO) techniques is established. Three different objective functions were employed: a non-smooth constrained objective function, a smooth functional objective function with Tikhonov regularization, and an L objective function. Simulated results for unimodal, bimodal and bi-dispersed particles show that the PSO technique with the non-smooth constrained objective function produces narrower PSDs that focus on peak position in the presence of random noise; the PSO technique with the smooth Tikhonov-regularized functional creates relatively smooth PSDs, which can be successfully applied to the inversion of broad particle distributions; and the PSO technique with the L objective function yields smooth PSDs at lower computational cost. Experimental results, as well as comparisons with the CONTIN algorithm and the Cumulants method, demonstrate the performance of our algorithms. Therefore, the PSO techniques employing these three objective functions, which require only an objective function and a few initial guesses, may be applied to the reconstruction of PSDs from DLS data.
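The PSO update at the core of such reconstructions can be sketched on a deliberately simplified one-parameter problem; the inertia and acceleration coefficients below are conventional textbook choices, and the single-decay least-squares misfit stands in for the paper's three objective functions.

```python
# Minimal particle swarm optimizer recovering one decay time from a
# synthetic DLS-like correlation decay (all values are illustrative).
import numpy as np

rng = np.random.default_rng(3)

t = np.linspace(1e-6, 1e-3, 100)              # lag times (s), assumed
g_true = np.exp(-t / 2e-4)                    # synthetic correlation decay

def misfit(tau):
    # Plain least-squares objective; stands in for the paper's choices.
    return np.sum((np.exp(-t / tau) - g_true) ** 2)

n, iters = 30, 200
lo, hi = 1e-5, 1e-3                           # search bounds for tau
x = rng.uniform(lo, hi, n)                    # particle positions
v = np.zeros(n)                               # particle velocities
pbest = x.copy()
pbest_f = np.array([misfit(xi) for xi in x])
g = pbest[pbest_f.argmin()]                   # global best position

for _ in range(iters):
    r1, r2 = rng.random(n), rng.random(n)
    # Inertia 0.7, cognitive and social weights 1.5 (conventional values).
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
    x = np.clip(x + v, lo, hi)
    f_vals = np.array([misfit(xi) for xi in x])
    improved = f_vals < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f_vals[improved]
    g = pbest[pbest_f.argmin()]
# g should approach the true decay time of 2e-4 s.
```

A full PSD reconstruction replaces the scalar tau with a discretized distribution and evaluates one of the three objective functions over the kernel-predicted autocorrelation.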

16.
A better knowledge of the NMR relaxation behavior of bone tissue can improve the definition of imaging protocols to detect bone diseases like osteoporosis. The six rat lumbar vertebrae, from L1 to L6, were analyzed by means of both transverse (T(2)) and longitudinal (T(1)) relaxation of (1)H nuclei at 20 MHz and 30 degrees C. Distributions of relaxation times, computed using the multiexponential inversion software uniform penalty inversion, extend over decades for both T(2) and T(1) relaxation. In all samples, the free induction decay (FID) from an inversion-recovery (IR) T(1) measurement shows an approximately Gaussian (solid-like) component, exp[-1/2(t/T(GC))2], with T(GC) approximately 12 micros (GC for Gaussian component) and a liquid-like component (LLC) with initially simple-exponential decay. Averaging and smoothing procedures are adopted to obtain the ratio alpha between GC and LLC signals and to get separate T(1) distributions for GC and LLC. Distributions of T(1) for LLC show peaks centered at 300-500 ms and shoulders going down to 10 ms, whereas distributions of T(1) for GC are single broad peaks centered at roughly 100 ms. The T(2) distributions by Carr-Purcell-Meiboom-Gill at 600 micros echo spacing are very broad and extend from 1 ms to hundreds of ms. This long echo spacing does not allow one to see a peak in the region of hundreds of micros, which is better seen by single spin-echo T(2) measurements. Results of the relaxation analysis were then compared with densitometric data. From the study, a clear picture of the intratrabecular and intertrabecular (1)H signals emerges. In particular, the GC is presumed to be due to (1)H in collagen, LLC due to all the fluids in the bone including water and fat, and the very short T(2) peak due to the intratrabecular water. Overall, indications of some trends in composition and in pore-space distributions going from L1 to L6 appeared. 
Published results on rat vertebrae, obtained by fitting the curves with discrete two-component models for both T(2) and T(1), are consistent with our results and can be better interpreted in light of the distributions of relaxation times shown here.

17.
The method of maximum likelihood has been implemented for the estimation of multiple exponential components of T2 decay curves in spin echo NMR measurements on biologic tissues. Each component contributes an exponential term described by two parameters (initial amplitude and T2) to the T2 decay curve. The maximum likelihood method estimates the parameters and their standard errors for all terms simultaneously, avoiding the subjectivity inherent in methods such as graphical peeling. In the model used, it was assumed that water protons are compartmentalized and that the measured spin echo signals from the protons undergoing relaxation obey the Poisson distribution. A system of non-linear equations was derived and solved iteratively for the values of the exponential parameters which maximize the likelihood of obtaining the observed data under these assumptions. The approach was implemented for bi- and tri-exponential models on a MicroVAX II computer (Digital Equipment Corporation, Maynard, MA). Simulations of bi- and tri-exponential data, with and without system noise, were analyzed to assess the accuracy and reproducibility of the method. A subset of the simulations was repeated with non-linear least squares techniques and was compared to the results obtained with maximum likelihood. Rabbit muscle and gerbil brain samples were measured and analyzed with the maximum likelihood method. The simulations showed that, within specific limits on the relative sizes and relaxation rates of components, these parameters can be estimated with errors less than 5%. The comparison to non-linear least squares analysis showed that the maximum likelihood method is generally superior in estimating the parameters in difficult cases. The results from tissue measurements demonstrate that the method is effective even in cases where graphical peeling would clearly not yield reliable results.

18.
The analysis of diffusion NMR data in terms of distributions of diffusion coefficients is hampered by the ill-posed nature of the required inverse Laplace transformation. Naïve approaches such as multiexponential fitting or standard least-squares algorithms are numerically unstable and often fail. This paper updates the CONTIN approach of applying Tikhonov regularization to stabilise this numerical inversion problem and demonstrates two methods for automatically choosing the optimal value of the regularization parameter. These approaches are computationally efficient and easy to implement using standard matrix algebra techniques. Example analyses are presented using both synthetic data and experimental results of diffusion NMR studies on the azo-dye sunset yellow and some polymer molecular weight reference standards.
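A minimal Tikhonov inversion with one simple automatic parameter choice, the discrepancy principle (which assumes the noise level is known), can be sketched as follows. It is an analogue of, not a reproduction of, the paper's two selection methods, and the kernel, grids, and noise level are assumed.

```python
# Tikhonov-regularized inversion of synthetic diffusion decay data,
# with the regularization weight chosen by the discrepancy principle.
import numpy as np

rng = np.random.default_rng(4)

b_grad = np.linspace(0, 5e9, 64)               # "b-values" (s/m^2), assumed
D = np.logspace(-11, -9, 60)                   # diffusion-coefficient grid
K = np.exp(-b_grad[:, None] * D[None, :])      # Laplace-type kernel

# One log-normal peak centered at D = 1e-10 m^2/s (illustrative).
f_true = np.exp(-0.5 * ((np.log10(D) + 10) / 0.15) ** 2)
sigma = 1e-3
y = K @ f_true + rng.normal(0, sigma, b_grad.size)

I = np.eye(D.size)
def solve(lam):
    # Normal-equations form of Tikhonov regularization.
    return np.linalg.solve(K.T @ K + lam**2 * I, K.T @ y)

# Discrepancy principle: largest lambda whose residual matches the
# expected noise norm.
target = sigma * np.sqrt(b_grad.size)
for lam in np.logspace(1, -6, 50):
    f = solve(lam)
    if np.linalg.norm(K @ f - y) <= target:
        break
# f now approximates the distribution of diffusion coefficients.
```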

19.
This study investigates the effects of developmental stage and muscle type on the mobility and distribution of water within skeletal muscles, using low-field (1)H-NMR transverse relaxation measurements in vitro on four different porcine muscles (M. longissimus dorsi, M. semitendinosus, M. biceps femoris, M. vastus intermedius) from a total of 48 pigs slaughtered at various weight classes between 25 kg and 150 kg. Principal component analysis (PCA) revealed effects of both slaughter weight and muscle type on the transverse relaxation decay. Independent of developmental stage and muscle type, distributed exponential analysis of the NMR T(2) relaxation data revealed the existence of three distinct water populations, T(2b), T(21), and T(22), with relaxation times of approximately 1-10, 45-120, and 200-500 ms, respectively. The most profound change during muscle growth was a shift toward faster relaxation in the intermediate time constant, T(21). It decreased by approx. 24% in all four muscle types during the period from 25 to 150 kg live weight. Determination of dry matter, fat, and protein content in the muscles showed that the changes in the intermediate relaxation time constant, T(21), during growth should be ascribed mainly to a change in protein content, as the protein content explained 77% of the variation in the T(21) time constant. Partial least squares (PLS) regression revealed validated correlations in the region of 0.58 to 0.77 between NMR transverse relaxation data and muscle development for all four muscle types, which indicates that NMR relaxation measurements may be used in the prediction of muscle developmental stage.

20.
《Physics letters. A》2019,383(24):2862-2868
Quantum principal component analysis (qPCA) is a dimensionality-reduction algorithm for obtaining the eigenvectors corresponding to the several largest eigenvalues of the data matrix and then reconstructing it. However, qPCA can only construct a quantum state that contains all of the eigenvectors and eigenvalues. In this paper, we present an improved quantum principal component analysis (Improved qPCA) algorithm with a fixed threshold. We reduce singular values below the threshold to 0 and obtain a target quantum state which, after phase estimation, can be used to get an output similar to that of qPCA. Compared with qPCA, our algorithm retains only the target eigenvalues, and the probability of obtaining each target eigenvalue is greater. Furthermore, our algorithm can serve as an additional regularization method and as a subroutine for subsequent data processing.
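The classical counterpart of the thresholded reconstruction is easy to state: zero out every singular value below the fixed threshold and rebuild the matrix. The sketch below does exactly that; the matrix sizes, rank, and cutoff are illustrative assumptions, and the quantum routine itself is not reproduced.

```python
# Classical analogue of threshold-based PCA reconstruction: keep only
# singular values at or above a fixed cutoff.
import numpy as np

rng = np.random.default_rng(5)

# Rank-3 data matrix plus small noise.
A = rng.normal(size=(50, 3)) @ rng.normal(size=(3, 40))
A += rng.normal(0, 0.01, A.shape)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
threshold = 1.0                               # illustrative fixed cutoff
s_cut = np.where(s >= threshold, s, 0.0)      # zero out small singular values
A_hat = (U * s_cut) @ Vt                      # reconstruct from kept modes

rank_kept = int((s_cut > 0).sum())            # effective rank retained
```

The quantum algorithm above achieves the analogous effect on the spectrum of the density matrix, so that phase estimation only ever reads out the retained (target) eigenvalues.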
