This paper discusses how gamma irradiation plants are putting the latest advances in computer and information technology to use for better process control, cost savings, and strategic advantages.
Some irradiator operations are gaining significant benefits by integrating computer technology and robotics with real-time information processing, multi-user databases, and communication networks. The paper reports on several irradiation facilities that are making good use of client/server LANs, user-friendly graphical interfaces, supervisory control and data acquisition (SCADA) systems, distributed I/O with real-time sensor devices, trending analysis, real-time product tracking, dynamic product scheduling, and automated dosimetry reading. These plants are lowering costs through fast, reliable reconciliation of dosimetry data, easier validation to GMP requirements, optimized production flow, and faster release of sterilized products to market.
There is a trend in the manufacturing sector towards total automation using “predictive process control”. Real-time verification of process parameters “on-the-run” allows control parameters to be adjusted appropriately, before the process strays out of limits. Applied to the gamma radiation process, this approach bases control on monitoring key parameters, such as exposure time, and making adjustments during the process to optimize quality and throughput. Dosimetry results are then used as a quality-control measurement rather than as the final monitor for release of the product, and are correlated with the irradiation process data to quickly and confidently reconcile variations. Ultimately, a parametric process-control system using responsive control, feedback, and verification will not only increase productivity and process efficiency but can also allow operation within tighter dose-control set points.
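The abstract stops at the concept; as a rough illustration, the sketch below shows the kind of on-the-run correction loop being described, in which conveyor dwell time is adjusted from a live dose-rate reading before the batch drifts out of specification. All set points, sensor values, and tolerances here are hypothetical, not figures from the paper.

```python
import random

# Hypothetical set points for an illustrative irradiation run.
target_dose = 25.0   # kGy, assumed release specification
tolerance = 0.02     # re-adjust when predicted dose deviates by > 2%
dwell = 2.5          # hours per pass, initial conveyor setting

for cycle in range(10):
    # Real-time sensor reading; source decay and product density cause drift.
    dose_rate = 10.0 * random.uniform(0.95, 1.05)        # kGy/h
    predicted = dose_rate * dwell                        # dose this pass
    if abs(predicted - target_dose) / target_dose > tolerance:
        dwell = target_dose / dose_rate                  # correct on-the-run
    # Dosimetry later confirms quality; it is no longer the release gate.
    print(f"cycle {cycle}: rate={dose_rate:.2f} kGy/h, dwell={dwell:.3f} h")
```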
Predicting the location where a protein resides within a cell is important in cell biology. Computational approaches to this problem have attracted increasing attention from the biomedical community. Among the protein features used to predict subcellular localization, features derived from the Gene Ontology (GO) have been shown to be superior to others. However, most work in this field considers only the presence or absence of predefined GO terms. We propose a method that derives information from the intrinsic structure of the GO graph. Each element of the feature vector represents the information content of a GO term annotating the protein under investigation, and a support vector machine is used as the classifier to test the extracted features. Evaluation experiments on three protein datasets show that our method improves eukaryotic and human subcellular location prediction accuracy by up to 1.1% over previous studies that also used GO-based features. In particular, when the cellular component annotation is absent, our method still achieves satisfactory results, with an overall accuracy of more than 87%.
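To make the feature construction concrete, here is a minimal sketch of an information-content-based GO feature vector. The term counts, GO identifiers, and vocabulary are toy inputs, not the datasets used in the paper, and the authors' exact weighting scheme may differ.

```python
import math

def information_content(term, term_counts, total_annotations):
    """IC(t) = -log p(t), where p(t) is the term's annotation frequency."""
    return -math.log(term_counts[term] / total_annotations)

def go_feature_vector(protein_terms, vocabulary, term_counts, total):
    """One element per vocabulary term: its information content if the term
    annotates the protein, else 0. Unlike presence/absence encoding, rarer
    (more informative) terms receive larger weights."""
    return [information_content(t, term_counts, total)
            if t in protein_terms else 0.0
            for t in vocabulary]

# Toy corpus: number of annotations carrying each GO term.
counts = {"GO:0005634": 500, "GO:0005737": 800, "GO:0016020": 150}
vocab = sorted(counts)
x = go_feature_vector({"GO:0005634", "GO:0016020"}, vocab, counts, 10_000)
print(x)   # approximately [3.00, 0.0, 4.20]
```

Vectors built this way would then be fed to the SVM classifier (for instance, scikit-learn's `SVC`).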
Combining information technology with separation sciences opens a new avenue to high sample throughput and is therefore of great interest for bypassing bottlenecks in catalyst screening with parallelized reactors or in reaction optimization using microtiter well plates. Multiplexing gas chromatography uses pseudo-random injection sequences derived from Hadamard matrices to perform rapid sample injections, giving a convoluted chromatogram that contains the information of a single sample or of several samples with similar analyte composition. The conventional chromatogram is obtained by applying the Hadamard transform with the known injection sequence; in the case of several samples, an averaged transformed chromatogram is obtained, which can be fed into a Gauss–Jordan deconvolution procedure to recover the single chromatograms of the individual samples. The performance of such a system depends on the modulation precision and on parameters such as the sequence length and the modulation interval. Here we demonstrate the effects of sequence length and modulation interval on the deconvoluted chromatogram, peak shapes, and peak integration for sequences between 9-bit (511 elements) and 13-bit (8191 elements) and modulation intervals Δt between 5 s and 500 ms, using a mixture of five components. Even for high-speed modulation at intervals of 500 ms, the chromatographic information is very well preserved, and the separation efficiency can be improved by very narrow sample injections. Furthermore, this study shows that relative peak areas in multiplexed chromatograms do not deviate from those of conventionally recorded chromatograms.
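The encode/decode principle can be demonstrated at small scale. The sketch below uses a 5-bit maximal-length sequence (31 elements) rather than the 9- to 13-bit sequences studied, and three synthetic peaks in place of real chromatographic data; it exploits the fact that, for such sequences, the inverse Hadamard transform reduces to a circular cross-correlation with the ±1 form of the injection sequence.

```python
import numpy as np

def m_sequence(taps, n):
    """0/1 maximal-length sequence from a Fibonacci LFSR, period 2**n - 1."""
    reg, seq = [1] * n, []
    for _ in range(2 ** n - 1):
        seq.append(reg[-1])
        fb = reg[taps[0] - 1] ^ reg[taps[1] - 1]
        reg = [fb] + reg[:-1]
    return np.array(seq)

N = 31                                   # 5-bit sequence: 2**5 - 1 elements
s = m_sequence((5, 3), 5)                # pseudo-random injection pattern
x = np.zeros(N)
x[[3, 8, 14]] = [1.0, 0.6, 0.3]          # three synthetic peaks

# The detector records the circular convolution of the injection pattern
# with the single-sample chromatogram.
y = np.array([sum(s[j] * x[(k - j) % N] for j in range(N)) for k in range(N)])

# Decoding: for a maximal-length sequence the Hadamard transform reduces to
# circular cross-correlation with the +/-1 sequence, scaled by 2/(N + 1).
w = 2 * s - 1
x_hat = np.array([2 / (N + 1) * sum(w[j] * y[(j + k) % N] for j in range(N))
                  for k in range(N)])
assert np.allclose(x_hat, x)             # original chromatogram recovered
```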
A chromoionophore-derived calix[4]crown, 1, possessing an effective signal-controllable function driven by metal-ion inputs, has been newly synthesized; this function is our main interest. By transforming receptor activation into a process that can be detected by an optical signal (i.e., a color change), the basic feature of antagonist–agonist competition can be reproduced readily and detected visually. Such a process would be particularly new within the field of optical read-out receptors. Further, from the standpoint of materials science, the controllable signal function may not only be welcome for molecular information processing but may also contribute to the design of new sensory materials.
In this paper, a novel cognitive system model is established based on formal concept analysis to describe human cognitive processes precisely. Two new operators, extent–intent and intent–extent, are introduced between an object and its attributes. By analyzing the necessary and sufficient relations between an object and some of its attributes, the concept of an information granule in human cognitive processes is investigated. Furthermore, theories for transforming an arbitrary information granule into necessary, sufficient, and necessary-and-sufficient information granules are carefully addressed. An algorithm for the transformation is constructed, providing an efficient approach to converting among information granules. To interpret and help understand the theories and the algorithm, an experimental computing program is designed and two cases are employed as case studies. Results for the small-scale case are calculated by the method presented in this paper; the large-scale case is calculated by the experimental computing program and validated against the proposed algorithm. The framework provides a novel and convenient tool for artificial intelligence research.
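For readers unfamiliar with formal concept analysis, the following toy sketch shows the classical derivation operators that the paper's extent–intent and intent–extent operators build on; the three-object context is invented for illustration.

```python
context = {               # object -> the set of attributes it possesses
    "o1": {"a", "b"},
    "o2": {"a", "c"},
    "o3": {"a", "b", "c"},
}

def all_attributes():
    return set().union(*context.values())

def extent_intent(objects):
    """Attributes shared by every object in the given set (its intent)."""
    sets = [context[g] for g in objects]
    return set.intersection(*sets) if sets else all_attributes()

def intent_extent(attributes):
    """Objects possessing every attribute in the given set (its extent)."""
    return {g for g, attrs in context.items() if set(attributes) <= attrs}

# A pair (A, B) with extent_intent(A) == B and intent_extent(B) == A is a
# formal concept -- a basic information granule of the cognitive model.
print(extent_intent({"o1", "o3"}))   # {'a', 'b'}
print(intent_extent({"a", "b"}))     # {'o1', 'o3'}
```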
This paper investigates a wholesale-price contract in a supply chain under an endogenous information structure. The supply chain consists of one supplier and one retailer during the selling season. The retailer does not know his selling cost but can spend resources to acquire information. The supplier offers a contract that induces the retailer to gather information, at cost β, and to generate more production orders. We find that there exists an upper bound on the information-gathering cost below which the supplier induces the retailer to gather information. An increasing cost of information gathering may decrease the order quantity and the wholesale price. Moreover, the cost β affects the expected profits of both parties: as the cost of information gathering increases, the supplier's expected profit falls, while the effect on the retailer's profit is ambiguous, depending on the distribution function and the interval of the selling-cost information. Finally, a numerical example illustrates the main results.
We propose a two-person game-theoretic model to study information-sharing decisions at an interim stage when information is incomplete. The two agents hold pieces of private information about the state of nature, and that information is improved by combining the pieces. Agents are both senders and receivers of information. An institutional arrangement fixes a transfer of wealth from any agent who lies about her private information. Within this model, we show that (1) there is a positive relation between information revelation and the size of the transfers, and (2) information revelation has a collective-action structure; in particular, an agent's incentive to reveal decreases with the amount of information disclosed by the other.
A common business strategy to promote product adoption in the software industry is to provide a free trial version, with limited functionality, of the commercial product in order to increase the installed user base. A larger user base raises the value of the software because of positive network effects. However, offering a free trial version may cannibalize some demand for the commercial software. This paper examines the tradeoff between network effects and the cannibalization effect, and aims to uncover the conditions under which firms should introduce a free trial product. We find that when network intensity is strong, it is more profitable for a software monopolist to offer a free trial than to segment the market with two versions of different quality. In addition, this paper solves the joint decision problem of finding the optimal quality for the firm's free trial software and the optimal price of its commercial product.
Multi-sample cluster analysis, the problem of grouping samples, is studied from an information-theoretic viewpoint via Akaike's Information Criterion (AIC). This criterion combines the maximum value of the likelihood with the number of parameters used in achieving that value. The multi-sample cluster problem is defined, and AIC is developed for this problem. The form of AIC is derived both in the multivariate analysis of variance (MANOVA) model and in the multivariate model with varying mean vectors and variance-covariance matrices. Numerical examples are presented for AIC and another criterion called w-square. The results demonstrate the utility of AIC in identifying the best clustering alternatives. This research was supported by Office of Naval Research Contract N00014-80-C-0408, Task NR042-443, and Army Research Office Contract DAAG 29-82-K-0155, at the University of Illinois at Chicago.
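As a concrete (univariate) illustration of the criterion, the sketch below scores three clustering alternatives of toy samples by AIC, using group means with a pooled variance as a one-dimensional stand-in for the MANOVA model; the data and groupings are invented.

```python
import math
from itertools import chain

samples = {
    "s1": [4.9, 5.1, 5.0],
    "s2": [5.2, 4.8, 5.1],
    "s3": [9.8, 10.1, 10.0],
}

def aic(grouping):
    """AIC = -2 * (maximized log-likelihood) + 2 * (free parameters)."""
    n = sum(len(samples[s]) for g in grouping for s in g)
    sse = 0.0
    for group in grouping:
        xs = list(chain.from_iterable(samples[s] for s in group))
        mean = sum(xs) / len(xs)
        sse += sum((x - mean) ** 2 for x in xs)
    var = sse / n                                  # pooled MLE variance
    loglik = -0.5 * n * (math.log(2 * math.pi * var) + 1)
    k = len(grouping) + 1                          # group means + variance
    return -2 * loglik + 2 * k

alternatives = [
    [["s1"], ["s2"], ["s3"]],                      # all samples separate
    [["s1", "s2"], ["s3"]],                        # s1 and s2 merged
    [["s1", "s2", "s3"]],                          # everything merged
]
best = min(alternatives, key=aic)                  # lowest AIC wins
print(best)                                        # [['s1', 's2'], ['s3']]
```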
Biorefineries can provide a product portfolio from renewable biomass similar to that of crude oil refineries. To operate biorefineries of any kind, however, the availability of biomass inputs is crucial and must be considered during planning. Here, we develop a planning approach that uses Geographic Information Systems (GIS) to account for spatially scattered biomass when optimizing a biorefinery’s location, capacity, and configuration. To deal with the challenges of a non-smooth objective function arising from the geographic data, higher dimensionality, and strict constraints, the planning problem is repeatedly decomposed by nesting an exact nonlinear program (NLP) inside an evolutionary strategy (ES) heuristic, which handles the spatial data from the GIS. We demonstrate the functionality of the algorithm and show how including spatial data improves the planning process by optimizing a synthesis gas biorefinery using this new planning approach.
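The nested decomposition can be miniaturized as follows: an outer (1+1)-evolution strategy perturbs the candidate site, while an inner, exactly solvable subproblem sizes the plant for each site. The biomass points, cost model, and closed-form inner optimum below are all invented stand-ins for the GIS data and the NLP in the paper.

```python
import math
import random

random.seed(1)

# Toy GIS layer: spatially scattered biomass points, (x, y) in km, tonnes/yr.
biomass = [((random.uniform(0, 100), random.uniform(0, 100)),
            random.uniform(50, 500)) for _ in range(200)]

def inner_capacity_problem(site):
    """Exact inner step: given a candidate site, size the plant optimally.
    Assumed profit(c) = margin * c - fixed * c**1.6 is concave in capacity c,
    so setting d(profit)/dc = 0 gives a closed-form optimum."""
    total_w = sum(w for _, w in biomass)
    haul = 0.01 * sum(w * math.dist(site, p) for p, w in biomass) / total_w
    revenue, fixed = 10.0, 0.5
    margin = revenue - haul                 # net value per tonne of capacity
    if margin <= 0:
        return 0.0
    c = (margin / (1.6 * fixed)) ** (1 / 0.6)
    return margin * c - fixed * c ** 1.6

# Outer (1+1)-evolution strategy over the non-smooth spatial landscape.
site = (50.0, 50.0)
best = inner_capacity_problem(site)
for _ in range(500):
    cand = (site[0] + random.gauss(0, 5), site[1] + random.gauss(0, 5))
    profit = inner_capacity_problem(cand)
    if profit > best:                       # keep only improving moves
        site, best = cand, profit
print(site, best)
```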