{"Package":"A3","Version":"1.0.0","Title":"Accurate, Adaptable, and Accessible Error Metrics for Predictive\nModels","Description":"Supplies tools for tabulating and analyzing the results of predictive models. The methods employed are applicable to virtually any predictive model and make comparisons between different methodologies straightforward.","Published":"2015-08-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"abbyyR","Version":"0.5.1","Title":"Access to Abbyy Optical Character Recognition (OCR) API","Description":"Get text from images of text using Abbyy Cloud Optical Character\n Recognition (OCR) API. Easily OCR images, barcodes, forms, documents with\n machine readable zones, e.g. passports. Get the results in a variety of formats\n including plain text and XML. To learn more about the Abbyy OCR API, see \n .","Published":"2017-04-12","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"abc","Version":"2.1","Title":"Tools for Approximate Bayesian Computation (ABC)","Description":"Implements several ABC algorithms for\n performing parameter estimation, model selection, and goodness-of-fit.\n Cross-validation tools are also available for measuring the\n accuracy of ABC estimates, and for calculating the\n misclassification probabilities of different models.","Published":"2015-05-05","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"abc.data","Version":"1.0","Title":"Data Only: Tools for Approximate Bayesian Computation (ABC)","Description":"Contains data which are used by functions of the 'abc' package.","Published":"2015-05-05","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"ABC.RAP","Version":"0.9.0","Title":"Array Based CpG Region Analysis Pipeline","Description":"It aims to identify candidate genes that are “differentially\n methylated” between cases and controls. 
It applies Student’s t-test and delta beta analysis to\n identify candidate genes containing multiple “CpG sites”.","Published":"2016-10-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ABCanalysis","Version":"1.2.1","Title":"Computed ABC Analysis","Description":"For a given data set, the package provides a novel method of computing precise limits to acquire subsets which are easily interpreted. Closely related to the Lorenz curve, the ABC curve visualizes the data by graphically representing the cumulative distribution function. Based on an ABC analysis the algorithm calculates, with the help of the ABC curve, the optimal limits by exploiting the mathematical properties pertaining to the distribution of the analyzed items. The data containing positive values is divided into three disjoint subsets A, B and C, with subset A comprising very profitable values, i.e. the largest data values (\"the important few\"), subset B comprising values where the yield equals the effort required to obtain it, and subset C comprising non-profitable values, i.e., the smallest data sets (\"the trivial many\"). The package is based on \"Computed ABC Analysis for rational Selection of most informative Variables in multivariate Data\", PLoS One. Ultsch, A., Lotsch, J. 
(2015) .","Published":"2017-03-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"abcdeFBA","Version":"0.4","Title":"ABCDE_FBA: A-Biologist-Can-Do-Everything of Flux Balance\nAnalysis with this package","Description":"Functions for Constraint Based Simulation using Flux\n Balance Analysis and informative analysis of the data generated\n during simulation.","Published":"2012-09-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ABCoptim","Version":"0.14.0","Title":"Implementation of Artificial Bee Colony (ABC) Optimization","Description":"An implementation of Karaboga's (2005) Artificial Bee Colony\n Optimization algorithm .\n This working version is a work in progress, which is\n why it has been implemented in pure R code. It was developed from the basic\n version programmed in C and distributed at the algorithm's official website.","Published":"2016-11-17","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"ABCp2","Version":"1.2","Title":"Approximate Bayesian Computational Model for Estimating P2","Description":"Tests the goodness of fit of a distribution of offspring to the Normal, Poisson, and Gamma distributions and estimates the proportional paternity of the second male (P2) based on the best-fit distribution.","Published":"2016-02-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"abcrf","Version":"1.5","Title":"Approximate Bayesian Computation via Random Forests","Description":"Performs Approximate Bayesian Computation (ABC) model choice and parameter inference via random forests.","Published":"2017-01-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"abctools","Version":"1.1.1","Title":"Tools for ABC Analyses","Description":"Tools for approximate Bayesian computation including summary statistic selection and assessing coverage.","Published":"2017-04-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"abd","Version":"0.2-8","Title":"The Analysis of Biological 
Data","Description":"The abd package contains data sets and sample code for The\n Analysis of Biological Data by Michael Whitlock and Dolph Schluter (2009;\n Roberts & Company Publishers).","Published":"2015-07-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"abf2","Version":"0.7-1","Title":"Load Gap-Free Axon ABF2 Files","Description":"Loads ABF2 files containing gap-free data from electrophysiological recordings, as created by Axon Instruments/Molecular Devices software such as pClamp 10.","Published":"2015-03-04","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"ABHgenotypeR","Version":"1.0.1","Title":"Easy Visualization of ABH Genotypes","Description":"Easy-to-use functions to visualize marker data\n from biparental populations. Useful for both analyzing and\n presenting genotypes in the ABH format.","Published":"2016-02-04","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"abind","Version":"1.4-5","Title":"Combine Multidimensional Arrays","Description":"Combine multidimensional arrays into a single array.\n This is a generalization of 'cbind' and 'rbind'. Works with\n vectors, matrices, and higher-dimensional arrays. Also\n provides functions 'adrop', 'asub', and 'afill' for manipulating,\n extracting and replacing data in arrays.","Published":"2016-07-21","License":"LGPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"abjutils","Version":"0.0.1","Title":"Useful Tools for Jurimetrical Analysis Used by the Brazilian\nJurimetrics Association","Description":"The Brazilian Jurimetrics Association (BJA or ABJ in Portuguese, see for more information) is a non-profit organization which aims to investigate and promote the use of statistics and probability in the study of Law and its institutions. This package implements general purpose tools used by BJA, such as functions for sampling and basic manipulation of Brazilian lawsuit identification numbers. 
It also implements functions for text cleaning, such as accentuation removal.","Published":"2017-01-04","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"abn","Version":"1.0.2","Title":"Modelling Multivariate Data with Additive Bayesian Networks","Description":"Bayesian network analysis is a form of probabilistic graphical modelling which derives from empirical data a directed acyclic graph, DAG, describing the dependency structure between random variables. An additive Bayesian network model consists of a DAG in which each node comprises a generalized linear model, GLM. Additive Bayesian network models are equivalent to Bayesian multivariate regression using graphical modelling; they generalise the usual multivariable regression, GLM, to multiple dependent variables. 'abn' provides routines to help determine optimal Bayesian network models for a given data set, where these models are used to identify statistical dependencies in messy, complex data. The additive formulation of these models is equivalent to multivariate generalised linear modelling (including mixed models with iid random effects). The usual term to describe this model selection process is structure discovery. The core functionality is concerned with model selection - determining the most robust empirical model of data from interdependent variables. Laplace approximations are used to estimate goodness-of-fit metrics and model parameters, and wrappers are also included to the INLA package which can be obtained from . The testing version is recommended; it can be downloaded by running: source(\"http://www.math.ntnu.no/inla/givemeINLA-testing.R\"). 
A comprehensive set of documented case studies, numerical accuracy/quality assurance exercises, and additional documentation are available from the 'abn' website.","Published":"2016-11-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"abodOutlier","Version":"0.1","Title":"Angle-Based Outlier Detection","Description":"Performs angle-based outlier detection on a given dataframe. Three methods are available: a full but slow implementation using all the data, which has cubic complexity; a fully randomized one, which is far more efficient; and one using k-nearest neighbours. These algorithms are especially well suited for high-dimensional data outlier detection.","Published":"2015-08-31","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"AbsFilterGSEA","Version":"1.5","Title":"Improved False Positive Control of Gene-Permuting GSEA with\nAbsolute Filtering","Description":"Gene-set enrichment analysis (GSEA) is popularly used to assess the enrichment of differential signal in a pre-defined gene-set without using a cutoff threshold for differential expression. The significance of enrichment is evaluated through a sample- or gene-permutation method. Although the sample-permutation approach is highly recommended due to its good false positive control, we must use the gene-permuting method if the number of samples is small. However, such gene-permuting GSEA (or preranked GSEA) generates many false positive gene-sets as the inter-gene correlation in each gene set increases. These false positives can be successfully reduced by filtering with the one-tailed absolute GSEA results. This package provides a function that performs gene-permuting GSEA calculation with or without the absolute filtering. 
Without filtering, users can perform (original) two-tailed or one-tailed absolute GSEA.","Published":"2016-08-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"AbSim","Version":"0.2.2","Title":"Time Resolved Simulations of Antibody Repertoires","Description":"Simulation methods for the evolution of antibody repertoires. The heavy and light chain variable region of both human and C57BL/6 mice can be simulated in a time-dependent fashion. Both single lineages using one set of V-, D-, and J-genes or full repertoires can be simulated. The algorithm begins with an initial V-D-J recombination event, starting the first phylogenetic tree. Upon completion, the main loop of the algorithm begins, with each iteration representing one simulated time step. Various mutation events are possible at each time step, contributing to a diverse final repertoire.","Published":"2017-06-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"abundant","Version":"1.1","Title":"High-Dimensional Principal Fitted Components and Abundant\nRegression","Description":"Fit and predict with the high-dimensional principal fitted\n components model. This model is described by Cook, Forzani, and Rothman (2012)\n\t.","Published":"2017-01-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ACA","Version":"1.0","Title":"Abrupt Change-Point or Aberration Detection in Point Series","Description":"Offers an interactive function for the detection of breakpoints in series. ","Published":"2016-03-10","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"acc","Version":"1.3.3","Title":"Exploring Accelerometer Data","Description":"Processes accelerometer data from uni-axial and tri-axial devices,\n and generates data summaries. 
Also includes functions to plot, analyze, and\n simulate accelerometer data.","Published":"2016-12-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"accelerometry","Version":"2.2.5","Title":"Functions for Processing Minute-to-Minute Accelerometer Data","Description":"A collection of functions that perform operations on time-series accelerometer data, such as identifying non-wear time, flagging minutes that are part of an activity bout, and finding the maximum 10-minute average count value. The functions are generally very flexible, allowing for a variety of algorithms to be implemented. Most of the functions are written in C++ for efficiency.","Published":"2015-05-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"accelmissing","Version":"1.1","Title":"Missing Value Imputation for Accelerometer Data","Description":"Imputation for the missing count values in accelerometer data. The methodology includes both parametric and semi-parametric multiple imputations under the zero-inflated Poisson lognormal model. This package also provides multiple functions to pre-process the accelerometer data prior to the missing data imputation. These include detecting wearing and non-wearing time, selecting valid days and subjects, and creating plots.","Published":"2016-07-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"AcceptanceSampling","Version":"1.0-5","Title":"Creation and Evaluation of Acceptance Sampling Plans","Description":"Provides functionality for creating and\n\tevaluating acceptance sampling plans. Sampling plans can be single,\n\tdouble or multiple.","Published":"2016-12-29","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"ACCLMA","Version":"1.0","Title":"ACC & LMA Graph Plotting","Description":"The main function is plotLMA(sourcefile,header), which takes\n a data set and plots the appropriate LMA and ACC graphs. If no\n sourcefile (a string) is passed, a manual data entry window is\n opened. 
The header parameter indicates by TRUE/FALSE (FALSE by\n default) whether the source CSV file has a header row. The data\n set should contain only one independent variable (X) and one\n dependent variable (Y) and can contain a weight for each\n observation.","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"accrual","Version":"1.2","Title":"Bayesian Accrual Prediction","Description":"Subject recruitment for medical research is challenging. Slow patient accrual leads to delays in research. Accrual monitoring during the process of recruitment is critical. Researchers need reliable tools to manage the accrual rate. We developed a Bayesian method that integrates the researcher's experience on previous trials and data from the current study, providing reliable predictions of the accrual rate for clinical studies. In this R package, we present functions for Bayesian accrual prediction which can be easily used by statisticians and clinical researchers.","Published":"2016-07-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"accrued","Version":"1.4.1","Title":"Data Quality Visualization Tools for Partially Accruing Data","Description":"Package for visualizing data quality of partially accruing data.","Published":"2016-08-24","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ACD","Version":"1.5.3","Title":"Categorical data analysis with complete or missing responses","Description":"Categorical data analysis with complete or missing responses.","Published":"2013-10-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ACDm","Version":"1.0.4","Title":"Tools for Autoregressive Conditional Duration Models","Description":"Package for Autoregressive Conditional Duration (ACD, Engle and Russell, 1998) models. Creates trade, price or volume durations from transactions (tic) data, performs diurnal adjustments, fits various ACD models and tests them. 
","Published":"2016-07-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"acebayes","Version":"1.4","Title":"Optimal Bayesian Experimental Design using the ACE Algorithm","Description":"Optimal Bayesian experimental design using the approximate coordinate exchange (ACE) algorithm.","Published":"2017-05-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"acepack","Version":"1.4.1","Title":"ACE and AVAS for Selecting Multiple Regression Transformations","Description":"Two nonparametric methods for multiple regression transform selection are provided.\n The first, Alternative Conditional Expectations (ACE), \n is an algorithm to find the fixed point of maximal\n correlation, i.e. it finds a set of transformed response variables that maximizes R^2\n using smoothing functions [see Breiman, L., and J.H. Friedman. 1985. \"Estimating Optimal Transformations\n for Multiple Regression and Correlation\". Journal of the American Statistical Association.\n 80:580-598. ].\n Also included is the Additivity Variance Stabilization (AVAS) method which works better than ACE when\n correlation is low [see Tibshirani, R.. 1986. \"Estimating Transformations for Regression via Additivity\n and Variance Stabilization\". Journal of the American Statistical Association. 83:394-405. \n ]. 
A good introduction to these two methods is in chapter 16 of\n Frank Harrell's \"Regression Modeling Strategies\" in the Springer Series in Statistics.","Published":"2016-10-29","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ACEt","Version":"1.8.0","Title":"Estimating Dynamic Heritability and Twin Model Comparison","Description":"Twin models that are able to estimate the dynamic behaviour of the variance components in the classical twin models with respect to age using B-splines and P-splines.","Published":"2017-05-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"acid","Version":"1.1","Title":"Analysing Conditional Income Distributions","Description":"Functions for the analysis of income distributions for subgroups of the population as defined by a set of variables like age, gender, region, etc. This entails a Kolmogorov-Smirnov test for a mixture distribution as well as functions for moments, inequality measures, entropy measures and polarisation measures of income distributions. This package thus aids the analysis of income inequality by offering tools for the exploratory analysis of income distributions at the disaggregated level. ","Published":"2016-02-01","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"acm4r","Version":"1.0","Title":"Align-and-Count Method comparisons of RFLP data","Description":"Fragment lengths or molecular weights from pairs of lanes are\n compared, and the number of matching bands is calculated using the\n Align-and-Count Method.","Published":"2013-12-28","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"ACMEeqtl","Version":"1.4","Title":"Estimation of Interpretable eQTL Effect Sizes Using a Log of\nLinear Model","Description":"We use a non-linear model, termed ACME, \n that reflects a parsimonious biological model for \n allelic contributions of cis-acting eQTLs.\n With a non-linear least-squares algorithm we \n estimate maximum likelihood parameters. 
The ACME model\n provides interpretable effect size estimates and\n p-values with well controlled Type-I error.\n Includes both R and (much faster) C implementations.","Published":"2017-03-11","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"acmeR","Version":"1.1.0","Title":"Implements ACME Estimator of Bird and Bat Mortality by Wind\nTurbines","Description":"Implementation of estimator ACME, described in Wolpert (2015), ACME: A \t\tPartially Periodic Estimator of Avian & Chiropteran Mortality at Wind\n Turbines (submitted). Unlike most other models, this estimator\n supports decreasing-hazard Weibull model for persistence;\n decreasing search proficiency as carcasses age; variable\n bleed-through at successive searches; and interval mortality\n estimates. The package provides, based on search data, functions\n for estimating the mortality inflation factor in Frequentist and\n Bayesian settings.","Published":"2015-09-10","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ACNE","Version":"0.8.1","Title":"Affymetrix SNP Probe-Summarization using Non-Negative Matrix\nFactorization","Description":"A summarization method to estimate allele-specific copy number signals for Affymetrix SNP microarrays using non-negative matrix factorization (NMF).","Published":"2015-10-27","License":"LGPL (>= 2.1)","snapshot_date":"2017-06-23"} {"Package":"acnr","Version":"1.0.0","Title":"Annotated Copy-Number Regions","Description":"Provides SNP array data from different types of\n copy-number regions. These regions were identified manually by the authors\n of the package and may be used to generate realistic data sets with known\n truth.","Published":"2017-04-18","License":"LGPL (>= 2.1)","snapshot_date":"2017-06-23"} {"Package":"acopula","Version":"0.9.2","Title":"Modelling dependence with multivariate Archimax (or any\nuser-defined continuous) copulas","Description":"Archimax copulas are mixture of Archimedean and EV copulas. 
The package provides definitions of several parametric families of generator and dependence function, computes the CDF and PDF, estimates parameters, tests for goodness of fit, generates random samples and checks copula properties for custom constructs. In the 2-dimensional case explicit formulas for the density are used, in contrast to higher dimensions, where all derivatives are linearly approximated. Several non-Archimax families (normal, FGM, Plackett) are provided as well. ","Published":"2013-07-31","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"AcousticNDLCodeR","Version":"1.0.1","Title":"Coding Sound Files for Use with NDL","Description":"Makes acoustic cues to use with the R packages 'ndl' or 'ndl2'. The package implements functions used\n in the PLoS ONE paper:\n Denis Arnold, Fabian Tomaschek, Konstantin Sering, Florence Lopez, and R. Harald Baayen (2017).\n Words from spontaneous conversational speech can be recognized with human-like accuracy by \n an error-driven learning algorithm that discriminates between meanings straight from smart \n acoustic features, bypassing the phoneme as recognition unit. PLoS ONE 12(4):e0174623\n https://doi.org/10.1371/journal.pone.0174623\n More details can be found in the paper and the supplement.\n 'ndl' is available on CRAN. 
'ndl2' is available by request from .","Published":"2017-04-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"acp","Version":"2.1","Title":"Autoregressive Conditional Poisson","Description":"Analysis of count data exhibiting autoregressive properties, using the Autoregressive Conditional Poisson model (ACP(p,q)) proposed by Heinen (2003).","Published":"2015-12-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"aCRM","Version":"0.1.1","Title":"Convenience functions for analytical Customer Relationship\nManagement","Description":"Convenience functions for data preparation and modeling often used in aCRM.","Published":"2014-03-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"AcrossTic","Version":"1.0-3","Title":"A Cost-Minimal Regular Spanning Subgraph with TreeClust","Description":"Construct minimum-cost regular spanning subgraph as part of a\n non-parametric two-sample test for equality of distribution.","Published":"2016-08-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"acrt","Version":"1.0.1","Title":"Autocorrelation Robust Testing","Description":"Functions for testing affine hypotheses on the regression coefficient vector in regression models with autocorrelated errors. ","Published":"2016-12-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"acs","Version":"2.0","Title":"Download, Manipulate, and Present American Community Survey and\nDecennial Data from the US Census","Description":"Provides a general toolkit for downloading, managing,\n analyzing, and presenting data from the U.S. Census, including SF1\n (Decennial short-form), SF3 (Decennial long-form), and the American\n Community Survey (ACS). Confidence intervals provided with ACS data\n are converted to standard errors to be bundled with estimates in\n complex acs objects. Package provides new methods to conduct\n standard operations on acs objects and present/plot data in\n statistically appropriate ways. 
Current version is 2.0 +/- .033.","Published":"2016-03-18","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ACSNMineR","Version":"0.16.8.25","Title":"Gene Enrichment Analysis from ACSN Maps or GMT Files","Description":"Compute and represent gene set enrichment or depletion from your\n data based on pre-saved maps from the Atlas of Cancer Signalling Networks (ACSN)\n or user imported maps. User imported maps must comply with the GMT format\n as defined by the Broad Institute, that is to say that the file should be tab-\n separated, that the first column should contain the module name, the second\n column can contain comments that will be overwritten with the number of genes\n in the module, and subsequent columns must contain the list of genes (HUGO\n symbols; tab-separated) inside the module. The gene set enrichment can be run\n with a hypergeometric test or Fisher's exact test, and can use multiple corrections.\n Visualization of data can be done either by barplots or heatmaps.","Published":"2016-09-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"acss","Version":"0.2-5","Title":"Algorithmic Complexity for Short Strings","Description":"Main functionality is to provide the algorithmic complexity for\n short strings, an approximation of the Kolmogorov Complexity of a short\n string using the coding theorem method (see ?acss). 
The database containing\n the complexity is provided in the data-only package acss.data; this package\n provides functions for accessing the data, such as prob_random, which returns the\n posterior probability that a given string was produced by a random process.\n In addition, two traditional (but problematic) measures of complexity are\n also provided: entropy and change complexity.","Published":"2014-11-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"acss.data","Version":"1.0","Title":"Data Only: Algorithmic Complexity of Short Strings (Computed via\nCoding Theorem Method)","Description":"Data only package providing the algorithmic complexity of short strings, computed using the coding theorem method. For a given set of symbols in a string, all possible or a large number of random samples of Turing machines (TM) with a given number of states (e.g., 5) and a number of symbols corresponding to the number of symbols in the strings were simulated until they reached a halting state or failed to end. This package contains data on 4.5 million strings from length 1 to 12 simulated on TMs with 2, 4, 5, 6, and 9 symbols. The complexity of the string corresponds to the distribution of the halting states of the TMs.","Published":"2014-04-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ACSWR","Version":"1.0","Title":"A Companion Package for the Book \"A Course in Statistics with R\"","Description":"A book designed to meet the requirements of master's students. Tattar, P.N., Suresh, R., and Manjunath, B.G. \"A Course in Statistics with R\", J. Wiley, ISBN 978-1-119-15272-9. ","Published":"2015-09-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ACTCD","Version":"1.1-0","Title":"Asymptotic Classification Theory for Cognitive Diagnosis","Description":"Cluster analysis for cognitive diagnosis based on the Asymptotic Classification Theory (Chiu, Douglas & Li, 2009; ). 
Given the sample statistic of sum-scores, cluster analysis techniques can be used to classify examinees into latent classes based on their attribute patterns. In addition to the algorithms used to classify data, three labeling approaches are proposed to label clusters so that examinees' attribute profiles can be obtained.","Published":"2016-12-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Actigraphy","Version":"1.3.2","Title":"Actigraphy Data Analysis","Description":"Functional linear modeling and analysis for actigraphy data. ","Published":"2016-01-15","License":"Apache License (== 2.0)","snapshot_date":"2017-06-23"} {"Package":"activity","Version":"1.1","Title":"Animal Activity Statistics","Description":"Provides functions to fit kernel density functions\n to animal activity time data; plot activity distributions;\n quantify overall levels of activity; statistically compare\n activity metrics through bootstrapping; and evaluate variation\n in linear variables with time (or other circular variables).","Published":"2016-09-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"activpalProcessing","Version":"1.0.2","Title":"Process activPAL Events Files","Description":"Performs estimation of physical activity and sedentary behavior variables from activPAL (PAL Technologies, Glasgow, Scotland) events files. See for more information on the activPAL.","Published":"2016-12-14","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"actuar","Version":"2.1-1","Title":"Actuarial Functions and Heavy Tailed Distributions","Description":"Functions and data sets for actuarial science:\n modeling of loss distributions; risk theory and ruin theory;\n simulation of compound models, discrete mixtures and compound\n hierarchical models; credibility theory. 
Support for many additional\n probability distributions to model insurance loss amounts and loss\n frequency: 19 continuous heavy tailed distributions; the\n Poisson-inverse Gaussian discrete distribution; zero-truncated and\n zero-modified extensions of the standard discrete distributions.\n Support for phase-type distributions commonly used to compute ruin\n probabilities.","Published":"2017-05-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ActuDistns","Version":"3.0","Title":"Functions for actuarial scientists","Description":"Computes the probability density function, hazard rate\n function, integrated hazard rate function and the quantile\n function for 44 commonly used survival models","Published":"2012-09-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"AcuityView","Version":"0.1","Title":"A Package for Displaying Visual Scenes as They May Appear to an\nAnimal with Lower Acuity","Description":"This code provides a simple method for representing a visual scene as it may be seen by an animal with less acute vision. When using (or for more information), please cite the original publication.","Published":"2017-05-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ada","Version":"2.0-5","Title":"The R Package Ada for Stochastic Boosting","Description":"Performs discrete, real, and gentle boost under both exponential and \n logistic loss on a given data set. The package ada provides a straightforward, \n well-documented, and broad boosting routine for classification, ideally suited \n for small to moderate-sized data sets.","Published":"2016-05-13","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"adabag","Version":"4.1","Title":"Applies Multiclass AdaBoost.M1, SAMME and Bagging","Description":"It implements Freund and Schapire's Adaboost.M1 algorithm and Breiman's Bagging\n\talgorithm using classification trees as individual classifiers. 
Once these classifiers have been\n\ttrained, they can be used to predict on new data. Also, cross-validation estimation of the error can\n\tbe done. Since version 2.0 the function margins() is available to calculate the margins for these\n\tclassifiers. Also, higher flexibility is achieved by giving access to the rpart.control() argument\n\tof 'rpart'. Four important new features were introduced in version 3.0: AdaBoost-SAMME (Zhu \n\tet al., 2009) is implemented, and a new function errorevol() shows the error of the ensembles as\n\ta function of the number of iterations. In addition, the ensembles can be pruned using the option \n\t'newmfinal' in the predict.bagging() and predict.boosting() functions, and the posterior probability of\n\teach class for observations can be obtained. Version 3.1 modifies the relative importance measure\n\tto take into account the gain of the Gini index given by a variable in each tree and the weights of \n\tthese trees. Version 4.0 includes the margin-based ordered aggregation for Bagging pruning (Guo\n\tand Boukir, 2013) and a function to auto-prune the 'rpart' tree. Moreover, three new plots are also \n\tavailable: importanceplot(), plot.errorevol() and plot.margins(). Version 4.1 allows prediction on \n\tunlabeled data. ","Published":"2015-10-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"adagio","Version":"0.6.5","Title":"Discrete and Global Optimization Routines","Description":"\n The R package 'adagio' provides methods and algorithms for\n discrete optimization and (evolutionary) global optimization.","Published":"2016-05-29","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"AdapEnetClass","Version":"1.2","Title":"A Class of Adaptive Elastic Net Methods for Censored Data","Description":"Provides new approaches to variable selection for the AFT model. 
","Published":"2015-10-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"adapr","Version":"1.0.2","Title":"Implementation of an Accountable Data Analysis Process","Description":"Tracks reading and writing within R scripts that are organized into a directed acyclic graph. Contains an interactive shiny application adaprApp(). Uses the git2r package, Git and file hashes to track version histories of input and output. See the package vignette for how to get started. V1.0.2 adds parallel execution of project scripts and a function map in the vignette. Makes the project specification argument last in order.","Published":"2017-02-02","License":"LGPL-2","snapshot_date":"2017-06-23"} {"Package":"adaptDA","Version":"1.0","Title":"Adaptive Mixture Discriminant Analysis","Description":"The adaptive mixture discriminant analysis (AMDA) makes it possible to adapt a model-based classifier to the situation where a class represented in the test set may not have been encountered earlier in the learning phase.","Published":"2014-09-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"AdaptFit","Version":"0.2-2","Title":"Adaptive Semiparametric Regression","Description":"Based on the function \"spm\" of the SemiPar package, fits\n semiparametric regression models with spatially adaptive\n penalized splines.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"AdaptFitOS","Version":"0.62","Title":"Adaptive Semiparametric Regression with Simultaneous Confidence\nBands","Description":"Fits semiparametric regression models with spatially adaptive penalized splines and computes simultaneous confidence bands.","Published":"2015-06-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"AdaptGauss","Version":"1.3.3","Title":"Gaussian Mixture Models (GMM)","Description":"Multimodal distributions can be modelled as a mixture of components. The model is derived using the Pareto Density Estimation (PDE) for an estimation of the pdf.
PDE has been designed in particular to identify groups/classes in a dataset. Precise limits for the classes can be calculated using Bayes' theorem. Verification of the model is possible by QQ plot, Chi-squared test and Kolmogorov-Smirnov test. The package is based on the publication of Ultsch, A., Thrun, M.C., Hansen-Goos, O., Lotsch, J. (2015) .","Published":"2017-03-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"adaptiveGPCA","Version":"0.1","Title":"Adaptive Generalized PCA","Description":"Implements adaptive gPCA, as described in: Fukuyama, J. (2017)\n . The package also includes functionality for applying\n the method to 'phyloseq' objects so that the method can be easily applied\n to microbiome data and a 'shiny' app for interactive visualization. ","Published":"2017-05-05","License":"AGPL-3","snapshot_date":"2017-06-23"} {"Package":"AdaptiveSparsity","Version":"1.4","Title":"Adaptive Sparsity Models","Description":"Implements Figueiredo's EM algorithm for adaptive sparsity (Jeffreys prior) (see Figueiredo, M.A.T., \"Adaptive sparseness for supervised learning,\" IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 9, pp. 1150-1159, Sept. 2003) and Wong's algorithm for adaptively sparse Gaussian graphical models (see Wong, Eleanor, Suyash Awate, and P. Thomas Fletcher. \"Adaptive Sparsity in Gaussian Graphical Models.\" In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pp. 311-319. 2013.)","Published":"2014-01-03","License":"LGPL (>= 3.0)","snapshot_date":"2017-06-23"} {"Package":"adaptivetau","Version":"2.2-1","Title":"Tau-Leaping Stochastic Simulation","Description":"Implements adaptive tau leaping to approximate the\n trajectory of a continuous-time stochastic process as\n described by Cao et al. (2007) The Journal of Chemical Physics\n . 
This package is based upon work\n supported by NSF DBI-0906041 and NIH K99-GM104158 to Philip\n Johnson and NIH R01-AI049334 to Rustom\n Antia.","Published":"2016-10-25","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"adaptMCMC","Version":"1.1","Title":"Implementation of a generic adaptive Monte Carlo Markov Chain\nsampler","Description":"This package provides an implementation of the generic\n adaptive Monte Carlo Markov chain sampler proposed by Vihola\n (2011).","Published":"2012-06-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"adaptsmoFMRI","Version":"1.1","Title":"Adaptive Smoothing of FMRI Data","Description":"This package contains R functions for estimating the blood\n oxygenation level dependent (BOLD) effect by using functional\n Magnetic Resonance Imaging (fMRI) data, based on adaptive Gauss\n Markov random fields, for real as well as simulated data. The\n implemented simulations make use of efficient Markov Chain\n Monte Carlo methods.","Published":"2013-01-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"adaptTest","Version":"1.0","Title":"Adaptive two-stage tests","Description":"The functions defined in this program serve for\n implementing adaptive two-stage tests. Currently, four tests\n are included: Bauer and Koehne (1994), Lehmacher and Wassmer\n (1999), Vandemeulebroecke (2006), and the horizontal\n conditional error function. User-defined tests can also be\n implemented. Reference: Vandemeulebroecke, An investigation of\n two-stage tests, Statistica Sinica 2006.","Published":"2009-10-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ADCT","Version":"0.1.0","Title":"Adaptive Design in Clinical Trials","Description":"Existing adaptive design methods in clinical trials. 
The package\n includes power, stopping boundaries (sample size) calculation functions for\n two-group group sequential designs, adaptive design with coprimary endpoints,\n biomarker-informed adaptive design, etc.","Published":"2016-11-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"addhaz","Version":"0.4","Title":"Binomial and Multinomial Additive Hazards Models","Description":"Functions to fit the binomial and multinomial additive hazards models and to calculate the contribution of diseases/conditions to the disability prevalence, as proposed by Nusselder and Looman (2004) .","Published":"2016-05-16","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"addhazard","Version":"1.1.0","Title":"Fit Additive Hazards Models for Survival Analysis","Description":"Contains tools to fit the additive hazards model to data from a cohort,\n random sampling, two-phase Bernoulli sampling and two-phase finite population sampling,\n as well as calibration tool to incorporate phase I auxiliary information into the\n two-phase data model fitting. This package provides regression parameter estimates and\n their model-based and robust standard errors. 
It also offers tools to make prediction of\n individual specific hazards.","Published":"2017-03-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"addinslist","Version":"0.2","Title":"Discover and Install Useful RStudio Addins","Description":"Browse through a continuously updated list of existing RStudio \n addins and install/uninstall their corresponding packages.","Published":"2016-09-29","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"additivityTests","Version":"1.1-4","Title":"Additivity Tests in the Two Way Anova with Single Sub-class\nNumbers","Description":"Implementation of the Tukey, Mandel, Johnson-Graybill, LBI, Tusell\n and modified Tukey non-additivity tests.","Published":"2014-12-24","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"addreg","Version":"2.0","Title":"Additive Regression for Discrete Data","Description":"Methods for fitting identity-link GLMs and GAMs to discrete data,\n using EM-type algorithms with more stable convergence properties than standard methods.","Published":"2015-05-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ADDT","Version":"2.0","Title":"Analysis of Accelerated Destructive Degradation Test Data","Description":"Accelerated destructive degradation tests (ADDT) are often used to collect necessary data for assessing the long-term properties of polymeric materials. Based on the collected data, a thermal index (TI) is estimated. The TI can be useful for material rating and comparison. This package implements the traditional method based on the least-squares method, the parametric method based on maximum likelihood estimation, and the semiparametric method based on spline methods, and the corresponding methods for estimating TI for polymeric materials. The traditional approach is a two-step approach that is currently used in industrial standards, while the parametric method is widely used in the statistical literature. 
The semiparametric method is newly developed. Both the parametric and semiparametric approaches allow one to perform statistical inference such as quantifying uncertainties in estimation, hypothesis testing, and predictions. Publicly available datasets are provided for illustration. More details can be found in Jin et al. (2017).","Published":"2016-11-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ade4","Version":"1.7-6","Title":"Analysis of Ecological Data : Exploratory and Euclidean Methods\nin Environmental Sciences","Description":"Multivariate data analysis and graphical display.","Published":"2017-03-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ade4TkGUI","Version":"0.2-9","Title":"'ade4' Tcl/Tk Graphical User Interface","Description":"A Tcl/Tk GUI for some basic functions in the 'ade4' package.","Published":"2015-11-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"adegenet","Version":"2.0.1","Title":"Exploratory Analysis of Genetic and Genomic Data","Description":"Toolset for the exploration of genetic and genomic data. Adegenet\n provides formal (S4) classes for storing and handling various genetic data,\n including genetic markers with varying ploidy and hierarchical population\n structure ('genind' class), alleles counts by populations ('genpop'), and\n genome-wide SNP data ('genlight'). It also implements original multivariate\n methods (DAPC, sPCA), graphics, statistical tests, simulation tools, distance\n and similarity measures, and several spatial methods. A range of both empirical\n and simulated datasets is also provided to illustrate various methods.","Published":"2016-02-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"adegraphics","Version":"1.0-8","Title":"An S4 Lattice-Based Package for the Representation of\nMultivariate Data","Description":"Graphical functionalities for the representation of multivariate data. 
It is a complete re-implementation of the functions available in the 'ade4' package.","Published":"2017-04-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"adehabitat","Version":"1.8.18","Title":"Analysis of Habitat Selection by Animals","Description":"A collection of tools for the analysis of habitat selection by animals.","Published":"2015-07-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"adehabitatHR","Version":"0.4.14","Title":"Home Range Estimation","Description":"A collection of tools for the estimation of animal home ranges.","Published":"2015-07-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"adehabitatHS","Version":"0.3.12","Title":"Analysis of Habitat Selection by Animals","Description":"A collection of tools for the analysis of habitat selection.","Published":"2015-07-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"adehabitatLT","Version":"0.3.21","Title":"Analysis of Animal Movements","Description":"A collection of tools for the analysis of animal movements.","Published":"2016-08-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"adehabitatMA","Version":"0.3.11","Title":"Tools to Deal with Raster Maps","Description":"A collection of tools to deal with raster maps.","Published":"2016-08-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"adephylo","Version":"1.1-10","Title":"Adephylo: Exploratory Analyses for the Phylogenetic Comparative\nMethod","Description":"Multivariate tools to analyze comparative data, i.e. a phylogeny\n and some traits measured for each taxon.","Published":"2016-12-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"AdequacyModel","Version":"2.0.0","Title":"Adequacy of Probabilistic Models and General Purpose\nOptimization","Description":"The main application concerns a new robust optimization package with two major contributions. 
The first contribution refers to the assessment of the adequacy of probabilistic models through a combination of several statistics, which measure the relative quality of statistical models for a given data set. The second one provides a general purpose optimization method based on meta-heuristics functions for maximizing or minimizing an arbitrary objective function.","Published":"2016-05-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"adespatial","Version":"0.0-8","Title":"Multivariate Multiscale Spatial Analysis","Description":"Tools for the multiscale spatial analysis of multivariate data.\n Several methods are based on the use of a spatial weighting matrix and its\n eigenvector decomposition (Moran's Eigenvectors Maps, MEM).","Published":"2017-03-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ADGofTest","Version":"0.3","Title":"Anderson-Darling GoF test","Description":"Anderson-Darling GoF test with p-value calculation based on Marsaglia's 2004 paper \"Evaluating the Anderson-Darling Distribution\"","Published":"2011-12-28","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"AdhereR","Version":"0.1.0","Title":"Adherence to Medications","Description":"Computation of adherence to medications from Electronic Health care \n Data and visualization of individual medication histories and adherence \n patterns. The package implements a set of S3 classes and\n functions consistent with current adherence guidelines and definitions. \n It allows the computation of different measures of\n adherence (as defined in the literature, but also several original ones), \n their publication-quality plotting,\n the interactive exploration of patient medication history and \n the real-time estimation of adherence given various parameter settings. 
","Published":"2017-04-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"adhoc","Version":"1.1","Title":"Calculate Ad Hoc Distance Thresholds for DNA Barcoding\nIdentification","Description":"Two functions to calculate intra- and interspecific pairwise distances, evaluate DNA barcoding identification error and calculate an ad hoc distance threshold for each particular reference library of DNA barcodes. Specimen identification at this ad hoc distance threshold (using the best close match method) will produce identifications with an estimated relative error probability that can be fixed by the user (e.g. 5%).","Published":"2017-03-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"adimpro","Version":"0.8.2","Title":"Adaptive Smoothing of Digital Images","Description":"Implements tools for manipulation of digital \n \t\timages and the Propagation Separation approach \n \t\tby Polzehl and Spokoiny (2006) \n for smoothing digital images, see Polzehl and Tabelow (2007)\n .","Published":"2016-09-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"AdjBQR","Version":"1.0","Title":"Adjusted Bayesian Quantile Regression Inference","Description":"Adjusted inference for Bayesian quantile regression based on\n asymmetric Laplace working likelihood, for details see Yang, Y., Wang, H.\n and He, X. 
(2015), Posterior inference in Bayesian quantile regression with\n asymmetric Laplace likelihood, International Statistical \n Review, 2015 .","Published":"2016-10-30","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"adlift","Version":"1.3-2","Title":"An adaptive lifting scheme algorithm","Description":"Adaptive Wavelet transforms for signal denoising","Published":"2012-11-06","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"ADM3","Version":"1.3","Title":"An Interpretation of the ADM method - automated detection\nalgorithm","Description":"Robust change point detection using ADM3 algorithm.","Published":"2013-12-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"AdMit","Version":"2.1.3","Title":"Adaptive Mixture of Student-t Distributions","Description":"Provides functions to perform the fitting of an adaptive mixture\n of Student-t distributions to a target density through its kernel function as described in\n Ardia et al. (2009) . The\n mixture approximation can then be used as the importance density in importance\n sampling or as the candidate density in the Metropolis-Hastings algorithm to\n obtain quantities of interest for the target density itself. ","Published":"2017-02-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"admixturegraph","Version":"1.0.2","Title":"Admixture Graph Manipulation and Fitting","Description":"Implements tools for building and visualising admixture graphs\n and for extracting equations from them. 
These equations can be compared to f-\n statistics obtained from data to test the consistency of a graph against data --\n for example by comparing the sign of f_4-statistics with the signs predicted by\n the graph -- and graph parameters (edge lengths and admixture proportions) can\n be fitted to observed statistics.","Published":"2016-12-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ADMMnet","Version":"0.1","Title":"Regularized Model with Selecting the Number of Non-Zeros","Description":"Fit linear and Cox models regularized with net (L1 and Laplacian), elastic-net (L1 and L2) or lasso (L1) penalty, and their adaptive forms, such as adaptive lasso and net adjusting for signs of linked coefficients. In addition, it treats the number of non-zero coefficients as another tuning parameter and simultaneously selects it together with the regularization parameter. The package uses a one-step coordinate descent algorithm and runs extremely fast by taking into account the sparsity structure of coefficients.","Published":"2015-12-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ADPclust","Version":"0.7","Title":"Fast Clustering Using Adaptive Density Peak Detection","Description":"An implementation of ADPclust clustering procedures (Fast\n Clustering Using Adaptive Density Peak Detection). The work builds on and\n improves the idea of Rodriguez and Laio (2014). \n ADPclust clusters data by finding density peaks in a density-distance plot \n generated from local multivariate Gaussian density estimation. It includes \n an automatic centroid selection and parameter optimization algorithm, which \n finds the number of clusters and cluster centroids by comparing average \n silhouettes on a grid of testing clustering results; it also includes a user-\n interactive algorithm that allows the user to manually select cluster \n centroids from a two-dimensional \"density-distance plot\". 
The \n research article associated with this package is: Wang, Xiao-Feng, and \n Yifan Xu (2015), \"Fast clustering using adaptive \n density peak detection,\" Statistical Methods in Medical Research. url:\n http://smm.sagepub.com/content/early/2015/10/15/0962280215609948.abstract. ","Published":"2016-10-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ads","Version":"1.5-2.2","Title":"Spatial point patterns analysis","Description":"Perform first- and second-order multi-scale analyses derived from Ripley's K-function, for univariate,\n multivariate and marked mapped data in rectangular, circular or irregular shaped sampling windows, with tests of \n statistical significance based on Monte Carlo simulations.","Published":"2015-01-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"AdvBinomApps","Version":"1.0","Title":"Upper Clopper-Pearson Confidence Limits for Burn-in Studies\nunder Additional Available Information","Description":"Functions to compute upper Clopper-Pearson confidence limits of early life failure probabilities and required sample sizes of burn-in studies under further available information, e.g. from other products or technologies. ","Published":"2016-04-07","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"advclust","Version":"0.4","Title":"Object Oriented Advanced Clustering","Description":"S4 Object Oriented framework for Advanced Fuzzy Clustering and Fuzzy Consensus Clustering. Techniques provided by this package are Fuzzy C-Means, Gustafson-Kessel (Babuska Version), Gath-Geva, Sum Voting Consensus, Product Voting Consensus, and Borda Voting Consensus. This package also provides visualization via Biplot and Radar Plot.","Published":"2016-09-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"adwave","Version":"1.1","Title":"Wavelet Analysis of Genomic Data from Admixed Populations","Description":"Implements wavelet-based approaches for describing population admixture. 
Principal Components Analysis (PCA) is used to define the population structure and produce a localized admixture signal for each individual. Wavelet summaries of the PCA output describe variation present in the data and can be related to population-level demographic processes. For more details, see Sanderson et al. (2015).","Published":"2015-06-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"AEDForecasting","Version":"0.20.0","Title":"Change Point Analysis in ARIMA Forecasting","Description":"Package to incorporate change point analysis in ARIMA forecasting.","Published":"2016-09-16","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"aemo","Version":"0.2.0","Title":"Download and Process AEMO Price and Demand Data","Description":"Download and process real time trading prices and demand data\n freely provided by the Australian Energy Market Operator (AEMO). Note that\n this includes a sample data set.","Published":"2016-08-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"AER","Version":"1.2-5","Title":"Applied Econometrics with R","Description":"Functions, data sets, examples, demos, and vignettes for the book\n Christian Kleiber and Achim Zeileis (2008),\n\t Applied Econometrics with R, Springer-Verlag, New York.\n\t ISBN 978-0-387-77316-2. (See the vignette \"AER\" for a package overview.)","Published":"2017-01-07","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"AF","Version":"0.1.4","Title":"Model-Based Estimation of Confounder-Adjusted Attributable\nFractions","Description":"Estimates the attributable fraction in different sampling designs\n adjusted for measured confounders using logistic regression (cross-sectional\n and case-control designs), conditional logistic regression (matched case-control\n design), Cox proportional hazard regression (cohort design with time-to-\n event outcome) and gamma-frailty model with a Weibull baseline hazard. 
The variance of the estimator is obtained by combining the delta\n method with the sandwich formula. Dahlqwist et al. (2016) .","Published":"2017-02-11","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"afc","Version":"1.4.0","Title":"Generalized Discrimination Score","Description":"This is an implementation of the Generalized Discrimination Score\n (also known as Two Alternatives Forced Choice Score, 2AFC) for various \n representations of forecasts and verifying observations. The Generalized \n Discrimination Score is a generic forecast verification framework which \n can be applied to any of the following verification contexts: dichotomous, \n polychotomous (ordinal and nominal), continuous, probabilistic, and ensemble.\n A comprehensive description of the Generalized Discrimination Score, including \n all equations used in this package, is provided by Mason and Weigel (2009) \n .","Published":"2017-05-18","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"afex","Version":"0.18-0","Title":"Analysis of Factorial Experiments","Description":"Convenience functions for analyzing factorial experiments using ANOVA or\n mixed models. aov_ez(), aov_car(), and aov_4() allow specification of between,\n within (i.e., repeated-measures), or mixed between-within (i.e., split-plot)\n ANOVAs for data in long format (i.e., one observation per row), aggregating\n multiple observations per individual and cell of the design. mixed() fits mixed\n models using lme4::lmer() and computes p-values for all fixed effects using\n either Kenward-Roger or Satterthwaite approximation for degrees of freedom (LMM\n only), parametric bootstrap (LMMs and GLMMs), or likelihood ratio tests (LMMs\n and GLMMs). 
afex uses type 3 sums of squares by default (imitating commercial\n statistical software).","Published":"2017-05-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"affluenceIndex","Version":"1.0","Title":"Affluence Indices","Description":"Computes the statistical indices of affluence (richness) and constructs bootstrap confidence intervals for these indices. Also computes the Wolfson polarization index.","Published":"2017-03-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"AFLPsim","Version":"0.4-2","Title":"Hybrid Simulation and Genome Scan for Dominant Markers","Description":"Hybrid simulation functions for dominant genetic data and genome scan methods.","Published":"2015-08-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"AFM","Version":"1.2.2","Title":"Atomic Force Microscope Image Analysis","Description":"Provides Atomic Force Microscope images analysis such as Power\n Spectral Density, roughness against lengthscale, experimental variogram and variogram models,\n fractal dimension and scale. The AFM images can be exported to STL format for 3D\n printing.","Published":"2016-09-01","License":"AGPL-3","snapshot_date":"2017-06-23"} {"Package":"afmToolkit","Version":"0.0.1","Title":"Functions for Atomic Force Microscope Force-Distance Curves\nAnalysis","Description":"Set of functions for analyzing Atomic Force Microscope (AFM) force-distance curves. It allows one to obtain the contact and unbinding points, perform the baseline correction, estimate the Young's modulus, fit up to two exponential decay functions to a stress-relaxation / creep experiment, and obtain adhesion energies. 
These operations can be done either over a single F-d curve or over a set of F-d curves in batch mode.","Published":"2017-04-03","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"aftgee","Version":"1.0-0","Title":"Accelerated Failure Time Model with Generalized Estimating\nEquations","Description":"This package features both rank-based estimates and least-squares\n\t\t estimates for the Accelerated Failure Time (AFT) model. \n\t\t For rank-based estimation, it provides approaches that include \n\t\t the computationally efficient Gehan weight and general \n\t\t weights such as the log-rank weight. \n\t\t For the least-squares estimation, the estimating equation is \n\t\t solved with Generalized Estimating Equations (GEE). \n\t\t Moreover, in multivariate cases, the dependence working \n\t\t correlation structure can be specified in GEE's setting.","Published":"2014-11-13","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"AGD","Version":"0.35","Title":"Analysis of Growth Data","Description":"Tools for NIHES course EP18 'Analysis of Growth Data', May 22-23\n 2012, Rotterdam.","Published":"2015-05-28","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"AggregateR","Version":"0.0.2","Title":"Aggregate Numeric, Date and Categorical Variables by an ID","Description":"Convenience functions for aggregating data frames. Currently mean, sum and variance are supported. For Date variables, recency and duration are supported. There is also support for dummy variables in predictive contexts. ","Published":"2015-11-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"agop","Version":"0.1-4","Title":"Aggregation Operators and Preordered Sets","Description":"Tools supporting multi-criteria decision making, including\n variable number of criteria, by means of aggregation operators\n and preordered sets. 
Possible applications include, but are not\n limited to, scientometrics and bibliometrics.","Published":"2014-09-14","License":"LGPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"agRee","Version":"0.5-0","Title":"Various Methods for Measuring Agreement","Description":"Bland-Altman plot and scatter plot with identity line \n for visualization and point and \n interval estimates for different metrics related to \n reproducibility/repeatability/agreement including\n the concordance correlation coefficient, \n intraclass correlation coefficient,\n within-subject coefficient of variation,\n smallest detectable difference, \n and mean normalized smallest detectable difference.","Published":"2016-07-07","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"Agreement","Version":"0.8-1","Title":"Statistical Tools for Measuring Agreement","Description":"This package computes several statistics for measuring\n agreement, for example, mean square deviation (MSD), total\n deviation index (TDI) or concordance correlation coefficient\n (CCC). It can be used for both continuous data and categorical\n data for multiple raters and multiple readings cases.","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"agricolae","Version":"1.2-4","Title":"Statistical Procedures for Agricultural Research","Description":"Original idea was presented in the thesis \"A statistical analysis tool for agricultural research\" to obtain the degree of Master of Science, National Engineering University (UNI), Lima-Peru. Some experimental data for the examples come from the CIP and other research. Agricolae offers extensive functionality on experimental design especially for agricultural and plant breeding experiments, which can also be useful for other purposes. It supports planning of lattice, Alpha, Cyclic, Complete Block, Latin Square, Graeco-Latin Squares, augmented block, factorial, split and strip plot designs. 
There are also various analysis facilities for experimental data, e.g. treatment comparison procedures and several non-parametric comparison tests, biodiversity indices and consensus clustering.","Published":"2016-06-12","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"agridat","Version":"1.12","Title":"Agricultural Datasets","Description":"Datasets from books, papers, and websites related to agriculture.\n Example analyses are included. Includes functions for plotting field\n designs and GGE biplots.","Published":"2015-06-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"agrmt","Version":"1.40.4","Title":"Calculate Agreement or Consensus in Ordered Rating Scales","Description":"Calculate agreement or consensus in ordered rating scales. The package implements van der Eijk's (2001) measure of agreement A, which can be used to describe agreement, consensus, or polarization among respondents. It also implements measures of consensus (dispersion) by Leik, Tastle and Wierman, Blair and Lacy, Kvalseth, Berry and Mielke, and Garcia-Montalvo and Reynal-Querol. Furthermore, an implementation of Galtung's AJUS system is provided to classify distributions, as well as a function to identify the position of multiple modes.","Published":"2016-04-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"AGSDest","Version":"2.3.1","Title":"Estimation in Adaptive Group Sequential Trials","Description":"Calculation of repeated confidence intervals as well as confidence\n intervals based on the stage-wise ordering in group sequential designs and\n adaptive group sequential designs. For adaptive group sequential designs\n the confidence intervals are based on the conditional rejection probability\n principle. 
Currently the procedures do not support the use of futility\n boundaries or more than one adaptive interim analysis.","Published":"2016-06-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"agsemisc","Version":"1.3-1","Title":"Miscellaneous plotting and utility functions","Description":"High-featured panel functions for bwplot and xyplot,\n some plot management helpers, various convenience functions","Published":"2014-07-31","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ahaz","Version":"1.14","Title":"Regularization for semiparametric additive hazards regression","Description":"Computationally efficient procedures for regularized\n estimation with the semiparametric additive hazards regression\n model.","Published":"2013-06-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"AHMbook","Version":"0.1.4","Title":"Functions and Data for the Book 'Applied Hierarchical Modeling\nin Ecology'","Description":"Provides functions and data sets to accompany the book 'Applied Hierarchical Modeling in Ecology: Analysis of distribution, abundance and species richness in R and BUGS' by Marc Kery and Andy Royle. The first volume appeared early in 2016 (ISBN: 978-0-12-801378-6, ); the second volume is in preparation and additional functions will be added to this package.","Published":"2017-05-19","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"AhoCorasickTrie","Version":"0.1.0","Title":"Fast Searching for Multiple Keywords in Multiple Texts","Description":"Aho-Corasick is an optimal algorithm for finding many\n keywords in a text. It can locate all matches in a text in O(N+M) time; i.e.,\n the time needed scales linearly with the number of keywords (N) and the size of\n the text (M). Compare this to the naive approach which takes O(N*M) time to loop\n through each pattern and scan for it in the text. 
This implementation builds the\n trie (the generic name of the data structure) and runs the search in a single\n function call. If you want to search multiple texts with the same trie, the\n function will take a list or vector of texts and return a list of matches to\n each text. By default, all 128 ASCII characters are allowed in both the keywords\n and the text. A more efficient trie is possible if the alphabet size can be\n reduced. For example, DNA sequences use at most 19 distinct characters and\n usually only 4; protein sequences use at most 26 distinct characters and usually\n only 20. UTF-8 (Unicode) matching is not currently supported.","Published":"2016-07-29","License":"Apache License 2.0","snapshot_date":"2017-06-23"} {"Package":"ahp","Version":"0.2.11","Title":"Analytic Hierarchy Process","Description":"Model and analyse complex decision making problems\n using the Analytic Hierarchy Process (AHP) by Thomas Saaty.","Published":"2017-01-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"AHR","Version":"1.4.2","Title":"Estimation and Testing of Average Hazard Ratios","Description":"Methods for estimation of multivariate average hazard ratios as\n defined by Kalbfleisch and Prentice. The underlying survival functions of the\n event of interest in each group can be estimated using either the (weighted)\n Kaplan-Meier estimator or the Aalen-Johansen estimator for the transition\n probabilities in Markov multi-state models. Right-censored and left-truncated\n data is supported. 
Moreover, the difference in restricted mean survival can be\n estimated.","Published":"2016-08-28","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"AICcmodavg","Version":"2.1-1","Title":"Model Selection and Multimodel Inference Based on (Q)AIC(c)","Description":"Functions to implement model selection and multimodel inference based on Akaike's information criterion (AIC) and the second-order AIC (AICc), as well as their quasi-likelihood counterparts (QAIC, QAICc) from various model object classes. The package implements classic model averaging for a given parameter of interest or predicted values, as well as a shrinkage version of model averaging parameter estimates or effect sizes. The package includes diagnostics and goodness-of-fit statistics for certain model types including those of 'unmarkedFit' classes estimating demographic parameters after accounting for imperfect detection probabilities. Some functions also allow the creation of model selection tables for Bayesian models of the 'bugs' and 'rjags' classes. Functions also implement model selection using BIC. 
Objects following model selection and multimodel inference can be formatted to LaTeX using 'xtable' methods included in the package.","Published":"2017-06-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"AID","Version":"2.0","Title":"Box-Cox Power Transformation","Description":"Performs Box-Cox power transformation for different purposes using graphical approaches, assesses the success of the transformation via tests and plots, and computes the mean and confidence interval for back-transformed data.","Published":"2017-05-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"aidar","Version":"1.0.0","Title":"Tools for reading AIDA (http://aida.freehep.org/) files into R","Description":"Read objects from the AIDA file and make them available\n as dataframes in R.","Published":"2013-12-11","License":"LGPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"AIG","Version":"0.1.6","Title":"Automatic Item Generator","Description":"A collection of Automatic Item Generators used mainly for\n psychological research. This package can generate linear syllogistic reasoning,\n arithmetic and 2D/3D/Double 3D spatial reasoning items. It is recommended for research\n purposes only.","Published":"2017-06-01","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"AIM","Version":"1.01","Title":"AIM: adaptive index model","Description":"R functions for adaptively constructing index models for\n continuous, binary and survival outcomes. Implementation\n requires loading the R package \"survival\".","Published":"2010-04-05","License":"LGPL-2","snapshot_date":"2017-06-23"} {"Package":"aimPlot","Version":"1.0.0","Title":"Create Pie Like Plot for Completeness","Description":"Create a pie like plot to visualise whether the aim or several aims of a\n project are achieved or close to being achieved, i.e. the aim is achieved when the point is at the\n center of the pie plot. Imagine it's like a dartboard and the center means 100%\n completeness/achievement.
Achievement can also be understood as 100%\n coverage. The standard distribution of completeness allocated in the pie plot\n is 50%, 80% and 100% completeness.","Published":"2016-04-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"airGR","Version":"1.0.5.12","Title":"Suite of GR Hydrological Models for Precipitation-Runoff\nModelling","Description":"Hydrological modelling tools developed\n at Irstea-Antony (HBAN Research Unit, France). The package includes several conceptual\n rainfall-runoff models (GR4H, GR4J, GR5J, GR6J, GR2M, GR1A), a snowmelt module (CemaNeige)\n and the associated functions for their calibration and evaluation. Use help(airGR) for package description.","Published":"2017-01-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ajv","Version":"1.0.0","Title":"Another JSON Schema Validator","Description":"A thin wrapper around the 'ajv' JSON validation package for\n JavaScript. See for details.","Published":"2017-04-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Ake","Version":"1.0","Title":"Associated Kernel Estimations","Description":"Continuous and discrete (count or categorical) estimation of density, probability mass function (p.m.f.) and regression functions are performed using associated kernels. The cross-validation technique and the local Bayesian procedure are also implemented for bandwidth selection.","Published":"2015-03-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"akima","Version":"0.6-2","Title":"Interpolation of Irregularly and Regularly Spaced Data","Description":"Several cubic spline interpolation methods of H. Akima for irregular and\n regular gridded data are available through this package, both for the bivariate case\n (irregular data: ACM 761, regular data: ACM 760) and univariate case (ACM 433 and ACM 697).\n Linear interpolation of irregular gridded data is also covered by reusing D. J. Renka's\n triangulation code, which is part of Akima's Fortran code.
A bilinear interpolator\n for regular grids was also added for comparison with the bicubic interpolator on\n regular grids.","Published":"2016-12-20","License":"ACM | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"akmeans","Version":"1.1","Title":"Adaptive Kmeans algorithm based on threshold","Description":"Adaptive K-means algorithm with various threshold settings.\n It supports two distance metrics: \n Euclidean distance and Cosine distance (1 - cosine similarity).\n In version 1.1, it contains one more threshold condition.","Published":"2014-05-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ALA4R","Version":"1.5.6","Title":"Atlas of Living Australia (ALA) Data and Resources in R","Description":"The Atlas of Living Australia (ALA) provides tools to enable users\n of biodiversity information to find, access, combine and visualise data on\n Australian plants and animals; these have been made available from\n . ALA4R provides a subset of the tools to be\n directly used within R. It enables the R community to directly access data\n and resources hosted by the ALA. Our goal is to enable outputs (e.g.\n observations of species) to be queried and output in a range of standard\n formats.","Published":"2017-02-18","License":"MPL-2.0","snapshot_date":"2017-06-23"} {"Package":"alabama","Version":"2015.3-1","Title":"Constrained Nonlinear Optimization","Description":"Augmented Lagrangian Adaptive Barrier Minimization\n Algorithm for optimizing smooth nonlinear objective functions\n with constraints.
Linear or nonlinear equality and inequality\n constraints are allowed.","Published":"2015-03-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"alakazam","Version":"0.2.7","Title":"Immunoglobulin Clonal Lineage and Diversity Analysis","Description":"Provides immunoglobulin (Ig) sequence lineage reconstruction,\n diversity profiling, and amino acid property analysis.","Published":"2017-06-15","License":"CC BY-SA 4.0","snapshot_date":"2017-06-23"} {"Package":"ald","Version":"1.1","Title":"The Asymmetric Laplace Distribution","Description":"Provides the density, distribution function, quantile function, \n random number generator, likelihood function, moments and maximum likelihood estimators for a given sample, all for\n the three-parameter Asymmetric Laplace Distribution defined \n in Koenker and Machado (1999). This is a special case of the skewed family of distributions\n available in Galarza (2016), useful for quantile regression. ","Published":"2016-06-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ALDqr","Version":"1.0","Title":"Quantile Regression Using Asymmetric Laplace Distribution","Description":"EM algorithm for estimation of parameters and other methods in a quantile regression. ","Published":"2017-01-22","License":"GPL (>= 3.0)","snapshot_date":"2017-06-23"} {"Package":"aLFQ","Version":"1.3.4","Title":"Estimating Absolute Protein Quantities from Label-Free LC-MS/MS\nProteomics Data","Description":"Determination of absolute protein quantities is necessary for multiple applications, such as mechanistic modeling of biological systems. Quantitative liquid chromatography tandem mass spectrometry (LC-MS/MS) proteomics can measure relative protein abundance on a system-wide scale. To estimate absolute quantitative information using these relative abundance measurements requires additional information such as heavy-labeled references of known concentration.
Multiple methods have been developed using different references and strategies; some are easily available whereas others require more effort on the user's end. Hence, we believe the field might benefit from making some of these methods available under an automated framework, which also facilitates validation of the chosen strategy. We have implemented the most commonly used absolute label-free protein abundance estimation methods for LC-MS/MS modes quantifying on either MS1-, MS2-levels or spectral counts together with validation algorithms to enable automated data analysis and error estimation. Specifically, we used Monte Carlo cross-validation and bootstrapping for model selection and imputation of proteome-wide absolute protein quantity estimation. Our open-source software is written in the statistical programming language R and validated and demonstrated on a synthetic sample. ","Published":"2017-03-23","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"alfred","Version":"0.1.1","Title":"Downloading Time Series from ALFRED Database for Various\nVintages","Description":"Provides direct access to the ALFRED () and FRED () databases.\n Its functions return tidy data frames for different releases of the specified time series. \n Note that this product uses the FRED© API but is not endorsed or certified by the Federal Reserve Bank of St. Louis.","Published":"2017-06-10","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"AlgDesign","Version":"1.1-7.3","Title":"Algorithmic Experimental Design","Description":"Algorithmic experimental designs. Calculates exact and\n approximate theory experimental designs for D, A, and I\n criteria. Very large designs may be created. Experimental\n designs may be blocked or blocked designs created from a\n candidate list, using several criteria.
The blocking can be\n done when whole and within plot factors interact.","Published":"2014-10-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"AlgebraicHaploPackage","Version":"1.2","Title":"Haplotype Two Snips Out of a Paired Group of Patients","Description":"Two unordered pairs of data from two different snips positions are haplotyped by resolving a small number of closed equations.","Published":"2015-10-31","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"algorithmia","Version":"0.0.2","Title":"Allows you to Easily Interact with the Algorithmia Platform","Description":"The company, Algorithmia, houses the largest marketplace of online\n algorithms. This package essentially holds a bunch of REST wrappers that\n make it very easy to call algorithms in the Algorithmia platform and access\n files and directories in the Algorithmia data API. To learn more about the\n services they offer and the algorithms in the platform visit\n . More information for developers can be found at\n .","Published":"2016-09-13","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"algstat","Version":"0.0.2","Title":"Algebraic statistics in R","Description":"algstat provides functionality for algebraic statistics in R.\n Current applications include exact inference in log-linear models for\n contingency table data, analysis of ranked and partially ranked data, and\n general purpose tools for multivariate polynomials, building on the mpoly\n package. To aid in the process, algstat has ports to Macaulay2, Bertini,\n LattE-integrale and 4ti2.","Published":"2014-12-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"AlignStat","Version":"1.3.1","Title":"Comparison of Alternative Multiple Sequence Alignments","Description":"Methods for comparing two alternative multiple \n sequence alignments (MSAs) to determine whether they align homologous residues in \n the same columns as one another.
It then classifies similarities and differences \n into conserved gaps, conserved sequence, merges, splits or shifts of one MSA relative \n to the other. Summarising these categories for each MSA column yields information \n on which sequence regions are agreed upon by both MSAs, and which differ. Several \n plotting functions enable easy visualisation of the comparison data for analysis.","Published":"2016-09-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"alineR","Version":"1.1.3","Title":"Alignment of Phonetic Sequences Using the 'ALINE' Algorithm","Description":"Functions are provided to calculate the 'ALINE' Distance between words. The score is based on phonetic features represented using the Unicode-compliant International Phonetic Alphabet (IPA). Parameterized feature weights are used to determine the optimal alignment and functions are provided to estimate optimum values.","Published":"2016-04-01","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ALKr","Version":"0.5.3.1","Title":"Generate Age-Length Keys for fish populations","Description":"A collection of functions that implement several algorithms for\n generating age-length keys for fish populations from incomplete data.","Published":"2014-02-26","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"allan","Version":"1.01","Title":"Automated Large Linear Analysis Node","Description":"Automated fitting of linear regression models and a\n stepwise routine.","Published":"2012-10-29","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"allanvar","Version":"1.1","Title":"Allan Variance Analysis","Description":"A collection of tools for stochastic sensor error\n characterization using the Allan Variance technique originally\n developed by D.
Allan.","Published":"2015-07-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"alleHap","Version":"0.9.7","Title":"Allele Imputation and Haplotype Reconstruction from Pedigree\nDatabases","Description":"Tools to simulate alphanumeric alleles, impute genetic missing data and reconstruct non-recombinant haplotypes from pedigree databases in a deterministic way. Allelic simulations can be implemented taking into account many factors (such as number of families, markers, alleles per marker,\n probability and proportion of missing genotypes, recombination rate, etc).\n Genotype imputation can be used with simulated datasets or real databases (previously loaded in .ped format). Haplotype reconstruction can be carried\n out even with missing data, since the program firstly imputes each family genotype (without a reference panel), to later reconstruct the corresponding\n haplotypes for each family member. All this considering that each individual (due to meiosis) should unequivocally have two alleles per marker (one inherited\n from each parent) and thus imputation and reconstruction results can be deterministically calculated.","Published":"2016-07-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"allelematch","Version":"2.5","Title":"Identifying unique multilocus genotypes where genotyping error\nand missing data may be present","Description":"This package provides tools for the identification of unique multilocus genotypes when both genotyping error and missing data may be present. The package is targeted at those working with large datasets and databases containing multiple samples of each individual, a situation that is common in conservation genetics, and particularly in non-invasive wildlife sampling applications. Functions explicitly incorporate missing data, and can tolerate allele mismatches created by genotyping error.
If you use this tool, please cite the package using the journal article in Molecular Ecology Resources (Galpern et al., 2012). Please use citation('allelematch') to find this. Due to changing CRAN policy, and the size and compile time of the vignettes, they can no longer be distributed with this package. Please contact the package primary author, or visit the allelematch site for a complete vignette (http://nricaribou.cc.umanitoba.ca/allelematch/). For users with access to academic literature, tutorial material is also available as supplementary material to the article describing this software. ","Published":"2014-09-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"AlleleRetain","Version":"1.3.1","Title":"Allele Retention, Inbreeding, and Demography","Description":"Simulate the effect of management or demography on allele\n retention and inbreeding accumulation in bottlenecked\n populations of animals with overlapping generations.","Published":"2013-06-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"allelic","Version":"0.1","Title":"A fast, unbiased and exact allelic exact test","Description":"This is the implementation in R+C of a new association\n test described in \"A fast, unbiased and exact allelic exact\n test for case-control association studies\" (Submitted). 
It\n appears that in most cases the classical chi-square test used\n for testing for allelic association on genotype data is biased.\n Our test is unbiased and exact, yet fast through careful\n optimization.","Published":"2006-05-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"AllPossibleSpellings","Version":"1.1","Title":"Computes all of a word's possible spellings","Description":"Contains functions possSpells.fnc and\n batch.possSpells.fnc.","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"alluvial","Version":"0.1-2","Title":"Alluvial Diagrams","Description":"Creating alluvial diagrams (also known as parallel sets plots) for multivariate\n and time series-like data.","Published":"2016-09-09","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"alphabetr","Version":"0.2.2","Title":"Algorithms for High-Throughput Sequencing of Antigen-Specific T\nCells","Description":"Provides algorithms for frequency-based pairing of alpha-beta T\n cell receptors.","Published":"2017-01-28","License":"AGPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"alphahull","Version":"2.1","Title":"Generalization of the Convex Hull of a Sample of Points in the\nPlane","Description":"Computation of the alpha-shape and alpha-convex\n hull of a given sample of points in the plane. The concepts of\n alpha-shape and alpha-convex hull generalize the definition of\n the convex hull of a finite set of points. The programming is\n based on the duality between the Voronoi diagram and Delaunay\n triangulation.
The package also includes a function that\n returns the Delaunay mesh of a given sample of points and its\n dual Voronoi diagram in one single object.","Published":"2016-02-15","License":"file LICENSE","snapshot_date":"2017-06-23"} {"Package":"alphaOutlier","Version":"1.2.0","Title":"Obtain Alpha-Outlier Regions for Well-Known Probability\nDistributions","Description":"Given the parameters of a distribution, the package uses the concept of alpha-outliers by Davies and Gather (1993) to flag outliers in a data set. See Davies, L.; Gather, U. (1993): The identification of multiple outliers, JASA, 88(423), 782-792, for details.","Published":"2016-09-09","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"alphashape3d","Version":"1.2","Title":"Implementation of the 3D Alpha-Shape for the Reconstruction of\n3D Sets from a Point Cloud","Description":"Implementation in R of the alpha-shape of a finite set of points in the three-dimensional space. The alpha-shape generalizes the convex hull and allows one to recover the shape of non-convex and even non-connected sets in 3D, given a random sample of points taken from them. Besides the computation of the alpha-shape, this package provides users with functions to compute the volume of the alpha-shape, identify the connected components and facilitate the three-dimensional graphical visualization of the estimated set. ","Published":"2016-02-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"alr3","Version":"2.0.5","Title":"Data to accompany Applied Linear Regression 3rd edition","Description":"This package is a companion to the textbook S. Weisberg (2005), \n \"Applied Linear Regression,\" 3rd edition, Wiley. It includes all the\n data sets discussed in the book (except one), and a few functions that \n are tailored to the methods discussed in the book. As of version 2.0.0,\n this package depends on the car package. Many functions formerly \n in alr3 have been renamed and now reside in car.
\n Data files have been lightly modified to make some data columns row labels.","Published":"2011-10-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"alr4","Version":"1.0.5","Title":"Data to accompany Applied Linear Regression 4th edition","Description":"This package is a companion to the textbook S. Weisberg (2014), \n \"Applied Linear Regression,\" 4th edition, Wiley. It includes all the\n data sets discussed in the book and one function to access the textbook's\n website. \n This package depends on the car package. Many data files in this package\n are included in the alr3 package as well, so only one of them should be\n loaded.","Published":"2014-09-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ALS","Version":"0.0.6","Title":"Multivariate Curve Resolution Alternating Least Squares\n(MCR-ALS)","Description":"Alternating least squares is often used to resolve\n components contributing to data with a bilinear structure; the\n basic technique may be extended to alternating constrained\n least squares.
Commonly applied constraints include\n unimodality, non-negativity, and normalization of components.\n Several data matrices may be decomposed simultaneously by\n assuming that one of the two matrices in the bilinear\n decomposition is shared between datasets.","Published":"2015-08-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ALSCPC","Version":"1.0","Title":"Accelerated line search algorithm for simultaneous orthogonal\ntransformation of several positive definite symmetric matrices\nto nearly diagonal form","Description":"Uses the accelerated line search algorithm to simultaneously diagonalize a set of symmetric positive definite matrices.","Published":"2013-09-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ALSM","Version":"0.2.0","Title":"Companion to Applied Linear Statistical Models","Description":"Functions and data sets presented in Applied Linear Statistical Models Fifth Edition (Chapters 1-9 and 16-25), Michael H. Kutner; Christopher J. Nachtsheim; John Neter; William Li, 2005 (ISBN-10: 0071122214, ISBN-13: 978-0071122214), that do not exist in R are gathered in this package. The whole book will be covered in the next versions.","Published":"2017-03-07","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"alterryx","Version":"0.2.0","Title":"An 'API' Client for the 'Alteryx' Gallery","Description":"A tool to access each of the 'Alteryx' Gallery 'API' endpoints.\n Users can queue jobs, poll job status, and retrieve application output as\n a data frame. You will need an 'Alteryx' Server license and have 'Alteryx'\n Gallery running to utilize this package.
The 'API' is accessed through the\n 'URL' that you set up for the server running 'Alteryx' Gallery and more\n information on the endpoints can be found at\n .","Published":"2017-03-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"altmeta","Version":"2.2","Title":"Alternative Meta-Analysis Methods","Description":"Provides alternative statistical methods for meta-analysis, including new heterogeneity tests and measures that are robust to outliers.","Published":"2016-09-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ALTopt","Version":"0.1.1","Title":"Optimal Experimental Designs for Accelerated Life Testing","Description":"Creates the optimal (D, U and I) designs for accelerated life\n testing with right censoring or interval censoring. It uses a generalized \n linear model (GLM) approach to derive the asymptotic variance-covariance \n matrix of regression coefficients. The failure time distribution is assumed \n to follow a Weibull distribution with a known shape parameter, and log-linear \n link functions are used to model the relationship between failure time \n parameters and stress variables. The acceleration model may have multiple \n stress factors, although most ALTs involve only two or fewer stress factors.
\n ALTopt package also provides several plotting functions including contour plot,\n Fraction of Use Space (FUS) plot and Variance Dispersion graphs of Use Space\n (VDUS) plot.","Published":"2015-08-28","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"amap","Version":"0.8-14","Title":"Another Multidimensional Analysis Package","Description":"Tools for Clustering and Principal Component Analysis\n (With robust methods, and parallelized functions).","Published":"2014-12-17","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"AMAP.Seq","Version":"1.0","Title":"Compare Gene Expressions from 2-Treatment RNA-Seq Experiments","Description":"An Approximated Most Average Powerful Test with Optimal\n FDR Control with Application to RNA-seq Data","Published":"2012-06-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"AMCP","Version":"0.0.4","Title":"A Model Comparison Perspective","Description":"Accompanies \"Designing experiments and \n analyzing data: A model comparison perspective\" (3rd ed.) by \n Maxwell, Delaney, & Kelley (forthcoming from Routledge). \n Contains all of the data sets in the book's chapters and \n end-of-chapter exercises. Information about the book is available at \n .","Published":"2017-02-18","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"AMCTestmakeR","Version":"0.1.0","Title":"Generate LaTeX Code for Auto-Multiple-Choice (AMC)","Description":"Generate code for use with the Optical Mark Recognition free software Auto Multiple Choice (AMC). 
More specifically, this package provides functions that use as input the question and answer texts, and output the LaTeX code for AMC.","Published":"2017-03-29","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ameco","Version":"0.2.7","Title":"European Commission Annual Macro-Economic (AMECO) Database","Description":"Annual macro-economic database provided by the European Commission.","Published":"2017-05-29","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"amei","Version":"1.0-7","Title":"Adaptive Management of Epidemiological Interventions","Description":"\n This package provides a flexible statistical framework for generating optimal \n epidemiological interventions that are designed to minimize the total expected\n cost of an emerging epidemic while simultaneously propagating uncertainty regarding \n underlying disease parameters through to the decision process via Bayesian posterior\n inference. The strategies produced through this framework are adaptive: vaccination \n schedules are iteratively adjusted to reflect the anticipated trajectory of the \n epidemic given the current population state and updated parameter estimates.","Published":"2013-12-13","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"Amelia","Version":"1.7.4","Title":"A Program for Missing Data","Description":"A tool that \"multiply imputes\" missing data in a single cross-section\n (such as a survey), from a time series (like variables collected for\n each year in a country), or from a time-series-cross-sectional data\n set (such as collected by years for each of several countries).\n Amelia II implements our bootstrapping-based algorithm that gives\n essentially the same answers as the standard IP or EMis approaches,\n is usually considerably faster than existing approaches and can\n handle many more variables. 
Unlike Amelia I and other statistically\n rigorous imputation software, it virtually never crashes (but please\n let us know if you find to the contrary!). The program also\n generalizes existing approaches by allowing for trends in time series\n across observations within a cross-sectional unit, as well as priors\n that allow experts to incorporate beliefs they have about the values\n of missing cells in their data. Amelia II also includes useful\n diagnostics of the fit of multiple imputation models. The program\n works from the R command line or via a graphical user interface that\n does not require users to know R.","Published":"2015-12-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"amen","Version":"1.3","Title":"Additive and Multiplicative Effects Models for Networks and\nRelational Data","Description":"Analysis of dyadic network and relational data using additive and\n multiplicative effects (AME) models. The basic model includes\n regression terms, the covariance structure of the social relations model\n (Warner, Kenny and Stoto (1979) , \n Wong (1982) ), and multiplicative factor\n models (Hoff(2009) ). \n Four different link functions accommodate different\n relational data structures, including binary/network data (bin), normal\n relational data (nrm), ordinal relational data (ord) and data from\n fixed-rank nomination schemes (frn). Several of these link functions are\n discussed in Hoff, Fosdick, Volfovsky and Stovel (2013) \n . Development of this\n software was supported in part by NIH grant R01HD067509.","Published":"2017-05-25","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"AmericanCallOpt","Version":"0.95","Title":"This package includes pricing function for selected American\ncall options with underlying assets that generate payouts","Description":"This package includes a set of pricing functions for\n American call options. 
The following cases are covered: Pricing\n of an American call using the standard binomial approximation;\n Hedge parameters for an American call with a standard binomial\n tree; Binomial pricing of an American call with continuous\n payout from the underlying asset; Binomial pricing of an\n American call with an underlying stock that pays proportional\n dividends in discrete time; Pricing of an American call on\n futures using a binomial approximation; Pricing of a currency\n futures American call using a binomial approximation; Pricing\n of a perpetual American call. The user should kindly notice\n that this material is for educational purposes only. The codes\n are not optimized for computational efficiency as they are\n meant to represent standard cases of analytical and numerical\n solution.","Published":"2012-03-04","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"AMGET","Version":"1.0","Title":"Post-processing tool for ADAPT 5","Description":"AMGET allows users to simply and rapidly create highly informative diagnostic plots for ADAPT 5 models. Features include data analysis prior to any modeling from either a NONMEM or ADAPT shaped dataset, goodness-of-fit plots (GOF), posthoc-fits plots (PHF), parameter distribution plots (PRM) and visual predictive check plots (VPC) based on ADAPT output.","Published":"2013-08-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"aml","Version":"0.1-1","Title":"Adaptive Mixed LASSO","Description":"This package implements the adaptive mixed lasso (AML) method proposed by Wang et al. (2011). AML applies an adaptive lasso penalty to a large number of predictors, thus producing a sparse model, while accounting for the population structure in the linear mixed model framework. 
The package is primarily designed for application to genome-wide association studies or genomic prediction in plant breeding populations, though it could be applied to other settings of linear mixed models.","Published":"2013-11-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"AMModels","Version":"0.1.2","Title":"Adaptive Management Model Manager","Description":"Helps enable adaptive management by codifying knowledge in the\n form of models generated from numerous analyses and data sets. Facilitates\n this process by storing all models and data sets in a single object that can\n be updated and saved, thus tracking changes in knowledge through time. A shiny\n application called AM Model Manager (modelMgr()) enables the use of these\n functions via a GUI.","Published":"2017-02-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"AmmoniaConcentration","Version":"0.1","Title":"Un-Ionized Ammonia Concentration","Description":"Provides a function to calculate the concentration of un-ionized ammonia in the total ammonia in aqueous solution using the pH and temperature values.","Published":"2016-12-05","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"AMOEBA","Version":"1.1","Title":"A Multidirectional Optimum Ecotope-Based Algorithm","Description":"A function to calculate spatial clusters using the Getis-Ord local statistic. It searches\n irregular clusters (ecotopes) on a map.","Published":"2014-08-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"AMORE","Version":"0.2-15","Title":"A MORE flexible neural network package","Description":"This package was born to release the TAO robust neural\n network algorithm to R users. 
It has grown and I think it\n can be of interest for users wanting to implement their own\n training algorithms as well as for those others whose needs lie\n only in the \"user space\".","Published":"2014-04-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"AmostraBrasil","Version":"1.2","Title":"Generates Samples or Complete List of Brazilian IBGE (Instituto\nBrasileiro De Geografia e Estatistica) Census Households,\nGeocoding it by Google Maps","Description":"Generates samples or complete list of Brazilian IBGE (Instituto Brasileiro de Geografia e Estatistica, see\n for more information) census\n households, geocoding it by Google Maps. The package connects to the IBGE site and\n downloads maps and census data.","Published":"2016-07-26","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"ampd","Version":"0.2","Title":"An Algorithm for Automatic Peak Detection in Noisy Periodic and\nQuasi-Periodic Signals","Description":"A method for automatic detection of peaks in noisy periodic and quasi-periodic signals. This method, called automatic multiscale-based peak detection (AMPD), is based on the calculation and analysis of the local maxima scalogram, a matrix comprising the scale-dependent occurrences of local maxima.\n For further information see .","Published":"2016-12-11","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"AmpliconDuo","Version":"1.1","Title":"Statistical Analysis of Amplicon Data of the Same Sample to\nIdentify Artefacts","Description":"Increasingly powerful techniques for high-throughput sequencing open the possibility to comprehensively characterize microbial communities, including rare species. However, a still unresolved issue is the substantial error rates in the experimental process generating these sequences. To overcome these limitations we propose an approach, where each sample is split and the same amplification and sequencing protocol is applied to both halves. 
This procedure should allow detection of likely PCR and sequencing artifacts, as well as true rare species, by comparing the results of both parts. The AmpliconDuo package, where amplicon duo from here on refers to the two amplicon data sets of a split sample, is intended to help interpret the obtained read frequency distribution across split samples, and to filter out false positive reads.","Published":"2016-01-19","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"AmyloGram","Version":"1.0","Title":"Prediction of Amyloid Proteins","Description":"Predicts amyloid proteins using random forests trained on the\n n-gram encoded peptides. The implemented algorithm can be accessed from\n both the command line and a shiny-based GUI.","Published":"2016-09-24","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"anacor","Version":"1.1-3","Title":"Simple and Canonical Correspondence Analysis","Description":"Performs simple and canonical CA (covariates on rows/columns) on a two-way frequency table (with missings) by means of SVD. Different scaling methods (standard, centroid, Benzecri, Goodman) as well as various plots including confidence ellipsoids are provided. 
","Published":"2017-05-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"analogsea","Version":"0.5.0","Title":"Interface to 'Digital Ocean'","Description":"Provides a set of functions for interacting with the 'Digital\n Ocean' API at , including\n creating images, destroying them, rebooting, getting details on regions, and\n available images.","Published":"2016-11-11","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"analogue","Version":"0.17-0","Title":"Analogue and Weighted Averaging Methods for Palaeoecology","Description":"Fits Modern Analogue Technique and Weighted Averaging transfer \n \t function models for prediction of environmental data from species \n\t data, and related methods used in palaeoecology.","Published":"2016-02-28","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"analogueExtra","Version":"0.1-1","Title":"Additional Functions for Use with the Analogue Package","Description":"Provides additional functionality for the analogue package\n\t that is not required by all users of the main package.","Published":"2016-04-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"analytics","Version":"2.0","Title":"Regression Outlier Detection, Stationary Bootstrap, Testing Weak\nStationarity, and Other Tools for Data Analysis","Description":"Current version includes outlier detection in a fitted linear model, stationary bootstrap using a truncated geometric distribution, a comprehensive test for weak stationarity, column means by group, weighted biplots, and a heuristic to obtain a better initial configuration in non-metric MDS.","Published":"2017-06-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"analyz","Version":"1.4","Title":"Model Layer for Automatic Data Analysis via CSV File\nInterpretation","Description":"Class with methods to read and execute R commands described as steps in a CSV file.","Published":"2015-02-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} 
{"Package":"AnalyzeFMRI","Version":"1.1-16","Title":"Functions for analysis of fMRI datasets stored in the ANALYZE or\nNIFTI format","Description":"Functions for I/O, visualisation and analysis of\n functional Magnetic Resonance Imaging (fMRI) datasets stored in\n the ANALYZE or NIFTI format.","Published":"2013-03-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"AnalyzeTS","Version":"2.2","Title":"Analyze Fuzzy Time Series","Description":"Analyze fuzzy time series by Chen, Singh, Heuristic and Chen-Hsu models. The Abbasov-Mamedova and NFTS models are included as well.","Published":"2016-11-24","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"anapuce","Version":"2.2","Title":"Tools for microarray data analysis","Description":"This package contains functions for\n normalisation, differential analysis of microarray data and\n other functions implementing recent methods developed by the\n Statistics and Genome Team from UMR 518 AgroParisTech/INRA Appl.\n Math. Comput. Sc.","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"AncestryMapper","Version":"2.0","Title":"Assigning Ancestry Based on Population References","Description":"Assigns genetic ancestry to an individual and\n studies relationships between local and global populations.","Published":"2016-09-28","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"anchoredDistr","Version":"1.0.3","Title":"Post-Processing for the Method of Anchored Distributions","Description":"Supplements the 'MAD#' software (see , \n or Osorio-Murillo, et al. (2015) ) that\n implements the Method of Anchored Distributions for inferring geostatistical\n parameters (see Rubin, et al. (2010) ). Reads 'MAD#' \n result databases, performs dimension reduction on inversion data, calculates\n likelihoods and posteriors, and tests for convergence. 
Also generates plots \n to summarize results.","Published":"2017-06-20","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"anchors","Version":"3.0-8","Title":"Statistical analysis of surveys with anchoring vignettes","Description":"Tools for analyzing survey responses with anchors.","Published":"2016-12-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"AnDE","Version":"1.0","Title":"An extended Bayesian Learning Technique developed by Dr. Geoff\nWebb","Description":"AODE achieves highly accurate classification by averaging over all\n of a small space.","Published":"2013-07-25","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"andrews","Version":"1.0","Title":"Andrews curves","Description":"Andrews curves for visualization of multidimensional data","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"anesrake","Version":"0.75","Title":"ANES Raking Implementation","Description":"Provides a comprehensive system for selecting\n variables and weighting data to match the specifications of the American\n National Election Studies. The package includes methods for identifying\n discrepant variables, raking data, and assessing the effects of the raking\n algorithm. It also allows automated re-raking if target variables fall\n outside identified bounds and allows greater user specification than other\n available raking algorithms. A variety of simple weighted statistics that\n were previously in this package (version .55 and earlier) have been moved to\n the package 'weights.'","Published":"2016-06-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"anfis","Version":"0.99.1","Title":"Adaptive Neuro Fuzzy Inference System in R","Description":"The package implements ANFIS Type 3 Takagi and Sugeno's fuzzy\n if-then rule network with the following features: (1) Independent number of\n membership functions(MF) for each input, and also different MF extensible\n types. 
(2) Type 3 Takagi and Sugeno's fuzzy if-then rule (3) Full Rule\n combinations, e.g. 2 inputs 2 membership functions -> 4 fuzzy rules (4)\n Hybrid learning, i.e. gradient descent for precedents and Least Squares\n Estimation for consequents (5) Multiple outputs.","Published":"2015-01-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"AnglerCreelSurveySimulation","Version":"0.2.1","Title":"Simulate a Bus Route Creel Survey of Anglers","Description":"Create an angler population, sample the population with user-specified survey times, and calculate metrics from a bus route-type creel survey.","Published":"2015-01-04","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"angstroms","Version":"0.0.1","Title":"Tools for 'ROMS' the Regional Ocean Modeling System","Description":"Helper functions for working with Regional Ocean Modeling System 'ROMS' output. See\n for more information about 'ROMS'. ","Published":"2017-05-01","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"aniDom","Version":"0.1.1","Title":"Inferring Dominance Hierarchies and Estimating Uncertainty","Description":"Provides: (1) Tools to infer dominance hierarchies based on calculating Elo scores, but with custom functions to improve estimates in animals with relatively stable dominance ranks. (2) Tools to plot the shape of the dominance hierarchy and estimate the uncertainty of a given data set.","Published":"2017-04-28","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"anim.plots","Version":"0.2","Title":"Simple Animated Plots for R","Description":"Simple animated versions of basic R plots, using the 'animation'\n package. 
Includes animated versions of plot, barplot, persp, contour,\n filled.contour, hist, curve, points, lines, text, symbols, segments, and\n arrows.","Published":"2017-05-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"animalTrack","Version":"1.0.0","Title":"Animal track reconstruction for high frequency 2-dimensional\n(2D) or 3-dimensional (3D) movement data","Description":"2D and 3D animal tracking data can be used to reconstruct tracks through time/space with correction based on known positions. 3D visualization of animal position and attitude.","Published":"2013-09-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"animation","Version":"2.5","Title":"A Gallery of Animations in Statistics and Utilities to Create\nAnimations","Description":"Provides functions for animations in statistics, covering topics\n in probability theory, mathematical statistics, multivariate statistics,\n non-parametric statistics, sampling survey, linear models, time series,\n computational statistics, data mining and machine learning. These functions\n may be helpful in teaching statistics and data analysis. Also provided in\n this package are a series of functions to save animations to various formats,\n e.g. Flash, 'GIF', HTML pages, 'PDF' and videos. 'PDF' animations can be\n inserted into 'Sweave' / 'knitr' easily.","Published":"2017-03-30","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"ANLP","Version":"1.3","Title":"Build Text Prediction Model","Description":"Library to sample and clean text data, build N-gram model, Backoff algorithm etc.","Published":"2016-07-11","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"anMC","Version":"0.1.0","Title":"Compute High Dimensional Orthant Probabilities","Description":"Computationally efficient method to estimate orthant probabilities of high-dimensional Gaussian vectors. Further implements a function to compute conservative estimates of excursion sets under Gaussian random field priors. 
","Published":"2017-05-04","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"AnnotationBustR","Version":"1.0","Title":"Extract Subsequences from GenBank Annotations","Description":"Extraction of subsequences into FASTA files from GenBank annotations where gene names may vary among accessions.","Published":"2017-03-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"AnnotLists","Version":"1.2","Title":"AnnotLists: A tool to annotate multiple lists from a specific\nannotation file","Description":"Annotate multiple lists from a specific annotation file.","Published":"2011-10-23","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"anocva","Version":"0.1.0","Title":"A Non-Parametric Statistical Test to Compare Clustering\nStructures","Description":"Provides ANOCVA (ANalysis Of Cluster VAriability), a non-parametric statistical test\n to compare clustering structures with applications in functional magnetic resonance imaging\n data (fMRI). The ANOCVA allows us to compare the clustering structure of multiple groups\n simultaneously and also to identify features that contribute to the differential clustering.","Published":"2016-12-28","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"anoint","Version":"1.4","Title":"Analysis of Interactions","Description":"The tools in this package are intended to help researchers assess multiple treatment-covariate interactions with data from a parallel-group randomized controlled clinical trial.","Published":"2015-07-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ANOM","Version":"0.5","Title":"Analysis of Means","Description":"Analysis of means (ANOM) as used in technometrical computing. 
The package takes results from multiple comparisons with the grand mean (obtained with 'multcomp', 'SimComp', 'nparcomp', or 'MCPAN') or corresponding simultaneous confidence intervals as input and produces ANOM decision charts that illustrate which group means deviate significantly from the grand mean.","Published":"2017-04-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"anomalyDetection","Version":"0.1.1","Title":"Implementation of Augmented Network Log Anomaly Detection\nProcedures","Description":"Implements procedures to aid in detecting network log anomalies.\n By combining various multivariate analytic approaches relevant to network\n anomaly detection, it provides cyber analysts efficient means to detect\n suspected anomalies requiring further evaluation.","Published":"2017-03-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"anominate","Version":"0.5","Title":"alpha-NOMINATE Ideal Point Estimator","Description":"Fits ideal point model described in Carroll, Lewis, Lo, Poole and Rosenthal, \"The Structure of Utility in Models of Spatial Voting,\" American Journal of Political Science 57(4): 1008--1028.","Published":"2014-10-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"anonymizer","Version":"0.2.0","Title":"Anonymize Data Containing Personally Identifiable Information","Description":"Allows users to quickly and easily anonymize data containing\n Personally Identifiable Information (PII) through convenience functions.","Published":"2015-09-08","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ANOVAreplication","Version":"1.0.0","Title":"Test ANOVA Replications by Means of the Prior Predictive p-Value","Description":"Allows for the computation of a prior predictive p-value to test replication of relevant features of original ANOVA studies. Relevant features are captured in informative hypotheses. 
The package also allows for the computation of sample sizes for new studies, and comes with a Shiny application in which all calculations can be conducted as well. ","Published":"2017-04-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"AntAngioCOOL","Version":"1.2","Title":"Anti-Angiogenic Peptide Prediction","Description":"Machine learning based package to predict anti-angiogenic peptides using heterogeneous sequence descriptors. 'AntAngioCOOL' exploits five descriptor types of a peptide of interest to perform prediction, including: pseudo amino acid composition, k-mer composition, k-mer composition (reduced alphabet), physico-chemical profile and atomic profile. According to the obtained results, 'AntAngioCOOL' reached satisfactory performance in anti-angiogenic peptide prediction on a benchmark non-redundant independent test dataset.","Published":"2016-08-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"antaresProcessing","Version":"0.10.2","Title":"Antares Results Processing","Description":"\n Process results generated by Antares, a powerful software developed by\n RTE to simulate and study electric power systems (more information about\n Antares here: ). 
This package provides\n functions to create new columns like net load, load factors, upward and\n downward margins or to compute aggregated statistics like economic surpluses\n of consumers, producers and sectors.","Published":"2017-05-24","License":"GPL (>= 2) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"antaresRead","Version":"1.1.3","Title":"Import, Manipulate and Explore the Results of an Antares\nSimulation","Description":"Import, manipulate and explore results generated by Antares, a \n powerful software developed by RTE to simulate and study electric power systems\n (more information about Antares here: ).","Published":"2017-05-30","License":"GPL (>= 2) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"antaresViz","Version":"0.10","Title":"Antares Visualizations","Description":"Visualize results generated by Antares, a powerful software\n developed by RTE to simulate and study electric power systems\n (more information about Antares here: ).\n This package provides functions that create interactive charts to help\n Antares users visually explore the results of their simulations.","Published":"2017-06-20","License":"GPL (>= 2) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"AnthropMMD","Version":"1.0.1","Title":"A GUI for Mean Measures of Divergence","Description":"Offers a complete and interactive GUI to work out Mean Measures of Divergence, especially for anthropologists.","Published":"2016-04-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Anthropometry","Version":"1.8","Title":"Statistical Methods for Anthropometric Data","Description":"Statistical methodologies especially developed to analyze anthropometric data. These methods are aimed \t\tat providing effective solutions to some common problems related to Ergonomics and Anthropometry. 
They are based on clustering, the \t\tstatistical concept of data depth, statistical shape analysis and archetypal analysis.","Published":"2017-04-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"antitrust","Version":"0.95.1","Title":"Tools for Antitrust Practitioners","Description":"A collection of tools for antitrust practitioners, including the ability to calibrate different consumer demand systems and simulate the effects of mergers under different competitive regimes.","Published":"2015-11-23","License":"Unlimited","snapshot_date":"2017-06-23"} {"Package":"antiword","Version":"1.1","Title":"Extract Text from Microsoft Word Documents","Description":"Wraps the 'AntiWord' utility to extract text from Microsoft\n Word documents. The utility only supports the old 'doc' format, not the \n new xml based 'docx' format. Use the 'xml2' package to read the latter.","Published":"2017-04-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"AntWeb","Version":"0.7","Title":"programmatic interface to the AntWeb","Description":"A complete programmatic interface to the AntWeb database from the\n California Academy of Sciences.","Published":"2014-08-14","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"anytime","Version":"0.3.0","Title":"Anything to 'POSIXct' or 'Date' Converter","Description":"Convert input in any one of character, integer, numeric, factor,\n or ordered type into 'POSIXct' (or 'Date') objects, using one of a number of\n predefined formats, and relying on Boost facilities for date and time parsing.","Published":"2017-06-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"aod","Version":"1.3","Title":"Analysis of Overdispersed Data","Description":"This package provides a set of functions to analyse\n overdispersed counts or proportions. Most of the methods are\n already available elsewhere but are scattered in different\n packages. 
The proposed functions should be considered as\n complements to more sophisticated methods such as generalized\n estimating equations (GEE) or generalized linear mixed effect\n models (GLMM).","Published":"2012-04-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"aods3","Version":"0.4-1","Title":"Analysis of Overdispersed Data using S3 methods","Description":"This package provides functions to analyse overdispersed\n counts or proportions. These functions should be considered as\n complements to more sophisticated methods such as generalized\n estimating equations (GEE) or generalized linear mixed effect\n models (GLMM). aods3 is an S3 re-implementation of the\n deprecated S4 package aod.","Published":"2013-06-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"aoos","Version":"0.5.0","Title":"Another Object Orientation System","Description":"Another implementation of object-orientation in R. It provides\n syntactic sugar for the S4 class system and two alternative new\n implementations. One is an experimental version built around S4\n and the other one makes it more convenient to work with lists as objects.","Published":"2017-05-07","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"aop","Version":"1.0.0","Title":"Adverse Outcome Pathway Analysis","Description":"Provides tools for analyzing adverse outcome pathways\n (AOPs) for pharmacological and toxicological research. Functionality\n includes the ability to perform causal network analysis of networks\n developed in and exported from Cytoscape or existing as R graph objects, and\n identifying the point of departure/screening/risk value from concentration-\n response data.","Published":"2016-12-05","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"aoristic","Version":"0.6","Title":"aoristic analysis with spatial output (kml)","Description":"'Aoristic' is one of the past tenses in Greek and represents an\n uncertain occurrence time. 
Aoristic analysis suggested by Ratcliffe (2002)\n is a method to analyze events that do not have exact times of occurrence\n but have starting times and ending times. For example, a property crime\n database (e.g., burglary) typically has a starting time and ending time of\n the crime that could have occurred. Aoristic analysis allocates the\n probability of a crime incident occurring at every hour over a 24-hour\n period. The probability is aggregated over a study area to create an\n aoristic graph.\n Using crime incident data with lat/lon, DateTimeFrom, and\n DateTimeTo, functions in this package create a total of three (3) kml\n files and corresponding aoristic graphs: 1) density and contour; 2) grid\n count; and 3) shapefile boundary. (see also:\n https://sites.google.com/site/georgekick/software)","Published":"2015-01-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"apa","Version":"0.2.0","Title":"Format Outputs of Statistical Tests According to APA Guidelines","Description":"Formatter functions in the 'apa' package take the return value of a\n statistical test function, e.g. a call to chisq.test() and return a string\n formatted according to the guidelines of the APA (American Psychological\n Association).","Published":"2017-02-27","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"ApacheLogProcessor","Version":"0.2.2","Title":"Process the Apache Web Server Log Files","Description":"Provides capabilities to process Apache HTTPD Log files. The main functionalities are to extract data from access and error log files to data frames.","Published":"2017-03-29","License":"LGPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"apaStyle","Version":"0.5","Title":"Generate APA Tables for MS Word","Description":"Most psychological journals require that tables in a manuscript\n comply with APA (American Psychological Association) standards. 
Creating APA\n tables manually is often time consuming and prone to transcription errors.\n This package generates tables for MS Word ('.docx' extension) in APA format\n automatically with just a few lines of code.","Published":"2017-03-29","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"apaTables","Version":"1.5.1","Title":"Create American Psychological Association (APA) Style Tables","Description":"A common task faced by researchers is the creation of APA style\n (i.e., American Psychological Association style) tables from statistical\n output. In R a large number of function calls are often needed to obtain all of\n the desired information for a single APA style table. As well, the process of\n manually creating APA style tables in a word processor is prone to transcription\n errors. This package creates Word files (.doc files) containing APA style tables\n for several types of analyses. Using this package minimizes transcription errors\n and reduces the number of commands needed by the user.","Published":"2017-06-20","License":"MIT License + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"apc","Version":"1.3","Title":"Age-Period-Cohort Analysis","Description":"Functions for age-period-cohort analysis. The data can be organised in matrices indexed by age-cohort, age-period or cohort-period. The data can include dose and response or just doses. The statistical model is a generalized linear model (GLM) allowing for 3,2,1 or 0 of the age-period-cohort factors. The canonical parametrisation of Kuang, Nielsen and Nielsen (2008) is used. Thus, the analysis does not rely on ad hoc identification.","Published":"2016-12-01","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"apcluster","Version":"1.4.3","Title":"Affinity Propagation Clustering","Description":"Implements Affinity Propagation clustering introduced by Frey and\n\tDueck (2007) . 
The algorithms are largely\n analogous to the 'Matlab' code published by Frey and Dueck.\n The package further provides leveraged affinity propagation and an\n algorithm for exemplar-based agglomerative clustering that can also be\n used to join clusters obtained from affinity propagation. Various\n plotting functions are available for analyzing clustering results.","Published":"2016-02-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"apdesign","Version":"1.0.0","Title":"An Implementation of the Additive Polynomial Design Matrix","Description":"An implementation of the additive polynomial (AP) design matrix. It\n constructs and appends an AP design matrix to a data frame for use with\n longitudinal data subject to seasonality.","Published":"2016-11-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ape","Version":"4.1","Title":"Analyses of Phylogenetics and Evolution","Description":"Functions for reading, writing, plotting, and manipulating phylogenetic trees, analyses of comparative data in a phylogenetic framework, ancestral character analyses, analyses of diversification and macroevolution, computing distances from DNA sequences, reading and writing nucleotide sequences as well as importing from BioConductor, and several tools such as Mantel's test, generalized skyline plots, graphical exploration of phylogenetic data (alex, trex, kronoviz), estimation of absolute evolutionary rates and clock-like trees using mean path lengths and penalized likelihood, dating trees with non-contemporaneous sequences, translating DNA into AA sequences, and assessing sequence alignments. Phylogeny estimation can be done with the NJ, BIONJ, ME, MVR, SDM, and triangle methods, and several methods handling incomplete distance matrices (NJ*, BIONJ*, MVR*, and the corresponding triangle method). 
Some functions call external applications (PhyML, Clustal, T-Coffee, Muscle) whose results are returned into R.","Published":"2017-02-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"apercu","Version":"0.2.1","Title":"Apercu is Giving you a Quick Look at your Data","Description":"The goal is to print an \"aperçu\", a short view of a vector, a\n matrix, a data.frame, a list or an array. By default, it prints the first 5\n elements of each dimension. By default, the number of columns is equal to\n the number of lines. If you want to control the selection of the elements,\n you can pass a list, with each element being a vector giving the selection\n for each dimension.","Published":"2017-04-25","License":"CC BY-SA 4.0","snapshot_date":"2017-06-23"} {"Package":"apex","Version":"1.0.2","Title":"Phylogenetic Methods for Multiple Gene Data","Description":"Toolkit for the analysis of multiple gene data. Apex implements\n the new S4 classes 'multidna', 'multiphyDat' and associated methods to handle\n aligned DNA sequences from multiple genes.","Published":"2016-12-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"APfun","Version":"0.1.1","Title":"Geo-Processing Base Functions","Description":"Base tools for facilitating the creation geo-processing functions\n in R.","Published":"2017-04-16","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"aplore3","Version":"0.9","Title":"Datasets from Hosmer, Lemeshow and Sturdivant, \"Applied Logistic\nRegression\" (3rd Ed., 2013)","Description":"An unofficial companion to \"Applied\n Logistic Regression\" by D.W. Hosmer, S. Lemeshow and\n R.X. 
Sturdivant (3rd ed., 2013) containing the dataset used in the book.","Published":"2016-10-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"aplpack","Version":"1.3.0","Title":"Another Plot PACKage: stem.leaf, bagplot, faces, spin3R,\nplotsummary, plothulls, and some slider functions","Description":"set of functions for drawing some special plots:\n stem.leaf plots a stem and leaf plot,\n stem.leaf.backback plots back-to-back versions of stem and leafs,\n bagplot plots a bagplot,\n skyline.hist plots several histograms in one plot of a one-dimensional data set,\n plotsummary plots a graphical summary of a data set with one or more variables,\n plothulls plots sequentially hulls of a bivariate data set,\n faces plots chernoff faces,\n spin3R for an inspection of a 3-dim point cloud,\n slider functions for interactive graphics.","Published":"2014-09-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"apmsWAPP","Version":"1.0","Title":"Pre- and Postprocessing for AP-MS data analysis based on\nspectral counts","Description":"apmsWAPP provides a complete workflow for the analysis of AP-MS data (replicate single-bait purifications including negative controls) based on spectral counts. \n\t\tIt comprises pre-processing, scoring and postprocessing of protein interactions.\n\t\tA final list of interaction candidates is reported: it provides a ranking of the candidates according \n\t\tto their p-values which allows estimating the number of false-positive interactions.","Published":"2014-04-22","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"apng","Version":"1.0","Title":"Convert Png Files into Animated Png","Description":"Convert several png files into an animated png file.\n This package exports only a single function `apng'. 
Call the\n apng function with a vector of file names (which should be\n png files) to convert them to a single animated png file.","Published":"2017-05-25","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"appell","Version":"0.0-4","Title":"Compute Appell's F1 hypergeometric function","Description":"This package wraps Fortran code by F. D. Colavecchia and\n G. Gasaneo for computing the Appell's F1 hypergeometric\n function. Their program uses Fortran code by L. F. Shampine and\n H. A. Watts. Moreover, the hypergeometric function with complex\n arguments is computed with Fortran code by N. L. J. Michel and\n M. V. Stoitsov or with Fortran code by R. C. Forrey. See the\n function documentation for the references and please cite them\n accordingly.","Published":"2013-04-16","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"apple","Version":"0.3","Title":"Approximate Path for Penalized Likelihood Estimators","Description":"Approximate Path for Penalized Likelihood Estimators for\n Generalized Linear Models penalized by LASSO or MCP","Published":"2012-01-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"AppliedPredictiveModeling","Version":"1.1-6","Title":"Functions and Data Sets for 'Applied Predictive Modeling'","Description":"A few functions and several data sets for the Springer book 'Applied Predictive Modeling'","Published":"2014-07-25","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"appnn","Version":"1.0-0","Title":"Amyloid Propensity Prediction Neural Network","Description":"Amyloid propensity prediction neural network (APPNN) is an amyloidogenicity propensity predictor based on a machine learning approach through recursive feature selection and feed-forward neural networks, taking advantage of newly published sequences with experimental, in vitro, evidence of amyloid formation.","Published":"2015-07-12","License":"GPL-3","snapshot_date":"2017-06-23"} 
{"Package":"approximator","Version":"1.2-6","Title":"Bayesian prediction of complex computer codes","Description":"Performs Bayesian prediction of complex computer codes\n when fast approximations are available: M. C. Kennedy and A. O'Hagan\n 2000, Biometrika 87(1):1-13","Published":"2013-12-08","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"aprean3","Version":"1.0.1","Title":"Datasets from Draper and Smith \"Applied Regression Analysis\"\n(3rd Ed., 1998)","Description":"An unofficial companion to the textbook \"Applied Regression\n Analysis\" by N.R. Draper and H. Smith (3rd Ed., 1998) including all the\n accompanying datasets.","Published":"2015-05-09","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"apricom","Version":"1.0.0","Title":"Tools for the a Priori Comparison of Regression Modelling\nStrategies","Description":"Tools to compare several model adjustment and validation methods prior to application in a final analysis.","Published":"2015-11-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"aprof","Version":"0.3.2","Title":"Amdahl's Profiler, Directed Optimization Made Easy","Description":"Assists the evaluation of whether and\n where to focus code optimization, using Amdahl's law and visual aids\n based on line profiling. Amdahl's profiler organises profiling output\n files (including memory profiling) in a visually appealing way.\n It is meant to help to balance development\n vs. execution time by helping to identify the most promising sections\n of code to optimize and projecting potential gains. 
The package is\n an addition to R's standard profiling tools and is not a wrapper for them.","Published":"2016-03-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"APSIM","Version":"0.9.1","Title":"General Utility Functions for the 'Agricultural Production\nSystems Simulator'","Description":"Contains functions designed to facilitate the loading\n and transformation of 'Agricultural Production Systems Simulator'\n output files . Input meteorological data\n (also known as \"weather\" or \"met\") files can also be generated\n from user supplied data.","Published":"2016-10-19","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"APSIMBatch","Version":"0.1.0.2374","Title":"Analysis the output of Apsim software","Description":"Run APSIM in Batch mode","Published":"2012-10-29","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"apsimr","Version":"1.2","Title":"Edit, Run and Evaluate APSIM Simulations Easily Using R","Description":"The Agricultural Production Systems sIMulator (APSIM) is a widely\n used simulator of agricultural systems. This package includes\n functions to create, edit and run APSIM simulations from R. It\n also includes functions to visualize the results of an APSIM simulation\n and perform sensitivity/uncertainty analysis of APSIM either via functions\n in the sensitivity package or by novel emulator-based functions. 
\n For more on APSIM including download instructions go to\n \\url{www.apsim.info}.","Published":"2015-10-12","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"apsrtable","Version":"0.8-8","Title":"apsrtable model-output formatter for social science","Description":"Formats latex tables from one or more model objects\n side-by-side with standard errors below, not unlike tables\n found in such journals as the American Political Science\n Review.","Published":"2012-04-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"apt","Version":"2.5","Title":"Asymmetric Price Transmission","Description":"Asymmetric price transmission between two time series is assessed. Several functions are available for linear and nonlinear threshold cointegration, and furthermore, symmetric and asymmetric error correction model. A graphical user interface is also included for major functions included in the package, so users can also use these functions in a more intuitive way.","Published":"2016-02-25","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"aptg","Version":"0.1.0","Title":"Automatic Phylogenetic Tree Generator","Description":"Generates phylogenetic trees and distance matrices from a list of species name or from a taxon down to whatever lower taxon. It can do so based on two reference super trees: mammals and angiosperms. ","Published":"2017-03-31","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"APtools","Version":"3.0","Title":"Average Positive Predictive Values (AP) for Binary Outcomes and\nCensored Event Times","Description":"We provide tools to estimate two prediction performance metrics,\n the average positive predictive values (AP) as well as the well-known AUC\n (the area under the receiver operator characteristic curve) for risk scores\n or marker. 
The outcome of interest is either binary or censored event time.\n Note that for censored event times, the AP and the AUC estimated by our\n functions are time-dependent for pre-specified time interval(s). A function that\n compares the APs of two risk scores/markers is also included. Optional\n outputs include positive predictive values and true positive fractions at\n the specified marker cut-off values, and a plot of the time-dependent AP\n versus time (available for event time data).","Published":"2016-08-05","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"apTreeshape","Version":"1.4-5","Title":"Analyses of Phylogenetic Treeshape","Description":"apTreeshape is mainly dedicated to simulation and analysis\n of phylogenetic tree topologies using statistical indices. It\n is a companion library of the 'ape' package. It provides\n additional functions for reading, plotting, manipulating\n phylogenetic trees. It also offers convenient web-access to\n public databases, and enables testing null models of\n macroevolution using corrected test statistics. 
Trees of class\n \"phylo\" (from 'ape' package) can be converted easily.","Published":"2012-11-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"aqfig","Version":"0.8","Title":"Functions to help display air quality model output and\nmonitoring data","Description":"This package contains functions to help display air quality model output and monitoring data, such as creating color scatterplots, color legends, etc.","Published":"2013-11-09","License":"Unlimited","snapshot_date":"2017-06-23"} {"Package":"aqp","Version":"1.10","Title":"Algorithms for Quantitative Pedology","Description":"A collection of algorithms related to modeling of soil resources, soil classification, soil profile aggregation, and visualization.","Published":"2017-01-06","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"aqr","Version":"0.4","Title":"Interface methods to use with an ActiveQuant Master Server","Description":"This R extension provides methods to use a standalone ActiveQuant\n Master Server from within R. Currently available features include fetching\n and storing historical data, receiving and sending live data. Several\n utility methods for simple data transformations are included, too. For\n support requests, please join the mailing list at\n https://r-forge.r-project.org/mail/?group_id=1518","Published":"2014-03-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"AquaEnv","Version":"1.0-4","Title":"Integrated Development Toolbox for Aquatic Chemical Model\nGeneration","Description":"Toolbox for the experimental aquatic chemist, focused on \n acidification and CO2 air-water exchange. It contains all elements to\n model the pH, the related CO2 air-water exchange, and\n aquatic acid-base chemistry for an arbitrary marine,\n estuarine or freshwater system. It contains a suite of tools for \n sensitivity analysis, visualisation, modelling of chemical batches, \n and can be used to build dynamic models of aquatic systems. 
\n As from version 1.0-4, it also contains functions to calculate \n the buffer factors. ","Published":"2016-09-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"AR","Version":"1.0","Title":"Another Look at the Acceptance-Rejection Method","Description":"In mathematics, 'rejection sampling' is a basic technique used to generate observations from a distribution. It is also commonly called 'the Acceptance-Rejection method' or 'Accept-Reject algorithm' and is a type of Monte Carlo method. 'Acceptance-Rejection method' is based on the observation that to sample a random variable one can perform a uniformly random sampling of the 2D cartesian graph, and keep the samples in the region under the graph of its density function. Package 'AR' is able to generate/simulate random data from a probability density function by Acceptance-Rejection method. Moreover, this package is a useful teaching resource for graphical presentation of Acceptance-Rejection method. From the practical point of view, the user needs to calculate a constant in Acceptance-Rejection method, which package 'AR' is able to compute this constant by optimization tools. 
Several numerical examples are provided to illustrate the graphical presentation for the Acceptance-Rejection Method.","Published":"2017-05-18","License":"LGPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"AR1seg","Version":"1.0","Title":"Segmentation of an autoregressive Gaussian process of order 1","Description":"This package corresponds to the implementation of the robust approach for estimating change-points in the mean of an AR(1) Gaussian process by using the methodology described in the paper arXiv 1403.1958","Published":"2014-06-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"arabicStemR","Version":"1.2","Title":"Arabic Stemmer for Text Analysis","Description":"Allows users to stem Arabic texts for text analysis.","Published":"2017-02-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ArArRedux","Version":"0.2","Title":"Rigorous Data Reduction and Error Propagation of Ar40 / Ar39\nData","Description":"Processes noble gas mass spectrometer data to determine the isotopic composition of argon (comprised of Ar36, Ar37, Ar38, Ar39 and Ar40) released from neutron-irradiated potassium-bearing minerals. Then uses these compositions to calculate precise and accurate geochronological ages for multiple samples as well as the covariances between them. Error propagation is done in matrix form, which jointly treats all samples and all isotopes simultaneously at every step of the data reduction process. Includes methods for regression of the time-resolved mass spectrometer signals to t=0 ('time zero') for both single- and multi-collector instruments, blank correction, mass fractionation correction, detector intercalibration, decay corrections, interference corrections, interpolation of the irradiation parameter between neutron fluence monitors, and (weighted mean) age calculation. 
All operations are performed on the logs of the ratios between the different argon isotopes so as to properly treat them as 'compositional data', sensu Aitchison [1986, The Statistics of Compositional Data, Chapman and Hall].","Published":"2015-08-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"arc","Version":"1.1","Title":"Association Rule Classification","Description":"Implements the Classification-based on\n Association Rules (CBA) algorithm for association rule classification (ARC).\n The package also contains several convenience methods that allow to automatically\n set CBA parameters (minimum confidence, minimum support) and it also natively\n handles numeric attributes by integrating a pre-discretization step.\n The rule generation phase is handled by the 'arules' package.","Published":"2017-03-02","License":"AGPL-3","snapshot_date":"2017-06-23"} {"Package":"ARCensReg","Version":"2.1","Title":"Fitting Univariate Censored Linear Regression Model with\nAutoregressive Errors","Description":"It fits an univariate left or right censored linear regression model\n with autoregressive errors under the normal distribution. It provides estimates\n and standard errors of the parameters, prediction of future observations and\n it supports missing values on the dependent variable.\n It also performs influence diagnostic through local influence for three possible\n perturbation schemes.","Published":"2016-09-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ArchaeoPhases","Version":"1.2","Title":"Post-Processing of the Markov Chain Simulated by 'ChronoModel',\n'Oxcal' or 'BCal'","Description":"Provides a list of functions for the statistical analysis of archaeological dates and groups of dates. It is based on the post-processing of the Markov Chains whose stationary distribution is the posterior distribution of a series of dates. 
Such output can be simulated by different applications as for instance 'ChronoModel' (see ), 'Oxcal' (see ) or 'BCal' (see http://bcal.shef.ac.uk/). The only requirement is to have a csv file containing a sample from the posterior distribution.","Published":"2017-06-13","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"archdata","Version":"1.1","Title":"Example Datasets from Archaeological Research","Description":"The archdata package provides several types of data that are typically used in archaeological research. ","Published":"2016-04-05","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"archetypes","Version":"2.2-0","Title":"Archetypal Analysis","Description":"The main function archetypes implements a\n framework for archetypal analysis supporting arbitrary\n problem solving mechanisms for the different conceptual\n parts of the algorithm.","Published":"2014-04-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"archiDART","Version":"2.0","Title":"Plant Root System Architecture Analysis Using DART and RSML\nFiles","Description":"Analysis of complex plant root system architectures (RSA) using the output files created by Data Analysis of Root Tracings (DART), an open-access software dedicated to the study of plant root architecture and development across time series (Le Bot et al (2010) \"DART: a software to analyse root system architecture and development from captured images\", Plant and Soil, ), and RSA data encoded with the Root System Markup Language (RSML) (Lobet et al (2015) \"Root System Markup Language: toward a unified root architecture description language\", Plant Physiology, ). 
More information can be found in Delory et al (2016) \"archiDART: an R package for the automated computation of plant root architectural traits\", Plant and Soil, .","Published":"2017-05-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"archivist","Version":"2.1.2","Title":"Tools for Storing, Restoring and Searching for R Objects","Description":"Data exploration and modelling is a process in which a lot of data\n artifacts are produced. Artifacts like: subsets, data aggregates, plots,\n statistical models, different versions of data sets and different versions\n of results. The more projects we work with the more artifacts are produced\n and the harder it is to manage these artifacts. Archivist helps to store\n and manage artifacts created in R. Archivist allows you to store selected\n artifacts as binary files together with their metadata and relations.\n Archivist allows to share artifacts with others, either through a shared\n folder or github. Archivist allows to look for already created artifacts by\n using its class, name, date of the creation or other properties. Makes it\n easy to restore such artifacts. Archivist allows to check if a new artifact\n is the exact copy that was produced some time ago. That might be useful\n either for testing or caching.","Published":"2016-12-31","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"archivist.github","Version":"0.2.2","Title":"Tools for Archiving, Managing and Sharing R Objects via GitHub","Description":"The extension of the 'archivist' package integrating the archivist with GitHub via GitHub API, 'git2r' packages and 'httr' package. 
","Published":"2016-08-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ArCo","Version":"0.1-2","Title":"Artificial Counterfactual Package","Description":"Set of functions to analyse and estimate Artificial Counterfactual models from Carvalho, Masini and Medeiros (2016) .","Published":"2017-04-19","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ArDec","Version":"2.0","Title":"Time series autoregressive-based decomposition","Description":"Package ArDec implements autoregressive-based\n decomposition of a time series based on the constructive\n approach in West (1997). Particular cases include the\n extraction of trend and seasonal components.","Published":"2013-05-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"arf3DS4","Version":"2.5-10","Title":"Activated Region Fitting, fMRI data analysis (3D)","Description":"Activated Region Fitting (ARF) is an analysis method for fMRI data. ","Published":"2014-02-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"arfima","Version":"1.4-0","Title":"Fractional ARIMA (and Other Long Memory) Time Series Modeling","Description":"Simulates, fits, and predicts long-memory and anti-persistent time\n series, possibly mixed with ARMA, regression, transfer-function components.\n Exact methods (MLE, forecasting, simulation) are used.","Published":"2017-06-20","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ArfimaMLM","Version":"1.3","Title":"Arfima-MLM Estimation For Repeated Cross-Sectional Data","Description":"Functions to facilitate the estimation of Arfima-MLM models for repeated cross-sectional data and pooled cross-sectional time-series data (see Lebo and Weber 2015). 
The estimation procedure uses double filtering with Arfima methods to account for autocorrelation in repeated cross-sectional data followed by multilevel modeling (MLM) to estimate aggregate as well as individual-level parameters simultaneously.","Published":"2015-01-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"argon2","Version":"0.2-0","Title":"Secure Password Hashing","Description":"Utilities for secure password hashing via the argon2 algorithm.\n It is a relatively new hashing algorithm and is believed to be very secure.\n The 'argon2' implementation included in the package is the reference\n implementation. The package also includes some utilities that should be\n useful for digest authentication, including a wrapper of 'blake2b'. For\n similar R packages, see sodium and 'bcrypt'. See\n or\n for more information.","Published":"2017-06-12","License":"BSD 2-clause License + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"argosfilter","Version":"0.63","Title":"Argos locations filter","Description":"Functions to filters animal satellite tracking data\n obtained from Argos. It is especially indicated for telemetry\n studies of marine animals, where Argos locations are\n predominantly of low-quality.","Published":"2012-11-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"argparse","Version":"1.0.4","Title":"Command Line Optional and Positional Argument Parser","Description":"A command line parser to\n be used with Rscript to write \"#!\" shebang scripts that gracefully\n accept positional and optional arguments and automatically generate usage.","Published":"2016-10-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"argparser","Version":"0.4","Title":"Command-Line Argument Parser","Description":"Cross-platform command-line argument parser written purely in R\n with no external dependencies. 
It is useful with the Rscript\n front-end and facilitates turning an R script into an executable script.","Published":"2016-04-04","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"ArgumentCheck","Version":"0.10.2","Title":"Improved Communication to Users with Respect to Problems in\nFunction Arguments","Description":"The typical process of checking arguments in functions is\n iterative. In this process, an error may be returned and the user may fix\n it only to receive another error on a different argument. 'ArgumentCheck'\n facilitates a more helpful way to perform argument checks allowing the\n programmer to run all of the checks and then return all of the errors and\n warnings in a single message.","Published":"2016-04-01","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"arm","Version":"1.9-3","Title":"Data Analysis Using Regression and Multilevel/Hierarchical\nModels","Description":"Functions to accompany A. Gelman and J. Hill, Data Analysis Using Regression and Multilevel/Hierarchical Models, Cambridge University Press, 2007.","Published":"2016-11-27","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"arnie","Version":"0.1.2","Title":"\"Arnie\" box office records 1982-2014","Description":"Arnold Schwarzenegger movie weekend box office records from\n 1982-2014","Published":"2014-06-16","License":"CC BY-SA 4.0","snapshot_date":"2017-06-23"} {"Package":"aroma.affymetrix","Version":"3.1.0","Title":"Analysis of Large Affymetrix Microarray Data Sets","Description":"A cross-platform R framework that facilitates processing of any number of Affymetrix microarray samples regardless of computer system. The only parameter that limits the number of chips that can be processed is the amount of available disk space. The Aroma Framework has successfully been used in studies to process tens of thousands of arrays. 
This package has actively been used since 2006.","Published":"2017-03-24","License":"LGPL (>= 2.1)","snapshot_date":"2017-06-23"} {"Package":"aroma.apd","Version":"0.6.0","Title":"A Probe-Level Data File Format Used by 'aroma.affymetrix'\n[deprecated]","Description":"DEPRECATED. Do not start building new projects based on this package. (The (in-house) APD file format was initially developed to store Affymetrix probe-level data, e.g. normalized CEL intensities. Chip types can be added to APD file and similar to methods in the affxparser package, this package provides methods to read APDs organized by units (probesets). In addition, the probe elements can be arranged optimally such that the elements are guaranteed to be read in order when, for instance, data is read unit by unit. This speeds up the read substantially. This package is supporting the Aroma framework and should not be used elsewhere.)","Published":"2015-02-25","License":"LGPL (>= 2.1)","snapshot_date":"2017-06-23"} {"Package":"aroma.cn","Version":"1.6.1","Title":"Copy-Number Analysis of Large Microarray Data Sets","Description":"Methods for analyzing DNA copy-number data. Specifically,\n this package implements the multi-source copy-number normalization (MSCN)\n method for normalizing copy-number data obtained on various platforms and\n technologies. It also implements the TumorBoost method for normalizing\n paired tumor-normal SNP data.","Published":"2015-10-28","License":"LGPL (>= 2.1)","snapshot_date":"2017-06-23"} {"Package":"aroma.core","Version":"3.1.0","Title":"Core Methods and Classes Used by 'aroma.*' Packages Part of the\nAroma Framework","Description":"Core methods and classes used by higher-level aroma.* packages\n part of the Aroma Project, e.g. 
aroma.affymetrix and aroma.cn.","Published":"2017-03-23","License":"LGPL (>= 2.1)","snapshot_date":"2017-06-23"} {"Package":"ARPobservation","Version":"1.1","Title":"Tools for Simulating Direct Behavioral Observation Recording\nProcedures Based on Alternating Renewal Processes","Description":"Tools for simulating data generated by direct observation\n recording. Behavior streams are simulated based on an alternating renewal\n process, given specified distributions of event durations and interim\n times. Different procedures for recording data can then be applied to the\n simulated behavior streams. Functions are provided for the following\n recording methods: continuous duration recording, event counting, momentary\n time sampling, partial interval recording, and whole interval recording.","Published":"2015-02-11","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"aRpsDCA","Version":"1.1.0","Title":"Arps Decline Curve Analysis in R","Description":"Functions for Arps decline-curve analysis on oil and gas data. Includes exponential, hyperbolic, harmonic, and hyperbolic-to-exponential models as well as the preceding with initial curtailment or a period of linear rate buildup. Functions included for computing rate, cumulative production, instantaneous decline, EUR, time to economic limit, and performing least-squares best fits.","Published":"2016-04-05","License":"LGPL-2.1","snapshot_date":"2017-06-23"} {"Package":"arrApply","Version":"2.0.1","Title":"Apply a Function to a Margin of an Array","Description":"High performance variant of apply() for a fixed set of functions.\n Considerable speedup is a trade-off for universality, user defined\n functions cannot be used with this package. However, 20 most currently employed\n functions are available for usage. 
They can be divided in three types:\n reducing functions (like mean(), sum() etc., giving a scalar when applied to a vector),\n mapping function (like normalise(), cumsum() etc., giving a vector of the same length\n as the input vector) and finally, vector reducing function (like diff() which produces\n result vector of a length different from the length of input vector).\n Optional or mandatory additional arguments required by some functions\n (e.g. norm type for norm() or normalise() functions) can be\n passed as named arguments in '...'.","Published":"2016-11-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ArrayBin","Version":"0.2","Title":"Binarization of numeric data arrays","Description":"Fast adaptive binarization for numeric data arrays,\n particularly designed for high-throughput biological datasets.\n Includes options to filter out rows of the array with\n insufficient magnitude or variation (based on gap statistic).","Published":"2013-06-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"arrayhelpers","Version":"1.0-20160527","Title":"Convenience Functions for Arrays","Description":"Some convenient functions to work with arrays.","Published":"2016-05-28","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"ars","Version":"0.5","Title":"Adaptive Rejection Sampling","Description":"Adaptive Rejection Sampling, Original version","Published":"2014-12-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"arsenal","Version":"0.3.0","Title":"An Arsenal of 'R' Functions for Large-Scale Statistical\nSummaries","Description":"An Arsenal of 'R' functions for large-scale statistical summaries,\n which are streamlined to work within the latest reporting tools in 'R' and\n 'RStudio' and which use formulas and versatile summary statistics for summary\n tables and models. 
The primary functions include tableby(), a Table-1-like\n summary of multiple variable types 'by' the levels of a categorical\n variable; modelsum(), which performs simple model fits on the same endpoint\n for many variables (univariate or adjusted for standard covariates);\n freqlist(), a powerful frequency table across many categorical variables; and\n write2(), a function to output tables to a document.","Published":"2017-03-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ART","Version":"1.0","Title":"Aligned Rank Transform for Nonparametric Factorial Analysis","Description":"An implementation of the Aligned Rank Transform technique for\n factorial analysis (see references below for details) including models with\n missing terms (unsaturated factorial models). The function first\n computes a separate aligned ranked response variable for each effect of the\n user-specified model, and then runs a classic ANOVA on each of the aligned\n ranked responses. For further details, see Higgins, J. J. and Tashtoush, S.\n (1994). An aligned rank transform test for interaction. Nonlinear World 1\n (2), pp. 201-211. Wobbrock, J.O., Findlater, L., Gergle, D. and\n Higgins,J.J. (2011). The Aligned Rank Transform for nonparametric factorial\n analyses using only ANOVA procedures. Proceedings of the ACM Conference on\n Human Factors in Computing Systems (CHI '11). New York: ACM Press, pp.\n 143-146. .","Published":"2015-08-13","License":"LGPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"artfima","Version":"1.5","Title":"ARTFIMA Model Estimation","Description":"Fit and simulate ARTFIMA. 
Theoretical autocovariance function and spectral density function for stationary ARTFIMA.","Published":"2016-07-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ARTIVA","Version":"1.2.3","Title":"Time-Varying DBN Inference with the ARTIVA (Auto Regressive TIme\nVArying) Model","Description":"Reversible Jump MCMC (RJ-MCMC) sampling for approximating the posterior \n distribution of a time varying regulatory network, under the Auto Regressive TIme VArying\n\t\t(ARTIVA) model (for a detailed description of the algorithm, see Lebre et al. BMC Systems\n\t\tBiology, 2010). Starting from time-course gene expression measurements for a gene of \n\t\tinterest (referred to as \"target gene\") and a set of genes (referred to as \"parent genes\")\n\t\twhich may explain the expression of the target gene, the ARTIVA procedure identifies\n temporal segments for which a set of interactions occur between the \"parent genes\" and the\n\t\t\"target gene\". The time points that delimit the different temporal segments are referred to\n\t\tas changepoints (CP).","Published":"2015-05-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ARTool","Version":"0.10.4","Title":"Aligned Rank Transform","Description":"The Aligned Rank Transform for nonparametric\n factorial ANOVAs as described by J. O. Wobbrock,\n L. Findlater, D. Gergle, & J. J. 
Higgins, \"The Aligned\n Rank Transform for nonparametric factorial analyses\n using only ANOVA procedures\", CHI 2011 .","Published":"2016-10-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ARTP","Version":"2.0.4","Title":"Gene and Pathway p-values computed using the Adaptive Rank\nTruncated Product","Description":"A package for calculating gene and pathway p-values using the Adaptive Rank Truncated Product test","Published":"2014-02-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ARTP2","Version":"0.9.32","Title":"Pathway and Gene-Level Association Test","Description":"Pathway and gene level association test using raw data or summary statistics.","Published":"2017-05-24","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"arules","Version":"1.5-2","Title":"Mining Association Rules and Frequent Itemsets","Description":"Provides the infrastructure for representing,\n manipulating and analyzing transaction data and patterns (frequent\n itemsets and association rules). Also provides interfaces to\n C implementations of the association mining algorithms Apriori and Eclat\n by C. Borgelt.","Published":"2017-03-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"arulesCBA","Version":"1.1.1","Title":"Classification Based on Association Rules","Description":"Provides a function to build an association rule-based classifier for data frames, and to classify incoming data frames using such a classifier.","Published":"2017-04-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"arulesNBMiner","Version":"0.1-5","Title":"Mining NB-Frequent Itemsets and NB-Precise Rules","Description":"NBMiner is an implementation of the model-based mining algorithm \n for mining NB-frequent itemsets presented in \"Michael Hahsler. A\n model-based frequency constraint for mining associations from\n transaction data. 
Data Mining and Knowledge Discovery, 13(2):137-166,\n September 2006.\" In addition an extension for NB-precise rules is \n implemented. ","Published":"2015-07-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"arulesSequences","Version":"0.2-19","Title":"Mining Frequent Sequences","Description":"Add-on for arules to handle and mine frequent sequences.\n Provides interfaces to the C++ implementation of cSPADE by \n Mohammed J. Zaki.","Published":"2017-05-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"arulesViz","Version":"1.2-1","Title":"Visualizing Association Rules and Frequent Itemsets","Description":"Extends package arules with various visualization techniques for association rules and itemsets. The package also includes several interactive visualizations for rule exploration.","Published":"2017-03-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"aRxiv","Version":"0.5.16","Title":"Interface to the arXiv API","Description":"An interface to the API for 'arXiv'\n (), a repository of electronic preprints for\n computer science, mathematics, physics, quantitative biology,\n quantitative finance, and statistics.","Published":"2017-04-28","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"as.color","Version":"0.1","Title":"Assign Random Colors to Unique Items in a Vector","Description":"The as.color function takes an R vector of any class as an input,\n and outputs a vector of unique hexadecimal color values that correspond to the\n unique input values. This is most handy when overlaying points and lines for\n data that correspond to different levels or factors. The function will also\n print the random seed used to generate the colors. 
If you like the color palette\n generated, you can save the seed and reuse those colors.","Published":"2016-06-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"asaur","Version":"0.50","Title":"Data Sets for \"Applied Survival Analysis Using R\"","Description":"Data sets are referred to in the text \"Applied Survival Analysis Using R\"\n by Dirk F. Moore, Springer, 2016, ISBN: 978-3-319-31243-9, .","Published":"2016-04-12","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"asbio","Version":"1.4-2","Title":"A Collection of Statistical Tools for Biologists","Description":"Contains functions from: Aho, K. (2014) Foundational and Applied Statistics for Biologists using R. CRC/Taylor and Francis, Boca Raton, FL, ISBN: 978-1-4398-7338-0.","Published":"2017-06-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ascii","Version":"2.1","Title":"Export R objects to several markup languages","Description":"Coerce R object to asciidoc, txt2tags, restructuredText,\n org, textile or pandoc syntax. Package comes with a set of\n drivers for Sweave.","Published":"2011-09-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"asd","Version":"2.2","Title":"Simulations for Adaptive Seamless Designs","Description":"Package runs simulations for adaptive seamless designs with and without early outcomes \n for treatment selection and subpopulation type designs.","Published":"2016-05-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"asdreader","Version":"0.1-2","Title":"Reading ASD Binary Files in R","Description":"A simple driver that reads binary data created by the ASD Inc.\n portable spectrometer instruments, such as the FieldSpec (for more information,\n see ). Spectral data\n can be extracted from the ASD files as raw (DN), white reference, radiance, or\n reflectance. 
Additionally, the metadata information contained in the ASD file\n header can also be accessed.","Published":"2016-03-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ash","Version":"1.0-15","Title":"David Scott's ASH Routines","Description":"David Scott's ASH routines ported from S-PLUS to R.","Published":"2015-09-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ashr","Version":"2.0.5","Title":"Methods for Adaptive Shrinkage, using Empirical Bayes","Description":"The R package 'ashr' implements an Empirical Bayes approach for large-scale hypothesis testing and false discovery rate (FDR) estimation based on the methods proposed in M. Stephens, 2016, \"False discovery rates: a new deal\", . These methods can be applied whenever two sets of summary statistics---estimated effects and standard errors---are available, just as 'qvalue' can be applied to previously computed p-values. Two main interfaces are provided: ash(), which is more user-friendly; and ash.workhorse(), which has more options and is geared toward advanced users.","Published":"2016-12-27","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"asht","Version":"0.9.1","Title":"Applied Statistical Hypothesis Tests","Description":"Some hypothesis test functions (sign test, median and other quantile tests, Wilcoxon signed rank test, coefficient of variation test, test of normal variance, test on weighted sums of Poisson, sample size for t-tests with different variances and non-equal n per arm, Behrens-Fisher test) with a focus on non-asymptotic methods that have matching confidence intervals. 
","Published":"2017-05-01","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"AsioHeaders","Version":"1.11.0-1","Title":"'Asio' C++ Header Files","Description":"'Asio' is a cross-platform C++ library for network and low-level\n I/O programming that provides developers with a consistent asynchronous model\n using a modern C++ approach.\n\n 'Asio' is also included in Boost but requires linking when used with\n Boost. Standalone it can be used header-only provided a recent-enough\n compiler. 'Asio' is written and maintained by Christopher M. Kohlhoff.\n 'Asio' is released under the 'Boost Software License', Version 1.0.","Published":"2016-01-07","License":"BSL-1.0","snapshot_date":"2017-06-23"} {"Package":"aslib","Version":"0.1","Title":"Interface to the Algorithm Selection Benchmark Library","Description":"Provides an interface to the algorithm selection benchmark library\n at and the 'LLAMA' package\n () for building\n algorithm selection models.","Published":"2016-11-25","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ASMap","Version":"0.4-7","Title":"Linkage Map Construction using the MSTmap Algorithm","Description":"Functions for Accurate and Speedy linkage map construction, manipulation and diagnosis of Doubled Haploid, Backcross and Recombinant Inbred 'R/qtl' objects. This includes extremely fast linkage map clustering and optimal marker ordering using 'MSTmap' (see Wu et al.,2008).","Published":"2016-06-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"asnipe","Version":"1.1.3","Title":"Animal Social Network Inference and Permutations for Ecologists","Description":"Implements several tools that are used in animal social network analysis. 
In particular, this package provides the tools to infer groups and generate networks from observation data, perform permutation tests on the data, calculate lagged association rates, and perform multiple regression analysis on social network data.","Published":"2017-02-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"aspace","Version":"3.2","Title":"A collection of functions for estimating centrographic\nstatistics and computational geometries for spatial point\npatterns","Description":"A collection of functions for computing centrographic\n statistics (e.g., standard distance, standard deviation\n ellipse, standard deviation box) for observations taken at\n point locations. Separate plotting functions have been\n developed for each measure. Users interested in writing results\n to ESRI shapefiles can do so by using results from aspace\n functions as inputs to the convert.to.shapefile and\n write.shapefile functions in the shapefiles library. The aspace\n library was originally conceived to aid in the analysis of\n spatial patterns of travel behaviour (see Buliung and Remmel,\n 2008). Major changes in the current version include (1) removal\n of dependencies on several external libraries (e.g., gpclib,\n maptools, sp), (2) the separation of plotting and estimation\n capabilities, (3) reduction in the number of functions, and (4)\n expansion of analytical capabilities with additional functions\n for descriptive analysis and visualization (e.g., standard\n deviation box, centre of minimum distance, central feature).","Published":"2012-08-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ASPBay","Version":"1.2","Title":"Bayesian Inference on Causal Genetic Variants using Affected\nSib-Pairs Data","Description":"This package allows one to make inference on the properties of causal genetic\n variants in linkage disequilibrium with genotyped markers. 
In a first step, \n\t\t\t we select a subset of variants using a score statistic for affected \n\t\t\t sib-pairs. In a second step, on the selected subset, we make \n inference on causal genetic variants in the considered region. ","Published":"2015-01-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"aSPC","Version":"0.1.2","Title":"An Adaptive Sum of Powered Correlation Test (aSPC) for Global\nAssociation Between Two Random Vectors","Description":"The aSPC test is designed to test global association between two groups of variables potentially with moderate to high dimension (e.g. in hundreds). The aSPC is particularly useful when the association signals between two groups of variables are sparse. ","Published":"2017-04-27","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"aspect","Version":"1.0-4","Title":"A General Framework for Multivariate Analysis with Optimal\nScaling","Description":"Contains various functions for optimal scaling. One function performs optimal scaling by maximizing an aspect (i.e. a target function such as the sum of eigenvalues, sum of squared correlations, squared multiple correlations, etc.) of the corresponding correlation matrix. Another function implements the LINEALS approach for optimal scaling by minimization of an aspect based on pairwise correlations and correlation ratios. The resulting correlation matrix and category scores can be used for further multivariate methods such as structural equation models. 
","Published":"2015-07-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"aspi","Version":"0.2.0","Title":"Analysis of Symmetry of Parasitic Infections","Description":"Tools for the analysis and visualization of bilateral asymmetry in parasitic infections.","Published":"2016-09-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"aSPU","Version":"1.47","Title":"Adaptive Sum of Powered Score Test","Description":"R codes for the (adaptive) Sum of Powered Score ('SPU' and 'aSPU')\n tests, inverse variance weighted Sum of Powered score ('SPUw' and 'aSPUw') tests\n and gene-based and some pathway based association tests (Pathway based Sum of\n Powered Score tests ('SPUpath'), adaptive 'SPUpath' ('aSPUpath') test, 'GEEaSPU'\n test for multiple traits - single 'SNP' (single nucleotide polymorphism)\n association in generalized estimation equations, 'MTaSPUs' test for multiple\n traits - single 'SNP' association with Genome Wide Association Studies ('GWAS')\n summary statistics, Gene-based Association Test that uses an extended 'Simes'\n procedure ('GATES'), Hybrid Set-based Test ('HYST') and extended version\n of 'GATES' test for pathway-based association testing ('GATES-Simes')).\n The tests can be used with genetic and other data sets with covariates. The\n response variable is binary or quantitative. 
Summary; (1) Single trait-'SNP' set\n association with individual-level data ('aSPU', 'aSPUw', 'aSPUr'), (2) Single trait-'SNP'\n set association with summary statistics ('aSPUs'), (3) Single trait-pathway\n association with individual-level data ('aSPUpath'), (4) Single trait-pathway\n association with summary statistics ('aSPUsPath'), (5) Multiple traits-single\n 'SNP' association with individual-level data ('GEEaSPU'), (6) Multiple traits-\n single 'SNP' association with summary statistics ('MTaSPUs'), (7) Multiple traits-'SNP' set association with summary statistics('MTaSPUsSet'), (8) Multiple traits-pathway association with summary statistics('MTaSPUsSetPath').","Published":"2017-05-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"asremlPlus","Version":"2.0-12","Title":"Augments the Use of 'ASReml-R' in Fitting Mixed Models","Description":"Assists in automating the testing of terms in mixed models when 'asreml' is used \n to fit the models. The content falls into the following natural groupings: (i) Data, (ii) Object \n manipulation functions, (iii) Model modification functions, (iv) Model testing functions, \n (v) Model diagnostics functions, (vi) Prediction production and presentation functions, \n (vii) Response transformation functions, and (viii) Miscellaneous functions. A history of the \n fitting of a sequence of models is kept in a data frame. Procedures are available for choosing \n models that conform to the hierarchy or marginality principle and for displaying predictions \n for significant terms in tables and graphs. The package 'asreml' provides a computationally \n efficient algorithm for fitting mixed models using Residual Maximum Likelihood. It can be \n purchased from 'VSNi' as 'asreml-R', who will supply a zip file for \n local installation/updating. 
","Published":"2016-09-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"AssayCorrector","Version":"1.1.3","Title":"Detection and Correction of Spatial Bias in HTS Screens","Description":"(1) Detects plate-specific spatial bias by identifying rows and columns of all plates of the assay affected by this bias (following the results of the Mann-Whitney U test) as well as assay-specific spatial bias by identifying well locations (i.e., well positions scanned across all plates of a given assay) affected by this bias (also following the results of the Mann-Whitney U test); (2) Allows one to correct plate-specific spatial bias using either the additive or multiplicative PMP (Partial Mean Polish) method (the most appropriate spatial bias model can be either specified by the user or determined by the program following the results of the Kolmogorov-Smirnov two-sample test) to correct the assay measurements as well as to correct assay-specific spatial bias by carrying out robust Z-scores within each plate of the assay and then traditional Z-scores across well locations.","Published":"2016-12-29","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"assertable","Version":"0.2.3","Title":"Verbose Assertions for Tabular Data (Data.frames and\nData.tables)","Description":"Simple, flexible, assertions on data.frame or data.table objects with verbose output for vetting. While other assertion packages apply towards more general use-cases, assertable is tailored towards tabular data. It includes functions to check variable names and values, whether the dataset contains all combinations of a given set of unique identifiers, and whether it is a certain length. 
In addition, assertable includes utility functions to check the existence of target files and to efficiently import multiple tabular data files into one data.table.","Published":"2017-05-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"assertive","Version":"0.3-5","Title":"Readable Check Functions to Ensure Code Integrity","Description":"Lots of predicates (is_* functions) to check the state of your\n variables, and assertions (assert_* functions) to throw errors if they\n aren't in the right form.","Published":"2016-12-31","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"assertive.base","Version":"0.0-7","Title":"A Lightweight Core of the 'assertive' Package","Description":"A minimal set of predicates and assertions used by the assertive\n package. This is mainly for use by other package developers who want to\n include run-time testing features in their own packages. End-users will\n usually want to use assertive directly.","Published":"2016-12-30","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"assertive.code","Version":"0.0-1","Title":"Assertions to Check Properties of Code","Description":"A set of predicates and assertions for checking the properties of\n code. This is mainly for use by other package developers who want to include\n run-time testing features in their own packages. End-users will usually want to\n use assertive directly.","Published":"2015-10-06","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"assertive.data","Version":"0.0-1","Title":"Assertions to Check Properties of Data","Description":"A set of predicates and assertions for checking the properties of\n (country independent) complex data types. This is mainly for use by other\n package developers who want to include run-time testing features in\n their own packages. 
End-users will usually want to use assertive directly.","Published":"2015-10-06","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"assertive.data.uk","Version":"0.0-1","Title":"Assertions to Check Properties of Strings","Description":"A set of predicates and assertions for checking the properties of\n UK-specific complex data types. This is mainly for use by other package\n developers who want to include run-time testing features in their own\n packages. End-users will usually want to use assertive directly.","Published":"2015-10-06","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"assertive.data.us","Version":"0.0-1","Title":"Assertions to Check Properties of Strings","Description":"A set of predicates and assertions for checking the properties of\n US-specific complex data types. This is mainly for use by other package\n developers who want to include run-time testing features in their own\n packages. End-users will usually want to use assertive directly.","Published":"2015-10-06","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"assertive.datetimes","Version":"0.0-2","Title":"Assertions to Check Properties of Dates and Times","Description":"A set of predicates and assertions for checking the properties of\n dates and times. This is mainly for use by other package developers who\n want to include run-time testing features in their own packages. End-users\n will usually want to use assertive directly.","Published":"2016-05-10","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"assertive.files","Version":"0.0-2","Title":"Assertions to Check Properties of Files","Description":"A set of predicates and assertions for checking the properties of\n files and connections. 
This is mainly for use by other package developers\n who want to include run-time testing features in their own packages.\n End-users will usually want to use assertive directly.","Published":"2016-05-10","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"assertive.matrices","Version":"0.0-1","Title":"Assertions to Check Properties of Matrices","Description":"A set of predicates and assertions for checking the properties of\n matrices. This is mainly for use by other package developers who want to\n include run-time testing features in their own packages. End-users will\n usually want to use assertive directly.","Published":"2015-10-06","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"assertive.models","Version":"0.0-1","Title":"Assertions to Check Properties of Models","Description":"A set of predicates and assertions for checking the properties of\n models. This is mainly for use by other package developers who want to\n include run-time testing features in their own packages. End-users will\n usually want to use assertive directly.","Published":"2015-10-06","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"assertive.numbers","Version":"0.0-2","Title":"Assertions to Check Properties of Numbers","Description":"A set of predicates and assertions for checking the properties of\n numbers. This is mainly for use by other package developers who want to\n include run-time testing features in their own packages. End-users will\n usually want to use assertive directly.","Published":"2016-05-09","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"assertive.properties","Version":"0.0-4","Title":"Assertions to Check Properties of Variables","Description":"A set of predicates and assertions for checking the properties of\n variables, such as length, names and attributes. This is mainly for use by\n other package developers who want to include run-time testing features in\n their own packages. 
End-users will usually want to use assertive directly.","Published":"2016-12-30","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"assertive.reflection","Version":"0.0-4","Title":"Assertions for Checking the State of R","Description":"A set of predicates and assertions for checking the state and\n capabilities of R, the operating system it is running on, and the IDE\n being used. This is mainly for use by other package developers who\n want to include run-time testing features in their own packages.\n End-users will usually want to use assertive directly.","Published":"2016-12-30","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"assertive.sets","Version":"0.0-3","Title":"Assertions to Check Properties of Sets","Description":"A set of predicates and assertions for checking the properties of\n sets. This is mainly for use by other package developers who want to\n include run-time testing features in their own packages. End-users will\n usually want to use assertive directly.","Published":"2016-12-30","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"assertive.strings","Version":"0.0-3","Title":"Assertions to Check Properties of Strings","Description":"A set of predicates and assertions for checking the properties of\n strings. This is mainly for use by other package developers who want to\n include run-time testing features in their own packages. End-users will\n usually want to use assertive directly.","Published":"2016-05-10","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"assertive.types","Version":"0.0-3","Title":"Assertions to Check Types of Variables","Description":"A set of predicates and assertions for checking the types of\n variables. This is mainly for use by other package developers who want to\n include run-time testing features in their own packages. 
End-users will\n usually want to use assertive directly.","Published":"2016-12-30","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"assertr","Version":"2.0.2.2","Title":"Assertive Programming for R Analysis Pipelines","Description":"Provides functionality to assert conditions\n that have to be met so that errors in data used in\n analysis pipelines can fail quickly. Similar to\n 'stopifnot()' but more powerful, friendly, and easier\n for use in pipelines.","Published":"2017-06-06","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"assertthat","Version":"0.2.0","Title":"Easy Pre and Post Assertions","Description":"assertthat is an extension to stopifnot() that makes it\n easy to declare the pre and post conditions that your code should\n satisfy, while also producing friendly error messages so that your\n users know what they've done wrong.","Published":"2017-04-11","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"AssetPricing","Version":"1.0-0","Title":"Optimal pricing of assets with fixed expiry date","Description":"Calculates the optimal price of assets (such as\n\tairline flight seats, hotel room bookings) whose value\n\tbecomes zero after a fixed ``expiry date''. Assumes\n\tpotential customers arrive (possibly in groups) according\n\tto a known inhomogeneous Poisson process. Also assumes a\n\tknown time-varying elasticity of demand (price sensitivity)\n\tfunction. Uses elementary techniques based on ordinary\n\tdifferential equations. 
Uses the package deSolve to effect\n\tthe solution of these differential equations.","Published":"2014-06-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"assignPOP","Version":"1.1.3","Title":"Population Assignment using Genetic, Non-Genetic or Integrated\nData in a Machine Learning Framework","Description":"Use Monte-Carlo and K-fold cross-validation coupled with machine-learning classification algorithms to perform population assignment, with functionalities of evaluating discriminatory power of independent training samples, identifying informative loci, reducing data dimensionality for genomic data, integrating genetic and non-genetic data, and visualizing results. ","Published":"2017-06-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"assist","Version":"3.1.3","Title":"A Suite of R Functions Implementing Spline Smoothing Techniques","Description":"A comprehensive package for fitting various non-parametric/semi-parametric linear/nonlinear fixed/mixed smoothing spline models.","Published":"2015-02-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ASSISTant","Version":"1.2-3","Title":"Adaptive Subgroup Selection in Group Sequential Trials","Description":"Clinical trial design for subgroup selection in a three-stage group\n sequential trial. Includes facilities for design, exploration and analysis of\n such trials. 
An implementation of the initial DEFUSE-3 trial is also provided\n as a vignette.","Published":"2016-05-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"AssocTests","Version":"0.0-3","Title":"Genetic Association Studies","Description":"Some procedures including EIGENSTRAT (a procedure for\n\tdetecting and correcting for population stratification through \n\tsearching for the eigenvectors in genetic association studies),\n\tPCoC (a procedure for correcting for population stratification\n\tthrough calculating the principal coordinates and the clustering\n\tof the subjects), Tracy-Widom test (a procedure for detecting\n\tthe significant eigenvalues of a matrix), distance regression (a\n\tprocedure for detecting the association between a distance matrix\n\tand some independent variants of interest), single-marker test (a\n\tprocedure for identifying the association between the genotype at\n\ta biallelic marker and a trait using the Wald test or the Fisher\n\texact test), MAX3 (a procedure for testing for the association\n\tbetween a single nucleotide polymorphism and a binary phenotype\n\tusing the maximum value of the three test statistics derived for\n\tthe recessive, additive, and dominant models), nonparametric trend\n\ttest (a procedure for testing for the association between a genetic\n\tvariant and a non-normally distributed quantitative trait based on the\n\tnonparametric risk), and nonparametric MAX3 (a procedure for testing\n\tfor the association between a biallelic single nucleotide polymorphism\n\tand a quantitative trait using the maximum value of the three\n\tnonparametric trend tests derived for the recessive, additive, and\n\tdominant models), which are commonly used in genetic association studies.","Published":"2015-08-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"assortnet","Version":"0.12","Title":"Calculate the Assortativity Coefficient of Weighted and Binary\nNetworks","Description":"Functions to calculate the 
assortment of vertices in social networks. This can be measured on both weighted and binary networks, with discrete or continuous vertex values.","Published":"2016-01-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"AssotesteR","Version":"0.1-10","Title":"Statistical Tests for Genetic Association Studies","Description":"R package with statistical tests and methods for genetic\n association studies with emphasis on rare variants and binary (dichotomous)\n traits","Published":"2013-12-19","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"aster","Version":"0.9.1","Title":"Aster Models","Description":"Aster models are exponential family regression models for life\n history analysis. They are like generalized linear models except that\n elements of the response vector can have different families (e. g.,\n some Bernoulli, some Poisson, some zero-truncated Poisson, some normal)\n and can be dependent, the dependence indicated by a graphical structure.\n Discrete time survival analysis, zero-inflated Poisson regression, and\n generalized linear models that are exponential family (e. g., logistic\n regression and Poisson regression with log link) are special cases.\n Main use is for data in which there is survival over discrete time periods\n and there is additional data about what happens conditional on survival\n (e. g., number of offspring). Uses the exponential family canonical\n parameterization (aster transform of usual parameterization).","Published":"2017-03-15","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"aster2","Version":"0.3","Title":"Aster Models","Description":"Aster models are exponential family regression models for life\n history analysis. They are like generalized linear models except that\n elements of the response vector can have different families (e. 
g.,\n some Bernoulli, some Poisson, some zero-truncated Poisson, some normal)\n and can be dependent, the dependence indicated by a graphical structure.\n Discrete time survival analysis, zero-inflated Poisson regression, and\n generalized linear models that are exponential family (e. g., logistic\n regression and Poisson regression with log link) are special cases.\n Main use is for data in which there is survival over discrete time periods\n and there is additional data about what happens conditional on survival\n (e. g., number of offspring). Uses the exponential family canonical\n parameterization (aster transform of usual parameterization).\n Unlike the aster package, this package does dependence groups (nodes of\n the graph need not be conditionally independent given their predecessor\n node), including multinomial and two-parameter normal as families. Thus\n this package also generalizes mark-capture-recapture analysis.","Published":"2017-03-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"astro","Version":"1.2","Title":"Astronomy Functions, Tools and Routines","Description":"The astro package provides a series of functions, tools and routines in everyday use within astronomy. Broadly speaking, one may group these functions into 7 main areas, namely: cosmology, FITS file manipulation, the Sersic function, plotting, data manipulation, statistics and general convenience functions and scripting tools.","Published":"2014-09-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"astrochron","Version":"0.7","Title":"A Computational Tool for Astrochronology","Description":"Routines for astrochronologic testing, astronomical time scale construction, and time series analysis. 
Also included are a range of statistical analysis and modeling routines that are relevant to time scale development and paleoclimate analysis.","Published":"2017-06-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"astrodatR","Version":"0.1","Title":"Astronomical Data","Description":"A collection of 19 datasets from contemporary astronomical research. They are described in the textbook `Modern Statistical Methods for Astronomy with R Applications' by Eric D. Feigelson and G. Jogesh Babu (Cambridge University Press, 2012, Appendix C) or on the website of Penn State's Center for Astrostatistics (http://astrostatistics.psu.edu/datasets). These datasets can be used to exercise methodology involving: density estimation; heteroscedastic measurement errors; contingency tables; two-sample hypothesis tests; spatial point processes; nonlinear regression; mixture models; censoring and truncation; multivariate analysis; classification and clustering; inhomogeneous Poisson processes; periodic and stochastic time series analysis. ","Published":"2014-08-12","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"astroFns","Version":"4.1-0","Title":"Astronomy: time and position functions, misc. utilities","Description":"Miscellaneous astronomy functions, utilities, and data.","Published":"2012-09-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"astrolibR","Version":"0.1","Title":"Astronomy Users Library","Description":"Several dozen low-level utilities and codes from the Interactive Data Language (IDL) Astronomy Users Library (http://idlastro.gsfc.nasa.gov) are implemented in R. 
They treat: time, coordinate and proper motion transformations; terrestrial precession and nutation, atmospheric refraction and aberration, barycentric corrections, and related effects; utilities for astrometry, photometry, and spectroscopy; and utilities for planetary, stellar, Galactic, and extragalactic science.","Published":"2014-08-09","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"astsa","Version":"1.7","Title":"Applied Statistical Time Series Analysis","Description":"Contains data sets and scripts to accompany Time Series Analysis and Its Applications: With R Examples by Shumway and Stoffer, fourth edition, . ","Published":"2016-12-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"asVPC","Version":"1.0.2","Title":"Average Shifted Visual Predictive Checks","Description":"The visual predictive check is a well-known method to validate \n nonlinear mixed effect models, especially in the pharmacometrics area. \n The average shifted visual predictive check is a newly \n developed method that combines visual predictive checks with \n the idea of the average shifted histogram.","Published":"2015-05-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"asymLD","Version":"0.1","Title":"Asymmetric Linkage Disequilibrium (ALD) for Polymorphic Genetic\nData","Description":"Computes asymmetric LD measures (ALD) for multi-allelic genetic data. These measures are identical to the correlation measure (r) for bi-allelic data.","Published":"2016-01-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"asymmetry","Version":"1.2.1","Title":"Visualizing Asymmetric Data","Description":"Models and methods for the visualization of asymmetric data. A matrix is asymmetric if the number of rows equals the number of columns, and these rows and columns refer to the same set of objects. 
An example is a student migration table, where the rows correspond to the countries of origin of the students and the columns to the destination countries. This package provides the slide-vector model and the asymscal model for asymmetric multidimensional scaling. Furthermore, a heat map for skew-symmetric data, and the decomposition of asymmetry are provided for the analysis of asymmetric tables.","Published":"2017-05-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"asympTest","Version":"0.1.3","Title":"Asymptotic statistic","Description":"Asymptotic testing","Published":"2012-08-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"AsynchLong","Version":"2.0","Title":"Regression Analysis of Sparse Asynchronous Longitudinal Data","Description":"Estimation of regression models for sparse asynchronous longitudinal observations, where time-dependent response and covariates are mismatched and observed intermittently within subjects. Kernel weighted estimating equations are used for generalized linear models with either time-invariant or time-dependent coefficients.","Published":"2016-01-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"asypow","Version":"2015.6.25","Title":"Calculate Power Utilizing Asymptotic Likelihood Ratio Methods","Description":"A set of routines written in the S language\n that calculate power and related quantities utilizing asymptotic\n likelihood ratio methods.","Published":"2015-06-26","License":"ACM | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ATE","Version":"0.2.0","Title":"Inference for Average Treatment Effects using Covariate\nBalancing","Description":"Nonparametric estimation and inference for average treatment effects based on covariate balancing.","Published":"2015-02-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"AtelieR","Version":"0.24","Title":"A GTK GUI for teaching basic concepts in statistical inference,\nand doing elementary bayesian 
tests","Description":"A collection of statistical simulation and computation tools with a GTK GUI, to help teach statistical concepts and compute probabilities. Two domains are covered: I. Understanding (Central-Limit Theorem and the Normal Distribution, Distribution of a sample mean, Distribution of a sample variance, Probability calculator for common distributions), and II. Elementary Bayesian Statistics (bayesian inference on proportions, contingency tables, means and variances, with informative and noninformative priors).","Published":"2013-09-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"atlantistools","Version":"0.4.2","Title":"Process and Visualise Output from Atlantis Models","Description":"Atlantis is an end-to-end marine ecosystem modelling framework. It was originally developed in Australia by E.A. Fulton, A.D.M. Smith and D.C. Smith (2007) and has since been adopted in many marine ecosystems around the world (). The output of an Atlantis simulation is stored in various file formats like .netcdf and .txt and different output structures are used for the output variables like e.g. productivity or biomass. This package is used to convert the different output types to a unified format according to the \"tidy-data\" approach by H. Wickham (2014) . Additionally, ecological metrics like for example spatial overlap of predator and prey or consumption can be calculated and visualised with this package. 
Due to the unified data structure it is very easy to share model output with each other and perform model comparisons.","Published":"2017-05-18","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"atmcmc","Version":"1.0","Title":"Automatically Tuned Markov Chain Monte Carlo","Description":"Uses adaptive diagnostics to tune and run a random walk Metropolis MCMC algorithm, to converge to a specified target distribution and estimate means of functionals.","Published":"2014-09-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ATmet","Version":"1.2","Title":"Advanced Tools for Metrology","Description":"This package provides functions for smart sampling and sensitivity analysis for metrology applications, including computationally expensive problems.","Published":"2014-04-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"AtmRay","Version":"1.31","Title":"Acoustic Traveltime Calculations for 1-D Atmospheric Models","Description":"Calculates acoustic traveltimes and ray paths in 1-D,\n linear atmospheres. Later versions will support arbitrary 1-D\n atmospheric models, such as radiosonde measurements and\n standard reference atmospheres.","Published":"2013-03-01","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"aTSA","Version":"3.1.2","Title":"Alternative Time Series Analysis","Description":"Contains some tools for testing, analyzing time series data and\n fitting popular time series models such as ARIMA, Moving Average and Holt\n Winters, etc. 
Most functions also provide nice and clear outputs like SAS\n does, such as identify, estimate and forecast, which are the same statements\n in PROC ARIMA in SAS.","Published":"2015-07-08","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"atsd","Version":"1.0.8441","Title":"Support Querying Axibase Time-Series Database","Description":"Provides functions for retrieving time-series and related \n meta-data such as entities, metrics, and tags from the Axibase \n Time-Series Database (ATSD). ATSD is a non-relational clustered \n database used for storing performance measurements from IT infrastructure \n resources: servers, network devices, storage systems, and applications.","Published":"2016-12-05","License":"Apache License 2.0","snapshot_date":"2017-06-23"} {"Package":"attrCUSUM","Version":"0.1.0","Title":"Tools for Attribute VSI CUSUM Control Chart","Description":"An implementation of tools for the design of attribute \n variable sampling interval (VSI) cumulative sum charts. \n It currently provides information for monitoring a mean increase, such as the \n average number of samples to signal, the average time to signal,\n a matrix of transient probabilities, and suitable control limits when the data follow a\n (zero-inflated) Poisson/binomial distribution. 
\n Functions in the tools can be easily applied to other count processes.\n The tools might also be extended to more complicated cumulative sum control charts.\n We leave these extensions as future work.","Published":"2016-12-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"attribrisk","Version":"0.1","Title":"Population Attributable Risk","Description":"Estimates population (etiological) attributable risk for\n unmatched, pair-matched or set-matched case-control designs and returns a\n list containing the estimated attributable risk, estimates of coefficients,\n and their standard errors, from the (conditional, if necessary) logistic\n regression used for estimating the relative risk.","Published":"2014-11-18","License":"LGPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"AUC","Version":"0.3.0","Title":"Threshold independent performance measures for probabilistic\nclassifiers","Description":"This package includes functions to compute the area under the curve of selected measures: The area under the sensitivity curve (AUSEC), the area under the specificity curve (AUSPC), the area under the accuracy curve (AUACC), and the area under the receiver operating characteristic curve (AUROC). The curves can also be visualized. 
Support for partial areas is provided.","Published":"2013-09-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"aucm","Version":"2017.3-2","Title":"AUC Maximization","Description":"Implements methods for identifying linear and nonlinear marker combinations that maximize the Area Under the ROC Curve (AUC).","Published":"2017-03-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"AUCRF","Version":"1.1","Title":"Variable Selection with Random Forest and the Area Under the\nCurve","Description":"Variable selection using Random Forest based on optimizing\n the area-under-the ROC curve (AUC) of the Random Forest.","Published":"2012-03-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"audio","Version":"0.1-5","Title":"Audio Interface for R","Description":"Interfaces to audio devices (mainly sample-based) from R to allow recording and playback of audio. Built-in devices include Windows MM, Mac OS X AudioUnits and PortAudio (the last one is very experimental).","Published":"2013-12-23","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"audiolyzR","Version":"0.4-9","Title":"audiolyzR: Give your data a listen","Description":"Creates audio representations of common plots in R","Published":"2013-02-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"audit","Version":"0.1-1","Title":"Bounds for Accounting Populations","Description":"Two Bayesian methods for Accounting Populations","Published":"2012-10-29","License":"MIT","snapshot_date":"2017-06-23"} {"Package":"auRoc","Version":"0.1-0","Title":"Various Methods to Estimate the AUC","Description":"Estimate the AUC using a variety of methods as follows: \n (1) frequentist nonparametric methods based on the Mann-Whitney statistic or kernel methods. 
\n (2) frequentist parametric methods using the likelihood ratio test based on higher-order \n asymptotic results, the signed log-likelihood ratio test, the Wald test, \n or the approximate ''t'' solution to the Behrens-Fisher problem. \n (3) Bayesian parametric MCMC methods.","Published":"2015-12-21","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"AUtests","Version":"0.98","Title":"Approximate Unconditional and Permutation Tests","Description":"Performs approximate unconditional and permutation testing for\n 2x2 contingency tables. Motivated by testing for disease association with rare\n genetic variants in case-control studies. When variants are extremely rare,\n these tests give better control of Type I error than standard tests.","Published":"2016-06-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"AutoDeskR","Version":"0.1.2","Title":"An Interface to the 'AutoDesk' 'API' Platform","Description":"An interface to the 'AutoDesk' 'API' Platform including the Authentication \n 'API' for obtaining authentication to the 'AutoDesk' Forge Platform, Data Management \n 'API' for managing data across the platform's cloud services, Design Automation 'API'\n for performing automated tasks on design files in the cloud, Model\n Derivative 'API' for translating design files into different formats, sending\n them to the viewer app, and extracting design data, and Viewer for rendering\n 2D and 3D models (see for more information).","Published":"2017-02-18","License":"Apache License | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"autoencoder","Version":"1.1","Title":"Sparse Autoencoder for Automatic Learning of Representative\nFeatures from Unlabeled Data","Description":"Implementation of the sparse autoencoder in R environment, following the notes of Andrew Ng (http://www.stanford.edu/class/archive/cs/cs294a/cs294a.1104/sparseAutoencoder.pdf). 
The features learned by the hidden layer of the autoencoder (through unsupervised learning of unlabeled data) can be used in constructing deep belief neural networks. ","Published":"2015-07-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"autoimage","Version":"1.3","Title":"Multiple Heat Maps for Projected Coordinates","Description":"Functions for displaying multiple images with a color \n scale, i.e., heat maps, possibly with projected coordinates. The\n package relies on the base graphics system, so graphics are\n rendered rapidly.","Published":"2017-03-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"automagic","Version":"0.3","Title":"Automagically Document and Install Packages Necessary to Run R\nCode","Description":"Parse R code in a given directory for R packages and attempt to install them from CRAN or GitHub. Optionally use a dependencies file for tighter control over which package versions to install.","Published":"2017-02-26","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"automap","Version":"1.0-14","Title":"Automatic interpolation package","Description":"This package performs an automatic interpolation by automatically estimating the variogram and then calling gstat.","Published":"2013-08-29","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"AutoModel","Version":"0.4.9","Title":"Automated Hierarchical Multiple Regression with Assumptions\nChecking","Description":"A set of functions that automates the process and produces reasonable output for hierarchical multiple regression models. 
It allows you to specify predictor blocks, from which it generates all of the linear models, and checks the assumptions of the model, producing the requisite plots and statistics to allow you to judge the suitability of the model.","Published":"2015-08-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"automultinomial","Version":"1.0.0","Title":"Autologistic and Automultinomial Spatial Regression and Variable\nSelection","Description":"Contains functions for autologistic variable selection and parameter estimation for spatially correlated categorical data (including k>2). The main function is MPLE. Capable of fitting the centered autologistic model described in Caragea and Kaiser (2009), as well as the traditional autologistic model of Besag (1974). ","Published":"2016-10-05","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"autopls","Version":"1.3","Title":"Partial Least Squares Regression with Backward Selection of\nPredictors","Description":"Some convenience functions for pls regression, including backward \n variable selection and validation procedures, image based predictions\n\t\tand plotting.","Published":"2015-02-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"AutoregressionMDE","Version":"1.0","Title":"Minimum Distance Estimation in Autoregressive Model","Description":"Consider autoregressive model of order p where the distribution function of innovation is unknown, but innovations are independent and symmetrically distributed. The package contains a function named ARMDE which takes X (vector of n observations) and p (order of the model) as input argument and returns minimum distance estimator of the parameters in the model.","Published":"2015-09-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"AutoSEARCH","Version":"1.5","Title":"General-to-Specific (GETS) Modelling","Description":"General-to-Specific (GETS) modelling of the mean and variance of a regression. 
NOTE: The package has been succeeded by gets, also available on the CRAN, which is more user-friendly, faster and easier to extend. Users are therefore encouraged to consider gets instead.","Published":"2015-03-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"autoSEM","Version":"0.1.0","Title":"Performs Specification Search in Structural Equation Models","Description":"Implements multiple heuristic search algorithms for\n automatically creating structural equation models.","Published":"2016-05-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"autothresholdr","Version":"0.5.0","Title":"An R Port of the 'ImageJ' Plugin 'Auto Threshold'","Description":"Provides the 'ImageJ' 'Auto Threshold' plugin functionality to R users. \n See and Landini et al. (2017) .","Published":"2017-05-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"autovarCore","Version":"1.0-0","Title":"Automated Vector Autoregression Models and Networks","Description":"Automatically find the best vector autoregression\n models and networks for a given time series data set. 'AutovarCore'\n evaluates eight kinds of models: models with and without log\n transforming the data, lag 1 and lag 2 models, and models with and\n without day dummy variables. For each of these 8 model configurations,\n 'AutovarCore' evaluates all possible combinations for including\n outlier dummies (at 2.5x the standard deviation of the residuals)\n and retains the best model. Model evaluation includes the Eigenvalue\n stability test and a configurable set of residual tests. 
These eight\n models are further reduced to four models because 'AutovarCore'\n determines whether adding day dummies improves the model fit.","Published":"2015-07-01","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"averisk","Version":"1.0.3","Title":"Calculation of Average Population Attributable Fractions and\nConfidence Intervals","Description":"Average population attributable fractions are calculated for a set\n of risk factors (either binary or ordinal valued) for both prospective and case-\n control designs. Confidence intervals are found by Monte Carlo simulation. The\n method can be applied to either prospective or case control designs, provided an\n estimate of disease prevalence is provided. In addition to an exact calculation\n of AF, an approximate calculation, based on randomly sampling permutations has\n been implemented to ensure the calculation is computationally tractable when the\n number of risk factors is large.","Published":"2017-03-21","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"aVirtualTwins","Version":"1.0.0","Title":"Adaptation of Virtual Twins Method from Jared Foster","Description":"Research of subgroups in random clinical trials with binary outcome and two treatments groups. This is an adaptation of the Jared Foster method.","Published":"2016-10-09","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"AWR","Version":"1.11.89","Title":"'AWS' Java 'SDK' for R","Description":"Installs the compiled Java modules of the Amazon Web Services ('AWS') 'SDK' to be used in downstream R packages interacting with 'AWS'. 
See for more information on the 'AWS' 'SDK' for Java.","Published":"2017-02-13","License":"AGPL-3","snapshot_date":"2017-06-23"} {"Package":"AWR.Athena","Version":"1.1.0","Title":"'AWS' Athena DBI Wrapper","Description":"'RJDBC' based DBI driver to Amazon Athena, which is an interactive\n query service to analyze data in Amazon S3 using standard SQL.","Published":"2017-06-16","License":"AGPL-3","snapshot_date":"2017-06-23"} {"Package":"AWR.Kinesis","Version":"1.7.3","Title":"Amazon 'Kinesis' Consumer Application for Stream Processing","Description":"Fetching data from Amazon 'Kinesis' Streams using the Java-based 'MultiLangDaemon' interacting with Amazon Web Services ('AWS') for easy stream processing from R. For more information on 'Kinesis', see .","Published":"2017-02-26","License":"AGPL-3","snapshot_date":"2017-06-23"} {"Package":"AWR.KMS","Version":"0.1","Title":"A Simple Client to the 'AWS' Key Management Service","Description":"Encrypt plain text and 'decrypt' cipher text using encryption keys hosted at Amazon Web Services ('AWS') Key Management Service ('KMS'), on which see for more information.","Published":"2017-02-20","License":"AGPL-3","snapshot_date":"2017-06-23"} {"Package":"aws","Version":"1.9-6","Title":"Adaptive Weights Smoothing","Description":"Collection of R-functions implementing the\n Propagation-Separation Approach to adaptive smoothing as\n described in \"J. Polzehl and V. Spokoiny (2006)\n \"\n and \"J. Polzehl and V. Spokoiny (2004) \".","Published":"2016-09-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"aws.alexa","Version":"0.1.4","Title":"Client for the Amazon Alexa Web Information Services API","Description":"Use the Amazon Alexa Web Information Services API to \n find information about domains, including the kind of content \n that they carry, how popular are they---rank and traffic history, \n sites linking to them, among other things. 
See \n for more information.","Published":"2017-04-30","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"aws.ec2metadata","Version":"0.1.1","Title":"Get EC2 Instance Metadata","Description":"Retrieve Amazon EC2 instance metadata from within the running instance.","Published":"2016-12-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"aws.polly","Version":"0.1.2","Title":"Client for AWS Polly","Description":"A client for AWS Polly , a speech synthesis service.","Published":"2016-12-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"aws.s3","Version":"0.3.3","Title":"AWS S3 Client Package","Description":"A simple client package for the Amazon Web Services (AWS) Simple\n Storage Service (S3) REST API .","Published":"2017-06-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"aws.ses","Version":"0.1.4","Title":"AWS SES Client Package","Description":"A simple client package for the Amazon Web Services (AWS) Simple\n Email Service (SES) REST API.","Published":"2016-12-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"aws.signature","Version":"0.3.2","Title":"Amazon Web Services Request Signatures","Description":"Generates version 2 and version 4 request signatures for Amazon Web Services ('AWS') Application Programming Interfaces ('APIs').","Published":"2017-06-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"aws.sns","Version":"0.1.5","Title":"AWS SNS Client Package","Description":"A simple client package for the Amazon Web Services (AWS) Simple\n Notification Service (SNS) API.","Published":"2016-12-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"aws.sqs","Version":"0.1.8","Title":"AWS SQS Client Package","Description":"A simple client package for the Amazon Web Services (AWS) Simple\n Queue Service (SQS) API.","Published":"2016-12-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} 
{"Package":"awsjavasdk","Version":"0.2.0","Title":"Boilerplate R Access to the Amazon Web Services ('AWS') Java SDK","Description":"Provides boilerplate access to all of the classes included in the \n Amazon Web Services ('AWS') Java Software Development Kit (SDK) via \n package:'rJava'. According to Amazon, the 'SDK helps take the complexity \n out of coding by providing Java APIs for many AWS services including \n Amazon S3, Amazon EC2, DynamoDB, and more'. You can read more about the \n included Java code on Amazon's website: \n .","Published":"2017-01-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"awsMethods","Version":"1.0-4","Title":"Class and Methods Definitions for Packages 'aws', 'adimpro',\n'fmri', 'dwi'","Description":"Defines the method extract and provides 'openMP' support as needed in several packages.","Published":"2016-09-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"aylmer","Version":"1.0-11","Title":"A generalization of Fisher's exact test","Description":"A generalization of Fisher's exact test that allows for\n structural zeros.","Published":"2013-12-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"AzureML","Version":"0.2.13","Title":"Interface with Azure Machine Learning Datasets, Experiments and\nWeb Services","Description":"Functions and datasets to support Azure Machine Learning. 
This\n allows you to interact with datasets, as well as publish and consume R functions\n as API services.","Published":"2016-08-21","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"B2Z","Version":"1.4","Title":"Bayesian Two-Zone Model","Description":"This package fits the Bayesian two-Zone Models.","Published":"2011-07-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"b6e6rl","Version":"1.1","Title":"Adaptive differential evolution, b6e6rl variant","Description":"This package contains b6e6rl algorithm, adaptive\n differential evolution for global optimization.","Published":"2013-06-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"babar","Version":"1.0","Title":"Bayesian Bacterial Growth Curve Analysis in R","Description":"Babar is designed to use nested sampling (a Bayesian analysis technique) to compare possible models for bacterial growth curves, as well as extracting parameters. It allows model evidence and parameter likelihood values to be extracted, and also contains helper functions for comparing distributions as well as direct access to the underlying nested sampling code.","Published":"2015-02-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"babel","Version":"0.3-0","Title":"Ribosome Profiling Data Analysis","Description":"Included here are babel routines for identifying unusual ribosome protected fragment counts given mRNA counts.","Published":"2016-06-23","License":"LGPL (>= 2.1)","snapshot_date":"2017-06-23"} {"Package":"BaBooN","Version":"0.2-0","Title":"Bayesian Bootstrap Predictive Mean Matching - Multiple and\nSingle Imputation for Discrete Data","Description":"Included are two variants of Bayesian Bootstrap\n Predictive Mean Matching to multiply impute missing data. The\n first variant is a variable-by-variable imputation combining\n sequential regression and Predictive Mean Matching (PMM) that\n has been extended for unordered categorical data. 
The Bayesian\n Bootstrap allows for generating approximately proper multiple\n imputations. The second variant is also based on PMM, but the\n focus is on imputing several variables at the same time. The\n suggestion is to use this variant, if the missing-data pattern\n resembles a data fusion situation, or any other\n missing-by-design pattern, where several variables have\n identical missing-data patterns. Both variants can be run as\n 'single imputation' versions, in case the analysis objective is\n of a purely descriptive nature.","Published":"2015-06-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"babynames","Version":"0.3.0","Title":"US Baby Names 1880-2015","Description":"US baby names provided by the SSA. This package contains all\n names used for at least 5 children of either sex.","Published":"2017-04-14","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"BACA","Version":"1.3","Title":"Bubble Chart to Compare Biological Annotations by using DAVID","Description":"R-based graphical tool to concisely visualise and compare biological annotations queried from the DAVID web service. It provides R functions to perform enrichment analysis (via DAVID - http://david.abcc.ncifcrf.gov) on several gene lists at once, and then visualizing all the results in one generated figure that allows R users to compare the annotations found for each list. ","Published":"2015-05-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BacArena","Version":"1.6","Title":"Modeling Framework for Cellular Communities in their\nEnvironments","Description":"Can be used for simulation of organisms living in\n communities. Each organism is represented individually and genome scale\n metabolic models determine the uptake and release of compounds. 
Biological\n processes such as movement, diffusion, chemotaxis and kinetics are available\n along with data analysis techniques.","Published":"2017-05-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"BACCO","Version":"2.0-9","Title":"Bayesian Analysis of Computer Code Output (BACCO)","Description":"The BACCO bundle of packages is replaced by the BACCO\n package, which provides a vignette that illustrates the constituent\n packages (emulator, approximator, calibrator) in use.","Published":"2013-12-12","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"BACCT","Version":"1.0","Title":"Bayesian Augmented Control for Clinical Trials","Description":"Implements the Bayesian Augmented Control (BAC, a.k.a. Bayesian historical data borrowing) method under clinical trial setting by calling 'Just Another Gibbs Sampler' ('JAGS') software. In addition, the 'BACCT' package evaluates user-specified decision rules by computing the type-I error/power, or probability of correct go/no-go decision at interim look. The evaluation can be presented numerically or graphically. Users need to have 'JAGS' 4.0.0 or newer installed due to a compatibility issue with 'rjags' package. Currently, the package implements the BAC method for binary outcome only. Support for continuous and survival endpoints will be added in future releases. We would like to thank AbbVie's Statistical Innovation group and Clinical Statistics group for their support in developing the 'BACCT' package.","Published":"2016-06-25","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"backblazer","Version":"0.1.0","Title":"Bindings to the Backblaze B2 API","Description":"Provides convenience functions for the Backblaze B2 cloud storage\n API (see https://www.backblaze.com/b2/docs/). All B2 API calls are mapped\n to equivalent R functions. 
Files can be easily uploaded, downloaded and\n deleted from B2, all from within R programs.","Published":"2016-01-31","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"backpipe","Version":"0.1.8.1","Title":"Backward Pipe Operator","Description":"Provides a backward-pipe operator for 'magrittr' (%<%) or \n 'pipeR' (%<<%) that allows for performing operations from right-to-left. \n This is indispensable for writing clear code where there is a natural \n right-to-left ordering, common with nested structures \n and hierarchies such as trees/directories or markup languages such as HTML \n and XML. ","Published":"2016-10-04","License":"GPL-2 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"backports","Version":"1.1.0","Title":"Reimplementations of Functions Introduced Since R-3.0.0","Description":"Implementations of functions which have been introduced in\n R since version 3.0.0. The backports are conditionally exported which\n results in R resolving the function names to the version shipped with R (if\n available) and uses the implemented backports as fallback. This way package\n developers can make use of the new functions without worrying about the\n minimum required R version.","Published":"2017-05-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"backShift","Version":"0.1.4.1","Title":"Learning Causal Cyclic Graphs from Unknown Shift Interventions","Description":"Code for 'backShift', an algorithm to estimate the connectivity\n matrix of a directed (possibly cyclic) graph with hidden variables. The\n underlying system is required to be linear and we assume that observations\n under different shift interventions are available. 
For more details,\n see .","Published":"2017-01-09","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"backtest","Version":"0.3-4","Title":"Exploring Portfolio-Based Conjectures About Financial\nInstruments","Description":"The backtest package provides facilities for exploring\n portfolio-based conjectures about financial instruments\n (stocks, bonds, swaps, options, et cetera).","Published":"2015-09-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"backtestGraphics","Version":"0.1.6","Title":"Interactive Graphics for Portfolio Data","Description":"Creates an interactive graphics \n interface to visualize backtest results of different financial \n instruments, such as equities, futures, and credit default swaps.\n The package does not run backtests on the given data set but \n displays a graphical explanation of the backtest results. Users can\n look at backtest graphics for different instruments, investment \n strategies, and portfolios. Summary statistics of different \n portfolio holdings are shown in the left panel, and interactive \n plots of profit and loss (P\\&L), net market value (NMV) and \n gross market value (GMV) are displayed in the right panel. ","Published":"2015-10-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"BACprior","Version":"2.0","Title":"Choice of the Hyperparameter Omega in the Bayesian Adjustment\nfor Confounding (BAC) Algorithm","Description":"The BACprior package provides an approximate sensitivity analysis of the \n Bayesian Adjustment for Confounding (BAC) algorithm (Wang et al., 2012) with regards to the\n hyperparameter omega. The package also provides functions to guide the user in their choice\n of an appropriate omega value. 
The method is based on Lefebvre, Atherton and Talbot (2014).","Published":"2014-11-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bacr","Version":"1.0.1","Title":"Bayesian Adjustment for Confounding","Description":"Estimating the average causal effect based on the Bayesian Adjustment for Confounding (BAC) algorithm.","Published":"2016-10-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"badgecreatr","Version":"0.1.0","Title":"Create Badges for 'Travis', 'Repostatus' 'Codecov.io' Etc in\nGithub Readme","Description":"Tired of copy and pasting almost identical markdown for badges in\n every new R package that you create on Github? \n This package will search your DESCRIPTION file and extract the package name,\n licence, R-version, and current projectversion and transform that into \n badges. It will also search for a .travis.yml file and create a 'Travis' badge,\n if you use 'Codecov.io' to check your code coverage after a 'Travis' build \n this package will also build a 'Codecov.io'-badge. All the badges will be \n placed below the top YAML content of your Rmarkdown file (Readme.Rmd). 
\n Currently creates badges for Projectstatus (Repostatus.org), licence,\n travis build status, codecov, minimal R version, CRAN status, \n current version of your package and last change of Readme.Rmd.","Published":"2016-07-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"badger","Version":"0.0.2","Title":"Badge for R Package","Description":"Query information and generate badges for use in README\n and GitHub Pages.","Published":"2017-03-08","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"BAEssd","Version":"1.0.1","Title":"Bayesian Average Error approach to Sample Size Determination","Description":"Implements sample size calculations following the approach\n described in \"Bayesian Average Error Based Approach to\n Hypothesis Testing and Sample Size Determination.\"","Published":"2012-11-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Bagidis","Version":"1.0","Title":"BAses GIving DIStances","Description":"This is the companion package of a PhD thesis entitled \"Bases Giving Distances. A new paradigm for investigating functional data with applications for spectroscopy\" by Timmermans (2012). See references for details and related publications. The core of the BAGIDIS methodology is a functional wavelet based semi-distance that has been introduced by Timmermans and von Sachs (2010, 2015) and Timmermans, Delsol and von Sachs (2013). This semi-distance allows for comparing curves with sharp local patterns that might not be well aligned from one curve to another. It is data-driven and highly adaptive to the curves being studied. Its main originality is its ability to consider simultaneously horizontal and vertical variations of patterns, which proves highly useful when used together with clustering algorithms or visualization methods. BAGIDIS is an acronym for BAsis GIving DIStances. 
The extension of BAGIDIS to image data relies on the same principles and has been described in Timmermans and Fryzlewicz (2012), Fryzlewicz and Timmermans (2015). ","Published":"2015-06-26","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"bagRboostR","Version":"0.0.2","Title":"Ensemble bagging and boosting classifiers","Description":"bagRboostR is a set of ensemble classifiers for multinomial\n classification. The bagging function is the implementation of Breiman's\n ensemble as described by Opitz & Maclin (1999). The boosting function is\n the implementation of Stagewise Additive Modeling using a Multi-class\n Exponential loss function (SAMME) created by Zhu et al (2006). Both bagging\n and SAMME implementations use randomForest as the weak classifier and\n expect a character outcome variable. Each ensemble classifier returns a\n character vector of predictions for the test set.","Published":"2014-03-05","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"baitmet","Version":"1.0.1","Title":"Library Driven Compound Profiling in Gas Chromatography - Mass\nSpectrometry Data","Description":"Automated quantification of metabolites by targeting mass spectral/retention time libraries into full scan-acquired gas chromatography - mass spectrometry (GC-MS) chromatograms. Baitmet outputs a table with compounds name, spectral matching score, retention index error, and compounds area in each sample. Baitmet can automatically determine the compounds retention indexes with or without co-injection of internal standards with samples.","Published":"2017-05-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BalanceCheck","Version":"0.1","Title":"Balance Check for Multiple Covariates in Matched Observational\nStudies","Description":"Two practical tests are provided for assessing whether multiple covariates in a treatment group and a matched control group are balanced in observational studies. 
","Published":"2016-09-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BalancedSampling","Version":"1.5.2","Title":"Balanced and Spatially Balanced Sampling","Description":"Select balanced and spatially balanced probability samples in multi-dimensional spaces with any prescribed inclusion probabilities. It contains fast (C++ via Rcpp) implementations of the included sampling methods. The local pivotal method and spatially correlated Poisson sampling (for spatially balanced sampling) are included. Also the cube method (for balanced sampling) and the local cube method (for doubly balanced sampling) are included.","Published":"2016-07-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BaM","Version":"1.0.1","Title":"Functions and Datasets for Books by Jeff Gill","Description":"Functions and datasets for Jeff Gill: \"Bayesian Methods: A Social and Behavioral Sciences Approach\". First, Second, and Third Edition. Published by Chapman and Hall/CRC (2002, 2007, 2014).","Published":"2016-05-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BAMBI","Version":"1.1.0","Title":"Bivariate Angular Mixture Models","Description":"Fit (using Bayesian methods) and simulate mixtures of univariate and bivariate angular distributions.","Published":"2017-02-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"bamboo","Version":"0.9.18","Title":"Protein Secondary Structure Prediction Using the Bamboo Method","Description":"Implementation of the Bamboo methods described in Li, Dahl, Vannucci, Joo, and Tsai (2014) .","Published":"2017-05-25","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"bamdit","Version":"3.1.0","Title":"Bayesian Meta-Analysis of Diagnostic Test Data","Description":"Functions for Bayesian meta-analysis of diagnostic test data which\n are based on a scale mixtures bivariate random-effects model.","Published":"2017-05-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} 
{"Package":"bamlss","Version":"0.1-2","Title":"Bayesian Additive Models for Location Scale and Shape (and\nBeyond)","Description":"Infrastructure for estimating probabilistic distributional regression models in a Bayesian framework.\n The distribution parameters may capture location, scale, shape, etc. and every parameter may depend\n on complex additive terms (fixed, random, smooth, spatial, etc.) similar to a generalized additive model.\n The conceptual and computational framework is introduced in Umlauf, Klein, Zeileis (2017)\n .","Published":"2017-04-14","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"BAMMtools","Version":"2.1.6","Title":"Analysis and Visualization of Macroevolutionary Dynamics on\nPhylogenetic Trees","Description":"Provides functions for analyzing and visualizing complex\n macroevolutionary dynamics on phylogenetic trees. It is a companion\n package to the command line program BAMM (Bayesian Analysis of\n Macroevolutionary Mixtures) and is entirely oriented towards the analysis,\n interpretation, and visualization of evolutionary rates. Functionality\n includes visualization of rate shifts on phylogenies, estimating\n evolutionary rates through time, comparing posterior distributions of\n evolutionary rates across clades, comparing diversification models using\n Bayes factors, and more.","Published":"2017-02-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bandit","Version":"0.5.0","Title":"Functions for simple A/B split test and multi-armed bandit\nanalysis","Description":"A set of functions for doing analysis of A/B split test data and web metrics in general.","Published":"2014-05-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"BANFF","Version":"2.0","Title":"Bayesian Network Feature Finder","Description":"Provides a full package of posterior inference, model comparison, and graphical illustration of model fitting. 
A parallel computing algorithm for the Markov chain Monte Carlo (MCMC) based posterior inference and an Expectation-Maximization (EM) based algorithm for posterior approximation are developed, both of which greatly reduce the computational time for model inference.","Published":"2017-03-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"bannerCommenter","Version":"0.1.0","Title":"Make Banner Comments with a Consistent Format","Description":"A convenience package for use while drafting code.\n It facilitates making stand-out comment lines decorated with\n bands of characters. The input text strings are converted into\n R comment lines, suitably formatted. These are then displayed in\n a console window and, if possible, automatically transferred to a\n clipboard ready for pasting into an R script. Designed to save\n time when drafting R scripts that will need to be navigated and\n maintained by other programmers.","Published":"2016-12-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BANOVA","Version":"0.8","Title":"Hierarchical Bayesian ANOVA Models","Description":"It covers several Bayesian Analysis of Variance (BANOVA) models used in analysis of experimental designs in which both within- and between-subjects factors are manipulated. They can be applied to data that are common in the behavioral and social sciences. The package includes: Hierarchical Bayes ANOVA models with normal response, t response, Binomial (Bernoulli) response, Poisson response, ordered multinomial response and multinomial response variables. All models accommodate unobserved heterogeneity by including a normal distribution of the parameters across individuals. Outputs of the package include tables of sums of squares, effect sizes and p-values, and tables of predictions, which are easily interpretable for behavioral and social researchers. The floodlight analysis and mediation analysis based on these models are also provided. 
BANOVA uses JAGS as the computational platform.","Published":"2016-12-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"banxicoR","Version":"0.9.0","Title":"Download Data from the Bank of Mexico","Description":"Provides functions to scrape IQY calls to Bank of Mexico,\n downloading and ordering the data conveniently.","Published":"2016-08-17","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"bapred","Version":"1.0","Title":"Batch Effect Removal and Addon Normalization (in Phenotype\nPrediction using Gene Data)","Description":"Various tools dealing with batch effects, in particular enabling the \n removal of discrepancies between training and test sets in prediction scenarios.\n Moreover, addon quantile normalization and addon RMA normalization (Kostka & Spang, \n 2008) is implemented to enable integrating the quantile normalization step into \n prediction rules. The following batch effect removal methods are implemented: \n FAbatch, ComBat, (f)SVA, mean-centering, standardization, Ratio-A and Ratio-G. \n For each of these we provide an additional function which enables a posteriori \n ('addon') batch effect removal in independent batches ('test data'). Here, the\n (already batch effect adjusted) training data is not altered. For evaluating the\n success of batch effect adjustment several metrics are provided. Moreover, the \n package implements a plot for the visualization of batch effects using principal \n component analysis. The main functions of the package for batch effect adjustment \n are ba() and baaddon() which enable batch effect removal and addon batch effect \n removal, respectively, with one of the seven methods mentioned above. Another \n important function here is bametric() which is a wrapper function for all implemented\n methods for evaluating the success of batch effect removal. 
For (addon) quantile \n normalization and (addon) RMA normalization the functions qunormtrain(), \n qunormaddon(), rmatrain() and rmaaddon() can be used.","Published":"2016-06-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"BaPreStoPro","Version":"0.1","Title":"Bayesian Prediction of Stochastic Processes","Description":"Bayesian estimation and prediction for stochastic processes based\n on the Euler approximation. Considered processes are: jump diffusion,\n (mixed) diffusion models, hidden (mixed) diffusion models, non-homogeneous\n Poisson processes (NHPP), (mixed) regression models for comparison and a\n regression model including a NHPP.","Published":"2016-06-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BarBorGradient","Version":"1.0.5","Title":"Function Minimum Approximator","Description":"Tool to find where a function has its lowest value(minimum). The\n functions can be any dimensions. Recommended use is with eps=10^-10, but can be\n run with 10^-20, although this depends on the function. Two more methods are in\n this package, simple gradient method (Gradmod) and Powell method (Powell). 
These\n are not recommended for use; their purpose is purely for comparison.","Published":"2017-04-24","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"barcode","Version":"1.1","Title":"Barcode distribution plots","Description":"This package includes the function \code{barcode()}, which\n produces a histogram-like plot of a distribution that shows\n granularity in the data.","Published":"2012-02-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BarcodingR","Version":"1.0-2","Title":"Species Identification using DNA Barcodes","Description":"To perform species identification using DNA barcodes.","Published":"2016-10-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Barnard","Version":"1.8","Title":"Barnard's Unconditional Test","Description":"Barnard's unconditional test for 2x2 contingency tables.","Published":"2016-10-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"BART","Version":"1.2","Title":"Bayesian Additive Regression Trees","Description":"Bayesian Additive Regression Trees (BART) provide flexible nonparametric modeling of covariates for continuous, binary and time-to-event outcomes. For more information on BART, see Chipman, George and McCulloch (2010) and Sparapani, Logan, McCulloch and Laud (2016) . ","Published":"2017-04-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bartMachine","Version":"1.2.3","Title":"Bayesian Additive Regression Trees","Description":"An advanced implementation of Bayesian Additive Regression Trees with expanded features for data analysis and visualization.","Published":"2016-05-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"bartMachineJARs","Version":"1.0","Title":"bartMachine JARs","Description":"These are bartMachine's Java dependency libraries. 
Note: this package has no functionality of its own and should not be installed as a standalone package without bartMachine.","Published":"2016-02-27","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"Barycenter","Version":"1.0","Title":"Wasserstein Barycenter","Description":"Computation of a Wasserstein Barycenter. The package implements a method described in Cuturi (2014) \"Fast Computation of Wasserstein Barycenters\". The paper is available at . To speed up the computation time the main iteration step is based on 'RcppArmadillo'.","Published":"2016-09-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"BAS","Version":"1.4.6","Title":"Bayesian Model Averaging using Bayesian Adaptive Sampling","Description":"Package for Bayesian Model Averaging in linear models and\n generalized linear models using stochastic or\n deterministic sampling without replacement from posterior\n distributions. Prior distributions on coefficients are\n from Zellner's g-prior or mixtures of g-priors\n corresponding to the Zellner-Siow Cauchy Priors or the\n mixture of g-priors from Liang et al (2008)\n \n for linear models or mixtures of g-priors in GLMs of Li and Clyde (2015)\n . Other model\n selection criteria include AIC, BIC and Empirical Bayes estimates of g.\n Sampling probabilities may be updated based on the sampled models\n using Sampling w/out Replacement or an efficient MCMC algorithm\n samples models using the BAS tree structure as an efficient hash table.\n Uniform priors over all models or beta-binomial prior distributions on\n model size are allowed, and for large p truncated priors on the model\n space may be used. The user may force variables to always be included.\n Details behind the sampling algorithm are provided in\n Clyde, Ghosh and Littman (2010) .\n This material is based upon work supported by the National Science\n Foundation under Grant DMS-1106891. 
Any opinions, findings, and\n conclusions or recommendations expressed in this material are those of\n the author(s) and do not necessarily reflect the views of the\n National Science Foundation.","Published":"2017-05-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"base64","Version":"2.0","Title":"Base64 Encoder and Decoder","Description":"Compatibility wrapper to replace the orphaned package by\n Romain Francois. New applications should use the 'openssl' or\n 'base64enc' package instead.","Published":"2016-05-10","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"base64enc","Version":"0.1-3","Title":"Tools for base64 encoding","Description":"This package provides tools for handling base64 encoding. It is more flexible than the orphaned base64 package.","Published":"2015-07-28","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"base64url","Version":"1.2","Title":"Fast and URL-Safe Base64 Encoder and Decoder","Description":"In contrast to RFC3548, the 62nd character (\"+\") is replaced with\n \"-\", the 63rd character (\"/\") is replaced with \"_\". Furthermore, the encoder\n does not fill the string with trailing \"=\". 
The resulting encoded strings\n comply to the regular expression pattern \"[A-Za-z0-9_-]\" and thus are\n safe to use in URLs or for file names.\n The package also comes with a simple base32 encoder/decoder suited for\n case insensitive file systems.","Published":"2017-06-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"baseballDBR","Version":"0.1.2","Title":"Sabermetrics and Advanced Baseball Statistics","Description":"A tool for gathering and analyzing data from the Baseball Databank , which includes player performance statistics from major league baseball in the United States beginning in the year 1871.","Published":"2017-06-15","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"basefun","Version":"0.0-38","Title":"Infrastructure for Computing with Basis Functions","Description":"Some very simple infrastructure for basis functions.","Published":"2017-05-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"baseline","Version":"1.2-1","Title":"Baseline Correction of Spectra","Description":"Collection of baseline correction algorithms, along with a framework and a GUI for optimising baseline algorithm parameters. Typical use of the package is for removing background effects from spectra originating from various types of spectroscopy and spectrometry, possibly optimizing this with regard to regression or classification results. Correction methods include polynomial fitting, weighted local smoothers and many more.","Published":"2015-07-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"BASIX","Version":"1.1","Title":"BASIX: An efficient C/C++ toolset for R","Description":"BASIX provides some efficient C/C++ implementations to speed up calculations in R. ","Published":"2013-10-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"BASS","Version":"0.2.2","Title":"Bayesian Adaptive Spline Surfaces","Description":"Bayesian fitting and sensitivity analysis methods for adaptive\n spline surfaces. 
Built to handle continuous and categorical inputs as well as\n functional or scalar output. An extension of the methodology in Denison, Mallick\n and Smith (1998) .","Published":"2017-03-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"BaSTA","Version":"1.9.4","Title":"Age-Specific Survival Analysis from Incomplete\nCapture-Recapture/Recovery Data","Description":"Estimates survival and mortality with covariates from capture-recapture/recovery data in a Bayesian framework when many individuals are of unknown age. It includes tools for data checking, model diagnostics and outputs such as life-tables and plots.","Published":"2015-11-08","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"bastah","Version":"1.0.7","Title":"Big Data Statistical Analysis for High-Dimensional Models","Description":"Big data statistical analysis for high-dimensional models is made possible by modifying lasso.proj() in 'hdi' package by replacing its nodewise-regression with sparse precision matrix computation using 'BigQUIC'.","Published":"2016-06-02","License":"GPL (== 2)","snapshot_date":"2017-06-23"} {"Package":"BAT","Version":"1.5.5","Title":"Biodiversity Assessment Tools","Description":"Includes algorithms to assess alpha and beta\n diversity in all their dimensions (taxon, phylogenetic and functional\n diversity), whether communities are completely sampled or not. 
It allows\n performing a number of analyses based on either species identities or\n phylogenetic/functional trees depicting species relationships.","Published":"2016-12-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"batade","Version":"0.1","Title":"HTML reports and so on","Description":"This package provides some utility functions (e.g HTML\n report maker).","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"batch","Version":"1.1-4","Title":"Batching Routines in Parallel and Passing Command-Line Arguments\nto R","Description":"Functions to allow you to easily pass command-line\n arguments into R, and functions to aid in submitting your R\n code in parallel on a cluster and joining the results afterward\n (e.g. multiple parameter values for simulations running in\n parallel, splitting up a permutation test in parallel, etc.).\n See `parseCommandArgs(...)' for the main example of how to use\n this package.","Published":"2013-06-05","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"BatchExperiments","Version":"1.4.1","Title":"Statistical Experiments on Batch Computing Clusters","Description":"Extends the BatchJobs package to run statistical experiments on\n batch computing clusters. For further details see the project web page.","Published":"2015-03-18","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"BatchGetSymbols","Version":"1.1","Title":"Downloads and Organizes Financial Data for Multiple Tickers","Description":"Makes it easy to download a large number of trade data from Yahoo or Google Finance.","Published":"2016-12-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"BatchJobs","Version":"1.6","Title":"Batch Computing with R","Description":"Provides Map, Reduce and Filter variants to generate jobs on batch\n computing systems like PBS/Torque, LSF, SLURM and Sun Grid Engine.\n Multicore and SSH systems are also supported. 
For further details see the\n project web page.","Published":"2015-03-18","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"BatchMap","Version":"1.0.1.0","Title":"Software for the Creation of High Density Linkage Maps in\nOutcrossing Species","Description":"Algorithms that build on the 'OneMap' package to create linkage\n maps from high density data in outcrossing species in reasonable time frames.","Published":"2017-03-30","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"batchmeans","Version":"1.0-3","Title":"Consistent Batch Means Estimation of Monte Carlo Standard Errors","Description":"Provides consistent batch means estimation of Monte\n Carlo standard errors.","Published":"2016-07-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"batchtools","Version":"0.9.3","Title":"Tools for Computation on Batch Systems","Description":"As a successor of the packages 'BatchJobs' and 'BatchExperiments',\n this package provides a parallel implementation of the Map function for high\n performance computing systems managed by schedulers 'IBM Spectrum LSF'\n (),\n 'OpenLava' (), 'Univa Grid Engine'/'Oracle Grid\n Engine' (), 'Slurm' (),\n 'TORQUE/PBS'\n (), or\n 'Docker Swarm' ().\n A multicore and socket mode allow the parallelization on a local machines,\n and multiple machines can be hooked up via SSH to create a makeshift\n cluster. Moreover, the package provides an abstraction mechanism to define\n large-scale computer experiments in a well-organized and reproducible way.","Published":"2017-04-21","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"BaTFLED3D","Version":"0.2.1","Title":"Bayesian Tensor Factorization Linked to External Data","Description":"BaTFLED is a machine learning algorithm designed to make predictions and determine interactions in data that varies along three independent modes. 
For example BaTFLED was developed to predict the growth of cell lines when treated with drugs at different doses. The first mode corresponds to cell lines and incorporates predictors such as cell line genomics and growth conditions. The second mode corresponds to drugs and incorporates predictors indicating known targets and structural features. The third mode corresponds to dose and there are no dose-specific predictors (although the algorithm is capable of including predictors for the third mode if present). See 'BaTFLED3D_vignette.rmd' for a simulated example.","Published":"2017-04-02","License":"CC BY-NC-SA 4.0","snapshot_date":"2017-06-23"} {"Package":"batman","Version":"0.1.0","Title":"Convert Categorical Representations of Logicals to Actual\nLogicals","Description":"Survey systems and other third-party data sources commonly use non-standard representations of logical values when\n it comes to qualitative data - \"Yes\", \"No\" and \"N/A\", say. batman is a package designed to seamlessly convert these into logicals.\n It is highly localised, and contains equivalents to boolean values in languages including German, French, Spanish, Italian,\n Turkish, Chinese and Polish.","Published":"2015-10-26","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"batteryreduction","Version":"0.1.1","Title":"An R Package for Data Reduction by Battery Reduction","Description":"Battery reduction is a method used in data reduction. It uses Gram-Schmidt orthogonal rotations to find out a subset of variables best representing the original set of variables. ","Published":"2015-12-23","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"BayClone2","Version":"1.1","Title":"Bayesian Feature Allocation Model for Tumor Heterogeneity","Description":"A Bayesian feature allocation model is implemented for inference on tumor heterogeneity using next-generation sequencing data. 
The model identifies the subclonal copy number and single nucleotide mutations at a selected set of loci and provides inference on genetic tumor variation.","Published":"2014-12-24","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"bayesAB","Version":"0.7.0","Title":"Fast Bayesian Methods for AB Testing","Description":"A suite of functions that allow the user to analyze A/B test\n data in a Bayesian framework. Intended to be a drop-in replacement for\n common frequentist hypothesis tests such as the t-test and chi-squared test.","Published":"2016-10-09","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"BayesBD","Version":"1.1","Title":"Bayesian Inference for Image Boundaries","Description":"Provides tools for carrying out a Bayesian analysis of image boundaries. Functions are provided\n for both binary (Bernoulli) and continuous (Gaussian) images. Examples, along with an interactive shiny function,\n illustrate how to perform simulations, analyze custom data, and plot estimates and credible intervals. ","Published":"2016-12-14","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"BayesBinMix","Version":"1.4","Title":"Bayesian Estimation of Mixtures of Multivariate Bernoulli\nDistributions","Description":"Fully Bayesian inference for estimating the number of clusters and related parameters for heterogeneous binary data.","Published":"2017-01-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"bayesbio","Version":"1.0.0","Title":"Miscellaneous Functions for Bioinformatics and Bayesian\nStatistics","Description":"A hodgepodge of hopefully helpful functions. 
Two of these perform\n shrinkage estimation: one using a simple weighted method where the user can\n specify the degree of shrinkage required, and one using James-Stein shrinkage\n estimation for the case of unequal variances.","Published":"2016-05-24","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"bayesboot","Version":"0.2.1","Title":"An Implementation of Rubin's (1981) Bayesian Bootstrap","Description":"Functions for performing the Bayesian bootstrap as introduced by\n Rubin (1981) and for summarizing the result.\n The implementation can handle both summary statistics that work on a\n weighted version of the data and summary statistics that work on a\n resampled data set.","Published":"2016-04-06","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"BayesBridge","Version":"0.6","Title":"Bridge Regression","Description":"Bayesian bridge regression.","Published":"2015-02-14","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"bayesCL","Version":"0.0.1","Title":"Bayesian Inference on a GPU using OpenCL","Description":"Bayesian inference on a GPU. The package currently supports sampling from Polya-Gamma, multinomial logit and Bayesian lasso models.","Published":"2017-04-14","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"BayesCombo","Version":"1.0","Title":"Bayesian Evidence Combination","Description":"Combine diverse evidence across multiple studies to test a high level scientific theory. 
The methods can also be used as an alternative to a standard meta-analysis.","Published":"2017-02-08","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"BayesComm","Version":"0.1-2","Title":"Bayesian Community Ecology Analysis","Description":"Bayesian multivariate binary (probit) regression\n models for analysis of ecological communities.","Published":"2015-07-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bayescount","Version":"0.9.99-5","Title":"Power Calculations and Bayesian Analysis of Count Distributions\nand FECRT Data using MCMC","Description":"A set of functions to allow analysis of count data (such\n as faecal egg count data) using Bayesian MCMC methods. Returns\n information on the possible values for mean count, coefficient\n of variation and zero inflation (true prevalence) present in\n the data. A complete faecal egg count reduction test (FECRT)\n model is implemented, which returns inference on the true\n efficacy of the drug from the pre- and post-treatment data\n provided, using non-parametric bootstrapping as well as using\n Bayesian MCMC. Functions to perform power analyses for faecal\n egg counts (including FECRT) are also provided.","Published":"2015-04-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"BayesCR","Version":"2.0","Title":"Bayesian Analysis of Censored Regression Models Under Scale\nMixture of Skew Normal Distributions","Description":"Proposes a parametric fit for censored linear regression models based on SMSN distributions, from a Bayesian perspective. It also generates SMSN random variables.","Published":"2015-01-31","License":"GPL (>= 3.1.2)","snapshot_date":"2017-06-23"} {"Package":"BayesDA","Version":"2012.04-1","Title":"Functions and Datasets for the book \"Bayesian Data Analysis\"","Description":"Functions for Bayesian Data Analysis, with datasets from\n the book \"Bayesian Data Analysis (second edition)\" by Gelman,\n Carlin, Stern and Rubin. 
Not all datasets are included yet; hopefully\n it will be completed soon.","Published":"2012-04-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bayesDccGarch","Version":"2.0","Title":"The Bayesian Dynamic Conditional Correlation GARCH Model","Description":"Bayesian estimation of the dynamic conditional correlation GARCH model for multivariate time series volatility (Fioruci, J.A., Ehlers, R.S. and Andrade-Filho, M.G., (2014), DOI:10.1080/02664763.2013.839635).","Published":"2016-02-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BAYESDEF","Version":"0.1.0","Title":"Bayesian Analysis of DSD","Description":"Definitive Screening Designs are a class of experimental designs that under factor sparsity have the potential to estimate linear, quadratic and interaction effects with little experimental effort. BAYESDEF is a package that performs a five-step strategy to analyze this kind of experiment, making use of tools from the Bayesian approach. It also includes the least absolute shrinkage and selection operator (lasso) as a check (Aguirre VM. 
(2016) ).","Published":"2017-06-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bayesDem","Version":"2.5-1","Title":"Graphical User Interface for bayesTFR, bayesLife and bayesPop","Description":"Provides graphical user interface for the packages 'bayesTFR', 'bayesLife' and 'bayesPop'.","Published":"2016-11-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bayesDP","Version":"1.1.1","Title":"Tools for the Bayesian Discount Prior Function","Description":"Functions for data augmentation using the\n Bayesian discount prior function for 1 arm and 2 arm clinical trials.","Published":"2017-05-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"BayesFactor","Version":"0.9.12-2","Title":"Computation of Bayes Factors for Common Designs","Description":"A suite of functions for computing\n various Bayes factors for simple designs, including contingency tables,\n one- and two-sample designs, one-way designs, general ANOVA designs, and\n linear regression.","Published":"2015-09-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"BayesFM","Version":"0.1.2","Title":"Bayesian Inference for Factor Modeling","Description":"Collection of procedures to perform Bayesian analysis on a variety\n of factor models. Currently, it includes: Bayesian Exploratory Factor\n Analysis (befa), an approach to dedicated factor analysis with stochastic\n search on the structure of the factor loading matrix. 
The number of latent\n factors, as well as the allocation of the manifest variables to the factors,\n are not fixed a priori but determined during MCMC sampling.\n More approaches will be included in future releases of this package.","Published":"2017-02-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"bayesGARCH","Version":"2.1.3","Title":"Bayesian Estimation of the GARCH(1,1) Model with Student-t\nInnovations","Description":"Provides the bayesGARCH() function which performs the\n Bayesian estimation of the GARCH(1,1) model with Student's t innovations as described in Ardia (2008) .","Published":"2017-02-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bayesGDS","Version":"0.6.2","Title":"Scalable Rejection Sampling for Bayesian Hierarchical Models","Description":"Functions for implementing the Braun and Damien (2015) rejection\n sampling algorithm for Bayesian hierarchical models. The algorithm generates\n posterior samples in parallel, and is scalable when the individual units are\n conditionally independent.","Published":"2016-03-16","License":"MPL (== 2.0)","snapshot_date":"2017-06-23"} {"Package":"BayesGESM","Version":"1.4","Title":"Bayesian Analysis of Generalized Elliptical Semi-Parametric\nModels and Flexible Measurement Error Models","Description":"Set of tools to perform the statistical inference based on the Bayesian approach for regression models under the assumption that independent additive errors follow normal, Student-t, slash, contaminated normal, Laplace or symmetric hyperbolic distributions, i.e., additive errors follow a scale mixtures of normal distributions. 
The regression models considered in this package are: (i) Generalized elliptical semi-parametric models, where both location and dispersion parameters of the response variable distribution include non-parametric additive components described by using B-splines; and (ii) Flexible measurement error models under the presence of homoscedastic and heteroscedastic random errors, which admit explanatory variables with and without additive measurement errors as well as the presence of non-parametric components approximated by using B-splines. ","Published":"2015-06-06","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"BayesH","Version":"1.0","Title":"Bayesian Regression Model with Mixture of Two Scaled Inverse Chi\nSquare as Hyperprior","Description":"Functions to perform a Bayesian regression model with a mixture of two scaled inverse\n chi-square distributions as the hyperprior for the variance of each regression coefficient.","Published":"2016-04-04","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"BayesianAnimalTracker","Version":"1.2","Title":"Bayesian Melding of GPS and DR Path for Animal Tracking","Description":"Bayesian melding approach to combine the GPS observations and Dead-Reckoned path for an accurate animal track, or equivalently, to use the GPS observations to correct the Dead-Reckoned path. It can take the measurement errors in the GPS observations into account and provide an uncertainty statement about the corrected path. 
The main calculation can be done by the BMAnimalTrack function.","Published":"2014-12-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Bayesianbetareg","Version":"1.2","Title":"Bayesian Beta regression: joint mean and precision modeling","Description":"This package performs beta regression","Published":"2014-07-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bayesianETAS","Version":"1.0.3","Title":"Bayesian Estimation of the ETAS Model for Earthquake Occurrences","Description":"The Epidemic Type Aftershock Sequence (ETAS) model is one of the best-performing methods for modeling and forecasting earthquake occurrences. This package implements Bayesian estimation routines to draw samples from the full posterior distribution of the model parameters, given an earthquake catalog. The paper on which this package is based is Gordon J. Ross - Bayesian Estimation of the ETAS Model for Earthquake Occurrences (2016), available from the below URL.","Published":"2017-01-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"BayesianNetwork","Version":"0.1.1","Title":"Bayesian Network Modeling and Analysis","Description":"A 'Shiny' web application for creating interactive Bayesian Network models,\n learning the structure and parameters of Bayesian networks, and utilities for classical\n network analysis.","Published":"2016-10-25","License":"Apache License | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"BayesianTools","Version":"0.1.2","Title":"General-Purpose MCMC and SMC Samplers and Tools for Bayesian\nStatistics","Description":"General-purpose MCMC and SMC samplers, as well as plot and\n diagnostic functions for Bayesian statistics, with a particular focus on\n calibrating complex system models. 
Implemented samplers include various\n Metropolis MCMC variants (including adaptive and/or delayed rejection MH), the\n T-walk, two differential evolution MCMCs, two DREAM MCMCs, and a sequential\n Monte Carlo (SMC) particle filter.","Published":"2017-05-27","License":"CC BY-SA 4.0","snapshot_date":"2017-06-23"} {"Package":"bayesImageS","Version":"0.4-0","Title":"Bayesian Methods for Image Segmentation using a Potts Model","Description":"Various algorithms for segmentation of 2D and 3D images, such\n as computed tomography and satellite remote sensing. This package implements\n Bayesian image analysis using the hidden Potts model with external field\n prior. Latent labels are sampled using chequerboard updating or Swendsen-Wang.\n Algorithms for the smoothing parameter include pseudolikelihood, path sampling,\n the exchange algorithm, and approximate Bayesian computation (ABC).","Published":"2017-03-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BayesLCA","Version":"1.7","Title":"Bayesian Latent Class Analysis","Description":"Bayesian Latent Class Analysis using several different\n methods.","Published":"2015-04-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bayesLife","Version":"3.0-5","Title":"Bayesian Projection of Life Expectancy","Description":"Making probabilistic projections of life expectancy for all countries of the world, using a Bayesian hierarchical model .","Published":"2017-05-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BayesLogit","Version":"0.6","Title":"Logistic Regression","Description":"The BayesLogit package does posterior simulation for binomial and\n multinomial logistic regression using the Polya-Gamma latent variable\n technique. This method is fully automatic, exact, and fast. 
A routine to\n efficiently sample from the Polya-Gamma class of distributions is included.","Published":"2016-10-20","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"bayesloglin","Version":"1.0.1","Title":"Bayesian Analysis of Contingency Table Data","Description":"The function MC3() searches for log-linear models with the highest posterior probability. The function gibbsSampler() is a blocked Gibbs sampler for sampling from the posterior distribution of the log-linear parameters. The functions findPostMean() and findPostCov() compute the posterior mean and covariance matrix for decomposable models which, for these models, is available in closed form.","Published":"2016-12-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bayesm","Version":"3.0-2","Title":"Bayesian Inference for Marketing/Micro-Econometrics","Description":"Covers many important models used\n in marketing and micro-econometrics applications. \n The package includes:\n Bayes Regression (univariate or multivariate dep var),\n Bayes Seemingly Unrelated Regression (SUR),\n Binary and Ordinal Probit,\n Multinomial Logit (MNL) and Multinomial Probit (MNP),\n Multivariate Probit,\n Negative Binomial (Poisson) Regression,\n Multivariate Mixtures of Normals (including clustering),\n Dirichlet Process Prior Density Estimation with normal base,\n Hierarchical Linear Models with normal prior and covariates,\n Hierarchical Linear Models with a mixture of normals prior and covariates,\n Hierarchical Multinomial Logits with a mixture of normals prior\n and covariates,\n Hierarchical Multinomial Logits with a Dirichlet Process prior and covariates,\n Hierarchical Negative Binomial Regression Models,\n Bayesian analysis of choice-based conjoint data,\n Bayesian treatment of linear instrumental variables models,\n Analysis of Multivariate Ordinal survey data with scale\n usage heterogeneity (as in Rossi et al, JASA (01)),\n Bayesian Analysis of Aggregate Random Coefficient Logit 
Models as in BLP (see\n Jiang, Manchanda, Rossi 2009)\n For further reference, consult our book, Bayesian Statistics and\n Marketing by Rossi, Allenby and McCulloch (Wiley 2005) and Bayesian Non- and Semi-Parametric\n Methods and Applications (Princeton U Press 2014).","Published":"2015-06-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BayesMAMS","Version":"0.1","Title":"Designing Bayesian Multi-Arm Multi-Stage Studies","Description":"Calculating Bayesian sample sizes for multi-arm trials where several experimental treatments are compared to a common control, perhaps even at multiple stages.","Published":"2015-11-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bayesMCClust","Version":"1.0","Title":"Mixtures-of-Experts Markov Chain Clustering and Dirichlet\nMultinomial Clustering","Description":"This package provides various Markov Chain Monte Carlo\n (MCMC) sampler for model-based clustering of discrete-valued\n time series obtained by observing a categorical variable with\n several states (in a Bayesian approach). 
In order to analyze\n group membership, we also provide an extension to these\n approaches by formulating a probabilistic model for the latent\n group indicators within the Bayesian classification rule using\n a multinomial logit model.","Published":"2012-01-31","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"BayesMed","Version":"1.0.1","Title":"Default Bayesian Hypothesis Tests for Correlation, Partial\nCorrelation, and Mediation","Description":"Default Bayesian hypothesis tests for correlation, partial correlation, and mediation.","Published":"2015-02-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"bayesmeta","Version":"1.4","Title":"Bayesian Random-Effects Meta-Analysis","Description":"A collection of functions allowing the user to derive the posterior distribution of the two parameters in a random-effects meta-analysis, and providing functionality to evaluate joint and marginal posterior probability distributions, predictive distributions, shrinkage effects, etc.","Published":"2017-02-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bayesmix","Version":"0.7-4","Title":"Bayesian Mixture Models with JAGS","Description":"The fitting of finite mixture models of univariate\n\t Gaussian distributions using JAGS within a Bayesian\n\t framework is provided.","Published":"2015-07-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BayesMixSurv","Version":"0.9.1","Title":"Bayesian Mixture Survival Models using Additive\nMixture-of-Weibull Hazards, with Lasso Shrinkage and\nStratification","Description":"Bayesian Mixture Survival Models using Additive Mixture-of-Weibull Hazards, with Lasso Shrinkage and\n Stratification. As a Bayesian dynamic survival model, it relaxes the proportional-hazard assumption. 
Lasso shrinkage controls\n overfitting, given the increase in the number of free parameters in the model due to presence of two Weibull components\n in the hazard function.","Published":"2016-09-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BayesNetBP","Version":"1.2.1","Title":"Bayesian Network Belief Propagation","Description":"Belief propagation methods in Bayesian Networks to propagate evidence through the network. The implementation of these methods are based on the article: Cowell, RG (2005). Local Propagation in Conditional Gaussian Bayesian Networks .","Published":"2017-06-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"BayesNI","Version":"0.1","Title":"BayesNI: Bayesian Testing Procedure for Noninferiority with\nBinary Endpoints","Description":"A Bayesian testing procedure for noninferiority trials\n with binary endpoints. The prior is constructed based on\n Bernstein polynomials with options for both informative and\n non-informative prior. The critical value of the test statistic\n (Bayes factor) is determined by minimizing total weighted error\n (TWE) criteria","Published":"2012-09-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"BayesPieceHazSelect","Version":"1.1.0","Title":"Variable Selection in a Hierarchical Bayesian Model for a Hazard\nFunction","Description":"Fits a piecewise exponential hazard to survival data using a\n Hierarchical Bayesian model with an Intrinsic Conditional Autoregressive\n formulation for the spatial dependency in the hazard rates for each piece.\n This function uses Metropolis- Hastings-Green MCMC to allow the number of split\n points to vary and also uses Stochastic Search Variable Selection to determine\n what covariates drive the risk of the event. This function outputs trace plots\n depicting the number of split points in the hazard and the number of variables\n included in the hazard. 
The function saves all posterior quantities to the\n desired path.","Published":"2017-01-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"BayesPiecewiseICAR","Version":"0.2.1","Title":"Hierarchical Bayesian Model for a Hazard Function","Description":"Fits a piecewise exponential hazard to survival data using a\n Hierarchical Bayesian model with an Intrinsic Conditional Autoregressive\n formulation for the spatial dependency in the hazard rates for each piece.\n This function uses Metropolis- Hastings-Green MCMC to allow the number of split\n points to vary. This function outputs graphics that display the histogram of\n the number of split points and the trace plots of the hierarchical parameters.\n The function outputs a list that contains the posterior samples for the number\n of split points, the location of the split points, and the log hazard rates\n corresponding to these splits. Additionally, this outputs the posterior samples\n of the two hierarchical parameters, Mu and Sigma^2.","Published":"2017-01-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"bayesplot","Version":"1.2.0","Title":"Plotting for Bayesian Models","Description":"Plotting functions for posterior analysis, model checking,\n and MCMC diagnostics. 
The package is designed not only to provide convenient\n functionality for users, but also a common set of functions that can be\n easily used by developers working on a variety of R packages for Bayesian\n modeling, particularly (but not exclusively) packages interfacing with Stan.","Published":"2017-04-12","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"bayesPop","Version":"6.0-4","Title":"Probabilistic Population Projection","Description":"Generating population projections for all countries of the world using several probabilistic components, such as total fertility rate and life expectancy.","Published":"2017-02-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bayespref","Version":"1.0","Title":"Hierarchical Bayesian analysis of ecological count data","Description":"This program implements a hierarchical Bayesian analysis\n of count data, such as preference experiments. It provides\n population-level and individual-level preference parameter\n estimates obtained via MCMC. It also allows for model\n comparison using Deviance Information Criterion.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bayesQR","Version":"2.3","Title":"Bayesian Quantile Regression","Description":"Bayesian quantile regression using the asymmetric Laplace distribution, both continuous as well as binary dependent variables are supported. The package consists of implementations of the methods of Yu & Moyeed (2001) , Benoit & Van den Poel (2012) and Al-Hamzawi, Yu & Benoit (2012) . To speed up the calculations, the Markov Chain Monte Carlo core of all algorithms is programmed in Fortran and called from R.","Published":"2017-01-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bayesreg","Version":"1.0","Title":"Bayesian Regression Models with Continuous Shrinkage Priors","Description":"Fits linear or logistic regression model using Bayesian continuous\n shrinkage prior distributions. 
Handles ridge, lasso, horseshoe and horseshoe+\n regression with logistic, Gaussian, Laplace or Student-t distributed targets.","Published":"2016-11-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"bayess","Version":"1.4","Title":"Bayesian Essentials with R","Description":"bayess contains a collection of functions that allow the\n reenactment of the R programs used in the book \"Bayesian\n Essentials with R\" (revision of \"Bayesian Core\") without\n further programming. Since the R code is available as well, it can\n be modified by the user to conduct one's own simulations.","Published":"2013-02-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"BayesS5","Version":"1.30","Title":"Bayesian Variable Selection Using Simplified Shotgun Stochastic\nSearch with Screening (S5)","Description":"In p >> n settings, full posterior sampling using existing Markov chain Monte\n Carlo (MCMC) algorithms is highly inefficient and often not feasible from a practical\n perspective. To overcome this problem, we propose a scalable stochastic search algorithm, called the Simplified Shotgun Stochastic Search (S5), aimed at rapidly exploring interesting regions of the model space and finding the maximum a posteriori (MAP) model. The S5 also provides an approximation of the posterior probability of each model (including the marginal inclusion probabilities). This algorithm is part of an article titled Scalable Bayesian Variable Selection Using Nonlocal Prior Densities in Ultrahigh-dimensional Settings (2017+), by Minsuk Shin, Anirban Bhattachary, and Valen E. Johnson, accepted in Statistica Sinica. ","Published":"2017-02-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BayesSAE","Version":"1.0-1","Title":"Bayesian Analysis of Small Area Estimation","Description":"This package provides a variety of functions to deal with several specific small area area-\n level models in a Bayesian context. 
Models provided range from the basic Fay-Herriot model to \n its improvements such as You-Chapman models, unmatched models, spatial models and so on. \n Different types of priors for specific parameters could be chosen to obtain MCMC posterior \n draws. The main sampling function is written in C with the GSL library so as to facilitate the \n computation. Model internal checking and model comparison criteria are also included.","Published":"2013-10-28","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"BayesSingleSub","Version":"0.6.2","Title":"Computation of Bayes factors for interrupted time-series designs","Description":"The BayesSingleSub package is a suite of functions for computing various Bayes factors for interrupted time-series designs, based on the models described in de Vries and Morey (2013).","Published":"2014-01-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"BayesSpec","Version":"0.5.3","Title":"Bayesian Spectral Analysis Techniques","Description":"An implementation of methods for spectral analysis using the Bayesian framework. It includes functions for modelling the spectrum as well as appropriate plotting and output estimates. There is segmentation capability with RJ MCMC (Reversible Jump Markov Chain Monte Carlo). The package takes these methods predominantly from the 2012 paper \"AdaptSPEC: Adaptive Spectral Estimation for Nonstationary Time Series\".","Published":"2017-02-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"BayesSummaryStatLM","Version":"1.0-1","Title":"MCMC Sampling of Bayesian Linear Models via Summary Statistics","Description":"Methods for generating Markov Chain Monte Carlo (MCMC) posterior samples of Bayesian linear regression model parameters that require only summary statistics of data as input. Summary statistics are useful for systems with very limited amounts of physical memory. 
The package provides two functions: one function that computes summary statistics of data and one function that carries out the MCMC posterior sampling for Bayesian linear regression models where summary statistics are used as input. The function read.regress.data.ff utilizes the R package 'ff' to handle data sets that are too large to fit into a user's physical memory, by reading in data in chunks.","Published":"2015-03-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bayesSurv","Version":"3.0","Title":"Bayesian Survival Regression with Flexible Error and Random\nEffects Distributions","Description":"Contains Bayesian implementations of Mixed-Effects Accelerated Failure Time (MEAFT) models\n for censored data. Those can be not only right-censored but also interval-censored,\n\t doubly-interval-censored or misclassified interval-censored.","Published":"2017-05-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bayesTFR","Version":"6.0-0","Title":"Bayesian Fertility Projection","Description":"Making probabilistic projections of total fertility rate for all countries of the world, using a Bayesian hierarchical model.","Published":"2017-04-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Bayesthresh","Version":"2.0.1","Title":"Bayesian thresholds mixed-effects models for categorical data","Description":"This package fits a linear mixed model for ordinal\n categorical responses using Bayesian inference via Monte Carlo\n Markov Chains. Default is Nandran & Chen algorithm using\n Gaussian link function and saving just the summaries of the\n chains. 
Among the options, the package allows for two other\n algorithms: using Student's \"t\" link function and\n saving the full chains.","Published":"2013-03-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BayesTree","Version":"0.3-1.4","Title":"Bayesian Additive Regression Trees","Description":"This is an implementation of BART: Bayesian Additive Regression Trees,\n by Chipman, George, and McCulloch (2010).","Published":"2016-07-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BayesTreePrior","Version":"1.0.1","Title":"Bayesian Tree Prior Simulation","Description":"Provides a way to simulate from the prior distribution of Bayesian trees by Chipman et al. (1998) . The prior distribution of Bayesian trees is highly dependent on the design matrix X, therefore using the hyperparameters suggested by Chipman et al. (1998) is not recommended and could lead to an unexpected prior distribution. This work is part of my master's thesis (expected 2016).","Published":"2016-07-04","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"BayesValidate","Version":"0.0","Title":"BayesValidate Package","Description":"BayesValidate implements the software validation method\n described in the paper \"Validation of Software for Bayesian\n Models using Posterior Quantiles\" (Cook, Gelman, and Rubin,\n 2005). It inputs a function to perform Bayesian inference as\n well as functions to generate data from the Bayesian model\n being fit, and repeatedly generates and analyzes data to check\n that the Bayesian inference program works properly.","Published":"2006-03-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BayesVarSel","Version":"1.7.0","Title":"Bayes Factors, Model Choice and Variable Selection in Linear\nModels","Description":"Conceived to calculate Bayes factors in linear models and then to provide a formal Bayesian answer to testing and variable selection problems. 
From a theoretical side, the emphasis in this package is placed on the prior distributions, and it allows a wide range of them: Jeffreys (1961); Zellner and Siow (1980); Zellner and Siow (1984); Zellner (1986); Fernandez et al. (2001); Liang et al. (2008) and Bayarri et al. (2012). The interaction with the package is through a friendly interface that syntactically mimics the well-known lm() command of R. The resulting objects can be easily explored, providing the user with very valuable information (like marginal, joint and conditional inclusion probabilities of potential variables; the highest posterior probability model, HPM; the median probability model, MPM) about the structure of the true (data-generating) model. Additionally, this package incorporates the ability to handle problems with a large number of potential explanatory variables through parallel and heuristic versions of the main commands, Garcia-Donato and Martinez-Beneito (2013). ","Published":"2016-11-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"BayesX","Version":"0.2-9","Title":"R Utilities Accompanying the Software Package BayesX","Description":"This package provides functionality for exploring and visualising estimation results\n\t obtained with the software package BayesX for structured additive regression. 
It also provides\n\t functions to read, write and manipulate map objects that are required in spatial analyses\n\t performed with BayesX, free software for estimating structured additive regression models \n (http://www.bayesx.org).","Published":"2014-08-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"BayesXsrc","Version":"2.1-2","Title":"R Package Distribution of the BayesX C++ Sources","Description":"BayesX performs Bayesian inference in structured additive regression (STAR) models.\n\tThe R package BayesXsrc provides the BayesX command line tool for easy installation.\n\tA convenient R interface is provided in package R2BayesX.","Published":"2013-11-22","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"BayHap","Version":"1.0.1","Title":"Bayesian analysis of haplotype association using Markov Chain\nMonte Carlo","Description":"The package BayHap performs simultaneous estimation of\n uncertain haplotype frequencies and association with haplotypes\n based on generalized linear models for quantitative, binary and\n survival traits. Bayesian statistics and Markov Chain Monte\n Carlo techniques are the theoretical framework for the methods\n of estimation included in this package. Prior values for model\n parameters can be included by the user. 
Convergence diagnostics\n and statistical and graphical analysis of the sampling output\n can also be carried out.","Published":"2013-03-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BayHaz","Version":"0.1-3","Title":"R Functions for Bayesian Hazard Rate Estimation","Description":"A suite of R functions for Bayesian estimation of smooth\n hazard rates via Compound Poisson Process (CPP) and Bayesian\n Penalized Spline (BPS) priors.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BaylorEdPsych","Version":"0.5","Title":"R Package for Baylor University Educational Psychology\nQuantitative Courses","Description":"Functions and data used for Baylor University Educational\n Psychology Quantitative Courses.","Published":"2012-07-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bayou","Version":"1.1.0","Title":"Bayesian Fitting of Ornstein-Uhlenbeck Models to Phylogenies","Description":"Tools for fitting and simulating multi-optima Ornstein-Uhlenbeck\n models to phylogenetic comparative data using Bayesian reversible-jump\n methods.","Published":"2015-10-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BaySIC","Version":"1.0","Title":"Bayesian Analysis of Significantly Mutated Genes in Cancer","Description":"This R package is the software implementation of the\n algorithm BaySIC, a Bayesian approach toward analysis of\n significantly mutated genes in cancer data.","Published":"2013-04-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BAYSTAR","Version":"0.2-9","Title":"On Bayesian analysis of Threshold autoregressive model (BAYSTAR)","Description":"The BAYSTAR package\n provides functionality for Bayesian estimation in\n threshold autoregressive models.","Published":"2013-09-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bazar","Version":"0.1.4","Title":"Miscellaneous Basic 
Functions","Description":"A collection of miscellaneous functions for \n copying objects to the clipboard ('Copy');\n manipulating strings ('concat', 'mgsub', 'trim', 'verlan'); \n loading or showing packages ('library_with_rep', 'require_with_rep', \n 'sessionPackages'); \n creating or testing for named lists ('nlist', 'as.nlist', 'is.nlist'), \n formulas ('is.formula'), empty objects ('as.empty', 'is.empty'), \n whole numbers ('as.wholenumber', 'is.wholenumber'); \n testing for equality ('almost.equal', 'almost.zero'); \n getting modified versions of usual functions ('rle2', 'sumNA'); \n making a pause or a stop ('pause', 'stopif'); \n and others ('erase', '%nin%', 'unwhich'). ","Published":"2017-01-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"BB","Version":"2014.10-1","Title":"Solving and Optimizing Large-Scale Nonlinear Systems","Description":"Barzilai-Borwein spectral methods for solving nonlinear\n systems of equations, and for optimizing nonlinear objective\n functions subject to simple constraints. A tutorial-style\n introduction to this package is available in a vignette on the\n CRAN download page or, when the package is loaded in an R\n session, with vignette(\"BB\").","Published":"2014-11-07","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"bbefkr","Version":"4.2","Title":"Bayesian bandwidth estimation and semi-metric selection for the\nfunctional kernel regression with unknown error density","Description":"Simultaneously estimates optimal bandwidths in a scalar-on-function regression for the regression mean function, approximated by the functional Nadaraya-Watson estimator, and for the error density, approximated by a kernel density of residuals. 
As a by-product of Markov chain Monte Carlo, the optimal choice of semi-metric is selected based on the largest marginal likelihood.","Published":"2014-04-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bbemkr","Version":"2.0","Title":"Bayesian bandwidth estimation for multivariate kernel regression\nwith Gaussian error","Description":"Bayesian bandwidth estimation for Nadaraya-Watson type\n multivariate kernel regression with Gaussian error density.","Published":"2014-04-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BBEST","Version":"0.1-6","Title":"Bayesian Estimation of Incoherent Neutron Scattering Backgrounds","Description":"We implemented a Bayesian-statistics approach for \n subtraction of incoherent scattering from neutron total-scattering data. \n In this approach, the estimated background signal associated with \n incoherent scattering maximizes the posterior probability, which combines \n the likelihood of this signal in reciprocal and real spaces with the prior \n that favors smooth lines.","Published":"2016-03-09","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"BBmisc","Version":"1.11","Title":"Miscellaneous Helper Functions for B. Bischl","Description":"Miscellaneous helper functions for and from B. 
Bischl and\n some other guys, mainly for package development.","Published":"2017-03-10","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"bbmle","Version":"1.0.19","Title":"Tools for General Maximum Likelihood Estimation","Description":"Methods and functions for fitting maximum likelihood models in R.\n This package modifies and extends the 'mle' classes in the 'stats4' package.","Published":"2017-04-18","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"BBMM","Version":"3.0","Title":"Brownian bridge movement model","Description":"The model provides an empirical estimate of a movement\n path using discrete location data obtained at relatively short\n time intervals.","Published":"2013-03-08","License":"GNU General Public License","snapshot_date":"2017-06-23"} {"Package":"BBMV","Version":"1.0","Title":"Models for Continuous Traits Evolving in Macroevolutionary\nLandscapes of any Shape","Description":"Provides a set of functions to fit general macroevolutionary models for continuous traits evolving in adaptive landscapes of any shape. The model is based on bounded Brownian motion (BBM), in which a continuous trait evolves along a phylogenetic tree under constant-rate diffusion between two reflective bounds. In addition to this random component, the trait evolves in a potential and is thus subject to a force that pulls it towards specific values - this force can be of any shape.","Published":"2017-05-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"bbo","Version":"0.2","Title":"Biogeography-Based Optimization","Description":"This package provides an R implementation of\n Biogeography-Based Optimization (BBO), originally invented by\n Prof. Dan Simon, Cleveland State University, Ohio. This method\n is an application of the concept of biogeography, a study of\n the geographical distribution of biological organisms, to\n optimization problems. 
More information about this method can\n be found here: http://academic.csuohio.edu/simond/bbo/.","Published":"2014-09-18","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"BBRecapture","Version":"0.1","Title":"Bayesian Behavioural Capture-Recapture Models","Description":"Model fitting of flexible behavioural recapture models based on conditional probability reparameterization and meaningful partial capture history quantification, also referred to as a meaningful behavioural covariate.","Published":"2013-12-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"bc3net","Version":"1.0.4","Title":"Gene Regulatory Network Inference with Bc3net","Description":"Implementation of the BC3NET algorithm for gene regulatory network inference (de Matos Simoes and Frank Emmert-Streib, Bagging Statistical Network Inference from Large-Scale Gene Expression Data, PLoS ONE 7(3): e33624, ).","Published":"2016-11-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BCA","Version":"0.9-3","Title":"Business and Customer Analytics","Description":"Underlying support functions for RcmdrPlugin.BCA and a\n companion to the book Customer and Business Analytics: Applied\n Data Mining for Business Decision Making Using R by Daniel S.\n Putler and Robert E. Krider.","Published":"2014-09-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BCBCSF","Version":"1.0-1","Title":"Bias-Corrected Bayesian Classification with Selected Features","Description":"Fully Bayesian classification with a subset of high-dimensional features, such as expression levels of genes. The data are modeled with a hierarchical Bayesian model using heavy-tailed t distributions as priors. When a large number of features are available, one may like to select only a subset of features to use, typically those features strongly correlated with the response in training cases. 
Such a feature selection procedure is, however, invalid, since the relationship between the response and the features has been exaggerated by feature selection. This package provides a way to avoid this bias and yield better-calibrated predictions for future cases when one uses the F-statistic to select features. ","Published":"2015-09-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BCC1997","Version":"0.1.1","Title":"Calculation of Option Prices Based on a Universal Solution","Description":"Calculates the prices of European options based on the universal solution provided by Bakshi, Cao and Chen (1997) . This solution considers stochastic volatility, stochastic interest and random jumps. Please cite their work if this package is used. ","Published":"2017-02-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BCDating","Version":"0.9.7","Title":"Business Cycle Dating and Plotting Tools","Description":"Tools for dating business cycles using the Harding-Pagan (Quarterly Bry-Boschan) method and various plotting features.","Published":"2014-12-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"BcDiag","Version":"1.0.10","Title":"Diagnostics Plots for Bicluster Data","Description":"Diagnostic tools based on two-way\n ANOVA and median-polish residual plots for bicluster output\n obtained from the packages \"biclust\" by Kaiser et al. (2008), \"isa2\"\n by Csardi et al. (2010) and \"fabia\" by Hochreiter et al.\n (2010). Moreover, it provides visualization tools for bicluster\n output and the corresponding non-bicluster row or column\n outcomes. 
It also extends the idea of Kaiser et al. (2008)\n of extracting bicluster output in a text format, by\n adding two bicluster methods from the fabia and isa2 R\n packages.","Published":"2015-10-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"BCE","Version":"2.1","Title":"Bayesian composition estimator: estimating sample (taxonomic)\ncomposition from biomarker data","Description":"Function to estimate taxonomic compositions from biomarker data, using a Bayesian approach.","Published":"2014-05-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BCEA","Version":"2.2-5","Title":"Bayesian Cost Effectiveness Analysis","Description":"Produces an economic evaluation of a Bayesian model in the form of MCMC simulations. Given suitable variables of cost and effectiveness / utility for two or more interventions, this package computes the most cost-effective alternative and produces graphical summaries and probabilistic sensitivity analysis.","Published":"2016-11-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BCEE","Version":"1.1","Title":"The Bayesian Causal Effect Estimation Algorithm","Description":"Implementation of the Bayesian Causal Effect Estimation algorithm, \n a data-driven method for the estimation of the causal effect of an exposure \n on a continuous outcome. For more details, see Talbot et al. (2015) DOI:10.1515/jci-2014-0035. ","Published":"2015-11-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BCellMA","Version":"0.3.4","Title":"B Cell Receptor Somatic Hyper Mutation Analysis","Description":"Includes a set of functions to analyze, for instance, nucleotide frequencies as well as transitions and transversions. 
Can reconstruct germline sequences based on the international ImMunoGeneTics information system (IMGT/HighV-QUEST) outputs, calculate and plot the difference (%) of nucleotides at 6 positions around a mutation to identify and characterize hotspot motifs as well as calculate and plot average mutation frequencies of nucleotide mutations resulting in amino acid substitution.","Published":"2017-03-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BCEs0","Version":"1.1-1","Title":"Bayesian Models for Cost-Effectiveness Analysis in the Presence\nof Structural Zero Costs","Description":"Implements a full Bayesian cost-effectiveness analysis in the case where the cost variable is characterised by structural zeros. The package implements the Gamma, log-Normal and Normal models for the cost variable and the Gamma, Beta, Bernoulli and Normal models for the measure of clinical effectiveness. ","Published":"2015-08-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BCgee","Version":"0.1","Title":"Bias-Corrected Estimates for Generalized Linear Models for\nDependent Data","Description":"Provides bias-corrected estimates for the regression coefficients of a marginal model estimated with generalized estimating equations. Details about the bias formula used are in Lunardon, N., Scharfstein, D. 
(2017) .","Published":"2017-06-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Bchron","Version":"4.2.6","Title":"Radiocarbon Dating, Age-Depth Modelling, Relative Sea Level Rate\nEstimation, and Non-Parametric Phase Modelling","Description":"Enables quick calibration of radiocarbon dates under various\n calibration curves (including user generated ones); Age-depth modelling as\n per the algorithm of Haslett and Parnell (2008) ; Relative sea level rate\n estimation incorporating time uncertainty in polynomial regression models; and\n non-parametric phase modelling via Gaussian mixtures as a means to determine\n the activity of a site (and as an alternative to the Oxcal function SUM). The\n package includes a vignette which explains most of the basic functionality.","Published":"2016-12-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Bclim","Version":"3.1.2","Title":"Bayesian Palaeoclimate Reconstruction from Pollen Data","Description":"Takes pollen and chronology data from lake cores and produces\n a Bayesian posterior distribution of palaeoclimate from that location after\n fitting a non-linear non-Gaussian state-space model. For more details see the\n paper Parnell et al. (2015), Bayesian inference for palaeoclimate with\n time uncertainty and stochastic volatility. Journal of the Royal Statistical\n Society: Series C (Applied Statistics), 64: 115–138 .","Published":"2016-12-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bclust","Version":"1.5","Title":"Bayesian Hierarchical Clustering Using Spike and Slab Models","Description":"Builds a dendrogram using the log posterior as a natural distance defined by the model, and meanwhile weights the clustering variables. It is also capable of computing equivalent Bayesian discrimination probabilities. The adopted method suits the small-sample, large-dimension setting. 
The model parameter estimation may be difficult, depending on the data structure and the chosen distribution family.","Published":"2015-09-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bcp","Version":"4.0.0","Title":"Bayesian Analysis of Change Point Problems","Description":"Provides an implementation of the Barry and Hartigan (1993) product partition model for the normal errors change point problem using Markov Chain Monte Carlo. It also extends the methodology to regression models on a connected graph (Wang and Emerson, 2015); this allows estimation of change point models with multivariate responses. Parallel MCMC, previously available in bcp v.3.0.0, is currently not implemented.","Published":"2015-07-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bcpa","Version":"1.1","Title":"Behavioral change point analysis of animal movement","Description":"The Behavioral Change Point Analysis (BCPA) is a method of\n identifying hidden shifts in the underlying parameters of a time series,\n developed specifically to be applied to animal movement data which are\n irregularly sampled. The method is based on: E.\n Gurarie, R. Andrews and K. Laidre, A novel method for identifying\n behavioural changes in animal movement data (2009), Ecology Letters 12:5\n 395-408.","Published":"2014-11-02","License":"Unlimited","snapshot_date":"2017-06-23"} {"Package":"bcpmeta","Version":"1.0","Title":"Bayesian Multiple Changepoint Detection Using Metadata","Description":"A Bayesian approach to detect mean shifts in AR(1) time series while accommodating metadata (if available). In addition, a linear trend component is allowed. 
","Published":"2014-06-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BCRA","Version":"1.0","Title":"Breast Cancer Risk Assessment","Description":"Functions provide risk projections of invasive breast cancer based on the Gail model, according to the National Cancer Institute's Breast Cancer Risk Assessment Tool algorithm, for specified race/ethnic groups and age intervals.","Published":"2015-04-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bcRep","Version":"1.3.6","Title":"Advanced Analysis of B Cell Receptor Repertoire Data","Description":"Methods for advanced analysis of B cell receptor repertoire\n data, like gene usage, mutations, clones, diversity, distance measures and\n multidimensional scaling, and their visualisation.","Published":"2016-12-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"bcrm","Version":"0.4.6","Title":"Bayesian Continual Reassessment Method for Phase I\nDose-Escalation Trials","Description":"Implements a wide variety of one- and two-parameter Bayesian CRM\n designs. The program can run interactively, allowing the user to enter outcomes\n after each cohort has been recruited, or via simulation to assess operating\n characteristics.","Published":"2015-11-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bcROCsurface","Version":"1.0-1","Title":"Bias-Corrected Methods for Estimating the ROC Surface of\nContinuous Diagnostic Tests","Description":"Bias-corrected estimation methods for the receiver operating characteristic\n (ROC) surface and the volume under the ROC surface (VUS) under the missing at random (MAR)\n assumption.","Published":"2016-11-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bcrypt","Version":"0.2","Title":"'Blowfish' Password Hashing Algorithm","Description":"An R interface to the OpenBSD 'blowfish' password hashing algorithm,\n as described in \"A Future-Adaptable Password Scheme\" by Niels Provos. 
The\n implementation is derived from the 'py-bcrypt' module for Python which is a\n wrapper for the OpenBSD implementation.","Published":"2015-06-18","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"bcs","Version":"1.0.0","Title":"Bayesian Compressive Sensing Using Laplace Priors","Description":"A Bayesian method for solving the compressive sensing problem. \n In particular, this package implements the algorithm 'Fast Laplace' found \n in the paper 'Bayesian Compressive Sensing Using Laplace Priors' by Babacan, \n Molina, Katsaggelos (2010) .","Published":"2017-04-04","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"BCSub","Version":"0.5","Title":"A Bayesian Semiparametric Factor Analysis Model for Subtype\nIdentification (Clustering)","Description":"Gene expression profiles are commonly utilized to infer disease\n subtypes and many clustering methods can be adopted for this task.\n However, existing clustering methods may not perform well when\n genes are highly correlated and many uninformative genes are included\n for clustering. To deal with these challenges, we develop a novel\n clustering method in the Bayesian setting. This method, called BCSub,\n adopts an innovative semiparametric Bayesian factor analysis model\n to reduce the dimension of the data to a few factor scores for\n clustering. Specifically, the factor scores are assumed to follow\n the Dirichlet process mixture model in order to induce clustering.","Published":"2017-03-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"bcv","Version":"1.0.1","Title":"Cross-Validation for the SVD (Bi-Cross-Validation)","Description":"\n Methods for choosing the rank of an SVD approximation via cross\n validation. The package provides both Gabriel-style \"block\"\n holdouts and Wold-style \"speckled\" holdouts. It also includes an \n implementation of the SVDImpute algorithm. 
For more information about\n Bi-cross-validation, see Owen & Perry's 2009 AoAS article\n (at http://arxiv.org/abs/0908.2062) and Perry's 2009 PhD thesis\n (at http://arxiv.org/abs/0909.3052).","Published":"2015-05-22","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"bda","Version":"5.1.6","Title":"Density Estimation for Grouped Data","Description":"Functions for density estimation based on grouped (or pre-binned) \n data. ","Published":"2015-07-29","License":"Unlimited","snapshot_date":"2017-06-23"} {"Package":"bde","Version":"1.0.1","Title":"Bounded Density Estimation","Description":"A collection of S4 classes which implements different methods to estimate and deal with densities in bounded domains. That is, densities defined within the interval [lower.limit, upper.limit], where lower.limit and upper.limit are values that can be set by the user.","Published":"2015-02-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"BDgraph","Version":"2.39","Title":"Bayesian Structure Learning in Graphical Models using\nBirth-Death MCMC","Description":"Provides statistical tools for Bayesian structure learning in undirected graphical models for continuous, discrete, and mixed data. The package implements recent improvements in the Bayesian graphical models literature, including Mohammadi and Wit (2015) and Mohammadi et al. (2017) . To speed up the computations, the BDMCMC sampling algorithms are implemented in parallel using OpenMP in C++.","Published":"2017-06-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bdlp","Version":"0.9-1","Title":"Transparent and Reproducible Artificial Data Generation","Description":"The main function generateDataset() processes a user-supplied .R file that \n contains metadata parameters in order to generate actual data. The metadata parameters \n have to be structured in the form of metadata objects, the format of which is \n outlined in the package vignette. 
This approach allows artificial data to be generated \n in a transparent and reproducible manner.","Published":"2017-06-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"bdots","Version":"0.1.13","Title":"Bootstrapped Differences of Time Series","Description":"Analyze differences among time series curves with p-value adjustment for multiple comparisons introduced in Oleson et al. (2015) .","Published":"2017-06-15","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"bdpopt","Version":"1.0-1","Title":"Optimisation of Bayesian Decision Problems","Description":"Optimisation of the expected utility in single-stage and multi-stage Bayesian decision problems. The expected utility is estimated by simulation. For single-stage problems, JAGS is used to draw MCMC samples.","Published":"2016-03-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"bdpv","Version":"1.1","Title":"Inference and design for predictive values in binary diagnostic\ntests","Description":"Computation of asymptotic confidence intervals for negative and positive predictive values in binary diagnostic tests in case-control studies. Experimental design for hypothesis tests on predictive values.","Published":"2014-03-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bdrift","Version":"1.2.2","Title":"Beta Drift Analysis","Description":"Beta drift poses a serious challenge to asset managers \n and financial researchers. Beta drift causes problems in asset \n pricing models and can have serious ramifications for hedging \n attempts. The primary purpose of this package is to provide users \n with a tool that allows them to quantify beta drift and form \n educated opinions about it.\n This package contains the BDA() function that performs a beta \n drift analysis, typically for multi-factor asset pricing models. 
\n The BDA() function tests the underlying model parameters for \n drift across time, drift across model horizon, and applies a \n jackknife procedure to the baseline model. This allows users \n to draw conclusions about the stability of model parameters or \n make inferences about the behavior of funds. For example, the \n drift of parameters for active funds could be interpreted as \n implicit style drift or, in the case of passive funds, management's \n inability to track a benchmark completely.","Published":"2016-04-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"bdscale","Version":"2.0.0","Title":"Remove Weekends and Holidays from ggplot2 Axes","Description":"Provides a continuous date scale, omitting weekends and holidays.","Published":"2016-03-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"bdsmatrix","Version":"1.3-2","Title":"Routines for Block Diagonal Symmetric matrices","Description":"This is a special case of sparse matrices, used by coxme.","Published":"2014-08-22","License":"LGPL-2","snapshot_date":"2017-06-23"} {"Package":"bdvis","Version":"0.2.15","Title":"Biodiversity Data Visualizations","Description":"Provides a set of functions to create basic visualizations to quickly\n preview different aspects of biodiversity information such as inventory \n completeness, extent of coverage (taxonomic, temporal and geographic), gaps\n and biases.","Published":"2017-03-09","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"BDWreg","Version":"1.2.0","Title":"Bayesian Inference for Discrete Weibull Regression","Description":"A Bayesian regression model for discrete response, where the conditional distribution is modelled via a discrete Weibull distribution. This package provides an implementation of Metropolis-Hastings and Reversible-Jump algorithms to draw samples from the posterior. It covers a wide range of regularizations through any two-parameter prior. 
Examples are Laplace (Lasso), Gaussian (ridge), Uniform, Cauchy and customized priors like a mixture of priors. An extensive visual toolbox is included to check the validity of the results as well as several measures of goodness-of-fit.","Published":"2017-02-17","License":"LGPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bdynsys","Version":"1.3","Title":"Bayesian Dynamical System Model","Description":"The package bdynsys for panel/longitudinal data combines methods to model \n changes in up to four indicators over time as a function of the indicators\n themselves and up to three predictors using ordinary differential equations \n (ODEs) with polynomial terms that allow modeling complex and nonlinear \n effects. A Bayesian model selection approach is implemented. The package \n also provides visualisation tools to plot phase portraits of the dynamic \n system, showing the complex co-evolution of two indicators over time with the\n possibility to highlight trajectories for specified entities (e.g. countries, \n individuals). Furthermore, the visualisation tools allow for making \n predictions of the trajectories of specified entities with respect to the \n indicators. ","Published":"2014-12-08","License":"GNU General Public License (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bea.R","Version":"1.0.1","Title":"Bureau of Economic Analysis API","Description":"Provides an R interface for the Bureau of Economic Analysis (BEA) \n\t\tAPI (see for \n\t\tmore information) that serves two core purposes - \n 1. To Extract/Transform/Load data [beaGet()] from the BEA API as R-friendly \n\t\tformats in the user's work space [transformation done by default in beaGet() \n\t\tcan be modified using optional parameters; see, too, bea2List(), bea2Tab()].\n\t\t2. 
To enable the search of descriptive meta data [beaSearch()].\n\t\tOther features of the library exist mainly as intermediate methods \n\t\tor are in early stages of development.\n\t\tImportant Note - You must have an API key to use this library. \n\t\tRegister for a key at .","Published":"2017-01-26","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"BEACH","Version":"1.1.2","Title":"Biometric Exploratory Analysis Creation House","Description":"A platform is provided for interactive analyses, with the goal of being totally easy to develop, deploy, interact with, and explore (TEDDIE). Using this package, users can create customized analyses and make them available to end users, who can perform interactive analyses and save analyses to RTF or HTML files. It allows developers to focus on R code for analysis, instead of dealing with HTML or shiny code.","Published":"2016-10-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"beadarrayFilter","Version":"1.1.0","Title":"Bead filtering for Illumina bead arrays","Description":"This package contains functions to fit the filtering model\n of Forcheh et al. (2012), which is used to derive the\n intra-cluster correlation (ICC). Model fitting is done using\n the modified version of the ``MLM.beadarray\" function of Kim\n and Lin (2011).","Published":"2013-02-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"beadarrayMSV","Version":"1.1.0","Title":"Analysis of Illumina BeadArray SNP data including MSV markers","Description":"Imports bead-summary data from the Illumina scanner.\n Pre-processes using a suite of optional normalizations and\n transformations. Clusters and automatically calls genotypes,\n critically able to handle markers in duplicated regions of the\n genome (multisite variants; MSVs). Interactive clustering if\n needed. MSVs with variation in both paralogs may be resolved\n and mapped to their respective chromosomes. 
Quality control\n including pedigree checking and visual assessment of clusters.\n Data sets that are too large are handled by working on smaller subsets\n of the data in sequence.","Published":"2011-02-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"beanplot","Version":"1.2","Title":"Visualization via Beanplots (like Boxplot/Stripchart/Violin\nPlot)","Description":"Plots univariate comparison graphs, an alternative to\n boxplot/stripchart/violin plot.","Published":"2014-09-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"beanz","Version":"2.1","Title":"Bayesian Analysis of Heterogeneous Treatment Effect","Description":"It is vital to assess the heterogeneity of treatment effects\n (HTE) when making health care decisions for an individual patient or a group\n of patients. Nevertheless, it remains challenging to evaluate HTE based\n on information collected from clinical studies that are often designed and\n conducted to evaluate the efficacy of a treatment for the overall population.\n The Bayesian framework offers a principled and flexible approach to estimate\n and compare treatment effects across subgroups of patients defined by their\n characteristics. This package allows users to explore a wide range of Bayesian\n HTE analysis models, and produce posterior inferences about HTE.","Published":"2017-05-07","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"BEDASSLE","Version":"1.5","Title":"Quantifies effects of geo/eco distance on genetic\ndifferentiation","Description":"Provides functions that allow users to quantify the relative \n\tcontributions of geographic and ecological distances to empirical patterns of genetic \n\tdifferentiation on a landscape. 
Specifically, a custom Markov chain \n\tMonte Carlo (MCMC) algorithm is used to estimate the parameters of the \n\tinference model; functions for performing MCMC diagnosis and assessing \n\tmodel adequacy are also provided.","Published":"2014-12-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BEDMatrix","Version":"1.4.0","Title":"Extract Genotypes from a PLINK .bed File","Description":"A matrix-like data structure that allows for efficient,\n convenient, and scalable subsetting of binary genotype/phenotype files\n generated by PLINK (), the whole\n genome association analysis toolset, without loading the entire file into\n memory.","Published":"2017-05-07","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"bedr","Version":"1.0.3","Title":"Genomic Region Processing using Tools Such as BEDtools, BEDOPS\nand Tabix","Description":"Genomic regions processing using open-source command line tools such as BEDtools, BEDOPS and Tabix. \n These tools offer scalable and efficient utilities to perform genome arithmetic, e.g., indexing, formatting and merging.\n The bedr API enhances access to these tools and offers additional utilities for genomic regions processing.","Published":"2016-08-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"beepr","Version":"1.2","Title":"Easily Play Notification Sounds on any Platform","Description":"The sole function of this package is beep(), with the purpose of\n making it easy to play notification sounds on whatever platform you are on.\n It is intended to be useful, for example, if you are running a long analysis\n in the background and want to know when it is ready.","Published":"2015-06-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"beeswarm","Version":"0.2.3","Title":"The Bee Swarm Plot, an Alternative to Stripchart","Description":"The bee swarm plot is a one-dimensional scatter plot like \"stripchart\", but with closely-packed, non-overlapping points. 
","Published":"2016-04-25","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"beginr","Version":"0.0.1","Title":"Functions for R Beginners","Description":"Useful functions for R beginners, including hints for the arguments of the 'plot()' function, self-defined functions for error bars, user-customized pair plots and hist plots, enhanced linear regression figures, etc. This package could be helpful to R experts as well.","Published":"2017-06-23","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"belex","Version":"0.1.0","Title":"Download Historical Data from the Belgrade Stock Exchange","Description":"Tools for downloading historical financial data from www.belex.rs.","Published":"2016-08-04","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"benchden","Version":"1.0.5","Title":"28 benchmark densities from Berlinet/Devroye (1994)","Description":"Full implementation of the 28 distributions introduced as\n benchmarks for nonparametric density estimation by Berlinet and\n Devroye (1994). Includes densities, cdfs, quantile functions\n and generators for samples as well as additional information on\n features of the densities. Also contains the 4 histogram\n densities used in Rozenholc/Mildenberger/Gather (2010).","Published":"2012-02-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"benchmark","Version":"0.3-6","Title":"Benchmark Experiments Toolbox","Description":"The benchmark package provides a toolbox for setup, execution\n and analysis of benchmark experiments. 
The main focus is the analysis of\n data accumulating during execution -- one primary objective is the\n statistically correct computation of the candidate algorithms' order.","Published":"2014-05-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Benchmarking","Version":"0.26","Title":"Benchmark and Frontier Analysis Using DEA and SFA","Description":"Methods for frontier\n\tanalysis, Data Envelopment Analysis (DEA), under different\n\ttechnology assumptions (fdh, vrs, drs, crs, irs, add/frh, and fdh+),\n\tand using different efficiency measures (input based, output based,\n\thyperbolic graph, additive, super, and directional efficiency). Peers\n\tand slacks are available, partial price information can be included,\n\tand optimal cost, revenue and profit can be calculated. Evaluation of\n\tmergers is also supported. Methods for graphing the technology sets\n\tare also included. There is also support for comparative methods based\n\ton Stochastic Frontier Analysis (SFA). In general, the methods can be\n\tused to solve not only standard models, but also many other model\n\tvariants. It complements the book, Bogetoft and Otto,\n\tBenchmarking with DEA, SFA, and R, Springer-Verlag, 2011, but can of\n\tcourse also be used as a stand-alone package.","Published":"2015-07-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"benchmarkme","Version":"0.4.0","Title":"Crowd Sourced System Benchmarks","Description":"Benchmark your CPU and compare against other CPUs. 
Also provides \n functions for obtaining system specifications, such as\n RAM, CPU type, and R version.","Published":"2017-01-05","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"benchmarkmeData","Version":"0.4.0","Title":"Data Set for the 'benchmarkme' Package","Description":"Crowd sourced benchmarks from running the 'benchmarkme' package.","Published":"2017-01-04","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"benchr","Version":"0.2.0","Title":"Highly Precise Measurement of R Expression Execution Time","Description":"Provides infrastructure to accurately measure and compare\n the execution time of R expressions.","Published":"2017-04-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"benford.analysis","Version":"0.1.4.1","Title":"Benford Analysis for Data Validation and Forensic Analytics","Description":"Provides tools that make it easier to validate data using Benford's Law.","Published":"2017-03-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"BenfordTests","Version":"1.2.0","Title":"Statistical Tests for Evaluating Conformity to Benford's Law","Description":"Several specialized statistical tests and support functions \n\t\t\tfor determining if numerical data could conform to Benford's law.","Published":"2015-08-04","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"bentcableAR","Version":"0.3.0","Title":"Bent-Cable Regression for Independent Data or Autoregressive\nTime Series","Description":"\n\tIncluded are two main interfaces for fitting and diagnosing\n\tbent-cable regressions for autoregressive time-series data or\n\tindependent data (time series or otherwise): 'bentcable.ar()' and\n\t'bentcable.dev.plot()'. Some components in the package can also be\n\tused as stand-alone functions. The bent cable\n\t(linear-quadratic-linear) generalizes the broken stick\n\t(linear-linear), which is also handled by this package. 
Version 0.2\n\tcorrects a glitch in the computation of confidence intervals for the\n\tCTP. References that were updated from Versions 0.2.1 and 0.2.2 appear\n\tin Version 0.2.3 and up. Version 0.3.0 improves robustness of the\n\terror-message producing mechanism. It is the author's intention to\n\tdistribute any future updates via GitHub.","Published":"2015-04-10","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"BEQI2","Version":"2.0-0","Title":"Benthic Ecosystem Quality Index 2","Description":"Tool for analysing benthos data. It estimates several quality \n indices like the total abundance of species, species richness, \n Margalef's d, AZTI Marine Biotic Index (AMBI), and the BEQI-2 index. \n Furthermore, additional (optional) features are provided that enhance data \n preprocessing: (1) genus to species conversion, i.e., taxa counts at the \n taxonomic genus level can optionally be converted to the species level and\n (2) pooling: small samples are combined to bigger samples with a \n standardized size to (a) meet the data requirements of the AMBI, \n (b) generate comparable species richness values and \n (c) give a higher benthos signal-to-noise ratio.","Published":"2015-01-07","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"ber","Version":"4.0","Title":"Batch Effects Removal","Description":"The functions in this package remove batch effects from\n microarray normalized data. The expression levels of the genes\n are represented in a matrix where rows correspond to\n independent samples and columns to genes (variables). The\n batches are represented by categorical variables (objects of\n class factor). 
When further covariates of interest are\n available, they can be used to efficiently remove the batch\n effects and adjust the data.","Published":"2013-03-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Bergm","Version":"4.0.0","Title":"Bayesian Exponential Random Graph Models","Description":"Set of tools to analyse Bayesian exponential random graph models.","Published":"2017-03-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"berryFunctions","Version":"1.15.0","Title":"Function Collection Related to Plotting and Hydrology","Description":"Draw horizontal histograms, color scattered points by 3rd dimension,\n enhance date- and log-axis plots, zoom in X11 graphics, trace errors and warnings, \n use the unit hydrograph in a linear storage cascade, convert lists to data.frames and arrays, \n fit multiple functions.","Published":"2017-04-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BeSS","Version":"1.0.1","Title":"Best Subset Selection for Sparse Generalized Linear Model and\nCox Model","Description":"An implementation of best subset selection in generalized linear model and Cox proportional hazard model via the primal dual active set algorithm. 
The algorithm formulates coefficient parameters and residuals as primal and dual variables and utilizes efficient active set selection strategies based on the complementarity of the primal and dual variables.","Published":"2017-05-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"Bessel","Version":"0.5-5","Title":"Bessel -- Bessel Functions Computations and Approximations","Description":"Bessel Function Computations for complex and real numbers;\n notably interfacing TOMS 644; approximations for large arguments,\n experiments, etc.","Published":"2013-12-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BEST","Version":"0.5.0","Title":"Bayesian Estimation Supersedes the t-Test","Description":"An alternative to t-tests, producing posterior estimates\n for group means and standard deviations and their differences and\n effect sizes.","Published":"2017-05-28","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"bestglm","Version":"0.36","Title":"Best Subset GLM","Description":"Best subset glm using information criteria or cross-validation.","Published":"2017-02-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BetaBit","Version":"1.3","Title":"Mini Games from Adventures of Beta and Bit","Description":"Three games: proton, frequon and regression. Each one is a console-based data-crunching game for younger and older data scientists.\n Act as a data-hacker and find Slawomir Pietraszko's credentials to the Proton server.\n In proton you have to solve four data-based puzzles to find the login and password.\n There are many ways to solve these puzzles. You may use loops, data filtering, ordering, aggregation or other tools.\n Only basic knowledge of R is required to play the game, yet the more functions you know, the more approaches you can try.\n In frequon you will help to perform a statistical cryptanalytic attack on a corpus of ciphered messages.\n This time seven sub-tasks are pushing the bar much higher. 
Do you accept the challenge?\n In regression you will test your modeling skills in a series of eight sub-tasks.\n Try only if ANOVA is your close friend.\n It's a part of the Beta and Bit project.\n You will find more about the Beta and Bit project at .","Published":"2016-07-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"betacal","Version":"0.1.0","Title":"Beta Calibration","Description":"Fit beta calibration models and obtain calibrated probabilities from\n them.","Published":"2017-02-25","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"betafam","Version":"1.0","Title":"Detecting rare variants for quantitative traits using nuclear\nfamilies","Description":"To detect rare variants for quantitative traits using\n nuclear families, linear combination methods are proposed\n using the estimated regression coefficients from multiple\n regression and regularized regression as the weights.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"betalink","Version":"2.2.1","Title":"Beta-Diversity of Species Interactions","Description":"Measures of beta-diversity in networks, and easy visualization of why two networks are different.","Published":"2016-03-26","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"betapart","Version":"1.4-1","Title":"Partitioning Beta Diversity into Turnover and Nestedness\nComponents","Description":"Functions to compute pair-wise dissimilarities (distance matrices) and multiple-site dissimilarities, separating the turnover and nestedness-resultant components of taxonomic (incidence and abundance based), functional and phylogenetic beta diversity.","Published":"2017-01-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"betaper","Version":"1.1-0","Title":"Functions to incorporate taxonomic uncertainty on multivariate\nanalyses of ecological data","Description":"Permutational method to incorporate taxonomic 
uncertainty\n and some functions to assess its effects on parameters of some\n widely used multivariate methods in ecology.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"betareg","Version":"3.1-0","Title":"Beta Regression","Description":"Beta regression for modeling beta-distributed dependent variables, e.g., rates and proportions.\n In addition to maximum likelihood regression (for both mean and precision of a beta-distributed\n response), bias-corrected and bias-reduced estimation as well as finite mixture models and\n recursive partitioning for beta regressions are provided.","Published":"2016-08-06","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"betas","Version":"0.1.1","Title":"Standardized Beta Coefficients","Description":"Computes standardized beta coefficients and corresponding\n standard errors for the following models:\n linear regression models with numerical covariates only,\n linear regression models with numerical and factorial covariates,\n weighted linear regression models,\n all these linear regression models with interaction terms, and\n robust linear regression models with numerical covariates only.","Published":"2015-03-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"betategarch","Version":"3.3","Title":"Simulation, Estimation and Forecasting of Beta-Skew-t-EGARCH\nModels","Description":"Simulation, estimation and forecasting of first-order Beta-Skew-t-EGARCH models with leverage (one-component, two-component, skewed versions).","Published":"2016-10-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"bethel","Version":"0.2","Title":"Bethel's algorithm","Description":"Computes the sample size according to Bethel's procedure.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BETS","Version":"0.2.1","Title":"Brazilian Economic Time Series","Description":"It provides access to and information about the most important\n 
Brazilian economic time series - from the Getulio Vargas Foundation, the Central\n Bank of Brazil and the Brazilian Institute of Geography and Statistics. It also\n presents tools for managing, analysing (e.g. generating dynamic reports with a\n complete analysis of a series) and exporting these time series.","Published":"2017-04-30","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"BeviMed","Version":"5.0","Title":"Bayesian Evaluation of Variant Involvement in Mendelian Disease","Description":"A fast integrative genetic association test for rare diseases based on a model for disease status given allele counts at rare variant sites. Probability of association, mode of inheritance and probability of pathogenicity for individual variants are all inferred in a Bayesian framework.","Published":"2017-04-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"beyondWhittle","Version":"0.18.1","Title":"Bayesian Spectral Inference for Stationary Time Series","Description":"Implementations of a Bayesian parametric (autoregressive), a Bayesian nonparametric (Whittle likelihood with Bernstein-Dirichlet prior) and a Bayesian semiparametric (autoregressive likelihood with Bernstein-Dirichlet correction) procedure are provided. The work is based on the corrected parametric likelihood by C. Kirch et al (2017) . It was supported by DFG grant KI 1443/3-1.","Published":"2017-04-07","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"bezier","Version":"1.1","Title":"Bezier Curve and Spline Toolkit","Description":"The bezier package is a toolkit for working with Bezier curves and splines. 
The package provides functions for point generation, arc length estimation, degree elevation and curve fitting.","Published":"2014-07-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"bfa","Version":"0.4","Title":"Bayesian Factor Analysis","Description":"Provides model fitting for\n several Bayesian factor models including Gaussian,\n ordinal probit, mixed and semiparametric Gaussian\n copula factor models under a range of priors.","Published":"2016-09-29","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"bfast","Version":"1.5.7","Title":"Breaks For Additive Season and Trend (BFAST)","Description":"BFAST integrates the decomposition of time series into trend,\n seasonal, and remainder components with methods for detecting\n\t and characterizing abrupt changes within the trend and seasonal\n\t components. BFAST can be used to analyze different types of\n\t satellite image time series and can be applied to other disciplines\n\t dealing with seasonal or non-seasonal time series, such as hydrology,\n\t climatology, and econometrics. The algorithm can be extended to\n\t label detected changes with information on the parameters of the\n\t fitted piecewise linear models. 
BFAST monitoring functionality is added\n\t based on a paper that has been submitted to Remote Sensing of Environment.\n\t BFAST monitor provides functionality to detect disturbance in near real-time based on BFAST-type models.\n The BFAST approach is a flexible approach that handles missing data without interpolation.\n Furthermore, different models can now be used to fit the time series data and detect structural changes (breaks).","Published":"2014-08-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bfork","Version":"0.1.2","Title":"Basic Unix Process Control","Description":"Wrappers for fork()/waitpid() meant to allow R users to quickly\n and easily fork child processes and wait for them to finish.","Published":"2016-01-04","License":"MPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bfp","Version":"0.0-35","Title":"Bayesian Fractional Polynomials","Description":"Implements the Bayesian paradigm for fractional\n polynomial models under the assumption of normally distributed error terms.","Published":"2017-04-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BGData","Version":"1.0.0","Title":"A Suite of Packages for Analysis of Big Genomic Data","Description":"An umbrella package providing a phenotype/genotype data structure\n and scalable and efficient computational methods for large genomic datasets\n in combination with several other packages: 'BEDMatrix', 'LinkedMatrix',\n and 'symDMatrix'.","Published":"2017-05-11","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"bgeva","Version":"0.3-1","Title":"Binary Generalized Extreme Value Additive Models","Description":"Routine for fitting regression models for binary rare events with linear and nonlinear covariate effects when using the quantile function of the Generalized Extreme Value random variable.","Published":"2017-05-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bglm","Version":"1.0","Title":"Bayesian Estimation in Generalized 
Linear Models","Description":"Implementation of Bayesian estimation in generalized linear models following Gamerman (1997).","Published":"2014-11-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"BGLR","Version":"1.0.5","Title":"Bayesian Generalized Linear Regression","Description":"Bayesian Generalized Linear Regression.","Published":"2016-08-19","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"bgmfiles","Version":"0.0.6","Title":"Example BGM Files for the Atlantis Ecosystem Model","Description":"A collection of box-geometry model (BGM) files for the Atlantis \n ecosystem model. Atlantis is a deterministic, biogeochemical, \n whole-of-ecosystem model (see for more information).","Published":"2016-08-10","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"bgmm","Version":"1.8.3","Title":"Gaussian Mixture Modeling Algorithms and the Belief-Based\nMixture Modeling","Description":"Two partially supervised mixture modeling methods: \n soft-label and belief-based modeling are implemented.\n For completeness, we also equipped the package with the\n functionality of unsupervised, semi- and fully supervised\n mixture modeling. 
The package can also be applied to selection\n of the best-fitting model from a set of models with different\n component numbers or constraints on their structures.\n For a detailed introduction see:\n Przemyslaw Biecek, Ewa Szczurek, Martin Vingron, Jerzy\n Tiuryn (2012), The R Package bgmm: Mixture Modeling with\n Uncertain Knowledge, Journal of Statistical Software \n .","Published":"2017-02-27","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"BGPhazard","Version":"1.2.3","Title":"Markov Beta and Gamma Processes for Modeling Hazard Rates","Description":"Computes the hazard rate estimate as described by Nieto-Barajas and Walker (2002) and Nieto-Barajas (2003).","Published":"2016-02-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BGSIMD","Version":"1.0","Title":"Block Gibbs Sampler with Incomplete Multinomial Distribution","Description":"Implements an efficient block Gibbs sampler with incomplete\n data from a multinomial distribution taking values from the k\n categories 1,2,...,k, where data are assumed to be missing at random\n and each missing datum belongs to one and only one of m\n distinct non-empty proper subsets A1, A2,..., Am of 1,2,...,k\n and the k categories are labelled such that only consecutive\n A's may overlap.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bgsmtr","Version":"0.1","Title":"Bayesian Group Sparse Multi-Task Regression","Description":"Fits a Bayesian group-sparse multi-task regression model using Gibbs\n sampling. The hierarchical prior encourages shrinkage of the estimated regression\n coefficients at both the gene and SNP level. 
The model has been applied\n successfully to imaging phenotypes of dimension up to 100; it can be used more\n generally for multivariate (non-imaging) phenotypes.","Published":"2016-10-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"BH","Version":"1.62.0-1","Title":"Boost C++ Header Files","Description":"Boost provides free peer-reviewed portable C++ source \n libraries. A large part of Boost is provided as C++ template code\n which is resolved entirely at compile-time without linking. This \n package aims to provide the most useful subset of Boost libraries \n for template use among CRAN packages. By placing these libraries in \n this package, we offer a more efficient distribution system for CRAN \n as replication of this code in the sources of other packages is \n avoided. As of release 1.62.0-1, the following Boost libraries are\n included: 'algorithm' 'any' 'atomic' 'bimap' 'bind' 'circular_buffer'\n 'concept' 'config' 'container' 'date_time' 'detail' 'dynamic_bitset'\n 'exception' 'filesystem' 'flyweight' 'foreach' 'functional' 'fusion'\n 'geometry' 'graph' 'heap' 'icl' 'integer' 'interprocess' 'intrusive' 'io'\n 'iostreams' 'iterator' 'math' 'move' 'mpl' 'multiprecision' 'numeric'\n 'pending' 'phoenix' 'preprocessor' 'property_tree' 'random' 'range'\n 'scope_exit' 'smart_ptr' 'spirit' 'tuple' 'type_traits' 'typeof' 'unordered'\n 'utility' 'uuid'.","Published":"2016-11-19","License":"BSL-1.0","snapshot_date":"2017-06-23"} {"Package":"Bhat","Version":"0.9-10","Title":"General likelihood exploration","Description":"Functions for MLE, MCMC, CIs (originally in Fortran).","Published":"2013-01-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BHH2","Version":"2016.05.31","Title":"Useful Functions for Box, Hunter and Hunter II","Description":"Functions and data sets reproducing some examples in\n Box, Hunter and Hunter II. Useful for statistical design\n of experiments, especially factorial experiments. 
","Published":"2016-05-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bhm","Version":"1.11","Title":"Biomarker Threshold Models","Description":"Biomarker threshold models are tools to fit both predictive and prognostic biomarker effects. ","Published":"2017-05-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BHMSMAfMRI","Version":"1.1","Title":"Bayesian Hierarchical Multi-Subject Multiscale Analysis of\nFunctional MRI Data","Description":"Performs Bayesian hierarchical multi-subject multiscale analysis of fMRI data as described in Sanyal & Ferreira (2012) using a wavelet-based prior that borrows strength across subjects and returns posterior smoothed versions of the fMRI data and samples from the posterior distribution.","Published":"2017-05-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BHPMF","Version":"1.0","Title":"Uncertainty Quantified Matrix Completion using Bayesian\nHierarchical Matrix Factorization","Description":"Fills the gaps of a matrix incorporating hierarchical side\n information while providing uncertainty quantification.","Published":"2017-06-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"biasbetareg","Version":"1.0","Title":"Bias correction of the parameter estimates of the beta\nregression model","Description":"Second-order bias correction of the maximum likelihood\n estimators of the parameters of the beta regression model.","Published":"2012-10-01","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"BiasedUrn","Version":"1.07","Title":"Biased Urn Model Distributions","Description":"Statistical models of biased sampling in the form of \n univariate and multivariate noncentral hypergeometric distributions, \n including Wallenius' noncentral hypergeometric distribution and\n Fisher's noncentral hypergeometric distribution \n (also called extended hypergeometric distribution). 
\n See vignette(\"UrnTheory\") for an explanation of these distributions.","Published":"2015-12-28","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"bib2df","Version":"0.2","Title":"Parse a BibTeX File to a Data.frame","Description":"Parse a BibTeX file to a data.frame to make it accessible for further analysis and visualization.","Published":"2017-05-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"BiBitR","Version":"0.2.2","Title":"R Wrapper for Java Implementation of BiBit","Description":"A simple R wrapper for the Java BiBit algorithm from \"A\n biclustering algorithm for extracting bit-patterns from binary datasets\"\n by Domingo et al. (2011) . A simple adaptation of the BiBit algorithm which allows noise in the biclusters is also introduced. Further, a workflow to guide the algorithm towards given patterns is included as well. ","Published":"2017-02-11","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"bibliometrix","Version":"1.6","Title":"Bibliometric and Co-Citation Analysis Tool","Description":"Tool for quantitative research in scientometrics and bibliometrics.\n It provides various routines for importing bibliographic data from SCOPUS () and \n Thomson Reuters' ISI Web of Knowledge () databases, performing bibliometric analysis \n and building data matrices for co-citation, coupling, scientific collaboration and co-word analysis.","Published":"2017-05-09","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"bibliospec","Version":"0.0.4","Title":"Reading Mass Spectrometric Search Results","Description":"R class to access 'sqlite', 'BiblioSpec' generated, mass spectrometry search result files,\n containing detailed information about peptide spectra matches.\n Convert 'Mascot' '.dat' or e.g. 
'comet' '.pep.xml' files with 'BiblioSpec' into 'sqlite' files and then \n access them with the 'CRAN' 'bibliospec' package for analysis with the R packages 'specL' to generate\n spectra libraries, 'protViz' to annotate spectra, or 'prozor' for false discovery rate \n estimation and protein inference.","Published":"2016-07-01","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"bibtex","Version":"0.4.0","Title":"BibTeX Parser","Description":"Utility to parse a BibTeX file.","Published":"2014-12-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"biclique","Version":"1.0.1","Title":"Maximal Biclique Enumeration in Bipartite Graphs","Description":"A tool for enumerating maximal complete bipartite graphs. The input should be an edge list file or a binary matrix file. \n The output is the set of maximal complete bipartite graphs. The algorithms used can be found in the paper by Y. Zhang et al., BMC Bioinformatics 2014, 15:110 .","Published":"2017-05-07","License":"GPL-2 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"biclust","Version":"1.2.0","Title":"BiCluster Algorithms","Description":"The main function biclust provides several algorithms to\n find biclusters in two-dimensional data: Cheng and Church,\n Spectral, Plaid Model, Xmotifs and Bimax. 
In addition, the\n package provides methods for data preprocessing (normalization\n and discretisation), visualisation, and validation of bicluster\n solutions.","Published":"2015-05-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"BiDimRegression","Version":"1.0.6","Title":"Calculates the bidimensional regression between two 2D\nconfigurations","Description":"An S3 class with a method for calculating the bidimensional regression between two 2D configurations following the approach by Tobler (1965).","Published":"2014-03-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BIEN","Version":"1.1.0","Title":"Tools for Accessing the Botanical Information and Ecology\nNetwork Database","Description":"Provides tools for accessing the Botanical Information and Ecology Network Database. The BIEN database contains cleaned and standardized botanical data including occurrence, trait, plot and taxonomic data (see for more information). This package provides functions that query the BIEN database by constructing and executing optimized SQL queries.","Published":"2017-03-08","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"bifactorial","Version":"1.4.7","Title":"Inferences for bi- and trifactorial trial designs","Description":"Makes global and multiple inferences for\n given bi- and trifactorial clinical trial designs using\n bootstrap methods and a classical approach.","Published":"2013-03-04","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"bife","Version":"0.4","Title":"Binary Choice Models with Fixed Effects","Description":"Estimates fixed effects binary choice models (logit and probit) with potentially many individual fixed effects and computes average partial effects.
Incidental parameter bias can be reduced with a bias-correction proposed by Hahn and Newey (2004) .","Published":"2017-05-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BIFIEsurvey","Version":"2.1-6","Title":"Tools for Survey Statistics in Educational Assessment","Description":"\n Contains tools for survey statistics (especially in educational\n assessment) for datasets with replication designs (jackknife, \n bootstrap, replicate weights). Descriptive statistics, linear\n and logistic regression, path models for manifest variables\n with measurement error correction and two-level hierarchical\n regressions for weighted samples are included. Statistical \n inference can be conducted for multiply imputed datasets and\n nested multiply imputed datasets. \n This package is developed by BIFIE (Federal Institute for \n Educational Research, Innovation and Development of the Austrian \n School System; Salzburg, Austria).","Published":"2017-05-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bigalgebra","Version":"0.8.4","Title":"BLAS routines for native R matrices and big.matrix objects","Description":"This package provides arithmetic functions for R matrix and big.matrix objects.","Published":"2014-04-16","License":"LGPL-3 | Apache License 2.0","snapshot_date":"2017-06-23"} {"Package":"biganalytics","Version":"1.1.14","Title":"Utilities for 'big.matrix' Objects from Package 'bigmemory'","Description":"Extend the 'bigmemory' package with various analytics.\n Functions 'bigkmeans' and 'binit' may also be used with native R objects.\n For 'tapply'-like functions, the bigtabulate package may also be helpful.\n For linear algebra support, see 'bigalgebra'. 
For mutex (locking) support\n for advanced shared-memory usage, see 'synchronicity'.","Published":"2016-02-18","License":"LGPL-3 | Apache License 2.0","snapshot_date":"2017-06-23"} {"Package":"BIGDAWG","Version":"1.5.5","Title":"Case-Control Analysis of Multi-Allelic Loci","Description":"Data sets and functions for chi-squared Hardy-Weinberg and case-control\n association tests of highly polymorphic genetic data [e.g., human leukocyte antigen\n (HLA) data]. Performs association tests at multiple levels of polymorphism\n (haplotype, locus and HLA amino-acids) as described in Pappas DJ, Marin W, Hollenbach\n JA, Mack SJ (2016) . Combines rare variants to a \n common class to account for sparse cells in tables as described by Hollenbach JA, \n Mack SJ, Thomson G, Gourraud PA (2012) .","Published":"2016-08-31","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"bigFastlm","Version":"0.0.2","Title":"Fast Linear Models for Objects from the 'bigmemory' Package","Description":"A reimplementation of the fastLm() functionality of 'RcppEigen' for\n big.matrix objects for fast out-of-memory linear model fitting.","Published":"2017-03-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bigGP","Version":"0.1-6","Title":"Distributed Gaussian Process Calculations","Description":"Distributes Gaussian process calculations across nodes\n in a distributed memory setting, using Rmpi. The bigGP class \n provides high-level methods for maximum likelihood with normal data, \n prediction, calculation of uncertainty (i.e., posterior covariance \n calculations), and simulation of realizations. 
In addition, bigGP \n provides an API for basic matrix calculations with distributed \n covariance matrices, including Cholesky decomposition, back/forwardsolve, \n crossproduct, and matrix multiplication.","Published":"2015-07-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bigKRLS","Version":"1.5.3","Title":"Optimized Kernel Regularized Least Squares","Description":"Functions for Kernel-Regularized Least Squares optimized for speed and memory usage are provided along with visualization tools. \n For working papers, sample code, and recent presentations visit .","Published":"2017-06-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"biglars","Version":"1.0.2","Title":"Scalable Least-Angle Regression and Lasso","Description":"Least-angle regression, lasso and stepwise regression for\n numeric datasets in which the number of observations is greater\n than the number of predictors. The functions can be used with\n the ff library to accommodate datasets that are too large to be\n held in memory.","Published":"2011-12-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"biglasso","Version":"1.3-6","Title":"Extending Lasso Model Fitting to Big Data","Description":"Extend lasso and elastic-net model fitting for ultrahigh-dimensional, \n multi-gigabyte data sets that cannot be loaded into memory.
It is much more \n memory- and computation-efficient than existing lasso-fitting packages \n like 'glmnet' and 'ncvreg', thus allowing for very powerful big data analysis \n even with an ordinary laptop.","Published":"2017-05-04","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"biglm","Version":"0.9-1","Title":"bounded memory linear and generalized linear models","Description":"Regression for data too large to fit in memory.","Published":"2013-05-16","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"bigmemory","Version":"4.5.19","Title":"Manage Massive Matrices with Shared Memory and Memory-Mapped\nFiles","Description":"Create, store, access, and manipulate massive matrices.\n Matrices are allocated to shared memory and may use memory-mapped\n files. Packages 'biganalytics', 'bigtabulate', 'synchronicity', and\n 'bigalgebra' provide advanced functionality.","Published":"2016-03-28","License":"LGPL-3 | Apache License 2.0","snapshot_date":"2017-06-23"} {"Package":"bigmemory.sri","Version":"0.1.3","Title":"A shared resource interface for Bigmemory Project packages","Description":"Provides a shared resource interface for the bigmemory and synchronicity packages.","Published":"2014-08-18","License":"LGPL-3 | Apache License 2.0","snapshot_date":"2017-06-23"} {"Package":"bigml","Version":"0.1.2","Title":"Bindings for the BigML API","Description":"The 'bigml' package contains bindings for the BigML API.\n The package includes methods that provide straightforward access\n to basic API functionality, as well as methods that accommodate\n idiomatic R data types and concepts.","Published":"2015-05-20","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"bigpca","Version":"1.0.3","Title":"PCA, Transpose and Multicore Functionality for 'big.matrix'\nObjects","Description":"Adds wrappers to add functionality for big.matrix objects (see the bigmemory project).\n This allows fast scalable principal component analysis (PCA), or
singular value decomposition (SVD).\n There are also functions for transposing, using multicore 'apply' functionality, data importing \n and for compact display of big.matrix objects. Most functions also work for standard matrices if \n RAM is sufficient.","Published":"2015-11-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bigQueryR","Version":"0.3.1","Title":"Interface with Google BigQuery with Shiny Compatibility","Description":"Interface with 'Google BigQuery',\n see for more information.\n This package uses 'googleAuthR' so is compatible with similar packages, \n including 'Google Cloud Storage' () for result extracts. ","Published":"2017-05-20","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"BigQuic","Version":"1.1-7","Title":"Big Quadratic Inverse Covariance Estimation","Description":"Use Newton's method, coordinate descent, and METIS clustering\n to solve the L1 regularized Gaussian MLE inverse covariance\n matrix estimation problem.","Published":"2017-02-02","License":"GPL (>= 3) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"bigReg","Version":"0.1.2","Title":"Generalized Linear Models (GLM) for Large Data Sets","Description":"Allows the user to carry out GLM on very large\n data sets. Data can be created using the data_frame() function and appended\n to the object with object$append(data); data_frame and data_matrix objects\n are available that allow the user to store large data on disk. The data is\n stored as doubles in binary format and any character columns are transformed\n to factors and then stored as numeric (binary) data while a look-up table is\n stored in a separate .meta_data file in the same folder. The data is stored in\n blocks and GLM regression algorithm is modified and carries out a MapReduce-\n like algorithm to fit the model. 
The functions bglm(), summary(),\n and bglm_predict() are available for creating and post-processing models.\n The library requires Armadillo to be installed on your system. It probably won't \n function on Windows since multi-core processing is done using mclapply() \n which forks R on Unix/Linux type operating systems.","Published":"2016-07-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bigrquery","Version":"0.4.0","Title":"An Interface to Google's 'BigQuery' 'API'","Description":"Easily talk to Google's 'BigQuery' database from R.","Published":"2017-06-23","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"bigRR","Version":"1.3-10","Title":"Generalized Ridge Regression (with special advantage for p >> n\ncases)","Description":"The package fits large-scale (generalized) ridge regression for various distributions of response. The shrinkage parameters (lambdas) can be pre-specified or estimated using an internal update routine (fitting a heteroscedastic effects model, or HEM). It gives the possibility to shrink any subset of parameters in the model. It has a special computational advantage for cases where the number of shrinkage parameters exceeds the number of observations.
For example, the package is very useful for fitting large-scale omics data, such as high-throughput genotype data (genomics), gene expression data (transcriptomics), metabolomics data, etc.","Published":"2014-08-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BigSEM","Version":"0.2","Title":"Constructing Large Systems of Structural Equations","Description":"Construct large systems of structural equations using the two-stage penalized least squares (2SPLS) method proposed by Chen, Zhang and Zhang (2016).","Published":"2016-09-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"bigsplines","Version":"1.1-0","Title":"Smoothing Splines for Large Samples","Description":"Fits smoothing spline regression models using scalable algorithms designed for large samples. Seven marginal spline types are supported: linear, cubic, different cubic, cubic periodic, cubic thin-plate, ordinal, and nominal. Random effects and parametric effects are also supported. Response can be Gaussian or non-Gaussian: Binomial, Poisson, Gamma, Inverse Gaussian, or Negative Binomial.","Published":"2017-02-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bigstep","Version":"0.7.4","Title":"Stepwise Selection for Large Data Sets","Description":"Selecting linear and generalized linear models for large data sets\n using a modified stepwise procedure and modern selection criteria (like\n modifications of the Bayesian Information Criterion). Selection can be\n performed on data which exceed RAM capacity. A special selection strategy is\n available, faster than the classical stepwise procedure.","Published":"2017-04-05","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"bigtabulate","Version":"1.1.5","Title":"Table, Apply, and Split Functionality for Matrix and\n'big.matrix' Objects","Description":"Extend the bigmemory package with 'table', 'tapply', and 'split'\n support for 'big.matrix' objects.
The functions may also be used with native R\n matrices for improving speed and memory-efficiency.","Published":"2016-02-18","License":"LGPL-3 | Apache License 2.0","snapshot_date":"2017-06-23"} {"Package":"bigtcr","Version":"1.0","Title":"Nonparametric Analysis of Bivariate Gap Time with Competing\nRisks","Description":"For studying recurrent disease and death with competing\n risks, comparisons based on the well-known cumulative incidence function\n can be confounded by different prevalence rates of the competing events.\n Alternatively, comparisons of the conditional distribution of the survival\n time given the failure event type are more relevant for investigating the\n prognosis of different patterns of recurrent disease. This package implements\n a nonparametric estimator for the conditional cumulative incidence function\n and a nonparametric conditional bivariate cumulative incidence function for the\n bivariate gap times proposed in Huang et al. (2016) .","Published":"2016-10-30","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"BigTSP","Version":"1.0","Title":"Top Scoring Pair based methods for classification","Description":"Implements Top Scoring Pair based\n methods for classification including LDCA, TSP-tree, TSP-random\n forest and the TSP gradient boosting algorithm.","Published":"2012-08-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BigVAR","Version":"1.0.2","Title":"Dimension Reduction Methods for Multivariate Time Series","Description":"Estimates VAR and VARX models with structured Lasso Penalties.","Published":"2017-04-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bikedata","Version":"0.0.1","Title":"Download and Aggregate Data from Public Hire Bicycle Systems","Description":"Download and aggregate data from all public hire bicycle systems\n which provide open data, currently including Santander Cycles in London,\n U.K., and from the U.S.A., citibike in New York
City NY, Divvy in Chicago\n IL, Capital Bikeshare in Washington DC, Hubway in Boston MA, and Metro in\n Los Angeles LA.","Published":"2017-05-31","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"bikeshare14","Version":"0.1.0","Title":"Bay Area Bike Share Trips in 2014","Description":"Anonymised Bay Area bike share trip data for the year 2014. \n Also contains additional metadata on stations and weather.","Published":"2016-08-21","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"bild","Version":"1.1-5","Title":"BInary Longitudinal Data","Description":"Performs logistic regression for binary longitudinal\n data, allowing for serial dependence among observations from a given\n individual and a random intercept term. Estimation is via maximization\n of the exact likelihood of a suitably defined model. Missing values and \n unbalanced data are allowed, with some restrictions. ","Published":"2015-04-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bimetallic","Version":"1.0","Title":"Power for SNP analyses using silver standard cases","Description":"A power calculator for Genome-wide association studies\n (GWAs) with combined gold (error-free) and silver (erroneous)\n phenotyping per McDavid A, Crane PK, Newton KM, Crosslin DR, et\n al. (2011)","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"bimixt","Version":"1.0","Title":"Estimates Mixture Models for Case-Control Data","Description":"Estimates non-Gaussian mixture models of case-control data. The four types of models supported are binormal, two component constrained, two component unconstrained, and four component. The most general model is the four component model, under which both cases and controls are distributed according to a mixture of two unimodal distributions. In the four component model, the two component distributions of the control mixture may be distinct from the two components of the case mixture distribution. 
In the two component unconstrained model, the components of the control and case mixtures are the same; however the mixture probabilities may differ for cases and controls. In the two component constrained model, all controls are distributed according to one of the two components while cases follow a mixture distribution of the two components. In the binormal model, cases and controls are distributed according to distinct unimodal distributions. These models assume that Box-Cox transformed case and control data with a common lambda parameter are distributed according to Gaussian mixture distributions. Model parameters are estimated using the expectation-maximization (EM) algorithm. Likelihood ratio test comparison of nested models can be performed using the lr.test function. AUC and PAUC values can be computed for the model-based and empirical ROC curves using the auc and pauc functions, respectively. The model-based and empirical ROC curves can be graphed using the roc.plot function. Finally, the model-based density estimates can be visualized by plotting a model object created with the bimixt.model function. ","Published":"2015-08-25","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"Binarize","Version":"1.2","Title":"Binarization of One-Dimensional Data","Description":"Provides methods for the binarization of one-dimensional data and some visualization functions.","Published":"2017-02-06","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"BinaryEMVS","Version":"0.1","Title":"Variable Selection for Binary Data Using the EM Algorithm","Description":"Implements variable selection for high dimensional datasets with a binary response\n variable using the EM algorithm. Both probit and logit models are supported. 
Also included \n is a useful function to generate high dimensional data with correlated variables.","Published":"2016-01-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"BinaryEPPM","Version":"2.0","Title":"Mean and Variance Modeling of Binary Data","Description":"Modeling under- and over-dispersed binary data using extended Poisson process models (EPPM).","Published":"2016-11-28","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"binaryLogic","Version":"0.3.5","Title":"Binary Logic","Description":"Convert to binary numbers (Base2). Shift, rotate, summary. Based on logical vector.","Published":"2016-06-24","License":"GPL-3 + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"binda","Version":"1.0.3","Title":"Multi-Class Discriminant Analysis using Binary Predictors","Description":"The \"binda\" package implements functions for multi-class\n discriminant analysis using binary predictors, for corresponding \n variable selection, and for dichotomizing continuous data.","Published":"2015-07-08","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"bindata","Version":"0.9-19","Title":"Generation of Artificial Binary Data","Description":"Generation of correlated artificial binary data.","Published":"2012-11-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"bindr","Version":"0.1","Title":"Parametrized Active Bindings","Description":"Provides a simple interface for creating active bindings where the\n bound function accepts additional arguments.","Published":"2016-11-13","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"bindrcpp","Version":"0.2","Title":"An 'Rcpp' Interface to Active Bindings","Description":"Provides an easy way to fill an environment with active bindings\n that call a C++ function.","Published":"2017-06-17","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"binequality","Version":"1.0.1","Title":"Methods for Analyzing Binned Income 
Data","Description":"Methods for model selection, model averaging, and calculating metrics, such as the Gini, Theil, Mean Log Deviation, etc., on binned income data where the topmost bin is right-censored. We provide a non-parametric method, termed the bounded midpoint estimator (BME), which assigns cases to their bin midpoints, except for the censored bins, where cases are assigned an income estimated by fitting a Pareto distribution. Because the usual Pareto estimate can be inaccurate or undefined, especially in small samples, we implement a bounded Pareto estimate that yields much better results. We also provide a parametric approach, which fits distributions from the generalized beta (GB) family. Because some GB distributions can have poor fit or undefined estimates, we fit 10 GB-family distributions and use multimodel inference to obtain definite estimates from the best-fitting distributions. We also provide binned income data from all United States of America school districts, counties, and states.","Published":"2016-12-17","License":"GPL (>= 3.0)","snapshot_date":"2017-06-23"} {"Package":"binford","Version":"0.1.0","Title":"Binford's Hunter-Gatherer Data","Description":"Binford's hunter-gatherer data includes more than 200 variables\n coding aspects of hunter-gatherer subsistence, mobility, and social organization\n for 339 ethnographically documented groups of hunter-gatherers.","Published":"2016-08-01","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"bingat","Version":"1.2.2","Title":"Binary Graph Analysis Tools","Description":"Tools to analyze binary graph objects.","Published":"2016-01-15","License":"Apache License (== 2.0)","snapshot_date":"2017-06-23"} {"Package":"binGroup","Version":"1.1-0","Title":"Evaluation and experimental design for binomial group testing","Description":"This package provides methods for estimation and\n hypothesis testing of proportions in group testing designs.
It\n involves methods for estimating a proportion in a single\n population (assuming sensitivity and specificity 1 in designs\n with equal group sizes), as well as hypothesis tests and\n functions for experimental design for this situation. For\n estimating one proportion or the difference of proportions, a\n number of confidence interval methods are included, which can\n deal with various different pool sizes. Further, regression\n methods are implemented for simple pooling and matrix pooling\n designs.","Published":"2012-08-14","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"binhf","Version":"1.0-1","Title":"Haar-Fisz functions for binomial data","Description":"Binomial Haar-Fisz transforms for Gaussianization","Published":"2014-04-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"binman","Version":"0.1.0","Title":"A Binary Download Manager","Description":"Tools and functions for managing the download of binary files.\n Binary repositories are defined in 'YAML' format. Defining new \n pre-download, download and post-download templates allow additional \n repositories to be added.","Published":"2017-01-14","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"binMto","Version":"0.0-6","Title":"Asymptotic simultaneous confidence intervals for many-to-one\ncomparisons of proportions","Description":"Asymptotic simultaneous confidence intervals for comparison of many treatments with one control,\n for the difference of binomial proportions, allows for Dunnett-like-adjustment, Bonferroni or unadjusted intervals.\n Simulation of power of the above interval methods, approximate calculation of any-pair-power, and sample size\n iteration based on approximate any-pair power. 
\n Exact conditional maximum test for many-to-one comparisons to a control.","Published":"2013-10-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"BinNonNor","Version":"1.3","Title":"Data Generation with Binary and Continuous Non-Normal Components","Description":"Generation of multiple binary and continuous non-normal variables simultaneously \n given the marginal characteristics and association structure based on the methodology \n proposed by Demirtas et al. (2012).","Published":"2016-05-13","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"BinNor","Version":"2.1","Title":"Simultaneous Generation of Multivariate Binary and Normal\nVariates","Description":"Generating multiple binary and normal variables simultaneously given marginal characteristics and association structure based on the methodology proposed by Demirtas and Doganay (2012).","Published":"2016-05-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"binom","Version":"1.1-1","Title":"Binomial Confidence Intervals For Several Parameterizations","Description":"Constructs confidence intervals on the probability of\n success in a binomial experiment via several parameterizations.","Published":"2014-01-02","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"binomen","Version":"0.1.2","Title":"'Taxonomic' Specification and Parsing Methods","Description":"Includes functions for working with taxonomic data,\n including functions for combining, separating, and filtering\n taxonomic groups by any rank or name. Allows standard ('SE')\n and non-standard evaluation ('NSE').","Published":"2017-04-25","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"binomialcftp","Version":"1.0","Title":"Generates binomial random numbers via the coupling from the past\nalgorithm","Description":"Binomial random numbers are generated via the perfect\n sampling algorithm. At each iteration dual Markov chains are\n generated and coalescence is checked.
If coalescence\n occurs, the resulting number is output; if not,\n the algorithm is restarted with the starting time doubled, T(t+1)=2*T(t), until coalescence\n occurs.","Published":"2012-09-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"binomlogit","Version":"1.2","Title":"Efficient MCMC for Binomial Logit Models","Description":"The R package contains different MCMC schemes to estimate the regression coefficients of a binomial (or binary) logit model within a Bayesian framework: a data-augmented independence MH-sampler, an auxiliary mixture sampler and a hybrid auxiliary mixture (HAM) sampler. All sampling procedures are based on algorithms using data augmentation, where the regression coefficients are estimated by rewriting the logit model as a latent variable model called difference random utility model (dRUM).","Published":"2014-03-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"binomSamSize","Version":"0.1-5","Title":"Confidence Intervals and Sample Size Determination for a\nBinomial Proportion under Simple Random Sampling and Pooled\nSampling","Description":"\n A suite of functions to compute confidence intervals and necessary\n sample sizes for the parameter p of the Bernoulli B(p)\n distribution under simple random sampling or under pooled\n sampling. Such computations are e.g. of interest when investigating\n the incidence or prevalence in populations.\n The package contains functions to compute coverage probabilities and\n coverage coefficients of the provided confidence interval\n procedures.
Sample size calculations are based on expected length.","Published":"2017-03-08","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"binomTools","Version":"1.0-1","Title":"Performing diagnostics on binomial regression models","Description":"This package provides a range of diagnostic methods for\n binomial regression models.","Published":"2011-08-09","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"BinOrdNonNor","Version":"1.3","Title":"Concurrent Generation of Binary, Ordinal and Continuous Data","Description":"Generation of samples from a mix of binary, ordinal and continuous random variables with a pre-specified correlation matrix and marginal distributions.","Published":"2017-03-28","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"binr","Version":"1.1","Title":"Cut Numeric Values into Evenly Distributed Groups","Description":"Implementation of algorithms for cutting numerical values\n exhibiting a potentially highly skewed distribution into evenly distributed\n groups (bins). This functionality can be applied for binning discrete\n values, such as counts, as well as for discretization of continuous values,\n for example, during generation of features used in machine learning\n algorithms.","Published":"2015-03-10","License":"Apache License (== 2.0)","snapshot_date":"2017-06-23"} {"Package":"binseqtest","Version":"1.0.3","Title":"Exact Binary Sequential Designs and Analysis","Description":"For a series of binary responses, create stopping boundary with exact results after stopping, allowing updating for missing assessments.","Published":"2016-12-16","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"binsmooth","Version":"0.1.0","Title":"Generate PDFs and CDFs from Binned Data","Description":"Provides several methods for generating density functions\n based on binned data. Data are assumed to be nonnegative, but the bin widths\n need not be uniform, and the top bin may be unbounded. 
All PDF smoothing methods\n maintain the areas specified by the binned data. (Equivalently, all CDF\n smoothing methods interpolate the points specified by the binned data.) An\n estimate for the mean of the distribution may be supplied as an optional\n argument, which greatly improves the reliability of statistics computed from\n the smoothed density functions. Methods include step function, recursive\n subdivision, and optimized spline.","Published":"2016-08-12","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"binst","Version":"0.2.0","Title":"Data Preprocessing, Binning for Classification and Regression","Description":"Various supervised and unsupervised binning tools\n including using entropy, recursive partition methods\n and clustering.","Published":"2016-06-17","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"bio.infer","Version":"1.3-3","Title":"Predict environmental conditions from biological observations","Description":"Imports benthic count data, reformats this data, and\n computes environmental inferences from this data.","Published":"2014-02-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bio3d","Version":"2.3-2","Title":"Biological Structure Analysis","Description":"Utilities to process, organize and explore protein structure,\n sequence and dynamics data. Features include the ability to read and write\n structure, sequence and dynamic trajectory data, perform sequence and structure\n database searches, data summaries, atom selection, alignment, superposition,\n rigid core identification, clustering, torsion analysis, distance matrix\n analysis, structure and sequence conservation analysis, normal mode analysis,\n principal component analysis of heterogeneous structure data, and correlation\n network analysis from normal mode and molecular dynamics data. 
In addition,\n various utility functions are provided to enable the statistical and graphical\n power of the R environment to work with biological sequence and structural data.\n Please refer to the URLs below for more information.","Published":"2017-06-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Biocomb","Version":"0.3","Title":"Feature Selection and Classification with the Embedded\nValidation Procedures for Biomedical Data Analysis","Description":"Contains functions for data analysis with an emphasis on biological data, including several algorithms for feature ranking, feature selection, and classification\n algorithms with embedded validation procedures.\n The functions can deal with numerical as well as nominal features. Also includes functions for the calculation\n of feature AUC (Area Under the ROC Curve) and HUM (hypervolume under manifold) values and the construction of 2D- and 3D-ROC curves.\n Provides the calculation of Area Above the RCC (AAC) values and the construction of Relative Cost Curves\n (RCC) to estimate classifier performance under the unequal misclassification cost problem.\n A special function is provided to deal with missing values, including different imputation schemes.","Published":"2017-03-22","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"Biodem","Version":"0.4","Title":"Biodemography Functions","Description":"The Biodem package provides a number of functions for biodemographic analysis.","Published":"2015-07-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"BiodiversityR","Version":"2.8-3","Title":"Package for Community Ecology and Suitability Analysis","Description":"Graphical User Interface (via the R-Commander) and utility functions (often based on the vegan package) for statistical analysis of biodiversity and ecological communities, including species accumulation curves, diversity indices, Renyi profiles, GLMs for analysis of species abundance and presence-absence, 
distance matrices, Mantel tests, and cluster, constrained and unconstrained ordination analysis. A book on biodiversity and community ecology analysis is available for free download from the website. In 2012, methods for (ensemble) suitability modelling and mapping were expanded in the package.","Published":"2017-06-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"BIOdry","Version":"0.5","Title":"Multilevel Modeling of Dendroclimatical Fluctuations","Description":"Multilevel ecological data series (MEDS) are sequences of observations ordered according to temporal/spatial hierarchies that are defined by sample designs, with sample variability confined to ecological factors. Dendroclimatic MEDS of tree rings and climate are modeled into normalized fluctuations of tree growth and aridity. Modeled fluctuations (model frames) are compared with Mantel correlograms on multiple levels defined by sample design. Package implementation can be understood by running examples in modelFrame(), and muleMan() functions. ","Published":"2017-04-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"BioFTF","Version":"1.2-0","Title":"Biodiversity Assessment Using Functional Tools","Description":"The main drawback of the most common biodiversity indices is that different measures may lead to different rankings among communities. This instrument overcomes this limit using some functional tools with the diversity profiles. In particular, the derivatives, the curvature, the radius of curvature, the arc length, and the surface area are proposed. The goal of this method is to interpret in detail the diversity profiles and obtain an ordering between different ecological communities on the basis of diversity. 
In contrast to the typical indices of diversity, the proposed method is able to capture the multidimensional aspect of biodiversity, because it takes into account both the evenness and the richness of the species present in an ecological community.","Published":"2016-09-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"biogas","Version":"1.7.0","Title":"Process Biogas Data and Predict Biogas Production","Description":"High- and low-level functions for processing biogas data and predicting biogas production. Molar mass and calculated oxygen demand (COD') can be determined from a chemical formula. Measured gas volume can be corrected for water vapor and to (possibly user-defined) standard temperature and pressure. Gas composition, cumulative production, or other variables can be interpolated to a specified time. Cumulative biogas and methane production (and rates) can be calculated using volumetric, manometric, or gravimetric methods for any number of reactors. With cumulative methane production data and data on reactor contents, biochemical methane potential (BMP) can be calculated and summarized, including subtraction of the inoculum contribution and normalization by substrate mass. Cumulative production and production rates can be summarized in several different ways (e.g., omitting normalization) using the same function. Lastly, biogas quantity and composition can be predicted from substrate composition and additional, optional data.","Published":"2017-02-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"biogeo","Version":"1.0","Title":"Point Data Quality Assessment and Coordinate Conversion","Description":"Functions for error detection and correction in point data quality datasets that are used in species distribution modelling. 
Includes functions for parsing and converting coordinates into decimal degrees from various formats.","Published":"2016-04-08","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"BioGeoBEARS","Version":"0.2.1","Title":"BioGeography with Bayesian (and Likelihood) Evolutionary\nAnalysis in R Scripts","Description":"BioGeoBEARS allows probabilistic inference of both historical biogeography (ancestral geographic ranges on a phylogeny) as well as comparison of different models of range evolution. It reproduces the model available in LAGRANGE (Ree and Smith 2008), as well as making available numerous additional models. For example, LAGRANGE as typically run has two free parameters, d (dispersal rate, i.e. the rate of range addition along a phylogenetic branch) and e (extinction rate, really the rate of local range loss along a phylogenetic branch). LAGRANGE also has a fixed cladogenic model which gives equal probability to a number of allowed range inheritance events, e.g.: (1) vicariance, (2) a new species starts in a subset of the ancestral range, (3) the ancestral range is copied to both species; in all cases, at least one species must have a starting range of size 1. LAGRANGE assigns equal probability to each of these events, and zero probability to other events. BioGeoBEARS adds an additional cladogenic event: founder-event speciation (the new species jumps to a range outside of the ancestral range), and also allows the relative weighting of the different sorts of events to be made into free parameters, allowing optimization and standard model choice procedures to pick the best model. The relative probability of different descendent range sizes is also parameterized and thus can also be specified or estimated. The flexibility available in BioGeoBEARS also enables the natural incorporation of (1) imperfect detection of geographic ranges in the tips, and (2) inclusion of fossil geographic range data, when the fossils are tips on the phylogeny. 
Bayesian analysis has been implemented through use of the \"LaplacesDemon\" package; however, this package is now maintained off CRAN, so its usage is not formally included in BioGeoBEARS at the current time. CITATION INFO: This package is the result of my Ph.D. research; please cite the package if you use it! Type: citation(package=\"BioGeoBEARS\") to get the citation information.","Published":"2014-01-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"biogram","Version":"1.4","Title":"N-Gram Analysis of Biological Sequences","Description":"Tools for extraction and analysis of various\n n-grams (k-mers) derived from biological sequences (proteins\n or nucleic acids). Contains QuiPT (quick permutation test) for fast\n feature-filtering of the n-gram data.","Published":"2017-01-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"Biograph","Version":"2.0.6","Title":"Explore Life Histories","Description":"Transition rates are computed from transitions and exposures. Useful graphics and life-course indicators are computed. The package structures the data for multistate statistical and demographic modeling of life histories.","Published":"2016-03-31","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"bioimagetools","Version":"1.1.0","Title":"Tools for Microscopy Imaging","Description":"Tools for 3D imaging, mostly for biology/microscopy.\n Read and write TIFF stacks. Functions for segmentation, filtering and analysing 3D point patterns.","Published":"2017-02-18","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"bioinactivation","Version":"1.1.5","Title":"Simulation of Dynamic Microbial Inactivation","Description":"Prediction and adjustment to experimental data of microbial\n inactivation. 
Several models available in the literature are implemented.","Published":"2017-01-29","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"BioInstaller","Version":"0.1.2","Title":"Lightweight Biology Software Installer","Description":"\n Can be used to install and download a large number of bioinformatics analysis software tools and databases, such as NGS read mapping tools with their required databases.","Published":"2017-06-23","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"biolink","Version":"0.1.2","Title":"Create Hyperlinks to Biological Databases and Resources","Description":"Generate URLs and hyperlinks to commonly used biological databases\n and resources based on standard identifiers. This is primarily useful when\n writing dynamic reports that reference things like gene symbols in text or\n tables, allowing you to, for example, convert gene identifiers to hyperlinks\n pointing to their entry in the NCBI Gene database. Currently supports NCBI\n Gene, PubMed, Gene Ontology, CRAN and Bioconductor.","Published":"2017-03-07","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"Biolinv","Version":"0.1-1","Title":"Modelling and Forecasting Biological Invasions","Description":"Analysing and forecasting biological invasion time series\n with a stochastic, non-mechanistic approach that gives proper weight\n to the anthropic component, accounts for habitat suitability and\n provides measures of precision for its estimates.","Published":"2017-02-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"BIOM.utils","Version":"0.9","Title":"Utilities for the BIOM (Biological Observation Matrix) Format","Description":"Provides utilities to facilitate import, export and computation with the\n BIOM (Biological Observation Matrix) format (http://biom-format.org).","Published":"2014-08-29","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"BioMark","Version":"0.4.5","Title":"Find Biomarkers in 
Two-Class Discrimination Problems","Description":"Variable selection methods are provided for several classification methods: the lasso/elastic net, PCLDA, PLSDA, and several t-tests. Two approaches for selecting cutoffs can be used, one based on the stability of model coefficients under perturbation, and the other on higher criticism.","Published":"2015-09-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"biomartr","Version":"0.5.1","Title":"Genomic Data Retrieval","Description":"Perform metagenomic data retrieval and functional annotation\n retrieval. In detail, this package aims to provide users with a standardized\n way to automate genome, proteome, coding sequence ('CDS'), 'GFF', and metagenome\n retrieval from 'NCBI' and 'ENSEMBL' databases. Furthermore, an interface to the 'BioMart' database\n (Smedley et al. (2009) ) allows users to retrieve\n functional annotation for genomic loci. Users can download entire databases such\n as 'NCBI RefSeq' (Pruitt et al. (2007) ), 'NCBI nr',\n 'NCBI nt' and 'NCBI Genbank' (Benson et al. (2013) ) as\n well as 'ENSEMBL' and 'ENSEMBLGENOMES' with only one command.","Published":"2017-05-28","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"BIOMASS","Version":"1.1","Title":"Estimating Aboveground Biomass and Its Uncertainty in Tropical\nForests","Description":"Contains functions to estimate aboveground biomass/carbon and its uncertainty in tropical forests. These functions allow users to (1) retrieve and correct taxonomy, (2) estimate wood density and its uncertainty, (3) construct height-diameter models, (4) estimate the above-ground biomass/carbon at the stand level with associated uncertainty. 
To cite BIOMASS, please use citation(\"BIOMASS\").","Published":"2017-01-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"biomod2","Version":"3.3-7","Title":"Ensemble Platform for Species Distribution Modeling","Description":"Functions for species distribution modeling, calibration and\n evaluation, ensemble of models.","Published":"2016-03-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"bionetdata","Version":"1.0.1","Title":"Biological and chemical data networks","Description":"Data Package that includes several examples of chemical and biological data networks, i.e. data graph structured.","Published":"2014-09-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bioOED","Version":"0.1.1","Title":"Sensitivity Analysis and Optimum Experiment Design for Microbial\nInactivation","Description":"Extends the bioinactivation package with functions for Sensitivity\n Analysis and Optimum Experiment Design.","Published":"2017-03-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"BioPET","Version":"0.2.1","Title":"Biomarker Prognostic Enrichment Tool","Description":"Prognostic Enrichment is a clinical trial strategy of evaluating an intervention in a patient population with a higher rate of the unwanted event than the broader patient population (R. Temple (2010) ). A higher event rate translates to a lower sample size for the clinical trial, which can have both practical and ethical advantages. This package is a tool to help evaluate biomarkers for prognostic enrichment of clinical trials. 
","Published":"2017-02-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BioPhysConnectoR","Version":"1.6-10","Title":"BioPhysConnectoR","Description":"Utilities and functions to investigate the relation\n between biomolecular structures, their interactions, and the\n evolutionary information revealed in sequence alignments of\n these molecules.","Published":"2013-01-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"bioplots","Version":"0.0.1","Title":"Visualization of Overlapping Results with Heatmap","Description":"Visualization of complex biological datasets is\n essential to understand complementary aspects of biology\n in the big data era.\n In addition, analyzing multiple datasets enables deep and\n accurate understanding of biological processes.\n Multiple datasets produce multiple analysis results, and\n these overlaps are usually visualized in a Venn diagram.\n bioplots is a tiny R package that generates a heatmap to\n visualize overlaps instead of using a Venn diagram.","Published":"2016-06-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"bioPN","Version":"1.2.0","Title":"Simulation of deterministic and stochastic biochemical reaction\nnetworks using Petri Nets","Description":"\n bioPN is a package suited to perform simulation of deterministic and stochastic systems of biochemical reaction\n networks.\n Models are defined using a subset of Petri Nets, in a way that is close to how chemical reactions\n are defined.\n For deterministic solutions, bioPN creates the associated system of differential equations \"on the fly\", and\n solves it with a Runge-Kutta Dormand-Prince 45 explicit algorithm.\n For stochastic solutions, bioPN offers variants of the Gillespie algorithm, or SSA.\n For hybrid deterministic/stochastic simulation,\n it employs the Haseltine and Rawlings algorithm, which partitions the system into fast and slow reactions.\n bioPN algorithms are developed in C to achieve adequate 
performance.","Published":"2014-03-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"biorxivr","Version":"0.1.3","Title":"Search and Download Papers from the bioRxiv Preprint Server","Description":"The bioRxiv preprint server (http://www.biorxiv.org) is a website where scientists can post preprints of scholarly texts in biology. Users can search and download PDFs in bulk from the preprint server. The text of abstracts is stored as raw text within R, and PDFs can easily be saved and imported for text mining with packages such as 'tm'.","Published":"2016-04-06","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"bios2mds","Version":"1.2.2","Title":"From BIOlogical Sequences to MultiDimensional Scaling","Description":"Bios2mds is primarily dedicated to the analysis of\n biological sequences by metric MultiDimensional Scaling with\n projection of supplementary data. It contains functions for\n reading multiple sequence alignment files, calculating distance\n matrices, performing metric multidimensional scaling and\n visualizing results.","Published":"2012-06-05","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"biosignalEMG","Version":"2.0.1","Title":"Tools for Electromyogram Signals (EMG) Analysis","Description":"Data processing tools to compute the rectified, integrated and the averaged EMG. Routines for automatic detection of activation phases. A routine to compute and plot the ensemble average of the EMG. An EMG signal simulator for general purposes.","Published":"2017-03-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"biospear","Version":"1.0.0","Title":"Biomarker Selection in Penalized Regression Models","Description":"Provides a useful R tool for developing and validating prediction models, estimating expected survival of patients, and visualizing it graphically. 
\n Most of the implemented methods are based on penalized regressions such as: the lasso (Tibshirani R (1996)), the elastic net (Zou H et al. (2005) ), the adaptive lasso (Zou H (2006) ), the stability selection (Meinshausen N et al. (2010) ), some extensions of the lasso (Ternes et al. (2016) ), some methods for the interaction setting (Ternes N et al. (2016) ), or others.\n A function generating simulated survival data sets is also provided.","Published":"2017-05-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"BioStatR","Version":"2.0.0","Title":"Initiation à la Statistique avec R","Description":"This package provides datasets and functions for the book \"Initiation à la Statistique avec R\", Dunod, 2nd ed., 2014.","Published":"2014-08-28","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"biotic","Version":"0.1.2","Title":"Calculation of Freshwater Biotic Indices","Description":"Calculates a range of UK freshwater invertebrate biotic indices\n including BMWP, Whalley, WHPT, Habitat-specific BMWP, AWIC, LIFE and PSI.","Published":"2016-04-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"biotools","Version":"3.1","Title":"Tools for Biometry and Applied Statistics in Agricultural\nScience","Description":"Tools designed to perform and work with cluster analysis (including Tocher's algorithm), \n\tdiscriminant analysis and path analysis (standard and under collinearity), as well as some \n\tuseful miscellaneous tools for dealing with sample size and optimum plot size calculations.\n\tMantel's permutation test can be found in this package. A new approach for calculating its\n\tpower is implemented. 
biotools also contains the new tests for genetic covariance components.\n\tAn approach for predicting spatial gene diversity is implemented.","Published":"2017-05-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bipartite","Version":"2.08","Title":"Visualising Bipartite Networks and Calculating Some (Ecological)\nIndices","Description":"Functions to visualise webs and calculate a series of indices commonly used to describe pattern in (ecological) webs. It focuses on webs consisting of only two levels (bipartite), e.g. pollination webs or predator-prey-webs. Visualisation is important to get an idea of what we are actually looking at, while the indices summarise different aspects of the web's topology. ","Published":"2017-03-31","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"biplotbootGUI","Version":"1.1","Title":"Bootstrap on Classical Biplots and Clustering Disjoint Biplot","Description":"A GUI with which the user can construct and interact with Bootstrap methods on Classical Biplots and with Clustering and/or Disjoint Biplot.","Published":"2015-06-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BiplotGUI","Version":"0.0-7","Title":"Interactive Biplots in R","Description":"Provides a GUI with which users can construct and interact\n with biplots.","Published":"2013-03-19","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"BIPOD","Version":"0.2.1","Title":"BIPOD (Bayesian Inference for Partially Observed diffusions)","Description":"Bayesian parameter estimation for (partially observed)\n two-dimensional diffusions.","Published":"2014-03-19","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"birdnik","Version":"0.1.0","Title":"Connector for the Wordnik API","Description":"A connector to the API for 'Wordnik' , a dictionary service that also provides\n bigram generation, word frequency data, and a whole host of other functionality.","Published":"2016-08-14","License":"MIT + file 
LICENSE","snapshot_date":"2017-06-23"} {"Package":"birdring","Version":"1.3","Title":"Methods to Analyse Ring Re-Encounter Data","Description":"R functions to read EURING data and analyse re-encounter data of birds marked by metal rings. For a tutorial, go to http://www.tandfonline.com/doi/full/10.1080/03078698.2014.933053.","Published":"2015-10-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"birk","Version":"2.1.2","Title":"MA Birk's Functions","Description":"Collection of tools to make R more convenient. Includes tools to\n summarize data using statistics not available with base R and manipulate\n objects for analyses.","Published":"2016-07-27","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"bisectr","Version":"0.1.0","Title":"Tools to find bad commits with git bisect","Description":"Tools to find bad commits with git bisect. See\n https://github.com/wch/bisectr for examples and test script\n templates.","Published":"2012-06-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"BiSEp","Version":"2.2","Title":"Toolkit to Identify Candidate Synthetic Lethality","Description":"Enables the user to infer potential synthetic lethal relationships\n by analysing relationships between bimodally distributed gene pairs in big\n gene expression datasets. 
Enables the user to visualise these candidate\n synthetic lethal relationships.","Published":"2017-01-26","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"bisoreg","Version":"1.4","Title":"Bayesian Isotonic Regression with Bernstein Polynomials","Description":"Provides functions for fitting Bayesian monotonic regression models to data.","Published":"2015-03-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BisRNA","Version":"0.2.1","Title":"Analysis of RNA Cytosine-5 Methylation","Description":"Bisulfite-treated RNA non-conversion in a set of samples is analysed as\n follows: each sample's Poisson parameter is estimated, and non-conversion\n p-values are calculated for each sample and adjusted for multiple testing.\n Finally, combined non-conversion p-value and standard error of the non-conversion\n are calculated on the intersection of the set of samples.\n A low combined non-conversion p-value points to methylation of the\n corresponding RNA cytosine, or another event blocking bisulfite conversion.","Published":"2017-04-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bit","Version":"1.1-12","Title":"A class for vectors of 1-bit booleans","Description":"bitmapped vectors of booleans (no NAs), \n coercion from and to logicals, integers and integer subscripts; \n fast boolean operators and fast summary statistics. \n With 'bit' vectors you can store true binary booleans {FALSE,TRUE} at the \n expense of 1 bit only, on a 32 bit architecture this means factor 32 less \n RAM and ~ factor 32 more speed on boolean operations. Due to overhead of \n R calls, actual speed gain depends on the size of the vector: expect gains \n for vectors of size > 10000 elements. Even for one-time boolean operations \n it can pay-off to convert to bit, the pay-off is obvious, when such \n components are used more than once. 
\n Reading from and writing to bit is approximately as fast as accessing\n standard logicals - mostly due to R's time for memory allocation. The package\n allows working with pre-allocated memory for return values by calling .Call()\n directly: when evaluating the speed of C-access with pre-allocated vector\n memory, copying from bit to logical requires only 70% of the time for copying\n from logical to logical; and copying from logical to bit comes at a\n performance penalty of 150%. The package now contains further classes for\n representing logical selections: 'bitwhich' for very skewed selections and\n 'ri' for selecting ranges of values for chunked processing. All three index\n classes can be used for subsetting 'ff' objects (ff-2.1-0 and higher).","Published":"2014-04-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"bit64","Version":"0.9-7","Title":"A S3 Class for Vectors of 64bit Integers","Description":"\n Package 'bit64' provides serializable S3 atomic 64bit (signed) integers.\n These are useful for handling database keys and exact counting in +-2^63.\n WARNING: do not use them as a replacement for 32bit integers; integer64 is not\n supported for subscripting by R-core and has different semantics when\n combined with double, e.g. integer64 + double => integer64.\n Class integer64 can be used in vectors, matrices, arrays and data.frames.\n Methods are available for coercion from and to logicals, integers, doubles,\n characters and factors as well as many elementwise and summary functions. 
\n Many fast algorithmic operations such as 'match' and 'order' support\n interactive data exploration and manipulation and optionally leverage caching.","Published":"2017-05-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"bitops","Version":"1.0-6","Title":"Bitwise Operations","Description":"Functions for bitwise operations on integer vectors.","Published":"2013-08-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BiTrinA","Version":"1.2","Title":"Binarization and Trinarization of One-Dimensional Data","Description":"Provides methods for the binarization and trinarization of one-dimensional data and some visualization functions.","Published":"2017-02-06","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"bitrugs","Version":"0.1","Title":"Bayesian Inference of Transmission Routes Using Genome Sequences","Description":"MCMC methods to estimate transmission dynamics and infection routes in hospitals using genomic sampling data.","Published":"2016-05-24","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"BivarP","Version":"1.0","Title":"Estimating the Parameters of Some Bivariate Distributions","Description":"Parameter estimation of bivariate distribution functions\n modeled as an Archimedean copula function. The input data may contain\n right-censored values. 
The marginal distributions used are two-parameter.\n Methods are provided for density, distribution, survival, and random sample generation.","Published":"2015-04-18","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"bivarRIpower","Version":"1.2","Title":"Sample size calculations for bivariate longitudinal data","Description":"Implements sample size calculations for the bivariate random\n intercept regression model that are described in Comulada and\n Weiss (2010)","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BivRegBLS","Version":"1.0.0","Title":"Tolerance Intervals and Errors-in-Variables Regressions in\nMethod Comparison Studies","Description":"Assess the agreement in method comparison studies by tolerance intervals and errors-in-variables regressions. The Ordinary Least Square regressions (OLSv and OLSh), the Deming Regression (DR), and the (Correlated)-Bivariate Least Square regressions (BLS and CBLS) can be used with unreplicated or replicated data. The BLS and CBLS are the two main functions to estimate a regression line, while XY.plot and MD.plot are the two main graphical functions to display, respectively, an (X,Y) plot or an (M,D) plot with the BLS or CBLS results. 
Assuming no proportional bias, the (M,D) plot (Bland-Altman plot) may be simplified by calculating horizontal line intervals with tolerance intervals (beta-expectation (type I) or beta-gamma content (type II)).","Published":"2017-01-06","License":"AGPL-3","snapshot_date":"2017-06-23"} {"Package":"bivrp","Version":"1.0","Title":"Bivariate Residual Plots with Simulation Polygons","Description":"Generates bivariate residual plots with simulation polygons for any diagnostics and bivariate model from which functions to extract the desired diagnostics, simulate new data and refit the models are available.","Published":"2016-12-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BivUnifBin","Version":"1.1","Title":"Generation of Bivariate Uniform Data and Its Relation to\nBivariate Binary Data","Description":"Simulation of bivariate uniform data with a full range of correlations based on two beta densities and computation of the tetrachoric correlation (correlation of bivariate uniform data) from the phi coefficient (correlation of bivariate binary data) and vice versa.","Published":"2017-01-26","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"biwavelet","Version":"0.20.11","Title":"Conduct Univariate and Bivariate Wavelet Analyses","Description":"This is a port of the WTC MATLAB package written by Aslak Grinsted\n and the wavelet program written by Christopher Torrence and Gilbert P.\n Compo. 
This package can be used to perform univariate and bivariate\n (cross-wavelet, wavelet coherence, wavelet clustering) analyses.","Published":"2016-09-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"biwt","Version":"1.0","Title":"Functions to compute the biweight mean vector and covariance &\ncorrelation matrices","Description":"Compute multivariate location, scale, and correlation\n estimates based on Tukey's biweight M-estimator.","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"bizdays","Version":"1.0.3","Title":"Business Days Calculations and Utilities","Description":"Business days calculations based on a list of holidays and\n nonworking weekdays. Quite useful for fixed income and derivatives pricing.","Published":"2017-05-03","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"bkmr","Version":"0.2.0","Title":"Bayesian Kernel Machine Regression","Description":"Implementation of a statistical approach \n for estimating the joint health effects of multiple \n concurrent exposures.","Published":"2017-03-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"BKPC","Version":"1.0","Title":"Bayesian Kernel Projection Classifier","Description":"Bayesian kernel projection classifier is a nonlinear multicategory classifier which performs the classification of the projections of the data to the principal axes of the feature space. A Gibbs sampler is implemented to find the posterior distributions of the parameters.","Published":"2016-02-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"blackbox","Version":"1.0.18","Title":"Black Box Optimization and Exploration of Parameter Space","Description":"Performs prediction of a response function from simulated response values, allowing black-box optimization of functions estimated with some error. 
Includes a simple user interface for such applications, as well as more specialized functions designed to be called by the Migraine software (see URL). The latter functions are used for prediction of likelihood surfaces and implied likelihood ratio confidence intervals, and for exploration of predictor space of the surface. Prediction of the response is based on ordinary kriging (with residual error) of the input. Estimation of smoothing parameters is performed by generalized cross-validation.","Published":"2017-02-03","License":"CeCILL-2","snapshot_date":"2017-06-23"} {"Package":"BlakerCI","Version":"1.0-5","Title":"Blaker's Binomial Confidence Limits","Description":"Fast and accurate calculation of Blaker's binomial confidence limits (and some related stuff).","Published":"2015-08-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"BlandAltmanLeh","Version":"0.3.1","Title":"Plots (Slightly Extended) Bland-Altman Plots","Description":"Bland-Altman Plots using either base graphics or ggplot2,\n augmented with confidence intervals, with detailed return values and\n a sunflowerplot option for data with ties.","Published":"2015-12-23","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"blatr","Version":"1.0.1","Title":"Send Emails Using 'Blat' for Windows","Description":"A wrapper around the 'Blat' command line SMTP mailer for Windows.\n 'Blat' is public domain software, but be sure to read the license before use.\n It can be found at the Blat website http://www.blat.net.","Published":"2015-03-11","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"Blaunet","Version":"2.0.4","Title":"Calculate and Analyze Blau Status for Measuring Social Distance","Description":"An integrated set of tools to calculate and analyze Blau statuses quantifying social distance between individuals belonging to organizations. 
Relational (network) data may be incorporated for additional analyses.","Published":"2016-04-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"blavaan","Version":"0.2-4","Title":"Bayesian Latent Variable Analysis","Description":"Fit a variety of Bayesian latent variable models, including confirmatory\n factor analysis, structural equation models, and latent growth curve models.","Published":"2017-04-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BLCOP","Version":"0.3.1","Title":"Black-Litterman and Copula Opinion Pooling Frameworks","Description":"An implementation of the Black-Litterman Model and Attilio\n Meucci's copula opinion pooling framework.","Published":"2015-02-04","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"blendedLink","Version":"1.0","Title":"A New Link Function that Blends Two Specified Link Functions","Description":"A new link function that equals one specified link function up to a cutover then a linear rescaling of another specified link function. For use in glm() or glm2(). The intended use is in binary regression, in which case the first link should be set to \"log\" and the second to \"logit\". This ensures that fitted probabilities are between 0 and 1 and that exponentiated coefficients can be interpreted as relative risks for probabilities up to the cutover.","Published":"2017-01-31","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"blender","Version":"0.1.2","Title":"Analyze biotic homogenization of landscapes","Description":"Tools for assessing exotic species' contributions to\n landscape homogeneity using average pairwise Jaccard similarity\n and an analytical approximation derived in Harris et al. (2011,\n \"Occupancy is nine-tenths of the law,\" The American\n Naturalist). 
Also includes a randomization method for assessing\n sources of model error.","Published":"2014-02-22","License":"GPL-2 | Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"blighty","Version":"3.1-4","Title":"United Kingdom coastlines","Description":"Function for drawing the coastline of the British Isles","Published":"2012-04-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"blkbox","Version":"1.0","Title":"Data Exploration with Multiple Machine Learning Algorithms","Description":"Allows data to be processed by multiple machine learning algorithms\n at the same time, enables feature selection of data by a single algorithm or\n combinations of multiple algorithms. Easy-to-use tool for k-fold cross validation and\n nested cross validation.","Published":"2016-08-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"blkergm","Version":"1.1","Title":"Fitting block ERGM given the block structure on social networks","Description":"This package is an extension to the \"ergm\" package which implements block ERGMs.","Published":"2014-08-27","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"blm","Version":"2013.2.4.4","Title":"Binomial linear and linear-expit regression","Description":"Implements regression models for binary data on the absolute risk scale. These models are applicable to cohort and population-based case-control data.","Published":"2013-08-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"blme","Version":"1.0-4","Title":"Bayesian Linear Mixed-Effects Models","Description":"Maximum a posteriori estimation for linear and generalized\n linear mixed-effects models in a Bayesian setting. 
Extends\n 'lme4' by Douglas Bates, Martin Maechler, Ben Bolker, and Steve Walker.","Published":"2015-06-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"blmeco","Version":"1.1","Title":"Data Files and Functions Accompanying the Book \"Bayesian Data\nAnalysis in Ecology using R, BUGS and Stan\"","Description":"Data files and functions accompanying the book Korner-Nievergelt, Roth, von Felten, Guelat, Almasi, Korner-Nievergelt (2015) \"Bayesian Data Analysis in Ecology using R, BUGS and Stan\", Elsevier, New York.","Published":"2015-08-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"BLModel","Version":"1.0.2","Title":"Black-Litterman Posterior Distribution","Description":"Posterior distribution in the Black-Litterman model is computed from a prior distribution given in the form of a time series of asset returns and a continuous distribution of views provided by the user as an external function.","Published":"2017-03-29","License":"GNU General Public License version 3","snapshot_date":"2017-06-23"} {"Package":"blob","Version":"1.1.0","Title":"A Simple S3 Class for Representing Vectors of Binary Data\n('BLOBS')","Description":"R's raw vector is useful for storing a single binary object.\n What if you want to put a vector of them in a data frame? The blob\n package provides the blob object, a list of raw vectors, suitable for\n use as a column in data frame.","Published":"2017-06-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"blockcluster","Version":"4.2.3","Title":"Coclustering Package for Binary, Categorical, Contingency and\nContinuous Data-Sets","Description":"Simultaneous clustering of rows and columns, usually designated by\n biclustering, co-clustering or block clustering, is an important technique\n in two way data analysis. It consists of estimating a mixture model which\n takes into account the block clustering problem on both the individual and\n variables sets. 
The blockcluster package provides a bridge between the C++\n core library and the R statistical computing environment. This package\n allows the user to co-cluster binary, contingency, continuous and categorical\n data-sets. It also provides utility functions to visualize the results.\n This package may be useful for various applications in fields of data\n mining, information retrieval, biology, computer vision and many more. More\n information about the project and a comprehensive tutorial can be found at\n the link mentioned in the URL.","Published":"2017-02-27","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"blockmatrix","Version":"1.0","Title":"blockmatrix: Tools to solve algebraic systems with partitioned\nmatrices","Description":"Some elementary matrix algebra tools are implemented to manage\n block matrices or partitioned matrices, i.e. a \"matrix of matrices\"\n (http://en.wikipedia.org/wiki/Block_matrix). The block matrix is here\n defined as a new S3 object. In this package, some methods for the \"matrix\"\n object are rewritten for the \"blockmatrix\" object. New methods are implemented.\n This package was created to solve equation systems with block matrices for\n the analysis of environmental vector time series .\n Bugs/comments/questions/collaboration of any kind are warmly welcomed.","Published":"2014-01-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BlockMessage","Version":"1.0","Title":"Creates strings that show a text message in 8 by 8 block letters","Description":"Creates strings that show a text message in 8 by 8 block\n letters","Published":"2013-03-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"blockmodeling","Version":"0.1.8","Title":"An R package for Generalized and classical blockmodeling of\nvalued networks","Description":"The package is primarily meant as an implementation of\n generalized blockmodeling for valued networks. 
In addition,\n measures of similarity or dissimilarity based on structural\n equivalence and regular equivalence (REGE algorithm) can be\n computed and partitioned matrices can be plotted.","Published":"2010-01-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"blockmodels","Version":"1.1.1","Title":"Latent and Stochastic Block Model Estimation by a 'V-EM'\nAlgorithm","Description":"Latent and Stochastic Block Model estimation by a Variational EM algorithm.\n Various probability distributions are provided (Bernoulli,\n Poisson...), with or without covariates.","Published":"2015-04-21","License":"LGPL-2.1","snapshot_date":"2017-06-23"} {"Package":"blockrand","Version":"1.3","Title":"Randomization for block random clinical trials","Description":"Create randomizations for block random clinical trials.\n Can also produce a pdf file of randomization cards.","Published":"2013-01-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"blocksdesign","Version":"2.5","Title":"Nested and Crossed Block Designs for Factorial, Fractional\nFactorial and Unstructured Treatment Sets","Description":"Constructs randomized nested row-and-column type block designs\n with arbitrary depth of nesting for arbitrary factorial or fractional \n factorial treatment designs. The treatment model can be defined\n by a models.matrix formula which allows any feasible \n combination of quantitative or qualitative model terms.\n Any feasible design size can be defined and, where necessary, \n a D-optimal swapping routine will find the best fraction for the required \n design size. Blocks are nested hierarchically and the block model \n for any particular level of nesting can comprise either a simple nested blocks \n design or a crossed row-and-column blocks design. Block sizes \n are either all equal or differ, at most, by one plot within any particular row\n or column classification and any particular level of nesting. 
The design outputs \n include a data frame showing the allocation of treatments to blocks, a table\n showing block levels, the fractional design efficiency, \n the achieved D-efficiency, the achieved A-efficiency\n (unstructured treatments only) and A-efficiency upper bounds, where available,\n for each stratum in the design. For designs with simple unstructured treatments,\n a plan layout showing the allocation of treatments to blocks or to rows and\n columns in the bottom stratum of the design is also given.","Published":"2017-06-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"blockseg","Version":"0.2","Title":"Two Dimensional Change-Points Detection","Description":"Segments a matrix in blocks with constant values.","Published":"2016-02-10","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"blockTools","Version":"0.6-3","Title":"Block, Assign, and Diagnose Potential Interference in Randomized\nExperiments","Description":"Blocks units into experimental blocks, with one unit per treatment condition, by creating a measure of multivariate distance between all possible pairs of units. Maximum, minimum, or an allowable range of differences between units on one variable can be set. Randomly assign units to treatment conditions. Diagnose potential interference between units assigned to different treatment conditions. Write outputs to .tex and .csv files.","Published":"2016-12-02","License":"GPL (>= 2) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"Blossom","Version":"1.4","Title":"Statistical Comparisons with Distance-Function Based Permutation\nTests","Description":"Provides tools for making statistical comparisons with distance-function based permutation tests developed by P. W. Mielke, Jr. and colleagues at Colorado State University (Mielke, P. W. & Berry, K. J. 
Permutation Methods: A Distance Function Approach (Springer, New York, 2001)) and for testing parameters estimated in linear models with permutation procedures developed by B. S. Cade and colleagues at the Fort Collins Science Center, U. S. Geological Survey.","Published":"2016-03-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BLPestimatoR","Version":"0.1.4","Title":"Performs a BLP Demand Estimation","Description":"Provides the estimation algorithm to perform the demand estimation described in Berry, Levinsohn and Pakes (1995) . The routine uses analytic gradients and offers a large number of implemented integration methods and optimization routines.","Published":"2017-05-05","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"BLR","Version":"1.4","Title":"Bayesian Linear Regression","Description":"Bayesian Linear Regression","Published":"2014-12-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"blsAPI","Version":"0.1.8","Title":"Request Data from the U.S. Bureau of Labor Statistics API","Description":"Allows users to request data for one or multiple series through the\n U.S. Bureau of Labor Statistics API. Users provide parameters as specified in\n and the function returns a JSON\n string.","Published":"2017-05-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"blscrapeR","Version":"2.1.5","Title":"An API Wrapper for the Bureau of Labor Statistics (BLS)","Description":"Scrapes various data from . The U.S. Bureau of Labor Statistics is the statistical branch of the United States Department of Labor. 
The package has additional functions to help parse, analyze and visualize the data.","Published":"2017-06-19","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"BMA","Version":"3.18.7","Title":"Bayesian Model Averaging","Description":"Package for Bayesian model averaging and variable selection for linear models,\n generalized linear models and survival models (Cox\n regression).","Published":"2017-05-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BMAmevt","Version":"1.0.1","Title":"Multivariate Extremes: Bayesian Estimation of the Spectral\nMeasure","Description":"Toolkit for Bayesian estimation of the dependence structure\n in Multivariate Extreme Value parametric models.","Published":"2017-03-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bmd","Version":"0.5","Title":"Benchmark dose analysis for dose-response data","Description":"Benchmark dose analysis for continuous and quantal\n dose-response data.","Published":"2012-03-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bmem","Version":"1.5","Title":"Mediation analysis with missing data using bootstrap","Description":"Four methods for mediation analysis with missing data: Listwise deletion, Pairwise deletion, Multiple imputation, and Two Stage Maximum Likelihood algorithm. For MI and TS-ML, auxiliary variables can be included. Bootstrap confidence intervals for mediation effects are obtained. The robust method is also implemented for TS-ML. Since version 1.4, bmem adds the capability to conduct power analysis for mediation models.","Published":"2013-10-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"bmeta","Version":"0.1.2","Title":"Bayesian Meta-Analysis and Meta-Regression","Description":"Provides a collection of functions for conducting meta-analyses in a Bayesian context in R. The package includes functions for computing various effect size or outcome measures (e.g. 
odds ratios, mean difference and incidence rate ratio) for different types of data based on MCMC simulations. Users are allowed to fit fixed- and random-effects models with different priors to the data. Meta-regression can be carried out if effects of additional covariates are observed. Furthermore, the package provides functions for creating posterior distribution plots and forest plot to display main model output. Traceplots and some other diagnostic plots are also available for assessing model fit and performance.","Published":"2016-01-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BMhyd","Version":"1.2-8","Title":"PCM for Hybridization","Description":"The BMhyd package analyzes the phenotypic evolution of species of hybrid origin on a phylogenetic network. This package can detect the hybrid vigor effect, a burst of variation at formation, and the relative portion of heritability from its parents. Parameters are estimated by maximum likelihood. Users need to enter a comparative data set, a phylogeny, and information on gene flow leading to hybrids. ","Published":"2015-08-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BMisc","Version":"1.0.1","Title":"Miscellaneous Functions for Panel Data, Quantiles, and Printing\nResults","Description":"These are miscellaneous functions for working with panel data, quantiles, and printing results. For panel data, the package includes functions for making a panel data balanced (that is, dropping missing individuals that have missing observations in any time period), converting id numbers to row numbers, and to treat repeated cross sections as panel data under the assumption of rank invariance. 
For quantiles, there are functions to make ecdf functions from a set of data points (this is particularly useful when a distribution function is created in several steps) and to combine distribution functions based on some external weights; these distribution functions can easily be inverted to obtain quantiles. Finally, there are several other miscellaneous functions for obtaining weighted means, weighted distribution functions, and weighted quantiles; to generate summary statistics and their differences for two groups; and to drop covariates from formulas.","Published":"2017-06-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Bmix","Version":"0.6","Title":"Bayesian Sampling for Stick-Breaking Mixtures","Description":"This is a bare-bones implementation of sampling algorithms\n for a variety of Bayesian stick-breaking (marginally DP)\n mixture models, including particle learning and Gibbs sampling\n for static DP mixtures, particle learning for dynamic BAR\n stick-breaking, and DP mixture regression. The software is\n designed to be easy to customize to suit different situations\n and for experimentation with stick-breaking models. Since\n particles are repeatedly copied, it is not an especially\n efficient implementation.","Published":"2016-02-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bmixture","Version":"0.5","Title":"Bayesian Estimation for Finite Mixture of Distributions","Description":"Provides statistical tools for Bayesian estimation for finite mixture of distributions, mainly mixture of Gamma, Normal and t-distributions. The package implements the recent improvements in the Bayesian literature for the finite mixture of distributions, including Mohammadi et al. 
(2013) and Mohammadi and Salehi-Rad (2012) .","Published":"2017-05-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bmk","Version":"1.0","Title":"MCMC diagnostics package","Description":"MCMC diagnostic package that contains tools to diagnose\n convergence as well as to evaluate sensitivity studies.\n Includes summary functions which output the mean, median,\n 95 percent CI, Gelman & Rubin diagnostics and the Hellinger\n distance based diagnostics. Also contains functions to\n determine when an MCMC chain has converged via Hellinger\n distance. A function is also provided to compare outputs from\n identically dimensioned chains for determining sensitivity to\n prior distribution assumptions.","Published":"2012-10-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bmlm","Version":"1.3.0","Title":"Bayesian Multilevel Mediation","Description":"Easy estimation of Bayesian multilevel mediation models with Stan.","Published":"2017-06-23","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"bmmix","Version":"0.1-2","Title":"Bayesian multinomial mixture","Description":"Bayesian multinomial mixture model ","Published":"2014-05-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BMN","Version":"1.02","Title":"The pseudo-likelihood method for pairwise binary Markov networks","Description":"This package implements approximate and exact methods for\n pairwise binary Markov models. The exact method uses an\n implementation of the junction tree algorithm for binary\n graphical models. For more details see the help files.","Published":"2010-04-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"bmp","Version":"0.2","Title":"Read Windows Bitmap (BMP) images","Description":"Reads Windows BMP format images. Currently limited to 8 bit\n greyscale images and 24,32 bit (A)RGB images. 
Pure R implementation without\n external dependencies.","Published":"2013-08-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bmrm","Version":"3.3","Title":"Bundle Methods for Regularized Risk Minimization Package","Description":"Bundle methods for minimization of convex and non-convex risk\n under L1 or L2 regularization. Implements the algorithm proposed by Teo et\n al. (JMLR 2010) as well as the extension proposed by Do and Artieres (JMLR\n 2012). The package comes with a lot of loss functions for machine learning\n which make it powerful for big data analysis. Applications include:\n structured prediction, linear SVM, multi-class SVM, f-beta optimization,\n ROC optimization, ordinal regression, quantile regression,\n epsilon insensitive regression, least mean square, logistic regression,\n least absolute deviation regression (see package examples), etc., all with\n L1 and L2 regularization.","Published":"2017-05-19","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"BMRV","Version":"1.32","Title":"Bayesian Models for Rare Variant Association Analysis","Description":"Provides two Bayesian models for detecting the association between rare genetic variants and a trait that can be continuous, ordinal or binary. Bayesian latent variable collapsing model (BLVCM) detects interaction effects and is dedicated to twin designs, while it can also be applied to independent samples. Hierarchical Bayesian multiple regression model (HBMR) incorporates genotype uncertainty information and can be applied to either independent or family samples. Furthermore, it deals with continuous, binary and ordinal traits.","Published":"2016-11-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BMS","Version":"0.3.4","Title":"Bayesian Model Averaging Library","Description":"Bayesian model averaging for linear models with a wide choice of (customizable) priors. 
Built-in priors include coefficient priors (fixed, flexible and hyper-g priors), 5 kinds of model priors, moreover model sampling by enumeration or various MCMC approaches. Post-processing functions allow for inferring posterior inclusion and model probabilities, various moments, coefficient and predictive densities. Plotting functions available for posterior model size, MCMC convergence, predictive and coefficient densities, best models representation, BMA comparison.","Published":"2015-11-24","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"BNDataGenerator","Version":"1.0","Title":"Data Generator based on Bayesian Network Model","Description":"Data generator based on Bayesian network model","Published":"2014-12-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bnlearn","Version":"4.1.1","Title":"Bayesian Network Structure Learning, Parameter Learning and\nInference","Description":"Bayesian network structure learning, parameter learning and\n inference.\n This package implements constraint-based (GS, IAMB, Inter-IAMB, Fast-IAMB,\n MMPC, Hiton-PC), pairwise (ARACNE and Chow-Liu), score-based (Hill-Climbing\n and Tabu Search) and hybrid (MMHC and RSMAX2) structure learning algorithms\n for discrete, Gaussian and conditional Gaussian networks, along with many\n score functions and conditional independence tests.\n The Naive Bayes and the Tree-Augmented Naive Bayes (TAN) classifiers are\n also implemented.\n Some utility functions (model comparison and manipulation, random data\n generation, arc orientation testing, simple and advanced plots) are\n included, as well as support for parameter estimation (maximum likelihood\n and Bayesian) and inference, conditional probability queries and\n cross-validation. 
Development snapshots with the latest bugfixes are\n available from .","Published":"2017-03-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bnnSurvival","Version":"0.1.5","Title":"Bagged k-Nearest Neighbors Survival Prediction","Description":"Implements a bootstrap aggregated (bagged) version of\n the k-nearest neighbors survival probability prediction method (Lowsky et\n al. 2013). In addition to the bootstrapping of training samples, the\n features can be subsampled in each baselearner to break the correlation\n between them. The Rcpp package is used to speed up the computation.","Published":"2017-05-10","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"bnormnlr","Version":"1.0","Title":"Bayesian Estimation for Normal Heteroscedastic Nonlinear\nRegression Models","Description":"Implementation of Bayesian estimation in normal heteroscedastic nonlinear regression models following Cepeda-Cuervo (2001).","Published":"2014-12-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"BNPdensity","Version":"2017.03","Title":"Ferguson-Klass Type Algorithm for Posterior Normalized Random\nMeasures","Description":"Bayesian nonparametric density estimation modeling mixtures by a Ferguson-Klass type algorithm for posterior normalized random measures.","Published":"2017-03-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BNPMIXcluster","Version":"0.2.0","Title":"Bayesian Nonparametric Model for Clustering with Mixed Scale\nVariables","Description":"Bayesian nonparametric approach for clustering that is capable of combining different types of variables (continuous, ordinal and nominal) and also accommodates different sampling probabilities in a complex survey design. The model is based on a location mixture model with a Poisson-Dirichlet process prior on the location parameters of the associated latent variables. The package performs the clustering model described in Carmona, C., Nieto-Barajas, L. E., Canale, A. 
(2016) .","Published":"2017-02-01","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"bnpmr","Version":"1.1","Title":"Bayesian monotonic nonparametric regression","Description":"Implements the Bayesian nonparametric monotonic regression\n method described in Bornkamp & Ickstadt (2009), Biometrics, 65,\n 198-205.","Published":"2013-05-03","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"BNPTSclust","Version":"1.1","Title":"A Bayesian Nonparametric Algorithm for Time Series Clustering","Description":"Performs the algorithm for time series clustering described in Nieto-Barajas and Contreras-Cristan (2014).","Published":"2015-04-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BNSL","Version":"0.1.2","Title":"Bayesian Network Structure Learning","Description":"From a given data frame, this package learns its Bayesian network structure based on a selected score.","Published":"2017-06-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BNSP","Version":"1.1.1","Title":"Bayesian Non- And Semi-Parametric Model Fitting","Description":"MCMC for Dirichlet process mixtures.","Published":"2017-02-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bnspatial","Version":"1.0.1","Title":"Spatial Implementation of Bayesian Networks and Mapping","Description":"Package for the spatial implementation of Bayesian Networks and mapping in geographical space. It makes maps of expected value (or most likely state) given known and unknown conditions, maps of uncertainty measured as coefficient of variation or Shannon index (entropy), maps of probability associated to any states of any node of the network. Some additional features are provided as well: parallel processing options, data discretization routines and function wrappers designed for users with minimal knowledge of the R language. Outputs can be exported to any common GIS format. 
Development was funded by the European Union FP7 (2007-2013), under project ROBIN (agreement 283093).","Published":"2016-12-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"bnstruct","Version":"1.0.2","Title":"Bayesian Network Structure Learning from Data with Missing\nValues","Description":"Bayesian Network Structure Learning from Data with Missing Values.\n The package implements the Silander-Myllymaki complete search,\n the Max-Min Parents-and-Children, the Hill-Climbing, the\n Max-Min Hill-climbing heuristic searches, and the Structural\n Expectation-Maximization algorithm. Available scoring functions are\n BDeu, AIC, BIC. The package also implements methods for generating and using\n bootstrap samples, imputed data, inference.","Published":"2016-12-13","License":"GPL (>= 2) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"boa","Version":"1.1.8-2","Title":"Bayesian Output Analysis Program (BOA) for MCMC","Description":"A menu-driven program and library of functions for carrying out\n convergence diagnostics and statistical and graphical analysis of Markov\n chain Monte Carlo sampling output.","Published":"2016-06-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"BoardGames","Version":"1.0.0","Title":"Board Games and Tools for Building Board Games","Description":"Tools for constructing board/grid based games, as well as readily available game(s) for your entertainment.","Published":"2016-07-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bodenmiller","Version":"0.1","Title":"Profiling of Peripheral Blood Mononuclear Cells using CyTOF","Description":"This data package contains a subset of the Bodenmiller et al., Nat Biotech 2012 dataset for testing single cell, high dimensional analysis and visualization methods.","Published":"2015-12-18","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"BOG","Version":"2.0","Title":"Bacterium and Virus Analysis of Orthologous Groups (BOG) is a\nPackage for 
Identifying Differentially Regulated Genes in the\nLight of Gene Functions","Description":"An implementation of three statistical tests for identification of COG (Cluster of Orthologous Groups) that are over represented among genes that show differential expression under conditions. It also provides tabular and graphical summaries of the results for easy visualisation and presentation. ","Published":"2015-03-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"boilerpipeR","Version":"1.3","Title":"Interface to the Boilerpipe Java Library","Description":"Generic Extraction of main text content from HTML files; removal\n of ads, sidebars and headers using the boilerpipe \n (http://code.google.com/p/boilerpipe/) Java library. The\n extraction heuristics from boilerpipe show a robust performance for a wide\n range of web site templates.","Published":"2015-05-11","License":"Apache License (== 2.0)","snapshot_date":"2017-06-23"} {"Package":"BOIN","Version":"2.4","Title":"Bayesian Optimal INterval (BOIN) Design for Single-Agent and\nDrug- Combination Phase I Clinical Trials","Description":"The Bayesian optimal interval (BOIN) design is a novel phase I\n clinical trial design for finding the maximum tolerated dose (MTD). It can be\n used to design both single-agent and drug-combination trials. The BOIN design\n is motivated by the top priority and concern of clinicians when testing a new\n drug, which is to effectively treat patients and minimize the chance of exposing\n them to subtherapeutic or overly toxic doses. The prominent advantage of the\n BOIN design is that it achieves simplicity and superior performance at the same\n time. The BOIN design is algorithm-based and can be implemented in a simple\n way similar to the traditional 3+3 design. 
The BOIN design yields an average\n performance that is comparable to that of the continual reassessment method\n (CRM, one of the best model-based designs) in terms of selecting the MTD, but\n has a substantially lower risk of assigning patients to subtherapeutic or overly\n toxic doses.","Published":"2016-08-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"bold","Version":"0.4.0","Title":"Interface to Bold Systems 'API'","Description":"A programmatic interface to the Web Service methods provided by\n Bold Systems for genetic 'barcode' data. Functions include methods for\n searching for sequences by taxonomic names, ids, collectors, and\n institutions; as well as a function for searching for specimens, and\n downloading trace files.","Published":"2017-01-06","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"Bolstad","Version":"0.2-34","Title":"Functions for Elementary Bayesian Inference","Description":"A set of R functions and data sets for the book Introduction to Bayesian Statistics, Bolstad, W.M. (2017), John Wiley & Sons ISBN 978-1-118-09156-2.","Published":"2017-03-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Bolstad2","Version":"1.0-28","Title":"Bolstad functions","Description":"A set of R functions and data sets for the book\n Understanding Computational Bayesian Statistics, Bolstad, W.M.\n (2009), John Wiley & Sons ISBN 978-0470046098.","Published":"2013-06-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BonEV","Version":"1.0","Title":"An Improved Multiple Testing Procedure for Controlling False\nDiscovery Rates","Description":"An improved multiple testing procedure for controlling false discovery rates which is developed based on the Bonferroni procedure with integrated estimates from the Benjamini-Hochberg procedure and Storey's q-value procedure. 
It controls false discovery rates by controlling the expected number of false discoveries.","Published":"2016-02-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bookdown","Version":"0.4","Title":"Authoring Books and Technical Documents with R Markdown","Description":"Output formats and utilities for authoring books and technical documents with R Markdown.","Published":"2017-05-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"bookdownplus","Version":"1.0.2","Title":"Generate Varied Books and Documents with R 'bookdown' Package","Description":"A collection and selector of R 'bookdown' templates. 'bookdownplus' helps you write academic journal articles, guitar books, chemical equations, mails, calendars, and diaries. R 'bookdownplus' extends the features of 'bookdown', and simplifies the procedure. Users only have to choose a template, clarify the book title and author name, and then focus on writing the text. No need to struggle with YAML and LaTeX.","Published":"2017-06-21","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"boolean3","Version":"3.1.6","Title":"Boolean Binary Response Models","Description":"This package implements a\n partial-observability procedure for testing Boolean\n hypotheses that generalizes the binary response GLM as\n outlined in Braumoeller (2003).","Published":"2014-11-18","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"BoolFilter","Version":"1.0.0","Title":"Optimal Estimation of Partially Observed Boolean Dynamical\nSystems","Description":"Tools for optimal and approximate state estimation as well as\n network inference of Partially-Observed Boolean Dynamical Systems.","Published":"2017-01-09","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"BoolNet","Version":"2.1.3","Title":"Construction, Simulation and Analysis of Boolean Networks","Description":"Provides methods to reconstruct and generate synchronous,\n asynchronous, probabilistic and temporal Boolean 
networks, and to\n analyze and visualize attractors in Boolean networks.","Published":"2016-11-21","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"Boom","Version":"0.7","Title":"Bayesian Object Oriented Modeling","Description":"A C++ library for Bayesian modeling, with an emphasis on\n Markov chain Monte Carlo. Although boom contains a few R utilities\n (mainly plotting functions), its primary purpose is to install the\n BOOM C++ library on your system so that other packages can link\n against it.","Published":"2017-05-28","License":"LGPL-2.1 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"BoomSpikeSlab","Version":"0.9.0","Title":"MCMC for Spike and Slab Regression","Description":"Spike and slab regression a la McCulloch and George (1997).","Published":"2017-05-28","License":"LGPL-2.1 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"boostmtree","Version":"1.1.0","Title":"Boosted Multivariate Trees for Longitudinal Data","Description":"Implements Friedman's gradient descent boosting algorithm for longitudinal data using multivariate tree base learners. A time-covariate interaction effect is modeled using penalized B-splines (P-splines) with estimated adaptive smoothing parameter.","Published":"2016-04-12","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"boostr","Version":"1.0.0","Title":"A modular framework to bag or boost any estimation procedure","Description":"boostr provides a modular framework that returns the focus of\n ensemble learning back to 'learning' (instead of programming).","Published":"2014-05-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"boostSeq","Version":"1.0","Title":"Optimized GWAS cohort subset selection for resequencing studies","Description":"This package contains functionality to select a subsample\n of a genotyped cohort e.g. 
from a GWAS that is preferential for\n resequencing under the assumption that causal variants share a\n haplotype with the risk allele of associated variants. The\n subsample is selected such that it contains risk alleles at\n maximum frequency for all SNPs specified. Phenotypes can also\n be included as additional variables to obtain a higher fraction\n of extreme phenotypes. An arbitrary number of SNPs and/or\n phenotypes can be specified for enrichment in a single\n subsample.","Published":"2012-08-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"boot","Version":"1.3-19","Title":"Bootstrap Functions (Originally by Angelo Canty for S)","Description":"Functions and datasets for bootstrapping from the\n book \"Bootstrap Methods and Their Application\" by A. C. Davison and \n D. V. Hinkley (1997, CUP), originally written by Angelo Canty for S.","Published":"2017-04-21","License":"Unlimited","snapshot_date":"2017-06-23"} {"Package":"bootES","Version":"1.2","Title":"Bootstrap Effect Sizes","Description":"Calculate robust measures of effect sizes using the bootstrap.","Published":"2015-08-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bootLR","Version":"1.0","Title":"Bootstrapped Confidence Intervals for (Negative) Likelihood\nRatio Tests","Description":"Computes appropriate confidence intervals for the likelihood ratio tests commonly used in medicine/epidemiology. It is particularly useful when the sensitivity or specificity in the sample is 100%. Note that this does not perform the test on nested models--for that, see 'epicalc::lrtest'.","Published":"2015-07-13","License":"LGPL-2.1","snapshot_date":"2017-06-23"} {"Package":"BootMRMR","Version":"0.1","Title":"Bootstrap-MRMR Technique for Informative Gene Selection","Description":"Selection of informative features like genes, transcripts, RNA seq, etc. using Bootstrap Maximum Relevance and Minimum Redundancy technique from a given high dimensional genomic dataset. 
Informative gene selection involves identification of relevant genes and removal of redundant genes as much as possible from a large gene space. Main applications are in high-dimensional expression data analysis (e.g. microarray data, NGS expression data and other genomics and proteomics applications).","Published":"2016-09-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bootnet","Version":"1.0.0","Title":"Bootstrap Methods for Various Network Estimation Routines","Description":"Bootstrap methods to assess accuracy and stability of estimated network structures\n and centrality indices. Allows for flexible specification of any undirected network \n estimation procedure in R, and offers default sets for 'qgraph', 'IsingFit', 'IsingSampler',\n 'glasso', 'huge' and 'parcor' packages.","Published":"2017-05-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"BootPR","Version":"0.60","Title":"Bootstrap Prediction Intervals and Bias-Corrected Forecasting","Description":"Bias-Corrected Forecasting and Bootstrap Prediction Intervals for Autoregressive Time Series.","Published":"2014-04-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"bootRes","Version":"1.2.3","Title":"Bootstrapped Response and Correlation Functions","Description":"Calculation of Bootstrapped Response and Correlation\n Functions for Use in Dendroclimatology.","Published":"2012-11-27","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"bootruin","Version":"1.2-4","Title":"A Bootstrap Test for the Probability of Ruin in the Classical\nRisk Process","Description":"We provide a framework for testing the probability of ruin in the classical (compound Poisson) risk process. 
It also includes some procedures for assessing and comparing the performance of the bootstrap test and the test using asymptotic normality.","Published":"2016-12-30","License":"AGPL-3","snapshot_date":"2017-06-23"} {"Package":"bootspecdens","Version":"3.0","Title":"Testing equality of spectral densities","Description":"Bootstrap for testing the hypothesis that the spectral\n densities of m (m >= 2), not necessarily independent, time\n series are equal.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bootsPLS","Version":"1.0.3","Title":"Bootstrap Subsamplings of Sparse Partial Least Squares -\nDiscriminant Analysis for Classification and Signature\nIdentification","Description":"Applicable to any classification problem with more than 2 classes. It relies on bootstrap subsamplings of sPLS-DA and provides tools to select the most stable variables (defined as the ones consistently selected over the bootstrap subsamplings) and to predict the class of test samples.","Published":"2015-08-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"bootStepAIC","Version":"1.2-0","Title":"Bootstrap stepAIC","Description":"Model selection by bootstrapping the stepAIC() procedure.","Published":"2009-06-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bootstrap","Version":"2017.2","Title":"Functions for the Book \"An Introduction to the Bootstrap\"","Description":"Software (bootstrap, cross-validation, jackknife) and data\n for the book \"An Introduction to the Bootstrap\" by B. Efron and\n R. Tibshirani, 1993, Chapman and Hall. This package is\n primarily provided for projects already based on it, and for\n support of the book. 
New projects should preferentially use the\n recommended package \"boot\".","Published":"2017-02-27","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"bootSVD","Version":"0.5","Title":"Fast, Exact Bootstrap Principal Component Analysis for High\nDimensional Data","Description":"Implements fast, exact bootstrap Principal Component Analysis and\n Singular Value Decompositions for high dimensional data, as described in\n . For data matrices that are too large to operate\n on in memory, users can input objects with class 'ff' (see the 'ff'\n package), where the actual data is stored on disk. In response, this\n package will implement a block matrix algebra procedure for calculating the\n principal components (PCs) and bootstrap PCs. Depending on options set by\n the user, the 'parallel' package can be used to parallelize the calculation of\n the bootstrap PCs.","Published":"2015-06-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"boottol","Version":"2.0","Title":"Bootstrap Tolerance Levels for Credit Scoring Validation\nStatistics","Description":"Used to create bootstrap tolerance levels for the Kolmogorov-Smirnov (KS) statistic, the area under receiver operator characteristic curve (AUROC) statistic, and the Gini coefficient for each score cutoff. Also provides a bootstrap alternative to the Vasicek test.","Published":"2015-03-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"BootWPTOS","Version":"1.2","Title":"Test Stationarity using Bootstrap Wavelet Packet Tests","Description":"Provides significance tests for second-order stationarity\n\tfor time series using bootstrap wavelet packet tests.","Published":"2016-06-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"boral","Version":"1.3.1","Title":"Bayesian Ordination and Regression AnaLysis","Description":"Bayesian approaches for analyzing multivariate data in ecology. 
Estimation is performed using Markov Chain Monte Carlo (MCMC) methods via JAGS. Three types of models may be fitted: 1) With explanatory variables only, boral fits independent column Generalized Linear Models (GLMs) to each column of the response matrix; 2) With latent variables only, boral fits a purely latent variable model for model-based unconstrained ordination; 3) With explanatory and latent variables, boral fits correlated column GLMs with latent variables to account for any residual correlation between the columns of the response matrix. ","Published":"2017-04-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Boruta","Version":"5.2.0","Title":"Wrapper Algorithm for All Relevant Feature Selection","Description":"An all relevant feature selection wrapper algorithm.\n It finds relevant features by comparing original attributes'\n importance with importance achievable at random, estimated\n using their permuted copies.","Published":"2017-01-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BoSSA","Version":"2.1","Title":"A Bunch of Structure and Sequence Analysis","Description":"Reads and plots phylogenetic placements obtained using the 'pplacer' and 'guppy' software .","Published":"2017-05-09","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"bossMaps","Version":"0.1.0","Title":"Convert Binary Species Range Maps into Continuous Surfaces Based\non Distance to Range Edge","Description":"Contains functions to convert binary (presence-absence) expert species range maps (like those found in a field guide) into continuous surfaces based on distance to range edge. 
These maps can then be used in species distribution models such as Maximum Entropy (Phillips 2008 ) using additional information (such as point occurrence data) to refine the expert map.","Published":"2016-12-30","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"boussinesq","Version":"1.0.3","Title":"Analytic Solutions for (ground-water) Boussinesq Equation","Description":"This package is a collection of R functions implemented\n from published and available analytic solutions for the\n One-Dimensional Boussinesq Equation (ground-water). In\n particular, the function \"beq.lin\" is the analytic solution of\n the linearized form of Boussinesq Equation between two\n different head-based boundary (Dirichlet) conditions;\n \"beq.song\" is the non-linear power-series analytic solution of\n the motion of a wetting front over a dry bedrock (Song et al.,\n 2007, see complete reference on function documentation).\n Bugs/comments/questions/collaboration of any kind are warmly\n welcomed.","Published":"2013-04-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"boxoffice","Version":"0.1.1","Title":"Downloads Box Office Information for Given Dates (How Much Each\nMovie Earned in Theaters)","Description":"Download daily box office information (how much each movie earned\n in theaters) using data from either Box Office Mojo () or\n The Numbers ().","Published":"2016-08-20","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"boxplotdbl","Version":"1.2.2","Title":"Double Box Plot for Two-Axes Correlation","Description":"Correlation chart of two sets (x and y) of data. \n Using quartiles with boxplot style. \n Visualize the effect of a factor. ","Published":"2013-11-24","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"boxr","Version":"0.3.4","Title":"Interface for the 'Box.com API'","Description":"An R interface for the remote file hosting service 'Box' \n (). 
In addition to uploading and downloading files,\n this package includes functions which mirror base R operations for local \n files (e.g. box_load(), box_save(), box_read(), box_setwd(), etc.), as well\n as 'git' style functions for entire directories (e.g. box_fetch(), \n box_push()).","Published":"2017-01-12","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"bpa","Version":"0.1.1","Title":"Basic Pattern Analysis","Description":"Run basic pattern analyses on character sets, digits, or combined\n input containing both characters and numeric digits. Useful for data\n cleaning and for identifying columns containing multiple or nonstandard\n formats.","Published":"2016-04-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bpca","Version":"1.2-2","Title":"Biplot of Multivariate Data Based on Principal Components\nAnalysis","Description":"Implements biplot (2d and 3d) of multivariate data based\n on principal components analysis and diagnostic tools of the quality of the reduction.","Published":"2013-11-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bpcp","Version":"1.3.4","Title":"Beta Product Confidence Procedure for Right Censored Data","Description":"Calculates nonparametric pointwise confidence intervals for the survival distribution for right censored data. Has two-sample tests for dissimilarity (e.g., difference, ratio or odds ratio) in survival at a fixed time. Especially important for small sample sizes or heavily censored data. Includes mid-p options.","Published":"2016-06-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bPeaks","Version":"1.2","Title":"bPeaks: an intuitive peak-calling strategy to detect\ntranscription factor binding sites from ChIP-seq data in small\neukaryotic genomes","Description":"bPeaks is a simple approach to identify transcription factor binding sites from ChIP-seq data. 
Our general philosophy is to provide an easy-to-use tool, well-adapted for small eukaryotic genomes (< 20 Mb). bPeaks uses a combination of 4 cutoffs (T1, T2, T3 and T4) to mimic \"good peak\" properties as described by biologists who visually inspect the ChIP-seq data on a genome browser. For yeast genomes, bPeaks calculates the proportion of peaks that fall in promoter sequences. These peaks are good candidates as transcription factor binding sites. ","Published":"2014-02-28","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"BPEC","Version":"1.0","Title":"Bayesian Phylogeographic and Ecological Clustering","Description":"Model-based clustering for phylogeographic data comprising mtDNA sequences and geographical locations along with optional environmental characteristics, aiming to identify migration events that led to homogeneous population clusters. ","Published":"2016-04-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"bpkde","Version":"1.0-7","Title":"Back-Projected Kernel Density Estimation","Description":"Nonparametric multivariate kernel density\n estimation using a back-projected kernel.","Published":"2014-09-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"bpp","Version":"1.0.0","Title":"Computations Around Bayesian Predictive Power","Description":"Implements functions to update Bayesian Predictive Power Computations after not stopping a clinical trial at an interim analysis. Such an interim analysis can either be blinded or unblinded. Code is provided for Normally distributed endpoints with known variance, with a prominent example being the hazard ratio.","Published":"2016-12-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bqtl","Version":"1.0-32","Title":"Bayesian QTL Mapping Toolkit","Description":"QTL mapping toolkit for inbred crosses and recombinant\n inbred lines. 
Includes maximum likelihood and Bayesian tools.","Published":"2016-01-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BradleyTerry2","Version":"1.0-6","Title":"Bradley-Terry Models","Description":"Specify and fit the Bradley-Terry model, including structured versions in which the parameters are related to explanatory variables through a linear predictor and versions with contest-specific effects, such as a home advantage.","Published":"2015-02-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"braidReports","Version":"0.5.3","Title":"Visualize Combined Action Response Surfaces and Report BRAID\nAnalyses","Description":"Provides functions to generate, format, and style surface plots for visualizing combined action data. Also provides functions for reporting on a BRAID analysis, including plotting curve-shifts, calculating IAE values, and producing full BRAID analysis reports.","Published":"2016-04-08","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"braidrm","Version":"0.71","Title":"Fitting Dose Response with the BRAID Combined Action Model","Description":"Contains functions for evaluating, analyzing, and fitting combined action dose response surfaces with the Bivariate Response to Additive Interacting Dose (BRAID) model of combined action.","Published":"2016-03-08","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"BrailleR","Version":"0.24.2","Title":"Improved Access for Blind Users","Description":"Blind users do not have access to the graphical output from R\n without printing the content of graphics windows to an embosser of some kind. This\n is not as immediate as is required for efficient access to statistical output.\n The functions here are created so that blind people can make even better use\n of R. 
This includes the text descriptions of graphs, convenience functions\n to replace the functionality offered in many GUI front ends, and experimental\n functionality for optimising graphical content to prepare it for embossing as\n tactile images.","Published":"2016-03-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"brainGraph","Version":"1.0.0","Title":"Graph Theory Analysis of Brain MRI Data","Description":"A set of tools for performing graph theory analysis of brain MRI\n data. It is best suited to data from a Freesurfer analysis (cortical\n thickness, volumes, local gyrification index, surface area), but also works\n with e.g., tractography data from FSL and fMRI data from DPABI. It contains\n a graphical user interface for graph visualization and data exploration and\n several functions for generating useful figures.","Published":"2017-04-10","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"brainR","Version":"1.2","Title":"Helper functions to misc3d and rgl packages for brain imaging","Description":"This includes functions for creating 3D and 4D images using WebGL, RGL, and JavaScript commands. This package relies on the X ToolKit (XTK, https://github.com/xtk/X#readme). ","Published":"2014-03-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"brainwaver","Version":"1.6","Title":"Basic wavelet analysis of multivariate time series with a\nvisualisation and parametrisation using graph theory","Description":"This package computes the correlation matrix for each\n scale of a wavelet decomposition, namely the one performed by\n the R package waveslim (Whitcher, 2000). A hypothesis test is\n applied to each entry of one matrix in order to construct an\n adjacency matrix of a graph. The graph obtained is finally\n analysed using the small-world theory (Watts and Strogatz,\n 1998) and using the computation of efficiency (Latora, 2001),\n tested using simulated attacks. 
The brainwaver project is\n complementary to the camba project for brain-data\n preprocessing. A collection of scripts (with a makefile) is\n available to download along with the brainwaver package, see\n information on the webpage mentioned below.","Published":"2012-12-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Branching","Version":"0.9.4","Title":"Simulation and Estimation for Branching Processes","Description":"Simulation and parameter estimation of multitype Bienayme-Galton-Watson processes.","Published":"2016-12-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"brant","Version":"0.1-3","Title":"Test for Parallel Regression Assumption","Description":"Tests the parallel regression assumption for ordinal logit models generated with the function polr() from the package 'MASS'.","Published":"2017-05-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"braQCA","Version":"0.9.9.6","Title":"Bootstrapped Robustness Assessment for Qualitative Comparative\nAnalysis","Description":"Test the robustness of a user's Qualitative Comparative Analysis\n solutions to randomness, using the bootstrapped assessment: baQCA(). This\n package also includes a function that provides recommendations for improving\n solutions to reach typical significance levels: brQCA(). 
After applying recommendations \n from brQCA(), QCAdiff() shows which cases are excluded from the final result.","Published":"2017-02-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"brea","Version":"0.1.0","Title":"Bayesian Recurrent Event Analysis","Description":"A function to produce MCMC samples for posterior inference in semiparametric Bayesian discrete time competing risks recurrent events models.","Published":"2016-10-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"breakage","Version":"1.1-1","Title":"SICM pipette tip geometry estimation","Description":"Estimates geometry of SICM pipette tips by fitting a physical model to recorded breakage-current data.","Published":"2014-12-08","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"breakaway","Version":"3.0","Title":"Species Richness Estimation and Modeling","Description":"Species richness estimation is an important problem in biodiversity analysis. This package provides methods for total species richness estimation (observed plus unobserved) and a method for modelling total diversity with covariates. breakaway() estimates total (observed plus unobserved) species richness. Microbial diversity datasets are characterized by a large number of rare species and a small number of highly abundant species. The class of models implemented by breakaway() is flexible enough to model both these features. breakaway_nof1() implements a similar procedure; however, it does not require a singleton count. 
betta() provides a method for modelling total diversity with covariates in a way that accounts for its estimated nature and thus accounts for unobserved taxa, and betta_random() permits random effects modelling.","Published":"2016-03-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"breakfast","Version":"0.1.0","Title":"Multiple Change-Point Detection and Segmentation","Description":"Performs multiple change-point detection in data sequences, or data sequence\n segmentation, using computationally efficient multiscale methods. This version only\n implements the \"Tail-Greedy Unbalanced Haar\" change-point detection methodology; more\n methods will be added in future versions. To start with, see the function\n segment.mean.","Published":"2017-05-26","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"breakpoint","Version":"1.2","Title":"An R Package for Multiple Break-Point Detection via the\nCross-Entropy Method","Description":"Implements the Cross-Entropy (CE) method, which is a model-based stochastic optimization technique, to estimate both the number of break-points and their corresponding locations in continuous and discrete measurements (Priyadarshana and Sofronov (2015), Priyadarshana and Sofronov (2012a), Priyadarshana and Sofronov (2012b)).","Published":"2016-01-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"breathtestcore","Version":"0.3.0","Title":"Core Functions to Read and Fit 13c Time Series from Breath Tests","Description":"Reads several formats of 13C data (IRIS/Wagner, BreathID) and CSV.\n Creates artificial sample data for testing. \n Fits Maes/Ghoos, Bluck-Coward self-correcting formula using 'nls', 'nlme'.\n See Bluck L J C and Coward W A 2006 .\n This package contains a refactored subset of github package \n 'dmenne/d13cbreath' without database and display functions. Methods to \n fit breath test curves with Bayesian Stan methods are refactored to \n github package 'dmenne/breathteststan'. 
For a Shiny GUI, see \n package 'dmenne/breathtestshiny'.","Published":"2017-05-11","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"breathteststan","Version":"0.3.0","Title":"Stan-Based Fit to Gastric Emptying Curves","Description":"Stan-based curve-fitting function\n for use with package 'breathtestcore' by the same author.\n Stan functions are refactored here for easier testing.","Published":"2017-05-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"bReeze","Version":"0.4-0","Title":"Functions for wind resource assessment","Description":"A collection of functions to analyse, visualize and interpret wind data\n and to calculate the potential energy production of wind turbines.","Published":"2014-09-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"brew","Version":"1.0-6","Title":"Templating Framework for Report Generation","Description":"brew implements a templating framework for mixing text and\n R code for report generation. brew template syntax is similar\n to PHP, Ruby's erb module, Java Server Pages, and Python's psp\n module.","Published":"2011-04-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"brewdata","Version":"0.4","Title":"Extracting Usable Data from the Grad Cafe Results Search","Description":"Retrieves and parses graduate admissions survey data from the Grad Cafe website (http://thegradcafe.com).","Published":"2015-01-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"brglm","Version":"0.5-9","Title":"Bias reduction in binomial-response generalized linear models","Description":"Fit generalized linear models with binomial responses using either an adjusted-score approach to bias reduction or maximum penalized likelihood where penalization is by Jeffreys invariant prior. These procedures return estimates with improved frequentist properties (bias, mean squared error) that are always finite even in cases where the maximum likelihood estimates are infinite (data separation). 
Fitting takes place by fitting generalized linear models on iteratively updated pseudo-data. The interface is essentially the same as 'glm'. More flexibility is provided by the fact that custom pseudo-data representations can be specified and used for model fitting. Functions are provided for the construction of confidence intervals for the reduced-bias estimates.","Published":"2013-11-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"brglm2","Version":"0.1.4","Title":"Bias Reduction in Generalized Linear Models","Description":"Estimation and inference from generalized linear models based on various methods for bias reduction. The 'brglmFit' fitting method can achieve reduction of estimation bias by solving either the mean bias-reducing adjusted score equations in Firth (1993) and Kosmidis and Firth (2009) , or the median bias-reduction adjusted score equations in Kenne et al. (2016) , or through the direct subtraction of an estimate of the bias of the maximum likelihood estimator from the maximum likelihood estimates as in Cordeiro and McCullagh (1991) . Estimation in all cases takes place via a quasi Fisher scoring algorithm, and S3 methods for the construction of confidence intervals for the reduced-bias estimates are provided. In the special case of generalized linear models for binomial and multinomial responses, the adjusted score approaches return estimates with improved frequentist properties, that are also always finite, even in cases where the maximum likelihood estimates are infinite (e.g. complete and quasi-complete separation). 
'brglm2' also provides pre-fit and post-fit methods for detecting separation and infinite maximum likelihood estimates in binomial response generalized linear models.","Published":"2017-05-23","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"bride","Version":"1.3","Title":"Brier score decomposition of probabilistic forecasts for binary\nevents","Description":"Decomposes the empirical Brier score into reliability, resolution and uncertainty. Two different estimators for the components are provided: The original estimators proposed by Murphy (1974), and the bias-corrected estimators proposed by Ferro and Fricker (2012). Sampling variances of all the components are estimated. This package applies only to probabilistic predictions of binary events.","Published":"2013-07-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"bridgedist","Version":"0.1.0","Title":"An Implementation of the Bridge Distribution with Logit-Link as\nin Wang and Louis (2003)","Description":"An implementation of the bridge distribution with logit-link in\n R. In Wang and Louis (2003), such a univariate\n bridge distribution was derived as the distribution of the random intercept that\n 'bridged' a marginal logistic regression and a conditional logistic regression.\n The conditional and marginal regression coefficients are a scalar multiple\n of each other. 
Such is not the case if the random intercept distribution were\n Gaussian.","Published":"2016-04-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"bridger2","Version":"0.1.0","Title":"Genome-Wide RNA Degradation Analysis Using BRIC-Seq Data","Description":"BRIC-seq is a genome-wide approach for determining RNA stability in mammalian cells.\n This package provides a series of functions for performing quality checks of your BRIC-seq data,\n calculation of RNA half-life for each transcript and comparison of RNA half-lives between two conditions.","Published":"2016-12-05","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"bridgesampling","Version":"0.1-1","Title":"Bridge Sampling for Marginal Likelihoods and Bayes Factors","Description":"Provides functions for estimating marginal likelihoods, Bayes factors,\n posterior model probabilities, and normalizing constants in general,\n via different versions of bridge sampling (Meng & Wong, 1996,\n ).","Published":"2017-03-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"briskaR","Version":"0.1.0","Title":"Biological Risk Assessment","Description":"A spatio-temporal exposure-hazard model for assessing biological\n risk and impact. The model is based on stochastic geometry for describing\n the landscape and the exposed individuals, a dispersal kernel for the\n dissemination of contaminants and an ecotoxicological equation.","Published":"2016-10-11","License":"GPL (>= 2) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"brlrmr","Version":"0.1.2","Title":"Bias Reduction with Missing Binary Response","Description":"Provides two main functions, il() and fil(). The il() function implements the EM algorithm developed by Ibrahim and Lipsitz (1996) to estimate the parameters of a logistic regression model with the missing response when the missing data mechanism is nonignorable. The fil() function implements the algorithm proposed by Maity et al. 
(2017+) to reduce the bias produced by the method of Ibrahim and Lipsitz (1996).","Published":"2017-06-09","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"brm","Version":"1.0","Title":"Binary Regression Model","Description":"Fits novel models for the conditional relative risk, risk difference and odds ratio.","Published":"2016-09-17","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"brms","Version":"1.7.0","Title":"Bayesian Regression Models using Stan","Description":"Fit Bayesian generalized (non-)linear multilevel models \n using Stan for full Bayesian inference. A wide range of distributions \n and link functions are supported, allowing users to fit -- among others -- \n linear, robust linear, count data, survival, response times, ordinal, \n zero-inflated, hurdle, and even self-defined mixture models all in a \n multilevel context. Further modeling options include non-linear and \n smooth terms, auto-correlation structures, censored data, meta-analytic \n standard errors, and quite a few more. In addition, all parameters of the \n response distribution can be predicted in order to perform distributional \n regression. Prior specifications are flexible and explicitly encourage \n users to apply prior distributions that actually reflect their beliefs.\n Model fit can easily be assessed and compared with posterior predictive \n checks and leave-one-out cross-validation.","Published":"2017-05-23","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"brnn","Version":"0.6","Title":"Bayesian Regularization for Feed-Forward Neural Networks","Description":"Bayesian regularization for feed-forward neural networks.","Published":"2016-01-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Brobdingnag","Version":"1.2-4","Title":"Very large numbers in R","Description":"Handles very large numbers in R. Real numbers are held\n using their natural logarithms, plus a logical flag indicating\n sign. 
The package includes a vignette that gives a\n step-by-step introduction to using S4 methods.","Published":"2013-12-05","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"broman","Version":"0.65-4","Title":"Karl Broman's R Code","Description":"Miscellaneous R functions, including functions related to\n graphics (mostly for base graphics), permutation tests, running\n mean/median, and general utilities.","Published":"2017-05-28","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"broom","Version":"0.4.2","Title":"Convert Statistical Analysis Objects into Tidy Data Frames","Description":"Convert statistical analysis objects from R into tidy data frames,\n so that they can more easily be combined, reshaped and otherwise processed\n with tools like 'dplyr', 'tidyr' and 'ggplot2'. The package provides three\n S3 generics: tidy, which summarizes a model's statistical findings such as\n coefficients of a regression; augment, which adds columns to the original\n data such as predictions, residuals and cluster assignments; and glance, which\n provides a one-row summary of model-level statistics.","Published":"2017-02-13","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"brotli","Version":"1.0","Title":"A Compression Format Optimized for the Web","Description":"A lossless compressed data format that uses a combination of the\n LZ77 algorithm and Huffman coding. 
Brotli is similar in speed to deflate (gzip)\n but offers more dense compression.","Published":"2017-03-18","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"Brq","Version":"2.0","Title":"Bayesian Analysis of Quantile Regression Models","Description":"Bayesian estimation and variable selection for quantile\n regression models.","Published":"2017-03-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"brr","Version":"1.0.0","Title":"Bayesian Inference on the Ratio of Two Poisson Rates","Description":"Implementation of the Bayesian inference for the two independent Poisson samples model, using the semi-conjugate family of prior distributions.","Published":"2015-09-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"brranching","Version":"0.2.0","Title":"Fetch 'Phylogenies' from Many Sources","Description":"Includes methods for fetching 'phylogenies' from a variety\n of sources, currently includes 'Phylomatic'\n (), with more in the future.","Published":"2016-04-14","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"brt","Version":"1.1.0","Title":"Biological Relevance Testing","Description":"Analyses of large-scale -omics datasets commonly use p-values as the indicators of statistical significance. However, considering p-value alone neglects the importance of effect size (i.e., the mean difference between groups) in determining the biological relevance of a significant difference. Here, we present a novel algorithm for computing a new statistic, the biological relevance testing (BRT) index, in the frequentist hypothesis testing framework to address this problem. ","Published":"2017-03-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BRugs","Version":"0.8-6","Title":"Interface to the 'OpenBUGS' MCMC Software","Description":"Fully-interactive R interface to the 'OpenBUGS' software for Bayesian analysis using MCMC sampling. Runs natively and stably in 32-bit R under Windows. 
Versions running on Linux and on 64-bit R under Windows are in \"beta\" status and less efficient.","Published":"2015-12-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"BSagri","Version":"0.1-8","Title":"Statistical methods for safety assessment in agricultural field\ntrials","Description":"Collection of functions, data sets and code examples \n for evaluations of field trials with the objective of equivalence assessment.","Published":"2013-11-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"bsam","Version":"1.1.1","Title":"Bayesian State-Space Models for Animal Movement","Description":"Tools to fit Bayesian state-space models to animal tracking data. Models are provided for location \n filtering, location filtering and behavioural state estimation, and their hierarchical versions. \n The models are primarily intended for fitting to ARGOS satellite tracking data but options exist to fit \n to other tracking data types. For Global Positioning System data, consider the 'moveHMM' package. \n Simplified Markov Chain Monte Carlo convergence diagnostic plotting is provided but users are encouraged \n to explore tools available in packages such as 'coda' and 'boa'.","Published":"2016-11-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"BSDA","Version":"1.01","Title":"Basic Statistics and Data Analysis","Description":"Data sets for book \"Basic Statistics and Data Analysis\" by\n Larry J. 
Kitchens","Published":"2012-03-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bsearchtools","Version":"0.0.61","Title":"Binary Search Tools","Description":"Exposes the binary search functions of the C++ standard library (std::lower_bound, std::upper_bound) plus other convenience functions, allowing faster lookups on sorted vectors.","Published":"2017-02-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BSGS","Version":"2.0","Title":"Bayesian Sparse Group Selection","Description":"The integration of Bayesian variable and sparse group variable selection approaches for regression models. ","Published":"2015-06-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"BSGW","Version":"0.9.2","Title":"Bayesian Survival Model with Lasso Shrinkage Using Generalized\nWeibull Regression","Description":"Bayesian survival model using Weibull regression on both scale and shape parameters. Dependence of shape parameter on covariates permits deviation from proportional-hazard assumption, leading to dynamic - i.e. non-constant with time - hazard ratios between subjects. Bayesian Lasso shrinkage in the form of two Laplace priors - one for scale and one for shape coefficients - allows for many covariates to be included. Cross-validation helper functions can be used to tune the shrinkage parameters. Markov Chain Monte Carlo (MCMC) sampling using a Gibbs wrapper around Radford Neal's univariate slice sampler (R package MfUSampler) is used for coefficient estimation.","Published":"2016-09-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bshazard","Version":"1.0","Title":"Nonparametric Smoothing of the Hazard Function","Description":"The function estimates the hazard function nonparametrically from a survival object (possibly adjusted for covariates). The smoothed estimate is based on B-splines from the perspective of generalized linear mixed models. 
Left-truncated and right-censored data are allowed.","Published":"2014-02-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"BsMD","Version":"2013.0718","Title":"Bayes Screening and Model Discrimination","Description":"Bayes screening and model discrimination follow-up designs.","Published":"2013-07-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bspec","Version":"1.5","Title":"Bayesian Spectral Inference","Description":"Bayesian inference on the (discrete) power spectrum of time series.","Published":"2015-04-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bsplus","Version":"0.1.0","Title":"Adds Functionality to the R Markdown + Shiny Bootstrap Framework","Description":"The Bootstrap framework lets you add some JavaScript functionality to your web site by\n adding attributes to your HTML tags - Bootstrap takes care of the JavaScript.\n If you are using R Markdown or Shiny, you can\n use these functions to create collapsible sections, accordion panels, modals, tooltips,\n popovers, and an accordion sidebar framework (not described at Bootstrap site).","Published":"2017-01-26","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"bspmma","Version":"0.1-1","Title":"bspmma: Bayesian Semiparametric Models for Meta-Analysis","Description":"Some functions for nonparametric and semiparametric\n Bayesian models for random effects meta-analysis.","Published":"2012-07-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"BSquare","Version":"1.1","Title":"Bayesian Simultaneous Quantile Regression","Description":"This package models the quantile process as a function of\n predictors.","Published":"2013-05-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BSSasymp","Version":"1.2-0","Title":"Asymptotic Covariance Matrices of Some BSS Mixing and Unmixing\nMatrix Estimates","Description":"Functions to compute the asymptotic covariance matrices of mixing and unmixing matrix 
estimates of the following blind source separation (BSS) methods: symmetric and squared symmetric FastICA, regular and adaptive deflation-based FastICA, FOBI, JADE, AMUSE and deflation-based and symmetric SOBI. Functions to estimate these covariances from data are also available. ","Published":"2017-01-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bssn","Version":"0.7","Title":"Birnbaum-Saunders Model Based on Skew-Normal Distribution","Description":"It provides the density, distribution function, quantile function, random number generator, reliability function, failure rate, likelihood function,\n moments and an EM algorithm for maximum likelihood estimation, as well as the empirical quantile and generated envelope for a given sample, all for the three-parameter\n Birnbaum-Saunders model based on the Skew-Normal distribution.\n Additionally, it provides the random number generator for the mixture of Birnbaum-Saunders model based on Skew-Normal distribution.","Published":"2016-03-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bst","Version":"0.3-14","Title":"Gradient Boosting","Description":"Functional gradient descent algorithm for a variety of convex and non-convex loss functions, for both classical and robust regression and classification problems. ","Published":"2016-09-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bsts","Version":"0.7.1","Title":"Bayesian Structural Time Series","Description":"Time series regression using dynamic linear models fit using\n MCMC. 
See Scott and Varian (2014), among many\n other sources.","Published":"2017-05-28","License":"LGPL-2.1 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"btb","Version":"0.1.14","Title":"Beyond the Border","Description":"Kernel density estimation dedicated to urban geography.","Published":"2017-04-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"btergm","Version":"1.9.0","Title":"Temporal Exponential Random Graph Models by Bootstrapped\nPseudolikelihood","Description":"Temporal Exponential Random Graph Models (TERGM) estimated by maximum pseudolikelihood with bootstrapped confidence intervals or Markov Chain Monte Carlo maximum likelihood. Goodness of fit assessment for ERGMs, TERGMs, and SAOMs. Micro-level interpretation of ERGMs and TERGMs.","Published":"2017-03-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"btf","Version":"1.2","Title":"Estimates Univariate Function via Bayesian Trend Filtering","Description":"Trend filtering uses the generalized\n lasso framework to fit an adaptive polynomial of degree k to\n estimate the function f_0 at each input x_i in the model: y_i =\n f_0(x_i) + epsilon_i, for i = 1, ..., n, and epsilon_i\n is sub-Gaussian with E(epsilon_i) = 0. Bayesian trend filtering adapts\n the genlasso framework to a fully Bayesian hierarchical model, estimating\n the penalty parameter lambda within a tractable Gibbs sampler.","Published":"2017-05-31","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"BTLLasso","Version":"0.1-6","Title":"Modelling Heterogeneity in Paired Comparison Data","Description":"Performs 'BTLLasso' (Schauberger and Tutz, 2017: Subject-Specific Modelling of Paired Comparison Data - a Lasso-Type Penalty Approach), a method to include different types of variables in paired\n comparison models and, therefore, to allow for heterogeneity between subjects. 
Variables can be subject-specific, object-specific and subject-object-specific and\n can have an influence on the attractiveness/strength of the objects. Suitable L1 penalty terms are used \n to cluster certain effects and to reduce the complexity of the models.","Published":"2017-05-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BTR","Version":"1.2.4","Title":"Training and Analysing Asynchronous Boolean Models","Description":"Tools for inferring asynchronous Boolean\n models from single-cell expression data.","Published":"2016-09-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"BTSPAS","Version":"2014.0901","Title":"Bayesian Time-Strat. Population Analysis","Description":"BTSPAS provides advanced Bayesian methods to estimate\n\t abundance and run-timing from temporally-stratified\n\t Petersen mark-recapture experiments. Methods include\n\t hierarchical modelling of the capture probabilities\n \t and spline smoothing of the daily run size. This version \n\t uses JAGS to sample from the posterior distribution.","Published":"2014-08-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BTYD","Version":"2.4","Title":"Implementing Buy 'Til You Die Models","Description":"This package contains functions for data preparation, parameter estimation, scoring, and plotting for the BG/BB, BG/NBD and Pareto/NBD models.","Published":"2014-11-07","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"BTYDplus","Version":"1.0.1","Title":"Probabilistic Models for Assessing and Predicting your Customer\nBase","Description":"Provides advanced statistical methods to describe and predict customers'\n purchase behavior in a non-contractual setting. It uses historic transaction records to fit a\n probabilistic model, which then allows one to compute quantities of managerial interest on a cohort-\n as well as on a customer level (Customer Lifetime Value, Customer Equity, P(alive), etc.). 
This\n package complements the BTYD package by providing several additional buy-till-you-die models that\n have been published in the marketing literature but whose implementations are complex and non-trivial.\n These models are: NBD, MBG/NBD, BG/CNBD-k, MBG/CNBD-k, Pareto/NBD (HB), Pareto/NBD (Abe) and Pareto/GGG.","Published":"2016-12-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"BUCSS","Version":"0.0.2","Title":"Bias and Uncertainty Corrected Sample Size","Description":"Implements a method of correcting for publication bias and\n uncertainty when planning sample sizes in a future study from an original study. See Anderson, Kelley, & Maxwell (submitted, revised and resubmitted). ","Published":"2017-04-10","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"bujar","Version":"0.2-3","Title":"Buckley-James Regression for Survival Data with High-Dimensional\nCovariates","Description":"Buckley-James regression for right-censored survival data with high-dimensional covariates. Implementations for survival data include boosting with componentwise linear least squares, componentwise smoothing splines, regression trees and MARS. Other high-dimensional tools include penalized regression for survival data.","Published":"2017-04-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"bulletr","Version":"0.1","Title":"Algorithms for Matching Bullet Lands","Description":"Analyze bullet lands using nonparametric methods. We provide a\n reading routine for x3p files (see for more\n information) and a host of analysis functions designed to assess the\n probability that two bullets were fired from the same gun barrel.","Published":"2017-04-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"bunchr","Version":"1.2.0","Title":"Analyze Bunching in a Kink or Notch Setting","Description":"View and analyze data where bunching is expected. Estimate counter-\n factual distributions. 
For earnings data, estimate the compensated\n elasticity of earnings w.r.t. the net-of-tax rate.","Published":"2017-01-30","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"bundesligR","Version":"0.1.0","Title":"All Final Tables of the Bundesliga","Description":"All final tables of Germany's highest football (soccer!) league, the Bundesliga. Contains data from 1964 to 2016.","Published":"2016-08-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"bupaR","Version":"0.2.0","Title":"Business Process Analytics in R","Description":"Functionalities for process analysis in R. This package implements an S3-class for event log objects, and related handler functions. Imports related packages for subsetting event data, computation of descriptive statistics, handling of Petri Net objects and visualization of process maps. See also packages 'edeaR','processmapR', 'eventdataR' and 'processmonitR'.","Published":"2017-06-19","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"burnr","Version":"0.2.0","Title":"Advanced Fire History Analysis in R","Description":"Basic tools to analyze forest fire history data (e.g. 
FHX2) in R.","Published":"2017-05-30","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"BurStFin","Version":"1.02","Title":"Burns Statistics Financial","Description":"A suite of functions for finance, including the estimation\n\tof variance matrices via a statistical factor model or\n\tLedoit-Wolf shrinkage.","Published":"2014-03-09","License":"Unlimited","snapshot_date":"2017-06-23"} {"Package":"BurStMisc","Version":"1.1","Title":"Burns Statistics Miscellaneous","Description":"Script search, corner, genetic optimization, permutation tests, write expect test.","Published":"2016-08-13","License":"Unlimited","snapshot_date":"2017-06-23"} {"Package":"bursts","Version":"1.0-1","Title":"Markov model for bursty behavior in streams","Description":"An implementation of Jon Kleinberg's burst detection algorithm. Uses an infinite Markov model to detect periods of increased activity in a series of discrete events with known times, and provides a simple visualization of the results.","Published":"2014-02-22","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"BuyseTest","Version":"1.0","Title":"Generalized Pairwise Comparisons","Description":"Implementation of the Generalized Pairwise Comparisons. This test\n enables comparison of two groups of observations in randomized trials (e.g. treated\n vs. control patients) on several prioritized outcomes. Pairwise comparisons\n require consideration of all possible pairs of individuals, one taken from the\n treatment group and the other taken from the control group. The outcomes of the\n two individuals forming a pair are compared. Thresholds of minimal clinically\n significant differences can be defined. It is possible to analyse simultaneously\n several outcomes by prioritizing the variables that capture them. 
The highest\n priority is assigned to the variable considered the most clinically relevant.\n A natural way of handling uninformative or neutral pairs is to consider the\n outcomes in descending order of priority: whenever a pair is uninformative or\n neutral for an outcome of higher priority, the outcomes of lower priority are\n examined. In the case of time-to-event endpoints, four methods to handle censored\n observations are available in this package (Gehan, Peto, Efron, and Peron).","Published":"2016-08-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"bvarsv","Version":"1.1","Title":"Bayesian Analysis of a Vector Autoregressive Model with\nStochastic Volatility and Time-Varying Parameters","Description":"R/C++ implementation of the model proposed by Primiceri (\"Time Varying Structural Vector Autoregressions and Monetary Policy\", Review of Economic Studies, 2005), with functionality for computing posterior predictive distributions and impulse responses.","Published":"2015-11-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bvenn","Version":"0.1","Title":"A Simple alternative to proportional Venn diagrams","Description":"This package implements a simple alternative to the\n traditional Venn diagram. It depicts each overlap as a separate\n bubble with area proportional to the overlap size. 
Relation of\n the bubbles to input sets is shown by their arrangement.","Published":"2012-08-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bvls","Version":"1.4","Title":"The Stark-Parker algorithm for bounded-variable least squares","Description":"An R interface to the Stark-Parker implementation of an\n algorithm for bounded-variable least squares.","Published":"2013-12-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"bvpSolve","Version":"1.3.3","Title":"Solvers for Boundary Value Problems of Differential Equations","Description":"Functions that solve boundary value problems ('BVP') of systems of ordinary\n differential equations ('ODE') and differential algebraic equations ('DAE').\n The functions provide an interface to the FORTRAN functions\n 'twpbvpC', 'colnew/colsys', and an R-implementation of the shooting method.","Published":"2016-12-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"BVS","Version":"4.12.1","Title":"Bayesian Variant Selection: Bayesian Model Uncertainty\nTechniques for Genetic Association Studies","Description":"The functions in this package focus on analyzing\n case-control association studies involving a group of genetic\n variants. In particular, we are interested in modeling the\n outcome variable as a function of a multivariate genetic\n profile using Bayesian model uncertainty and variable selection\n techniques. The package incorporates functions to analyze data\n sets involving common variants as well as extensions to model\n rare variants via the Bayesian Risk Index (BRI) as well as\n haplotypes. 
Finally, the package also allows the incorporation\n of external biological information to inform the marginal\n inclusion probabilities via the iBMU.","Published":"2012-08-09","License":"Unlimited","snapshot_date":"2017-06-23"} {"Package":"bWGR","Version":"1.4","Title":"Bagging Whole-Genome Regression","Description":"Whole-genome regression methods in a Bayesian framework fitted via EM\n or Gibbs sampling, with optional sampling techniques and kernel term.","Published":"2017-03-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"BWStest","Version":"0.2.1","Title":"Baumgartner Weiss Schindler Test of Equal Distributions","Description":"Performs the 'Baumgartner-Weiss-Schindler' two-sample test of equal\n probability distributions.","Published":"2017-03-21","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"bytescircle","Version":"1.1","Title":"Statistics About Bytes Contained in a File as a Circle Plot","Description":"Shows statistics about bytes contained in a file \n as a circle graph of deviations from mean in sigma increments. \n The function can be useful for statistically analyzing the content of files \n in a glimpse: text files are shown as a green centered crown, compressed \n and encrypted files should be shown as equally distributed variations with \n a very low CV (sigma/mean), and other types of files can be classified between \n these two categories depending on their text vs binary content, which can be \n useful to quickly determine how information is stored inside them (databases, \n multimedia files, etc). 
","Published":"2017-01-19","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"c060","Version":"0.2-4","Title":"Extended Inference for Lasso and Elastic-Net Regularized Cox and\nGeneralized Linear Models","Description":"c060 provides additional functions to perform stability selection, model validation and parameter tuning for glmnet models.","Published":"2014-12-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"c212","Version":"0.93","Title":"Methods for Detecting Safety Signals in Clinical Trials Using\nBody-Systems (System Organ Classes)","Description":"Methods for detecting safety signals in clinical trials using groupings of adverse events by body-system or system organ class. The package title c212 is in reference to the original Engineering and Physical Sciences Research Council (UK) funded project which was named CASE 2/12.","Published":"2017-04-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"c3net","Version":"1.1.1","Title":"Inferring large-scale gene networks with C3NET","Description":"This package allows inferring gene regulatory networks\n with direct physical interactions from microarray expression\n data using C3NET.","Published":"2012-07-23","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"C50","Version":"0.1.0-24","Title":"C5.0 Decision Trees and Rule-Based Models","Description":"C5.0 decision trees and rule-based models for pattern recognition.","Published":"2015-03-09","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ca","Version":"0.70","Title":"Simple, Multiple and Joint Correspondence Analysis","Description":"Computation and visualization of simple, multiple and joint correspondence analysis.","Published":"2016-12-14","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"cablecuttr","Version":"0.1.1","Title":"A CanIStream.It API Wrapper","Description":"A wrapper for the 'CanIStream.It' API for searching across the\n most popular streaming, rental, and purchase services to 
find where a\n movie is available. See for more information. ","Published":"2017-01-13","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"cabootcrs","Version":"1.0","Title":"Bootstrap Confidence Regions for Correspondence Analysis","Description":"Performs correspondence analysis on a two-way contingency\n table and produces bootstrap-based elliptical confidence\n regions around the projected coordinates for the category\n points. Includes routines to plot the results in a variety of\n styles. Also reports the standard numerical output for\n correspondence analysis.","Published":"2013-06-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"cacIRT","Version":"1.4","Title":"Classification Accuracy and Consistency under Item Response\nTheory","Description":"Computes classification accuracy and consistency indices under Item Response Theory. Implements the total score IRT-based methods in Lee, Hanson & Brennan (2002) and Lee (2010), the IRT-based methods in Rudner (2001, 2005), and the total score nonparametric methods in Lathrop & Cheng (2014). For dichotomous and polytomous tests.","Published":"2015-08-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CaDENCE","Version":"1.2.4","Title":"Conditional Density Estimation Network Construction and\nEvaluation","Description":"Parameters of a user-specified probability distribution are modelled by a multi-layer perceptron artificial neural network. This framework can be used to implement probabilistic nonlinear models including mixture density networks, heteroscedastic regression models, zero-inflated models, and the like.","Published":"2017-03-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"CADFtest","Version":"0.3-3","Title":"A Package to Perform Covariate Augmented Dickey-Fuller Unit Root\nTests","Description":"Hansen's (1995) Covariate-Augmented\n Dickey-Fuller (CADF) test. The only required argument is y, the\n Tx1 time series to be tested. 
If no stationary covariate X is\n passed to the procedure, then an ordinary ADF test is\n performed. The p-values of the test are computed using the\n procedure illustrated in Lupi (2009).","Published":"2017-06-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CADStat","Version":"3.0.8","Title":"Provides a GUI to Several Statistical Methods","Description":"Using Java GUI for R (JGR), CADStat provides a user\n interface for several statistical methods -\n scatterplot, boxplot, linear regression, generalized linear\n regression, quantile regression, conditional probability\n calculations, and regression trees.","Published":"2017-05-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"caesar","Version":"0.1.0","Title":"Encrypts and Decrypts Strings","Description":"Encrypts and decrypts strings using either the Caesar cipher or a\n pseudorandom number generation (using set.seed()) method.","Published":"2017-01-18","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"cAIC4","Version":"0.2","Title":"Conditional Akaike information criterion for lme4","Description":"Provides functions for the estimation of the conditional Akaike \n\t\t\t information in generalized mixed-effects models fitted with (g)lmer \n\t\t\t from lme4.","Published":"2014-08-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Cairo","Version":"1.5-9","Title":"R graphics device using cairo graphics library for creating\nhigh-quality bitmap (PNG, JPEG, TIFF), vector (PDF, SVG,\nPostScript) and display (X11 and Win32) output","Description":"Cairo graphics device that can be used to create high-quality vector (PDF, PostScript and SVG) and bitmap output (PNG,JPEG,TIFF), and high-quality rendering in displays (X11 and Win32). Since it uses the same back-end for all output, copying across formats is WYSIWYG. Files are created without the dependence on X11 or other external programs. 
This device supports alpha channel (semi-transparent drawing) and resulting images can contain transparent and semi-transparent regions. It is ideal for use in server environments (file output) and as a replacement for other devices that don't have Cairo's capabilities such as alpha support or anti-aliasing. Backends are modular such that any subset of backends is supported.","Published":"2015-09-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"cairoDevice","Version":"2.24","Title":"Embeddable Cairo Graphics Device Driver","Description":"This device uses Cairo and GTK to draw to the screen,\n file (png, svg, pdf, and ps) or memory (arbitrary GdkDrawable\n or Cairo context). The screen device may be embedded into RGtk2\n interfaces and supports all interactive features of other graphics\n devices, including getGraphicsEvent().","Published":"2017-01-06","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"calACS","Version":"2.2.2","Title":"Calculations for All Common Subsequences","Description":"Implements several string comparison algorithms, including calACS (count all common subsequences), lenACS (calculate the lengths of all common subsequences), and lenLCS (calculate the length of the longest common subsequence). Some algorithms differentiate between the more strict definition of subsequence, where a common subsequence cannot be separated by any other items, from its looser counterpart, where a common subsequence can be interrupted by other items. This difference is shown in the suffix of the algorithm (-Strict vs -Loose). For example, q-w is a common subsequence of q-w-e-r and q-e-w-r on the looser definition, but not on the more strict definition. calACSLoose Algorithm from Wang, H. All common subsequences (2007) IJCAI International Joint Conference on Artificial Intelligence, pp. 
635-640.","Published":"2016-03-31","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"Calculator.LR.FNs","Version":"1.2","Title":"Calculator for LR Fuzzy Numbers","Description":"Arithmetic operations (scalar multiplication, addition, subtraction, multiplication and division) of LR fuzzy numbers, which are based on the extension principle, have a complicated form for use in fuzzy statistics, fuzzy mathematics, machine learning and fuzzy data analysis. The Calculator for LR Fuzzy Numbers package helps applied users obtain a simple, closed form for some complicated operators based on LR fuzzy numbers, and the user can easily draw the membership function of the obtained result with this package. ","Published":"2017-04-03","License":"LGPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"CALF","Version":"0.2.0","Title":"Coarse Approximation Linear Function","Description":"Contains greedy algorithms for coarse approximation linear\n functions.","Published":"2017-05-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"CALIBERrfimpute","Version":"0.1-6","Title":"Multiple imputation using MICE and Random Forest","Description":"Functions to impute using Random Forest under Full Conditional Specifications (Multivariate Imputation by Chained Equations). The CALIBER programme is funded by the Wellcome Trust (086091/Z/08/Z) and the National Institute for Health Research (NIHR) under its Programme Grants for Applied Research programme (RP-PG-0407-10314). The author is supported by a Wellcome Trust Clinical Research Training Fellowship (0938/30/Z/10/Z).","Published":"2014-05-07","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"calibrar","Version":"0.2.0","Title":"Automated Parameter Estimation for Complex (Ecological) Models","Description":"Automated parameter estimation for complex (ecological) models in R. \n This package allows the parameter estimation or calibration of complex models, \n including stochastic ones. 
It is a generic tool that can be used for fitting \n any type of models, especially those with non-differentiable objective functions. \n It supports multiple phases and constrained optimization. \n It implements maximum likelihood estimation methods and automated construction \n of the objective function from simulated model outputs. \n See for more details.","Published":"2016-02-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"calibrate","Version":"1.7.2","Title":"Calibration of Scatterplot and Biplot Axes","Description":"Package for drawing calibrated scales with tick marks on (non-orthogonal) \n variable vectors in scatterplots and biplots. ","Published":"2013-09-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"CalibrateSSB","Version":"1.0","Title":"Weighting and Estimation for Panel Data with Non-Response","Description":"Function to calculate weights and estimates for panel data with non-response.","Published":"2016-04-28","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"calibrator","Version":"1.2-6","Title":"Bayesian calibration of complex computer codes","Description":"Performs Bayesian calibration of computer models as per\n Kennedy and O'Hagan 2001. The package includes routines to find the\n hyperparameters and parameters; see the help page for stage1() for a\n worked example using the toy dataset. 
A tutorial is provided in the\n calex.Rnw vignette; and a suite of especially simple one dimensional\n examples appears in inst/doc/one.dim/.","Published":"2013-12-09","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"callr","Version":"1.0.0","Title":"Call R from R","Description":"It is sometimes useful to perform a computation in a\n separate R process, without affecting the current R process at all.\n This package does exactly that.","Published":"2016-06-18","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"calmate","Version":"0.12.1","Title":"Improved Allele-Specific Copy Number of SNP Microarrays for\nDownstream Segmentation","Description":"A multi-array post-processing method of allele-specific copy-number estimates (ASCNs).","Published":"2015-10-27","License":"LGPL (>= 2.1)","snapshot_date":"2017-06-23"} {"Package":"CAM","Version":"1.0","Title":"Causal Additive Model (CAM)","Description":"The code takes an n x p data matrix and fits a Causal Additive Model (CAM) for estimating the causal structure of the underlying process. The output is a p x p adjacency matrix (a one in entry (i,j) indicates an edge from i to j). Details of the algorithm can be found in: P. Bühlmann, J. Peters, J. Ernest: \"CAM: Causal Additive Models, high-dimensional order search and penalized regression\", Annals of Statistics 42:2526-2556, 2014.","Published":"2015-03-05","License":"FreeBSD","snapshot_date":"2017-06-23"} {"Package":"CAMAN","Version":"0.74","Title":"Finite Mixture Models and Meta-Analysis Tools - Based on C.A.MAN","Description":"Tools for the analysis of finite semiparametric mixtures.\n These are useful when data is heterogeneous, e.g. in\n pharmacokinetics or meta-analysis. 
The NPMLE and VEM algorithms\n (flexible support size) and EM algorithms (fixed support size)\n are provided for univariate and bivariate data.","Published":"2016-09-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"camel","Version":"0.2.0","Title":"Calibrated Machine Learning","Description":"The package \"camel\" provides the implementation of a family of high-dimensional calibrated machine learning tools, including (1) LAD, SQRT Lasso and Calibrated Dantzig Selector for estimating sparse linear models; (2) Calibrated Multivariate Regression for estimating sparse multivariate linear models; (3) Tiger, Calibrated Clime for estimating sparse Gaussian graphical models. We adopt the combination of the dual smoothing and monotone fast iterative soft-thresholding algorithm (MFISTA). The computation is memory-optimized using the sparse matrix output, and accelerated by the path following and active set tricks.","Published":"2013-09-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"CampaR1","Version":"0.8.4","Title":"Trajectory Analysis","Description":"Analysis algorithms extracted from the original 'campari' software package.\n They consist of a kinetic annotation of the trajectory based on the minimum spanning tree\n constructed on the distances between snapshots. The fast algorithm is implemented on\n the basis of a modified version of the birch algorithm, while the slow one is based on a\n simple leader clustering. 
For more information please visit the original documentation\n on .","Published":"2017-01-10","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"camsRad","Version":"0.3.0","Title":"Client for CAMS Radiation Service","Description":"Copernicus Atmosphere Monitoring Service (CAMS) radiations service \n provides time series of global, direct, and diffuse irradiations on horizontal\n surface, and direct irradiation on normal plane for the actual weather \n conditions as well as for clear-sky conditions.\n The geographical coverage is the field-of-view of the Meteosat satellite,\n roughly speaking Europe, Africa, Atlantic Ocean, Middle East. The time coverage\n of data is from 2004-02-01 up to 2 days ago. Data are available with a time step\n ranging from 15 min to 1 month. For license terms and to create an account,\n please see . ","Published":"2016-11-30","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"camtrapR","Version":"0.99.8","Title":"Camera Trap Data Management and Preparation of Occupancy and\nSpatial Capture-Recapture Analyses","Description":"Management of and data extraction from camera trap photographs in wildlife studies. The package provides a workflow for storing and sorting camera trap photographs, computes record databases and detection/non-detection matrices for occupancy and spatial capture-recapture analyses with great flexibility. 
In addition, it provides simple mapping functions (number of species, number of independent species detections by station) and can visualise activity data.","Published":"2017-03-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cancerGI","Version":"1.0.0","Title":"Analyses of Cancer Gene Interaction","Description":"Functions to perform the following analyses: i) inferring epistasis from RNAi double knockdown data; ii) identifying gene pairs of multiple mutation patterns; iii) assessing association between gene pairs and survival; and iv) calculating the smallworldness of a graph (e.g., a gene interaction network). Data and analyses are described in Wang, X., Fu, A. Q., McNerney, M. and White, K. P. (2014). Widespread genetic epistasis among breast cancer genes. Nature Communications. 5 4828. .","Published":"2016-04-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cancerTiming","Version":"3.1.8","Title":"Estimation of Temporal Ordering of Cancer Abnormalities","Description":"Timing copy number changes using estimates of mutational allele frequency from resequencing of tumor samples.","Published":"2016-04-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"candisc","Version":"0.7-2","Title":"Visualizing Generalized Canonical Discriminant and Canonical\nCorrelation Analysis","Description":"Functions for computing and visualizing \n\tgeneralized canonical discriminant analyses and canonical correlation analysis\n\tfor a multivariate linear model.\n\tTraditional canonical discriminant analysis is restricted to a one-way 'MANOVA'\n\tdesign and is equivalent to canonical correlation analysis between a set of quantitative\n\tresponse variables and a set of dummy variables coded from the factor variable.\n\tThe 'candisc' package generalizes this to higher-way 'MANOVA' designs\n\tfor all factors in a multivariate linear model,\n\tcomputing canonical scores and vectors for each term. 
The graphic functions provide low-rank (1D, 2D, 3D) \n\tvisualizations of terms in an 'mlm' via the 'plot.candisc' and 'heplot.candisc' methods. Related plots are\n\tnow provided for canonical correlation analysis when all predictors are quantitative.","Published":"2016-11-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Canopy","Version":"1.2.0","Title":"Accessing Intra-Tumor Heterogeneity and Tracking Longitudinal\nand Spatial Clonal Evolutionary History by Next-Generation\nSequencing","Description":"A statistical framework and computational procedure for identifying\n the sub-populations within a tumor, determining the mutation profiles of each \n subpopulation, and inferring the tumor's phylogenetic history. The input are \n variant allele frequencies (VAFs) of somatic single nucleotide alterations \n (SNAs) along with allele-specific coverage ratios between the tumor and matched\n normal sample for somatic copy number alterations (CNAs). These quantities can\n be directly taken from the output of existing software. Canopy provides a \n general mathematical framework for pooling data across samples and sites to \n infer the underlying parameters. For SNAs that fall within CNA regions, Canopy\n infers their temporal ordering and resolves their phase. When there are \n multiple evolutionary configurations consistent with the data, Canopy outputs \n all configurations along with their confidence assessment.","Published":"2017-04-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"canprot","Version":"0.1.0","Title":"Chemical Composition of Differential Protein Expression","Description":"Datasets are collected here for differentially (up- and down-)\n expressed proteins identified in proteomic studies of cancer and in cell\n culture experiments. Tables of amino acid compositions of proteins are\n used for calculations of chemical composition, projected into selected\n basis species. 
Plotting functions are used to visualize the compositional\n differences and thermodynamic potentials for proteomic transformations.","Published":"2017-06-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"CANSIM2R","Version":"0.11","Title":"Directly Extracts Complete CANSIM Data Tables","Description":"Extract CANSIM (Statistics Canada) tables and transform them into readily usable data in panel (wide) format. It can also extract more than one table at a time and produce the resulting merge by time period and geographical region.","Published":"2015-09-07","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"canvasXpress","Version":"0.16.2","Title":"Visualization Package for CanvasXpress in R","Description":"Enables creation of visualizations using the CanvasXpress framework\n in R. CanvasXpress is a standalone JavaScript library for reproducible research\n with complete tracking of data and end-user modifications stored in a single\n PNG image that can be played back. See for more\n information.","Published":"2017-06-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"cape","Version":"2.0.2","Title":"Combined Analysis of Pleiotropy and Epistasis","Description":"Combines complementary information across multiple related\n phenotypes to infer directed epistatic interactions between genetic markers.\n This analysis can be applied to a variety of engineered and natural populations.","Published":"2016-06-09","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"caper","Version":"0.5.2","Title":"Comparative Analyses of Phylogenetics and Evolution in R","Description":"Functions for performing phylogenetic comparative analyses.","Published":"2013-11-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"capm","Version":"0.11.0","Title":"Companion Animal Population Management","Description":"Quantitative analysis to support companion animal population\n management. 
Some functions assist survey sampling tasks (calculate sample \n size for simple and complex designs, select sampling units and estimate \n population parameters) while others assist the modelling of population \n dynamics. For sampling methods see: Levy PS & Lemeshow S. (2013), \n ISBN-10: 0470040076; Lumley (2010), ISBN: 978-0-470-28430-8. For \n modelling of population dynamics see: Baquero et al (2016) \n ; Baquero et al (2016), \n ISSN 1679-9216; Amaku et al (2010) \n .","Published":"2017-05-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"capn","Version":"1.0.0","Title":"Capital Asset Pricing for Nature","Description":"Implements approximation methods for natural capital asset prices suggested by Fenichel and Abbott (2014) in Journal of the Associations of Environmental and Resource Economists (JAERE), Fenichel et al. (2016) in Proceedings of the National Academy of Sciences (PNAS), and Yun et al. (2017) in PNAS (accepted), and their extensions: creating Chebyshev polynomial nodes and grids, calculating basis of Chebyshev polynomials, approximation and their simulations for: V-approximation (single and multiple stocks, PNAS), P-approximation (single stock, PNAS), and Pdot-approximation (single stock, JAERE). Development of this package was generously supported by the Knobloch Family Foundation.","Published":"2017-06-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"captioner","Version":"2.2.3","Title":"Numbers Figures and Creates Simple Captions","Description":"Provides a method for automatically numbering figures,\n tables, or other objects. 
Captions can be displayed in full, or as citations.\n This is especially useful for adding figures and tables to R markdown\n documents without having to number them manually.","Published":"2015-07-16","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"captr","Version":"0.3.0","Title":"Client for the Captricity API","Description":"Get text from images of text using Captricity Optical Character\n Recognition (OCR) API. Captricity allows you to get text from handwritten\n forms --- think surveys --- and other structured paper documents. And it can\n output data in the form of a delimited file keeping field information intact. For more\n information, read .","Published":"2017-04-15","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"capushe","Version":"1.1.1","Title":"CAlibrating Penalities Using Slope HEuristics","Description":"Calibration of penalized criteria for model selection. The calibration methods available are based on the slope heuristics.","Published":"2016-04-19","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"capwire","Version":"1.1.4","Title":"Estimates population size from non-invasive sampling","Description":"Fits models from Miller et al. 2005 to estimate population\n sizes from natural populations. Several models are implemented.\n Package also includes functions to perform a likelihood ratio\n test to choose between models, perform parametric bootstrapping\n to obtain confidence intervals and multiple functions to\n simulate data.","Published":"2012-08-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"car","Version":"2.1-4","Title":"Companion to Applied Regression","Description":"\n Functions and Datasets to Accompany J. Fox and S. 
Weisberg, \n An R Companion to Applied Regression, Second Edition, Sage, 2011.","Published":"2016-12-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CARBayes","Version":"5.0","Title":"Spatial Generalised Linear Mixed Models for Areal Unit Data","Description":"Implements a class of univariate and multivariate spatial generalised linear mixed models for areal unit data, with inference in a Bayesian setting using Markov chain Monte Carlo (MCMC) simulation. The response variable can be binomial, Gaussian or Poisson, and spatial autocorrelation is modelled by a set of random effects that are assigned a conditional autoregressive (CAR) prior distribution. A number of different models are available for univariate spatial data, including models with no random effects as well as random effects modelled by different types of CAR prior. Additionally, a multivariate CAR (MCAR) model for multivariate spatial data is available, as is a two-level hierarchical model for individuals within areas. Full details are given in the vignette accompanying this package. The initial creation of this package was supported by the Economic and Social Research Council (ESRC) grant RES-000-22-4256, and on-going development has / is supported by the Engineering and Physical Science Research Council (EPSRC) grant EP/J017442/1, ESRC grant ES/K006460/1, and Innovate UK / Natural Environment Research Council (NERC) grant NE/N007352/1. ","Published":"2017-06-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CARBayesdata","Version":"2.0","Title":"Data Used in the Vignettes Accompanying the CARBayes and\nCARBayesST Packages","Description":"Spatio-temporal data from Scotland used in the vignettes accompanying the CARBayes (spatial modelling) and CARBayesST (spatio-temporal modelling) packages. 
For the CARBayes vignette the data include the Scottish lip cancer data and property price and respiratory hospitalisation data from the Greater Glasgow and Clyde health board. For the CARBayesST vignette the data include spatio-temporal data on property sales and respiratory hospitalisation and air pollution from the Greater Glasgow and Clyde health board. ","Published":"2016-06-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CARBayesST","Version":"2.5","Title":"Spatio-Temporal Generalised Linear Mixed Models for Areal Unit\nData","Description":"Implements a class of spatio-temporal generalised linear mixed models for areal unit data, with inference in a Bayesian setting using Markov chain Monte Carlo (MCMC) simulation. The response variable can be binomial, Gaussian or Poisson, but for some models only the binomial and Poisson data likelihoods are available. The spatio-temporal autocorrelation is modelled by random effects, which are assigned conditional autoregressive (CAR) style prior distributions. A number of different random effects structures are available, and full details are given in the vignette accompanying this package and the references in the help files. The creation of this package was supported by the Engineering and Physical Sciences Research Council (EPSRC) grant EP/J017442/1 and the Medical Research Council (MRC) grant MR/L022184/1.","Published":"2017-03-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"carcass","Version":"1.6","Title":"Estimation of the Number of Fatalities from Carcass Searches","Description":"The number of bird or bat fatalities from collisions with buildings, towers or wind energy turbines can be estimated based on carcass searches and experimentally assessed carcass persistence times and searcher efficiency. Functions for estimating the probability that a bird or bat that died is found by a searcher are provided. 
Further functions calculate the posterior distribution of the number of fatalities based on the number of carcasses found and the estimated detection probability.","Published":"2016-03-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"cardidates","Version":"0.4.7","Title":"Identification of Cardinal Dates in Ecological Time Series","Description":"Identification of cardinal dates\n (begin, time of maximum, end of mass developments)\n in ecological time series using fitted Weibull functions.","Published":"2015-09-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cardioModel","Version":"1.4","Title":"Cardiovascular Safety Exposure-Response Modeling in Early-Phase\nClinical Studies","Description":"Includes over 100 mixed-effects model structures describing the relationship between drug concentration and QT interval, heart rate/pulse rate or blood pressure. Given an exposure-response dataset, the tool fits each model structure to the observed data.","Published":"2016-04-04","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"care","Version":"1.1.10","Title":"High-Dimensional Regression and CAR Score Variable Selection","Description":"Implements the regression approach \n of Zuber and Strimmer (2011) \"High-dimensional regression and variable \n selection using CAR scores\" SAGMB 10: 34, .\n CAR scores measure the correlation between the response and the \n Mahalanobis-decorrelated predictors. The squared CAR score is a \n natural measure of variable importance and provides a canonical \n ordering of variables. This package provides functions for estimating \n CAR scores, for variable selection using CAR scores, and for estimating \n corresponding regression coefficients. 
Both shrinkage as well as \n empirical estimators are available.","Published":"2017-03-29","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"CARE1","Version":"1.1.0","Title":"Statistical package for population size estimation in\ncapture-recapture models","Description":"The R package CARE1, the first part of the program CARE\n (Capture-Recapture) in\n http://chao.stat.nthu.edu.tw/softwareCE.html, can be used to\n analyze epidemiological data via sample coverage approach (Chao\n et al. 2001a). Based on the input of records from several\n incomplete lists (or samples) of individuals, the R package\n CARE1 provides output of population size estimate and related\n statistics.","Published":"2012-10-24","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"caret","Version":"6.0-76","Title":"Classification and Regression Training","Description":"Misc functions for training and plotting classification and\n regression models.","Published":"2017-04-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"caretEnsemble","Version":"2.0.0","Title":"Ensembles of Caret Models","Description":"Functions for creating ensembles of caret models: caretList\n and caretStack. caretList is a convenience function for fitting multiple\n caret::train models to the same dataset. caretStack will make linear or\n non-linear combinations of these models, using a caret::train model as a\n meta-model, and caretEnsemble will make a robust linear combination of\n models using a glm.","Published":"2016-02-07","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"caribou","Version":"1.1","Title":"Estimation of caribou abundance based on large scale\naggregations monitored by radio telemetry","Description":"This is a package for estimating the population size of\n migratory caribou herds based on large scale aggregations\n monitored by radio telemetry. It implements the methodology\n found in the article by Rivest et al. 
(1998) about caribou\n abundance estimation. It also includes a function based on the\n Lincoln-Petersen Index as applied to radio telemetry data by\n White and Garrott (1990).","Published":"2012-06-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"CarletonStats","Version":"1.3","Title":"Functions for Statistics Classes at Carleton College","Description":"Includes commands for bootstrapping and permutation tests, a command for creating grouped bar plots, and a demo of the quantile-normal plot for data drawn from different distributions.","Published":"2016-07-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"CARLIT","Version":"1.0","Title":"Ecological Quality Ratios Calculation and Plot","Description":"Functions to calculate and plot ecological quality ratios (EQR) as specified by Ballesteros et al. 2007.","Published":"2015-03-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"caroline","Version":"0.7.6","Title":"A Collection of Database, Data Structure, Visualization, and\nUtility Functions for R","Description":"The caroline R library contains dozens of functions useful\n for: database migration (dbWriteTable2), database style joins &\n aggregation (nerge, groupBy & bestBy), data structure\n conversion (nv, tab2df), legend table making (sstable &\n leghead), plot annotation (labsegs & mvlabs), data\n visualization (violins, pies & raPlot), character string\n manipulation (m & pad), file I/O (write.delim), batch scripting\n and more. 
The package's greatest\n contributions lie in the database style merge, aggregation and\n interface functions as well as in its extensive use and\n propagation of row, column and vector names in most functions.","Published":"2013-10-08","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"carpenter","Version":"0.2.1","Title":"Build Common Tables of Summary Statistics for Reports","Description":"Mainly used to build tables that are commonly presented for\n bio-medical/health research, such as basic characteristic tables or\n descriptive statistics.","Published":"2017-05-22","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"caRpools","Version":"0.83","Title":"CRISPR AnalyzeR for Pooled CRISPR Screens","Description":"CRISPR-Analyzer for pooled CRISPR screens (caRpools) provides an end-to-end analysis of CRISPR screens including quality control, hit candidate analysis, visualization and automated report generation using R markdown. Needs MAGeCK (http://sourceforge.net/p/mageck/wiki/Home/), bowtie2 for all functions. CRISPR (clustered regularly interspaced short palindromic repeats) is a method to perform genome editing. See for more information on\n CRISPR.","Published":"2015-12-06","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"CARrampsOcl","Version":"0.1.4","Title":"Reparameterized and marginalized posterior sampling for\nconditional autoregressive models, OpenCL implementation","Description":"This package fits Bayesian conditional autoregressive models for spatial and spatiotemporal data on a lattice. 
It uses OpenCL kernels running on GPUs to perform rejection sampling to obtain independent samples from the joint posterior distribution of model parameters.","Published":"2013-10-25","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"cartogram","Version":"0.0.2","Title":"Create Cartograms with R","Description":"Construct continuous and non-contiguous area cartograms.","Published":"2016-09-28","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"cartography","Version":"1.4.2","Title":"Thematic Cartography","Description":"Create and integrate maps in your R workflow. This package allows\n various cartographic representations such as proportional symbols, choropleth,\n typology, flows or discontinuities. In addition, it also proposes some useful\n features like cartographic palettes, layout (scale, north arrow, title...), labels,\n legends or access to cartographic API to ease the graphic presentation of maps.","Published":"2017-03-29","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"carx","Version":"0.6.2","Title":"Censored Autoregressive Model with Exogenous Covariates","Description":"A censored time series class is designed. An estimation procedure\n is implemented to estimate the Censored AutoRegressive time series with\n eXogenous covariates (CARX), assuming normality of the innovations. Some other\n functions that might be useful are also included.","Published":"2016-03-09","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"caschrono","Version":"2.0","Title":"Séries Temporelles Avec R","Description":"Functions, data sets and exercises solutions for the book 'Séries Temporelles Avec R' (Yves Aragon, edp sciences, 2016). 
For all chapters, a vignette is available with some additional material and exercise solutions.","Published":"2016-08-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"casebase","Version":"0.1.0","Title":"Fitting Flexible Smooth-in-Time Hazards and Risk Functions via\nLogistic and Multinomial Regression","Description":"Implements the case-base sampling approach of Hanley and Miettinen (2009) , \n Saarela and Arjas (2015) , and Saarela (2015) , for fitting flexible hazard \n regression models to survival data with a single event type or multiple competing causes via logistic and multinomial regression. \n From the fitted hazard function, cumulative incidence, risk functions of time, treatment and profile \n can be derived. This approach accommodates any log-linear hazard function of prognostic time, treatment, \n and covariates, and readily allows for non-proportionality. We also provide a plot method for visualizing \n incidence density via population time plots.","Published":"2017-04-28","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"caseMatch","Version":"1.0.7","Title":"Identify Similar Cases for Qualitative Case Studies","Description":"Allows users to identify similar cases for qualitative case studies using statistical matching methods.","Published":"2017-01-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"castor","Version":"1.1","Title":"Efficient Comparative Phylogenetics on Large Trees","Description":"Efficient tree manipulation functions including pruning, rerooting, calculation of most-recent common ancestors, calculating distances from the tree root and calculating pairwise distance matrices. Calculation of phylogenetic signal and mean trait depth (trait conservatism). Ancestral state reconstruction and hidden character prediction of discrete characters, using Maximum Likelihood and Maximum Parsimony methods. 
Simulating and fitting models of trait evolution, and generating random trees using birth-death models.","Published":"2017-06-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cat","Version":"0.0-6.5","Title":"Analysis of categorical-variable datasets with missing values","Description":"Analysis of categorical-variable datasets with missing values","Published":"2012-10-30","License":"file LICENSE","snapshot_date":"2017-06-23"} {"Package":"catdap","Version":"1.2.4","Title":"Categorical Data Analysis Program Package","Description":"Categorical data analysis program package.","Published":"2016-09-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"catdata","Version":"1.2.1","Title":"Categorical Data","Description":"This R-package contains examples from the book \"Regression for Categorical Data\", Tutz 2011, Cambridge University Press. The names of the examples refer to the chapter and the data set that is used. ","Published":"2014-11-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"CatDyn","Version":"1.1-0","Title":"Fishery Stock Assessment by Generalized Depletion Models","Description":"Based on fishery Catch Dynamics instead of fish Population Dynamics (hence CatDyn) and using high-frequency or medium-frequency catch in biomass or numbers, fishing nominal effort, and mean fish body weight by time step, from one or two fishing fleets, estimate stock abundance, natural mortality rate, and fishing operational parameters. 
It includes methods for data organization, standard exploratory and analytical plots, and predictions, for 77 types of models of increasing complexity and 56 likelihood models for the data.","Published":"2015-05-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cate","Version":"1.0.4","Title":"High Dimensional Factor Analysis and Confounder Adjusted Testing\nand Estimation","Description":"Provides several methods for factor analysis in high dimension (both n,p >> 1) and methods to adjust for possible confounders in multiple hypothesis testing.","Published":"2015-10-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"catenary","Version":"1.1.1","Title":"Fits a Catenary to Given Points","Description":"Gives methods to create a catenary object and then plot it and get\n properties of it. Can construct from parameters or endpoints. Can also fit a\n catenary to data.","Published":"2015-11-18","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"CatEncoders","Version":"0.1.1","Title":"Encoders for Categorical Variables","Description":"Contains some commonly used categorical variable encoders, such as 'LabelEncoder' and 'OneHotEncoder'. 
Inspired by the encoders implemented in Python 'sklearn.preprocessing' package (see ).","Published":"2017-03-08","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"CateSelection","Version":"1.0","Title":"Categorical Variable Selection Methods","Description":"A multi-factor dimensionality reduction based forward selection method for genetic association mapping.","Published":"2014-10-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cati","Version":"0.99.1","Title":"Community Assembly by Traits: Individuals and Beyond","Description":"Detect and quantify community assembly processes using trait values of individuals or populations, the T-statistics and other metrics, and dedicated null models.","Published":"2016-03-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"catIrt","Version":"0.5-0","Title":"An R Package for Simulating IRT-Based Computerized Adaptive\nTests","Description":"Functions designed to simulate data that conform to basic\n unidimensional IRT models (for now 3-parameter binary response models\n and graded response models) along with Post-Hoc CAT simulations of\n those models with various item selection methods, ability estimation\n methods, and termination criteria.","Published":"2014-10-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CATkit","Version":"3.0.0.2","Title":"Chronomics Analysis Toolkit (CAT): Analyze Periodicity","Description":"Performs analysis of sinusoidal rhythms in time series data: actogram, smoothing, autocorrelation, crosscorrelation, several flavors of cosinor. 
","Published":"2017-02-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"catlearn","Version":"0.4","Title":"Formal Modeling for Psychology","Description":"Formal psychological models, independently-replicated data sets against which to test them, and simulation archives.","Published":"2017-02-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"catnet","Version":"1.15.0","Title":"Categorical Bayesian Network Inference","Description":"Structure learning and parameter estimation of discrete Bayesian networks using likelihood-based criteria. Exhaustive search for fixed node orders and stochastic search of optimal orders via simulated annealing algorithm are implemented. ","Published":"2016-06-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"caTools","Version":"1.17.1","Title":"Tools: moving window statistics, GIF, Base64, ROC AUC, etc","Description":"Contains several basic utility functions including: moving\n (rolling, running) window statistic functions, read/write for\n GIF and ENVI binary files, fast calculation of AUC, LogitBoost\n classifier, base64 encoder/decoder, round-off-error-free sum\n and cumsum, etc.","Published":"2014-09-10","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"catR","Version":"3.12","Title":"Generation of IRT Response Patterns under Computerized Adaptive\nTesting","Description":"Provides routines for the generation of response patterns under unidimensional dichotomous and polytomous computerized adaptive testing (CAT) framework. It holds many standard functions to estimate ability, select the first item(s) to administer and optimally select the next item, as well as several stopping rules. 
Options to control for item exposure and content balancing are also available (Magis and Raiche (2012) ).","Published":"2017-01-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"catspec","Version":"0.97","Title":"Special models for categorical variables","Description":"`ctab' creates (multiway) percentage tables. `sqtab'\n contains a set of functions for estimating models for square\n tables such as quasi-independence, symmetry, uniform\n association. Examples show how to use these models in a\n loglinear model using glm or in a multinomial logistic model\n using mlogit or clogit.","Published":"2013-04-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"catSurv","Version":"1.0.1","Title":"Computerized Adaptive Testing for Survey Research","Description":"Provides methods of computerized adaptive testing for survey researchers. Includes functionality for data fit with the classic item response methods including the latent trait model, Birnbaum's three parameter model, the graded response, and the generalized partial credit model. Additionally, includes several ability parameter estimation and item selection routines. During item selection, all calculations are done in compiled C++ code.","Published":"2017-06-16","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"CATT","Version":"2.0","Title":"The Cochran-Armitage Trend Test","Description":"This function applies the Cochran-Armitage trend test to a 2 by k contingency table. It will report the test statistic (Z) and p-value. A linear trend in the frequencies will be calculated, because the weights (0,1,2) are used by default. 
","Published":"2017-05-19","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"causaldrf","Version":"0.3","Title":"Tools for Estimating Causal Dose Response Functions","Description":"Functions and data to estimate causal dose response functions given continuous, ordinal, or binary treatments.","Published":"2015-11-30","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"causaleffect","Version":"1.3.4","Title":"Deriving Expressions of Joint Interventional Distributions and\nTransport Formulas in Causal Models","Description":"Functions for identification and transportation of causal effects. Provides a conditional causal effect identification algorithm (IDC) by Shpitser, I. and Pearl, J. (2006) , an algorithm for transportability from multiple domains with limited experiments by Bareinboim, E. and Pearl, J. (2014) and a selection bias recovery algorithm by Bareinboim, E. and Tian, J. (2015) . All of the previously mentioned algorithms are based on a causal effect identification algorithm by Tian , J. (2002) . ","Published":"2017-05-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"CausalFX","Version":"1.0.1","Title":"Methods for Estimating Causal Effects from Observational Data","Description":"Estimate causal effects of one variable on another, currently for\n binary data only. 
Methods include instrumental variable bounds, adjustment by a \n given covariate set, adjustment by an induced covariate set using a variation of \n the PC algorithm, and an effect bounding method (the Witness Protection Program) \n based on covariate adjustment with observable independence constraints.","Published":"2015-05-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CausalGAM","Version":"0.1-3","Title":"Estimation of Causal Effects with Generalized Additive Models","Description":"This package implements various estimators for average\n treatment effects---an inverse probability weighted (IPW)\n estimator, an augmented inverse probability weighted (AIPW)\n estimator, and a standard regression estimator---that make use\n of generalized additive models for the treatment assignment\n model and/or outcome model.","Published":"2010-02-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"CausalImpact","Version":"1.2.1","Title":"Inferring Causal Effects using Bayesian Structural Time-Series\nModels","Description":"Implements a Bayesian approach to causal impact estimation in time\n series, as described in Brodersen et al. 
(2015) .\n See the package documentation on GitHub\n to get started.","Published":"2017-05-31","License":"Apache License 2.0 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"causalsens","Version":"0.1.1","Title":"Selection Bias Approach to Sensitivity Analysis for Causal\nEffects","Description":"The causalsens package provides functions to perform sensitivity analyses and to study how various assumptions about selection bias affect estimates of causal effects.","Published":"2015-07-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Causata","Version":"4.2-0","Title":"Analysis utilities for binary classification and Causata users","Description":"The Causata package provides utilities for \n extracting data from the Causata application, training binary classification \n models, and exporting models as PMML for scoring.","Published":"2016-12-05","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"CAvariants","Version":"3.4","Title":"Correspondence Analysis Variants","Description":"Provides six variants of two-way correspondence analysis (ca):\n simple ca, singly ordered ca, doubly ordered ca, non symmetrical ca,\n singly ordered non symmetrical ca, and doubly ordered non symmetrical\n ca.","Published":"2017-02-27","License":"GPL (> 2)","snapshot_date":"2017-06-23"} {"Package":"cba","Version":"0.2-19","Title":"Clustering for Business Analytics","Description":"Implements clustering techniques such as Proximus and Rock, as well as utility functions for efficient computation of cross distances and data manipulation. 
","Published":"2017-05-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"cbanalysis","Version":"0.1.0","Title":"Coffee Break Descriptive Analysis","Description":"Contains a function which subsets the input data frame based on the variable types and returns a list of data frames.","Published":"2017-04-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"cbar","Version":"0.1.0","Title":"Contextual Bayesian Anomaly Detection in R","Description":"Detect contextual anomalies in time-series data with Bayesian data\n analysis. It focuses on determining a normal range of the target value, and\n provides simple-to-use functions to abstract the outcome.","Published":"2017-06-23","License":"Apache License 2.0 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"cbird","Version":"1.0","Title":"Clustering of Multivariate Binary Data with Dimension Reduction\nvia L1-Regularized Likelihood Maximization","Description":"Implements the clustering of binary data with dimension reduction (CLUSBIRD) proposed by Yamamoto and Hayashi (2015) .","Published":"2017-02-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CBPS","Version":"0.13","Title":"Covariate Balancing Propensity Score","Description":"Implements the covariate balancing propensity score (CBPS) proposed\n by Imai and Ratkovic (2014) . The propensity score is\n estimated such that it maximizes the resulting covariate balance as well as the\n prediction of treatment assignment. The method, therefore, avoids an iteration\n between model fitting and balance checking. 
The package also implements several\n extensions of the CBPS beyond the cross-sectional, binary treatment setting.\n The current version implements the CBPS for longitudinal settings so that it can\n be used in conjunction with marginal structural models from Imai and Ratkovic\n (2015) , treatments with three- and four-\n valued treatment variables, continuous-valued treatments from Fong, Hazlett,\n and Imai (2015) , and the\n situation with multiple distinct binary treatments administered simultaneously.\n In the future it will be extended to other settings including the generalization\n of experimental and instrumental variable estimates. Recently added is the optimal\n CBPS, which chooses the optimal balancing function and results in a doubly robust\n and efficient estimator for the treatment effect.","Published":"2016-12-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cbsodataR","Version":"0.2.1","Title":"Statistics Netherlands (CBS) Open Data API Client","Description":"The data and meta data from Statistics\n Netherlands (www.cbs.nl) can be browsed and downloaded. The client uses\n the open data API of Statistics Netherlands.","Published":"2016-01-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"CCA","Version":"1.2","Title":"Canonical correlation analysis","Description":"The package provides a set of functions that extend the\n cancor function with new numerical and graphical outputs. It\n also includes a regularized extension of the canonical\n correlation analysis to deal with datasets with more variables\n than observations.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ccafs","Version":"0.1.0","Title":"Client for 'CCAFS' 'GCM' Data","Description":"Client for Climate Change, Agriculture, and Food Security ('CCAFS')\n General Circulation Models ('GCM') data. 
Data is stored in Amazon 'S3', from\n which we provide functions to fetch data.","Published":"2017-02-24","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"CCAGFA","Version":"1.0.8","Title":"Bayesian Canonical Correlation Analysis and Group Factor\nAnalysis","Description":"Variational Bayesian algorithms for learning canonical correlation analysis (CCA), inter-battery factor analysis (IBFA), and group factor analysis (GFA). Inference with several random initializations can be run with the functions CCAexperiment() and GFAexperiment().","Published":"2015-12-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ccaPP","Version":"0.3.2","Title":"(Robust) Canonical Correlation Analysis via Projection Pursuit","Description":"Canonical correlation analysis and maximum correlation via\n projection pursuit, as well as fast implementations of correlation\n estimators, with a focus on robust and non-parametric methods.","Published":"2016-03-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cccd","Version":"1.5","Title":"Class Cover Catch Digraphs","Description":"Class Cover Catch Digraphs, neighborhood graphs, and\n relatives.","Published":"2015-06-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ccChooser","Version":"0.2.6","Title":"Developing a core collection","Description":"ccChooser can be used for developing and evaluating core\n collections for germplasm collections (entire collection). This\n package can be used to develop a core collection for biological\n resources like genbanks. A core collection is defined as a\n sample of accessions that represent, with the lowest possible\n level of redundancy, the genetic diversity (the richness of\n gene or genotype categories) of the entire collection. 
Establishing\n a core collection that represents the genetic\n diversity of the entire collection with minimum loss of its\n original diversity and minimum redundancy is an important\n problem for gene-bank curators and crop breeders. ccChooser\n establishes core collections based on phenotypic data (agronomic,\n morphological, phenological).","Published":"2012-09-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cccp","Version":"0.2-4","Title":"Cone Constrained Convex Problems","Description":"Routines for solving convex optimization problems with cone constraints by means of interior-point methods. The implemented algorithms are partially ported from CVXOPT, a Python module for convex optimization (see for more information). ","Published":"2015-02-10","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"cccrm","Version":"1.2.1","Title":"Concordance Correlation Coefficient for Repeated (and\nNon-Repeated) Measures","Description":"Estimates the Concordance Correlation Coefficient to assess agreement. The scenarios considered are non-repeated measures, non-longitudinal repeated measures (replicates) and longitudinal repeated measures. The estimation approaches implemented are variance components and U-statistics approaches.","Published":"2015-07-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ccda","Version":"1.1","Title":"Combined Cluster and Discriminant Analysis","Description":"This package implements the combined cluster and discriminant analysis method for finding homogeneous groups of data with known origin as described in Kovacs et al. (2014): Classification into homogeneous groups using combined cluster and discriminant analysis (CCDA). Environmental Modelling & Software. 
DOI: http://dx.doi.org/10.1016/j.envsoft.2014.01.010","Published":"2014-12-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ccdrAlgorithm","Version":"0.0.3","Title":"CCDr Algorithm for Learning Sparse Gaussian Bayesian Networks","Description":"Implementation of the CCDr (Concave penalized Coordinate Descent with reparametrization) structure learning algorithm as described in Aragam and Zhou (2015) . This is a fast, score-based method for learning Bayesian networks that uses sparse regularization and block-cyclic coordinate descent.","Published":"2017-03-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ccgarch","Version":"0.2.3","Title":"Conditional Correlation GARCH models","Description":"Functions for estimating and simulating the family of the\n CC-GARCH models.","Published":"2014-03-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cchs","Version":"0.3.0","Title":"Cox Model for Case-Cohort Data with Stratified\nSubcohort-Selection","Description":"Contains a function, also called 'cchs', that calculates Estimator III of Borgan et al (2000), . 
This estimator is for fitting a Cox proportional hazards model to data from a case-cohort study where the subcohort was selected by stratified simple random sampling.","Published":"2016-07-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"cclust","Version":"0.6-21","Title":"Convex Clustering Methods and Clustering Indexes","Description":"Convex Clustering methods, including K-means algorithm,\n On-line Update algorithm (Hard Competitive Learning) and Neural Gas\n algorithm (Soft Competitive Learning), and calculation of several\n indexes for finding the number of clusters in a data set.","Published":"2017-01-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"CCM","Version":"1.1","Title":"Correlation classification method (CCM)","Description":"Classification method that classifies a sample according\n to the class with the maximum mean (or any other function of)\n correlation between the test and training samples with known\n classes.","Published":"2013-12-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CCMnet","Version":"0.0-3","Title":"Simulate Congruence Class Model for Networks","Description":"Tools to simulate networks based on Congruence Class models.","Published":"2015-12-10","License":"GPL-3 + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"CCP","Version":"1.1","Title":"Significance Tests for Canonical Correlation Analysis (CCA)","Description":"Significance tests for canonical correlation analysis,\n including asymptotic tests and a Monte Carlo method","Published":"2012-10-29","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"CCpop","Version":"1.0","Title":"One and two locus GWAS of binary phenotype with\ncase-control-population design","Description":"Tests of association between SNPs or pairs of SNPs and binary phenotypes, in case-control / case-population / case-control-population studies.","Published":"2014-03-24","License":"GPL-2","snapshot_date":"2017-06-23"} 
{"Package":"ccRemover","Version":"1.0.1","Title":"Removes the Cell-Cycle Effect from Single-Cell RNA-Sequencing\nData","Description":"Implements a method for identifying and removing\n the cell-cycle effect from scRNA-Seq data. The description of the \n method is in Barron M. and Li J. (2016) . Identifying and removing \n the cell-cycle effect from single-cell RNA-Sequencing data. Submitted. \n Different from previous methods, ccRemover implements a mechanism that\n formally tests whether a component is cell-cycle related or not, and thus\n while it often thoroughly removes the cell-cycle effect, it preserves\n other features/signals of interest in the data.","Published":"2017-05-18","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"cctools","Version":"0.1.0","Title":"Tools for the Continuous Convolution Trick in Nonparametric\nEstimation","Description":"Implements the uniform scaled beta distribution and\n the continuous convolution kernel density estimator.","Published":"2017-04-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"CCTpack","Version":"1.5.1","Title":"Consensus Analysis, Model-Based Clustering, and Cultural\nConsensus Theory Applications","Description":"Consensus analysis, model-based clustering, and cultural consensus theory applications to response data (e.g. questionnaires). The models are applied using hierarchical Bayesian inference. The current package version supports binary, ordinal, and continuous data formats. ","Published":"2017-02-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cda","Version":"2.0.0","Title":"Coupled-Dipole Approximation for Electromagnetic Scattering by\nThree-Dimensional Clusters of Sub-Wavelength Particles","Description":"Coupled-dipole simulations for electromagnetic scattering of light by sub-wavelength particles in arbitrary 3-dimensional configurations. 
Scattering and absorption spectra are simulated by inversion of the interaction matrix, or by an order-of-scattering approximation scheme. High-level functions are provided to simulate spectra with varying angles of incidence, as well as with full angular averaging. ","Published":"2016-08-16","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"cdata","Version":"0.1.1","Title":"Wrappers for 'tidyr::gather()' and 'tidyr::spread()'","Description":"Supplies deliberately verbose wrappers for 'tidyr::gather()' and 'tidyr::spread()', and an explanatory vignette. Useful for training and for enforcing preconditions.","Published":"2017-05-05","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"cdb","Version":"0.0.1","Title":"Reading and Writing Constant DataBases","Description":"A constant database is a data structure created by Daniel\n J. Bernstein in his cdb package. Its format consists of a\n sequence of (key,value)-pairs. This R package replicates the\n basic utilities for reading (cdbget) and writing (cdbdump)\n constant databases.","Published":"2013-04-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cdcfluview","Version":"0.5.1","Title":"Retrieve U.S. Flu Season Data from the CDC FluView Portal","Description":"The U.S. Centers for Disease Control (CDC) maintains a portal\n for\n accessing state, regional and national influenza statistics as well as\n Mortality Surveillance Data. The Flash interface makes it difficult and \n time-consuming to select and retrieve influenza data. 
This package \n provides functions to access the data provided by the portal's underlying API.","Published":"2016-12-07","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"cdcsis","Version":"1.0","Title":"Conditional Distance Correlation and Its Related Feature\nScreening Method","Description":"Gives conditional distance correlation and performs the conditional distance correlation sure independence screening procedure for ultrahigh dimensional data. The conditional distance correlation is a novel conditional dependence measurement of two random variables given a third variable. The conditional distance correlation sure independence screening is used for screening variables in ultrahigh dimensional setting.","Published":"2014-10-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CDF.PSIdekick","Version":"1.2","Title":"Evaluate Differentially Private Algorithms for Publishing\nCumulative Distribution Functions","Description":"Designed by and for the community of differential privacy algorithm developers. It can be used to empirically evaluate and visualize Cumulative Distribution Functions incorporating noise that satisfies differential privacy, with numerous options made to streamline collection of utility measurements across variations of key parameters, such as epsilon, domain size, sample size, data shape, etc. 
Developed by researchers at Harvard PSI.","Published":"2016-08-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cdfquantreg","Version":"1.1.1","Title":"Quantile Regression for Random Variables on the Unit Interval","Description":"Employs a two-parameter family of\n distributions for modelling random variables on the (0, 1) interval by\n applying the cumulative distribution function (cdf) of one parent\n distribution to the quantile function of another.","Published":"2017-01-24","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"CDFt","Version":"1.0.1","Title":"Statistical downscaling through CDF-transform","Description":"This package proposes a statistical downscaling method for\n cumulative distribution functions (CDF), as well as the\n computation of the Cramér-von Mises statistics U, and the\n Kolmogorov-Smirnov statistics KS.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CDLasso","Version":"1.1","Title":"Coordinate Descent Algorithms for Lasso Penalized L1, L2, and\nLogistic Regression","Description":"Coordinate Descent Algorithms for Lasso Penalized L1, L2,\n and Logistic Regression","Published":"2013-05-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"cdlTools","Version":"0.11","Title":"Tools to Download and Work with USDA Cropscape Data","Description":"Downloads USDA National Agricultural Statistics Service (NASS) \n cropscape data for a specified state. Utilities for fips, abbreviation, \n and name conversion are also provided. Full functionality requires an \n internet connection, but data sets can be cached for later off-line use.","Published":"2016-08-01","License":"Unlimited","snapshot_date":"2017-06-23"} {"Package":"CDM","Version":"5.6-16","Title":"Cognitive Diagnosis Modeling","Description":"\n Functions for cognitive diagnosis modeling\n and multidimensional item response modeling for\n dichotomous and polytomous data. 
This package\n enables the estimation of the DINA and DINO model,\n the multiple group (polytomous) GDINA model,\n the multiple choice DINA model, the general diagnostic\n model (GDM), the multidimensional linear compensatory\n item response model and the structured latent class\n model (SLCA).","Published":"2017-06-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CDNmoney","Version":"2012.4-2","Title":"Components of Canadian Monetary and Credit Aggregates","Description":"Components of Canadian Credit Aggregates and Monetary Aggregates with continuity adjustments.","Published":"2015-05-01","License":"GPL-2 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"cdom","Version":"0.1.0","Title":"R Functions to Model CDOM Spectra","Description":"Wrapper functions to model and extract various quantitative information from absorption spectra of chromophoric dissolved organic matter (CDOM).","Published":"2016-03-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CDROM","Version":"1.1","Title":"Phylogenetically Classifies Retention Mechanisms of Duplicate\nGenes from Gene Expression Data","Description":"Classification is based on the recently developed phylogenetic\n approach by Assis and Bachtrog (2013). The method classifies the\n evolutionary mechanisms retaining pairs of duplicate genes (conservation,\n neofunctionalization, subfunctionalization, or specialization) by comparing gene\n expression profiles of duplicate genes in one species to those of their single-\n copy ancestral genes in a sister species.","Published":"2016-04-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"cds","Version":"1.0.3","Title":"Constrained Dual Scaling for Detecting Response Styles","Description":"This is an implementation of constrained dual scaling for\n detecting response styles in categorical data, including utility functions. 
The\n procedure involves adding additional columns to the data matrix representing the\n boundaries between the rating categories. The resulting matrix is then doubled\n and analyzed by dual scaling. One-dimensional solutions are sought which provide\n optimal scores for the rating categories. These optimal scores are constrained\n to follow monotone quadratic splines. Clusters are introduced within which the\n response styles can vary. The type of response style present in a cluster can\n be diagnosed from the optimal scores for said cluster, and this can be used to\n construct an imputed version of the data set which adjusts for response styles.","Published":"2016-01-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CDVine","Version":"1.4","Title":"Statistical Inference of C- And D-Vine Copulas","Description":"Functions for statistical inference of canonical vine (C-vine)\n and D-vine copulas. Tools for bivariate exploratory data analysis and for bivariate\n as well as vine copula selection are provided. Models can be estimated\n either sequentially or by joint maximum likelihood estimation.\n Sampling algorithms and plotting methods are also included.\n Data is assumed to lie in the unit hypercube (so-called copula\n data).","Published":"2015-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CDVineCopulaConditional","Version":"0.1.0","Title":"Sampling from Conditional C- and D-Vine Copulas","Description":"Provides tools for sampling from a conditional copula density decomposed via \n Pair-Copula Constructions as C- or D- vine. Here, the vines which can be used for such \n sampling are those which sample the conditioning variables first (when following the \n sampling algorithms shown in Aas et al. (2009) ). \n The sampling algorithm used is presented and discussed in Bevacqua et al. (2017) \n , and it is a modified version of that from Aas et al. (2009) \n .
A function is available to select the best vine \n (based on information criteria) among those which allow for such conditional sampling. \n The package includes a function to compare scatterplot matrices and pair-dependencies of \n two multivariate datasets.","Published":"2017-03-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CEC","Version":"0.9.4","Title":"Cross-Entropy Clustering","Description":"Cross-Entropy Clustering (CEC) divides the data into Gaussian type clusters. It performs the automatic reduction of unnecessary clusters, while at the same time allowing the simultaneous use of various types of Gaussian mixture models.","Published":"2016-04-24","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"cec2005benchmark","Version":"1.0.4","Title":"Benchmark for the CEC 2005 Special Session on Real-Parameter\nOptimization","Description":"This package is a wrapper for the C implementation of the 25 benchmark functions for the CEC 2005 Special Session on Real-Parameter Optimization. The original C code by Santosh Tiwari and related documentation are available at http://www.ntu.edu.sg/home/EPNSugan/index_files/CEC-05/CEC05.htm.","Published":"2015-02-08","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"cec2013","Version":"0.1-5","Title":"Benchmark functions for the Special Session and Competition on\nReal-Parameter Single Objective Optimization at CEC-2013","Description":"This package provides R wrappers for the C implementation of 28 benchmark functions defined for the Special Session and Competition on Real-Parameter Single Objective Optimization at CEC-2013. The focus of this package is to provide an open-source and multi-platform implementation of the CEC2013 benchmark functions, in order to make it easier for researchers to test the performance of new optimization algorithms in a reproducible way. The original C code (Windows only) was provided by Jane Jing Liang, while GNU/Linux comments were made by Janez Brest.
This package was kindly authorised for publication on CRAN by Ponnuthurai Nagaratnam Suganthan. The official documentation is available at http://www.ntu.edu.sg/home/EPNSugan/index_files/CEC2013/CEC2013.htm. Bug reports/comments/questions are very welcome (in English, Spanish or Italian).","Published":"2015-01-22","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"CEGO","Version":"2.1.0","Title":"Combinatorial Efficient Global Optimization","Description":"Model building, surrogate model\n based optimization and Efficient Global Optimization in combinatorial\n or mixed search spaces.","Published":"2016-08-17","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"celestial","Version":"1.3","Title":"Collection of Common Astronomical Conversion Routines and\nFunctions","Description":"Contains a number of common astronomy conversion routines, particularly the HMS and degrees schemes, which can be fiddly to convert between en masse due to the textual nature of the former. It allows users to coordinate-match datasets quickly. It also contains functions for various cosmological calculations.","Published":"2015-06-10","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"cellranger","Version":"1.1.0","Title":"Translate Spreadsheet Cell Ranges to Rows and Columns","Description":"Helper functions to work with spreadsheets and the \"A1:D10\" style\n of cell range specification.","Published":"2016-07-27","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"CellularAutomaton","Version":"1.1-1","Title":"One-Dimensional Cellular Automata","Description":"This package is an object-oriented implementation of one-dimensional cellular automata.
It supports many of the features offered by Mathematica, including elementary rules, user-defined rules, radii, user-defined seeding, and plotting.","Published":"2013-08-20","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"cellVolumeDist","Version":"1.3","Title":"Functions to fit cell volume distributions and thereby estimate\ncell growth rates and division times","Description":"This package implements a methodology for using cell\n volume distributions to estimate cell growth rates and division\n times that is described in the paper entitled \"Cell Volume\n Distributions Reveal Cell Growth Rates and Division Times\", by\n Michael Halter, John T. Elliott, Joseph B. Hubbard, Alessandro\n Tona and Anne L. Plant, which is in press in the Journal of\n Theoretical Biology. In order to reproduce the analysis used\n to obtain Table 1 in the paper, execute the command\n \"example(fitVolDist)\".","Published":"2013-12-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cellWise","Version":"1.0.0","Title":"Analyzing Data with Cellwise Outliers","Description":"Tools for detecting cellwise outliers and robust methods to analyze data which may contain them. 
","Published":"2016-12-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cem","Version":"1.1.17","Title":"Coarsened Exact Matching","Description":"Implementation of the Coarsened Exact Matching algorithm.","Published":"2016-12-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"cems","Version":"0.4","Title":"Conditional Expectation Manifolds","Description":"Conditional expectation manifolds are an approach to compute principal curves and surfaces.","Published":"2015-11-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"censCov","Version":"1.0-0","Title":"Linear Regression with a Randomly Censored Covariate","Description":"Implementations of threshold regression approaches for linear\n\t regression models with a covariate subject to random censoring,\n\t including deletion threshold regression and completion threshold regression.\n\t Reverse survival regression, which flips the roles of the response variable and the\n\t covariate, is also considered.","Published":"2017-04-25","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"CensMixReg","Version":"1.0","Title":"Censored Linear Mixture Regression Models","Description":"Fit censored linear regression models where the random errors follow a finite mixture of Normal or Student-t distributions.\n Fit censored linear models of finite mixture multivariate Student-t and Normal distributions.","Published":"2016-08-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"censNID","Version":"0-0-1","Title":"censored NID samples","Description":"Implements AS138, AS139.
","Published":"2013-10-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"censorcopula","Version":"2.0","Title":"Estimate Parameter of Bivariate Copula","Description":"Implements an interval censoring method \n to break ties when using data with ties to fit a \n bivariate copula.","Published":"2016-03-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"censReg","Version":"0.5-26","Title":"Censored Regression (Tobit) Models","Description":"Maximum Likelihood estimation of censored regression (Tobit) models\n with cross-sectional and panel data.","Published":"2017-03-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CensRegMod","Version":"1.0","Title":"Fits Normal and Student-t Censored Regression Model","Description":"Fits a univariate censored linear regression model under the Normal or Student-t distribution.","Published":"2015-01-24","License":"GPL (>= 3.0)","snapshot_date":"2017-06-23"} {"Package":"CensSpatial","Version":"1.3","Title":"Censored Spatial Models","Description":"Fits linear regression models for censored spatial data. Provides different estimation methods, such as the SAEM (Stochastic Approximation of Expectation Maximization) algorithm and a seminaive method that uses Kriging prediction to estimate the response at censored locations and predict new values at unknown locations. Also offers graphical tools for assessing the fitted model.","Published":"2017-03-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"censusapi","Version":"0.2.0","Title":"Retrieve Data from the U.S. Census Bureau APIs","Description":"A wrapper for the U.S. Census Bureau APIs that returns data frames of \n\tCensus data and metadata.
Available datasets include the \n\tDecennial Census, American Community Survey, Small Area Health Insurance Estimates,\n\tSmall Area Income and Poverty Estimates, and Population Estimates and Projections.\n\tSee for more information.","Published":"2017-06-06","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"censusGeography","Version":"0.1.0","Title":"Changes United States Census Geographic Code into Name of\nLocation","Description":"Converts the United States Census geographic code for city, state (FIP and ICP),\n region, and birthplace, into the name of the region, e.g. maps an input of\n Census city code 5330 to its actual city, Philadelphia. Will return NA for codes\n that don't correspond to a real location.","Published":"2016-08-29","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"censusr","Version":"0.0.3","Title":"Collect Data from the Census API","Description":"Use the US Census API to collect summary data tables\n for SF1 and ACS datasets at arbitrary geographies.","Published":"2017-06-14","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"censys","Version":"0.1.0","Title":"Tools to Query the 'Censys' API","Description":"The 'Censys' public search engine enables researchers to quickly ask \n questions about the hosts and networks that compose the Internet. Details on how \n 'Censys' was designed and how it is operated are available at . \n Both basic and extended research access queries are made available.
More information \n on the SQL dialect used by the 'Censys' engine can be found at \n .","Published":"2016-12-31","License":"AGPL + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"cents","Version":"0.1-41","Title":"Censored time series","Description":"Fit censored time series","Published":"2014-08-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CEoptim","Version":"1.2","Title":"Cross-Entropy R Package for Optimization","Description":"Optimization solver based on the Cross-Entropy method.","Published":"2017-02-20","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"CePa","Version":"0.5","Title":"Centrality-based pathway enrichment","Description":"Use pathway topology information to assign weight to\n pathway nodes.","Published":"2012-09-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CepLDA","Version":"1.0.0","Title":"Discriminant Analysis of Time Series in the Presence of\nWithin-Group Spectral Variability","Description":"Performs cepstral-based discriminant analysis of groups of time series \n when there exists variability in power spectra from time series within the same group \n as described in R.T. Krafty (2016) \"Discriminant Analysis of Time Series in the \n Presence of Within-Group Spectral Variability\" Journal of Time Series Analysis.","Published":"2016-01-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cepp","Version":"1.7","Title":"Context Driven Exploratory Projection Pursuit","Description":"Functions and Data to support Context Driven Exploratory Projection Pursuit.","Published":"2016-01-30","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"CerioliOutlierDetection","Version":"1.1.5","Title":"Outlier Detection Using the Iterated RMCD Method of Cerioli\n(2010)","Description":"Implements the iterated RMCD method of Cerioli (2010)\n\tfor multivariate outlier detection via robust Mahalanobis distances.
Also\n\tprovides the finite-sample RMCD method discussed in the paper, as well as \n\tthe methods provided in Hardin and Rocke (2005) and Green and Martin (2014).","Published":"2016-07-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cernn","Version":"0.1","Title":"Covariance Estimation Regularized by Nuclear Norm Penalties","Description":"An implementation of the covariance estimation method\n proposed in Chi and Lange (2014), \"Stable estimation of a covariance matrix guided by nuclear norm penalties,\"\n Computational Statistics and Data Analysis 80:117-128.","Published":"2015-04-15","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"cfa","Version":"0.10-0","Title":"Configural Frequency Analysis (CFA)","Description":"Analysis of configuration frequencies for simple and repeated measures, multiple-samples CFA, hierarchical CFA, bootstrap CFA, functional CFA, Kieser-Victor CFA, and Lindner's test using a conventional and an accelerated algorithm.","Published":"2017-05-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CFC","Version":"1.1.0","Title":"Cause-Specific Framework for Competing-Risk Analysis","Description":"Numerical integration of cause-specific survival curves to arrive at cause-specific cumulative incidence functions,\n with three usage modes: 1) Convenient API for parametric survival regression followed by competing-risk analysis, 2) API for\n CFC, accepting user-specified survival functions in R, and 3) Same as 2, but accepting survival functions in C++.","Published":"2017-06-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CfEstimateQuantiles","Version":"1.0","Title":"Estimate quantiles using any order Cornish-Fisher expansion","Description":"Estimate quantiles using formula (18) from\n http://www.jaschke-net.de/papers/CoFi.pdf (Jaschke, 2001)","Published":"2013-05-18","License":"GPL-2","snapshot_date":"2017-06-23"} 
{"Package":"cffdrs","Version":"1.7.6","Title":"Canadian Forest Fire Danger Rating System","Description":"This project provides a group of new functions to calculate the\n outputs of the two main components of the Canadian Forest Fire Danger Rating\n System (CFFDRS) at various time scales: the Fire Weather Index (FWI) System and\n the Fire Behaviour Prediction (FBP) System. Some functions have two versions,\n table and raster based.","Published":"2017-04-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"cg","Version":"1.0-3","Title":"Compare Groups, Analytically and Graphically","Description":"Comprehensive data analysis software, and the name \"cg\" stands for \"compare groups.\" Its genesis and evolution are driven by common needs to compare administrations, conditions, etc. in medical research and development. The current version provides comparisons of unpaired samples, i.e. a linear model with one factor of at least two levels. It also provides comparisons of two paired samples. Good data graphs, modern statistical methods, and useful displays of results are emphasized.","Published":"2016-01-04","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"cgam","Version":"1.6","Title":"Constrained Generalized Additive Model","Description":"A constrained generalized additive model is fitted by the cgam routine. Given a set of predictors, each of which may have shape or order restrictions, the maximum likelihood estimator for the constrained generalized additive model is found using an iteratively re-weighted cone projection algorithm. The ShapeSelect routine chooses a subset of predictor variables and describes the component relationships with the response. For each predictor, the user need only specify a set of possible shape or order restrictions. A model selection method chooses the shapes and orderings of the relationships as well as the variables.
The cone information criterion (CIC) is used to select the best combination of variables and shapes. A genetic algorithm may be used when the set of possible models is large. In addition, the wps routine implements a two-dimensional isotonic regression without additivity assumptions. ","Published":"2016-11-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cgAUC","Version":"1.2.1","Title":"Calculate AUC-type measure when gold standard is continuous and\nthe corresponding optimal linear combination of variables with\nrespect to it","Description":"The cgAUC can calculate the AUC-type measure of Obuchowski (2006) when the gold standard is continuous, and find the optimal linear combination of variables with respect to this measure.","Published":"2014-08-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cgdsr","Version":"1.2.6","Title":"R-Based API for Accessing the MSKCC Cancer Genomics Data Server\n(CGDS)","Description":"Provides a basic set of R functions for querying the Cancer \n Genomics Data Server (CGDS), hosted by the Computational Biology Center at \n Memorial-Sloan-Kettering Cancer Center (MSKCC).","Published":"2017-04-11","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"cggd","Version":"0.8","Title":"Continuous Generalized Gradient Descent","Description":"Efficient procedures for fitting entire regression\n sequences with different model types.","Published":"2012-07-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cgh","Version":"1.0-7.1","Title":"Microarray CGH analysis using the Smith-Waterman algorithm","Description":"Functions to analyze microarray comparative genome\n hybridization data using the Smith-Waterman algorithm","Published":"2010-05-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cghFLasso","Version":"0.2-1","Title":"Detecting hot spot on CGH array data with fused lasso\nregression","Description":"Spatial smoothing and hot spot detection using the fused\n 
lasso regression","Published":"2009-06-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cghRA","Version":"1.6.0","Title":"Array CGH Data Analysis and Visualization","Description":"Provides functions to import data from Agilent CGH arrays and process them according to the cghRA workflow. Implements several algorithms such as WACA, STEPS and cnvScore and an interactive graphical interface.","Published":"2017-03-03","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"cghseg","Version":"1.0.2-1","Title":"Segmentation Methods for Array CGH Analysis","Description":"cghseg is an R package dedicated to the analysis of CGH\n profiles using segmentation models.","Published":"2016-01-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CGP","Version":"2.0-2","Title":"Composite Gaussian process models","Description":"Fit composite Gaussian process (CGP) models as described in Ba and Joseph (2012) \"Composite Gaussian Process Models for Emulating Expensive Functions\", Annals of Applied Statistics. The CGP model is capable of approximating complex surfaces that are not second-order stationary. Important functions in this package are CGP, print.CGP, summary.CGP, predict.CGP and plotCGP.","Published":"2014-09-21","License":"LGPL-2.1","snapshot_date":"2017-06-23"} {"Package":"cgwtools","Version":"3.0","Title":"Miscellaneous Tools","Description":"A set of tools the author has found useful for performing quick observations or evaluations of data, including a variety of ways to list objects by size, class, etc. Several other tools mimic Unix shell commands, including 'head', 'tail', 'pushd', and 'popd'. The functions 'seqle' and 'reverse.seqle' mimic the base 'rle' but can search for linear sequences.
The function 'splatnd' allows the user to generate zero-argument commands without the need for 'makeActiveBinding' .","Published":"2015-06-22","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"ChainLadder","Version":"0.2.4","Title":"Statistical Methods and Models for Claims Reserving in General\nInsurance","Description":"Various statistical methods and models which are\n typically used for the estimation of outstanding claims reserves\n in general insurance, including those to estimate the claims\n development result as required under Solvency II.","Published":"2017-01-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"changepoint","Version":"2.2.2","Title":"Methods for Changepoint Detection","Description":"Implements various mainstream and specialised changepoint methods for finding single and multiple changepoints within data. Many popular non-parametric and frequentist methods are included. The cpt.mean(), cpt.var(), cpt.meanvar() functions should be your first point of call.","Published":"2016-10-04","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"changepoint.np","Version":"0.0.2","Title":"Methods for Nonparametric Changepoint Detection","Description":"Implements the multiple changepoint algorithm PELT with a\n nonparametric cost function based on the empirical distribution of the data. The cpt.np() function should be your first point of call.\n This package is an extension to the \\code{changepoint} package which uses parametric changepoint methods. For further information on the methods see the\n documentation for \\code{changepoint}.","Published":"2016-07-07","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"ChangepointTesting","Version":"1.0","Title":"Change Point Estimation for Clustered Signals","Description":"A multiple testing procedure for clustered alternative hypotheses. 
It is assumed that the p-values under the null hypotheses follow U(0,1) and that the distributions of p-values from the alternative hypotheses are stochastically smaller than U(0,1). By aggregating information, this method is more sensitive to detecting signals of low magnitude than standard methods. Additionally, sporadic small p-values appearing within a null hypothesis sequence are avoided by averaging over the neighboring p-values.","Published":"2016-05-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ChannelAttribution","Version":"1.10","Title":"Markov Model for the Online Multi-Channel Attribution Problem","Description":"Advertisers use a variety of online marketing channels to reach consumers and they want to know the degree to which each channel contributes to their marketing success. It's called the online multi-channel attribution problem. This package contains a probabilistic algorithm for the attribution problem. The model uses a k-order Markov representation to identify structural correlations in the customer journey data. The package also contains three heuristic algorithms (first-touch, last-touch and linear-touch approach) for the same problem. The algorithms are implemented in C++.","Published":"2016-12-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ChannelAttributionApp","Version":"1.1","Title":"Shiny Web Application for the Multichannel Attribution Problem","Description":"Shiny Web Application for the Multichannel Attribution Problem. It is basically a user-friendly graphical interface for running and comparing all the attribution models in package 'ChannelAttribution'.
For customizations or interest in other statistical methodologies for web data analysis please contact .","Published":"2016-02-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Chaos01","Version":"1.0.1","Title":"0-1 Test for Chaos","Description":"Computes and plots the results of the 0-1 test for chaos proposed\n by Gottwald and Melbourne (2004) . The algorithm is\n available in parallel for the independent values of parameter c.","Published":"2016-07-25","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ChaosGame","Version":"0.2","Title":"Chaos Game","Description":"The main objective of the package is to enter a word of at least two letters based on which an Iterated Function System with Probabilities (IFSP) is constructed, and a two-dimensional fractal containing the chosen word infinitely often is generated via the Chaos Game. Additionally, the package allows the user to project the two-dimensional fractal onto several three-dimensional surfaces and to transform the fractal into another fractal with uniform marginals.","Published":"2016-03-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"CharFun","Version":"0.1.0","Title":"Numerical Computation Cumulative Distribution Function and\nProbability Density Function from Characteristic Function","Description":"The Characteristic Functions Toolbox (CharFun) consists of a set of algorithms for evaluating selected characteristic functions and algorithms for numerical inversion of the (combined and/or compound) characteristic functions, used to evaluate the probability density function (PDF) and the cumulative distribution function (CDF).","Published":"2017-05-26","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ChargeTransport","Version":"1.0.2","Title":"Charge Transfer Rates and Charge Carrier Mobilities","Description":"This package provides functions to compute Marcus, Marcus-Levich-Jortner or Landau-Zener charge transfer rates.
These rates can then be used to perform kinetic Monte Carlo simulations to estimate charge carrier mobilities in molecular materials. The preparation of this package was supported by the Fondazione Cariplo (PLENOS project, ref. 2011-0349).","Published":"2014-06-04","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"charlatan","Version":"0.1.0","Title":"Make Fake Data","Description":"Make fake data, supporting addresses, person names, dates,\n times, colors, coordinates, currencies, digital object identifiers\n ('DOIs'), jobs, phone numbers, 'DNA' sequences, doubles and integers\n from distributions and within a range.","Published":"2017-06-16","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"CHAT","Version":"1.1","Title":"Clonal Heterogeneity Analysis Tool","Description":"CHAT is a collection of tools developed for tumor subclonality analysis using high density DNA SNP array data and sequencing data. The pipeline consists of four major compartments: 1) tumor aneuploid genome proportion (AGP) calculation and ploidy estimation. 2) segment-specific AGP calculation and absolute copy number estimation for somatic CNAs. 3) cancer cell fraction correction for somatic SNVs in clonal or subclonal sCNA regions. 4) number of subclones estimation using Dirichlet process prior followed by MCMC approach.
","Published":"2014-08-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CHCN","Version":"1.5","Title":"Canadian Historical Climate Network","Description":"A compilation of historical through contemporary climate\n measurements scraped from the Environment Canada website,\n including tools for scraping data, creating metadata and\n formatting temperature files.","Published":"2012-06-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cheb","Version":"0.3","Title":"Discrete Linear Chebyshev Approximation","Description":"Discrete Linear Chebyshev Approximation","Published":"2013-02-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"chebpol","Version":"1.3-1789","Title":"Multivariate Chebyshev Interpolation","Description":"Contains methods for creating multivariate Chebyshev\n approximation of functions on a hypercube. Some methods for\n non-Chebyshev grids are also provided.","Published":"2015-10-28","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"checkarg","Version":"0.1.0","Title":"Check the Basic Validity of a (Function) Argument","Description":"Utility functions that allow checking the basic validity of a function argument or any other value, \n including generating an error and assigning a default in a single line of code. The main purpose of\n the package is to provide simple and easily readable argument checking to improve code robustness. ","Published":"2017-05-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CheckDigit","Version":"0.1-1","Title":"Calculate and verify check digits","Description":"A set of functions to calculate check digits according to\n various algorithms and to verify whether a string ends in a\n valid check digit","Published":"2013-04-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"checkmate","Version":"1.8.2","Title":"Fast and Versatile Argument Checks","Description":"Tests and assertions to perform frequent argument checks.
A\n substantial part of the package was written in C to minimize any worries\n about execution time overhead.","Published":"2016-11-02","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"checkpoint","Version":"0.4.0","Title":"Install Packages from Snapshots on the Checkpoint Server for\nReproducibility","Description":"The goal of checkpoint is to solve the problem of package\n reproducibility in R. Specifically, checkpoint allows you to install packages\n as they existed on CRAN on a specific snapshot date as if you had a CRAN time\n machine. To achieve reproducibility, the checkpoint() function installs the\n packages required or called by your project and scripts to a local library\n exactly as they existed at the specified point in time. Only those packages\n are available to your project, thereby avoiding any package updates that came\n later and may have altered your results. In this way, anyone using checkpoint's\n checkpoint() can ensure the reproducibility of your scripts or projects at any\n time. To create the snapshot archives, once a day (at midnight UTC) Microsoft\n refreshes the Austria CRAN mirror on the \"Microsoft R Archived Network\"\n server (). Immediately after completion\n of the rsync mirror process, the process takes a snapshot, thus creating the\n archive. Snapshot archives exist starting from 2014-09-17.","Published":"2017-04-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"cheddar","Version":"0.1-631","Title":"Analysis and Visualisation of Ecological Communities","Description":"Provides a flexible, extendable representation of an ecological community and a range of functions for analysis and visualisation, focusing on food web, body mass and numerical abundance data. 
Allows inter-web comparisons such as examining changes in community structure over environmental, temporal or spatial gradients.","Published":"2016-10-10","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"chemCal","Version":"0.1-37","Title":"Calibration Functions for Analytical Chemistry","Description":"Simple functions for plotting linear\n\tcalibration functions and estimating standard errors for measurements\n\taccording to the Handbook of Chemometrics and Qualimetrics: Part A\n\tby Massart et al. There are also functions estimating the limit\n\tof detection (LOD) and limit of quantification (LOQ).\n\tThe functions work on model objects from - optionally weighted - linear\n\tregression (lm) or robust linear regression ('rlm' from the 'MASS' package).","Published":"2015-10-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"chemmodlab","Version":"1.0.0","Title":"A Cheminformatics Modeling Laboratory for Fitting and Assessing\nMachine Learning Models","Description":"Contains a set of methods for fitting models and methods for\n validating the resulting models. The statistical methodologies comprise\n a comprehensive collection of approaches whose validity and utility have\n been accepted by experts in the Cheminformatics field. As promising new\n methodologies emerge from the statistical and data-mining communities, they\n will be incorporated into the laboratory. These methods are aimed at discovering\n quantitative structure-activity relationships (QSARs). However, the user can\n directly input their own choices of descriptors and responses, so the capability\n for comparing models is effectively unlimited.","Published":"2017-04-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"chemometrics","Version":"1.4.2","Title":"Multivariate Statistical Analysis in Chemometrics","Description":"R companion to the book \"Introduction to Multivariate Statistical Analysis in Chemometrics\" written by K. 
Varmuza and P. Filzmoser (2009).","Published":"2017-03-17","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"ChemometricsWithR","Version":"0.1.9","Title":"Chemometrics with R - Multivariate Data Analysis in the Natural\nSciences and Life Sciences","Description":"Functions and scripts used in the book \"Chemometrics with R - Multivariate Data Analysis in the Natural Sciences and Life Sciences\" by Ron Wehrens, Springer (2011).","Published":"2015-09-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ChemometricsWithRData","Version":"0.1.3","Title":"Data for package ChemometricsWithR","Description":"The package provides data sets used in the book\n \"Chemometrics with R - Multivariate Data Analysis in the\n Natural Sciences and Life Sciences\" by Ron Wehrens, Springer\n (2011).","Published":"2012-10-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ChemoSpec","Version":"4.4.17","Title":"Exploratory Chemometrics for Spectroscopy","Description":"A collection of functions for top-down exploratory data analysis\n of spectral data obtained via nuclear magnetic resonance (NMR), infrared (IR) or\n Raman spectroscopy. Includes functions for plotting and inspecting spectra, peak\n alignment, hierarchical cluster analysis (HCA), principal components analysis\n (PCA) and model-based clustering. Robust methods appropriate for this type of\n high-dimensional data are available. ChemoSpec is designed with metabolomics\n data sets in mind, where the samples fall into groups such as treatment and\n control. Graphical output is formatted consistently for publication quality\n plots. ChemoSpec is intended to be very user friendly and help you get usable\n results quickly. 
A vignette covering typical operations is available.","Published":"2017-02-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"cherry","Version":"0.6-11","Title":"Multiple Testing Methods for Exploratory Research","Description":"Provides an alternative approach to multiple testing\n by calculating simultaneous upper confidence bounds for the\n number of true null hypotheses among any subset of the hypotheses of interest. \n\tSome of the functions in this package are optionally enhanced by the 'gurobi'\n\tsoftware and its accompanying R package. For their installation, please follow the \n\tinstructions at www.gurobi.com and http://www.gurobi.com/documentation, respectively.","Published":"2015-06-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CHFF","Version":"0.1.0","Title":"Closest History Flow Field Forecasting for Bivariate Time Series","Description":"The software matches the current history to the closest history in a time series to build a forecast.","Published":"2016-05-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"chi","Version":"0.1","Title":"The Chi Distribution","Description":"Lightweight implementation of the standard distribution \n functions for the chi distribution, wrapping those for the chi-squared \n distribution in the stats package.","Published":"2017-05-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"chi2x3way","Version":"1.1","Title":"Partitioning Chi-Squared and Tau Index for Three-Way Contingency\nTables","Description":"Provides two index partitions for three-way contingency tables:\n partition of the association measure chi-squared and of the predictability index tau \n under several representative hypotheses about the expected frequencies (hypothesized probabilities). 
","Published":"2017-01-23","License":"GPL (> 2)","snapshot_date":"2017-06-23"} {"Package":"childsds","Version":"0.6.2","Title":"Data and Methods Around Reference Values in Pediatrics","Description":"Calculation of standard deviation scores adduced from different\n growth standards (WHO, UK, Germany, Italy, China, etc). Therefore, the calculation of SDS-values\n for different measures like BMI, weight, height, head circumference, different\n ratios, etc. is easy to carry out. Also, references for laboratory values in\n children are available: serum lipids, iron-related blood parameters. In the\n new version, there are also functions combining the gamlss lms() function with\n resampling methods for use with repeated measurements and family dependencies.","Published":"2017-06-08","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"chillR","Version":"0.66","Title":"Statistical Methods for Phenology Analysis in Temperate Fruit\nTrees","Description":"The phenology of plants (i.e. the timing of their annual life\n phases) depends on climatic cues. For temperate trees and many other plants,\n spring phases, such as leaf emergence and flowering, have been found to result\n from the effects of both cool (chilling) conditions and heat. Fruit tree\n scientists (pomologists) have developed some metrics to quantify chilling\n and heat. 'chillR' contains functions for processing temperature records into\n chilling (Chilling Hours, Utah Chill Units and Chill Portions) and heat units\n (Growing Degree Hours). Regarding chilling metrics, Chill Portions are often\n considered the most promising, but they are difficult to calculate. This package\n makes it easy. 'chillR' also contains procedures for conducting a PLS analysis\n relating phenological dates (e.g. bloom dates) to either mean temperatures or\n mean chill and heat accumulation rates, based on long-term weather and phenology\n records. 
As of version 0.65, it also includes functions for generating weather\n scenarios with a weather generator, for conducting climate change analyses\n for temperature-based climatic metrics and for plotting results from such\n analyses.","Published":"2017-03-24","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"chinese.misc","Version":"0.1.6","Title":"Miscellaneous Tools for Chinese Text Mining and More","Description":"Efforts are made to make Chinese text mining easier, faster, and robust to errors. \n Document term matrix can be generated by only one line of code; detecting encoding, \n segmenting and removing stop words are done automatically. \n\tSome convenient tools are also supplied.","Published":"2017-05-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"chipPCR","Version":"0.0.8-10","Title":"Toolkit of Helper Functions to Pre-Process Amplification Data","Description":"A collection of functions to pre-process amplification curve data from polymerase chain reaction (PCR) or isothermal amplification reactions. Contains functions to normalize and baseline amplification curves, to detect both the start and end of an amplification reaction, several smoothers (e.g., LOWESS, moving average, cubic splines, Savitzky-Golay), a function to detect false positive amplification reactions and a function to determine the amplification efficiency. Quantification point (Cq) methods include the first (FDM) and second approximate derivative maximum (SDM) methods (calculated by a 5-point-stencil) and the cycle threshold method. Data sets of experimental nucleic acid amplification systems (VideoScan HCU, capillary convective PCR (ccPCR)) and commercial systems are included. Amplification curves were generated by helicase dependent amplification (HDA), ccPCR or PCR. As detection system intercalating dyes (EvaGreen, SYBR Green) and hydrolysis probes (TaqMan) were used. 
","Published":"2015-04-10","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ChIPtest","Version":"1.0","Title":"Nonparametric Methods for Identifying Differential Enrichment\nRegions with ChIP-Seq Data","Description":"Nonparametric tests to identify the differential enrichment region for two conditions or time-course ChIP-seq data. It includes: a data preprocessing function, estimation of a small constant used in hypothesis testing, a kernel-based two-sample nonparametric test, and two assumption-free two-sample nonparametric tests.","Published":"2016-07-20","License":"GPL (>= 2.15.1)","snapshot_date":"2017-06-23"} {"Package":"CHMM","Version":"0.1.0","Title":"Coupled Hidden Markov Models","Description":"An exact and a variational inference for\n coupled Hidden Markov Models applied to the joint detection of copy number variations.","Published":"2017-04-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"chngpt","Version":"2016.7-31","Title":"Change Point Regression","Description":"Change point regression models are also called two-phase regression, break-point regression, split-point regression, structural change models and threshold regression models. Hypothesis testing in change point logistic regression with or without interaction terms. Several options are provided for testing in models with interaction, including a maximum of likelihood ratios test that determines the p-value through Monte Carlo. Estimation under the change point model is also included, but is less developed at this point.","Published":"2016-07-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CHNOSZ","Version":"1.1.0","Title":"Chemical Thermodynamics and Activity Diagrams","Description":"An integrated set of tools for thermodynamic calculations in compositional\n biology and geochemistry. Thermodynamic properties are taken from a database for minerals\n and inorganic and organic aqueous species including biomolecules, or from amino acid\n group additivity for proteins. 
High-temperature properties are calculated using the\n revised Helgeson-Kirkham-Flowers equations of state for aqueous species. Functions are\n provided to define a system using basis species, automatically balance reactions,\n calculate the chemical affinities of reactions for selected species, and plot the results\n on potential diagrams or equilibrium activity diagrams. Experimental features are\n available to calculate activity coefficients for aqueous species or for multidimensional\n optimization of thermodynamic variables using an objective function.","Published":"2017-05-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ChocoLattes","Version":"0.1.0","Title":"Processing Data from Lattes CV Files","Description":"Processes data from Lattes CV \n () XML files. Extract, condition, and plot \n lists of journal and conference papers, book chapters, books, \n and more.","Published":"2017-04-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"choiceDes","Version":"0.9-1","Title":"Design Functions for Choice Studies","Description":"This package consists of functions to design DCMs and other types of choice \n studies (including MaxDiff and other tradeoffs)","Published":"2014-11-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ChoiceModelR","Version":"1.2","Title":"Choice Modeling in R","Description":"Implements an MCMC algorithm to estimate a hierarchical\n multinomial logit model with a normal heterogeneity\n distribution. The algorithm uses a hybrid Gibbs Sampler with a\n random walk metropolis step for the MNL coefficients for each\n unit. Dependent variable may be discrete or continuous.\n Independent variables may be discrete or continuous with\n optional order constraints. 
Means of the distribution of\n heterogeneity can optionally be modeled as a linear function of\n unit characteristics variables.","Published":"2012-11-20","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"choplump","Version":"1.0-0.4","Title":"Choplump tests","Description":"Choplump Tests are Permutation Tests for Comparing Two Groups with Some Positive but Many Zero Responses","Published":"2014-11-24","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"chopthin","Version":"0.2.1","Title":"The Chopthin Resampler","Description":"Resampling is a standard step in particle filtering and in\n sequential Monte Carlo. This package implements the chopthin resampler, which\n keeps a bound on the ratio between the largest and the smallest weights after\n resampling.","Published":"2016-01-05","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ChoR","Version":"0.0-1","Title":"Chordalysis R Package","Description":"\n Learning the structure of graphical models from datasets with thousands of variables.\n More information about the research papers detailing the theory behind Chordalysis is available at\n (KDD 2016, SDM 2015, ICDM 2014, ICDM 2013).\n The R package development site is .","Published":"2017-02-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"chords","Version":"0.95.4","Title":"Estimation in Respondent Driven Samples","Description":"Maximum likelihood estimation in respondent driven samples.","Published":"2017-01-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"choroplethr","Version":"3.6.1","Title":"Simplify the Creation of Choropleth Maps in R","Description":"Choropleths are thematic maps where geographic regions, such as\n states, are colored according to some metric, such as the number of people\n who live in that state. This package simplifies this process by 1.\n Providing ready-made functions for creating choropleths of common maps. 
2.\n Providing data and API connections to interesting data sources for making\n choropleths. 3. Providing a framework for creating choropleths from\n arbitrary shapefiles. 4. Overlaying those maps over reference maps from\n Google Maps. ","Published":"2017-04-16","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"choroplethrAdmin1","Version":"1.1.1","Title":"Contains an Administrative-Level-1 Map of the World","Description":"Contains an administrative-level-1 map of the world.\n Administrative-level-1 is the generic term for the largest sub-national\n subdivision of a country. This package was created for use with the\n choroplethr package.","Published":"2017-02-22","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"choroplethrMaps","Version":"1.0.1","Title":"Contains Maps Used by the 'choroplethr' Package","Description":"Contains 3 maps. 1) US States 2) US Counties 3) Countries of the\n world.","Published":"2017-01-31","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"chromer","Version":"0.1","Title":"Interface to Chromosome Counts Database API","Description":"A programmatic interface to the Chromosome Counts Database\n (http://ccdb.tau.ac.il/). 
This package is part of the rOpenSci suite\n (http://ropensci.org)","Published":"2015-01-13","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"chromoR","Version":"1.0","Title":"Analysis of chromosomal interactions data (correction,\nsegmentation and comparison)","Description":"chromoR provides users with a statistical pipeline for analysing chromosomal interactions data (Hi-C data). It combines wavelet methods and a Bayesian approach for correction (bias and noise) and comparison (detecting significant changes between Hi-C maps) of Hi-C contact maps. In addition, it also supports detection of change points in 1D Hi-C contact profiles.","Published":"2014-02-11","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"chron","Version":"2.3-50","Title":"Chronological Objects which can Handle Dates and Times","Description":"Provides chronological objects which can handle dates and times.","Published":"2017-02-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"CHsharp","Version":"0.4","Title":"Choi and Hall Style Data Sharpening","Description":"Functions for use in perturbing data prior to use of nonparametric smoothers\n and clustering. ","Published":"2015-10-16","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"chunked","Version":"0.3","Title":"Chunkwise Text-File Processing for 'dplyr'","Description":"Text data can be processed chunkwise using 'dplyr' commands. 
These\n are recorded and executed per data chunk, so large files can be processed with\n limited memory using the 'LaF' package.","Published":"2016-06-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"CIAAWconsensus","Version":"1.1","Title":"Isotope Ratio Meta-Analysis","Description":"Calculation of consensus values for atomic weights, isotope amount ratios, and isotopic abundances with the associated uncertainties using multivariate meta-regression approach for consensus building.","Published":"2016-12-31","License":"Unlimited","snapshot_date":"2017-06-23"} {"Package":"CIDnetworks","Version":"0.8.1","Title":"Generative Models for Complex Networks with Conditionally\nIndependent Dyadic Structure","Description":"Generative models for complex networks with conditionally independent dyadic structure. Now supports directed arcs!","Published":"2015-04-08","License":"GPL (> 3)","snapshot_date":"2017-06-23"} {"Package":"CIFsmry","Version":"1.0.1.1","Title":"Weighted summary of cumulative incidence functions","Description":"Estimate of cumulative incidence function in two samples. Provide weighted summary statistics based on various methods and weights. ","Published":"2016-07-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cifti","Version":"0.4.2","Title":"Toolbox for Connectivity Informatics Technology Initiative\n('CIFTI') Files","Description":"Functions for the input/output and visualization of\n medical imaging data in the form of 'CIFTI' files \n .","Published":"2017-05-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"cin","Version":"0.1","Title":"Causal Inference for Neuroscience","Description":"Many experiments in neuroscience involve randomized and fast stimulation while the continuous outcome measures respond at much slower time scale, for example event-related fMRI. 
This package provides valid statistical tools with causal interpretation under these challenging settings, without imposing model assumptions.","Published":"2011-12-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"CINID","Version":"1.2","Title":"Curculionidae INstar IDentification","Description":"This package implements a method for identifying the instar of Curculionid larvae from the observed distribution of the headcapsule size of mature larvae.","Published":"2014-10-07","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"CINOEDV","Version":"2.0","Title":"Co-Information based N-Order Epistasis Detector and Visualizer","Description":"Detecting and visualizing nonlinear interaction effects of single nucleotide polymorphisms or epistatic interactions, especially high-order epistatic interactions, are important topics in bioinformatics because of their significant mathematical and computational challenges. We present CINOEDV (Co-Information based N-Order Epistasis Detector and Visualizer) for detecting, visualizing, and analyzing high-order epistatic interactions by introducing virtual vertices into the construction of a hypergraph. 
CINOEDV was developed as an alternative to existing software to build a global picture of epistatic interactions and unexpected high-order epistatic interactions, which might provide useful clues for understanding the underlying genetic architecture of complex diseases.","Published":"2014-11-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"cir","Version":"2.0.0","Title":"Centered Isotonic Regression and Dose-Response Utilities","Description":"Isotonic regression (IR), as well as a great small-sample improvement to IR called\n CIR, interval estimates for both, and additional utilities.","Published":"2017-03-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"CircE","Version":"1.1","Title":"Circumplex models Estimation","Description":"This package contains functions for fitting circumplex\n structural models for correlation matrices (with negative\n correlation) by the method of maximum likelihood.","Published":"2014-09-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"circlize","Version":"0.4.0","Title":"Circular Visualization","Description":"Circular layout is an efficient way for the visualization of huge \n amounts of information. Here this package provides an implementation \n of circular layout generation in R as well as an enhancement of available \n software. The flexibility of the package is based on the usage of low-level \n graphics functions such that self-defined high-level graphics can be easily \n implemented by users for specific purposes. 
Together with the seamless \n connection between the powerful computational and visual environment in R, \n it gives users more convenience and freedom to design figures for \n better understanding complex patterns behind multidimensional data.","Published":"2017-05-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CircMLE","Version":"0.1.0","Title":"Maximum Likelihood Analysis of Circular Data","Description":"A series of wrapper functions to\n implement the 10 maximum likelihood models of animal orientation\n described by Schnute and Groot (1992) . The\n functions also include the ability to use different optimizer\n methods and calculate various model selection metrics (i.e., AIC,\n AICc, BIC).","Published":"2017-05-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CircNNTSR","Version":"2.2","Title":"Statistical Analysis of Circular Data using Nonnegative\nTrigonometric Sums (NNTS) Models","Description":"Includes functions for the analysis of circular data using distributions based on Nonnegative Trigonometric Sums (NNTS). The package includes functions for calculation of densities and distributions, for the estimation of parameters, for plotting and more.","Published":"2016-05-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CircOutlier","Version":"3.2.3","Title":"Detection of Outliers in Circular-Circular Regression","Description":"Detection of outliers in circular-circular regression models, and estimation of model parameters.","Published":"2016-01-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CircStats","Version":"0.2-4","Title":"Circular Statistics, from \"Topics in circular Statistics\" (2001)","Description":"Circular Statistics, from \"Topics in circular Statistics\"\n (2001) S. Rao Jammalamadaka and A. 
SenGupta, World Scientific.","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"circular","Version":"0.4-7","Title":"Circular Statistics","Description":"Circular Statistics, from \"Topics in circular Statistics\" (2001) S. Rao Jammalamadaka and A. SenGupta, World Scientific.","Published":"2013-11-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"CircularDDM","Version":"0.0.9","Title":"Circular Drift-Diffusion Model","Description":"Circular drift-diffusion model for continuous reports.","Published":"2017-04-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"cIRT","Version":"1.2.1","Title":"Choice Item Response Theory","Description":"Jointly model the accuracy of cognitive responses and item choices\n within a bayesian hierarchical framework as described by Culpepper and\n Balamuta (2015) . In addition, the package\n contains the datasets used within the analysis of the paper.","Published":"2017-04-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"cit","Version":"2.1","Title":"Causal Inference Test","Description":"A likelihood-based hypothesis testing approach is implemented for assessing causal mediation. For example, it could be used to test for mediation of a known causal association between a DNA variant, the 'instrumental variable', and a clinical outcome or phenotype by gene expression or DNA methylation, the potential mediator. Another example would be testing mediation of the effect of a drug on a clinical outcome by the molecular target. The hypothesis test generates a p-value or permutation-based FDR value with confidence intervals to quantify uncertainty in the causal inference. 
The outcome can be represented by either a continuous or binary variable, the potential mediator is continuous, and the instrumental variable can be continuous or binary and is not limited to a single variable but may be a design matrix representing multiple variables.","Published":"2016-11-15","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"CITAN","Version":"2015.12-2","Title":"CITation ANalysis Toolpack","Description":"Supports quantitative\n research in scientometrics and bibliometrics. Provides\n various tools for preprocessing bibliographic\n data retrieved, e.g., from Elsevier's SciVerse Scopus,\n computing bibliometric impact of individuals,\n or modeling many phenomena encountered in the social sciences.","Published":"2015-12-13","License":"LGPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"citbcmst","Version":"1.0.4","Title":"CIT Breast Cancer Molecular SubTypes Prediction","Description":"This package implements the approach to assign tumor gene expression dataset to the 6 CIT Breast Cancer Molecular Subtypes described in Guedj et al 2012.","Published":"2014-01-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"citccmst","Version":"1.0.2","Title":"CIT Colon Cancer Molecular SubTypes Prediction","Description":"This package implements the approach to assign tumor gene expression dataset to the 6 CIT Colon Cancer Molecular Subtypes described in Marisa et al 2013.","Published":"2014-01-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Cite","Version":"0.1.0","Title":"An RStudio Addin to Insert BibTex Citation in Rmarkdown\nDocuments","Description":"Contain an RStudio addin to insert BibTex citation in Rmarkdown documents with a minimal user interface.","Published":"2016-07-04","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"citr","Version":"0.2.0","Title":"'RStudio' Add-in to Insert Markdown Citations","Description":"Functions and an 'RStudio' add-in that search a 'BibTeX'-file to create 
and\n insert formatted Markdown citations into the current document.","Published":"2016-09-20","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"CityPlot","Version":"2.0","Title":"Visualization of structure and contents of a database","Description":"Input: a csv-file for each database table and a\n controlfile describing relations between tables. Output: An\n extended ER diagram","Published":"2012-05-07","License":"LGPL","snapshot_date":"2017-06-23"} {"Package":"CityWaterBalance","Version":"0.1.0","Title":"Track Flows of Water Through an Urban System","Description":"Retrieves data and estimates unmeasured flows of water through the \n urban network. Any city may be modeled with preassembled data, but data for \n US cities can be gathered via web services using this package and dependencies \n 'geoknife' and 'dataRetrieval'. ","Published":"2017-06-16","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"cjoint","Version":"2.0.4","Title":"AMCE Estimator for Conjoint Experiments","Description":"An R implementation of the Average Marginal Component-specific Effects (AMCE) estimator presented in Hainmueller, J., Hopkins, D., and Yamamoto T. (2014) Causal Inference in Conjoint Analysis: Understanding Multi-Dimensional Choices via Stated Preference Experiments. Political Analysis 22(1):1-30.","Published":"2016-03-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ck37r","Version":"1.0.0","Title":"Chris Kennedy's R Toolkit","Description":"Toolkit for statistical, machine learning, and targeted learning\n analyses. 
Functionality includes loading & auto-installing packages,\n standardizing datasets, creating missingness indicators, imputing missing\n values, creating multicore or multinode clusters, automatic SLURM integration,\n enhancing SuperLearner and TMLE with automatic parallelization, and many other\n SuperLearner analysis & plotting enhancements.","Published":"2017-06-03","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ckanr","Version":"0.1.0","Title":"Client for the Comprehensive Knowledge Archive Network ('CKAN')\n'API'","Description":"Client for 'CKAN' 'API' (http://ckan.org/). Includes interface\n to 'CKAN' 'APIs' for search, list, show for packages, organizations, and\n resources. In addition, provides an interface to the 'datastore' 'API'.","Published":"2015-10-22","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"Ckmeans.1d.dp","Version":"4.2.0","Title":"Optimal and Fast Univariate Clustering","Description":"A fast dynamic programming algorithmic framework to\n achieve optimal univariate k-means, k-median, and k-segments\n clustering. Minimizing the sum of respective within-cluster\n distances, the algorithms guarantee optimality and\n reproducibility. Their advantage over heuristic clustering\n algorithms in efficiency and accuracy is increasingly pronounced\n as the number of clusters k increases. Weighted k-means and\n unweighted k-segments algorithms can also optimally segment time\n series and perform peak calling. An auxiliary function generates\n histograms that are adaptive to patterns in data. This package\n provides a powerful alternative to heuristic methods for\n univariate data analysis.","Published":"2017-05-30","License":"LGPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"cladoRcpp","Version":"0.14.4","Title":"C++ implementations of phylogenetic cladogenesis calculations","Description":"This package implements in C++/Rcpp various cladogenesis-related calculations that are slow in pure R. 
These include the calculation of the probability of various scenarios for the inheritance of geographic range at the divergence events on a phylogenetic tree, and other calculations necessary for models which are not continuous-time markov chains (CTMC), but where change instead occurs instantaneously at speciation events. Typically these models must assess the probability of every possible combination of (ancestor state, left descendent state, right descendent state). This means that there are up to (# of states)^3 combinations to investigate, and in biogeographical models, there can easily be hundreds of states, so calculation time becomes an issue. C++ implementation plus clever tricks (many combinations can be eliminated a priori) can greatly speed the computation time over naive R implementations. CITATION INFO: This package is the result of my Ph.D. research, please cite the package if you use it! Type: citation(package=\"cladoRcpp\") to get the citation information.","Published":"2014-05-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"clampSeg","Version":"1.0-1","Title":"Idealisation of Patch Clamp Recordings","Description":"Allows for idealisation of patch clamp recordings by implementing the non-parametric JUmp Local\n dEconvolution Segmentation filter JULES.","Published":"2017-06-07","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ClamR","Version":"2.1-1","Title":"Time Series Modeling for Climate Change Proxies","Description":"Implementation of the Wilkinson and Ivany (2002) approach to paleoclimate analysis, applied to isotope data extracted from clams.","Published":"2015-07-29","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"clarifai","Version":"0.4.2","Title":"Access to Clarifai API","Description":"Get description of images from Clarifai API. For more information,\n see . Clarifai uses a large deep learning cloud to come\n up with descriptive labels of the things in an image. 
It also reports how\n confident it is about each of the labels.","Published":"2017-04-12","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"class","Version":"7.3-14","Title":"Functions for Classification","Description":"Various functions for classification, including k-nearest\n neighbour, Learning Vector Quantization and Self-Organizing Maps.","Published":"2015-08-30","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"classGraph","Version":"0.7-5","Title":"Construct Graphs of S4 Class Hierarchies","Description":"Construct directed graphs of S4 class hierarchies and\n visualize them. These graphs are typically DAGs (directed\n acyclic graphs), often simple trees in practice.","Published":"2015-09-01","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"classifierplots","Version":"1.3.3","Title":"Generates a Visualization of Classifier Performance as a Grid of\nDiagnostic Plots","Description":"\n Generates a visualization of binary classifier performance as a grid of\n diagnostic plots with just one function call. Includes ROC curves,\n prediction density, accuracy, precision, recall and calibration plots, all using\n ggplot2 for easy modification.\n Debug your binary classifiers faster and easier!","Published":"2017-04-06","License":"BSD 3-clause License + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"classifly","Version":"0.4","Title":"Explore classification models in high dimensions","Description":"Given $p$-dimensional training data containing\n $d$ groups (the design space), a classification\n algorithm (classifier) predicts which group new data\n belongs to. Generally the input to these algorithms is\n high dimensional, and the boundaries between groups\n will be high dimensional and perhaps curvilinear or\n multi-faceted. 
This package implements methods for\n understanding the division of space between the groups.","Published":"2014-04-23","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"classiFunc","Version":"0.1.0","Title":"Classification of Functional Data","Description":"Efficient implementation of k-nearest neighbor estimator and a kernel estimator for functional data classification.","Published":"2017-05-29","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"classify","Version":"1.3","Title":"Classification Accuracy and Consistency under IRT models","Description":"IRT classification uses the probability that candidates of\n a given ability, will answer correctly questions of a specified\n difficulty to calculate the probability of their achieving\n every possible score in a test. Due to the IRT assumption of\n conditional independence (that is every answer given is assumed\n to depend only on the latent trait being measured) the\n probability of candidates achieving these potential scores can\n be expressed by multiplication of probabilities for item\n responses for a given ability. 
Once the true score and the\n probabilities of achieving all other scores have been\n determined for a candidate the probability of their score lying\n in the same category as that of their true score\n (classification accuracy), or the probability of consistent\n classification in a category over administrations\n (classification consistency), can be calculated.","Published":"2014-08-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"classInt","Version":"0.1-24","Title":"Choose Univariate Class Intervals","Description":"Selected commonly used methods for choosing univariate class intervals for mapping or other graphics purposes.","Published":"2017-04-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"classyfire","Version":"0.1-2","Title":"Robust multivariate classification using highly optimised SVM\nensembles","Description":"A collection of functions for the creation and application of highly optimised, robustly evaluated ensembles of support vector machines (SVMs). The package takes care of training individual SVM classifiers using a fast parallel heuristic algorithm, and combines individual classifiers into ensembles. Robust metrics of classification performance are offered by bootstrap resampling and permutation testing. ","Published":"2015-01-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cld2","Version":"1.1","Title":"Google's Compact Language Detector 2","Description":"Bindings to Google's C++ library Compact Language Detector 2\n (see for more information). Probabilistically\n detects over 80 languages in plain text or HTML. For mixed-language input it returns the\n top three detected languages and their approximate proportion of the total classified \n text bytes (e.g. 80% English and 20% French out of 1000 bytes). 
There is also a 'cld3'\n package on CRAN which uses a neural network model instead.","Published":"2017-06-10","License":"Apache License 2.0","snapshot_date":"2017-06-23"} {"Package":"cld3","Version":"1.0","Title":"Google's Compact Language Detector 3","Description":"Google's Compact Language Detector 3 is a neural network model for language \n identification and the successor of 'cld2' (available from CRAN). The algorithm is still\n experimental and takes a novel approach to language detection with different properties\n and outcomes. It can be useful to combine this with the Bayesian classifier results \n from 'cld2'. See for more information.","Published":"2017-06-07","License":"Apache License 2.0","snapshot_date":"2017-06-23"} {"Package":"cleanEHR","Version":"0.1","Title":"The Critical Care Clinical Data Processing Tools","Description":"\n A toolset to deal with the Critical Care Health Informatics Collaborative\n dataset. It is created to address various data reliability and accessibility\n problems of electronic healthcare records (EHR). It provides a unique\n platform which enables data manipulation, transformation, reduction,\n anonymisation, cleaning and validation.","Published":"2017-02-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"cleangeo","Version":"0.2-1","Title":"Cleaning Geometries from Spatial Objects","Description":"\n Provides a set of utility tools to inspect spatial objects, facilitate\n handling and reporting of topology errors and geometry validity issues.\n Finally, it provides a geometry cleaner that will fix all geometry problems,\n and eliminate (at least reduce) the likelihood of having issues when doing\n spatial data processing.","Published":"2016-11-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cleanNLP","Version":"1.9.0","Title":"A Tidy Data Model for Natural Language Processing","Description":"Provides a set of fast tools for converting a textual corpus into a set of normalized\n tables. 
Users may make use of a Python back end with 'spaCy' \n or the Java back end 'CoreNLP' . A minimal back\n end with no external dependencies is also provided. Exposed annotation tasks include\n tokenization, part of speech tagging, named entity recognition, entity linking, sentiment\n analysis, dependency parsing, coreference resolution, and word embeddings. Summary\n statistics regarding token unigram, part of speech tag, and dependency type frequencies\n are also included to assist with analyses.","Published":"2017-05-27","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"cleanr","Version":"1.1.3","Title":"Helps You to Code Cleaner","Description":"Check your R code for some of the most common layout flaws.\n Many tried to teach us how to write code less dreadful, be it implicitly as\n B. W. Kernighan and D. M. Ritchie (1988) \n in 'The C Programming Language' did, be it\n explicitly as R.C. Martin (2008) in\n 'Clean Code: A Handbook of Agile Software Craftsmanship' did.\n So we should check our code for files too long or wide, functions with too\n many lines, too wide lines, too many arguments or too many levels of \n nesting.\n Note: This is not a static code analyzer like pylint or the like. Checkout\n https://github.com/jimhester/lintr instead.","Published":"2017-01-18","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"clere","Version":"1.1.4","Title":"Simultaneous Variables Clustering and Regression","Description":"Implements an empirical Bayes approach for simultaneous variable clustering and regression. 
This version also (re)implements in C++ an R script proposed by Howard Bondell that fits the Pairwise Absolute Clustering and Sparsity (PACS) methodology (see Sharma et al (2013) ).","Published":"2016-03-22","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"clhs","Version":"0.5-6","Title":"Conditioned Latin Hypercube Sampling","Description":"Conditioned Latin hypercube sampling, as published by Minasny and McBratney (2006) . This method proposes to stratify sampling in the presence of ancillary data. An extension of this method, which proposes to associate a cost with each individual and to take it into account during the optimisation process, is also proposed (Roudier et al., 2012, ).","Published":"2016-10-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ClickClust","Version":"1.1.5","Title":"Model-Based Clustering of Categorical Sequences","Description":"Clustering categorical sequences by means of finite mixtures with Markov model components is the main utility of ClickClust. The package also allows detecting blocks of equivalent states by forward and backward state selection procedures.","Published":"2016-10-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"clickR","Version":"0.2.0","Title":"Fix Data and Create Report Tables from Different Objects","Description":"Fixes data errors in numerical, factor and date variables, checks data quality and produces report tables from models and summaries.","Published":"2017-02-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"clickstream","Version":"1.2.1","Title":"Analyzes Clickstreams Based on Markov Chains","Description":"A set of tools to read, analyze and write lists of click sequences\n on websites (i.e., clickstream). A click can be represented by a number,\n character or string. 
Clickstreams can be modeled as zero- (only computes\n occurrence probabilities), first- or higher-order Markov chains.","Published":"2017-05-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"clifro","Version":"3.1-4","Title":"Easily Download and Visualise Climate Data from CliFlo","Description":"CliFlo is a web portal to the New Zealand National Climate\n Database and provides public access (via subscription) to around 6,500\n various climate stations (see for more\n information). Collating and manipulating data from CliFlo\n (hence clifro) and importing into R for further analysis, exploration and\n visualisation is now straightforward and coherent. The user is required to\n have an internet connection, and a current CliFlo subscription (free) if\n data from stations, other than the public Reefton electronic weather\n station, is sought.","Published":"2017-04-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"clikcorr","Version":"1.0","Title":"Censoring Data and Likelihood-Based Correlation Estimation","Description":"A profile likelihood based method of estimation and inference on the correlation coefficient of bivariate data with different types of censoring and missingness.","Published":"2016-06-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"climatol","Version":"3.0","Title":"Climate Tools (Series Homogenization and Derived Products)","Description":"Functions to homogenize climatological series and to produce climatological summaries and grids from the homogenized results, plus functions to draw wind-roses and Walter&Lieth diagrams.","Published":"2016-08-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"climbeR","Version":"0.0.1","Title":"Calculate Average Minimal Depth of a Maximal Subtree for\n'ranger' Package Forests","Description":"Calculates first, and second order, average minimal depth of a\n maximal subtree for a forest object produced by the R 'ranger'\n package. 
This variable importance metric is implemented as described in\n Ishwaran et. al. (\"High-Dimensional Variable Selection for Survival Data\",\n March 2010, ).","Published":"2016-11-19","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ClimClass","Version":"2.1.0","Title":"Climate Classification According to Several Indices","Description":"Classification of climate according to Koeppen - Geiger, of aridity\n indices, of continentality indices, of water balance after Thornthwaite, of\n viticultural bioclimatic indices. Drawing climographs: Thornthwaite, Peguy,\n Bagnouls-Gaussen.","Published":"2016-08-04","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"climdex.pcic","Version":"1.1-6","Title":"PCIC Implementation of Climdex Routines","Description":"PCIC's implementation of Climdex routines for computation of\n extreme climate indices.","Published":"2015-06-30","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ClimDown","Version":"1.0.2","Title":"Climate Downscaling Library for Daily Climate Model Output","Description":"A suite of routines for downscaling coarse scale global\n climate model (GCM) output to a fine spatial resolution. Includes\n Bias-Corrected Spatial Downscaling (BCDS), Constructed Analogues\n (CA), Climate Imprint (CI), and Bias Correction/Constructed\n Analogues with Quantile mapping reordering (BCCAQ). Developed by\n the the Pacific Climate Impacts Consortium (PCIC), Victoria,\n British Columbia, Canada.","Published":"2016-12-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"clime","Version":"0.4.1","Title":"Constrained L1-minimization for Inverse (covariance) Matrix\nEstimation","Description":"A robust constrained L1 minimization method for estimating\n a large sparse inverse covariance matrix (aka precision\n matrix), and recovering its support for building graphical\n models. 
The computation uses linear programming.","Published":"2012-05-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"climextRemes","Version":"0.1.3","Title":"Tools for Analyzing Climate Extremes","Description":"Functions for fitting GEV and POT (via point process fitting)\n models for extremes in climate data, providing return values, return\n probabilities, and return periods for stationary and nonstationary models.\n Also provides differences in return values and differences in log return\n probabilities for contrasts of covariate values. Functions for estimating risk\n ratios for event attribution analyses, including uncertainty. Under the hood,\n many of the functions use functions from extRemes, including for fitting the\n statistical models.","Published":"2017-04-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"climtrends","Version":"1.0.6","Title":"Statistical Methods for Climate Sciences","Description":"Absolute homogeneity tests SNHT absolute 1-breaks, 1-break, \n SD different from 1, 2-breaks, Buishand, Pettitt, von Neumann ratio and \n ratio-rank, Worsley, and Craddock, Relative homogeneity tests SNHT \n absolute 1-breaks, 1-break SD different from 1, 2-breaks, Peterson \n and Easterling, and Vincent, Differences in scale between two groups Siegel–Tukey, \n Create reference time series mean, weights/correlation, finding outliers Grubbs, \n ESD, MAD, Tietjen Moore, Hampel, etc.","Published":"2016-05-26","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"climwin","Version":"1.1.0","Title":"Climate Window Analysis","Description":"Contains functions to detect and visualise periods of climate\n sensitivity (climate windows) for a given biological response.","Published":"2016-12-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"clinfun","Version":"1.0.14","Title":"Clinical Trial Design and Data Analysis Functions","Description":"Utilities to make your clinical collaborations easier if not\n fun. 
It contains functions for designing studies such as Simon\n 2-stage and group sequential designs and for data analysis such\n as Jonckheere-Terpstra test and estimating survival quantiles.","Published":"2017-04-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"clinPK","Version":"0.9.0","Title":"Clinical Pharmacokinetics Toolkit","Description":"Calculates equations commonly used in clinical pharmacokinetics and clinical pharmacology, such as equations for dose individualization, compartmental pharmacokinetics, drug exposure, anthropomorphic calculations, clinical chemistry, and conversion of common clinical parameters. Where possible and relevant, it provides multiple published and peer-reviewed equations within the respective R function.","Published":"2017-06-19","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"clinsig","Version":"1.2","Title":"Clinical Significance Functions","Description":"Functions for calculating clinical significance.","Published":"2016-07-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"clinUtiDNA","Version":"1.0","Title":"Clinical Utility of DNA Testing","Description":"This package provides the estimation of an index measuring\n the clinical utility of DNA testing in the context of\n gene-environment interactions on a disease. The corresponding\n gene-environment interaction effect on the additive scale can\n also be obtained. The estimation is based on case-control or\n cohort data. The method was developed by Nguyen et al. 
2013.","Published":"2013-04-10","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"clipr","Version":"0.3.3","Title":"Read and Write from the System Clipboard","Description":"Simple utility functions to read from and write to the Windows,\n OS X, and X11 clipboards.","Published":"2017-06-19","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"clisymbols","Version":"1.2.0","Title":"Unicode Symbols at the R Prompt","Description":"A small subset of Unicode symbols, that are useful\n when building command line applications. They fall back to\n alternatives on terminals that do not support Unicode.\n Many symbols were taken from the 'figures' 'npm' package\n (see ).","Published":"2017-05-21","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"CLME","Version":"2.0-6","Title":"Constrained Inference for Linear Mixed Effects Models","Description":"Estimation and inference for linear models where some or all of the\n fixed-effects coefficients are subject to order restrictions. This package uses\n the robust residual bootstrap methodology for inference, and can handle some\n structure in the residual variance matrix.","Published":"2016-11-08","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"clogitboost","Version":"1.1","Title":"Boosting Conditional Logit Model","Description":"A set of functions to fit a boosting conditional logit model.","Published":"2015-12-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"clogitL1","Version":"1.4","Title":"Fitting exact conditional logistic regression with lasso and\nelastic net penalties","Description":"Tools for the fitting and cross validation of exact conditional logistic regression models with lasso and elastic net penalties. 
Uses cyclic coordinate descent and warm starts to compute the entire path efficiently.","Published":"2014-06-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"clogitLasso","Version":"1.0.1","Title":"Lasso Estimation of Conditional Logistic Regression Models for\nMatched Case-Control Studies","Description":"Fit a sequence of conditional logistic regression models with lasso, for small to large sized samples.","Published":"2016-09-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cloudUtil","Version":"0.1.12","Title":"Cloud Utilization Plots","Description":"Provides plots for comparing utilization data of compute systems.","Published":"2016-06-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"clpAPI","Version":"1.2.7","Title":"R Interface to C API of COIN-OR Clp","Description":"R Interface to C API of COIN-OR Clp, depends on COIN-OR Clp Version >= 1.12.0.","Published":"2016-04-19","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"CLSOCP","Version":"1.0","Title":"A smoothing Newton method SOCP solver","Description":"This package provides an implementation of a one-step\n smoothing Newton method for the solution of second order cone\n programming problems, originally described by Xiaoni Chi and\n Sanyang Liu.","Published":"2011-07-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"clttools","Version":"1.3","Title":"Central Limit Theorem Experiments (Theoretical and Simulation)","Description":"Central limit theorem experiments presented by data frames or plots. 
Functions include generating theoretical sample space, corresponding probability, and simulated results as well.","Published":"2016-02-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"clubSandwich","Version":"0.2.2","Title":"Cluster-Robust (Sandwich) Variance Estimators with Small-Sample\nCorrections","Description":"Provides several cluster-robust variance estimators\n (i.e., sandwich estimators) for ordinary and weighted least squares linear\n regression models, including the bias-reduced linearization estimator introduced \n by Bell and McCaffrey (2002) \n and developed further by Pustejovsky and Tipton (2016) .\n The package includes functions for estimating the variance-\n covariance matrix and for testing single- and multiple-contrast hypotheses\n based on Wald test statistics. Tests of single regression coefficients use\n Satterthwaite or saddle-point corrections. Tests of multiple-contrast hypotheses \n use an approximation to Hotelling's T-squared distribution. Methods are\n provided for a variety of fitted models, including lm(), plm() (from package 'plm'),\n gls() and lme() (from 'nlme'), robu() (from 'robumeta'), and rma.uni() and rma.mv() (from\n 'metafor').","Published":"2016-12-01","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"clue","Version":"0.3-53","Title":"Cluster Ensembles","Description":"CLUster Ensembles.","Published":"2017-01-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ClueR","Version":"1.2","Title":"Cluster Evaluation","Description":"CLUster Evaluation (CLUE) is a computational method for identifying optimal number of clusters in a given time-course dataset clustered by cmeans or kmeans algorithms and subsequently identify key kinases or pathways from each cluster. Its implementation in R is called ClueR. 
See Readme on for more details.","Published":"2017-04-30","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"clues","Version":"0.5.9","Title":"Clustering Method Based on Local","Description":"We developed the clues R package to provide functions \n for automatically estimating the number of clusters and \n getting the final cluster partition without any input \n parameter except the stopping rule for convergence. \n The package also provides functions to\n evaluate and compare the performances of partitions of a data\n set both numerically and graphically.","Published":"2016-10-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CluMix","Version":"2.0","Title":"Clustering and Visualization of Mixed-Type Data","Description":"Provides utilities for clustering subjects and variables of mixed data types. Similarities between subjects are measured by Gower's general similarity coefficient with an extension of Podani for ordinal variables. Similarities between variables are assessed by combination of appropriate measures of association for different pairs of data types. Alternatively, variables can also be clustered by the 'ClustOfVar' approach. The main feature of the package is the generation of a mixed-data heatmap. For visualizing similarities between either subjects or variables, a heatmap of the corresponding distance matrix can be drawn. Associations between variables can be explored by a 'confounderPlot', which allows visual detection of possible confounding, collinear, or surrogate factors for some variables of primary interest. Distance matrices and dendrograms for subjects and variables can be derived and used for further visualizations and applications. 
This work was supported by BMBF grant 01ZX1609B, Germany.","Published":"2017-05-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"clusrank","Version":"0.5-2","Title":"Wilcoxon Rank Sum Test for Clustered Data","Description":"Non-parametric tests (Wilcoxon rank sum test and Wilcoxon signed rank test) for clustered data.","Published":"2017-01-17","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"clust.bin.pair","Version":"0.0.6","Title":"Statistical Methods for Analyzing Clustered Matched Pair Data","Description":"Tests, utilities, and case studies for analyzing significance in clustered binary matched-pair\n data. The central function clust.bin.pair uses one of several tests to calculate a Chi-square \n statistic. Implemented are the tests Eliasziw, Obuchowski, Durkalski, and Yang with McNemar\n included for comparison. The utility functions nested.to.contingency and paired.to.contingency\n convert data between various useful formats. Thyroids and psychiatry are the canonical\n datasets from Obuchowski and Petryshen respectively.","Published":"2016-10-26","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"cluster","Version":"2.0.6","Title":"\"Finding Groups in Data\": Cluster Analysis Extended Rousseeuw et\nal.","Description":"Methods for Cluster analysis. 
Much extended the original from\n\tPeter Rousseeuw, Anja Struyf and Mia Hubert,\n\tbased on Kaufman and Rousseeuw (1990) \"Finding Groups in Data\".","Published":"2017-03-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cluster.datasets","Version":"1.0-1","Title":"Cluster Analysis Data Sets","Description":"A collection of data sets for teaching cluster analysis.","Published":"2013-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ClusterBootstrap","Version":"0.9.3","Title":"Analyze Clustered Data with Generalized Linear Models using the\nCluster Bootstrap","Description":"Provides functionality for the analysis of clustered data using the cluster bootstrap. ","Published":"2017-06-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"clusterCrit","Version":"1.2.7","Title":"Clustering Indices","Description":"Compute clustering validation indices.","Published":"2016-05-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ClusteredMutations","Version":"1.0.1","Title":"Location and Visualization of Clustered Somatic Mutations","Description":"Identification and visualization of groups of closely spaced mutations in the DNA sequence of cancer genome. The extremely mutated zones are searched in the symmetric dissimilarity matrix using the anti-Robinson matrix properties. Different data sets are obtained to describe and plot the clustered mutations information. ","Published":"2016-04-29","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"clusterfly","Version":"0.4","Title":"Explore clustering interactively using R and GGobi","Description":"Visualise clustering algorithms with GGobi. 
Contains both\n general code for visualising clustering results and specific\n visualisations for model-based, hierarchical and SOM clustering.","Published":"2014-04-24","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"clusterGeneration","Version":"1.3.4","Title":"Random Cluster Generation (with Specified Degree of Separation)","Description":"We developed the clusterGeneration package to provide functions \n for generating random clusters, generating random \n covariance/correlation matrices,\n calculating a separation index (data and population version)\n for pairs of clusters or cluster distributions, and 1-D and 2-D\n projection plots to visualize clusters. The package also\n contains a function to generate random clusters based on\n factorial designs with factors such as degree of separation,\n number of clusters, number of variables, number of noisy\n variables.","Published":"2015-02-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"clusterGenomics","Version":"1.0","Title":"Identifying clusters in genomics data by recursive partitioning","Description":"The Partitioning Algorithm based on Recursive Thresholding\n (PART) is used to recursively uncover clusters and subclusters\n in the data. Functionality is also available for visualization\n of the clustering.","Published":"2013-07-02","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"clusterhap","Version":"0.1","Title":"Clustering Genotypes in Haplotypes","Description":"One haplotype is a combination of SNPs\n (Single Nucleotide Polymorphisms) within the QTL (Quantitative Trait Loci).\n clusterhap groups together all individuals of a population with the same haplotype.\n Each group contains individuals with the same allele at each SNP,\n whether or not data are missing. Thus, clusterhap groups individuals\n that, if imputed, have a non-zero probability of having the same alleles\n in the entire sequence of SNPs. 
Moreover, clusterhap calculates this\n probability from relative frequencies.","Published":"2016-05-16","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"clustering.sc.dp","Version":"1.0","Title":"Optimal Distance-Based Clustering for Multidimensional Data with\nSequential Constraint","Description":"A dynamic programming algorithm for optimal clustering of multidimensional data with a sequential constraint. The algorithm minimizes the sum of squares of within-cluster distances. The sequential constraint allows only subsequent items of the input data to form a cluster. The sequential constraint is typically required in clustering data streams or items with time stamps such as video frames, GPS signals of a vehicle, movement data of a person, e-pen data, etc. The algorithm represents an extension of Ckmeans.1d.dp to multiple dimensional spaces. Similarly to the one-dimensional case, the algorithm guarantees optimality and repeatability of clustering. Method clustering.sc.dp can find the optimal clustering if the number of clusters is known. Otherwise, methods findwithinss.sc.dp and backtracking.sc.dp can be used.","Published":"2015-05-04","License":"LGPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"clusternomics","Version":"0.1.1","Title":"Integrative Clustering for Heterogeneous Biomedical Datasets","Description":"Integrative context-dependent clustering for heterogeneous\n biomedical datasets. Identifies local clustering structures in related\n datasets, and global clusters that exist across the datasets.","Published":"2017-03-14","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"clusterPower","Version":"0.5","Title":"Power calculations for cluster-randomized and cluster-randomized\ncrossover trials","Description":"This package enables researchers to calculate power for cluster-randomized crossover trials by employing a simulation-based approach. 
A particular study design is specified, with fixed sample sizes for all clusters and an assumed treatment effect, and the empirical power for that study design is calculated by simulating hypothetical datasets.","Published":"2013-12-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ClusterR","Version":"1.0.5","Title":"Gaussian Mixture Models, K-Means, Mini-Batch-Kmeans and\nK-Medoids Clustering","Description":"Gaussian mixture models, k-means, mini-batch-kmeans and k-medoids\n clustering with the option to plot, validate, predict (new data) and estimate the\n optimal number of clusters. The package takes advantage of 'RcppArmadillo' to\n speed up the computationally intensive parts of the functions.","Published":"2017-02-11","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ClusterRankTest","Version":"1.0","Title":"Rank Tests for Clustered Data","Description":"Nonparametric rank based tests (rank-sum tests and signed-rank tests) for clustered data, especially useful for clusters having informative cluster size and intra-cluster group size.","Published":"2016-04-28","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"clusterRepro","Version":"0.5-1.1","Title":"Reproducibility of gene expression clusters","Description":"A function for validating microarray clusters via\n reproducibility.","Published":"2009-03-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"clusterSEs","Version":"2.4","Title":"Calculate Cluster-Robust p-Values and Confidence Intervals","Description":"Calculate p-values and confidence intervals using cluster-adjusted\n t-statistics (based on Ibragimov and Muller (2010) ), pairs cluster bootstrapped t-statistics, and wild cluster bootstrapped t-statistics (the latter two techniques based on Cameron, Gelbach, and Miller (2008) ). 
Procedures are included for use with GLM, ivreg, plm (pooling or fixed effects), and mlogit models.","Published":"2017-05-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"clusterSim","Version":"0.45-2","Title":"Searching for Optimal Clustering Procedure for a Data Set","Description":"Distance measures (GDM1, GDM2,\tSokal-Michener, Bray-Curtis, for symbolic interval-valued data), cluster quality indices (Calinski-Harabasz, Baker-Hubert, Hubert-Levine, Silhouette, Krzanowski-Lai, Hartigan, Gap,\tDavies-Bouldin),\tdata normalization formulas, data generation (typical and non-typical data), HINoV method,\treplication analysis, linear ordering methods, spectral clustering, agreement indices between two partitions, plot functions (for categorical and symbolic interval-valued data).","Published":"2017-03-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ClusterStability","Version":"1.0.3","Title":"Assessment of Stability of Individual Objects or Clusters in\nPartitioning Solutions","Description":"Allows one to assess the stability of individual objects, clusters \n and whole clustering solutions based on repeated runs of the K-means and K-medoids \n partitioning algorithms.","Published":"2016-02-08","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"clustertend","Version":"1.4","Title":"Check the Clustering Tendency","Description":"Calculate some statistics aiming to help analyzing the clustering tendency of given data. 
In the first version, Hopkins' statistic is implemented.","Published":"2015-05-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"clusteval","Version":"0.1","Title":"Evaluation of Clustering Algorithms","Description":"An R package that provides a suite of tools to evaluate\n clustering algorithms, clusterings, and individual clusters.","Published":"2012-08-31","License":"MIT","snapshot_date":"2017-06-23"} {"Package":"ClustGeo","Version":"1.0","Title":"Clustering of Observations with Geographical Constraints","Description":"Functions which allow to integrate geographical constraints in Ward hierarchical clustering. Geographical maps of typologies obtained can be displayed with the use of shapefiles.","Published":"2015-06-23","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"clustMD","Version":"1.2.1","Title":"Model Based Clustering for Mixed Data","Description":"Model-based clustering of mixed data (i.e. data which consist of\n continuous, binary, ordinal or nominal variables) using a parsimonious\n mixture of latent Gaussian variable models.","Published":"2017-05-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"clustMixType","Version":"0.1-17","Title":"k-Prototypes Clustering for Mixed Variable-Type Data","Description":"Functions to perform k-prototypes partitioning clustering for\n mixed variable-type data according to Z.Huang (1998): Extensions to the k-Means\n Algorithm for Clustering Large Data Sets with Categorical Variables, Data Mining\n and Knowledge Discovery 2, 283-304, .","Published":"2016-11-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ClustMMDD","Version":"1.0.4","Title":"Variable Selection in Clustering by Mixture Models for Discrete\nData","Description":"An implementation of a variable selection procedure in clustering by mixture models for discrete data (clustMMDD). 
Genotype data are examples of such data with two unordered observations (alleles) at each locus for diploid individuals. The two-fold problem of variable selection and clustering is seen as a model selection problem where competing models are characterized by the number of clusters K, and the subset S of clustering variables. Competing models are compared by penalized maximum likelihood criteria. We considered asymptotic criteria such as Akaike and Bayesian Information criteria, and a family of penalized criteria with penalty function to be data driven calibrated. ","Published":"2016-05-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ClustOfVar","Version":"0.8","Title":"Clustering of variables","Description":"Cluster analysis of a set of variables. Variables can be\n quantitative, qualitative or a mixture of both.","Published":"2013-12-03","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"clustRcompaR","Version":"0.1.0","Title":"Easy Interface for Clustering a Set of Documents and Exploring\nGroup- Based Patterns","Description":"Provides an interface to perform cluster analysis on a corpus of text. Interfaces to \n Quanteda to assemble text corpuses easily. Deviationalizes text vectors prior to clustering \n using technique described by Sherin (Sherin, B. [2013]. A computational study of commonsense science: \n An exploration in the automated analysis of clinical interview data. Journal of the Learning Sciences, \n 22(4), 600-638. Chicago. http://dx.doi.org/10.1080/10508406.2013.836654). Uses cosine similarity as distance\n metric for two stage clustering process, involving Ward's algorithm hierarchical agglomerative clustering, \n and k-means clustering. Selects optimal number of clusters to maximize \"variance explained\" by clusters, \n adjusted by the number of clusters. Provides plotted output of clustering results as well as printed output. 
\n Assesses \"model fit\" of clustering solution to a set of preexisting groups in dataset.","Published":"2017-01-07","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"clustrd","Version":"1.2.0","Title":"Methods for Joint Dimension Reduction and Clustering","Description":"A class of methods that combine dimension reduction and clustering of continuous or categorical data. For continuous data, the package contains implementations of factorial K-means (Vichi and Kiers 2001; ) and reduced K-means (De Soete and Carroll 1994; ); both methods that combine principal component analysis with K-means clustering. For categorical data, the package provides MCA K-means (Hwang, Dillon and Takane 2006; ), i-FCB (Iodice D'Enza and Palumbo 2013, ) and Cluster Correspondence Analysis (van de Velden, Iodice D'Enza and Palumbo 2017; ), which combine multiple correspondence analysis with K-means.","Published":"2017-05-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"clustsig","Version":"1.1","Title":"Significant Cluster Analysis","Description":"A complimentary package for use with hclust; simprof tests\n to see which (if any) clusters are statistically different. The\n null hypothesis is that there is no a priori group structure.\n See Clarke, K.R., Somerfield, P.J., and Gorley R.N. 2008.\n Testing of null hypothesis in exploratory community analyses:\n similarity profiles and biota-environment linkage. J. Exp. Mar.\n Biol. Ecol. 366, 56-69.","Published":"2014-01-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ClustVarLV","Version":"1.5.1","Title":"Clustering of Variables Around Latent Variables","Description":"Functions for the clustering of variables around Latent Variables.\n Each cluster of variables, which may be defined as a local or directional\n cluster, is associated with a latent variable. External variables measured on\n the same observations or/and additional information on the variables can be\n taken into account. 
A \"noise\" cluster or sparse latent variables can also be\n defined.","Published":"2016-12-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"clustvarsel","Version":"2.3","Title":"Variable Selection for Gaussian Model-Based Clustering","Description":"An R package implementing variable selection methodology for Gaussian model-based clustering which allows to find the (locally) optimal subset of variables in a data set that have group/cluster information. A greedy or headlong search can be used, either in a forward-backward or backward-forward direction, with or without sub-sampling at the hierarchical clustering stage for starting MCLUST models. By default the algorithm uses a sequential search, but parallelisation is also available.","Published":"2017-02-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"clv","Version":"0.3-2.1","Title":"Cluster Validation Techniques","Description":"Package contains most of the popular internal and external\n cluster validation methods ready to use for the most of the\n outputs produced by functions coming from package \"cluster\".\n Package contains also functions and examples of usage for\n cluster stability approach that might be applied to algorithms\n implemented in \"cluster\" package as well as user defined\n clustering algorithms.","Published":"2013-11-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"clValid","Version":"0.6-6","Title":"Validation of Clustering Results","Description":"Statistical and biological validation of clustering results.","Published":"2014-03-25","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"cmaes","Version":"1.0-11","Title":"Covariance Matrix Adapting Evolutionary Strategy","Description":"Single objective optimization using a CMA-ES.","Published":"2011-01-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"cmaesr","Version":"1.0.3","Title":"Covariance Matrix Adaptation Evolution Strategy","Description":"Pure R 
implementation of the Covariance Matrix Adaptation -\n Evolution Strategy (CMA-ES) with optional restarts (IPOP-CMA-ES).","Published":"2016-12-04","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"CMC","Version":"1.0","Title":"Cronbach-Mesbah Curve","Description":"Calculation and plot of the stepwise Cronbach-Mesbah Curve","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CMF","Version":"1.0","Title":"Collective matrix factorization","Description":"Collective matrix factorization (CMF) finds joint low-rank representations for a collection of matrices with shared row or column entities. This code learns variational Bayesian approximation for CMF, supporting multiple likelihood potentials and missing data, while identifying both factors shared by multiple matrices and factors private for each matrix.","Published":"2014-03-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cmm","Version":"0.8","Title":"Categorical Marginal Models","Description":"Quite extensive package for the estimation of marginal models for categorical data.","Published":"2015-01-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cmna","Version":"1.0.0","Title":"Computational Methods for Numerical Analysis","Description":"Provides the source and examples for James P. Howard, II, \n \"Computational Methods for Numerical Analysis with R,\" \n\t\t\t , a forthcoming book on\n\t\t\t numerical methods in R.","Published":"2017-06-13","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"CMPControl","Version":"1.0","Title":"Control Charts for Conway-Maxwell-Poisson Distribution","Description":"The main purpose of this package is to juxtapose the different control limits obtained by modelling a data set through the COM-Poisson distribution vs. the classical Poisson distribution. 
Accordingly, this package offers the ability to compute the COM-Poisson parameter estimates and plot associated Shewhart control charts for a given data set.","Published":"2014-04-30","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"CMplot","Version":"3.2.0","Title":"Circle Manhattan Plot","Description":"Manhattan plot, a type of scatter plot, was widely used to display the association results. However, it is usually time-consuming and laborious for a\n non-specialist user to write scripts and adjust parameters of an elaborate plot. Moreover, the ever-growing traits measured have necessitated the \n integration of results from different Genome-wide association study researches. Circle Manhattan Plot is the first open R package that can lay out \n Genome-wide association study P-value results in both traditional rectangular patterns, QQ-plot and novel circular ones. United in only one bull's eye style \n plot, association results from multiple traits can be compared interactively, thereby to reveal both similarities and differences between signals.","Published":"2017-03-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cmpprocess","Version":"1.0","Title":"Flexible Modeling of Count Processes","Description":"A toolkit for flexible modeling of count processes where data (over- or under-) dispersion exists.\n Estimations can be obtained under two data constructs where one has:\n (1) data on number of events in an s-unit time interval, or (2) only wait-time data.\n This package is supplementary to the work set forth in Zhu et al. (2016) .","Published":"2017-03-11","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"cmprsk","Version":"2.2-7","Title":"Subdistribution Analysis of Competing Risks","Description":"Estimation, testing and regression modeling of\n subdistribution functions in competing risks, as described in Gray\n (1988), A class of K-sample tests for comparing the cumulative\n incidence of a competing risk, Ann. 
Stat. 16:1141-1154, and Fine JP and\n Gray RJ (1999), A proportional hazards model for the subdistribution\n of a competing risk, JASA, 94:496-509.","Published":"2014-06-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cmprskQR","Version":"0.9.1","Title":"Analysis of Competing Risks Using Quantile Regressions","Description":"Estimation, testing and regression modeling of\n subdistribution functions in competing risks using quantile regressions,\n as described in Peng and Fine (2009) .","Published":"2016-02-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cmrutils","Version":"1.3","Title":"Misc Functions of the Center for the Mathematical Research","Description":"A collection of useful helper routines developed by\n students of the Center for the Mathematical Research, Stankin,\n Moscow.","Published":"2015-09-11","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"cmsaf","Version":"1.7.2","Title":"Tools for CM SAF NetCDF Data","Description":"The Satellite Application Facility on Climate Monitoring (CM SAF) \n is a ground segment of the European Organization for the Exploitation of \n Meteorological Satellites (EUMETSAT) and one of EUMETSATs Satellite Application \n Facilities. The CM SAF contributes to the sustainable observing of the climate \n system by providing Essential Climate Variables related to the energy and water \n cycle of the atmosphere (). It is a joint cooperation of seven \n National Meteorological and Hydrological Services, including the Deutscher\n Wetterdienst (DWD).\n The 'cmsaf' R-package provides a small collection of R-functions, which are \n inspired by the Climate Data Operators ('cdo'). This gives the opportunity to \n analyse and manipulate CM SAF data without the need of installing cdo. \n The 'cmsaf' R-package is tested for CM SAF NetCDF data, which are structured \n in three-dimensional arrays (longitude, latitude, time) on a rectangular grid. 
\n Layered CM SAF data have to be converted with the provided 'levbox_mergetime()' \n function. The 'cmsaf' R-package functions have only minor checks for deviations \n from the recommended data structure, and give only few specific error messages. \n Thus, there is no warranty of accurate results.\n Scripts for an easy application of the functions are provided at the CM SAF homepage \n ().","Published":"2017-03-14","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"cmvnorm","Version":"1.0-3","Title":"The Complex Multivariate Gaussian Distribution","Description":"Various utilities for the complex multivariate Gaussian distribution.","Published":"2015-11-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"cna","Version":"2.0.0","Title":"Causal Modeling with Coincidence Analysis","Description":"Provides comprehensive functionalities for causal modeling with Coincidence Analysis (CNA), which is a configurational comparative method of causal data analysis that was first introduced in Baumgartner (2009) . CNA is related to Qualitative Comparative Analysis (QCA), but contrary to the latter, it is custom-built for uncovering causal structures with multiple outcomes. 
While previous versions have only been capable of processing dichotomous variables, the current version generalizes CNA for multi-value and continuous variables whose values are interpreted as membership scores in fuzzy sets.","Published":"2017-04-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cncaGUI","Version":"1.0","Title":"Canonical Non-Symmetrical Correspondence Analysis in R","Description":"A GUI with which users can construct and interact\n with Canonical Correspondence Analysis and Canonical Non-Symmetrical Correspondence Analysis and provides inferential results by using Bootstrap Methods.","Published":"2015-06-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CNLTreg","Version":"0.1","Title":"Complex-Valued Wavelet Lifting for Signal Denoising","Description":"Implementations of recent complex-valued wavelet shrinkage procedures for smoothing irregularly sampled signals.","Published":"2017-03-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"CNLTtsa","Version":"0.1","Title":"Complex-Valued Wavelet Lifting for Univariate and Bivariate Time\nSeries Analysis","Description":"Implementations of recent complex-valued wavelet spectral procedures for analysis of irregularly sampled signals.","Published":"2017-03-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"cnmlcd","Version":"1.0-0","Title":"Maximum Likelihood Estimation of a Log-Concave Density Function","Description":"Contains functions for computing the nonparametric maximum\n\t likelihood estimate of a log-concave density function from\n\t univariate observations. 
The log-density estimate is always a\n\t piecewise linear function.","Published":"2015-10-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CNOGpro","Version":"1.1","Title":"Copy Numbers of Genes in prokaryotes","Description":"Methods for assigning copy number states and breakpoints in resequencing experiments of prokaryotic organisms.","Published":"2015-01-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"CNprep","Version":"2.0","Title":"Pre-process DNA Copy Number (CN) Data for Detection of CN Events","Description":"This package evaluates DNA copy number data, using both their initial form (copy number as a noisy function of genomic position) and their approximation by a piecewise-constant function (segmentation), for the purpose of identifying genomic regions where the copy number differs from the norm.","Published":"2014-12-31","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"CNull","Version":"1.0","Title":"Fast Algorithms for Frequency-Preserving Null Models in Ecology","Description":"Efficient computations for null models that require shuffling columns on big matrix data.\n This package provides functions for faster computation of diversity measure statistics\n when independent random shuffling is applied to the columns of a given matrix. \n Given a diversity measure f and a matrix M, the provided functions can generate random samples \n (shuffled matrix rows of M), the mean and variance of f, and the p-values of this measure \n for two different null models that involve independent random shuffling of the columns of M.\n The package supports computations of alpha and beta diversity measures. 
","Published":"2017-03-16","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"CNVassoc","Version":"2.2","Title":"Association Analysis of CNV Data and Imputed SNPs","Description":"Carries out analysis of common \n Copy Number Variants (CNVs) and imputed Single Nucleotide \n Polymorphisms (SNPs) in population-based studies. \n It includes tools for estimating association under a series \n of study designs (case-control, cohort, etc), using several \n dependent variables (class status, censored data, counts) \n as response, adjusting for covariates and considering \n various inheritance models. Moreover, it is possible to \n perform epistasis studies with pairs of CNVs or imputed SNPs.\n It has been optimized in order to make feasible the analyses \n of Genome Wide Association studies (GWAs) with hundreds of \n thousands of genetic variants (CNVs / imputed SNPs). Also, \n it incorporates functions for inferring copy number (CNV \n genotype calling). Various classes and methods for generic \n functions (print, summary, plot, anova, ...) have been \n created to facilitate the analysis. ","Published":"2016-04-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CNVassocData","Version":"1.0","Title":"Example data sets for association analysis of CNV data","Description":"This package contains example data sets with Copy Number Variants and imputed SNPs to be used by CNVassoc package.","Published":"2013-08-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"coala","Version":"0.5.0","Title":"A Framework for Coalescent Simulation","Description":"Coalescent simulators can rapidly simulate biological sequences\n evolving according to a given model of evolution.\n You can use this package to specify such models, to conduct the simulations\n and to calculate additional statistics from the results.\n It relies on existing simulators for doing the simulation, and currently\n supports the programs 'ms', 'msms' and 'scrm'. 
It also supports finite-sites\n mutation models by combining the simulators with the program 'seq-gen'.","Published":"2016-12-29","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"coalescentMCMC","Version":"0.4-1","Title":"MCMC Algorithms for the Coalescent","Description":"Flexible framework for coalescent analyses in R. It includes a main function running the MCMC algorithm, auxiliary functions for tree rearrangement, and some functions to compute population genetic parameters.","Published":"2015-03-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"coarseDataTools","Version":"0.6-3","Title":"A Collection of Functions to Help with Analysis of Coarsely\nObserved Data","Description":"Functions to analyze coarse data.\n Specifically, it contains functions to (1) fit parametric accelerated\n failure time models to interval-censored survival time data, and (2)\n estimate the case-fatality ratio in scenarios with under-reporting.\n This package's development was motivated by applications to infectious\n disease: in particular, problems with estimating the incubation period and\n the case fatality ratio of a given disease. Sample data files are included\n in the package.","Published":"2016-03-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cobalt","Version":"2.1.0","Title":"Covariate Balance Tables and Plots","Description":"Generate balance tables and plots for covariates of groups\n preprocessed through matching, weighting or subclassification, for example,\n using propensity scores. Includes integration with 'MatchIt', 'twang', 'Matching', 'optmatch', \n 'CBPS', and 'ebal' for assessing balance on the output of their preprocessing functions. Users\n can also specify data for balance assessment not generated through the above packages. 
Also \n included are methods for assessing balance in clustered or multiply imputed data sets.","Published":"2017-05-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"COBRA","Version":"0.99.4","Title":"Nonlinear Aggregation of Predictors","Description":"This package performs prediction for regression-oriented problems, aggregating in a nonlinear scheme any basic regression machines suggested by the context and provided by the user. If the user has no valuable knowledge on the data, four defaults machines wrappers are implemented so as to cover a minimal spectrum of prediction methods. If necessary, the computations may be parallelized. The method is described in Biau, Fischer, Guedj and Malley (2013), \"COBRA: A Nonlinear Aggregation Strategy\".","Published":"2013-07-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cobs","Version":"1.3-3","Title":"Constrained B-Splines (Sparse Matrix Based)","Description":"Qualitatively Constrained (Regression) Smoothing Splines via\n Linear Programming and Sparse Matrices.","Published":"2017-03-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CoClust","Version":"0.3-1","Title":"Copula Based Cluster Analysis","Description":"Copula Based Cluster Analysis.","Published":"2015-04-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"COCONUT","Version":"1.0.1","Title":"COmbat CO-Normalization Using conTrols (COCONUT)","Description":"Allows for pooled analysis of microarray data by batch-correcting control samples, and then applying the derived correction parameters to non-control samples to obtain bias-free, inter-dataset corrected data.","Published":"2016-06-29","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"cocor","Version":"1.1-3","Title":"Comparing Correlations","Description":"Statistical tests for the comparison between two correlations\n based on either independent or dependent groups. 
Dependent correlations can\n either be overlapping or nonoverlapping. A web interface is available on the\n website http://comparingcorrelations.org. A plugin for the R GUI and IDE RKWard\n is included. Please install RKWard from https://rkward.kde.org to use this\n feature. The respective R package 'rkward' cannot be installed directly from a\n repository, as it is a part of RKWard.","Published":"2016-05-28","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"cocoreg","Version":"0.1.1","Title":"Extract Shared Variation in Collections of Data Sets Using\nRegression Models","Description":"The algorithm extracts shared variation from a collection of data sets using regression models.","Published":"2017-05-30","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"cocorresp","Version":"0.3-0","Title":"Co-Correspondence Analysis Methods","Description":"Fits predictive and symmetric co-correspondence analysis (CoCA) models to relate one data matrix\n to another data matrix. More specifically, CoCA maximises the weighted covariance \n between the weighted averaged species scores of one community and the weighted averaged species\n scores of another community. CoCA attempts to find patterns that are common to both communities.","Published":"2016-02-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"cocron","Version":"1.0-1","Title":"Statistical Comparisons of Two or more Alpha Coefficients","Description":"Statistical tests for the comparison between two or more alpha\n coefficients based on either dependent or independent groups of individuals.\n A web interface is available at http://comparingcronbachalphas.org. A plugin\n for the R GUI and IDE RKWard is included. Please install RKWard from https://\n rkward.kde.org to use this feature. 
The respective R package 'rkward' cannot be\n installed directly from a repository, as it is a part of RKWard.","Published":"2016-03-12","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"coda","Version":"0.19-1","Title":"Output Analysis and Diagnostics for MCMC","Description":"Provides functions for summarizing and plotting the\n\toutput from Markov Chain Monte Carlo (MCMC) simulations, as\n\twell as diagnostic tests of convergence to the equilibrium\n\tdistribution of the Markov chain.","Published":"2016-12-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"codadiags","Version":"1.0","Title":"Markov chain Monte Carlo burn-in based on \"bridge\" statistics","Description":"Markov chain Monte Carlo burn-in based on \"bridge\" statistics, in the way of coda::heidel.diag, but including non asymptotic tabulated statistics.","Published":"2013-11-18","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"cOde","Version":"0.2.2","Title":"Automated C Code Generation for Use with the 'deSolve' and\n'bvpSolve' Packages","Description":"Generates all necessary C functions allowing the user to work with\n the compiled-code interface of ode() and bvptwp(). The implementation supports\n \"forcings\" and \"events\". 
Also provides functions to symbolically compute\n Jacobians, sensitivity equations and adjoint sensitivities being the basis for\n sensitivity analysis.","Published":"2016-05-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CodeDepends","Version":"0.5-3","Title":"Analysis of R Code for Reproducible Research and Code\nComprehension","Description":"Tools for analyzing R expressions\n or blocks of code and determining the dependencies between them.\n It focuses on R scripts, but can be used on the bodies of functions.\n There are many facilities including the ability to summarize or get a high-level\n view of code, determining dependencies between variables, code improvement\n suggestions.","Published":"2017-05-29","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"codep","Version":"0.6-5","Title":"Multiscale Codependence Analysis","Description":"Computation of Multiscale Codependence Analysis and spatial eigenvector maps, as an additional feature. Early development version.","Published":"2017-01-25","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"codetools","Version":"0.2-15","Title":"Code Analysis Tools for R","Description":"Code analysis tools for R.","Published":"2016-10-05","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"codingMatrices","Version":"0.3.1","Title":"Alternative Factor Coding Matrices for Linear Model Formulae","Description":"A collection of coding functions as alternatives to the standard\n functions in the stats package, which have names starting with 'contr.'. Their\n main advantage is that they provide a consistent method for defining marginal\n effects in factorial models. 
In a simple one-way ANOVA model the\n intercept term is always the simple average of the class means.","Published":"2017-05-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"codyn","Version":"1.1.0","Title":"Community Dynamics Metrics","Description":"A toolbox of ecological community dynamics metrics that are\n explicitly temporal. Functions fall into two categories: temporal diversity\n indices and community stability metrics. The diversity indices are temporal\n analogs to traditional diversity indices such as richness and rank-abundance\n curves. Specifically, functions are provided to calculate species turnover, mean\n rank shifts, and lags in community similarity between time points. The community\n stability metrics calculate overall stability and patterns of species covariance\n and synchrony over time.","Published":"2016-04-27","License":"Apache License (== 2.0)","snapshot_date":"2017-06-23"} {"Package":"coefficientalpha","Version":"0.5","Title":"Robust Coefficient Alpha and Omega with Missing and Non-Normal\nData","Description":"Cronbach's alpha and McDonald's omega are widely used reliability or internal consistency measures in social, behavioral and education sciences. Alpha is reported in nearly every study that involves measuring a construct through multiple test items. The package 'coefficientalpha' calculates coefficient alpha and coefficient omega with missing data and non-normal data. Robust standard errors and confidence intervals are also provided. A test is also available to test the tau-equivalent and homogeneous assumptions. Version 0.5 added the bootstrap confidence intervals.","Published":"2015-05-31","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"coefplot","Version":"1.2.4","Title":"Plots Coefficients from Fitted Models","Description":"Plots the coefficients from model objects. 
This very quickly shows the user the point estimates and confidence intervals for fitted models.","Published":"2016-01-10","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"coenocliner","Version":"0.2-2","Title":"Coenocline Simulation","Description":"Simulate species occurrence and abundances (counts) along\n gradients.","Published":"2016-05-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"coenoflex","Version":"2.2-0","Title":"Gradient-Based Coenospace Vegetation Simulator","Description":"Simulates the composition of samples of vegetation\n according to gradient-based vegetation theory. Features a\n flexible algorithm incorporating competition and complex\n multi-gradient interaction.","Published":"2016-09-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"coexist","Version":"1.0","Title":"Species coexistence modeling and analysis","Description":"species coexistence modeling under asymmetric dispersal\n and fluctuating source-sink dynamics;testing the proportion of\n coexistence scenarios driven by neutral and niche processes","Published":"2012-08-02","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"cofeatureR","Version":"1.0.1","Title":"Generate Cofeature Matrices","Description":"Generate cofeature (feature by sample) matrices. The package \n utilizes ggplot2::geom_tile() to generate the matrix allowing for easy\n additions from the base matrix.","Published":"2016-01-13","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"CoFRA","Version":"0.1002","Title":"Complete Functional Regulation Analysis","Description":"Calculates complete functional regulation analysis and visualize\n the results in a single heatmap. The provided example data is for biological\n data but the methodology can be used for large data sets to compare quantitative\n entities that can be grouped. 
For example, a store might divide entities into\n clothing, food, car products, etc., and want to see how sales change in the groups\n after some event. The theoretical background for the calculations is provided\n in New insights into functional regulation in MS-based drug profiling, Ana Sofia\n Carvalho, Henrik Molina & Rune Matthiesen, Scientific Reports.","Published":"2017-04-06","License":"GPL-2 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"coga","Version":"0.1.0","Title":"Convolution of Gamma Distributions","Description":"Convolution of gamma distributions in R. The convolution of \n gamma distributions is the sum of a series of gamma \n distributions, and all gamma distributions here can have different \n parameters. This package can calculate the density and distribution function \n and perform simulations.","Published":"2017-05-25","License":"GPL (>= 3.0)","snapshot_date":"2017-06-23"} {"Package":"CoImp","Version":"0.3-1","Title":"Copula Based Imputation Method","Description":"Copula based imputation method. A semiparametric imputation procedure for missing multivariate data based on conditional copula specifications.","Published":"2016-08-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"coin","Version":"1.2-0","Title":"Conditional Inference Procedures in a Permutation Test Framework","Description":"Conditional inference procedures for the general independence\n problem including two-sample, K-sample (non-parametric ANOVA), correlation,\n censored, ordered and multivariate problems.","Published":"2017-06-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"CoinMinD","Version":"1.1","Title":"Simultaneous Confidence Interval for Multinomial Proportion","Description":"Methods for obtaining simultaneous confidence intervals for\n multinomial proportions have been proposed by many authors, and\n the present study includes a variety of widely applicable\n procedures. 
Seven classical methods (Wilson, Quesenberry and\n Hurst, Goodman, Wald with and without continuity correction,\n Fitzpatrick and Scott, Sison and Glaz) and Bayesian Dirichlet\n models are included in the package. The advantage of MCMC pack\n has been exploited to derive the Dirichlet posterior directly\n and this also helps in handling the Dirichlet prior parameters.\n This package is prepared to have equal and unequal values for\n the Dirichlet prior distribution that will provide better scope\n for data analysis and associated sensitivity analysis.","Published":"2013-05-28","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"cointmonitoR","Version":"0.1.0","Title":"Consistent Monitoring of Stationarity and Cointegrating\nRelationships","Description":"We propose a consistent monitoring procedure to detect a\n structural change from a cointegrating relationship to a spurious\n relationship. The procedure is based on residuals from modified least\n squares estimation, using either Fully Modified, Dynamic or Integrated\n Modified OLS. It is inspired by Chu et al. (1996) in\n that it is based on parameter estimation on a pre-break \"calibration\" period\n only, rather than being based on sequential estimation over the full sample.\n See the discussion paper for further information.\n This package provides the monitoring procedures for both the cointegration\n and the stationarity case (while the latter is just a special case of the\n former one) as well as printing and plotting methods for a clear\n presentation of the results.","Published":"2016-06-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"cointReg","Version":"0.2.0","Title":"Parameter Estimation and Inference in a Cointegrating Regression","Description":"Cointegration methods are widely used in empirical macroeconomics\n and empirical finance. It is well known that in a cointegrating\n regression the ordinary least squares (OLS) estimator of the\n parameters is super-consistent, i.e. 
converges at rate equal to the\n sample size T. When the regressors are endogenous, the limiting\n distribution of the OLS estimator is contaminated by so-called second\n order bias terms, see e.g. Phillips and Hansen (1990) .\n The presence of these bias terms renders inference difficult. Consequently,\n several modifications to OLS that lead to zero mean Gaussian mixture\n limiting distributions have been proposed, which in turn make\n standard asymptotic inference feasible. These methods include\n the fully modified OLS (FM-OLS) approach of Phillips and Hansen\n (1990) , the dynamic OLS (D-OLS) approach of Phillips\n and Loretan (1991) , Saikkonen (1991)\n and Stock and Watson (1993)\n and the new estimation approach called integrated\n modified OLS (IM-OLS) of Vogelsang and Wagner (2014)\n . The latter is based on an augmented\n partial sum (integration) transformation of the regression model. IM-OLS is\n similar in spirit to the FM- and D-OLS approaches, with the key difference\n that it does not require estimation of long run variance matrices and avoids\n the need to choose tuning parameters (kernels, bandwidths, lags). However,\n inference does require that a long run variance be scaled out.\n This package provides functions for the parameter estimation and inference\n with all three modified OLS approaches. That includes the automatic\n bandwidth selection approaches of Andrews (1991) and\n of Newey and West (1994) as well as the calculation of\n the long run variance.","Published":"2016-06-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"colf","Version":"0.1.2","Title":"Constrained Optimization on Linear Function","Description":"Performs least squares constrained optimization on a linear objective function. 
It contains\n a number of algorithms to choose from and offers a formula syntax similar to lm().","Published":"2016-12-03","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"CollapsABEL","Version":"0.10.11","Title":"Generalized CDH (GCDH) Analysis","Description":"Implements a generalized version of the CDH test\n for detecting compound heterozygosity on a\n genome-wide level; due to its use of generalized linear models, it allows flexible\n analysis of binary and continuous traits with covariates.","Published":"2016-12-11","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"collapsibleTree","Version":"0.1.4","Title":"Interactive Collapsible Tree Diagrams using 'D3.js'","Description":"\n Interactive Reingold-Tilford tree diagrams created using 'D3.js', where every node can be expanded and collapsed by clicking on it.\n Tooltips and color gradients can be mapped to nodes using a numeric column in the source data frame.\n See the 'collapsibleTree' website for more information and examples.","Published":"2017-03-22","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"CollocInfer","Version":"1.0.4","Title":"Collocation Inference for Dynamic Systems","Description":"These functions implement collocation inference\n for continuous-time and discrete-time stochastic processes.\n They provide model-based smoothing, gradient-matching,\n generalized profiling and forward prediction error methods.","Published":"2016-11-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"collpcm","Version":"1.0","Title":"Collapsed Latent Position Cluster Model for Social Networks","Description":"Markov chain Monte Carlo based inference routines for collapsed latent position cluster models for social networks, which includes searches over the model space (number of clusters in the latent position cluster model). 
The label switching algorithm used is that of Nobile and Fearnside (2007) which relies on the algorithm of Carpaneto and Toth (1980) . ","Published":"2017-03-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"collUtils","Version":"1.0.5","Title":"Auxiliary Package for Package 'CollapsABEL'","Description":"Provides some low level functions for processing PLINK input and output files.","Published":"2016-03-31","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"coloc","Version":"2.3-1","Title":"Colocalisation tests of two genetic traits","Description":"Performs the colocalisation tests described in Plagnol et al\n (2009), Wallace et al (2013) and Giambartolomei et al (2013).","Published":"2013-09-29","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"colordistance","Version":"0.8.0","Title":"Distance Metrics for Image Color Similarity","Description":"Loads and displays images, selectively masks specified background\n colors, bins pixels by color using either data-dependent or automatically\n generated color bins, quantitatively measures color similarity among images\n using one of several distance metrics for comparing pixel color clusters, and \n clusters images by object color similarity. Originally written for use with\n organism coloration (reef fish color diversity, butterfly mimicry, etc), but\n easily applicable for any image set.","Published":"2017-06-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"coloredICA","Version":"1.0.0","Title":"Implementation of Colored Independent Component Analysis and\nSpatial Colored Independent Component Analysis","Description":"It implements colored Independent Component Analysis (Lee et al., 2011) and spatial colored Independent Component Analysis (Shen et al., 2014). 
They are two algorithms to perform ICA when sources are assumed to be temporal or spatial stochastic processes, respectively.","Published":"2015-02-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"colorfulVennPlot","Version":"2.4","Title":"Plot and add custom coloring to Venn diagrams for 2-dimensional,\n3-dimensional and 4-dimensional data","Description":"Given 2-, 3- or 4-dimensional data, plots a Venn diagram, i.e. 'crossing circles'. The user can specify values, labels for each circle-group and unique colors for each plotted part. Here is what it would look like for a 3-dimensional plot: http://elliotnoma.files.wordpress.com/2011/02/venndiagram.png. To see what the 4-dimensional plot looks like, go to http://elliotnoma.files.wordpress.com/2013/03/4dplot.png.","Published":"2013-11-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"colorhcplot","Version":"1.0","Title":"Colorful Hierarchical Clustering Dendrograms","Description":"This function takes a hierarchical cluster-class object and a factor describing the groups as arguments and generates colorful dendrograms in which leaves belonging to different groups are identified by colors.","Published":"2015-10-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"colormap","Version":"0.1.4","Title":"Color Palettes using Colormaps Node Module","Description":"Generates colors from palettes defined in the colormap module of 'Node.js'. In total it provides 44 distinct palettes made from sequential and/or diverging colors. In addition to the predefined palettes you can also specify your own set of colors. 
There are also scale functions that can be used with 'ggplot2'.","Published":"2016-11-15","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ColorPalette","Version":"1.0-1","Title":"Color Palettes Generator","Description":"Different methods to generate a color palette based on a specified base color and a number of colors that should be created.","Published":"2015-06-24","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"colorpatch","Version":"0.1.2","Title":"Optimized Rendering of Fold Changes and Confidence Values","Description":"Shows color patches for encoding fold changes (e.g. log ratios) together with confidence values \n within a single diagram. This is especially useful for rendering gene expression data as well as\n other types of differential experiments. In addition to different rendering methods (ggplot extensions),\n functionality for perceptually optimizing color palettes is provided.\n Furthermore, the package provides extension methods of the colorspace color-class in order to\n simplify the work with palettes (among others, length, as.list, and append are supported).","Published":"2017-06-10","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"colorplaner","Version":"0.1.3","Title":"A 'ggplot2' Extension to Visualize Two Variables per Color\nAesthetic Through Color Space Projections","Description":"A 'ggplot2' extension to visualize two\n variables through one color aesthetic via mapping to a color space\n projection. With this technique for 2-D color mapping, one can create a\n bivariate choropleth in R as well as other visualizations with multivariate\n color scales. 
Includes two new scales and a new guide for 'ggplot2'.","Published":"2016-11-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"colorr","Version":"1.0.0","Title":"Color Palettes for EPL, MLB, NBA, NHL, and NFL Teams","Description":"Color palettes for EPL, MLB, NBA, NHL, and NFL teams.","Published":"2017-02-20","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"colorRamps","Version":"2.3","Title":"Builds color tables","Description":"Builds gradient color maps","Published":"2012-10-29","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"colorscience","Version":"1.0.4","Title":"Color Science Methods and Data","Description":"Methods and data for color science - color conversions by observer,\n illuminant and gamma. Color matching functions and chromaticity diagrams.\n Color indices, color differences and spectral data conversion/analysis.","Published":"2016-10-02","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"colorspace","Version":"1.3-2","Title":"Color Space Manipulation","Description":"Carries out mapping between assorted color spaces including\n RGB, HSV, HLS, CIEXYZ, CIELUV, HCL (polar CIELUV),\n\t CIELAB and polar CIELAB. Qualitative, sequential, and\n\t diverging color palettes based on HCL colors are provided\n\t along with an interactive palette picker (with either a Tcl/Tk\n\t or a shiny GUI).","Published":"2016-12-14","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"colorSpec","Version":"0.5-3","Title":"Color Calculations with Emphasis on Spectral Data","Description":"Calculate with spectral properties of light sources, materials, cameras, eyes, and scanners.\n Build complex systems from simpler parts using a spectral product algebra. For light sources,\n compute CCT and CRI. 
For object colors, compute optimal colors and Logvinenko coordinates.\n Work with the standard CIE illuminants and color matching functions, and read spectra from \n text files, including CGATS files. Sample text files, and 4 vignettes are included.","Published":"2016-05-17","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"colortools","Version":"0.1.5","Title":"Tools for colors in a Hue-Saturation-Value (HSV) color model","Description":"R package with handy functions to help users select and play with\n color schemes in an HSV color model","Published":"2013-12-19","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"colourlovers","Version":"0.2.2","Title":"R Client for the COLOURlovers API","Description":"Provides access to the COLOURlovers \n API, which offers color inspiration and color palettes.","Published":"2016-10-31","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"colourpicker","Version":"0.3","Title":"A Colour Picker Tool for Shiny and for Selecting Colours in\nPlots","Description":"A colour picker that can be used as an input in Shiny apps\n or 'Rmarkdown' documents. A Plot Colour Helper tool is available as an \n 'RStudio' addin, which helps you pick colours to use in your plots. A more \n generic Colour Picker 'RStudio' addin is also provided to let you select \n colours for use in your R code.","Published":"2016-12-05","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"colourvision","Version":"1.1","Title":"Colour Vision Models","Description":"Colour vision models, colour spaces and colour thresholds. Includes Vorobyev & Osorio Receptor Noise Limited models, Chittka colour hexagon, and Endler & Mielke model. 
Models have been extended to accept any number of photoreceptor types.","Published":"2017-03-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"colr","Version":"0.1.900","Title":"Functions to Select and Rename Data","Description":"Powerful functions to select and rename columns in dataframes, lists and numeric types \n by 'Perl' regular expression. Regular expressions ('regex') are a very powerful grammar for matching \n strings, such as column names. ","Published":"2017-01-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"comato","Version":"1.0","Title":"Analysis of Concept Maps","Description":"Provides methods for the import/export and automated analysis of concept maps.","Published":"2014-03-18","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"COMBAT","Version":"0.0.2","Title":"A Combined Association Test for Genes using Summary Statistics","Description":"Computes gene-based genetic association statistics from P values at multiple SNPs and genotype data of ancestry-matched reference samples. COMBined Association Test (COMBAT) incorporates strengths from multiple existing gene-based tests, including VEGAS, GATES and SimpleM, and achieves much better performance than any individual test.","Published":"2017-01-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"COMBIA","Version":"1.0-4","Title":"Synergy/Antagonism Analyses of Drug Combinations","Description":"A comprehensive synergy/antagonism analysis of drug combinations with\n quality graphics and data. The analyses can be performed by Bliss independence and Loewe\n additivity models. COMBIA provides improved statistical analysis and makes only very weak assumptions about data variability \n while calculating bootstrap intervals (BIs). Finally, the package saves analyzed data, \n 2D and 3D plots ready to use in research publications. COMBIA does not require manual\n data entry. 
Data can be directly input from wetlab experimental platforms, \n for example FLUOstar, automated robots, etc. Only a single function call is needed \n to perform the full analysis (examples are provided with sample data).","Published":"2015-07-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"combinat","Version":"0.0-8","Title":"combinatorics utilities","Description":"routines for combinatorics","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Combine","Version":"1.0","Title":"Game-Theoretic Probability Combination","Description":"Suite of R functions for combination of probabilities using a game-theoretic method.","Published":"2015-09-04","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"CombinePortfolio","Version":"0.3","Title":"Estimation of Optimal Portfolio Weights by Combining Simple\nPortfolio Strategies","Description":"Estimation of optimal portfolio weights as a combination of simple portfolio strategies, like the tangency, global minimum variance (GMV) or naive (1/N) portfolio. It is based on a utility maximizing 8-fund rule. Popular special cases like the Kan-Zhou (2007) 2-fund and 3-fund rules or the Tu-Zhou (2011) estimator are nested.","Published":"2016-06-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CombinePValue","Version":"1.0","Title":"Combine a Vector of Correlated p-values","Description":"We offer two statistical tests to combine p-values: selfcontained.test vs competitive.test. 
The goal is to test whether a vector of p-values is jointly significant when combined.","Published":"2014-11-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"CombinS","Version":"1.1-1","Title":"Construction Methods of some Series of PBIB Designs","Description":"Series of partially balanced incomplete block designs (PBIB) based on the combinatory method (S) introduced in Imane Rezgui et al. (2014); it also gives their associated U-type design.","Published":"2016-11-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"combiter","Version":"1.0.2","Title":"Combinatorics Iterators","Description":"Provides iterators for combinations, permutations, subsets, and\n Cartesian product, which allow one to go through all elements without creating a\n huge set of all possible values.","Published":"2017-05-26","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"CombMSC","Version":"1.4.2","Title":"Combined Model Selection Criteria","Description":"Functions for computing optimal convex combinations of\n model selection criteria based on ranks, along with utility\n functions for constructing model lists, MSCs, and priors on\n model lists.","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"comclim","Version":"0.9.4","Title":"Community climate statistics","Description":"Computes community climate statistics for volume and mismatch using species' climate niches either unscaled or scaled relative to a regional species pool. These statistics can be used to describe biogeographic patterns and infer community assembly processes. Includes a vignette outlining usage.","Published":"2014-09-19","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"cometExactTest","Version":"0.1.3","Title":"Exact Test from the Combinations of Mutually Exclusive\nAlterations (CoMEt) Algorithm","Description":"An algorithm for identifying combinations of mutually exclusive alterations in cancer genomes. 
CoMEt represents the mutations in a set M of k genes with a 2^k dimensional contingency table, and then computes the tail probability of observing T(M) exclusive alterations using an exact statistical test.","Published":"2015-10-31","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"comf","Version":"0.1.7","Title":"Functions for Thermal Comfort Research","Description":"Functions to calculate various common and less common thermal comfort indices, convert physical variables, and evaluate the performance of thermal comfort indices.","Published":"2017-05-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ComICS","Version":"1.0.3","Title":"Computational Methods for Immune Cell-Type Subsets","Description":"Provided are Computational methods for Immune Cell-type Subsets, including:(1) DCQ (Digital Cell Quantifier) to infer global dynamic changes in immune cell quantities within a complex tissue; and (2) VoCAL (Variation of Cell-type Abundance Loci) a deconvolution-based method that utilizes transcriptome data to infer the quantities of immune-cell types, and then uses these quantitative traits to uncover the underlying DNA loci.","Published":"2016-03-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"commandr","Version":"1.0.1","Title":"Command pattern in R","Description":"An S4 representation of the Command design pattern. The\n Operation class is a simple implementation using closures and supports\n forward and reverse (undo) evaluation. The more complicated Protocol\n framework represents each type of command (or analytical protocol) by\n a formal S4 class. Commands may be grouped and consecutively executed\n using the Pipeline class. 
Example use cases include logging, do/undo,\n analysis pipelines, GUI actions, parallel processing, etc.","Published":"2014-08-25","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"CommEcol","Version":"1.6.4","Title":"Community Ecology Analyses","Description":"Autosimilarity curves, dissimilarity indexes that overweight rare species, phylogenetic and functional (pairwise and multisample) dissimilarity indexes and nestedness for phylogenetic, functional and other diversity metrics. This should be a complement to available packages, particularly 'vegan'. ","Published":"2016-07-28","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"commentr","Version":"1.0.4","Title":"Print Nicely Formatted Comments for Use in Script Files","Description":"Functions to\n produce nicely formatted comments to use in R-scripts (or\n Latex/HTML/markdown etc). A comment with formatting is printed to the\n console and can then be copied to a script.","Published":"2016-03-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"CommonJavaJars","Version":"1.0-5","Title":"Useful libraries for building a Java based GUI under R","Description":"Useful libraries for building a Java based GUI under R","Published":"2014-08-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"commonmark","Version":"1.2","Title":"High Performance CommonMark and Github Markdown Rendering in R","Description":"The CommonMark specification defines a rationalized version of markdown\n syntax. This package uses the 'cmark' reference implementation for converting\n markdown text into various formats including html, latex and groff man. In\n addition it exposes the markdown parse tree in xml format. 
The latest version of\n this package also adds support for Github extensions including tables, autolinks\n and strikethrough text.","Published":"2017-03-01","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"commonsMath","Version":"1.0.0","Title":"JAR Files of the Apache Commons Mathematics Library","Description":"Java JAR files for the Apache Commons Mathematics Library for use by users and other packages.","Published":"2017-05-24","License":"Apache License 2.0 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"CommonTrend","Version":"0.7-1","Title":"Extract and plot common trends from a cointegration system.\nCalculate P-value for Johansen Statistics","Description":"Directly extract and plot stochastic common trends from\n a cointegration system using different approaches, currently\n including Kasa (1992) and Gonzalo and Granger (1995). \n\tThe approach proposed by Gonzalo and Granger, also known as\n Permanent-Transitory Decomposition, is widely used in\n macroeconomics and market microstructure literature. \n\tKasa's approach, on the other hand, has a nice property that it only\n uses the super consistent estimator: the cointegration vector\n 'beta'. \n\tThis package also provides functions calculate P-value\n from Johansen Statistics according to the approximation method\n proposed by Doornik (1998).\n\tUpdate:\n\t0.7-1: Fix bugs in calculation alpha. Add formulas and more explanations.\n 0.6-1: Rewrite the description file.\n 0.5-1: Add functions to calculate P-value from Johansen statistic, and vice versa.","Published":"2013-09-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CommT","Version":"0.1.1","Title":"Comparative Phylogeographic Analysis using the Community Tree\nFramework","Description":"Provides functions to measure the difference between constrained and unconstrained gene tree distributions using various tree distance metrics. 
Constraints are enforced prior to this analysis via the estimation of a tree under the community tree model.","Published":"2015-06-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"COMMUNAL","Version":"1.1.0","Title":"Robust Selection of Cluster Number K","Description":"Facilitates optimal clustering of a data set. Provides a framework to run a wide range of clustering algorithms to determine the optimal number (k) of clusters in the data. Then analyzes the cluster assignments from each clustering algorithm to identify samples that repeatedly classify to the same group. We call these 'core clusters', providing a basis for later class discovery.","Published":"2015-10-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"CommunityCorrelogram","Version":"1.0","Title":"Ecological Community Correlogram","Description":"The CommunityCorrelogram package is designed for the geostatistical analysis of ecological community datasets with either a spatial or temporal distance component.","Published":"2014-06-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Comp2ROC","Version":"1.1.4","Title":"Compare Two ROC Curves that Intersect","Description":"Comparison of two ROC curves through the methodology proposed by Ana C. Braga.","Published":"2016-07-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"compactr","Version":"0.1","Title":"Creates empty plots with compact axis notation","Description":"Creates empty plots with compact axis notation to which users can\n add whatever they like (points, lines, text, etc.) The notation is more\n compact in the sense that the axis-labels and tick-labels are closer to the\n axis and the tick-marks are shorter. 
Also, if the plot appears as part of a\n matrix, the x-axis notation is suppressed unless the plot appears along the\n bottom row and the y-axis notation is suppressed unless the plot appears\n along the left column.","Published":"2013-08-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"compare","Version":"0.2-6","Title":"Comparing Objects for Differences","Description":"Functions to compare a model object to a comparison object.\n If the objects are not identical, the functions can be instructed to\n explore various modifications of the objects (e.g., sorting rows,\n dropping names) to see if the modified versions are identical.","Published":"2015-08-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"compareC","Version":"1.3.1","Title":"Compare Two Correlated C Indices with Right-censored Survival\nOutcome","Description":"Proposed by Harrell, the C index, or concordance C, is considered an overall measure of discrimination in survival analysis between a survival outcome that is possibly right censored and a predictive-score variable, which can represent a measured biomarker or a composite-score output from an algorithm that combines multiple biomarkers. 
This package aims to statistically compare two C indices with right-censored survival outcome, which commonly arise from a paired design, thus resulting in two correlated C indices.","Published":"2015-01-28","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"CompareCausalNetworks","Version":"0.1.5","Title":"Interface to Diverse Estimation Methods of Causal Networks","Description":"Unified interface for the estimation of causal networks, including\n the methods 'backShift' (from package 'backShift'), 'bivariateANM' (bivariate\n additive noise model), 'bivariateCAM' (bivariate causal additive model),\n 'CAM' (causal additive model) (from package 'CAM'), 'hiddenICP' (invariant\n causal prediction with hidden variables), 'ICP' (invariant causal prediction)\n (from package 'InvariantCausalPrediction'), 'GES' (greedy equivalence\n search), 'GIES' (greedy interventional equivalence search), 'LINGAM', 'PC' (PC\n Algorithm), 'RFCI' (really fast causal inference) (all from package 'pcalg') and\n regression.","Published":"2016-12-01","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"compareDF","Version":"1.1.0","Title":"Do a Git Style Diff of the Rows Between Two Dataframes with\nSimilar Structure","Description":"Compares two dataframes which have the same column\n structure to show the rows that have changed. Also gives a git style diff format\n to quickly see what has changed in addition to summary statistics.","Published":"2017-01-18","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"compareGroups","Version":"3.2.4","Title":"Descriptive Analysis by Groups","Description":"Create data summaries for quality control, extensive reports for exploring data, as well as publication-ready univariate or bivariate tables in several formats (plain text, HTML, LaTeX, PDF, Word or Excel). Create figures to quickly visualise the distribution of your data (boxplots, barplots, normality-plots, etc.). 
Display statistics (mean, median, frequencies, incidences, etc.). Perform the appropriate tests (t-test, analysis of variance, Kruskal-Wallis, Fisher, log-rank, ...) depending on the nature of the described variable (normal, non-normal or qualitative). Summarize genetic (Single Nucleotide Polymorphism) data, displaying allele frequencies and performing Hardy-Weinberg equilibrium tests among other typical statistics and tests for these kinds of data.","Published":"2017-03-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"compareODM","Version":"1.2","Title":"comparison of medical forms in CDISC ODM format","Description":"Input: 2 ODM files (ODM version 1.3). Output: list of\n identical, matching, similar and differing data items","Published":"2013-05-27","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"CompareTests","Version":"1.2","Title":"Correct for Verification Bias in Diagnostic Accuracy & Agreement","Description":"A standard test is observed on all specimens. We treat the second test (or sampled test) as being conducted on only a stratified sample of specimens. Verification bias is the situation in which the specimens receiving the second (sampled) test are not under investigator control. We treat the total sample as stratified two-phase sampling and use inverse probability weighting. We estimate diagnostic accuracy (category-specific classification probabilities; for binary tests reduces to specificity and sensitivity, and also predictive values) and agreement statistics (percent agreement, percent agreement by category, Kappa (unweighted), Kappa (quadratic weighted) and symmetry tests (reduces to McNemar's test for binary tests)). See: Katki HA, Li Y, Edelstein DW, Castle PE. Estimating the agreement and diagnostic accuracy of two diagnostic tests when one test is conducted on only a subsample of specimens. Stat Med. 
2012 Feb 28; 31(5) .","Published":"2017-02-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"comparison","Version":"1.0-4","Title":"Multivariate likelihood ratio calculation and evaluation","Description":"Functions for calculating and evaluating likelihood ratios from uni/multivariate continuous observations.","Published":"2013-11-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"compeir","Version":"1.0","Title":"Event-specific incidence rates for competing risks data","Description":"The package enables the user to compute event-specific incidence\n rates for competing risks data, to compute rate ratios,\n event-specific incidence proportions and cumulative incidence\n functions from these, and to plot these in a comprehensive\n multi-state-type graphic.","Published":"2011-03-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"compendiumdb","Version":"1.0.3","Title":"Tools for Retrieval and Storage of Functional Genomics Data","Description":"Package for the systematic retrieval and storage of\n functional genomics data via a MySQL database.","Published":"2015-10-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"compete","Version":"0.1","Title":"Analyzing Social Hierarchies","Description":"Organizing and Analyzing Social Dominance\n Hierarchy Data.","Published":"2016-06-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"CompetingRisk","Version":"1.0","Title":"The Semi-Parametric Cumulative Incidence Function","Description":"Computing the point estimator and pointwise confidence interval of the cumulative incidence function from the cause-specific hazards model.","Published":"2017-03-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"CompGLM","Version":"1.0","Title":"Conway-Maxwell-Poisson GLM and distribution functions","Description":"The package contains a function (which uses a similar interface to\n the `glm' function) for the fitting of a Conway-Maxwell-Poisson GLM. 
There\n are also various methods for analysis of the model fit. The package also\n contains functions for the Conway-Maxwell-Poisson distribution in a similar\n interface to functions `dpois', `ppois' and `rpois'. The functions are\n generally quick, since the workhorse functions are written in C++ (thanks\n to the Rcpp package).","Published":"2014-07-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"compHclust","Version":"1.0-3","Title":"Complementary Hierarchical Clustering","Description":"Performs the complementary hierarchical clustering procedure and returns X' (the expected residual matrix) and a vector of the relative gene importances.","Published":"2017-05-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Compind","Version":"1.2","Title":"Composite Indicators Functions","Description":"Contains several functions to enhance approaches to the Composite Indicators methods, focusing, in particular, on the normalisation and weighting-aggregation steps.","Published":"2017-06-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"complexity","Version":"1.1.1","Title":"Calculate the Proportion of Permutations in Line with an\nInformative Hypothesis","Description":"Allows for the easy computation of complexity: the proportion of the parameter space in line with the hypothesis by chance. The package comes with a Shiny application in which the calculations can be conducted as well. 
","Published":"2017-04-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"complexplus","Version":"2.1","Title":"Functions of Complex or Real Variable","Description":"Extension of several functions to the complex domain, including the matrix exponential and logarithm, and the determinant.","Published":"2017-05-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"complmrob","Version":"0.6.1","Title":"Robust Linear Regression with Compositional Data as Covariates","Description":"Provides functionality to perform robust regression\n on compositional data. To get information on the distribution of the\n estimates, various bootstrapping methods are implemented for the\n compositional as well as for standard robust regression models, to provide\n a direct comparison between them.","Published":"2015-10-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CompLognormal","Version":"3.0","Title":"Functions for actuarial scientists","Description":"Computes the probability density function, cumulative distribution function, quantile function, and random numbers of any composite model based on the lognormal distribution.","Published":"2013-08-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"compoisson","Version":"0.3","Title":"Conway-Maxwell-Poisson Distribution","Description":"Provides routines for density and moments of the\n Conway-Maxwell-Poisson distribution as well as functions for\n fitting the COM-Poisson model for over/under-dispersed count\n data.","Published":"2012-10-29","License":"BSD","snapshot_date":"2017-06-23"} {"Package":"COMPoissonReg","Version":"0.4.1","Title":"Conway-Maxwell Poisson (COM-Poisson) Regression","Description":"Fit Conway-Maxwell Poisson (COM-Poisson or CMP) regression models\n to count data (Sellers & Shmueli, 2010) . The\n package provides functions for model estimation, dispersion testing, and\n diagnostics. 
Zero-inflated CMP regression (Sellers & Raim, 2016)\n is also supported.","Published":"2017-05-03","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"Compositional","Version":"2.4","Title":"Compositional Data Analysis","Description":"Regression, classification, contour plots, hypothesis testing, and fitting of distributions are the main functions included.","Published":"2017-05-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"compositions","Version":"1.40-1","Title":"Compositional Data Analysis","Description":"The package provides functions for the consistent analysis of\n compositional data (e.g. portions of substances) and positive numbers\n (e.g. concentrations) in the way proposed by Aitchison and Pawlowsky-Glahn.","Published":"2014-06-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"compound.Cox","Version":"3.3","Title":"Estimation, Gene Selection, and Survival Prediction Based on the\nCompound Covariate Method Under the Cox Proportional Hazard\nModel","Description":"Estimation, gene selection, and survival prediction based on the compound covariate method under the Cox model with high-dimensional gene expressions.\n Available are survival data for non-small-cell lung cancer patients with gene expressions (Chen et al 2007 New Engl J Med) ,\n statistical methods in Emura et al (2012 PLoS ONE) and\n Emura & Chen (2016 Stat Methods Med Res) . 
Algorithms for generating correlated gene expressions are also available.","Published":"2017-03-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Compounding","Version":"1.0.2","Title":"Computing Continuous Distributions","Description":"Computing Continuous Distributions Obtained by Compounding\n a Continuous and a Discrete Distribution","Published":"2013-02-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CompQuadForm","Version":"1.4.3","Title":"Distribution Function of Quadratic Forms in Normal Variables","Description":"Computes the distribution function of quadratic forms in normal variables using Imhof's method, Davies's algorithm, Farebrother's algorithm or Liu et al.'s algorithm.","Published":"2017-04-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CompR","Version":"1.0","Title":"Paired Comparison Data Analysis","Description":"Different tools for describing and analysing paired comparison data are presented. The main methods are estimation of product scores according to the Bradley-Terry-Luce model. A segmentation of the individuals can be conducted on the basis of a mixture distribution approach. The number of classes can be tested by the use of Monte Carlo simulations. This package also deals with multi-criteria paired comparison data. ","Published":"2015-07-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"CompRandFld","Version":"1.0.3-4","Title":"Composite-Likelihood Based Analysis of Random Fields","Description":"A set of procedures for the analysis of Random Fields using likelihood and non-standard likelihood methods is provided. Spatial analysis often involves dealing with large datasets. Therefore even simple studies may be too computationally demanding. Composite likelihood inference is emerging as a useful tool for mitigating such computational problems. This methodology shows satisfactory results when compared with other techniques such as the tapering method. 
Moreover, composite likelihood (and related quantities) have some useful properties similar to those of the standard likelihood.","Published":"2015-02-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"compute.es","Version":"0.2-4","Title":"Compute Effect Sizes","Description":"This package contains several functions for calculating the most\n widely used effect sizes (ES), along with their variances, confidence\n intervals and p-values. The output includes ES's of d (mean difference), g\n (unbiased estimate of d), r (correlation coefficient), z' (Fisher's z), and\n OR (odds ratio and log odds ratio). In addition, NNT (number needed to\n treat), U3, CLES (Common Language Effect Size) and Cliff's Delta are\n computed. This package uses recommended formulas as described in The\n Handbook of Research Synthesis and Meta-Analysis (Cooper, Hedges, &\n Valentine, 2009).","Published":"2014-09-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Conake","Version":"1.0","Title":"Continuous Associated Kernel Estimation","Description":"Continuous smoothing of probability density function on a compact or semi-infinite support is performed using four continuous associated kernels: extended beta, gamma, lognormal and reciprocal inverse Gaussian. The cross-validation technique is also implemented for bandwidth selection.","Published":"2015-03-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"concatenate","Version":"1.0.0","Title":"Human-Friendly Text from Unknown Strings","Description":"Simple functions for joining strings. Construct human-friendly messages whose elements aren't known in advance, like in stop, warning, or message, from clean code.","Published":"2016-05-08","License":"GPL (>= 3.2)","snapshot_date":"2017-06-23"} {"Package":"conclust","Version":"1.1","Title":"Pairwise Constraints Clustering","Description":"There are 4 main functions in this package: ckmeans(), lcvqe(), mpckm() and ccls(). 
They take an unlabeled dataset and two lists of must-link and cannot-link constraints as input and produce a clustering as output.","Published":"2016-08-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ConConPiWiFun","Version":"0.4.6","Title":"Optimisation with Continuous Convex Piecewise (Linear and\nQuadratic) Functions","Description":"Continuous convex piecewise linear (ccpl) resp. quadratic (ccpq) functions can be implemented with sorted breakpoints and slopes. This includes functions that are ccpl (resp. ccpq) on a convex set (i.e. an interval or a point) and infinite outside the domain. These functions can be very useful for a large class of optimisation problems. Efficient manipulation (such as log(N) insertion) of such data structures is obtained with the C++ Standard Template Library map (which hides balanced trees). This package is a wrapper on such a class based on Rcpp modules. ","Published":"2015-11-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"concor","Version":"1.0-0.1","Title":"Concordance","Description":"The four functions svdcp (cp for column partitioned),\n svdbip or svdbip2 (bip for bi-partitioned), and svdbips (s for\n a simultaneous optimization of one set of r solutions),\n correspond to a \"SVD by blocks\" notion, by supposing each block\n depending on relative subspaces, rather than on two whole\n spaces as usual SVD does. The other functions, based on this\n notion, are relative to two column partitioned data matrices x\n and y defining two sets of subsets xi and yj of variables and\n amount to estimate a link between xi and yj for the pair (xi,\n yj) relatively to the links associated to all the other pairs.","Published":"2012-10-29","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"concordance","Version":"1.6","Title":"Product Concordance","Description":"A set of utilities for matching products in different classification codes used in international trade research. 
It supports concordance between HS (Combined), ISIC Rev. 2,3, and SITC1,2,3,4 product classification codes, as well as BEC, NAICS, and SIC classifications. It also provides code nomenclature / descriptions look-up, Rauch classification look-up (via concordance to SITC2) and trade elasticity look-up (via concordance to SITC2/3 or HS3.ss).","Published":"2016-01-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"concreg","Version":"0.6","Title":"Concordance Regression","Description":"Implements concordance regression which can be used to estimate generalized odds of concordance.\n\tCan be used for non- and semi-parametric survival analysis with non-proportional hazards, for binary and \n for continuous outcome data.","Published":"2016-12-22","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"cond","Version":"1.2-3","Title":"Approximate conditional inference for logistic and loglinear\nmodels","Description":"Higher order likelihood-based inference for logistic and \n loglinear models","Published":"2014-06-27","License":"GPL (>= 2) | file LICENCE","snapshot_date":"2017-06-23"} {"Package":"condformat","Version":"0.6.0","Title":"Conditional Formatting in Data Frames","Description":"Apply and visualize conditional formatting to data frames in R.\n It renders a data frame with cells formatted according to\n criteria defined by rules, using a syntax similar to 'ggplot2'. The table is\n printed either by opening a web browser or within the 'RStudio' viewer if\n available. The conditional formatting rules allow the user to highlight cells\n matching a condition or to add a gradient background to a given column. 
This\n package supports both 'HTML' and 'LaTeX' outputs in 'knitr' reports, and\n exporting to an 'xlsx' file.","Published":"2017-05-18","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"condGEE","Version":"0.1-4","Title":"Parameter estimation in conditional GEE for recurrent event gap\ntimes","Description":"Solves for the mean parameters, the variance parameter, and their asymptotic variance in a conditional GEE for recurrent event gap times, as described by Clement and Strawderman (2009) in the journal Biostatistics. Makes a parametric assumption for the length of the censored gap time.","Published":"2013-08-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"condir","Version":"0.1.1","Title":"Computation of P Values and Bayes Factors for Conditioning Data","Description":"Set of functions for the easy analyses of conditioning data.","Published":"2017-02-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"conditions","Version":"0.1","Title":"Standardized Conditions for R","Description":"Implements specialized conditions, i.e., typed errors,\n warnings and messages. Offers a set of standardized conditions (value error,\n deprecated warning, io message, ...) 
in the fashion of Python's built-in\n exceptions.","Published":"2017-01-18","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"condmixt","Version":"1.0","Title":"Conditional Density Estimation with Neural Network Conditional\nMixtures","Description":"Conditional density estimation with mixtures for\n heavy-tailed distributions","Published":"2012-05-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"condMVNorm","Version":"2015.2-1","Title":"Conditional Multivariate Normal Distribution","Description":"Computes conditional multivariate normal probabilities, random deviates and densities.","Published":"2015-02-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"CONDOP","Version":"1.0","Title":"Condition-Dependent Operon Predictions","Description":"An implementation of the computational strategy for the\n comprehensive analysis of condition-dependent operon maps in prokaryotes\n proposed by Fortino et al. (2014) . \n It uses RNA-seq transcriptome profiles to improve prokaryotic operon map inference.","Published":"2016-02-24","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"CondReg","Version":"0.20","Title":"Condition Number Regularized Covariance Estimation","Description":"Based on\n \\url{http://statistics.stanford.edu/~ckirby/techreports/GEN/2012/2012-10.pdf}","Published":"2014-07-10","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"condSURV","Version":"2.0.1","Title":"Estimation of the Conditional Survival Function for Ordered\nMultivariate Failure Time Data","Description":"Method to implement some newly developed methods for the\n estimation of the conditional survival function.","Published":"2016-12-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"condvis","Version":"0.4-1","Title":"Conditional Visualization for Statistical Models","Description":"Exploring fitted models by interactively taking 2-D and 3-D\n sections in data 
space.","Published":"2016-10-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"coneproj","Version":"1.11","Title":"Primal or Dual Cone Projections with Routines for Constrained\nRegression","Description":"Routines for cone projection and quadratic programming, as well as for estimation and inference for constrained parametric regression and shape-restricted regression problems.","Published":"2016-09-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"conf.design","Version":"2.0.0","Title":"Construction of factorial designs","Description":"This small library contains a series of simple tools for\n constructing and manipulating confounded and fractional\n factorial designs.","Published":"2013-02-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"confidence","Version":"1.1-0","Title":"Confidence Estimation of Environmental State Classifications","Description":"Functions for estimating and reporting multiyear averages and\n corresponding confidence intervals and distributions. A potential use case\n is reporting the chemical and ecological status of surface waters according\n to the European Water Framework Directive.","Published":"2014-10-22","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"config","Version":"0.2","Title":"Manage Environment Specific Configuration Values","Description":"Manage configuration values across multiple environments (e.g.\n development, test, production). Read values using a function that determines\n the current environment and returns the appropriate value.","Published":"2016-08-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"configr","Version":"0.3.0","Title":"An Implementation of Parsing and Writing Configuration File\n(JSON/INI/YAML/TOML)","Description":"\n Implements JSON, INI, YAML and TOML parsers for R, for reading and writing configuration files. The functionality of this package is similar to that of package 'config'. 
","Published":"2017-06-22","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"confinterpret","Version":"0.2.0","Title":"Descriptive Interpretations of Confidence Intervals","Description":"Produces descriptive interpretations of confidence intervals.\n Includes (extensible) support for various test types, specified as sets\n of interpretations dependent on where the lower and upper confidence limits\n sit.","Published":"2017-05-11","License":"AGPL-3","snapshot_date":"2017-06-23"} {"Package":"conformal","Version":"0.2","Title":"Conformal Prediction for Regression and Classification","Description":"Implementation of conformal prediction using caret models for classification and regression.","Published":"2016-03-07","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"ConfoundedMeta","Version":"1.1.0","Title":"Sensitivity Analyses for Unmeasured Confounding in Meta-Analyses","Description":"Conducts sensitivity analyses for unmeasured confounding in\n random-effects meta-analysis per Mathur & VanderWeele (in preparation).\n Given output from a random-effects meta-analysis with a relative risk\n outcome, computes point estimates and inference for: (1) the proportion\n of studies with true causal effect sizes more extreme than a specified threshold\n of scientific significance; and (2) the minimum bias factor and confounding\n strength required to reduce to less than a specified threshold the proportion\n of studies with true effect sizes of scientifically significant size.\n Creates plots and tables for visualizing these metrics across a range of bias values.\n Provides tools to easily scrape study-level data from a published forest plot or \n summary table to obtain the needed estimates when these are not reported. 
","Published":"2017-06-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"confreq","Version":"1.5.1","Title":"Configural Frequencies Analysis Using Log-Linear Modeling","Description":"Offers several functions for Configural Frequencies\n Analysis (CFA), which is a useful statistical tool for the analysis of\n multiway contingency tables. CFA was introduced by G. A. Lienert as\n 'Konfigurations Frequenz Analyse - KFA'. Lienert, G. A. (1971). \n Die Konfigurationsfrequenzanalyse: I. Ein neuer Weg zu Typen und Syndromen. \n Zeitschrift für Klinische Psychologie und Psychotherapie, 19(2), 99–115.","Published":"2016-12-28","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"confSAM","Version":"0.1","Title":"Estimates and Bounds for the False Discovery Proportion, by\nPermutation","Description":"For multiple testing.\n Computes estimates and confidence bounds for the\n False Discovery Proportion (FDP), the fraction of false positives among\n all rejected hypotheses.\n The methods in the package use permutations of the data. Doing so, they\n take into account the dependence structure in the data.","Published":"2017-01-18","License":"GNU General Public License","snapshot_date":"2017-06-23"} {"Package":"congressbr","Version":"0.1.1","Title":"Downloads, Unpacks and Tidies Legislative Data from the\nBrazilian Federal Senate and Chamber of Deputies","Description":"Downloads and tidies data from the Brazilian Federal Senate and Chamber of Deputies Application Programming Interfaces available at and respectively.","Published":"2017-06-20","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"conicfit","Version":"1.0.4","Title":"Algorithms for Fitting Circles, Ellipses and Conics Based on the\nWork by Prof. Nikolai Chernov","Description":"Geometric circle fitting with Levenberg-Marquardt (a, b, R), Levenberg-Marquardt reduced (a, b), Landau, Spath and Chernov-Lesort. 
Algebraic circle fitting with Taubin, Kasa, Pratt and Fitzgibbon-Pilu-Fisher. Geometric ellipse fitting with ellipse LMG (geometric parameters) and conic LMA (algebraic parameters). Algebraic ellipse fitting with Fitzgibbon-Pilu-Fisher and Taubin.","Published":"2015-10-05","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"conics","Version":"0.3","Title":"Plot Conics","Description":"Plot conics (ellipses, hyperbolas, parabolas).","Published":"2013-12-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Conigrave","Version":"0.1.1","Title":"Flexible Tools for Multiple Imputation","Description":"Provides a set of tools that can be used across 'data.frame' and\n 'imputationList' objects.","Published":"2017-03-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"conjoint","Version":"1.39","Title":"Conjoint analysis package","Description":"Conjoint is a simple package that implements a conjoint\n analysis method to measure preferences.","Published":"2013-08-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ConjointChecks","Version":"0.0.9","Title":"A package to check the cancellation axioms of conjoint\nmeasurement","Description":"Implementation of a procedure (Domingue, 2012; see also\n Karabatsos, 2001 and Kyngdon, 2011) to test the single and\n double cancellation axioms of conjoint measurement in data that are\n dichotomously coded and measured with error.","Published":"2012-12-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"connect3","Version":"0.1.0","Title":"A Tool for Reproducible Research by Converting 'LaTeX' Files\nGenerated by R Sweave to Rich Text Format Files","Description":"Converts 'LaTeX' files (with extension '.tex') generated by R Sweave using package 'knitr' to Rich Text Format files (with extension '.rtf'). 
Rich Text Format files can be read and written by most word processors.","Published":"2015-12-05","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"ConnMatTools","Version":"0.3.3","Title":"Tools for Working with Connectivity Data","Description":"Collects several different methods for analyzing and\n working with connectivity data in R. Though primarily oriented towards\n marine larval dispersal, many of the methods are general and useful for\n terrestrial systems as well.","Published":"2016-11-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"conover.test","Version":"1.1.4","Title":"Conover-Iman Test of Multiple Comparisons Using Rank Sums","Description":"Computes the Conover-Iman test (1979) for stochastic dominance and reports the results among multiple pairwise comparisons after a Kruskal-Wallis test for stochastic dominance among k groups (Kruskal and Wallis, 1952). The interpretation of stochastic dominance requires an assumption that the CDF of one group does not cross the CDF of the other. conover.test makes k(k-1)/2 multiple pairwise comparisons based on the Conover-Iman t-test statistic of the rank differences. The null hypothesis for each pairwise comparison is that the probability of observing a randomly selected value from the first group that is larger than a randomly selected value from the second group equals one half; this null hypothesis corresponds to that of the Wilcoxon-Mann-Whitney rank-sum test. Like the rank-sum test, if the data can be assumed to be continuous, and the distributions are assumed identical except for a difference in location, the Conover-Iman test may be understood as a test for median difference. conover.test accounts for tied ranks. 
The Conover-Iman test is strictly valid if and only if the corresponding Kruskal-Wallis null hypothesis is rejected.","Published":"2017-04-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ConR","Version":"1.2.1","Title":"Computation of Parameters Used in Preliminary Assessment of\nConservation Status","Description":"Multi-species estimation of geographical range parameters\n\tfor preliminary assessment of conservation status following Criterion B of the \n\tInternational Union for Conservation of Nature (IUCN, \n\tsee ).","Published":"2017-06-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"CONS","Version":"0.1.1","Title":"Consonance Analysis Module","Description":"Consonance Analysis is a useful numerical and graphical approach\n for evaluating the consistency of the measurements and the panel of people\n involved in sensory evaluation. It makes use of several uni and multivariate\n techniques, either graphical or analytical. It shows the implementation of this\n procedure in a graphical interface.","Published":"2017-03-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ConSpline","Version":"1.1","Title":"Partial Linear Least-Squares Regression using Constrained\nSplines","Description":"Given response y, continuous predictor x, and covariate matrix, the relationship between E(y) and x is estimated with a shape constrained regression spline. Function outputs fits and various types of inference.","Published":"2015-08-29","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"ConsRank","Version":"2.0.1","Title":"Compute the Median Ranking(s) According to the Kemeny's\nAxiomatic Approach","Description":"Compute the median ranking according to Kemeny's axiomatic approach. \n Rankings may or may not contain ties, and rankings can be either complete or incomplete. 
\n The package contains both branch-and-bound algorithms and recently proposed heuristic solutions.\n The package also provides some useful utilities for dealing with preference rankings.\n Essential references:\n Emond, E.J., and Mason, D.W. (2002) ; \n D'Ambrosio, A., Amodio, S., and Iorio, C. (2015) ; \n Amodio, S., D'Ambrosio, A., and Siciliano R. (2016) ; \n D'Ambrosio, A., Mazzeo, G., Iorio, C., and Siciliano, R. (2017) .","Published":"2017-04-28","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"constrainedKriging","Version":"0.2.4","Title":"Constrained, Covariance-Matching Constrained and Universal Point\nor Block Kriging","Description":"Provides functions for\n efficient computations of nonlinear spatial predictions with\n local change of support. This package supplies functions for\n two-dimensional spatial interpolation by constrained,\n covariance-matching constrained and universal (external drift)\n kriging for points or blocks of any shape for data with a\n nonstationary mean function and an isotropic weakly stationary\n variogram. The linear spatial interpolation methods,\n constrained and covariance-matching constrained kriging,\n provide approximately unbiased prediction for nonlinear target\n values under change of support. 
This package\n extends the range of geostatistical tools available in R and\n provides a viable alternative to conditional simulation for\n nonlinear spatial prediction problems with local change of\n support.","Published":"2015-04-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ContaminatedMixt","Version":"1.1","Title":"Model-Based Clustering and Classification with the Multivariate\nContaminated Normal Distribution","Description":"Fits mixtures of multivariate contaminated normal distributions\n (with eigen-decomposed scale matrices) via the expectation conditional-\n\tmaximization algorithm under a clustering or classification paradigm.","Published":"2017-02-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"contfrac","Version":"1.1-10","Title":"Continued Fractions","Description":"Various utilities for evaluating continued fractions.","Published":"2016-05-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"conting","Version":"1.6","Title":"Bayesian Analysis of Contingency Tables","Description":"Bayesian analysis of complete and incomplete contingency tables.","Published":"2016-08-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"contoureR","Version":"1.0.5","Title":"Contouring of Non-Regular Three-Dimensional Data","Description":"Create contour lines for a non-regular series of points, potentially from a non-regular canvas.","Published":"2015-08-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ContourFunctions","Version":"0.1.0","Title":"Create Contour Plots from Data or a Function","Description":"Provides functions for making contour plots.\n The contour plot can be created from grid data, a function,\n or a data set. 
If non-grid data is given, then a Gaussian\n process is fit to the data and used to create the contour plot.","Published":"2017-05-04","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"contrast","Version":"0.21","Title":"A Collection of Contrast Methods","Description":"One degree of freedom contrasts for lm, glm, gls, and geese objects.","Published":"2016-09-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"controlTest","Version":"1.0","Title":"Median Comparison for Two-Sample Right-Censored Survival Data","Description":"Nonparametric two-sample procedure for comparing the median survival time. ","Published":"2015-06-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ConvCalendar","Version":"1.2","Title":"Converts dates between calendars","Description":"Converts between the Date class and d/m/y for several\n calendars, including Persian, Islamic, and Hebrew.","Published":"2013-04-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ConvergenceConcepts","Version":"1.2.1","Title":"Seeing Convergence Concepts in Action","Description":"This is a pedagogical package, designed to help students understand the convergence of\n random variables. It provides a way to investigate interactively various modes of\n\t convergence (in probability, almost surely, in law and in mean) of a sequence of i.i.d.\n\t random variables. Visualisation of simulated sample paths is possible through interactive\n\t plots. The approach is illustrated by examples and exercises through the function\n\t 'investigate', as described in\n\t Lafaye de Micheaux and Liquet (2009) .\n\t The user can study his/her own sequences of random variables.","Published":"2017-04-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"convertGraph","Version":"0.1","Title":"Convert Graphical Files Format","Description":"Converts graphical file formats (SVG,\n PNG, JPEG, BMP, GIF, PDF, etc.) to one another. 
The exceptions are the\n SVG file format, which can only be converted to other formats, and, in contrast,\n the PDF format, which can only be created from other graphical formats.\n The main purpose of the package was to provide a solution for converting the SVG\n file format to PNG, which is often needed for exporting graphical files\n produced by R widgets.","Published":"2016-04-16","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"convertr","Version":"0.1","Title":"Convert Between Units","Description":"Provides conversion functionality between a broad range of\n scientific, historical, and industrial unit types.","Published":"2016-10-13","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"convevol","Version":"1.0","Title":"Quantifies and assesses the significance of convergent evolution","Description":"Quantifies and assesses the significance of convergent evolution.","Published":"2014-12-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"convexjlr","Version":"0.5.1","Title":"Disciplined Convex Programming in R using Convex.jl","Description":"Package convexjlr provides a simple high-level wrapper for\n Julia package 'Convex.jl' (see for\n more information),\n which makes it easy to describe and solve convex optimization problems in R.\n The problems that can be dealt with include:\n linear programs,\n second-order cone programs,\n semidefinite programs,\n exponential cone programs.","Published":"2017-06-21","License":"Apache License | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"convey","Version":"0.2.0","Title":"Income Concentration Analysis with Complex Survey Samples","Description":"Variance estimation on indicators of income concentration and\n poverty using complex sample survey designs. 
Wrapper around the\n survey package.","Published":"2017-04-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"convoSPAT","Version":"1.2","Title":"Convolution-Based Nonstationary Spatial Modeling","Description":"Fits convolution-based nonstationary\n Gaussian process models to point-referenced spatial data. The nonstationary\n covariance function allows the user to specify the underlying correlation\n structure and which spatial dependence parameters should be allowed to\n vary over space: the anisotropy, nugget variance, and process variance.\n The parameters are estimated via maximum likelihood, using a local\n likelihood approach. Also provided are functions to fit stationary spatial\n models for comparison, calculate the Kriging predictor and standard errors,\n and create various plots to visualize nonstationarity.","Published":"2017-04-12","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"cooccur","Version":"1.3","Title":"Probabilistic Species Co-Occurrence Analysis in R","Description":"This R package applies the probabilistic model of species co-occurrence (Veech 2013) to a set of species distributed among a set of survey or sampling sites. The algorithm calculates the observed and expected frequencies of co-occurrence between each pair of species. The expected frequency is based on the distribution of each species being random and independent of the other species. The analysis returns the probabilities that a more extreme (either low or high) value of co-occurrence could have been obtained by chance. The package also includes functions for visualizing species co-occurrence results and preparing data for downstream analyses.","Published":"2016-02-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"cooccurNet","Version":"0.1.6","Title":"Co-Occurrence Network","Description":"Read and preprocess fasta format data, and construct the co-occurrence network for downstream analyses. 
This R package constructs the co-occurrence network with the algorithm developed by Du (2008). It can be used to transform high-dimensional data, such as DNA or protein sequences, into co-occurrence networks. Co-occurrence networks can not only capture the co-variation pattern between variables, such as the positions in DNA or protein sequences, but also reflect the relationship between samples. Although originally used for DNA and protein sequences, the method can also be applied to other kinds of data, such as RNA, SNP, etc.","Published":"2017-01-27","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"coop","Version":"0.6-0","Title":"Co-Operation: Fast Covariance, Correlation, and Cosine\nSimilarity Operations","Description":"Fast implementations of the co-operations: covariance,\n correlation, and cosine similarity. The implementations are\n fast and memory-efficient and their use is resolved\n automatically based on the input data, handled by R's S3\n methods. Full descriptions of the algorithms and benchmarks\n are available in the package vignettes.","Published":"2016-12-13","License":"BSD 2-clause License + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"cooptrees","Version":"1.0","Title":"Cooperative aspects of optimal trees in weighted graphs","Description":"Computes several cooperative games and allocation rules associated with minimum cost spanning tree problems and minimum cost arborescence problems.","Published":"2014-09-04","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"copBasic","Version":"2.0.5","Title":"General Bivariate Copula Theory and Many Utility Functions","Description":"Extensive functions for bivariate copula (bicopula) computations and related\n operations concerning oft-cited bicopula theory described by Nelsen (2006), Joe (2014), and\n other selected works. 
The lower, upper, product, and select other bicopula are implemented.\n Arbitrary bicopula expressions include the diagonal, survival copula, the dual of a copula,\n co-copula, numerical bicopula density, and maximum likelihood estimation. Level\n curves (sets), horizontal and vertical sections also are supported. Numerical derivatives and\n inverses of a bicopula are provided; simulation by the conditional distribution method thus is\n supported. Bicopula composition, convex combination, and products are provided. Support\n extends to Kendall Function as well as the Lmoments thereof, Kendall Tau, Spearman Rho and\n Footrule, Gini Gamma, Blomqvist Beta, Hoeffding Phi, Schweizer-Wolff Sigma, tail dependency\n (including pseudo-polar representation) and tail order, skewness, and bivariate Lmoments.\n Evaluators of positively/negatively quadrant dependency, left increasing and right\n decreasing are available. Kullback-Leibler divergence, Vuong's procedure, Spectral Measure,\n and Lcomoments for copula inference are available. Quantile and median regressions for\n V with respect to U and U with respect to V are available. Empirical copulas (EC) are supported.","Published":"2017-02-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"copCAR","Version":"2.0-2","Title":"Fitting the copCAR Regression Model for Discrete Areal Data","Description":"Provides tools for fitting the copCAR (Hughes, 2015)\n regression model for discrete\n\tareal data. Three types of estimation are supported (continuous\n\textension, composite marginal likelihood, and distributional transform),\n\tfor three types of outcomes (Bernoulli, negative binomial, and Poisson).","Published":"2017-03-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cope","Version":"0.2.3","Title":"Coverage Probability Excursion (CoPE) Sets","Description":"Provides functions to compute and plot Coverage\n Probability Excursion (CoPE) sets\n for real valued functions on a 2-dimensional domain. 
CoPE sets are obtained\n from repeated noisy observations of the function on the entire domain.\n They are designed to bound the excursion\n set of the target function at a given level from above and below with\n a predefined probability. The target\n function can be a parameter in spatially-indexed linear regression.\n Support by NIH grant R01 CA157528 is gratefully acknowledged. ","Published":"2017-02-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"coppeCosenzaR","Version":"0.1.0","Title":"COPPE-Cosenza Fuzzy Hierarchy Model","Description":"The program implements the COPPE-Cosenza Fuzzy Hierarchy Model. \n The model was based on the evaluation of local alternatives, representing \n regional potentialities, so as to fulfill demands of economic projects. \n After defining demand profiles in terms of their technological coefficients, \n the degree of importance of factors is defined so as to represent \n the productive activity. The method can detect a surplus of supply without \n the restriction of the distance of classical algebra, defining a hierarchy \n of location alternatives. In COPPE-Cosenza Model, the distance between \n factors is measured in terms of the difference between grades of memberships\n of the same factors belonging to two or more sets under comparison. The \n required factors are classified under the following linguistic variables: \n Critical (CR); Conditioning (C); Little Conditioning (LC); and Irrelevant \n (I). And the alternatives can assume the following linguistic variables: \n Excellent (Ex), Good (G), Regular (R), Weak (W), Empty (Em), Zero (Z) and \n Inexistent (In). The model also provides flexibility, allowing different \n aggregation rules to be performed and defined by the Decision Maker. Such \n feature is considered in this package, allowing the user to define other \n aggregation matrices, since it considers the same linguistic variables \n mentioned. 
","Published":"2017-05-20","License":"GPL-2 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"coprimary","Version":"1.0","Title":"Sample Size Calculation for Two Primary Time-to-Event Endpoints\nin Clinical Trials","Description":"Computes the required number of patients for two time-to-event endpoints as primary endpoints in a phase III clinical trial.","Published":"2016-12-15","License":"GPL (>= 3.3.2)","snapshot_date":"2017-06-23"} {"Package":"copula","Version":"0.999-17","Title":"Multivariate Dependence with Copulas","Description":"Classes (S4) of commonly used elliptical, Archimedean,\n extreme-value and other copula families, as well as their rotations,\n mixtures and asymmetrizations. Nested Archimedean copulas, related\n tools and special functions. Methods for density, distribution, random\n number generation, bivariate dependence measures, Rosenblatt transform,\n Kendall distribution function, perspective and contour plots. Fitting of\n copula models with potentially partly fixed parameters, including\n standard errors. Serial independence tests, copula specification tests\n (independence, exchangeability, radial symmetry, extreme-value\n dependence, goodness-of-fit) and model selection based on\n cross-validation. 
Empirical copula, smoothed versions, and\n non-parametric estimators of the Pickands dependence function.","Published":"2017-06-18","License":"GPL (>= 3) | file LICENCE","snapshot_date":"2017-06-23"} {"Package":"Copula.Markov","Version":"1.1","Title":"Estimation and Statistical Process Control Under Copula-Based\nTime Series Models","Description":"Estimation and statistical process control are performed under copula-based time-series models.","Published":"2016-07-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"CopulaDTA","Version":"0.0.5","Title":"Copula Based Bivariate Beta-Binomial Model for Diagnostic Test\nAccuracy Studies","Description":"Modelling of sensitivity and specificity on their natural scale\n using copula based bivariate beta-binomial distribution to yield marginal\n mean sensitivity and specificity. The intrinsic negative correlation between\n sensitivity and specificity is modelled using a copula function. A forest plot\n can be obtained for categorical covariates or for the model with intercept only.","Published":"2017-02-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"copulaedas","Version":"1.4.2","Title":"Estimation of Distribution Algorithms Based on Copulas","Description":"Provides a platform where EDAs (estimation of\n distribution algorithms) based on copulas can be implemented and\n studied. The package offers complete implementations of various\n EDAs based on copulas and vines, a group of well-known\n optimization problems, and utility functions to study the\n performance of the algorithms. 
Newly developed EDAs can be easily\n integrated into the package by extending an S4 class with generic\n functions for their main components.","Published":"2015-10-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CopulaRegression","Version":"0.1-5","Title":"Bivariate Copula Based Regression Models","Description":"This R package presents a bivariate, copula-based model\n for the joint distribution of a pair of continuous and discrete\n random variables. The two marginal random variables are modeled\n via generalized linear models, and their joint distribution\n (represented by a parametric copula family) is estimated using\n maximum-likelihood techniques.","Published":"2014-09-04","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"CopulaREMADA","Version":"1.0","Title":"Copula Mixed Effect Models for Bivariate and Trivariate\nMeta-Analysis of Diagnostic Test Accuracy Studies","Description":"Provides functions to implement the copula mixed models for bivariate and trivariate meta-analysis of diagnostic test accuracy studies. ","Published":"2016-09-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CopyDetect","Version":"1.2","Title":"Computing Statistical Indices to Detect Answer Copying on\nMultiple-Choice Tests","Description":"Contains several IRT and non-IRT based statistical indices proposed in the literature for detecting answer copying on multiple-choice examinations. Includes the indices that have been shown to be effective and reliable in simulation studies. 
Provides results for the Omega index, Generalized Binomial Test, K index, K1 and K2 indices, and S1 and S2 indices.","Published":"2016-04-27","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"coRanking","Version":"0.1.3","Title":"Co-Ranking Matrix","Description":"Calculates the co-ranking matrix to assess the\n quality of a dimensionality reduction.","Published":"2016-09-16","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"Corbi","Version":"0.4-2","Title":"Collection of Rudimentary Bioinformatics Tools","Description":"Provides a bundle of basic and fundamental bioinformatics tools,\n such as network querying and alignment, subnetwork extraction and search,\n network biomarker identification.","Published":"2017-05-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"corclass","Version":"0.1.1","Title":"Correlational Class Analysis","Description":"Perform a correlational class analysis of the data, resulting in a partition of the data into separate modules.","Published":"2016-01-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"corcounts","Version":"1.4","Title":"Generate correlated count random variables","Description":"Generate high-dimensional correlated count random\n variables with a prespecified Pearson correlation.","Published":"2009-11-13","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"cord","Version":"0.1.1","Title":"Community Estimation in G-Models via CORD","Description":"Partition data points (variables) into communities/clusters, similar to clustering algorithms, such as k-means and hierarchical clustering. This package implements a clustering algorithm based on a new metric CORD, defined for high dimensional parametric or semi-parametric distributions. 
Read http://arxiv.org/abs/1508.01939 for more details.","Published":"2015-09-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"CORE","Version":"3.0","Title":"Cores of Recurrent Events","Description":"Given a collection of intervals with integer start and end positions, finds recurrently targeted regions and estimates the significance of the findings. Randomization is implemented by parallel methods, either using local host machines, or submitting grid engine jobs.","Published":"2014-12-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"corehunter","Version":"3.1.0","Title":"Multi-Purpose Core Subset Selection","Description":"Core Hunter is a tool to sample diverse, representative subsets from large germplasm\n collections, with minimum redundancy. Such so-called core collections have applications in plant\n breeding and genetic resource management in general. Core Hunter can construct cores based on\n genetic marker data, phenotypic traits or precomputed distance matrices, optimizing one of many\n provided evaluation measures depending on the precise purpose of the core (e.g. high diversity,\n representativeness, or allelic richness). In addition, multiple measures can be simultaneously\n optimized as part of a weighted index to bring the different perspectives closer together.\n The Core Hunter library is implemented in Java 8 as an open source project (see\n ).","Published":"2017-02-20","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"CORElearn","Version":"1.50.3","Title":"Classification, Regression and Feature Evaluation","Description":"A suite of machine learning algorithms written in C++ with an R \n interface; it contains several learning techniques for classification and regression.\n Predictive models include, e.g., classification and regression trees with\n optional constructive induction and models in the leaves, random forests, kNN, \n naive Bayes, and locally weighted regression. 
All predictions obtained with these\n models can be explained and visualized with the ExplainPrediction package. \n The package is especially strong in feature evaluation, where it contains several variants of\n the Relief algorithm and many impurity-based attribute evaluation functions, e.g., Gini, \n information gain, MDL, and DKM. These methods can be used for feature selection \n or discretization of numeric attributes.\n The OrdEval algorithm and its visualization are used for evaluation\n of data sets with ordinal features and class, enabling analysis according to the \n Kano model of customer satisfaction. \n Several algorithms support parallel multithreaded execution via OpenMP. \n The top-level documentation is reachable through ?CORElearn.","Published":"2017-03-28","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"coreNLP","Version":"0.4-2","Title":"Wrappers Around Stanford CoreNLP Tools","Description":"Provides a minimal interface for applying\n annotators from the 'Stanford CoreNLP' java library. Methods\n are provided for tasks such as tokenisation, part of speech\n tagging, lemmatisation, named entity recognition, coreference\n detection and sentiment analysis.","Published":"2016-09-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"coreSim","Version":"0.2.4","Title":"Core Functionality for Simulating Quantities of Interest from\nGeneralised Linear Models","Description":"Core functions for simulating quantities of interest\n from generalised linear models (GLM). This package will form the backbone of\n a series of other packages that improve the interpretation of GLM estimates.","Published":"2017-05-15","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"coreTDT","Version":"1.0","Title":"TDT for compound heterozygous and recessive models","Description":"Used to analyze case-parent trio sequencing studies. 
Tests the compound heterozygous and recessive disease models.","Published":"2014-09-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"corHMM","Version":"1.20","Title":"Analysis of Binary Character Evolution","Description":"Fits a hidden rates model that allows different transition rate classes on different portions of a phylogeny by treating rate classes as hidden states in a Markov process and various other functions for evaluating models of binary character evolution.","Published":"2016-09-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"corkscrew","Version":"1.1","Title":"Preprocessor for Data Modeling","Description":"Includes binning categorical variables into a smaller number of categories based on a t-test, converting categorical variables into continuous features \n\tusing the mean of the response variable for the respective categories, and understanding the relationship between the response variable and predictor variables \n\tusing data transformations.","Published":"2015-10-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"corlink","Version":"1.0.0","Title":"Record Linkage, Incorporating Imputation for Missing Agreement\nPatterns, and Modeling Correlation Patterns Between Fields","Description":"A matrix of agreement patterns and counts for record pairs is the input for the procedure. An EM algorithm is used to impute plausible values for missing record pairs. A second EM algorithm, incorporating possible correlations between per-field agreement, is used to estimate posterior probabilities that each pair is a true match - i.e. constitutes the same individual.","Published":"2016-10-20","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"CORM","Version":"1.0.2","Title":"The Clustering of Regression Models Method","Description":"We propose a new model-based clustering method, called the clustering of \n regression models method (CORM), which groups genes that share a similar \n relationship to the covariate(s). 
This method provides a unified approach \n for a family of clustering procedures and can be applied to data collected \n with various experimental designs. This package includes the implementation \n for two such clustering procedures: (1) the Clustering of Linear Models \n (CLM) method, and (2) the Clustering of Linear Mixed Models (CLMM) method.","Published":"2014-06-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"corpcor","Version":"1.6.9","Title":"Efficient Estimation of Covariance and (Partial) Correlation","Description":"Implements a James-Stein-type shrinkage estimator for \n the covariance matrix, with separate shrinkage for variances and correlations. \n The details of the method are explained in Schafer and Strimmer (2005) \n and Opgen-Rhein and Strimmer (2007) \n . The approach is both computationally and statistically very efficient; it is applicable to \"small n, large p\" data, \n and always returns a positive definite and well-conditioned covariance matrix. \n In addition to inferring the covariance matrix, the package also provides \n shrinkage estimators for partial correlations and partial variances. \n The inverse of the covariance and correlation matrix \n can be efficiently computed, as well as any arbitrary power of the \n shrinkage correlation matrix. 
Furthermore, functions are available for fast \n singular value decomposition, for computing the pseudoinverse, and for \n checking the rank and positive definiteness of a matrix.","Published":"2017-04-01","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"corpora","Version":"0.4-3","Title":"Statistics and data sets for corpus frequency data","Description":"Utility functions and data sets for the statistical\n analysis of corpus frequency data, used in the SIGIL statistics\n course.","Published":"2012-04-04","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"CorporaCoCo","Version":"1.0-2","Title":"Corpora Co-Occurrence Comparison","Description":"A set of functions used to compare co-occurrence between two corpora.","Published":"2017-03-31","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"corpus","Version":"0.7.0","Title":"Text Corpus Analysis","Description":"Text corpus data analysis, with full support for Unicode. Functions for reading data from newline-delimited JSON files, for normalizing and tokenizing text, for searching for term occurrences, and for computing term occurrence frequencies (including n-grams).","Published":"2017-06-22","License":"Apache License (== 2.0) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"corr2D","Version":"0.1.12","Title":"Implementation of 2D Correlation Analysis in R","Description":"Implementation of two-dimensional (2D) correlation analysis based\n on the Fourier-transformation approach described by Isao Noda (I. Noda\n (1993) ). Additionally there are two plot\n functions for the resulting correlation matrix: The first one creates\n colored 2D plots, while the second one generates 3D plots.","Published":"2016-11-28","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"CorrBin","Version":"1.5","Title":"Nonparametrics with Clustered Binary and Multinomial Data","Description":"This package implements non-parametric analyses for clustered\n binary and multinomial data. 
The elements of the cluster are assumed\n exchangeable, and an identical joint distribution (also known as marginal\n compatibility, or reproducibility) is assumed for clusters of different\n sizes. A trend test based on stochastic ordering is implemented.","Published":"2015-01-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"correctedAUC","Version":"0.0.3","Title":"Correcting AUC for Measurement Error","Description":"Correcting area under ROC (AUC) for measurement error based on a probit-shift model.","Published":"2016-06-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CorrectOverloadedPeaks","Version":"1.2.14","Title":"Correct Overloaded Peaks from GC-APCI-MS Data","Description":"Analyzes and modifies metabolomics raw data (generated using GC-APCI-MS, Gas Chromatography-Atmospheric Pressure Chemical Ionization-Mass Spectrometry) to correct overloaded signals, i.e. ion intensities exceeding detector saturation leading to a cut-off peak. Data in xcmsRaw format are accepted as input and mzXML files can be processed alternatively. Overloaded signals are detected automatically and modified using a Gaussian or Isotopic-Ratio approach, QC plots are generated, and corrected data are stored within the original xcmsRaw or mzXML, respectively, to allow further processing.","Published":"2016-08-30","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"CorReg","Version":"1.2.1","Title":"Linear Regression Based on Linear Structure Between Variables","Description":"Linear regression based on a recursive structural equation model\n (explicit multiple correlations) found by an M.C.M.C. algorithm. It makes it possible to handle\n highly correlated variables. Variable selection is included (by lasso,\n elastic net, etc.). 
It also provides some graphical tools for basic\n statistics.","Published":"2017-05-03","License":"CeCILL","snapshot_date":"2017-06-23"} {"Package":"corregp","Version":"1.0.2","Title":"Functions and Methods for Correspondence Regression","Description":"A collection of tools for correspondence regression, i.e. the\n correspondence analysis of the crosstabulation of a categorical variable Y\n in function of another one X, where X can in turn be made up of the\n combination of various categorical variables. Consequently, correspondence\n regression can be used to analyze the effects for a polytomous or\n multinomial outcome variable.","Published":"2017-05-08","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"Correlplot","Version":"1.0-2","Title":"A collection of functions for graphing correlation matrices","Description":"Correlplot contains diverse routines for the construction of different plots for representing correlation matrices.","Published":"2013-10-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"corrgram","Version":"1.12","Title":"Plot a Correlogram","Description":"Calculates correlation of variables and displays the results\n graphically. Included panel functions can display points, shading, ellipses, and\n correlation values with confidence intervals.","Published":"2017-05-07","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"CorrMixed","Version":"0.1-13","Title":"Estimate Correlations Between Repeatedly Measured Endpoints\n(E.g., Reliability) Based on Linear Mixed-Effects Models","Description":"In clinical practice and research settings in medicine and the behavioral sciences, it is often of interest to quantify the correlation of a continuous endpoint that was repeatedly measured (e.g., test-retest correlations, ICC, etc.). This package allows for estimating these correlations based on mixed-effects models. 
Part of this software has been developed using funding provided from the European Union's 7th Framework Programme for research, technological development and demonstration under Grant Agreement no 602552.","Published":"2016-08-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"corrplot","Version":"0.77","Title":"Visualization of a Correlation Matrix","Description":"A graphical display of a correlation matrix or general matrix.\n It also contains some algorithms to do matrix reordering.","Published":"2016-04-21","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"corrr","Version":"0.2.1","Title":"Correlations in R","Description":"A tool for exploring correlations.\n It makes it possible to easily perform routine tasks when\n exploring correlation matrices such as ignoring the diagonal,\n focusing on the correlations of certain variables against others,\n or rearranging and visualising the matrix in terms of the\n strength of the correlations.","Published":"2016-10-10","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"corrsieve","Version":"1.6-8","Title":"CorrSieve","Description":"Statistical summary of Structure output.","Published":"2013-05-07","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"CorrToolBox","Version":"1.4","Title":"Modeling Correlational Magnitude Transformations in\nDiscretization Contexts","Description":"Modeling the correlation transitions under specified distributional assumptions within the realm of discretization in the context of the latency and threshold concepts.","Published":"2017-03-28","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"corset","Version":"0.1-4","Title":"Arbitrary Bounding of Series and Time Series Objects","Description":"Set of methods to constrain numerical series and time series within\n arbitrary boundaries.","Published":"2017-06-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"corTools","Version":"1.0","Title":"Tools for 
processing data after a Genome Wide Association Study","Description":"Designed for analysis of the results of a Genome Wide Association Study. Includes tools to pull lists of Chromosome number and SNP position below a certain significance threshold, refine gene networks (including data I/O for Cytoscape), and check SNP base pair changes. ","Published":"2013-08-23","License":"Artistic License 2.0","snapshot_date":"2017-06-23"} {"Package":"CoSeg","Version":"0.38","Title":"Cosegregation Analysis and Pedigree Simulation","Description":"Tools for generating and analyzing pedigrees. Specifically, this has functions that will generate realistic pedigrees for the USA and China based on historical birth rates and family sizes. It also has functions for analyzing these pedigrees when they include disease information, including one based on counting meioses and another based on likelihood ratios.","Published":"2017-02-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"COSINE","Version":"2.1","Title":"COndition SpecIfic sub-NEtwork","Description":"To identify the globally most discriminative subnetwork from gene \n expression profiles using an optimization model and a genetic algorithm.","Published":"2014-07-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cosinor","Version":"1.1","Title":"Tools for estimating and predicting the cosinor model","Description":"cosinor is a set of simple functions that transform longitudinal\n data to estimate the cosinor linear model as described in Tong (1976).\n Methods are given to summarize the mean, amplitude and acrophase, to\n predict the mean annual outcome value, and to test the coefficients.","Published":"2014-07-28","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"cosmoFns","Version":"1.0-1","Title":"Functions for cosmological distances, times, luminosities, etc","Description":"Package encapsulates standard expressions for distances,\n times, luminosities, and other quantities 
useful in\n observational cosmology, including molecular line observations.\n Currently coded for a flat universe only.","Published":"2012-12-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CosmoPhotoz","Version":"0.1","Title":"Photometric redshift estimation using generalized linear models","Description":"User-friendly interfaces to perform fast and reliable photometric\n redshift estimation. The code makes use of generalized linear models and\n can adopt gamma or inverse Gaussian families, either from a frequentist or\n a Bayesian perspective. The code additionally provides a Shiny application\n with a simple user interface.","Published":"2014-08-24","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"cosso","Version":"2.1-1","Title":"Fit Regularized Nonparametric Regression Models Using COSSO\nPenalty","Description":"COSSO is a new regularization method that automatically\n estimates and selects important function components by a\n soft-thresholding penalty in the context of smoothing spline\n ANOVA models. Implemented models include mean regression,\n quantile regression, logistic regression and the Cox regression\n models.","Published":"2013-03-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"costat","Version":"2.3","Title":"Time series costationarity determination","Description":"Contains functions that can determine whether a time series\n\tis second-order stationary or not (and hence evidence for\n\tlocal stationarity). Given two non-stationary series (i.e.\n\tlocally stationary series), this package can then discover\n\ttime-varying linear combinations that are second-order stationary.","Published":"2013-10-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CosW","Version":"0.1","Title":"The CosW Distribution","Description":"Density, distribution function, quantile function, random\n generation and survival function for the Cosine Weibull Distribution as defined\n by SOUZA, L. 
New Trigonometric Class of Probabilistic Distributions. 219 p.\n Thesis (Doctorate in Biometry and Applied Statistics) - Department of Statistics\n and Information, Federal Rural University of Pernambuco, Recife, Pernambuco,\n 2015 (available at ) and BRITO, C. C. R. Method Distributions generator and\n Probability Distributions Classes. 241 p. Thesis (Doctorate in Biometry and\n Applied Statistics) - Department of Statistics and Information, Federal Rural\n University of Pernambuco, Recife, Pernambuco, 2014 (available upon request).","Published":"2016-07-17","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"cotrend","Version":"1.0","Title":"Consistent Cotrend Rank Selection","Description":"Implements the cointegration/cotrending rank selection\n algorithm of Guo and Shintani (2011). Paper: \"Consistent\n Cotrending rank selection when both stochastic and nonlinear\n deterministic trends are present\", Preprint, Feb 2011.","Published":"2012-10-29","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"couchDB","Version":"1.4.1","Title":"Connect to and Work with CouchDB Databases","Description":"Interface to the couchDB document database.","Published":"2016-06-26","License":"AGPL-3","snapshot_date":"2017-06-23"} {"Package":"COUNT","Version":"1.3.4","Title":"Functions, Data and Code for Count Data","Description":"Functions, data and code for Hilbe, J.M. 2011. Negative Binomial Regression, 2nd Edition (Cambridge University Press) and Hilbe, J.M. 2014. Modeling Count Data (Cambridge University Press).","Published":"2016-10-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Counterfactual","Version":"1.0","Title":"Estimation and Inference Methods for Counterfactual Analysis","Description":"Implements the estimation and inference methods for counterfactual analysis described in Chernozhukov, Fernandez-Val and Melly (2013) \"Inference on Counterfactual Distributions,\" Econometrica, 81(6). 
The counterfactual distributions considered are the result of changing either the marginal distribution of covariates related to the outcome variable of interest, or the conditional distribution of the outcome given the covariates. They can be applied to estimate quantile treatment effects and wage decompositions.","Published":"2016-09-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Countr","Version":"3.2.8","Title":"Flexible Univariate Count Models Based on Renewal Processes","Description":"Flexible univariate count models based on renewal\n processes. The models may include covariates and can be specified\n with familiar formula syntax as in glm().","Published":"2016-12-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"countrycode","Version":"0.19","Title":"Convert Country Names and Country Codes","Description":"Standardize country names, convert them into one of\n eleven coding schemes, convert between coding schemes, and\n assign region descriptors.","Published":"2017-02-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"CountsEPPM","Version":"2.1","Title":"Mean and Variance Modeling of Count Data","Description":"Modeling under- and over-dispersed count data using extended Poisson process models (EPPM).","Published":"2016-04-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"countyfloods","Version":"0.0.2","Title":"Quantify United States County-Level Flood Measurements","Description":"Quantifies United States flood impacts at the county level using\n United States Geological Survey (USGS) River Discharge data from the USGS\n API. 
This package builds on R packages from the USGS, with the goal of\n creating county-level time series of flood status that can be more easily\n joined with county-level impact measurements, including health outcomes.\n This work was supported in part by grants from the National Institute of\n Environmental Health Sciences (R00ES022631), the Colorado Water Center,\n and the National Science Foundation, Integrative Graduate Education and\n Research Traineeship (IGERT) Grant No. DGE-0966346 \"I-WATER: Integrated\n Water, Atmosphere, Ecosystems Education and Research Program\" at\n Colorado State University.","Published":"2017-05-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"countytimezones","Version":"1.0.0","Title":"Convert from UTC to Local Time for United States Counties","Description":"Inputs date-times in Coordinated Universal Time (UTC)\n and converts to a local date-time and local date for US counties, based on\n each county's Federal Information Processing Standard (FIPS) code.\n This work was supported in part by grants from the National Institute of\n Environmental Health Sciences (R00ES022631) and the National Science\n Foundation (1331399).","Published":"2016-10-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"countyweather","Version":"0.1.0","Title":"Compiles Meteorological Data for U.S. Counties","Description":"Interacts with NOAA data sources (including the NCDC API at\n and ISD data) using\n functions from the 'rnoaa' package to obtain and compile weather time\n series for U.S. counties. This work was supported in part by grants from the\n National Institute of Environmental Health Sciences (R00ES022631) and the\n Colorado State University Water Center.","Published":"2016-10-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"COUSCOus","Version":"1.0.0","Title":"A Residue-Residue Contact Detecting Method","Description":"Contact prediction using shrunken covariance (COUSCOus). 
COUSCOus is a residue-residue contact detecting method approaching the contact inference using the glassofast implementation of Matyas and Sustik (2012, The University of Texas at Austin UTCS Technical Report 2012:1-3. TR-12-29.) that solves the L_1 regularised Gaussian maximum likelihood estimation of the inverse of a covariance matrix. Prior to the inverse covariance matrix estimation, we utilise a covariance matrix shrinkage approach, the empirical Bayes covariance estimator, which has been shown by Haff (1980) to be the best estimator in a Bayesian framework, especially dominating estimators of the form aS, such as the smoothed covariance estimator applied in the related contact inference technique PSICOV.","Published":"2016-02-28","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"covafillr","Version":"0.4.1","Title":"Local Polynomial Regression of State Dependent Covariates in\nState-Space Models","Description":"Facilitates local polynomial regression for state dependent covariates in state-space models. The functionality can also be used from 'C++' based model builder tools such as 'Rcpp'/'inline', 'TMB', or 'JAGS'.","Published":"2017-05-04","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"covatest","Version":"0.2.1","Title":"Tests on Properties of Space-Time Covariance Functions","Description":"Tests on properties of space-time covariance functions.\n Tests of symmetry and separability, and tests for assessing \n different forms of non-separability, are available. 
Moreover, tests on \n some classes of covariance functions, such as \n product-sum models, Gneiting models and integrated product models, are \n provided.","Published":"2017-04-27","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"covBM","Version":"0.1.0","Title":"Brownian Motion Processes for 'nlme'-Models","Description":"Allows Brownian motion, fractional Brownian motion,\n and integrated Ornstein-Uhlenbeck process components to\n be added to linear and non-linear mixed effects models\n using the structures and methods of the 'nlme' package.","Published":"2015-10-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"covfefe","Version":"0.1.0","Title":"Covfefy Any Word, Sentence or Speech","Description":"Converts any word, sentence or speech into Trump's infamous\n \"covfefe\" format. Reference: .\n Inspiration thanks to: .","Published":"2017-06-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"covLCA","Version":"1.0","Title":"Latent Class Models with Covariate Effects on Underlying and\nMeasured Variables","Description":"Estimation of latent class models with covariate effects\n on underlying and measured variables. The measured variables\n are dichotomous or polytomous, all with the same number of\n categories.","Published":"2013-06-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"covmat","Version":"1.0","Title":"Covariance Matrix Estimation","Description":"We implement a collection of techniques for estimating covariance matrices. \n Covariance matrices can be built using missing data. Stambaugh Estimation and \n FMMC methods can be used to construct such matrices. Covariance matrices can \n be built by denoising or shrinking the eigenvalues of a sample covariance \n matrix. Such techniques work by exploiting the tools in Random Matrix Theory \n to analyse the distribution of eigenvalues. Covariance matrices can also \n be built assuming that data has many underlying regimes. 
Each regime is \n allowed to follow a Dynamic Conditional Correlation model. Robust covariance \n matrices can be constructed by multivariate cleaning and smoothing of noisy data.","Published":"2015-09-28","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"covr","Version":"2.2.2","Title":"Test Coverage for Packages","Description":"Track and report code coverage for your package and (optionally)\n upload the results to a coverage service like 'Codecov' (http://codecov.io) or\n 'Coveralls' (http://coveralls.io). Code coverage is a measure of the amount of\n code being exercised by a set of tests. It is an indirect measure of test\n quality and completeness. This package is compatible with any testing\n methodology or framework and tracks coverage of both R code and compiled\n C/C++/FORTRAN code.","Published":"2017-01-05","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"covreg","Version":"1.0","Title":"A simultaneous regression model for the mean and covariance","Description":"This package fits a simultaneous regression model for the mean vectors and covariance matrices of multivariate response variables, as described in Hoff and Niu (2012). The explanatory variables can be continuous or discrete. The current version of the package provides the Bayesian estimates.","Published":"2014-03-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"covRobust","Version":"1.1-3","Title":"Robust Covariance Estimation via Nearest Neighbor Cleaning","Description":"The cov.nnve() function implements robust covariance estimation\n by the nearest neighbor variance estimation (NNVE) method of\n Wang and Raftery (2002) .","Published":"2017-05-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CovSel","Version":"1.2.1","Title":"Model-Free Covariate Selection","Description":"Model-free selection of covariates under unconfoundedness for situations where the parameter of interest is an average causal effect. 
This package is based on model-free backward elimination algorithms proposed in de Luna, Waernbaum and Richardson (2011). Marginal co-ordinate hypothesis testing is used in situations where all covariates are continuous while kernel-based smoothing appropriate for mixed data is used otherwise.","Published":"2015-11-09","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"CovSelHigh","Version":"1.1.0","Title":"Model-Free Covariate Selection in High Dimensions","Description":"Model-free selection of covariates in high dimensions under unconfoundedness for situations where the parameter of interest is an average causal effect. This package is based on model-free backward elimination algorithms proposed in de Luna, Waernbaum and Richardson (2011) and VanderWeele and Shpitser (2011) . Confounder selection can be performed via either Markov/Bayesian networks, random forests or LASSO.","Published":"2017-05-04","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"covsep","Version":"1.0.0","Title":"Tests for Determining if the Covariance Structure of\n2-Dimensional Data is Separable","Description":"Functions for testing if the covariance structure of 2-dimensional data\n (e.g. samples of surfaces X_i = X_i(s,t)) is separable, i.e. if covariance(X) = C_1 x C_2.\n A complete description of the implemented tests can be found in the paper\n arXiv:1505.02023. ","Published":"2016-01-28","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"covTest","Version":"1.02","Title":"Computes the covariance test for adaptive linear modelling","Description":"This package computes the covariance test for the lasso. ","Published":"2013-08-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"cowbell","Version":"0.1.0","Title":"Performs Segmented Linear Regression on Two Independent\nVariables","Description":"Implements a specific form of segmented linear regression\n with two independent variables. 
The visualization of that function looks \n like a quarter segment of a cowbell, giving the package its name. \n The package has been specifically constructed for the case where the minimum \n and maximum values of the dependent and two independent variables \n are known a priori, which is usually the case\n when those values are derived from Likert scales.","Published":"2017-05-05","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"cowplot","Version":"0.7.0","Title":"Streamlined Plot Theme and Plot Annotations for 'ggplot2'","Description":"Some helpful extensions and modifications to the 'ggplot2'\n package. In particular, this package makes it easy to combine multiple\n 'ggplot2' plots into one and label them with letters, e.g. A, B, C, etc.,\n as is often required for scientific publications. The package also provides\n a streamlined and clean theme that is used in the Wilke lab, hence the\n package name, which stands for Claus O. Wilke's plot package.","Published":"2016-10-28","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"cowsay","Version":"0.5.0","Title":"Messages, Warnings, Strings with Ascii Animals","Description":"Allows printing of character strings as messages/warnings/etc.\n with ASCII animals, including cats, cows, frogs, chickens, ghosts,\n and more.","Published":"2016-12-16","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"CoxBoost","Version":"1.4","Title":"Cox models by likelihood based boosting for a single survival\nendpoint or competing risks","Description":"This package provides routines for fitting Cox models by\n likelihood based boosting for a single endpoint or in the presence\n of competing risks.","Published":"2013-05-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"coxinterval","Version":"1.2","Title":"Cox-Type Models for Interval-Censored Data","Description":"Fits Cox-type models based on interval-censored data from a survival\n or illness-death 
process.","Published":"2015-05-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"coxme","Version":"2.2-5","Title":"Mixed Effects Cox Models","Description":"Cox proportional hazards models containing Gaussian random \n effects, also known as frailty models.","Published":"2015-06-15","License":"LGPL-2","snapshot_date":"2017-06-23"} {"Package":"Coxnet","Version":"0.2","Title":"Regularized Cox Model","Description":"Cox model regularized with net (L1 and Laplacian), elastic-net (L1 and L2) or lasso (L1) penalty, and their adaptive forms, such as adaptive lasso and net adjusting for signs of linked coefficients. Moreover, it treats the number of non-zero coefficients as another tuning parameter and selects it simultaneously with the regularization parameter \code{lambda}. In addition, it fits a varying coefficient Cox model by kernel smoothing, incorporated with the aforementioned penalties. The package uses a one-step coordinate descent algorithm and runs extremely fast by taking into account the sparsity structure of coefficients.","Published":"2015-12-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"coxphf","Version":"1.12","Title":"Cox Regression with Firth's Penalized Likelihood","Description":"Implements Firth's penalized maximum likelihood bias reduction method for Cox regression\n which has been shown to provide a solution in case of monotone likelihood (nonconvergence of likelihood function).\n The program fits profile penalized likelihood confidence intervals which were proved to outperform\n Wald confidence intervals.","Published":"2016-12-22","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"coxphMIC","Version":"0.1.0","Title":"Sparse Estimation of Cox Proportional Hazards Models via\nApproximated Information Criterion","Description":"Sparse estimation for Cox PH models is done via\n Minimum approximated Information Criterion (MIC) by Su, Wijayasinghe, \n Fan, and Zhang (2016) . 
MIC mimics the best \n subset selection using a penalized likelihood approach yet with no need \n of a tuning parameter. The problem is further reformulated with a \n re-parameterization step so that it reduces to one unconstrained non-convex\n yet smooth programming problem, which can be solved efficiently. Furthermore,\n the re-parameterization tactic yields an additional advantage in terms of\n circumventing post-selection inference.","Published":"2017-04-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"coxphw","Version":"4.0.0","Title":"Weighted Estimation in Cox Regression","Description":"Implements weighted estimation in Cox regression as proposed by\n Schemper, Wakounig and Heinze (Statistics in Medicine, 2009, ). Weighted Cox regression\n provides unbiased average hazard ratio estimates also in the case of non-proportional hazards.\n The approximated generalized concordance probability, an effect size measure for clear-cut\n decisions, can be obtained.\n The package provides options to estimate time-dependent effects conveniently by\n including interactions of covariates with arbitrary functions of time, with or without\n making use of the weighting option.","Published":"2017-01-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"CoxPlus","Version":"1.1.1","Title":"Cox Regression (Proportional Hazards Model) with Multiple Causes\nand Mixed Effects","Description":"A high-performance package estimating the Cox model when an event has more than one cause. It also supports random and fixed effects, tied events, and time-varying variables.","Published":"2015-10-24","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"CoxRidge","Version":"0.9.2","Title":"Cox Models with Dynamic Ridge Penalties","Description":"A package for fitting Cox models with penalized ridge-type partial likelihood. The package includes functions for fitting simple Cox models with all covariates controlled by a ridge penalty. 
The weight of the penalty is optimised by using a REML-type algorithm. Models with time-varying effects of the covariates can also be fitted. Some of the covariates may be allowed to be fixed and thus not controlled by the penalty. There are three different penalty functions: ridge, dynamic and weighted dynamic. Time-varying effects can be fitted without the need of an expanded dataset. ","Published":"2015-02-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"coxrobust","Version":"1.0","Title":"Robust Estimation in Cox Model","Description":"Robustly fits a proportional hazards regression model.","Published":"2006-03-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"coxsei","Version":"0.1","Title":"Fitting a CoxSEI Model","Description":"It fits a CoxSEI (Cox type Self-Exciting Intensity) model to right-censored counting process data.","Published":"2015-02-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CP","Version":"1.6","Title":"Conditional Power Calculations","Description":"Functions for calculating the conditional power\n for different models in survival time analysis\n within randomized clinical trials\n with two different treatments to be compared\n and survival as an endpoint.","Published":"2016-06-29","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"cp4p","Version":"0.3.5","Title":"Calibration Plot for Proteomics","Description":"Functions to check whether a vector of p-values respects the assumptions of FDR (false discovery rate) control procedures and to compute adjusted p-values.","Published":"2016-05-16","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"cpa","Version":"1.0","Title":"Confirmatory Path Analysis through the d-sep tests","Description":"The package includes functions to test and compare causal models. 
","Published":"2013-10-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CPBayes","Version":"0.2.0","Title":"Bayesian Meta Analysis for Studying Cross-Phenotype Genetic\nAssociations","Description":"A Bayesian meta-analysis method for studying cross-phenotype\n genetic associations. It uses summary-level data across multiple phenotypes to\n simultaneously measure the evidence of aggregate-level pleiotropic association and\n estimate an optimal subset of traits associated with the risk locus. CPBayes is based\n on a spike and slab prior and is implemented via the Markov chain Monte Carlo technique of Gibbs sampling.","Published":"2017-04-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"cpca","Version":"0.1.2","Title":"Methods to perform Common Principal Component Analysis (CPCA)","Description":"This package contains methods to perform Common Principal\n Component Analysis (CPCA). The stepwise method by Trendafilov is published\n in the current version. Please see Trendafilov (2010). Stepwise estimation\n of common principal components. Computational Statistics & Data Analysis,\n 54(12), 3446-3457. doi:10.1016/j.csda.2010.03.010","Published":"2014-02-07","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"CPE","Version":"1.4.4","Title":"Concordance Probability Estimates in Survival Analysis","Description":"Functions to calculate concordance probability estimates\n in survival analysis.","Published":"2012-07-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CpGassoc","Version":"2.60","Title":"Association Between Methylation and a Phenotype of Interest","Description":"Is designed to test for association between methylation at CpG sites across the genome and a phenotype of interest, adjusting for any relevant covariates. The package can perform standard analyses of large datasets very quickly with no need to impute the data. 
It can also handle mixed effects models with chip or batch entering the model as a random intercept. Also includes tools to apply quality control filters, perform permutation tests, and create QQ plots, Manhattan plots, and scatterplots for individual CpG sites.\t","Published":"2017-05-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cpgen","Version":"0.1","Title":"Parallelized Genomic Prediction and GWAS","Description":"Frequently used methods in genomic applications with emphasis on parallel computing (OpenMP).\n At its core, the package has a Gibbs Sampler that allows running univariate linear\n mixed models that have both sparse and dense design matrices. The parallel sampling method\n in case of dense design matrices (e.g. Genotypes) allows running Ridge Regression or BayesA for\n a very large number of individuals. The Gibbs Sampler is capable of running Single Step Genomic Prediction models.\n In addition, the package offers parallelized functions for common tasks like genome-wide\n association studies and cross validation in a memory-efficient way.","Published":"2015-09-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CpGFilter","Version":"1.0","Title":"CpG Filtering Method Based on Intra-class Correlation\nCoefficients","Description":"Filter CpGs based on Intra-class Correlation Coefficients (ICCs) when replicates are available. ICCs are calculated by fitting linear mixed effects models to all samples including the un-replicated samples. Including the large number of un-replicated samples improves ICC estimates dramatically. The method accommodates any replicate design. 
","Published":"2014-11-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"CPHshape","Version":"1.0.1","Title":"Find the maximum likelihood estimator of the shape constrained\nhazard baseline and the effect parameters in the Cox\nproportional hazards model","Description":"This package computes the maximum likelihood estimator (MLE) of the shape-constrained hazard baseline and the effect parameters in the Cox proportional hazards model under IID sampling. We assume that the data are continuous and allow for right censoring. The function 'find.shapeMLE' allows for four different shape constraints: increasing, decreasing, unimodal, and u-shaped.","Published":"2014-03-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cpk","Version":"1.3-1","Title":"Clinical Pharmacokinetics","Description":"The package cpk provides simplified clinical pharmacokinetic functions for dose regimen design and modification at the point-of-care. Currently, the following functions are available: (1) ttc.fn for target therapeutic concentration, (2) dr.fn for dose rate, (3) di.fn for dosing interval, (4) dm.fn for maintenance dose, (5) bc.ttc.fn for back calculation, (6) ar.fn for accumulation ratio, (7) dpo.fn for orally administered dose, (8) cmax.fn for peak concentration, (9) css.fn for steady-state concentration, (10) cmin.fn for trough,(11) ct.fn for concentration-time predictions, (12) dlcmax.fn for calculating loading dose based on drug's maximum concentration, (13) dlar.fn for calculating loading dose based on drug's accumulation ratio, and (14) R0.fn for calculating drug infusion rate. Reference: Linares O, Linares A. Computational opioid prescribing: A novel application of clinical pharmacokinetics. 
J Pain Palliat Care Pharmacother 2011;25:125-135.","Published":"2013-12-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"cplexAPI","Version":"1.3.3","Title":"R Interface to C API of IBM ILOG CPLEX","Description":"This is the R Interface to the C API of IBM ILOG CPLEX. It necessarily depends on IBM ILOG CPLEX (>= 12.1).","Published":"2017-01-31","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"cplm","Version":"0.7-5","Title":"Compound Poisson Linear Models","Description":"Likelihood-based and Bayesian methods for various compound Poisson linear models.","Published":"2016-12-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cpm","Version":"2.2","Title":"Sequential and Batch Change Detection Using Parametric and\nNonparametric Methods","Description":"Sequential and batch change detection for univariate data streams, using the change point model framework. Functions are provided to allow nonparametric distribution-free change detection in the mean, variance, or general distribution of a given sequence of observations. Parametric change detection methods are also provided for Gaussian, Bernoulli and Exponential sequences. Both the batch (Phase I) and sequential (Phase II) settings are supported, and the sequences may contain either a single or multiple change points.","Published":"2015-07-28","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"CPMCGLM","Version":"1.1","Title":"Correction of the p-value after multiple coding","Description":"We propose to determine the correction of the significance level after multiple coding of an explanatory variable in a generalized linear model. The different methods of correction of the p-value are the single-step Bonferroni procedure, and resampling-based methods developed by P. H. Westfall in 1993. Resampling methods are based on the permutation and the parametric bootstrap procedure. 
If some continuous and dichotomous transformations are performed, this package offers an exact correction of the p-value developed by B. Liquet & D. Commenges in 2005. The naive method with no correction is also available.","Published":"2013-11-06","License":"GPL (> 2)","snapshot_date":"2017-06-23"} {"Package":"cpr","Version":"0.2.3","Title":"Control Polygon Reduction","Description":"Implementation of the Control Polygon Reduction and Control Net\n Reduction methods for finding parsimonious B-spline regression models.","Published":"2017-03-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Cprob","Version":"1.3","Title":"The Conditional Probability Function of a Competing Event","Description":"Permits estimation of the conditional probability function of a competing event, and fitting, using the temporal process regression or the pseudo-value approach, of a proportional-odds model to the conditional probability function (or other models by specifying another link function). See .","Published":"2017-01-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CPsurv","Version":"1.0.0","Title":"Nonparametric Change Point Estimation for Survival Data","Description":"Nonparametric change point estimation for survival data based on p-values of exact binomial tests.","Published":"2017-03-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"cpt","Version":"0.9","Title":"Classification Permutation Test","Description":"Non-parametric test for equality of multivariate distributions. Trains a classifier to classify (multivariate) observations as coming from one of two distributions. If the classifier is able to classify the observations better than would be expected by chance (using permutation inference), then the null hypothesis that the two distributions are equal is rejected. 
","Published":"2017-03-02","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"cqrReg","Version":"1.2","Title":"Quantile, Composite Quantile Regression and Regularized Versions","Description":"Estimates quantile regression (QR) and composite quantile regression (CQR), and versions with adaptive lasso penalty, using interior point (IP), majorize and minimize (MM), coordinate descent (CD), and alternating direction method of multipliers (ADMM) algorithms.","Published":"2015-04-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cquad","Version":"1.4","Title":"Conditional Maximum Likelihood for Quadratic Exponential Models\nfor Binary Panel Data","Description":"Estimation, based on conditional maximum likelihood, of the quadratic exponential model proposed by Bartolucci, F. & Nigro, V. (2010, Econometrica) and of a simplified and a modified version of this model. The quadratic exponential model is suitable for the analysis of binary longitudinal data when state dependence (further to the effect of the covariates and a time-fixed individual intercept) has to be taken into account. Therefore, this is an alternative to the dynamic logit model, having the advantage of easily allowing conditional inference in order to eliminate the individual intercepts and then obtain consistent estimates of the parameters of main interest (for the covariates and the lagged response). The simplified version of this model does not distinguish, as the original model does, between the last time occasion and the previous occasions. The modified version formulates the interaction terms in a different way and may be used to test state dependence easily, as shown in Bartolucci, F., Nigro, V. & Pigini, C. (2013, Econometric Reviews) . The package also includes estimation of the dynamic logit model by a pseudo conditional estimator based on the quadratic exponential model, as proposed by Bartolucci, F. & Nigro, V. 
(2012, Journal of Econometrics) .","Published":"2017-06-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CR","Version":"1.0","Title":"Power Calculation for Weighted Log-Rank Tests in Cure Rate\nModels","Description":"This package contains R-functions to perform power\n calculation in a group sequential clinical trial with censored\n survival data and possibly unequal patient allocation between\n treatment and control groups. The functions can also be used to\n determine the study duration in a clinical trial with censored\n survival data as the sum of the accrual duration, which\n determines the sample size in a traditional sense, and the\n follow-up duration, which more or less controls the number of\n events to be observed. This package also contains R functions\n and methods to display the computed results.","Published":"2012-07-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"CRAC","Version":"1.0","Title":"Cosmology R Analysis Code","Description":"R functions for cosmological research.\n The main functions are similar to the python library, cosmolopy.","Published":"2014-02-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"crackR","Version":"0.3-9","Title":"Probabilistic damage tolerance analysis for fatigue cracking of\nmetallic aerospace structures","Description":"Using a sampling-based approach (either sequential importance sampling or explicit Monte Carlo), this package can be used to perform a probabilistic damage tolerance analysis for aircraft structures. It can model a single crack, or two simultaneously growing fatigue cracks (the so-called continuing damage problem). With a single crack, multiple types of future repairs are possible.","Published":"2014-04-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"cramer","Version":"0.9-1","Title":"Multivariate nonparametric Cramer-Test for the\ntwo-sample-problem","Description":"Provides an R routine for the so-called two-sample\n Cramer-Test. 
This nonparametric two-sample-test on equality\n of the underlying distributions can be applied to \n multivariate data as well as univariate data. It offers two \n possibilities to approximate the critical value, both of which \n are included in this package.","Published":"2014-08-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"crandatapkgs","Version":"0.1.8","Title":"Find Data-Only Packages on CRAN","Description":"Provides a data frame listing of known data-only and data-heavy\n packages available on CRAN.","Published":"2017-06-16","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"crank","Version":"1.1","Title":"Completing Ranks","Description":"Functions for completing and recalculating rankings.","Published":"2015-10-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cranlike","Version":"1.0.0","Title":"Tools for 'CRAN'-Like Repositories","Description":"A set of functions to manage 'CRAN'-like repositories\n efficiently.","Published":"2017-04-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cranlogs","Version":"2.1.0","Title":"Download Logs from the 'RStudio' 'CRAN' Mirror","Description":"'API' to the database of 'CRAN' package downloads from the\n 'RStudio' 'CRAN mirror'. The database itself is at ,\n see for the raw 'API'.","Published":"2015-12-07","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"crantastic","Version":"0.1","Title":"Various R tools for http://crantastic.org/","Description":"Various R tools for http://crantastic.org/","Published":"2009-08-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"crawl","Version":"2.1.1","Title":"Fit Continuous-Time Correlated Random Walk Models to Animal\nMovement Data","Description":"Fit continuous-time\n correlated random walk models with time indexed\n covariates to animal telemetry data. 
The model is fit using the Kalman-filter on\n a state space version of the continuous-time stochastic\n movement process.","Published":"2017-04-21","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"crayon","Version":"1.3.2","Title":"Colored Terminal Output","Description":"Colored terminal output on terminals that support 'ANSI'\n color and highlight codes. It also works in 'Emacs' 'ESS'. 'ANSI'\n color support is automatically detected. Colors and highlighting can\n be combined and nested. New styles can also be created easily.\n This package was inspired by the 'chalk' 'JavaScript' project.","Published":"2016-06-28","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"crblocks","Version":"0.9-1","Title":"Categorical Randomized Block Data Analysis","Description":"Implements a statistical test for comparing bar plots or\n histograms of categorical data derived from a randomized block\n repeated measures layout.","Published":"2012-08-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"crch","Version":"1.0-0","Title":"Censored Regression with Conditional Heteroscedasticity","Description":"Different approaches to censored or truncated regression with \n conditional heteroscedasticity are provided. First, continuous \n distributions can be used for the (right and/or left censored or truncated)\n response with separate linear predictors for the mean and variance. \n Second, cumulative link models for ordinal data\n (obtained by interval-censoring continuous data) can be employed for\n heteroscedastic extended logistic regression (HXLR). In the latter type of\n models, the intercepts depend on the thresholds that define the intervals. 
","Published":"2016-10-18","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"CreditMetrics","Version":"0.0-2","Title":"Functions for calculating the CreditMetrics risk model","Description":"A set of functions for computing the CreditMetrics risk model","Published":"2009-02-01","License":"Unlimited","snapshot_date":"2017-06-23"} {"Package":"creditr","Version":"0.6.1","Title":"Credit Default Swaps in R","Description":"Provides tools for pricing credit default swaps using\n C code for the International Swaps and Derivatives\n Association (ISDA) CDS Standard Model. See\n \n for more information about the model and \n \n for license details for the C code.","Published":"2015-08-12","License":"file LICENSE","snapshot_date":"2017-06-23"} {"Package":"credule","Version":"0.1.3","Title":"Credit Default Swap Functions","Description":"It provides functions to bootstrap Credit Curves from market quotes (Credit Default Swap - CDS - spreads) and price Credit Default Swaps - CDS.","Published":"2015-08-05","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"CRF","Version":"0.3-14","Title":"Conditional Random Fields","Description":"Implements modeling and computational tools for conditional\n random fields (CRF) model as well as other probabilistic undirected\n graphical models of discrete data with pairwise and unary potentials.","Published":"2017-02-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cricketr","Version":"0.0.14","Title":"Analyze Cricketers Based on ESPN Cricinfo Statsguru","Description":"Tools for analyzing performances of cricketers based on stats in\n ESPN Cricinfo Statsguru. 
The toolset can be used for analysis of Tests, ODIs \n and Twenty20 matches of both batsmen and bowlers.","Published":"2017-03-12","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"crimCV","Version":"0.9.3","Title":"Group-Based Modelling of Longitudinal Data","Description":"This package fits discrete mixtures of Zero-Inflated\n Poisson (ZIP) models for analyzing criminal trajectories.","Published":"2013-03-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"crimelinkage","Version":"0.0.4","Title":"Statistical Methods for Crime Series Linkage","Description":"Statistical Methods for Crime Series Linkage. This package provides \n code for criminal case linkage, crime series identification, crime series \n clustering, and suspect identification.","Published":"2015-09-19","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"crisp","Version":"1.0.0","Title":"Fits a Model that Partitions the Covariate Space into Blocks in\na Data-Adaptive Way","Description":"Implements convex regression with interpretable sharp partitions\n (CRISP), which considers the problem of predicting an outcome variable on the basis of two covariates, using an interpretable yet non-additive model. CRISP partitions the covariate space into blocks in a data-adaptive way, and fits a mean model within each block. Unlike other partitioning methods, CRISP is fit using a non-greedy approach by solving a convex optimization problem, resulting in low-variance fits. More details are provided in Petersen, A., Simon, N., and Witten, D. (2016). Convex Regression with Interpretable Sharp Partitions. 
Journal of Machine Learning Research, 17(94): 1-31 .","Published":"2017-01-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CRM","Version":"1.1.1","Title":"Continual Reassessment Method (CRM) for Phase I Clinical Trials","Description":"CRM simulator for Phase I Clinical Trials","Published":"2012-03-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"crminer","Version":"0.1.2","Title":"Fetch 'Scholarly' Full Text from 'Crossref'","Description":"Text mining client for 'Crossref' (). Includes\n functions for getting links to full text of articles, fetching full\n text articles from those links or Digital Object Identifiers ('DOIs'),\n and text extraction from 'PDFs'.","Published":"2017-05-15","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"crmn","Version":"0.0.20","Title":"CCMN and other noRMalizatioN methods for metabolomics data","Description":"Implements the Cross-contribution Compensating Multiple\n standard Normalization (CCMN) method and other normalization\n algorithms.","Published":"2014-11-20","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"crmPack","Version":"0.2.1","Title":"Object-Oriented Implementation of CRM Designs","Description":"Implements a wide range of model-based dose\n escalation designs, ranging from classical and modern continual\n reassessment methods (CRMs) based on dose-limiting toxicity endpoints to\n dual-endpoint designs taking into account a biomarker/efficacy outcome. The\n focus is on Bayesian inference, making it very easy to set up a new design\n with its own JAGS code. However, it is also possible to implement 3+3\n designs for comparison or models with non-Bayesian estimation. 
The whole\n package is written in a modular form in the S4 class system, making it very\n flexible for adaptation to new models, escalation or stopping rules.","Published":"2017-05-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"crn","Version":"1.1","Title":"Downloads and Builds datasets for Climate Reference Network","Description":"The crn package provides the core functions required to\n download and format data from the Climate Reference Network.\n Both daily and hourly data are downloaded from the FTP site, a\n consolidated file of all stations is created, and station metadata\n is extracted. In addition, functions for selecting individual\n variables and creating R-friendly datasets for them are\n provided.","Published":"2012-08-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"crochet","Version":"1.0.0","Title":"Implementation Helper for [ and [<- Of Custom Matrix-Like Types","Description":"Functions to help implement the extraction / subsetting / indexing\n function [ and replacement function [<- of custom matrix-like types (based\n on S3, S4, etc.), modeled as closely to the base matrix class as possible\n (with tests to prove it).","Published":"2017-05-06","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"cronR","Version":"0.3.0","Title":"Schedule R Scripts and Processes with the 'cron' Job Scheduler","Description":"Create, edit, and remove 'cron' jobs on your unix-alike system. The package provides a set of easy-to-use wrappers\n to 'crontab'. 
It also provides an RStudio add-in to easily launch and schedule your scripts.","Published":"2017-03-03","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"crop","Version":"0.0-2","Title":"Graphics Cropping Tool","Description":"A device closing function which is able to crop graphics (e.g.,\n PDF, PNG files) on Unix-like operating systems with the required underlying\n command-line tools installed.","Published":"2015-10-19","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"cropdatape","Version":"1.0.0","Title":"Open Data of Agricultural Production of Crops of Peru","Description":"Provides Peruvian agricultural production data from the Agriculture Ministry of Peru (MINAGRI). The first version includes\n 6 crops: rice, quinoa, potato, sweet potato, tomato and wheat; all of them across 24 departments. Initially in Excel files, the data have been transformed\n and assembled using tidy data principles, i.e. each variable is in a column, each observation is a row and each value is in a cell.\n The variables are sowing and harvest area per crop, yield, production and price per plot, for every year from 2004 to 2014.","Published":"2017-03-02","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"CrossClustering","Version":"3.0","Title":"A Partial Clustering Algorithm with Automatic Estimation of the\nNumber of Clusters and Identification of Outliers","Description":"Computes a partial clustering algorithm that combines\n Ward's minimum variance and Complete Linkage algorithms, providing\n automatic estimation of a suitable number of clusters and identification of\n outlier elements.","Published":"2016-03-29","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"crossdes","Version":"1.1-1","Title":"Construction of Crossover Designs","Description":"Contains functions for the construction of carryover\n balanced crossover designs. 
In addition, it contains functions to\n check given designs for balance.","Published":"2013-03-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"crossmatch","Version":"1.3-1","Title":"The Cross-match Test","Description":"This package performs a test for comparing two\n multivariate distributions by using the distance between\n observations. The input is a distance matrix and the labels of\n the two groups to be compared; the output is the number of\n cross-matches and a p-value.","Published":"2012-06-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Crossover","Version":"0.1-16","Title":"Analysis and Search of Crossover Designs","Description":"Package Crossover provides different crossover designs from\n combinatorial or search algorithms as well as from the literature, and a GUI to\n access them.","Published":"2016-09-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"crossReg","Version":"1.0","Title":"Confidence intervals for crossover points of two simple\nregression lines","Description":"\n This package provides functions to calculate confidence intervals for crossover points of two simple linear regression lines using \n non-linear regression, the delta method, the Fieller method, and bootstrap methods.","Published":"2014-07-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"CrossScreening","Version":"0.1.1","Title":"Cross-Screening in Observational Studies that Test Many\nHypotheses","Description":"Cross-screening is a new method that uses both random halves of the sample to screen and test many hypotheses. It generally improves statistical power in observational studies when many hypotheses are tested simultaneously. References: 1. Qingyuan Zhao, Dylan S Small, and Paul R Rosenbaum. Cross-screening in observational studies that test many hypotheses. . 2. Qingyuan Zhao. On sensitivity value of pair-matched observational studies. 
.","Published":"2017-04-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"crosstalk","Version":"1.0.0","Title":"Inter-Widget Interactivity for HTML Widgets","Description":"Provides building blocks for allowing HTML widgets to communicate\n with each other, with Shiny or without (i.e. static .html files). Currently\n supports linked brushing and filtering.","Published":"2016-12-21","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"CrossVA","Version":"0.9.0","Title":"Verbal Autopsy Data Transform for Use with Various Coding\nAlgorithms","Description":"Enables transformation of Verbal Autopsy data collected with the WHO 2016 questionnaire\n for automated coding of Cause of Death using different computer algorithms. Currently supports user-supplied mappings,\n and provides unvalidated mapping definitions to transform to InterVA4, Tariff 2, and InSilicoVA.","Published":"2016-11-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"crossval","Version":"1.0.3","Title":"Generic Functions for Cross Validation","Description":"Contains generic functions for performing \n cross validation and for computing diagnostic errors.","Published":"2015-07-08","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"crp.CSFP","Version":"2.0.2","Title":"CreditRisk+ Portfolio Model","Description":"Modelling credit risks based on the concept of \"CreditRisk+\", First Boston Financial Products, 1997 and \"CreditRisk+ in the Banking Industry\", Gundlach & Lehrbass, Springer, 2003.","Published":"2016-09-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"crplyr","Version":"0.1.2","Title":"A 'dplyr' Interface for Crunch","Description":"In order to facilitate analysis of datasets hosted on the Crunch\n data platform , the 'crplyr' package implements 'dplyr'\n methods on top of the Crunch backend. 
The usual methods \"select\", \"filter\",\n \"mutate\", \"group_by\", and \"summarize\" are implemented in such a way as to\n perform as much computation on the server as possible and pull as little data locally\n as possible.","Published":"2017-06-06","License":"LGPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"crqa","Version":"1.0.6","Title":"Cross-Recurrence Quantification Analysis for Categorical and\nContinuous Time-Series","Description":"\n Cross-recurrence quantification analysis \n of two time-series, of either categorical or\n continuous values. It provides different methods\n for profiling cross-recurrence, i.e., only looking\n at the diagonal recurrent points, as well as more\n in-depth measures of the whole cross-recurrence plot,\n e.g., recurrence rate.","Published":"2015-07-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"crrp","Version":"1.0","Title":"Penalized Variable Selection in Competing Risks Regression","Description":"In competing risks regression, the proportional subdistribution hazards (PSH) model is popular for its direct assessment of covariate effects on the cumulative incidence function. This package allows for penalized variable selection for the PSH model. Penalties include LASSO, SCAD, MCP, and their group versions.","Published":"2015-06-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"crrSC","Version":"1.1","Title":"Competing risks regression for Stratified and Clustered data","Description":"Extension of cmprsk to Stratified and Clustered data.\n Goodness of fit test for Fine-Gray model.","Published":"2013-06-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"crrstep","Version":"2015-2.1","Title":"Stepwise Covariate Selection for the Fine & Gray Competing Risks\nRegression Model","Description":"Performs forward and backward stepwise regression for the proportional subdistribution hazards model in competing risks (Fine & Gray 1999). Procedure uses AIC, BIC and BICcr as selection criteria. 
BICcr has a penalty of k = log(n*), where n* is the number of primary events.","Published":"2015-02-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"crs","Version":"0.15-27","Title":"Categorical Regression Splines","Description":"Regression splines that handle a mix of continuous and categorical (discrete) data often encountered in applied settings. I would like to gratefully acknowledge support from the Natural Sciences and Engineering Research Council of Canada (NSERC, ), the Social Sciences and Humanities Research Council of Canada (SSHRC, ), and the Shared Hierarchical Academic Research Computing Network (SHARCNET, ).","Published":"2017-05-07","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"crskdiag","Version":"1.0.1","Title":"Diagnostics for Fine and Gray Model","Description":"Provides the implementation of analytical and graphical approaches for checking the assumptions of the Fine and Gray model. ","Published":"2016-07-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"crsnls","Version":"0.2","Title":"Nonlinear Regression Parameters Estimation by 'CRS4HC' and\n'CRS4HCe'","Description":"Functions for nonlinear regression parameter estimation by algorithms based on the Controlled Random Search algorithm.\n Both functions (crs4hc(), crs4hce()) adapt the current search strategy via competition among four heuristics. In addition, crs4hce() improves adaptability by an adaptive stopping condition.","Published":"2016-04-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"crtests","Version":"0.2.1","Title":"Classification and Regression Tests","Description":"Provides wrapper functions for running classification and\n regression tests using different machine learning techniques, such as Random\n Forests and decision trees. The package provides standardized methods for\n preparing data to suit the algorithm's needs, training a model, making\n predictions, and evaluating results. 
Also, some functions are provided to run\n multiple instances of a test.","Published":"2016-05-20","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"CRTgeeDR","Version":"1.2","Title":"Doubly Robust Inverse Probability Weighted Augmented GEE\nEstimator","Description":"Implements a semi-parametric GEE estimator accounting for missing data with inverse-probability weighting (IPW) and for imbalance in covariates with augmentation (AUG). The estimator IPW-AUG-GEE is doubly robust (DR).","Published":"2016-04-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CRTSize","Version":"1.0","Title":"Sample Size Estimation Functions for Cluster Randomized Trials","Description":"Sample size estimation in cluster (group) randomized trials. Contains traditional power-based methods, empirical smoothing (Rotondi and Donner, 2009), and updated meta-analysis techniques (Rotondi and Donner, 2012).","Published":"2015-04-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"crul","Version":"0.3.8","Title":"HTTP Client","Description":"A simple HTTP client, with tools for making HTTP requests,\n and mocking HTTP requests. The package is built on R6, and takes\n inspiration from Ruby's 'faraday' gem ().\n The package name is a play on curl, the widely used command line tool\n for HTTP, and this package is built on top of the R package 'curl', an\n interface to 'libcurl' ().","Published":"2017-06-15","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"crunch","Version":"1.17.0","Title":"Crunch.io Data Tools","Description":"The Crunch.io service provides a cloud-based\n data store and analytic engine, as well as an intuitive web interface.\n Using this package, analysts can interact with and manipulate Crunch\n datasets from within R. 
Importantly, this allows technical researchers to\n collaborate naturally with team members, managers, and clients who prefer a\n point-and-click interface.","Published":"2017-06-06","License":"LGPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"crunchy","Version":"0.2.0","Title":"Shiny Apps on Crunch","Description":"To facilitate building custom dashboards on the Crunch data\n platform , the 'crunchy' package provides tools for\n working with 'shiny'. These tools include utilities to manage authentication\n and authorization automatically and custom stylesheets to help match the\n look and feel of the Crunch web application.","Published":"2017-05-05","License":"LGPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"cruts","Version":"0.3","Title":"Interface to Climatic Research Unit Time-Series Version 3.21\nData","Description":"Functions for reading in and manipulating CRU TS3.21: Climatic\n Research Unit (CRU) Time-Series (TS) Version 3.21 data.","Published":"2016-05-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"CrypticIBDcheck","Version":"0.3-1","Title":"Identifying cryptic relatedness in genetic association studies","Description":"Exploratory tools to identify closely related subjects using autosomal genetic marker data.","Published":"2013-09-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"CryptRndTest","Version":"1.2.2","Title":"Statistical Tests for Cryptographic Randomness","Description":"Performs cryptographic randomness tests on a sequence of random\n integers or bits. Included tests are greatest common divisor, birthday spacings,\n book stack, adaptive chi-square, topological binary, and three random walk\n tests. Tests except greatest common divisor and birthday spacings are not\n covered by standard test suites. 
In addition to the chi-square goodness-of-fit\n test, results of Anderson-Darling, Kolmogorov-Smirnov, and Jarque-Bera tests are\n also generated by some of the cryptographic randomness tests.","Published":"2016-02-25","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"cryst","Version":"0.1.0","Title":"Calculate the Relative Crystallinity of Starch by XRD and FTIR","Description":"Functions to calculate the relative crystallinity of starch by X-ray Diffraction (XRD) and Infrared Spectroscopy (FTIR). Starch is biosynthesized by plants in the form of semicrystalline granules. For XRD, the relative crystallinity is obtained by separating the crystalline peaks from the amorphous scattering region. For FTIR, the relative crystallinity is obtained by fitting a Gaussian holocrystalline peak in the 800-1300 cm-1 region of the FTIR spectrum of starch, which is divided into an amorphous region and a crystalline region. The relative crystallinity of native starch granules varies from 14 to 45 percent. Support from FONDECYT 3150630 and CIPA Conicyt-Regional R08C1002 is gratefully acknowledged.","Published":"2016-10-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"csampling","Version":"1.2-2","Title":"Functions for Conditional Simulation in Regression-Scale Models","Description":"Monte Carlo conditional inference for the parameters of a\n linear nonnormal regression model","Published":"2014-04-03","License":"GPL (>= 2) | file LICENCE","snapshot_date":"2017-06-23"} {"Package":"CSclone","Version":"1.0","Title":"Bayesian Nonparametric Modeling in R","Description":"Clusters germline and somatic locus data, which contain the total read depth and B allele \n read depth, using a Bayesian model (Dirichlet process). 
Meanwhile, the cluster \n model can deal with SNV and CNA mutations.","Published":"2016-11-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CSeqpat","Version":"0.1.0","Title":"Frequent Contiguous Sequential Pattern Mining of Text","Description":"Mines contiguous sequential patterns in text.","Published":"2017-03-20","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"cSFM","Version":"1.1","Title":"Covariate-adjusted Skewed Functional Model (cSFM)","Description":"cSFM is a method to model skewed functional data when considering covariates via a copula-based approach. ","Published":"2014-01-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"cshapes","Version":"0.6","Title":"The CShapes Dataset and Utilities","Description":"Package for CShapes, a GIS dataset of country boundaries (1946-today). Includes functions for data extraction and the computation of distance matrices and lists. ","Published":"2016-12-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"csn","Version":"1.1.3","Title":"Closed Skew-Normal Distribution","Description":"Provides functions for computing the density\n and the log-likelihood function of closed-skew normal variates,\n and for generating random vectors sampled from this distribution.\n See Gonzalez-Farias, G., Dominguez-Molina, J., and Gupta, A. (2004).\n The closed skew normal distribution, \n Skew-elliptical distributions and their applications: a journey beyond normality,\n Chapman and Hall/CRC, Boca Raton, FL, pp. 
25-42.","Published":"2015-05-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"csp","Version":"0.1.0","Title":"Correlates of State Policy Data Set in R","Description":"Provides the Correlates of State Policy data set for easy use in R.","Published":"2016-07-04","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"csrplus","Version":"1.03-0","Title":"Methods to Test Hypotheses on the Distribution of Spatial Point\nProcesses","Description":"Includes two functions to evaluate the hypothesis of complete spatial randomness (csr) in point processes. The function 'mwin' calculates quadrat counts to estimate the intensity of a spatial point process through the moving window approach proposed by Bailey and Gatrell (1995). Event counts are computed within a window of a set size over a fine lattice of points within the region of observation. The function 'pielou' uses the nearest neighbor test statistic and asymptotic distribution proposed by Pielou (1959) to compare the observed point process to one generated under csr. The value can be compared to that given by the more widely used test proposed by Clark and Evans (1954).","Published":"2015-04-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"csSAM","Version":"1.2.4","Title":"csSAM - cell-specific Significance Analysis of Microarrays","Description":"Cell-type specific differential expression of a microarray\n experiment of heterogeneous tissue samples, using SAM.","Published":"2013-05-13","License":"LGPL","snapshot_date":"2017-06-23"} {"Package":"cssTools","Version":"1.0","Title":"Cognitive Social Structure Tools","Description":"A collection of tools for estimating a network from a random sample of cognitive social structure (CSS) slices. 
Also contains functions for evaluating a CSS in terms of various error types observed in each slice.","Published":"2016-06-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cstab","Version":"0.2","Title":"Selection of Number of Clusters via Normalized Clustering\nInstability","Description":"Selection of the number of clusters in cluster analysis using\n stability methods.","Published":"2016-12-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cstar","Version":"1.0","Title":"Substantive significance testing for regression estimates and\nmarginal effects","Description":"Functions that allow a researcher to examine the robustness of the substantive significance of their findings. Implements ideas set out in Esarey and Danneman (2014).","Published":"2014-05-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"csv","Version":"0.4","Title":"Read and Write CSV Files with Selected Conventions","Description":"Reads and writes CSV with selected conventions.\n Uses the same generic function for reading and writing to promote consistent formats.","Published":"2017-02-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"csvread","Version":"1.2","Title":"Fast Specialized CSV File Loader","Description":"Functions for loading large (10M+ lines) CSV\n and other delimited files, similar to read.csv, but typically faster and\n using less memory than the standard R loader. While not entirely general,\n it covers many common use cases when the types of columns in the CSV file\n are known in advance. In addition, the package provides a class 'int64',\n which represents 64-bit integers exactly when reading from a file. The\n latter is useful when working with 64-bit integer identifiers exported from\n databases. 
The CSV file loader supports common column types including\n 'integer', 'double', 'string', and 'int64', leaving further type\n transformations to the user.","Published":"2015-03-08","License":"Apache License (== 2.0)","snapshot_date":"2017-06-23"} {"Package":"csvy","Version":"0.1.3","Title":"Import and Export CSV Data with a YAML Metadata Header","Description":"Support for import from and export to the CSVY file format. CSVY is a file format that combines the simplicity of CSV (comma-separated values) with the metadata of other plain text and binary formats (JSON, XML, Stata, etc.) by placing a YAML header on top of a regular CSV. ","Published":"2016-07-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"cthreshER","Version":"1.1.0","Title":"Continuous Threshold Expectile Regression","Description":"Estimation and inference methods for the continuous threshold expectile regression.\n It can fit the continuous threshold expectile regression and test the existence of a change point,\n following the paper, \"Feipeng Zhang and Qunhua Li (2016). 
A continuous threshold expectile regression, submitted.\" ","Published":"2016-11-10","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"ctl","Version":"1.0.0-0","Title":"Correlated Trait Locus (CTL) Mapping in R","Description":"Analysis of genetical genomic data to identify genetic loci associated with correlation changes in quantitative traits (CTL).","Published":"2016-09-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"CTM","Version":"0.2","Title":"A Text Mining Toolkit for Chinese Document","Description":"The CTM package is designed to solve problems of text mining and is specific to Chinese documents.","Published":"2016-11-28","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ctmcd","Version":"1.1","Title":"Estimating the Parameters of a Continuous-Time Markov Chain from\nDiscrete-Time Data","Description":"Functions for estimating Markov generator matrices from discrete-time observations. The implemented approaches comprise diagonal adjustment, weighted adjustment and quasi-optimization of matrix logarithm based candidate solutions, an expectation-maximization algorithm as well as a Gibbs sampler.","Published":"2017-04-05","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ctmcmove","Version":"1.2.8","Title":"Modeling Animal Movement with Continuous-Time Discrete-Space\nMarkov Chains","Description":"Software to facilitate taking movement data in xyt format and pairing it with raster covariates within a continuous time Markov chain (CTMC) framework. As described in Hanks et al. (2015) , this allows flexible modeling of movement in response to covariates (or covariate gradients) with model fitting possible within a Poisson GLM framework. 
","Published":"2017-04-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ctmm","Version":"0.3.6","Title":"Continuous-Time Movement Modeling","Description":"Functions for identifying, fitting, and applying continuous-space, continuous-time stochastic movement models to animal tracking data.","Published":"2017-04-24","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ctqr","Version":"1.0","Title":"Censored and Truncated Quantile Regression","Description":"Estimation of quantile regression models for survival data.","Published":"2016-08-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"cts","Version":"1.0-21","Title":"Continuous Time Autoregressive Models","Description":"Functions to fit continuous time autoregressive models with the Kalman filter.","Published":"2017-04-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ctsem","Version":"2.4.0","Title":"Continuous Time Structural Equation Modelling","Description":"A multivariate continuous (and discrete) time dynamic modelling\n package for panel and time series data, using linear stochastic differential\n equations. Contains a faster frequentist set of functions using OpenMx for\n single subject and mixed-effects (random intercepts only) structural\n equation models, or a hierarchical Bayesian implementation using Stan that\n allows for random effects over all model parameters. 
Allows for modelling of\n multiple noisy measurements of multiple stochastic processes, time varying\n input / event covariates, and time invariant covariates used to predict the\n parameters.","Published":"2017-05-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"CTT","Version":"2.1","Title":"Classical Test Theory Functions","Description":"Contains common CTT functions","Published":"2014-02-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CTTShiny","Version":"0.1","Title":"Classical Test Theory via Shiny","Description":"Interactive shiny application for running classical test theory (item analysis).","Published":"2015-08-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ctv","Version":"0.8-2","Title":"CRAN Task Views","Description":"Infrastructure for task views to CRAN-style repositories: Querying task views and installing the associated\n packages (client-side tools), generating HTML pages and storing task view information in the repository\n\t (server-side tools).","Published":"2016-09-15","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"CUB","Version":"1.0","Title":"A Class of Mixture Models for Ordinal Data","Description":"Estimate and test models for ordinal data concerning the family of\n CUB models and their extensions (where CUB stands for Combination of a \n\tdiscrete Uniform and a shifted Binomial distributions).","Published":"2016-12-23","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"cubature","Version":"1.3-8","Title":"Adaptive Multivariate Integration over Hypercubes","Description":"R wrapper around the cubature C library of\n Steven G. 
Johnson for adaptive multivariate integration over hypercubes.\n This version provides both hcubature and pcubature routines in addition\n to a vector interface that results in substantial speed gains.","Published":"2017-05-11","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"cubfits","Version":"0.1-3","Title":"Codon Usage Bias Fits","Description":"Estimating mutation and selection coefficients on synonymous\n codon bias usage based on models of ribosome overhead cost (ROC).\n Multinomial logistic regression and Markov Chain Monte Carlo are used to\n estimate and predict protein production rates with/without the presence\n of expressions and measurement errors. Work flows with examples for\n simulation, estimation and prediction processes are also provided\n with parallelization speedup. The whole framework is tested with\n yeast genome and gene expression data of Yassour, et al. (2009)\n .","Published":"2017-04-30","License":"Mozilla Public License 2.0","snapshot_date":"2017-06-23"} {"Package":"Cubist","Version":"0.0.19","Title":"Rule- And Instance-Based Regression Modeling","Description":"Regression modeling using rules with added instance-based corrections.","Published":"2016-12-11","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"CuCubes","Version":"0.1.0","Title":"MultiDimensional Feature Selection (MDFS)","Description":"Functions for MultiDimensional Feature Selection (MDFS):\n * calculating multidimensional information gains,\n * finding interesting tuples for chosen variables,\n * scoring variables,\n\t* finding important variables,\n\t* plotting selection results.\n\tCuCubes is also known as CUDA Cubes and it is a library that allows fast\n\tCUDA-accelerated computation of information gains in binary classification\n\tproblems.\n\tThis package wraps CuCubes and provides an alternative CPU version as well\n\tas helper functions for building MultiDimensional Feature 
Selectors.","Published":"2016-12-09","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"cudaBayesreg","Version":"0.3-16","Title":"CUDA Parallel Implementation of a Bayesian Multilevel Model for\nfMRI Data Analysis","Description":"Compute Unified Device Architecture (CUDA) is a software\n platform for massively parallel high-performance computing on\n NVIDIA GPUs. This package provides a CUDA implementation of a\n Bayesian multilevel model for the analysis of brain fMRI data.\n An fMRI data set consists of time series of volume data in 4D\n space. Typically, volumes are collected as slices of 64 x 64\n voxels. Analysis of fMRI data often relies on fitting linear\n regression models at each voxel of the brain. The volume of the\n data to be processed, and the type of statistical analysis to\n perform in fMRI analysis, call for high-performance computing\n strategies. In this package, the CUDA programming model uses a\n separate thread for fitting a linear regression model at each\n voxel in parallel. The global statistical model implements a\n Gibbs Sampler for hierarchical linear models with a normal\n prior. This model has been proposed by Rossi, Allenby and\n McCulloch in `Bayesian Statistics and Marketing', Chapter 3,\n and is referred to as `rhierLinearModel' in the R-package\n bayesm. 
A notebook equipped with an NVIDIA `GeForce 8400M GS'\n card having Compute Capability 1.1 has been used in the tests.\n The data sets used in the package's examples are available in\n the separate package cudaBayesregData.","Published":"2015-01-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cudaBayesregData","Version":"0.3-11","Title":"Data sets for the examples used in the package \"cudaBayesreg\"","Description":"FMRI data sets used in the examples of \"cudaBayesreg\".\n Data sets have been separated from the main package\n \"cudaBayesreg\" for convenience.","Published":"2012-10-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cudia","Version":"0.1","Title":"CUDIA Cross-level Imputation","Description":"Reconstruct individual-level values from aggregate-level\n summaries.","Published":"2012-08-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"CUFF","Version":"1.3","Title":"Charles's Utility Function using Formula","Description":"Utility functions that provide wrappers for descriptive base functions\n like cor, mean and table. It makes use of the formula interface to pass\n variables to functions. 
It also provides operators to concatenate (%+%), to\n repeat (%n%) and manage character vectors for nice display.","Published":"2017-05-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CUMP","Version":"2.0","Title":"Analyze Multivariate Phenotypes by Combining Univariate Results","Description":"Combining Univariate Association Test Results of Multiple Phenotypes for Detecting Pleiotropy.","Published":"2016-09-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cumplyr","Version":"0.1-1","Title":"Extends ddply to allow calculation of cumulative quantities","Description":"Extends ddply to allow calculation of cumulative\n quantities.","Published":"2012-05-14","License":"MIT","snapshot_date":"2017-06-23"} {"Package":"cumSeg","Version":"1.1","Title":"Change point detection in genomic sequences","Description":"Estimation of number and location of change points in\n mean-shift (piecewise constant) models. Particularly useful to\n model genomic sequences of continuous measurements.","Published":"2012-10-29","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"cumstats","Version":"1.0","Title":"Cumulative Descriptive Statistics","Description":"Cumulative descriptive statistics for (arithmetic, geometric, harmonic) mean, median, mode, variance, skewness and kurtosis.","Published":"2017-01-16","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"curl","Version":"2.6","Title":"A Modern and Flexible Web Client for R","Description":"The curl() and curl_download() functions provide highly\n configurable drop-in replacements for base url() and download.file() with\n better performance, support for encryption (https, ftps), gzip compression,\n authentication, and other 'libcurl' goodies. The core of the package implements a\n framework for performing fully customized requests where data can be processed\n either in memory, on disk, or streaming via the callback or connection\n interfaces. 
Some knowledge of 'libcurl' is recommended; for a more user-friendly\n web client see the 'httr' package which builds on this package with http\n specific tools and logic.","Published":"2017-04-27","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"currentSurvival","Version":"1.0","Title":"Estimation of CCI and CLFS Functions","Description":"The currentSurvival package contains functions for the\n estimation of the current cumulative incidence (CCI) and the\n current leukaemia-free survival (CLFS). The CCI is the\n probability that a patient is alive and in any disease\n remission (e.g. complete cytogenetic remission in chronic\n myeloid leukaemia) after initiating his or her therapy (e.g.\n tyrosine kinase therapy for chronic myeloid leukaemia). The\n CLFS is the probability that a patient is alive and in any\n disease remission after achieving the first disease remission.","Published":"2013-01-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"curry","Version":"0.1.1","Title":"Partial Function Application with %<%, %-<%, and %><%","Description":"Partial application is the process of reducing the arity of a\n function by fixing one or more arguments, thus creating a new function\n lacking the fixed arguments. The curry package provides three different ways\n of performing partial function application by fixing arguments from either\n end of the argument list (currying and tail currying) or by fixing multiple\n named arguments (partial application). This package provides this\n functionality through the %<%, %-<%, and %><% operators which allow for\n a programming style comparable to modern functional languages. 
Compared\n to other implementations such as purrr::partial(), the operators in curry\n compose functions with named arguments, aiding in autocomplete etc.","Published":"2016-09-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"curvetest","Version":"2.2","Title":"The package will formally test two curves represented by\ndiscrete data sets to be statistically equal or not when the\nerrors of the two curves were assumed either equal or not using\nthe tube formula to calculate the tail probabilities","Description":"Test Equality of Curves with Homoscedastic or\n Heteroscedastic Errors.","Published":"2012-03-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"curvHDR","Version":"1.2-0","Title":"Filtering of Flow Cytometry Samples","Description":"Filtering, also known as gating, of flow cytometry samples using \n the curvHDR method, which is described in Naumann, U., Luta, G. and \n Wand, M.P. (2010) .","Published":"2017-04-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cusp","Version":"2.3.3","Title":"Cusp-Catastrophe Model Fitting Using Maximum Likelihood","Description":"Cobb's maximum likelihood method for cusp-catastrophe modeling\n (Grasman, van der Maas, & Wagenmakers, 2009, JSS, 32:8;\n Cobb, L, 1981, Behavioral Science, 26:1, 75--78).\n Includes a cusp() function for model fitting, and several\n utility functions for plotting, and for comparing the\n model to linear regression and logistic curve models.","Published":"2015-08-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"customizedTraining","Version":"1.1","Title":"Customized Training for Lasso and Elastic-Net Regularized\nGeneralized Linear Models","Description":"Customized training is a simple technique for transductive\n learning, when the test covariates are known at the time of training. The\n method identifies a subset of the training set to serve as the training set\n for each of a few identified subsets in the test set. 
This package\n implements customized training for the glmnet() and cv.glmnet() functions.","Published":"2016-09-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"CUSUMdesign","Version":"1.1.3","Title":"Compute Decision Interval and Average Run Length for CUSUM\nCharts","Description":"Computation of decision intervals (H) and average run lengths (ARL) for CUSUM charts.","Published":"2016-05-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"cutoffR","Version":"1.0","Title":"CUTOFF: A Spatio-temporal Imputation Method","Description":"This package provides a set of tools for spatio-temporal imputation \n in R. It includes the implementation of the CUTOFF imputation method, \n a useful cross-validation function that can be used not only by the \n CUTOFF method but also by some other imputation functions to help \n choose an optimal value for relevant parameters, such as the number \n of k-nearest neighbors for the KNN imputation method, or the number of\n components for the SVD imputation method. It also contains tools for \n simulating data with missing values with respect to some specific \n missing pattern, for example, block missing. Some useful visualisation\n functions for imputation purposes are also provided in the package. 
","Published":"2014-05-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"cuttlefish.model","Version":"1.0","Title":"An R package to perform LPUE standardization and stock\nassessment of the English Channel cuttlefish stock using a\ntwo-stage biomass model","Description":"This package can be used to standardize abundance indices using the delta-GLM method and to model the English Channel cuttlefish stock using a two-stage biomass model","Published":"2014-04-09","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"cvAUC","Version":"1.1.0","Title":"Cross-Validated Area Under the ROC Curve Confidence Intervals","Description":"This package contains various tools for working with and evaluating cross-validated area under the ROC curve (AUC) estimators. The primary functions of the package are ci.cvAUC and ci.pooled.cvAUC, which report cross-validated AUC and compute confidence intervals for cross-validated AUC estimates based on influence curves for i.i.d. and pooled repeated measures data, respectively. One benefit to using influence curve based confidence intervals is that they require much less computation time than bootstrapping methods. The utility functions, AUC and cvAUC, are simple wrappers for functions from the ROCR package. 
","Published":"2014-12-09","License":"Apache License (== 2.0)","snapshot_date":"2017-06-23"} {"Package":"CVcalibration","Version":"1.0-1","Title":"Estimation of the Calibration Equation with Error-in\nObservations","Description":"Statistical inference for estimating the calibration equation with error-in observations","Published":"2014-01-28","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"CVD","Version":"1.0.2","Title":"Color Vision Deficiencies","Description":"Methods for color vision deficiencies (CVD), to help understanding and mitigating issues with CVDs and to generate tests for diagnosis and interpretation.","Published":"2016-11-28","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"cvequality","Version":"0.1.1","Title":"Tests for the Equality of Coefficients of Variation from\nMultiple Groups","Description":"Contains functions for testing for significant differences between multiple coefficients of variation. Includes Feltz and Miller's (1996) asymptotic test and Krishnamoorthy and Lee's (2014) modified signed-likelihood ratio test. See the vignette for more, including full details of citations.","Published":"2016-12-27","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"cvplogistic","Version":"3.1-0","Title":"Penalized Logistic Regression Model using Majorization\nMinimization by Coordinate Descent (MMCD) Algorithm","Description":"The package uses the majorization minimization by coordinate\n descent (MMCD) algorithm to compute the solution surface for\n the concave penalized logistic regression model. The SCAD and MCP\n (default) are two concave penalties considered in this\n implementation. For the MCP penalty, the package also provides\n the local linear approximation by coordinate descent (LLA-CD)\n and adaptive rescaling algorithms for computing the solutions.\n The package also provides a Lasso-concave hybrid penalty for\n fast variable selection. 
The hybrid penalty applies the concave\n penalty only to the variables selected by the Lasso. For all\n the implemented methods, the solution surface is computed along\n kappa, which yields a smoother fit for the logistic model.\n A tuning parameter selection method based on k-fold cross-validated\n area under the ROC curve (CV-AUC) is implemented as well.","Published":"2013-03-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cvq2","Version":"1.2.0","Title":"Calculate the predictive squared correlation coefficient","Description":"The external prediction capability of quantitative structure-activity relationship (QSAR) models is often quantified using the predictive squared correlation coefficient. This value can be calculated with an external data set or by cross validation. ","Published":"2013-10-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"CVR","Version":"0.1.1","Title":"Canonical Variate Regression","Description":"Perform canonical variate regression (CVR) for two sets of covariates and a univariate\n response, with regularization and weight parameters tuned by cross validation. ","Published":"2017-03-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CVST","Version":"0.2-1","Title":"Fast Cross-Validation via Sequential Testing","Description":"This package implements the fast cross-validation via sequential testing (CVST) procedure. CVST is an improved cross-validation procedure which uses non-parametric testing coupled with sequential analysis to determine the best parameter set on linearly increasing subsets of the data. By eliminating underperforming candidates quickly and keeping promising candidates as long as possible, the method speeds up the computation while preserving the capability of a full cross-validation. 
In addition to CVST, the package contains an implementation of ordinary k-fold cross-validation with a flexible and powerful set of helper objects and methods to handle the overall model selection process. The implementations of Cochran's Q test with permutations and the sequential testing framework of Wald are generic and can therefore also be used in other contexts.","Published":"2013-12-10","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"CVThresh","Version":"1.1.1","Title":"Level-Dependent Cross-Validation Thresholding","Description":"This package carries out a level-dependent cross-validation\n method for the selection of the thresholding value in wavelet\n shrinkage. This procedure is implemented by coupling a\n conventional cross validation with an imputation method, owing to\n the requirement that the data length be a power of 2. It can be easily\n applied to classical leave-one-out and k-fold cross validation.\n Since the procedure is computationally fast, a level-dependent\n cross validation can be performed for wavelet shrinkage of\n various data such as data with correlated errors.","Published":"2013-01-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cvTools","Version":"0.3.2","Title":"Cross-validation tools for regression models","Description":"Tools that allow developers to write functions for\n cross-validation with minimal programming effort and assist\n users with model selection.","Published":"2012-05-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CVTuningCov","Version":"1.0","Title":"Regularized Estimators of Covariance Matrices with CV Tuning","Description":"This is a package for selecting tuning parameters based on cross-validation (CV) in regularized estimators of large covariance matrices. Four regularized methods are implemented: banding, tapering, hard-thresholding and soft-thresholding. Two types of matrix norms are applied: Frobenius norm and operator norm. 
Two types of CV are considered: K-fold CV and random CV. Usually K-fold CV uses K-1 folds to train a model and the remaining fold to validate it. The reverse version trains a model with 1 fold and validates with the remaining K-1 folds. Random CV randomly splits the data set into two parts, a training set and a validation set with user-specified sizes. ","Published":"2014-08-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"cvxbiclustr","Version":"0.0.1","Title":"Convex Biclustering Algorithm","Description":"An iterative algorithm for solving a convex\n formulation of the biclustering problem.","Published":"2015-06-28","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"cvxclustr","Version":"1.1.1","Title":"Splitting methods for convex clustering","Description":"Alternating Minimization Algorithm (AMA) and Alternating Direction\n Method of Multipliers (ADMM) splitting methods for convex clustering.","Published":"2014-07-28","License":"CC BY-NC-SA 4.0","snapshot_date":"2017-06-23"} {"Package":"cwhmisc","Version":"6.0","Title":"Miscellaneous Functions for Math, Plotting, Printing,\nStatistics, Strings, and Tools","Description":"Miscellaneous useful or interesting functions. Some parameters of functions may have changed, so beware!","Published":"2015-08-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cwm","Version":"0.0.3","Title":"Cluster Weighted Models by EM algorithm","Description":"This package estimates Gaussian cluster weighted linear regressions by the EM algorithm. 
","Published":"2014-01-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cxxfunplus","Version":"1.0","Title":"extend cxxfunction by saving the dynamic shared objects","Description":"Extend cxxfunction by saving the dynamic shared objects\n for reuse across R sessions","Published":"2012-08-19","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"cycleRtools","Version":"1.1.1","Title":"Tools for Cycling Data Analysis","Description":"A suite of functions for analysing cycling data.","Published":"2016-01-18","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"cyclocomp","Version":"1.1.0","Title":"Cyclomatic Complexity of R Code","Description":"Cyclomatic complexity is a software metric (measurement),\n used to indicate the complexity of a program. It is a quantitative\n measure of the number of linearly independent paths through a program's\n source code. It was developed by Thomas J. McCabe, Sr. in 1976.","Published":"2016-09-10","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"cycloids","Version":"1.0","Title":"cycloids","Description":"Tools for calculating coordinate representations of\n hypocycloids, epicycloids, hypotrochoids, and epitrochoids\n (altogether called 'cycloids' here) with different scaling\n and positioning options. The cycloids can be visualised with\n any appropriate graphics function in R.","Published":"2013-11-29","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"Cyclops","Version":"1.2.2","Title":"Cyclic Coordinate Descent for Logistic, Poisson and Survival\nAnalysis","Description":"This model fitting tool incorporates cyclic coordinate descent and\n majorization-minimization approaches to fit a variety of regression models\n found in large-scale observational healthcare data. 
Implementations focus\n on computational optimization and fine-scale parallelization to yield\n efficient inference in massive datasets.","Published":"2016-10-06","License":"Apache License 2.0","snapshot_date":"2017-06-23"} {"Package":"cymruservices","Version":"0.2.0","Title":"Query 'Team Cymru' 'IP' Address, Autonomous System Number\n('ASN'), Border Gateway Protocol ('BGP'), Bogon and 'Malware'\nHash Data Services","Description":"A toolkit for querying 'Team Cymru' 'IP'\n address, Autonomous System Number ('ASN'), Border Gateway Protocol ('BGP'), Bogon\n and 'Malware' Hash Data Services.","Published":"2016-03-01","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"cyphid","Version":"1.1","Title":"Cycle and Phase Identification for mastication data","Description":"This library contains a primary function that divides\n chewing sequences into cycles and cycles into phases. See\n get.all.breaks for an example.","Published":"2013-05-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"cystiSim","Version":"0.1.0","Title":"Agent-Based Model for Taenia_solium Transmission and Control","Description":"The cystiSim package provides an agent-based model for Taenia solium transmission and control. cystiSim was developed within the framework of CYSTINET, the European Network on taeniosis/cysticercosis, COST ACTION TD1302.","Published":"2016-05-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"CytobankAPI","Version":"1.0.1.1","Title":"Cytobank API Wrapper for R","Description":"Tools to interface with Cytobank's API via R, organized by various\n endpoints that represent various areas of Cytobank functionality. 
Learn more\n about Cytobank at .","Published":"2017-06-15","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"cytoDiv","Version":"0.5-3","Title":"Cytometric diversity indices","Description":"Calculates ecological diversity indices for a microbial\n community.","Published":"2012-01-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"D2C","Version":"1.2.1","Title":"Predicting Causal Direction from Dependency Features","Description":"The relationship between statistical dependency and causality lies\n at the heart of all statistical approaches to causal inference. The D2C\n package implements a supervised machine learning approach to infer the\n existence of a directed causal link between two variables in multivariate\n settings with n>2 variables. The approach relies on the asymmetry of some\n conditional (in)dependence relations between the members of the Markov\n blankets of two variables causally connected. The D2C algorithm predicts\n the existence of a direct causal link between two variables in a\n multivariate setting by (i) creating a set of features of the\n relationship based on asymmetric descriptors of the multivariate dependency\n and (ii) using a classifier to learn a mapping between the features and the\n presence of a causal link.","Published":"2015-01-21","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"D3GB","Version":"1.1","Title":"Interactive Genome Browser with R","Description":"Creates an interactive genome browser with 'R'. It joins the data analysis power of R and the visualization libraries of JavaScript in one package.","Published":"2017-04-10","License":"CC BY-NC-SA 4.0","snapshot_date":"2017-06-23"} {"Package":"d3heatmap","Version":"0.6.1.1","Title":"Interactive Heat Maps Using 'htmlwidgets' and 'D3.js'","Description":"Create interactive heat maps that are usable from the R console, in\n the 'RStudio' viewer pane, in 'R Markdown' documents, and in 'Shiny' apps. 
Hover\n the mouse pointer over a cell to show details, drag a rectangle to zoom, and\n click row/column labels to highlight.","Published":"2016-02-23","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"D3M","Version":"0.41.1","Title":"Two Sample Test with Wasserstein Metric","Description":"Two sample test based on Wasserstein metric. This is motivated by the detection of differential DNA-methylation sites based on underlying distributions.","Published":"2016-07-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"d3Network","Version":"0.5.2.1","Title":"Tools for creating D3 JavaScript network, tree, dendrogram, and\nSankey graphs from R","Description":"This package is intended to make it easy to create D3 JavaScript\n network, tree, dendrogram, and Sankey graphs from R using data frames.\n !!! NOTE: Active development has moved to the networkD3 package. !!!","Published":"2015-01-31","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"D3partitionR","Version":"0.3.1","Title":"Plotting D3 Hierarchical Plots in R and Shiny","Description":"Plotting hierarchical plots in R such as Sunburst, Treemap, Circle\n Treemap, Partition Chart, collapsible indented tree and collapsible tree.","Published":"2016-12-16","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"d3r","Version":"0.6.5","Title":"'d3.js' Utilities for R","Description":"Helper functions for using 'd3.js' in R.","Published":"2017-05-21","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"d3Tree","Version":"0.2.0","Title":"Create Interactive Collapsible Trees with the JavaScript 'D3'\nLibrary","Description":"Create and customize interactive collapsible 'D3' trees using the 'D3'\n JavaScript library and the 'htmlwidgets' package. 
These trees can be used\n directly from the R console, from 'RStudio', in Shiny apps and R Markdown documents.\n When in Shiny the tree layout is observed by the server and can be used as a reactive filter\n of structured data.","Published":"2017-06-13","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"DA.MRFA","Version":"1.1.2","Title":"Dimensionality Assessment using Minimum Rank Factor Analysis","Description":"Performs Parallel Analysis for assessing the dimensionality of a set of variables using Minimum Rank Factor Analysis (see Timmerman & Lorenzo-Seva (2011) and ten Berge & Kiers (1991) for more information).\n The package also includes the option to compute Minimum Rank Factor Analysis by itself, as well as the Greatest Lower Bound calculation.","Published":"2017-05-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"DAAG","Version":"1.22","Title":"Data Analysis and Graphics Data and Functions","Description":"Various data sets used in examples and exercises in the\n book Maindonald, J.H. and Braun, W.J. (2003, 2007, 2010) \"Data\n Analysis and Graphics Using R\".","Published":"2015-09-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"DAAGbio","Version":"0.63-3","Title":"Data Sets and Functions, for Demonstrations with Expression\nArrays and Gene Sequences","Description":"Data sets and functions, for the display of gene expression array (microarray) data, and for demonstrations with such data.","Published":"2017-03-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DAAGxtras","Version":"0.8-4","Title":"Data Sets and Functions, supplementary to DAAG","Description":"Various data sets used in additional exercises for\n the book Maindonald, J.H. and Braun, W.J. (3rd edn 2010)\n \"Data Analysis and Graphics Using R\", and for a\n 'Data Mining' course. 
Note that a number of datasets\n that were in earlier versions of this package have been\n transferred to the DAAG package.","Published":"2013-10-16","License":"Unlimited","snapshot_date":"2017-06-23"} {"Package":"dad","Version":"2.0.0","Title":"Three-Way Data Analysis Through Densities","Description":"The three-way data consists of a set of variables measured on several groups of individuals. To each group is associated an estimated probability density function. The package provides functional methods (principal component analysis, multidimensional scaling, discriminant analysis...) for such probability densities.","Published":"2016-12-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dae","Version":"2.7-20","Title":"Functions Useful in the Design and ANOVA of Experiments","Description":"The content falls into the following groupings: (i) Data, (ii)\n Factor manipulation functions, (iii) Design functions, (iv) ANOVA functions, (v)\n Matrix functions, (vi) Projector and canonical efficiency functions, and (vii)\n Miscellaneous functions. A document 'daeDesignRandomization.pdf', available\n in the doc subdirectory of the installation directory for 'dae', describes the\n use of the package for generating randomized layouts for experiments. The ANOVA\n functions facilitate the extraction of information when the 'Error' function has\n been used in the call to 'aov'.","Published":"2016-09-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"daewr","Version":"1.1-7","Title":"Design and Analysis of Experiments with R","Description":"Contains Data frames and functions used in the book \"Design and Analysis of Experiments with R\".","Published":"2016-10-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"daff","Version":"0.3.0","Title":"Diff, Patch and Merge for Data.frames","Description":"Diff, patch and merge for data frames. Document changes in data\n sets and use them to apply patches. 
Changes to data can be made visible by using\n render_diff. The V8 package is used to wrap the 'daff.js' JavaScript library\n which is included in the package.","Published":"2017-05-10","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"dafs","Version":"1.0-37","Title":"Data analysis for forensic scientists","Description":"Data and miscellanea to support the book \"Introduction to\n Data analysis with R for Forensic Scientists\", Curran, J.M.\n 2010 CRC Press ISBN: 978-1-4200-8826-7","Published":"2012-05-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dagbag","Version":"1.1","Title":"Learning directed acyclic graphs (DAGs) through bootstrap\naggregating","Description":"dagbag is a set of methods that learn DAGs via bootstrap aggregating. ","Published":"2014-06-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DAGGER","Version":"1.4","Title":"Consensus genetic maps","Description":"Integrates the information from multiple linkage maps to\n create a consensus directed graph, which is then linearized to\n produce a consensus map.","Published":"2011-08-05","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"dagitty","Version":"0.2-2","Title":"Graphical Analysis of Structural Causal Models","Description":"A port of the web-based software 'DAGitty', available at \n , for analyzing structural causal models \n (also known as directed acyclic graphs or DAGs).\n This package computes covariate adjustment sets for estimating causal\n effects, enumerates instrumental variables, derives testable\n implications (d-separation and vanishing tetrads), generates equivalent\n models, and includes a simple facility for data simulation. 
","Published":"2016-08-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"dagR","Version":"1.1.3","Title":"R functions for directed acyclic graphs","Description":"Functions to draw, manipulate, evaluate directed\n acyclic graphs and simulate corresponding data.","Published":"2014-01-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Daim","Version":"1.1.0","Title":"Diagnostic accuracy of classification models","Description":"Several functions for evaluating the accuracy of\n classification models. The package provides the following\n performance measures: repeated k-fold cross-validation, \n\t\t0.632 and 0.632+ bootstrap estimation of the misclassification rate, \n sensitivity, specificity and AUC. If an application is \n computationally intensive, parallel execution can be used \n to reduce the computational effort.","Published":"2013-09-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"DAISIE","Version":"1.4","Title":"Dynamical Assembly of Islands by Speciation, Immigration and\nExtinction","Description":"Simulates and computes the (maximum) likelihood of a dynamical model of island biota assembly through speciation, immigration and extinction. See Valente et al. 2015. Ecology Letters 18: 844-852, .","Published":"2017-04-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"DAKS","Version":"2.1-3","Title":"Data Analysis and Knowledge Spaces","Description":"Functions and an example dataset for the psychometric theory of\n knowledge spaces. This package implements data analysis methods and\n procedures for simulating data and quasi orders and transforming different\n formulations in knowledge space theory. 
See package?DAKS for an overview.","Published":"2016-06-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DALY","Version":"1.5.0","Title":"The DALY Calculator - Graphical User Interface for Probabilistic\nDALY Calculation in R","Description":"The DALY Calculator is a free, open-source Graphical User\n Interface (GUI) for stochastic disability-adjusted life year\n (DALY) calculation.","Published":"2016-11-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dam","Version":"0.0.1","Title":"Data Analysis Metabolomics","Description":"A collection of functions which aim to assist common computational workflows for the analysis of metabolomic data.","Published":"2016-06-09","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"DamiaNN","Version":"1.0.0","Title":"Neural Network Numerai","Description":"Interactively train neural networks on Numerai, , data. Generate tournament predictions and write them to a CSV.","Published":"2016-09-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"DAMisc","Version":"1.4-3","Title":"Dave Armstrong's Miscellaneous Functions","Description":"Miscellaneous set of functions I use in my teaching either at the University of Wisconsin-Milwaukee or the Inter-university Consortium for Political and Social Research Summer Program in Quantitative Methods. Broadly, the functions help with presentation and interpretation of GLMs. 
","Published":"2016-07-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DAMOCLES","Version":"1.1","Title":"Dynamic Assembly Model of Colonization, Local Extinction and\nSpeciation","Description":"Simulates and computes (maximum) likelihood of a dynamical model of community assembly that takes into account phylogenetic history.","Published":"2015-03-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"dams","Version":"0.2","Title":"Dams in the United States from the National Inventory of Dams\n(NID)","Description":"The single largest source of dams in the United States is the\n National Inventory of Dams (NID) from the US\n Army Corps of Engineers. Entire data from the NID cannot be obtained all at\n once and NID's website limits extraction of more than a couple of thousand\n records at a time. Moreover, selected data from the NID's user interface\n cannot be saved to a file. In order to make the analysis of this data\n easier, all the data from NID was extracted manually. Subsequently, the raw\n data was checked for potential errors and cleaned. This package provides\n sample cleaned data from the NID and provides functionality to access the\n entire cleaned NID data.","Published":"2016-11-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DandEFA","Version":"1.6","Title":"Dandelion Plot for R-Mode Exploratory Factor Analysis","Description":"Contains the function used to create the Dandelion Plot. Dandelion Plot is a visualization method for R-mode Exploratory Factor Analysis. ","Published":"2016-03-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"darch","Version":"0.12.0","Title":"Package for Deep Architectures and Restricted Boltzmann Machines","Description":"The darch package is built on the basis of the code from G. E.\n Hinton and R. R. Salakhutdinov (available under Matlab Code for deep belief\n nets). 
This package is for generating neural networks with many layers (deep\n architectures) and training them with the method introduced by the publications\n \"A fast learning algorithm for deep belief nets\" (G. E. Hinton, S. Osindero,\n Y. W. Teh (2006) ) and \"Reducing the\n dimensionality of data with neural networks\" (G. E. Hinton, R. R.\n Salakhutdinov (2006) ). This method includes\n pre-training with the contrastive divergence method published by G. E. Hinton\n (2002) and fine-tuning with commonly known\n training algorithms like backpropagation or conjugate gradients.\n Additionally, supervised fine-tuning can be enhanced with maxout and\n dropout, two recently developed techniques to improve fine-tuning for deep\n learning.","Published":"2016-07-20","License":"GPL (>= 2) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"Dark","Version":"0.9.8","Title":"The Analysis of Dark Adaptation Data","Description":"The recovery of visual sensitivity in a dark environment is known\n as dark adaptation. In a clinical or research setting the recovery is typically\n measured after a dazzling flash of light and can be described by the Mahroo,\n Lamb and Pugh (MLP) model of dark adaptation. The functions in this package take\n dark adaptation data and use nonlinear regression to find the parameters of the\n model that 'best' describe the data. They do this by first generating rapid\n initial objective estimates of the dark adaptation parameters; then a multi-start\n algorithm is used to reduce the possibility of a local minimum. There is also a\n bootstrap method to calculate parameter confidence intervals. The functions rely\n upon a 'dark' list or object. 
This object is created as the first step in the\n workflow and parts of the object are updated as it is processed.","Published":"2016-06-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"darksky","Version":"1.0.0","Title":"Tools to Work with the Dark Sky API","Description":"Provides programmatic access to the Dark Sky API \n , which provides current or historical global \n weather conditions.","Published":"2016-09-22","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"dartR","Version":"0.80","Title":"Importing and Analysing Snp and Silicodart Data Generated by\nGenome-Wide Restriction Fragment Analysis","Description":"Functions are provided that facilitate the import and analysis of\n snp and silicodart (presence/absence) data. The main focus is on data generated\n by DarT (Diversity Arrays Technology). However, once SNP or related fragment\n presence/absence data from any source is imported into a genlight object many\n of the functions can be used. Functions are available for input and output of\n snp and silicodart data, for reporting on and filtering on various criteria\n (e.g. CallRate, Heterozygosity, Reproducibility, maximum allele frequency).\n Advanced filtering is based on Linkage Disequilibrium and HWE. Other functions\n are available for visualization after PCoA, or to facilitate transfer of data\n between genlight/genind objects and newhybrids, related, phylip packages etc.","Published":"2017-06-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"darts","Version":"1.0","Title":"Statistical Tools to Analyze Your Darts Game","Description":"Are you aiming at the right spot in darts? Maybe not! Use\n this package to find your optimal aiming location. 
For a better\n explanation, go to\n http://www-stat.stanford.edu/~ryantibs/darts/ or see the paper\n \"A Statistician Plays Darts\".","Published":"2011-01-20","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"dashboard","Version":"0.1.0","Title":"Interactive Data Visualization with D3.js","Description":"The dashboard package allows users to create web pages which display \n\tinteractive data visualizations working in a standard modern browser. It displays them locally \n\tusing the Rook server. No knowledge of web technologies and no Internet connection are \n\trequired. D3.js is a JavaScript library for manipulating documents based on data. \n\tD3 helps the dashboard package bring data to life using HTML, SVG and CSS.","Published":"2014-12-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"dat","Version":"0.2.0","Title":"Tools for Data Manipulation","Description":"An implementation of common higher order functions with syntactic\n sugar for anonymous functions. Also provides a link to 'dplyr' for common\n transformations on data frames to work around non-standard evaluation by\n default.","Published":"2017-04-21","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"data.table","Version":"1.10.4","Title":"Extension of `data.frame`","Description":"Fast aggregation of large data (e.g. 100GB in RAM), fast ordered joins, fast add/modify/delete of columns by group using no copies at all, list columns, a fast friendly file reader and parallel file writer. Offers a natural and flexible syntax, for faster development.","Published":"2017-02-01","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"data.tree","Version":"0.7.0","Title":"General Purpose Hierarchical Data Structure","Description":"Create tree structures from hierarchical data, and traverse the\n tree in various orders. Aggregate, cumulate, print, plot, convert to and from\n data.frame and more. 
Useful for decision trees, machine learning, finance,\n conversion from and to JSON, and many other applications.","Published":"2017-01-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"data.world","Version":"1.1.1","Title":"Main Package for Working with 'data.world' Data Sets","Description":"High-level tools for working with data.world data sets. data.world is a community \n where you can find interesting data, store and showcase your own data and data projects, \n and find and collaborate with other members. In addition to exploring, querying and \n charting data on the data.world site, you can access data via 'API' endpoints and \n integrations. Use this package to access, query and explore data sets, and to \n integrate data into R projects. Visit , for additional information.","Published":"2017-06-23","License":"Apache License 2.0","snapshot_date":"2017-06-23"} {"Package":"DatABEL","Version":"0.9-6","Title":"File-Based Access to Large Matrices Stored on HDD in Binary\nFormat","Description":"Provides an interface to the C++ FILEVECTOR library\n facilitating analysis using large (giga- to tera-bytes) matrices.\n Matrix storage is organized in a way that either columns or rows\n are quickly accessible. DatABEL is primarily aimed to support\n genome-wide association analyses e.g. using GenABEL, MixABEL and\n ProbABEL.","Published":"2015-03-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"datacheck","Version":"1.2.2","Title":"Tools for Checking Data Consistency","Description":"Functions to check variables against a\n set of data quality rules. A rule file can be accompanied by look-up tables. In\n addition, there are some convenience functions that may\n serve as an example for defining clearer 'data rules'. 
An\n HTML based user interface facilitates initial exploration of the\n functionality.","Published":"2015-04-18","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"datacheckr","Version":"0.2.0","Title":"Data Checking","Description":"Checks column names, classes, values, keys and joins in data frames.\n Also checks length, class and values of vectors and scalars.\n Returns an informative error message if user-defined conditions are not met.","Published":"2017-04-19","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"DataClean","Version":"1.0","Title":"Data Cleaning","Description":"Includes functions that researchers or practitioners may use to clean\n raw data, converting html, xlsx, and txt data files into other formats. It can\n also be used to manipulate text variables, extract numeric variables from\n text variables, and perform other variable cleaning processes. It originated from the\n author's project which focuses on creative performance in online education\n environments. The resulting paper of that study will be published soon.","Published":"2016-03-25","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"DataCombine","Version":"0.2.21","Title":"Tools for Easily Combining and Cleaning Data Sets","Description":"Tools for combining and cleaning data sets, particularly\n with grouped and time series data.","Published":"2016-04-13","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"datadogr","Version":"0.1.0","Title":"R Client for 'Datadog' API","Description":"Query for metrics from 'Datadog' () via its API.","Published":"2017-05-31","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"datadr","Version":"0.8.6","Title":"Divide and Recombine for Large, Complex Data","Description":"Methods for dividing data into subsets, applying analytical\n methods to the subsets, and recombining the results. Comes with a generic\n MapReduce interface as well. 
Works with key-value pairs stored in memory,\n on local disk, or on HDFS, in the latter case using the R and Hadoop\n Integrated Programming Environment (RHIPE).","Published":"2016-10-02","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"DataEntry","Version":"0.9-1","Title":"Make it Easier to Enter Questionnaire Data","Description":"This is a GUI application for defining\n attributes and setting valid values of variables, and then,\n entering questionnaire data in a data.frame.","Published":"2017-03-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DataExplorer","Version":"0.4.0","Title":"Data Explorer","Description":"Data exploration process for data analysis and model building, so\n that users could focus on understanding data and extracting insights. The\n package automatically scans through each variable and does data profiling.\n Typical graphical techniques will be performed for both discrete and\n continuous features.","Published":"2017-01-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"dataframes2xls","Version":"0.4.7","Title":"Write Data Frames to Xls Files","Description":"Writes data frames to xls files. It supports \n multiple sheets and basic formatting.","Published":"2016-09-26","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"datafsm","Version":"0.2.0","Title":"Estimating Finite State Machine Models from Data","Description":"Our method automatically generates models of dynamic decision-\n making that both have strong predictive power and are interpretable in human\n terms. We use an efficient model representation and a genetic algorithm-based\n estimation process to generate simple deterministic approximations that explain\n most of the structure of complex stochastic processes. 
We have applied the\n software to empirical data, and demonstrated its ability to recover known\n data-generating processes by simulating data with agent-based models and correctly\n deriving the underlying decision models for multiple agent models and degrees of\n stochasticity.","Published":"2017-06-17","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"DataGraph","Version":"1.0.1","Title":"Export Data from R so DataGraph can Read it","Description":"Functions to save either '.dtable' or '.dtbin' files that can be read by DataGraph, a graphing and analysis application for macOS. Can save a data frame, collection of data frames and sequences of data frames and individual vectors. For more information see .","Published":"2017-05-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DataLoader","Version":"1.3","Title":"Import Multiple File Types","Description":"Functions to import multiple files of multiple data file types ('.xlsx', '.xls', '.csv', '.txt')\n from a given directory into R data frames.","Published":"2015-11-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dataMaid","Version":"0.9.2","Title":"A Suite of Checks for Identification of Potential Errors in a\nData Frame as Part of the Data Cleaning Process","Description":"Data cleaning is an important first step of any statistical\n analysis. dataMaid provides an extendable suite of tests for common potential\n errors in a dataset. It produces a document with a thorough summary of the\n checks and the results that a human can use to identify possible errors.","Published":"2017-01-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"datamap","Version":"0.1-1","Title":"A system for mapping foreign objects to R variables and\nenvironments","Description":"datamap utilizes variable bindings and objects of class\n \"UserDefinedDatabase\" to provide a simple mapping system to\n foreign objects. 
Maps can be used as environments or attached\n to the search path, and changes to either are persistent.\n Mapped foreign objects are fetched in real-time and are never\n cached by the mapping system.","Published":"2009-12-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"datamart","Version":"0.5.2","Title":"Unified access to your data sources","Description":"Provides an S4 infrastructure for unified handling\n of internal datasets and web based data sources. The package is\n currently in beta; things may break, change or go away without\n warning.","Published":"2014-10-13","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"dataMeta","Version":"0.1.0","Title":"Create and Append a Data Dictionary for an R Dataset","Description":"Designed to create a basic data dictionary and append to the original dataset's attributes list. The package makes use of a tidy dataset and creates a data frame that will serve as a linker that will aid in building the dictionary. The dictionary is then appended to the list of the original dataset's attributes. The user will have the option of entering variable and item descriptions by writing code or use alternate functions that will prompt the user to add these.","Published":"2017-04-24","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"dataonderivatives","Version":"0.3.0","Title":"Easily Source Publicly Available Data on Derivatives","Description":"Post Global Financial Crisis derivatives reforms have lifted the \n veil off over-the-counter (OTC) derivative markets. Swap Execution Facilities\n (SEFs) and Swap Data Repositories (SDRs) now publish data on swaps that are \n traded on or reported to those facilities (respectively). 
This package provides\n you the ability to get this data from supported sources.","Published":"2017-05-28","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"dataone","Version":"2.0.1","Title":"R Interface to the DataONE REST API","Description":"Provides read and write access to data and metadata from\n the DataONE network of data repositories. \n Each DataONE repository implements a consistent repository application \n programming interface. Users call methods in R to access these remote \n repository functions, such as methods to query the metadata catalog, get \n access to metadata for particular data packages, and read the data objects \n from the data repository. Users can also insert and update data objects on \n repositories that support these methods.","Published":"2016-08-30","License":"Apache License 2.0","snapshot_date":"2017-06-23"} {"Package":"datapack","Version":"1.2.0","Title":"A Flexible Container to Transport and Manipulate Data and\nAssociated Resources","Description":"Provides a flexible container to transport and\n manipulate complex sets of data. These data may consist of multiple data files and\n associated meta data and ancillary files. Individual data objects have\n associated system level meta data, and data files are linked together using\n the OAI-ORE standard resource map which describes the relationships between the files. \n The OAI-ORE standard is described at . Data packages \n can be serialized and transported as structured files that have been created following \n the BagIt specification. 
The BagIt specification is described at \n .","Published":"2017-04-07","License":"Apache License (== 2.0)","snapshot_date":"2017-06-23"} {"Package":"datapasta","Version":"2.0.0","Title":"R Tools for Data Copy-Pasta","Description":"RStudio addins and R functions that make copy-pasting vectors and tables to text painless.","Published":"2017-03-26","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"dataQualityR","Version":"1.0","Title":"Performs variable level data quality checks and generates\nsummary statistics","Description":"The package performs variable level data quality checks including\n missing values, unique values, frequency tables, and generates summary\n statistics","Published":"2013-09-21","License":"MIT | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"dataRetrieval","Version":"2.7.2","Title":"Retrieval Functions for USGS and EPA Hydrologic and Water\nQuality Data","Description":"Collection of functions to help retrieve U.S. Geological Survey\n (USGS) and U.S. Environmental Protection Agency (EPA) water quality and\n hydrology data from web services. USGS web services are discovered from \n National Water Information System (NWIS) and . \n Both EPA and USGS water quality data are obtained from the Water Quality Portal .","Published":"2017-05-23","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"datarobot","Version":"2.6.0","Title":"DataRobot Predictive Modeling API","Description":"For working with the DataRobot predictive modeling platform's API.","Published":"2017-04-16","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"datasauRus","Version":"0.1.2","Title":"Datasets from the Datasaurus Dozen","Description":"The Datasaurus Dozen is a set of datasets with the same summary statistics. 
They \n retain the same summary statistics despite having radically different distributions.\n The datasets represent a larger and quirkier object lesson that is typically taught\n via Anscombe's Quartet (available in the 'datasets' package). Anscombe's Quartet\n contains four very different distributions with the same summary statistics and as \n such highlights the value of visualisation in understanding data, over and above\n summary statistics. As well as being an engaging variant on the Quartet, the data\n is generated in a novel way. The simulated annealing process used to derive datasets \n from the original Datasaurus is detailed in \"Same Stats, Different Graphs: Generating \n Datasets with Varied Appearance and Identical Statistics through Simulated Annealing\" \n .","Published":"2017-05-08","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"dataseries","Version":"0.2.0","Title":"Switzerland's Data Series in One Place","Description":"Download and import time series from , a comprehensive and up-to-date collection of open data from Switzerland.","Published":"2017-04-29","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"datasets.load","Version":"0.1.0","Title":"Interface for Loading Datasets","Description":"Visual interface for loading datasets in RStudio from all installed (unloaded) packages.","Published":"2016-12-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"Datasmith","Version":"1.0-1","Title":"Tools to Complete Euclidean Distance Matrices","Description":"Implements several algorithms for Euclidean distance matrix completion,\n Sensor Network Localization, and sparse Euclidean distance matrix completion using\n the minimum spanning tree.","Published":"2017-01-06","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"datastepr","Version":"0.0.2","Title":"An Implementation of a SAS-Style Data Step","Description":"Based on a SAS data step. 
This allows for row-wise dynamic building\n of data, iteratively importing slices of existing dataframes, conducting\n analyses, and exporting to a results frame. This is particularly useful for\n differential or time-series analyses, which are often not well suited to vector-\n based operations.","Published":"2016-08-20","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"datasus","Version":"0.1.0","Title":"An Interface to DATASUS System","Description":"It allows the user to retrieve the data from the systems of \n DATASUS (SUS IT department related to the Brazilian Ministry of Health, \n see for more\n information) much in the same way that is done in the online portal.\n For now the package allows access to the SINASC and SIM's (ICD-10) \n systems, that is, the 'Estatísticas Vitais'.","Published":"2017-06-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"datautils","Version":"0.1.5","Title":"Timestamps and Advanced Plotting","Description":"Contains facilities such as getting the current timestamp in decimal seconds, computing interval w.r.t. a reference timestamp, and custom plotting with error bars.","Published":"2017-03-31","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"dataverse","Version":"0.2.0","Title":"Client for Dataverse 4 Repositories","Description":"Provides access to Dataverse version 4 APIs , \n enabling data search, retrieval, and deposit. For Dataverse versions <= 4.0, \n use the deprecated 'dvn' package .","Published":"2017-06-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"dataview","Version":"2.1.1","Title":"Data and Workspace Browser for Terminals","Description":"Tools for deciphering the contents of\n\tunknown objects or environments from within the terminal,\n\ta problem often encountered when working with unfamiliar packages or debugging complex functions.\n\tIf working in xterm256 or ANSI terminals the output is coloured by default\n\tto improve readability (e.g. 
the standard Ubuntu terminal).","Published":"2015-05-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"date","Version":"1.2-37","Title":"Functions for Handling Dates","Description":"Functions for handling dates.","Published":"2017-02-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"datetime","Version":"0.1.2","Title":"Nominal Dates, Times, and Durations","Description":"Provides methods for working with nominal dates, times, and \n durations. Base R has sophisticated facilities for handling time, but these \n can give unexpected results if, for example, timezone is not handled properly. \n This package provides a more casual approach to support cases which \n do not require rigorous treatment. It systematically deconstructs the \n concepts origin and timezone, and de-emphasizes the display of seconds. It \n also converts among nominal durations such as seconds, hours, days, and weeks.\n See '?datetime' and '?duration' for examples. Adapted from 'metrumrg' \n .","Published":"2017-05-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"DATforDCEMRI","Version":"0.55","Title":"Deconvolution Analysis Tool for Dynamic Contrast Enhanced MRI","Description":"This package performs voxel-wise deconvolution analysis of\n DCE-MRI contrast agent concentration versus time data and\n generates the Impulse Response Function, which can be used to\n approximate commonly utilized kinetic parameters such as Ktrans\n and ve. An interactive advanced voxel diagnosis tool (AVDT) is\n also provided to facilitate easy navigation of voxel-wise data.","Published":"2013-03-20","License":"CC BY-NC-SA 3.0","snapshot_date":"2017-06-23"} {"Package":"dave","Version":"1.5","Title":"Functions for \"Data Analysis in Vegetation Ecology\"","Description":"A collection of functions accompanying the book \"Data Analysis in Vegetation Ecology\", Second edition. 
2013, Wiley-Blackwell, Chichester","Published":"2014-08-05","License":"LGPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"Davies","Version":"1.1-9","Title":"The Davies Quantile Function","Description":"Various utilities for the Davies distribution.","Published":"2016-05-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"dawai","Version":"1.2.1","Title":"Discriminant Analysis with Additional Information","Description":"In applications it is usual that some additional information is available. This package dawai (an acronym for Discriminant Analysis With Additional Information) performs linear and quadratic discriminant analysis with additional information expressed as inequality restrictions among the populations means. It also computes several estimations of the true error rate.","Published":"2015-08-31","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"dbarts","Version":"0.8-7","Title":"Discrete Bayesian Additive Regression Trees Sampler","Description":"Fits Bayesian additive regression trees (BART) while allowing the updating of predictors or response so that BART can be incorporated as a conditional model in a Gibbs/MH sampler. Also serves as a drop-in replacement for package 'BayesTree'.","Published":"2016-11-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dbConnect","Version":"1.0","Title":"Provides a graphical user interface to connect with databases\nthat use MySQL","Description":"Using widgets provided by gWidgets this creates a nice\n usable interface to help facilitate the learning of SQL. It\n also just gives a nice interface for more advanced users. 
Not\n all features are provided in the GUI, but it is still possible for\n more advanced users to achieve these goals from within it.","Published":"2012-10-29","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"dbEmpLikeGOF","Version":"1.2.4","Title":"Goodness-of-fit and two sample comparison tests using sample\nentropy","Description":"Goodness-of-fit and two sample comparison tests using sample entropy","Published":"2013-08-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dbEmpLikeNorm","Version":"1.0.0","Title":"Test for joint assessment of normality","Description":"Test for joint assessment of normality","Published":"2013-04-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DBEST","Version":"1.7","Title":"Detecting Breakpoints and Estimating Segments in Trend","Description":"A program for analyzing vegetation time series, with two algorithms: 1) a change detection algorithm that detects trend changes, determines their type (abrupt or non-abrupt), and estimates their timing, magnitude, number, and direction; 2) a generalization algorithm that simplifies the temporal trend into its main features. 
The user can set the number of major breakpoints or the magnitude of the greatest changes of interest for detection, and can control the generalization process by setting an additional generalization-percentage parameter.","Published":"2017-05-31","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"dbfaker","Version":"0.1.0","Title":"A Tool to Ensure the Validity of Database Writes","Description":"A tool to ensure the validity of database writes.\n It provides a set of utilities to analyze and type check the properties\n of data frames that are to be written to databases with SQL support.","Published":"2016-10-14","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"DBGSA","Version":"1.2","Title":"methods of distance-based gene set functional enrichment\nanalysis","Description":"This package provides methods and examples to support a\n method of Gene Set Analysis (GSA). DBGSA is a novel\n distance-based gene set enrichment analysis method. We consider\n that the distance between two groups with different phenotypes,\n measured on gene expression, should be larger if a certain\n gene functional set is significantly associated with a\n particular phenotype.","Published":"2012-01-16","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"dbhydroR","Version":"0.2-2","Title":"'DBHYDRO' Hydrologic and Water Quality Data","Description":"Client for programmatic access to the South Florida Water\n Management District's 'DBHYDRO' database at \n , with functions\n for accessing hydrologic and water quality data. ","Published":"2017-02-03","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"DBI","Version":"0.7","Title":"R Database Interface","Description":"A database interface definition for communication\n between R and relational database management systems. 
All\n classes in this package are virtual and need to be extended by\n the various R/DBMS implementations.","Published":"2017-06-18","License":"LGPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DBItest","Version":"1.5","Title":"Testing 'DBI' Back Ends","Description":"A helper that tests 'DBI' back ends for conformity\n to the interface.","Published":"2017-06-19","License":"LGPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DBKGrad","Version":"1.6","Title":"Discrete Beta Kernel Graduation of Mortality Data","Description":"This package allows for nonparametric graduation of mortality rates using a fixed or adaptive discrete beta kernel estimator.","Published":"2014-10-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"dblcens","Version":"1.1.7","Title":"Compute the NPMLE of distribution from doubly censored data","Description":"Uses the EM algorithm to compute the NPMLE of the CDF and also the\n two censoring distributions for doubly censored data (as\n described in Chang and Yang (1987) Ann. Stat. 1536-47). You can\n also specify a constraint; it will return the constrained NPMLE\n and the -2 log empirical likelihood ratio. This can be used to\n test hypotheses about the constraint and to find confidence\n intervals for a probability or quantile via the empirical likelihood\n ratio theorem. The influence function of hat F may also be\n calculated (but may be slow).","Published":"2012-12-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dbmss","Version":"2.4-0","Title":"Distance-Based Measures of Spatial Structures","Description":"Simple computation of spatial statistic functions of distance to characterize the spatial structures of mapped objects, including classical ones (Ripley's K and others) and more recent ones used by spatial economists (Duranton and Overman's Kd, Marcon and Puech's M). 
Relies on 'spatstat' for some core calculations.","Published":"2017-03-26","License":"GNU General Public License","snapshot_date":"2017-06-23"} {"Package":"dbplyr","Version":"1.0.0","Title":"A 'dplyr' Back End for Databases","Description":"A 'dplyr' back end for databases that allows you to work with \n remote database tables as if they are in-memory data frames. Basic features\n work with any database that has a 'DBI' back end; more advanced features \n require 'SQL' translation to be provided by the package author.","Published":"2017-06-09","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"dbscan","Version":"1.1-1","Title":"Density Based Clustering of Applications with Noise (DBSCAN) and\nRelated Algorithms","Description":"A fast reimplementation of several density-based algorithms of\n the DBSCAN family for spatial data. Includes the DBSCAN (density-based spatial\n clustering of applications with noise) and OPTICS (ordering points to identify\n the clustering structure) clustering algorithms, HDBSCAN (hierarchical DBSCAN), and the LOF (local outlier\n factor) algorithm. The implementations use the kd-tree data structure (from\n library ANN) for faster k-nearest neighbor search. An R interface to fast kNN\n and fixed-radius NN search is also provided.","Published":"2017-03-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dbstats","Version":"1.0.4","Title":"Distance-Based Statistics","Description":"Prediction methods where explanatory information is coded as a matrix of distances between individuals. Distances can either be directly input as a distance matrix, a squared distance matrix, an inner-products matrix, or computed from observed predictors. 
","Published":"2014-12-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"dc3net","Version":"1.2.0","Title":"Inferring Condition-Specific Networks via Differential Network\nInference","Description":"Performs differential network analysis to infer disease-specific gene networks.","Published":"2017-03-27","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"DCA","Version":"1.0","Title":"Dynamic Correlation Analysis for High Dimensional Data","Description":"Finding dominant latent signals that regulate dynamic correlation between many pairs of variables.","Published":"2017-04-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"DCchoice","Version":"0.0.15","Title":"Analyzing Dichotomous Choice Contingent Valuation Data","Description":"Functions for analyzing dichotomous choice contingent valuation (CV) data. It provides \n functions for estimating parametric and nonparametric models for single-, one-and-one-half-, \n and double-bounded CV data.","Published":"2016-07-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dcemriS4","Version":"0.55","Title":"A Package for Image Analysis of DCE-MRI (S4 Implementation)","Description":"A collection of routines and documentation that allows one to\n perform voxel-wise quantitative analysis of dynamic contrast-enhanced MRI \n (DCE-MRI) and diffusion-weighted imaging (DWI) data, with emphasis on \n oncology applications.","Published":"2015-04-29","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"DCG","Version":"0.9.2","Title":"Data Cloud Geometry (DCG): Using Random Walks to Find Community\nStructure in Social Network Analysis","Description":"Data cloud geometry (DCG) applies random walks in finding community structures for social networks.","Published":"2016-06-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DCGL","Version":"2.1.2","Title":"Differential Co-expression Analysis and Differential Regulation\nAnalysis of Gene 
Expression Microarray Data","Description":"Functions for 1) gene filtration; 2) link filtration; 3) differential co-expression analysis: DCG (Differential Coexpressed\n Gene) identification and DCL (Differentially Coexpressed Link)\n identification; and 4) differential regulation analysis: DRG (Differential Regulated\n Gene) identification, DRL (Differential Regulated Link)\n identification, DRL visualization and regulator ranking.","Published":"2014-12-18","License":"GPL (> 2)","snapshot_date":"2017-06-23"} {"Package":"dcGOR","Version":"1.0.6","Title":"Analysis of Ontologies and Protein Domain Annotations","Description":"No package previously existed for analysing domain-centric ontologies and annotations, particularly those in the dcGO database. dcGO (http://supfam.org/SUPERFAMILY/dcGO) is a comprehensive domain-centric database for annotating protein domains using a panel of ontologies including Gene Ontology. With this package, users can analyse and visualise domain-centric ontologies and annotations. Supported analyses include but are not limited to: easy access to a wide range of ontologies and their domain-centric annotations; the ability to build customised ontologies and annotations; domain-based enrichment analysis and visualisation; construction of a domain (semantic similarity) network according to ontology annotations; significance analysis for estimating a contact (statistical significance) network via Random Walk with Restart; and high-performance parallel computing. 
The new functionalities are: 1) to create domain-centric ontologies; 2) to predict ontology terms for input protein sequences (precisely domain content in the form of architectures) plus to assess the predictions; 3) to reconstruct ancestral discrete characters using maximum likelihood/parsimony.","Published":"2015-07-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"dChipIO","Version":"0.1.5","Title":"Methods for Reading dChip Files","Description":"Functions for reading DCP and CDF.bin files generated by the dChip software.","Published":"2016-01-13","License":"LGPL (>= 2.1)","snapshot_date":"2017-06-23"} {"Package":"DCL","Version":"0.1.0","Title":"Claims Reserving under the Double Chain Ladder Model","Description":"Statistical modelling and forecasting in claims reserving in non-life insurance under the Double Chain Ladder framework by Martinez-Miranda, Nielsen and Verrall (2012).","Published":"2013-10-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"dclone","Version":"2.1-2","Title":"Data Cloning and MCMC Tools for Maximum Likelihood Methods","Description":"Low level functions for implementing\n maximum likelihood estimating procedures for\n complex models using data cloning and Bayesian\n Markov chain Monte Carlo methods.\n Sequential and parallel MCMC support\n for JAGS, WinBUGS and OpenBUGS.","Published":"2016-03-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"DCluster","Version":"0.2-7","Title":"Functions for the Detection of Spatial Clusters of Diseases","Description":"A set of functions for the detection of spatial clusters\n of disease using count data. 
Bootstrap is used to estimate\n sampling distributions of statistics.","Published":"2015-03-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DClusterm","Version":"0.1","Title":"Model-Based Detection of Disease Clusters","Description":"Model-based methods for the detection of disease clusters\n using GLMs, GLMMs and zero-inflated models.","Published":"2017-02-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"DCM","Version":"0.1.1","Title":"Data Converter Module","Description":"Data Converter Module (DCM) converts datasets between the split and stacked formats, in either direction.","Published":"2016-12-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dcmle","Version":"0.3-1","Title":"Hierarchical Models Made Easy with Data Cloning","Description":"S4 classes around infrastructure provided by the\n 'coda' and 'dclone' packages to make package development easy as a breeze\n with data cloning for hierarchical models.","Published":"2016-03-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"dcmr","Version":"1.0","Title":"Attribute profile estimation using Diagnostic Classification\nModels and MCMC","Description":"Analysis of dichotomous response data to obtain attribute profile\n estimates for respondents using Diagnostic Classification Model (DCM) and\n Markov Chain Monte Carlo (MCMC) method. The estimation procedure uses a\n loglinear cognitive diagnostic modeling (LDCM) framework that allows for\n the estimation of a host of DCMs such as NIDO, NIDA, NC-RUM etc.","Published":"2014-07-26","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"DCODE","Version":"1.0","Title":"List Linear n-Peptide Constraints for Overlapping Protein\nRegions","Description":"Traversal graph algorithm for listing linear n-peptide constraints for overlapping protein regions. (Lebre and Gascuel, The combinatorics of overlapping genes, freely available from arXiv at http://arxiv.org/abs/1602.04971). 
","Published":"2016-03-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dCovTS","Version":"1.1","Title":"Distance Covariance and Correlation for Time Series Analysis","Description":"Computing and plotting the distance covariance and correlation\n function of a univariate or a multivariate time series. Both versions\n of biased and unbiased estimators of distance covariance and correlation are provided.\n Test statistics for testing pairwise independence are also implemented. \n Some data sets are also included.","Published":"2016-09-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dcv","Version":"0.1.1","Title":"Conventional Cross-validation statistics for climate-growth\nmodel","Description":"This package performs several conventional\n cross-validation statistical methods for climate-growth models\n in climate reconstruction from tree rings, including the Sign\n Test statistic, Reduction of Error statistic, Product Mean\n Test, Durbin-Watson statistic, etc. This package is at an early\n stage; the functions have not been tested exhaustively,\n and more functions will be added in the coming days.","Published":"2010-12-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ddalpha","Version":"1.2.1","Title":"Depth-Based Classification and Calculation of Data Depth","Description":"Contains procedures for depth-based supervised learning, which are entirely non-parametric, in particular the DDalpha-procedure (Lange, Mosler and Mozharovskyi, 2014). The training data sample is transformed by a statistical depth function to a compact low-dimensional space, where the final classification is done. It also offers an extension to functional data and routines for calculating certain notions of statistical depth functions. 
50 multivariate and 5 functional classification problems are included.","Published":"2016-10-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"DDD","Version":"3.4","Title":"Diversity-Dependent Diversification","Description":"\n Implements maximum likelihood and bootstrap methods based on the diversity-dependent birth-death process to test whether speciation or extinction is diversity-dependent, under various models including various types of key innovations.\n See Etienne et al. 2012, Proc. Roy. Soc. B 279: 1300-1309, , Etienne & Haegeman 2012, Am. Nat. 180: E75-E89, and Etienne et al. 2016. Meth. Ecol. Evol. 7: 1092-1099, .\n Also contains functions to simulate the diversity-dependent process.","Published":"2017-02-24","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ddeploy","Version":"1.0.4","Title":"Wrapper for the Duke Deploy REST API","Description":"A wrapper for the Duke Analytics model deployment API. See for more details.","Published":"2015-06-29","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"DDHFm","Version":"1.1.2","Title":"Variance Stabilization by Data-Driven Haar-Fisz (for\nMicroarrays)","Description":"Contains the normalizing and variance stabilizing\n\tData-Driven Haar-Fisz algorithm. Also contains related algorithms\n\tfor simulating from certain microarray gene intensity models and\n\tevaluation of certain transformations. Contains cDNA and shipping\n\tcredit flow data.","Published":"2016-10-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"DDIwR","Version":"0.2-0","Title":"DDI with R","Description":"This package provides useful functions for various DDI (Data Documentation Initiative) related outputs.","Published":"2014-10-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DDM","Version":"1.0-0","Title":"Death Registration Coverage Estimation","Description":"A set of three two-census methods to estimate the degree of death registration coverage for a population. 
Implemented methods include the Generalized Growth Balance method (GGB), the Synthetic Extinct Generation method (SEG), and a hybrid of the two, GGB-SEG. Each method offers automatic estimation, but users may also specify exact parameters or use a graphical interface to guess parameters in the traditional way if desired.","Published":"2017-05-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ddpcr","Version":"1.6","Title":"Analysis and Visualization of Droplet Digital PCR in R and on\nthe Web","Description":"An interface to explore, analyze, and visualize droplet digital PCR\n (ddPCR) data in R. This is the first non-proprietary software for analyzing\n two-channel ddPCR data. An interactive tool was also created and is available\n online to facilitate this analysis for anyone who is not comfortable with\n using R.","Published":"2016-11-12","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ddR","Version":"0.1.2","Title":"Distributed Data Structures in R","Description":"Provides distributed data structures and simplifies\n distributed computing in R.","Published":"2015-11-25","License":"GPL (>= 2) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"DDRTree","Version":"0.1.5","Title":"Learning Principal Graphs with DDRTree","Description":"Provides an implementation of the framework of reversed graph embedding (RGE), which projects data into a reduced dimensional space while simultaneously constructing a principal tree that passes through the middle of the data. DDRTree shows superiority to alternatives (Wishbone, DPT) for inferring the ordering as well as the intrinsic structure of single cell genomics data. In general, it could be used to reconstruct the temporal progression as well as the bifurcation structure of any data type. 
","Published":"2017-04-30","License":"Artistic License 2.0","snapshot_date":"2017-06-23"} {"Package":"ddst","Version":"1.4","Title":"Data Driven Smooth Tests","Description":"Smooth testing of goodness of fit. These tests are data\n driven (the alternative hypothesis is dynamically selected based on data). In this\n package you will find various tests for the exponential, Gaussian, Gumbel and uniform\n distributions.","Published":"2016-05-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"deadband","Version":"0.1.0","Title":"Statistical Deadband Algorithms Comparison","Description":"Statistical deadband algorithms are based on the Send-On-Delta concept as in Miskowicz (2006). A collection of functions compares the effectiveness and fidelity of sampled signals using statistical deadband algorithms.","Published":"2016-09-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"deal","Version":"1.2-37","Title":"Learning Bayesian Networks with Mixed Variables","Description":"Bayesian networks with continuous and/or discrete\n variables can be learned and compared from data.","Published":"2013-01-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"deamer","Version":"1.0","Title":"Deconvolution density estimation with adaptive methods for a\nvariable prone to measurement error","Description":"deamer provides deconvolution algorithms for the\n non-parametric estimation of the density f of an error-prone\n variable x with additive noise e. 
The model is y = x + e where\n the noisy variable y is observed, while x is unobserved.\n Estimation may be performed i) for a known density of the error,\n ii) with an auxiliary sample of pure noise, and iii) with an\n auxiliary sample of replicate (repeated) measurements.\n Estimation is performed using adaptive model selection and\n penalized contrasts.","Published":"2012-08-05","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"deBInfer","Version":"0.4.1","Title":"Bayesian Inference for Differential Equations","Description":"A Bayesian framework for parameter inference in differential equations.\n This approach offers a rigorous methodology for parameter inference as well as\n modeling the link between unobservable model states and parameters, and\n observable quantities. Provides templates for the DE model, the\n observation model and data likelihood, and the model parameters and their prior\n distributions. A Markov chain Monte Carlo (MCMC) procedure processes these inputs\n to estimate the posterior distributions of the parameters and any derived\n quantities, including the model trajectories. 
Further functionality is provided\n to facilitate MCMC diagnostics and the visualisation of the posterior distributions\n of model parameters and trajectories.","Published":"2016-09-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"debug","Version":"1.3.1","Title":"MVB's debugger for R","Description":"Debugger for R functions, with code display, graceful\n error recovery, line-numbered conditional breakpoints, access\n to exit code, flow control, and full keyboard input.","Published":"2013-02-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"debugme","Version":"1.0.2","Title":"Debug R Packages","Description":"Specify debug messages as special string constants, and\n control debugging of packages via environment variables.","Published":"2017-03-01","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"DECIDE","Version":"1.2","Title":"DEComposition of Indirect and Direct Effects","Description":"Calculates various estimates for measures of educational\n differentials, the relative importance of primary and secondary\n effects in the creation of such differentials and compares the\n estimates obtained from two datasets.","Published":"2014-11-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"decision","Version":"0.1.0","Title":"Statistical Decision Analysis","Description":"Contains a function called dmur() which accepts four parameters like possible values, probabilities of the values, selling cost and preparation cost. 
The dmur() function generates various numeric decision parameters like MEMV (Maximum (optimum) expected monetary value), best choice, EPPI (Expected profit with perfect information), EVPI (Expected value of perfect information), EOL (Expected opportunity loss), which facilitate effective decision-making.","Published":"2016-04-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DecisionCurve","Version":"1.3","Title":"Calculate and Plot Decision Curves","Description":"Decision curves are a useful tool to evaluate the population impact\n of adopting a risk prediction instrument into clinical practice. Given one or more\n instruments (risk models) that predict the probability of a binary outcome, this\n package calculates and plots decision curves, which display estimates of the\n standardized net benefit by the probability threshold used to categorize observations\n as 'high risk.' Curves can be estimated using data from an observational cohort, or\n from case-control studies when an estimate of the population outcome prevalence\n is available. Confidence intervals calculated using the bootstrap can be displayed and\n a wrapper function to calculate cross-validated curves using k-fold cross-validation is\n also provided.","Published":"2016-08-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"decisionSupport","Version":"1.101.2","Title":"Quantitative Support of Decision Making under Uncertainty","Description":"Supporting the quantitative analysis of binary welfare based\n decision making processes using Monte Carlo simulations. Decision support\n is given on two levels: (i) The actual decision level is to choose between\n two alternatives under probabilistic uncertainty. This package calculates\n the optimal decision based on maximizing expected welfare. 
(ii) The meta\n decision level is to allocate resources to reduce the uncertainty in the\n underlying decision problem, i.e. to increase the current information to\n improve the actual decision making process. This problem is dealt with\n using the Value of Information Analysis. The Expected Value of\n Information for arbitrary prospective estimates can be calculated, as\n well as the Individual Expected Value of Perfect Information.\n The probabilistic calculations are done via Monte Carlo\n simulations. This Monte Carlo functionality can be used on its own.","Published":"2016-04-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"decode","Version":"1.2","Title":"Differential Co-Expression and Differential Expression Analysis","Description":"Integrated differential expression (DE) and differential co-expression (DC) analysis on gene expression data based on the DECODE (DifferEntial CO-expression and Differential Expression) algorithm.","Published":"2015-07-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"decoder","Version":"1.1.12","Title":"Decode Coded Variables to Plain Text and the Other Way Around","Description":"The main function \"decode\" is used to decode coded key values to plain\n text. The function \"code\" can be used to code plain text to code if there is a\n 1:1 relation between the two. The concept relies on 'keyvalue' objects used\n for translation. There are several 'keyvalue' objects included in the areas\n of geographical regional codes, administrative health care unit codes,\n diagnosis codes and more. It is also easy to extend the use by arbitrary \n code sets.","Published":"2017-02-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"decompr","Version":"4.5.0","Title":"Global-Value-Chain Decomposition","Description":"Two global-value-chain decompositions are implemented. Firstly, the\n Wang-Wei-Zhu (Wang, Wei, and Zhu, 2013) algorithm splits bilateral gross exports\n into 16 value-added components. 
Secondly, the Leontief decomposition (default)\n derives the value-added origin of exports by country and industry, which is also\n based on Wang, Wei, and Zhu (Wang, Z., S.-J. Wei, and K. Zhu. 2013. \"Quantifying\n International Production Sharing at the Bilateral and Sector Levels.\").","Published":"2016-08-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"decon","Version":"1.2-4","Title":"Deconvolution Estimation in Measurement Error Models","Description":"This package contains a collection of functions to deal\n with nonparametric measurement error problems using\n deconvolution kernel methods. We focus on two measurement error\n models in the package: (1) an additive measurement error model,\n where the goal is to estimate the density or distribution\n function from contaminated data; (2) a nonparametric regression\n model with errors-in-variables. The R functions allow the\n measurement errors to be either homoscedastic or\n heteroscedastic. To make the deconvolution estimators\n computationally more efficient in R, we adapt the \"Fast Fourier\n Transform\" (FFT) algorithm for density estimation with\n error-free data to the deconvolution kernel estimation. Several\n methods for the selection of the data-driven smoothing\n parameter are also provided in the package. See details in:\n Wang, X.F. and Wang, B. (2011). Deconvolution estimation in\n measurement error models: The R package decon. 
Journal of\n Statistical Software, 39(10), 1-24.","Published":"2013-04-17","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"deconstructSigs","Version":"1.8.0","Title":"Identifies Signatures Present in a Tumor Sample","Description":"Takes sample information in the form of the fraction of mutations\n in each of 96 trinucleotide contexts and identifies the weighted combination\n of published signatures that, when summed, most closely reconstructs the\n mutational profile.","Published":"2016-07-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"deconvolveR","Version":"1.0-3","Title":"Empirical Bayes Estimation Strategies","Description":"Empirical Bayes methods for learning prior distributions from data.\n An unknown prior distribution (g) has yielded (unobservable) parameters, each of\n which produces a data point from a parametric exponential family (f). The goal\n is to estimate the unknown prior (\"g-modeling\") by deconvolution and Empirical\n Bayes methods.","Published":"2016-12-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DecorateR","Version":"0.1.1","Title":"Fit and Deploy DECORATE Trees","Description":"DECORATE (Diverse Ensemble Creation by Oppositional Relabeling\n of Artificial Training Examples) builds an ensemble of J48 trees by recursively\n adding artificial samples of the training data (\"Melville, P., & Mooney, R. J. (2005). Creating diversity in ensembles using artificial data. Information Fusion, 6(1), 99-111. \").","Published":"2017-02-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Deducer","Version":"0.7-9","Title":"A Data Analysis GUI for R","Description":"An intuitive, cross-platform graphical data analysis system. It uses menus and dialogs to guide the user efficiently through the data manipulation and analysis process, and has an Excel-like spreadsheet for easy data frame visualization and editing.
Deducer works best when used with the Java-based R GUI JGR, but the dialogs can be called from the command line. Dialogs have also been integrated into the Windows Rgui.","Published":"2015-12-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"DeducerExtras","Version":"1.7","Title":"Additional dialogs and functions for Deducer","Description":"Added functionality for Deducer. This package includes\n additional dialogs for calculating distribution function\n values, cluster analysis and more.","Published":"2013-03-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"DeducerPlugInExample","Version":"0.2-0","Title":"Deducer Plug-in Example","Description":"An example GUI plug-in package to serve as a template.","Published":"2012-03-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"DeducerPlugInScaling","Version":"0.1-0","Title":"Reliability and factor analysis plugin","Description":"A Deducer plug-in for factor analysis, reliability\n analysis and discriminant analysis, using the psych, GPArotation\n and mvnormtest packages.","Published":"2012-03-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DeducerSpatial","Version":"0.7","Title":"Deducer for spatial data analysis","Description":"A Deducer plug-in for spatial data analysis.
Includes the\n ability to plot and explore OpenStreetMap and Bing satellite\n images.","Published":"2013-04-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"DeducerSurvival","Version":"0.1-0","Title":"Add Survival Dialogue to Deducer","Description":"Adds Kaplan-Meier Survival Analysis to the Deducer GUI","Published":"2012-07-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"DeducerText","Version":"0.1-2","Title":"Deducer GUI for Text Data","Description":"A GUI for text mining","Published":"2014-06-13","License":"LGPL-2","snapshot_date":"2017-06-23"} {"Package":"deducorrect","Version":"1.3.7","Title":"Deductive Correction, Deductive Imputation, and Deterministic\nCorrection","Description":"A collection of methods for automated data cleaning where all actions are logged.","Published":"2015-07-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"deductive","Version":"0.1.2","Title":"Data Correction and Imputation Using Deductive Methods","Description":"Attempt to repair inconsistencies and missing values in data \n records by using information from valid values and validation rules \n restricting the data.","Published":"2017-06-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"deepboost","Version":"0.1.5","Title":"Deep Boosting Ensemble Modeling","Description":"Provides deep boosting model training, evaluation, prediction and\n hyper-parameter optimisation using grid search and cross validation.\n Based on Google's Deep Boosting algorithm, and Google's C++ implementation.\n Cortes, C., Mohri, M., & Syed, U.
(2014) .","Published":"2016-12-29","License":"Apache License (== 2.0)","snapshot_date":"2017-06-23"} {"Package":"deepnet","Version":"0.2","Title":"deep learning toolkit in R","Description":"Implements some deep learning architectures and neural network\n algorithms, including BP, RBM, DBN, deep autoencoders and so on.","Published":"2014-03-20","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"DEEPR","Version":"0.1","Title":"Dirichlet-multinomial Evolutionary Event Profile Randomization\n(DEEPR) test","Description":"Tests for, and describes, differences in event count profiles in groups of reconstructed cophylogenies","Published":"2015-01-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"deformula","Version":"0.1.1","Title":"Integration of One-Dimensional Functions with Double Exponential\nFormulas","Description":"Numerical quadrature of functions of one variable over a finite or infinite interval\n\twith double exponential formulas.","Published":"2015-10-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"degenes","Version":"1.1","Title":"Detection of differentially expressed genes","Description":"Detection of differentially expressed genes between two\n distinct groups of samples.","Published":"2012-10-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"degreenet","Version":"1.3-1","Title":"Models for Skewed Count Distributions Relevant to Networks","Description":"Likelihood-based inference for skewed count distributions used in network modeling. \"degreenet\" is a part of the \"statnet\" suite of packages for network analysis.","Published":"2015-04-19","License":"GPL-3 + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"Delaporte","Version":"6.0.0","Title":"Statistical Functions for the Delaporte Distribution","Description":"Provides probability mass, distribution, quantile, random-variate generation, and method-of-moments parameter-estimation functions for the Delaporte distribution.
The Delaporte is a discrete probability distribution which can be considered the convolution of a negative binomial distribution with a Poisson distribution. Alternatively, it can be considered a counting distribution with both Poisson and negative binomial components. It has been studied in actuarial science as a frequency distribution which has more variability than the Poisson, but less than the negative binomial.","Published":"2017-03-31","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"deldir","Version":"0.1-14","Title":"Delaunay Triangulation and Dirichlet (Voronoi) Tessellation","Description":"Calculates the Delaunay triangulation and the Dirichlet\n\tor Voronoi tessellation (with respect to the entire plane) of\n\ta planar point set. Plots triangulations and tessellations in\n\tvarious ways. Clips tessellations to sub-windows. Calculates\n\tperimeters of tessellations. Summarises information about the\n\ttiles of the tessellation.","Published":"2017-04-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DeLorean","Version":"1.2.4","Title":"Estimates Pseudotimes for Single Cell Expression Data","Description":"Implements the DeLorean model to estimate pseudotimes for\n single cell expression data. 
The DeLorean model uses a Gaussian process\n latent variable model to model uncertainty in the capture time of\n cross-sectional data.","Published":"2016-10-05","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"delt","Version":"0.8.2","Title":"Estimation of Multivariate Densities Using Adaptive Partitions","Description":"\n We implement methods for estimating multivariate densities.\n We include a discretized kernel estimator,\n an adaptive histogram (a greedy histogram and a CART-histogram), \n stagewise minimization, and bootstrap aggregation.","Published":"2015-06-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Delta","Version":"0.1.1.1","Title":"Measure of Agreement Between Two Raters","Description":"The measure of agreement Delta was originally proposed by Martín & Femia (2004) . \n Since then it has been considered as an agreement measure for different \n\tfields, since its behavior is usually better than that of the usual Kappa index\n\tby Cohen (1960) . The main issue with Delta \n\tis that it cannot be computed by hand, contrary to Kappa.","Published":"2017-06-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"deltaPlotR","Version":"1.5","Title":"Identification of dichotomous differential item functioning\n(DIF) using Angoff's Delta Plot method","Description":"The deltaPlotR package proposes an implementation of Angoff's Delta Plot method to detect dichotomous DIF. Several detection thresholds are included, either from a multivariate normality assumption or by prior determination. Item purification is supported.","Published":"2014-11-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Demerelate","Version":"0.9-3","Title":"Functions to Calculate Relatedness on Diploid Genetic Data","Description":"Functions to calculate pairwise relatedness on diploid genetic datasets. Different estimators for relatedness can be combined with information on geographical distances.
Information on heterozygosity, allele- and genotype diversity as well as genetic F-statistics are provided for each population.","Published":"2017-03-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DEMEtics","Version":"0.8-7","Title":"Evaluating the genetic differentiation between populations based\non Gst and D values","Description":"This package allows the user to calculate the fixation index Gst\n (Nei, 1973) and the differentiation index D (Jost, 2008) pairwise\n between or averaged over several populations. P-values, stating the\n significance of differentiation, and 95 percent confidence intervals\n can be estimated using bootstrap resamplings. In the case that more\n than two populations are compared pairwise, the p-values are\n adjusted by Bonferroni correction and in several other ways to account for\n the multiple comparisons arising from one data set.","Published":"2013-12-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"demi","Version":"1.1.2","Title":"Differential Expression from Multiple Indicators","Description":"Implementation of the DEMI method for the analysis of high-density microarray data.","Published":"2015-02-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"deming","Version":"1.0-1","Title":"Deming, Theil-Sen and Passing-Bablok Regression","Description":"Generalized Deming regression, Theil-Sen regression and Passing-Bablok regression functions.","Published":"2014-09-24","License":"LGPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"demogR","Version":"0.5.0","Title":"Analysis of Age-Structured Demographic Models","Description":"Construction and analysis of matrix population models in R.","Published":"2017-04-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"demography","Version":"1.20","Title":"Forecasting Mortality, Fertility, Migration and Population Data","Description":"Functions for demographic analysis including lifetable\n calculations; Lee-Carter modelling; functional data
analysis of\n mortality rates, fertility rates, net migration numbers; and\n stochastic population forecasting.","Published":"2017-04-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"demoKde","Version":"0.9-4","Title":"Kernel Density Estimation for Demonstration Purposes","Description":"Demonstration code showing how (univariate) kernel\n\t density estimates are computed, at least conceptually,\n\t and allowing users to experiment with different kernels,\n\t should they so wish. NOTE: the density function in the\n\t stats package should be used for computational efficiency.","Published":"2017-02-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"DEMOVA","Version":"1.0","Title":"DEvelopment (of Multi-Linear QSPR/QSAR) MOdels VAlidated using\nTest Set","Description":"Tool for the development of multi-linear QSPR/QSAR models (Quantitative structure-property/activity relationship). These models are used in chemistry, biology and pharmacy to find a relationship between the structure of a molecule and its property (such as activity, toxicology but also physical properties). The various functions of this package allow: selection of descriptors based on variances, intercorrelation and user expertise; selection of the best multi-linear regression in terms of correlation and robustness; methods of internal validation (Leave-One-Out, Leave-Many-Out, Y-scrambling) and external validation using test sets.","Published":"2016-03-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dendextend","Version":"1.5.2","Title":"Extending 'Dendrogram' Functionality in R","Description":"Offers a set of functions for extending\n 'dendrogram' objects in R, letting you visualize and compare trees of\n 'hierarchical clusterings'. You can (1) Adjust a tree's graphical parameters\n - the color, size, type, etc. of its branches, nodes and labels.
(2)\n Visually and statistically compare different 'dendrograms' to one another.","Published":"2017-03-28","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"dendroextras","Version":"0.2.2","Title":"Extra Functions to Cut, Label and Colour Dendrogram Clusters","Description":"Provides extra functions to manipulate dendrograms\n that build on the base functions provided by the 'stats' package. The main\n functionality it is designed to add is the ability to colour all the edges\n in an object of class 'dendrogram' according to cluster membership, i.e. each\n subtree is coloured, not just the terminal leaves. In addition it provides\n some utility functions to cut 'dendrogram' and 'hclust' objects and to \n set/get labels.","Published":"2017-05-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dendrometeR","Version":"1.0.0","Title":"Analyzing Dendrometer Data","Description":"Various functions to import, verify, process and plot high-resolution dendrometer data using daily and stem-cycle approaches.","Published":"2016-03-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DendroSync","Version":"0.1.0","Title":"A Set of Tools for Calculating Spatial Synchrony Between\nTree-Ring Chronologies","Description":"Provides functions for the calculation and plotting of synchrony in \n tree growth from tree-ring width chronologies (TRW index). It combines\n variance-covariance (VCOV) mixed modelling with functions that quantify \n the degree to which the TRW chronologies contain a common temporal \n signal. It also implements temporal trends in spatial synchrony using a \n moving window.
These methods can also be used with other kinds of ecological\n variables whose temporal autocorrelation has been corrected.","Published":"2017-05-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"DendSer","Version":"1.0.1","Title":"Dendrogram seriation: ordering for visualisation","Description":"Re-arranges a dendrogram to optimize visualisation-based cost functions","Published":"2013-09-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"dendsort","Version":"0.3.3","Title":"Modular Leaf Ordering Methods for Dendrogram Nodes","Description":"An implementation of functions to optimize ordering of nodes in\n a dendrogram, without affecting the meaning of the dendrogram. A dendrogram can\n be sorted based on the average distance of subtrees, or based on the smallest\n distance value. These sorting methods improve readability and interpretability\n of tree structure, especially for tasks such as comparison of different distance\n measures or linkage types and identification of tight clusters and outliers. As\n a result, it also introduces more meaningful reordering for a coupled heatmap\n visualization.","Published":"2015-12-14","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"DengueRT","Version":"1.0.1","Title":"Parameter Estimates and Real-Time Prediction of a Single Dengue\nOutbreak","Description":"Provides functions for parameter estimation and real-time predictions of a single dengue \n outbreak taking into account model uncertainty using model averaging. ","Published":"2016-05-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"denoiseR","Version":"1.0","Title":"Regularized Low Rank Matrix Estimation","Description":"Estimate a low rank matrix from noisy data using singular value\n thresholding and shrinking functions. Impute missing values with matrix completion.
","Published":"2016-07-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"denovolyzeR","Version":"0.2.0","Title":"Statistical Analyses of De Novo Genetic Variants","Description":"An integrated toolset for the analysis of de novo (sporadic)\n genetic sequence variants. denovolyzeR implements a mutational model that\n estimates the probability of a de novo genetic variant arising in each human\n gene, from which one can infer the expected number of de novo variants in a\n given population size. Observed variant frequencies can then be compared against\n expectation in a Poisson framework. denovolyzeR provides a suite of functions\n to implement these analyses for the interpretation of de novo variation in human\n disease.","Published":"2016-08-01","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"denpro","Version":"0.9.2","Title":"Visualization of Multivariate Functions, Sets, and Data","Description":"\n We provide tools to \n (1) visualize multivariate density functions and density estimates \n with level set trees,\n (2) visualize level sets with shape trees,\n (3) visualize multivariate data with tail trees, \n (4) visualize scales of multivariate density estimates with \n mode graphs and branching maps, and\n (5) visualize anisotropic spread with 2D volume functions and\n 2D probability content functions.\n Level set trees visualize mode structure,\n shape trees visualize shapes of level sets of unimodal densities,\n and tail trees visualize connected data sets.\n The kernel estimator is implemented\n but the package may also be applied for visualizing other density estimates. 
","Published":"2015-05-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"denseFLMM","Version":"0.1.0","Title":"Functional Linear Mixed Models for Densely Sampled Data","Description":"Estimation of functional linear mixed models for densely sampled data based on functional principal component analysis.","Published":"2017-03-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Density.T.HoldOut","Version":"2.00","Title":"Density.T.HoldOut: Non-combinatorial T-estimation Hold-Out for\ndensity estimation","Description":"Implementation in the density framework of the non-combinatorial algorithm and its greedy version, introduced by Magalhães and Rozenholc (2014), for T-estimation Hold-Out proposed in Birgé (2006, Section 9). The package provides an implementation which uses several families of estimators (regular and irregular histograms, kernel estimators) which may be used alone or combined. As a complement, it also provides a comparison with other Hold-Out methods derived from least-squares and maximum-likelihood. This package also implements the T-estimation Hold-Out derived from the test introduced in Baraud (2011).","Published":"2014-09-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"densityClust","Version":"0.2.1","Title":"Clustering by Fast Search and Find of Density Peaks","Description":"An implementation of the clustering algorithm described by Alex\n Rodriguez and Alessandro Laio (Science, 2014 vol.
344), along with tools to\n inspect and visualize the results.","Published":"2016-02-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DensParcorr","Version":"1.0","Title":"Dens-Based Method for Partial Correlation Estimation in Large\nScale Brain Networks","Description":"Provides a Dens-based method for estimating functional connections in large scale brain networks using partial correlation.","Published":"2016-04-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"densratio","Version":"0.0.3","Title":"Density Ratio Estimation","Description":"Density ratio estimation.\n The estimated density ratio function can be used in many applications such as\n inlier-based outlier detection, covariate shift adaptation, etc.","Published":"2016-03-29","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"denstrip","Version":"1.5.3","Title":"Density strips and other methods for compactly illustrating\ndistributions","Description":"Graphical methods for compactly illustrating probability distributions, including density strips, density regions, sectioned density plots and varying width strips.","Published":"2014-02-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DEoptim","Version":"2.2-4","Title":"Global Optimization by Differential Evolution","Description":"Implements the differential evolution algorithm for global optimization of a real-valued function of a real-valued parameter vector.
","Published":"2016-12-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DEoptimR","Version":"1.0-8","Title":"Differential Evolution Optimization in Pure R","Description":"Differential Evolution (DE) stochastic algorithms for global\n optimization of problems with and without constraints.\n The aim is to curate a collection of its state-of-the-art variants that\n (1) do not sacrifice simplicity of design,\n (2) are essentially tuning-free, and\n (3) can be efficiently implemented directly in the R language.\n Currently, it only provides an implementation of the 'jDE' algorithm by\n Brest et al. (2006) .","Published":"2016-11-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"depend.truncation","Version":"2.6","Title":"Statistical Inference for Parametric and Semiparametric Models\nBased on Dependently Truncated Data","Description":"Suppose that one can observe bivariate random variables (X, Y) only when X<=Y holds. Data (Xj, Yj), subject to Xj<=Yj, for all j=1,...,n, are called truncated data. For truncated data, several different approaches are implemented for statistical inference on (X, Y), when X and Y are dependent. Also included is truncated data on the number of deaths at each year (1963-1980) for Japanese male centenarians. ","Published":"2017-04-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"DEploid","Version":"0.3.2","Title":"Deconvolute Mixed Genomes with Unknown Proportions","Description":"Traditional phasing programs are limited to diploid organisms.\n Our method modifies Li and Stephens algorithm with Markov chain Monte Carlo\n (MCMC) approaches, and builds a generic framework that allows haplotype searches\n in a multiple infection setting. This package is primarily developed as part of\n the Pf3k project, which is a global collaboration using the latest\n sequencing technologies to provide a high-resolution view of natural variation\n in the malaria parasite Plasmodium falciparum. 
Parasite DNA is extracted from\n a patient blood sample, which often contains more than one parasite strain, with\n unknown proportions. This package is used for deconvoluting mixed haplotypes,\n and reporting the mixture proportions from each sample.","Published":"2016-11-25","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"depmix","Version":"0.9.14","Title":"Dependent Mixture Models","Description":"Fits (multigroup) mixtures of latent or hidden Markov models on mixed categorical and continuous (timeseries) data. The Rdonlp2 package can optionally be used for optimization of the log-likelihood and is available from R-forge. ","Published":"2016-02-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"depmixS4","Version":"1.3-3","Title":"Dependent Mixture Models - Hidden Markov Models of GLMs and\nOther Distributions in S4","Description":"Fits latent (hidden) Markov models on mixed categorical and continuous (time series) data, otherwise known as dependent mixture models.","Published":"2016-02-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"depth","Version":"2.1-1","Title":"Nonparametric Depth Functions for Multivariate Analysis","Description":"Tools for depth functions methodology applied \n to multivariate analysis. Besides allowing calculation \n of depth values and depth-based location estimators, the package\n includes functions for drawing contour plots and perspective plots\n of depth functions. Euclidean and spherical depths are supported.","Published":"2017-01-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"depth.plot","Version":"0.1","Title":"Multivariate Analogy of Quantiles","Description":"Could be used to obtain spatial depths, spatial ranks and outliers of multivariate random variables.
Could also be used to visualize DD-plots (a multivariate generalization of QQ-plots).","Published":"2015-12-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"DepthProc","Version":"2.0.1","Title":"Statistical Depth Functions for Multivariate Analysis","Description":"The data depth concept offers a variety of powerful and user-friendly\n tools for robust exploration and inference for multivariate data. The offered\n techniques may be successfully used when, owing to the nature of the data, no\n parametric model generating the data is known. The\n package consists of, among others, implementations of several data depth techniques\n involving multivariate quantile-quantile plots, multivariate scatter estimators,\n multivariate Wilcoxon tests and robust regressions.","Published":"2017-06-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"depthTools","Version":"0.4","Title":"Depth Tools Package","Description":"depthTools is a package that implements different\n statistical tools for the description and analysis of gene\n expression data based on the concept of data depth, namely, the\n scale curves for visualizing the dispersion of one or various\n groups of samples (e.g. types of tumors), a rank test to decide\n whether two groups of samples come from a single distribution\n and two supervised classification techniques, the DS\n and TAD methods. All these techniques are based on the Modified\n Band Depth, which is a recent notion of depth with a low\n computational cost, which renders it very appropriate for\n high-dimensional data such as gene expression data.","Published":"2013-06-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dequer","Version":"2.0-0","Title":"Stacks, Queues, and 'Deques' for R","Description":"Queues, stacks, and 'deques' are list-like, abstract data types.
\n These are meant to be very cheap to \"grow\", or insert new objects into.\n A typical use case involves storing data in a list in a streaming fashion,\n when you do not necessarily know how many elements need to be stored.\n Unlike R's lists, the new data structures provided here are not\n necessarily stored contiguously, making insertions and deletions at the\n front/end of the structure much faster. The underlying implementation\n is new and uses a head/tail doubly linked list; thus, we do not rely on R's\n environments or hashing. To avoid unnecessary data copying, most operations\n on these data structures are performed via side-effects.","Published":"2016-09-26","License":"BSD 2-clause License + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"Deriv","Version":"3.8.1","Title":"Symbolic Differentiation","Description":"R-based solution for symbolic differentiation. It admits\n user-defined functions as well as function substitution\n in arguments of functions to be differentiated.
Some symbolic\n simplification is part of the work.","Published":"2017-06-14","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"derivmkts","Version":"0.2.2","Title":"Functions and R Code to Accompany Derivatives Markets","Description":"A set of pricing and expository functions that should\n be useful in teaching a course on financial derivatives.","Published":"2016-09-05","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"desc","Version":"1.1.0","Title":"Manipulate DESCRIPTION Files","Description":"Tools to read, write, create, and manipulate DESCRIPTION files.\n It is intended for packages that create or manipulate other packages.","Published":"2017-01-27","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"descomponer","Version":"1.3","Title":"Seasonal Adjustment by Frequency Analysis","Description":"Decompose a time series into seasonal, trend and irregular components using transformations to the amplitude-frequency domain.","Published":"2017-04-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"descr","Version":"1.1.3","Title":"Descriptive Statistics","Description":"Weighted frequency and contingency tables of categorical\n variables and of the comparison of the mean value of a numerical\n variable by the levels of a factor, and methods to produce xtable\n objects of the tables and to plot them.
There are also functions to\n facilitate the character encoding conversion of objects, to quickly\n convert fixed width files into csv ones, and to export a data.frame to\n a text file with the necessary R and SPSS codes to reread the data.","Published":"2016-05-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DescribeDisplay","Version":"0.2.5","Title":"An Interface to the DescribeDisplay GGobi Plugin","Description":"Produce publication quality graphics from output of GGobi's\n describe display plugin.","Published":"2016-01-15","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"describer","Version":"0.2.0","Title":"Describe Data in R Using Common Descriptive Statistics","Description":"Allows users to quickly and easily describe data using\n common descriptive statistics.","Published":"2015-09-17","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"descriptr","Version":"0.1.1","Title":"Descriptive Statistics & Distributions Exploration","Description":"Generate descriptive statistics such as measures of location,\n dispersion, frequency tables, cross tables, group summaries and multiple\n one/two way tables. 
Visualize and compute percentiles/probabilities of \n normal, t, f, chi square and binomial distributions.","Published":"2017-06-01","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"desctable","Version":"0.1.0","Title":"Produce Descriptive and Comparative Tables Easily","Description":"Easily create descriptive and comparative tables.\n It makes use of, and integrates directly with, the tidyverse family of packages and pipes.\n Tables are produced as data frames/lists of data frames for easy manipulation after creation,\n and ready to be saved as csv, or piped to DT::datatable() or pander::pander() to integrate into reports.","Published":"2017-05-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"DescTools","Version":"0.99.20","Title":"Tools for Descriptive Statistics","Description":"A collection of miscellaneous basic statistic functions and convenience wrappers for efficiently describing data. The author's intention was to create a toolbox, which facilitates the (notoriously time consuming) first descriptive tasks in data analysis, consisting of calculating descriptive statistics, drawing graphical summaries and reporting the results. The package furthermore contains functions to produce documents using MS Word (or PowerPoint) and functions to import data from Excel. Many of the included functions can be found scattered in other packages and other sources written partly by Titans of R. The reason for collecting them here was primarily to have them consolidated in ONE instead of dozens of packages (which themselves might depend on other packages which are not needed at all), and to provide a common and consistent interface as far as function and arguments naming, NA handling, recycling rules etc. are concerned. Google style guides were used as naming rules (in the absence of convincing alternatives).
The 'camel style' was consistently applied to functions borrowed from contributed R packages as well.","Published":"2017-04-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DescToolsAddIns","Version":"0.9","Title":"Some Functions to be Used as Shortcuts in RStudio","Description":"RStudio recently started to offer the option to define addins and assign shortcuts to them. This package contains AddIns for a few of the most-used functions in an analyst's (at least my) daily work (like str(), example(), plot(), head(), view(), Desc()). Most of these functions will get the current selection in RStudio's editor window and send the specific command to the console while instantly executing it. Assigning shortcuts to these AddIns will spare you quite a few keystrokes.","Published":"2017-04-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"deseasonalize","Version":"1.35","Title":"Optimal deseasonalization for geophysical time series using AR\nfitting","Description":"Deseasonalize daily or monthly time series.","Published":"2013-04-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"designGG","Version":"1.1","Title":"Computational tool for designing genetical genomics experiments","Description":"The package provides R scripts for designing genetical\n genomics experiments.","Published":"2013-02-19","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"designGLMM","Version":"0.1.0","Title":"Finding Optimal Block Designs for a Generalised Linear Mixed\nModel","Description":"Use simulated annealing to find optimal designs for Poisson regression models with blocks.","Published":"2016-02-16","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"designmatch","Version":"0.3.0","Title":"Matched Samples that are Balanced and Representative by Design","Description":"Includes functions for the construction of matched samples that are balanced and representative by design. 
Among others, these functions can be used for matching in observational studies with treated and control units, with cases and controls, in related settings with instrumental variables, and in discontinuity designs. Also, they can be used for the design of randomized experiments, for example, for matching before randomization. By default, 'designmatch' uses the 'GLPK' optimization solver, but its performance is greatly enhanced by the 'Gurobi' optimization solver and its associated R interface. For their installation, please follow the instructions at and . We have also included directions in the gurobi_installation file in the inst folder.","Published":"2017-05-18","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"desiR","Version":"1.2.1","Title":"Desirability Functions for Ranking, Selecting, and Integrating\nData","Description":"Functions for (1) ranking, selecting, and prioritising genes,\n proteins, and metabolites from high dimensional biology experiments, (2)\n multivariate hit calling in high content screens, and (3) combining data\n from diverse sources.","Published":"2016-12-10","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"desirability","Version":"2.1","Title":"Function Optimization and Ranking via Desirability Functions","Description":"S3 classes for multivariate optimization using the desirability function by Derringer and Suich (1980).","Published":"2016-09-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"desire","Version":"1.0.7","Title":"Desirability functions in R","Description":"Harrington and Derringer-Suich type desirability functions","Published":"2013-07-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"DESnowball","Version":"1.0","Title":"Bagging with Distance-based Regression for Differential Gene\nExpression Analyses","Description":"This package implements a statistical data mining method to\n compare whole genome gene expression profiles, with respect to the presence\n of a 
recurrent genetic disturbance event, to identify the affected target\n genes.","Published":"2014-01-04","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"deSolve","Version":"1.14","Title":"Solvers for Initial Value Problems of Differential Equations\n(ODE, DAE, DDE)","Description":"Functions that solve initial value problems of a system\n of first-order ordinary differential equations (ODE), of\n partial differential equations (PDE), of differential\n algebraic equations (DAE), and of delay differential\n equations. The functions provide an interface to the FORTRAN\n functions lsoda, lsodar, lsode, lsodes of the ODEPACK\n collection, to the FORTRAN functions dvode and daspk and a\n C-implementation of solvers of the Runge-Kutta family with\n fixed or variable time steps. The package contains routines\n designed for solving ODEs resulting from 1-D, 2-D and 3-D\n partial differential equations (PDE) that have been converted\n to ODEs by numerical differencing.","Published":"2016-09-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DESP","Version":"0.2-2","Title":"Estimation of Diagonal Elements of Sparse Precision-Matrices","Description":"Several estimators of the diagonal elements of a sparse precision\n\t(inverse covariance) matrix from a sample of Gaussian vectors for a\n\tgiven matrix of estimated marginal regression coefficients. 
Moreover, a robust estimator\n of the precision matrix is proposed.\n\tTo install package 'gurobi', instructions at \n\t and \n\t.","Published":"2017-02-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"desplot","Version":"1.1","Title":"Plotting Field Plans for Agricultural Experiments","Description":"A function for plotting maps of agricultural field experiments that\n are laid out in grids.","Published":"2016-12-18","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"detect","Version":"0.4-0","Title":"Analyzing Wildlife Data with Detection Error","Description":"Models for analyzing site occupancy and count data models\n with detection error, including single-visit based models,\n conditional distance sampling and time-removal models.\n Package development was supported by the\n Alberta Biodiversity Monitoring Institute (www.abmi.ca)\n and the Boreal Avian Modelling Project (borealbirds.ca).","Published":"2016-03-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"detector","Version":"0.1.0","Title":"Detect Data Containing Personally Identifiable Information","Description":"Allows users to quickly and easily detect data containing\n Personally Identifiable Information (PII) through convenience functions.","Published":"2015-08-28","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"deTestSet","Version":"1.1.5","Title":"Test Set for Differential Equations","Description":"Solvers and test set for stiff and non-stiff differential equations, and \n differential algebraic equations.","Published":"2017-01-19","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"DetMCD","Version":"0.0.4","Title":"Implementation of the DetMCD Algorithm (Robust and Deterministic\nEstimation of Location and Scatter)","Description":"Implementation of DetMCD, a new algorithm for robust and deterministic estimation of location and scatter. 
The benefits of robust and deterministic estimation are explained in Hubert, Rousseeuw and Verdonck (2012) .","Published":"2016-11-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DetR","Version":"0.0.4","Title":"Suite of Deterministic and Robust Algorithms for Linear\nRegression","Description":"DetLTS, DetMM (and DetS) Algorithms for Deterministic, Robust\n Linear Regression.","Published":"2016-02-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"detrendeR","Version":"1.0.4","Title":"Start the detrendeR Graphical User Interface (GUI)","Description":"Simple GUI to perform some standard tree-ring analyses.","Published":"2012-10-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DetSel","Version":"1.0.2","Title":"A computer program to detect markers responding to selection","Description":"In the new era of population genomics, surveys of genetic\n polymorphism (\"genome scans\") offer the opportunity to\n distinguish locus-specific from genome wide effects at many\n loci. Identifying presumably neutral regions of the genome that\n are assumed to be influenced by genome-wide effects only, and\n excluding presumably selected regions, is therefore critical to\n infer population demography and phylogenetic history reliably.\n Conversely, detecting locus-specific effects may help identify\n those genes that have been, or still are, targeted by natural\n selection. The software package DetSel has been developed to\n identify markers that show deviation from neutral expectation\n in pairwise comparisons of diverging populations.","Published":"2013-04-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"detzrcr","Version":"0.2.0","Title":"Compare Detrital Zircon Suites","Description":"Compare detrital zircon suites by uploading univariate,\n U-Pb age, or bivariate, U-Pb age and Lu-Hf data, in a 'shiny'-based\n user-interface. 
Outputs publication quality figures using 'ggplot2', and\n tables of statistics currently in use in the detrital zircon geochronology\n community.","Published":"2017-06-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"devEMF","Version":"3.5","Title":"EMF Graphics Output Device","Description":"Output graphics to EMF+/EMF.","Published":"2017-06-10","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"Devore7","Version":"0.7.6","Title":"Data sets from Devore's \"Prob and Stat for Eng (7th ed)\"","Description":"Data sets and sample analyses from Jay L. Devore (2008),\n \"Probability and Statistics for Engineering and the Sciences\n (7th ed)\", Thomson.","Published":"2014-03-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"devRate","Version":"0.1.2","Title":"Quantify Relationship Between Development Rate and Temperature\nin Ectotherms","Description":"A set of functions to ease quantifying the relationship between development\n rate and temperature. The package comprises a set of models and estimated parameters\n borrowed from a literature review in ectotherms (mostly arthropods).","Published":"2017-05-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"devtools","Version":"1.13.2","Title":"Tools to Make Developing R Packages Easier","Description":"Collection of package development tools.","Published":"2017-06-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dexter","Version":"0.1.7","Title":"Data Management and Basic Analysis of Tests","Description":"Data handling, Rasch model and Haberman's interaction model for\n educational and psychological tests that may involve multiple test forms or\n stages.","Published":"2017-02-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"df2json","Version":"0.0.2","Title":"Convert a dataframe to JSON","Description":"It handles numerics, characters, factors, and logicals.","Published":"2013-04-08","License":"GPL-3","snapshot_date":"2017-06-23"} 
{"Package":"dfcomb","Version":"2.3","Title":"Phase I/II Adaptive Dose-Finding Design for Combination Studies","Description":"Phase I/II adaptive dose-finding design for combination\n studies. Several methods are proposed depending on the type of\n combinations: (1) the combination of two cytotoxic agents, and (2)\n combination of a molecularly targeted agent with a cytotoxic agent.","Published":"2017-05-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"dfCompare","Version":"1.0.0","Title":"Compare Two Dataframes and Return Adds, Changes, and Deletes","Description":"Compares two dataframes with a common key\n and returns the delta records. The package will return\n three dataframes that contain the added, changed,\n and deleted records.","Published":"2017-05-29","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"dfcrm","Version":"0.2-2","Title":"Dose-finding by the continual reassessment method","Description":"This package provides functions to run the CRM and\n TITE-CRM in phase I trials and calibration tools for trial\n planning purposes.","Published":"2013-08-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"dfexplore","Version":"0.2.1","Title":"Explore data.frames by plotting NA and classes of each variable","Description":"Quickly and graphically show missing values and classes of each variable and each observation of a data.frame. The aim is to show patterns of missing values and whether there is wrong class attribution.","Published":"2014-01-16","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"DFIT","Version":"1.0-3","Title":"Differential Functioning of Items and Tests","Description":"A set of functions to perform Raju, van der Linden and Fleer's\n (1995, doi:10.1177/014662169501900405) Differential Functioning of Items\n and Tests (DFIT) analyses. 
It includes functions to use the Monte Carlo Item\n Parameter Replication approach (Oshima, Raju, & Nanda, 2006, doi:10.1111/j.1745-3984.2006.00001.x) for obtaining cut-off points for the associated statistical significance\n tests. They may also be used for a priori and post-hoc power\n calculations (Cervantes, 2017, doi:10.18637/jss.v076.i05).","Published":"2017-01-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dfmta","Version":"1.5","Title":"Phase I/II Adaptive Dose-Finding Design for MTA","Description":"Phase I/II adaptive dose-finding design for single-agent\n Molecularly Targeted Agent (MTA), according to the paper \"Phase\n I/II Dose-Finding Design for Molecularly Targeted Agent: Plateau\n Determination using Adaptive Randomization\".","Published":"2017-05-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"dfoptim","Version":"2016.7-1","Title":"Derivative-Free Optimization","Description":"Derivative-Free optimization algorithms. These algorithms do not require gradient information. More importantly, they can be used to solve non-smooth optimization problems.","Published":"2016-07-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dfphase1","Version":"1.1.1","Title":"Phase I Control Charts (with Emphasis on Distribution-Free\nMethods)","Description":"Statistical methods for retrospectively detecting changes in location and/or dispersion of univariate and multivariate variables. Data values are assumed to be independent, can be individual (one observation at each instant of time) or subgrouped (more than one observation at each instant of time). Control limits are computed, often using a permutation approach, so that a prescribed false alarm probability is guaranteed without making any parametric assumptions on the stable (in-control) distribution. 
","Published":"2017-01-14","License":"LGPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dfpk","Version":"3.2.0","Title":"A Bayesian Dose-Finding Design using Pharmacokinetics(PK) for\nPhase I Clinical Trials","Description":"Statistical methods involving PK measures are provided for the dose allocation \n\t\t\t\t\t\tprocess during Phase I clinical trials. These methods incorporate pharmacokinetics \n\t\t\t\t\t\t(PK) into the dose-finding designs in different ways, including covariate models,\n\t\t\t\t\t\tdependent-variable models, or hierarchical models. \n\t\t\t\t\t\tThis package provides functions to generate data from several scenarios and functions \n\t\t\t\t\t\tto run simulations whose objective is to determine the maximum tolerated dose (MTD).","Published":"2017-06-07","License":"GPL (>= 3) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"dga","Version":"1.2","Title":"Capture-Recapture Estimation using Bayesian Model Averaging","Description":"Performs Bayesian model averaging for capture-recapture. This includes code to stratify records, check the strata for suitable overlap to be used for capture-recapture, and some functions to plot the estimated population size. ","Published":"2015-04-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dGAselID","Version":"1.1","Title":"Genetic Algorithm with Incomplete Dominance for Feature\nSelection","Description":"Feature selection from high dimensional data using a diploid\n genetic algorithm with Incomplete Dominance for genotype to phenotype mapping\n and Random Assortment of chromosomes approach to recombination.","Published":"2017-05-28","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"DGCA","Version":"1.0.1","Title":"Differential Gene Correlation Analysis","Description":"Performs differential correlation analysis on input\n matrices, with multiple conditions specified by a design matrix. 
Contains\n functions to filter, process, save, visualize, and interpret differential\n correlations of identifier-pairs across the entire identifier space, or with\n respect to a particular set of identifiers (e.g., one). Also contains several\n functions to perform differential correlation analysis on clusters (i.e., modules)\n or genes. Finally, it contains functions to generate empirical p-values for the\n hypothesis tests and adjust them for multiple comparisons. Although the package\n was built with gene expression data in mind, it is applicable to other types of\n genomics data as well, in addition to being potentially applicable to data from\n other fields entirely. It is described more fully in the manuscript\n introducing it, freely available at .","Published":"2016-11-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"dggridR","Version":"1.0.1","Title":"Discrete Global Grids for R","Description":"Spatial analyses involving binning require that every bin have the same area, but this is impossible using a rectangular grid laid over the Earth or over any projection of the Earth. Discrete global grids use hexagons, triangles, and diamonds to overcome this issue, overlaying the Earth with equally-sized bins. This package provides utilities for working with discrete global grids, along with utilities to aid in plotting such data.","Published":"2017-04-24","License":"MIT + file LICENCE","snapshot_date":"2017-06-23"} {"Package":"dglars","Version":"2.0.0","Title":"Differential Geometric Least Angle Regression","Description":"Differential geometric least angle regression method for fitting \n\tsparse generalized linear models. In this version of the package, the user can fit\n\tmodels specifying Gaussian, Poisson, Binomial, Gamma and Inverse Gaussian family. \n\tFurthermore, several link functions can be used to model the relationship between the \n\tconditional expected value of the response variable and the linear predictor. 
The \n\tsolution curve can be computed using an efficient predictor-corrector or a cyclic \n\tcoordinate descent algorithm, as described in the paper linked to via the URL below.","Published":"2017-02-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dglm","Version":"1.8.3","Title":"Double Generalized Linear Models","Description":"Model fitting and evaluation tools for double generalized linear\n models (DGLMs). This class of models uses one generalized linear model (GLM)\n to fit the specified response and a second GLM to fit the deviance of the first\n model.","Published":"2016-08-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dgmb","Version":"1.2","Title":"Simulating Data for PLS Mode B Structural Models","Description":"A set of functions have been implemented to generate random data to perform Monte Carlo simulations on structural models with formative constructs and interaction and nonlinear effects (Two-Step PLS Mode B structural models). The setup of the true model considers a simple structure with three formative exogenous constructs related to one formative endogenous construct. The routines take into account the interaction and nonlinear effects of the exogenous constructs on the endogenous construct.","Published":"2015-10-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dgo","Version":"0.2.10","Title":"Dynamic Estimation of Group-Level Opinion","Description":"Fit dynamic group-level IRT and MRP models from individual or\n aggregated item response data. 
This package handles common preprocessing\n tasks and extends functions for inspecting results, poststratification, and\n quick iteration over alternative models.","Published":"2017-05-30","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"dgodata","Version":"0.0.2","Title":"Data for the 'dgo' Package","Description":"Provides data used by package 'dgo' in examples and vignettes.","Published":"2017-02-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"dgof","Version":"1.2","Title":"Discrete Goodness-of-Fit Tests","Description":"This package contains a proposed revision to the\n stats::ks.test() function and the associated ks.test.Rd help\n page. With one minor exception, it does not change the\n existing behavior of ks.test(), and it adds features necessary\n for doing one-sample tests with hypothesized discrete\n distributions. The package also contains cvm.test(), for doing\n one-sample Cramer-von Mises goodness-of-fit tests.","Published":"2013-10-25","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"DHARMa","Version":"0.1.5","Title":"Residual Diagnostics for Hierarchical (Multi-Level / Mixed)\nRegression Models","Description":"The 'DHARMa' package uses a simulation-based approach to create\n readily interpretable scaled (quantile) residuals for fitted generalized linear mixed\n models. Currently supported are generalized linear mixed models from 'lme4' \n (classes 'lmerMod', 'glmerMod'), generalized additive models ('gam' from 'mgcv'), \n 'glm' (including 'negbin' from 'MASS', but excluding quasi-distributions) and 'lm' model\n classes. Alternatively, externally created simulations, e.g. posterior predictive simulations \n from Bayesian software such as 'JAGS', 'STAN', or 'BUGS' can be processed as well. \n The resulting residuals are standardized to values between 0 and 1 and can be interpreted \n as intuitively as residuals from a linear regression. 
The package also provides a number of \n plot and test functions for typical model misspecification problems, such as \n over/underdispersion, zero-inflation, and residual spatial and temporal autocorrelation.","Published":"2017-03-11","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"dhga","Version":"0.1","Title":"Differential Hub Gene Analysis","Description":"Identification of hub genes in a gene co-expression network from gene expression data. The differential network analysis for two contrasting conditions leads to the identification of various types of hubs like Housekeeping, Unique to stress (Disease) and Unique to control (Normal) hub genes. ","Published":"2016-09-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dhglm","Version":"1.7","Title":"Double Hierarchical Generalized Linear Models","Description":"Double hierarchical generalized linear models in which the mean, dispersion parameters for variance of random effects, and residual variance (overdispersion) can be further modeled as random-effect models.","Published":"2017-05-01","License":"Unlimited","snapshot_date":"2017-06-23"} {"Package":"dHSIC","Version":"1.1","Title":"Independence Testing via Hilbert Schmidt Independence Criterion","Description":"Contains an implementation of the\n\td-variable Hilbert Schmidt independence criterion\n\tand several hypothesis tests based on it.","Published":"2017-06-01","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"di","Version":"1.0.0","Title":"Deficit Index (DI)","Description":"A set of utilities for calculating the Deficit (frailty) Index (DI) in gerontological studies. \n The deficit index was first proposed by Arnold Mitnitski and Kenneth Rockwood \n and represents a proxy measure of aging and can also serve as\n a sensitive predictor of survival. For more information, see \n (i)\"Accumulation of Deficits as a Proxy Measure of Aging\" \n by Arnold B. Mitnitski et al. 
(2001), \n The Scientific World Journal 1, ;\n (ii) \"Frailty, fitness and late-life mortality in relation to chronological and biological age\" \n by Arnold B Mitnitski et al. (2001), \n BMC Geriatrics 2002, 2(1), .","Published":"2017-06-11","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"diagis","Version":"0.1.2","Title":"Diagnostic Plot and Multivariate Summary Statistics of Weighted\nSamples from Importance Sampling","Description":"Fast functions for effective sample size, weighted multivariate mean and variance computation, \n and weight diagnostic plot for generic importance sampling type results.","Published":"2017-05-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"diagonals","Version":"0.4.0","Title":"Block Diagonal Extraction or Replacement","Description":"Several tools for handling block-matrix diagonals and similar constructs are implemented. Block-diagonal matrices can be extracted or removed using two small functions implemented here. In addition, non-square matrices are supported. Block diagonal matrices occur when two dimensions of a data set are combined along one edge of a matrix. For example, trade-flow data in the 'decompr' and 'gvc' packages have each country-industry combination occur along both edges of the matrix.","Published":"2015-10-05","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"diagram","Version":"1.6.3","Title":"Functions for visualising simple graphs (networks), plotting\nflow diagrams","Description":"Visualises simple graphs (networks) based on a transition matrix, utilities to plot flow diagrams, \n visualising webs, electrical networks, ... \n Support for the books \"A practical guide to ecological modelling -\n using R as a simulation platform\"\n by Karline Soetaert and Peter M.J. Herman (2009). Springer.\n and the book \"Solving Differential Equations in R\"\n by Karline Soetaert, Jeff Cash and Francesca Mazzia. 
Springer.\n Includes demo(flowchart), demo(plotmat), demo(plotweb)","Published":"2014-11-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DiagrammeR","Version":"0.9.0","Title":"Create Graph Diagrams and Flowcharts Using R","Description":"Create graph diagrams and flowcharts using R.","Published":"2017-01-04","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"DiagrammeRsvg","Version":"0.1","Title":"Export DiagrammeR Graphviz Graphs as SVG","Description":"Allows for export of DiagrammeR Graphviz objects to SVG.","Published":"2016-02-04","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"DiallelAnalysisR","Version":"0.1.1","Title":"Diallel Analysis with R","Description":"Performs Diallel Analysis with R using Griffing's and Hayman's approaches. Four different methods (1: Method-I (Parents + F1's + reciprocals); 2: Method-II (Parents and one set of F1's); 3: Method-III (One set of F1's and reciprocals); 4: Method-IV (One set of F1's only)) and two methods (1: Fixed Effects Model; 2: Random Effects Model) can be applied using Griffing's approach.","Published":"2016-04-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"diaplt","Version":"1.2.1","Title":"Beads Summary Plot of Ranges","Description":"Visualize one-factor data frame. \n Beads plot consists of diamonds of each factor of each data series. \n A diamond indicates average and range. \n Look over a data frame with many numeric columns and a factor column. 
","Published":"2013-11-13","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"dice","Version":"1.2","Title":"Calculate probabilities of various dice-rolling events","Description":"This package provides utilities to calculate the probabilities of various dice-rolling events, such as the probability of rolling a four-sided die six times and getting a 4, a 3, and either a 1 or 2 among the six rolls (in any order); the probability of rolling two six-sided dice three times and getting a 10 on the first roll, followed by a 4 on the second roll, followed by anything but a 7 on the third roll; or the probabilities of each possible sum of rolling five six-sided dice, dropping the lowest two rolls, and summing the remaining dice.","Published":"2014-10-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DiceDesign","Version":"1.7","Title":"Designs of Computer Experiments","Description":"Space-Filling Designs and Uniformity Criteria.","Published":"2015-06-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"DiceEval","Version":"1.4","Title":"Construction and Evaluation of Metamodels","Description":"Estimation, validation and prediction of models of different types : linear models, additive models, MARS,PolyMARS and Kriging.","Published":"2015-06-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"DiceKriging","Version":"1.5.5","Title":"Kriging Methods for Computer Experiments","Description":"Estimation, validation and prediction of kriging models.\n Important functions : km, print.km, plot.km, predict.km.","Published":"2015-04-24","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"DiceOptim","Version":"2.0","Title":"Kriging-Based Optimization for Computer Experiments","Description":"Efficient Global Optimization (EGO) algorithm and adaptations for\n parallel infill (multipoint EI), problems with noise, and problems with\n constraints.","Published":"2016-09-15","License":"GPL-2 | 
GPL-3","snapshot_date":"2017-06-23"} {"Package":"diceR","Version":"0.1.0","Title":"Diverse Cluster Ensemble in R","Description":"Performs cluster analysis using an ensemble clustering framework.\n Results from a diverse set of algorithms are pooled together using methods\n such as majority voting, K-Modes, LinkCluE, and CSPA. There are options to\n compare cluster assignments across algorithms using internal and external\n indices, visualizations such as heatmaps, and significance testing for the\n existence of clusters.","Published":"2017-06-21","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"DiceView","Version":"1.3-1","Title":"Plot methods for computer experiments design and surrogate","Description":"View 2D/3D sections or contours of computer experiments designs, surrogates or test functions.","Published":"2013-11-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"dichromat","Version":"2.0-0","Title":"Color Schemes for Dichromats","Description":"Collapse red-green or green-blue distinctions to simulate\n the effects of different types of color-blindness.","Published":"2013-01-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"dicionariosIBGE","Version":"1.6","Title":"Dictionaries for reading microdata surveys from IBGE","Description":"This package contains the dictionaries for reading microdata\n from IBGE (Brazilian Institute of Geography and Statistics)\n surveys PNAD, PME and POF.","Published":"2014-08-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DidacticBoost","Version":"0.1.1","Title":"A Simple Implementation and Demonstration of Gradient Boosting","Description":"A basic, clear implementation of tree-based gradient boosting\n designed to illustrate the core operation of boosting models. Tuning\n parameters (such as stochastic subsampling, modified learning rate, or\n regularization) are not implemented. The only adjustable parameter is the\n number of training rounds. 
If you are looking for a high performance boosting\n implementation with tuning parameters, consider the 'xgboost' package.","Published":"2016-04-19","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"didrooRFM","Version":"1.0.0","Title":"Compute Recency Frequency Monetary Scores for your Customer Data","Description":"This hosts the findRFM function which generates RFM scores on a 1-5 point scale for\n customer transaction data. The function consumes a data frame with Transaction Number,\n Customer ID, Date of Purchase (in date format) and Amount of Purchase as the attributes.\n The function returns a data frame with RFM data for the sales information.","Published":"2017-05-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"dielectric","Version":"0.2.3","Title":"Defines some physical constants and dielectric functions\ncommonly used in optics, plasmonics","Description":"Physical constants. Gold, silver and glass permittivities,\n together with spline interpolation functions.","Published":"2013-11-09","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"diezeit","Version":"0.1-0","Title":"R Interface to the ZEIT ONLINE Content API","Description":"A wrapper for the ZEIT ONLINE Content API, available at . 'diezeit' gives access to articles and corresponding metadata from the ZEIT archive and from ZEIT ONLINE. A personal API key is required for usage.","Published":"2015-10-19","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"DIFboost","Version":"0.2","Title":"Detection of Differential Item Functioning (DIF) in Rasch Models\nby Boosting Techniques","Description":"Performs detection of Differential Item Functioning using the method DIFboost as proposed in Schauberger and Tutz (2015): Detection of Differential item functioning in Rasch models by boosting techniques, British Journal of Mathematical and Statistical Psychology. 
","Published":"2016-07-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"difconet","Version":"1.0-4","Title":"Differential Coexpressed Networks","Description":"Estimation of DIFferential COexpressed NETworks using diverse and user metrics.\n\t\t\t This package is basically used for three functions related to the estimation\n\t\t\t of differential coexpression. \n\t\t\t First, to estimate differential coexpression where\n\t\t\t the coexpression is estimated, by default, by Spearman correlation. For this,\n\t\t\t a metric to compare two correlation distributions is needed. The package includes\n\t\t\t 6 metrics. Some of them needs a threshold. A new metric can also be specified as\n\t\t\t a user function with specific parameters (see difconet.run). The significance is\n\t\t\t be estimated by permutations.\n\t\t\t Second, to generate datasets with controlled differential correlation data. This \n\t\t\t is done by either adding noise, or adding specific correlation structure.\n\t\t\t Third, to show the results of differential correlation analyses. Please see\n\t\t\t for further information.","Published":"2017-05-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DiffCorr","Version":"0.4.1","Title":"Analyzing and Visualizing Differential Correlation Networks in\nBiological Data","Description":"A method for identifying pattern changes between 2 experimental\n conditions in correlation networks (e.g., gene co-expression networks),\n which builds on a commonly used association measure, such as Pearson's\n correlation coefficient. This package includes functions to calculate\n correlation matrices for high-dimensional dataset and to test\n differential correlation, which means the changes in the correlation\n relationship among variables (e.g., genes and metabolites) between 2\n experimental conditions. 
","Published":"2015-04-02","License":"GPL (> 3)","snapshot_date":"2017-06-23"} {"Package":"diffdepprop","Version":"0.1-9","Title":"Calculates Confidence Intervals for two Dependent Proportions","Description":"The package includes functions to calculate confidence\n intervals for the difference of dependent proportions. There\n are two functions implemented to edit the data (dichotomising\n with the help of cutpoints, counting accordance and discordance\n of two tests or situations). For the calculation of the\n confidence intervals entries of the fourfold table are needed.","Published":"2013-05-03","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"diffEq","Version":"1.0-1","Title":"Functions from the book Solving Differential Equations in R","Description":"Functions and examples from the book Solving Differential \n Equations in R by Karline Soetaert, Jeff R Cash and Francesca Mazzia.\n Springer, 2012.","Published":"2014-12-29","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"diffeR","Version":"0.0-4","Title":"Metrics of Difference for Comparing Pairs of Maps","Description":"Metrics of difference for comparing pairs of maps representing real or categorical variables at original and multiple resolutions.","Published":"2015-12-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"diffIRT","Version":"1.5","Title":"Diffusion IRT Models for Response and Response Time Data","Description":"Package to fit diffusion-based IRT models to response and \n\tresponse time data. Models are fit using marginal maximum \n\tlikelihood. Parameter restrictions (fixed value and equality \n\tconstraints) are possible. In addition, factor scores (person drift \n\trate and person boundary separation) can be estimated. Model fit \n\tassessment tools are also available. 
The traditional diffusion model \n\tcan be estimated as well.","Published":"2015-08-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"diffMeshGP","Version":"0.1.0","Title":"Multi-Fidelity Computer Experiments Using the Tuo-Wu-Yu Model","Description":"This R function implements the nonstationary Kriging model proposed by Tuo, Wu and Yu (2014) for analyzing multi-fidelity computer outputs. This function computes the maximum likelihood estimates for the model parameters as well as the predictive means and variances of the exact solution (i.e., the conceptually highest fidelity).","Published":"2017-05-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DiffNet","Version":"1.0-0","Title":"Detection of Statistically Significant Changes in Complex\nBiological Networks","Description":"Provides an implementation of statistically significant \n differential sub-network analysis for paired biological networks. ","Published":"2017-02-28","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"diffobj","Version":"0.1.6","Title":"Diffs for R Objects","Description":"Generate a colorized diff of two R objects for an intuitive\n visualization of their differences.","Published":"2016-11-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"diffr","Version":"0.1","Title":"Display Differences Between Two Files using Codediff Library","Description":"An R interface to the 'codediff' JavaScript library (a copy of which is included in the package,\n see for information).\n Allows for visualization of the difference between 2 files, usually text files or R scripts, in a browser.","Published":"2017-02-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"diffractometry","Version":"0.1-8","Title":"Baseline identification and peak decomposition for x-ray\ndiffractograms","Description":"Residual-based baseline identification and peak decomposition for x-ray diffractograms as introduced in Davies/Gather/Mergel/Meise/Mildenberger 
(2008).","Published":"2013-12-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"diffrprojects","Version":"0.1.14","Title":"Projects for Text Version Comparison and Analytics in R","Description":"Provides data structures and methods for measuring, coding, \n and analysing text within text corpora. The package allows for manual as \n well as computer-aided coding on the character, token and text pair level. ","Published":"2016-11-06","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"diffrprojectswidget","Version":"0.1.5","Title":"Visualization for 'diffrprojects'","Description":"Interactive visualizations and tabulations for diffrprojects. \n All presentations are based on the htmlwidgets framework allowing for \n interactivity via HTML and Javascript, Rstudio viewer integration, \n RMarkdown integration, as well as Shiny compatibility. ","Published":"2016-11-07","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"diffusionMap","Version":"1.1-0","Title":"Diffusion map","Description":"Implements the diffusion map method of data\n parametrization, including creation and visualization of\n the diffusion map, clustering with diffusion K-means and\n\t regression using an adaptive regression model.","Published":"2014-02-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"DiffusionRgqd","Version":"0.1.3","Title":"Inference and Analysis for Generalized Quadratic Diffusions","Description":"Tools for performing inference and analysis on a class of quadratic diffusion processes for both scalar and bivariate diffusion systems. 
For scalar diffusions, a module is provided for solving first passage time problems for both time-homogeneous and time-inhomogeneous GQDs.","Published":"2016-08-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DiffusionRimp","Version":"0.1.2","Title":"Inference and Analysis for Diffusion Processes via Data\nImputation and Method of Lines","Description":"Tools for performing inference and analysis using a data-imputation scheme and the method of lines.","Published":"2016-08-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DiffusionRjgqd","Version":"0.1.1","Title":"Inference and Analysis for Jump Generalized Quadratic Diffusions","Description":"Tools for performing inference and analysis on a class of quadratic jump diffusion processes.","Published":"2016-08-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"diffusr","Version":"0.1.1","Title":"Network Diffusion Algorithms","Description":"Implementation of network diffusion algorithms such as insulated\n heat propagation or Markov random walks. Network diffusion algorithms generally\n spread information in the form of node weights along the edges of a graph to other nodes.\n These weights can for example be interpreted as temperature, an initial amount\n of water, the activation of neurons in the brain, or the location of a random\n surfer on the internet. The information (node weights) is iteratively propagated\n to other nodes until an equilibrium state is reached or a stop criterion is met.","Published":"2017-06-05","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"DIFlasso","Version":"1.0-3","Title":"A Penalty Approach to Differential Item Functioning in Rasch\nModels","Description":"Performs DIFlasso, a method to detect DIF (Differential Item Functioning) in Rasch Models. It can handle settings with many variables and also metric variables. 
","Published":"2017-05-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"difNLR","Version":"1.0.3","Title":"Detection of Differential Item Functioning (DIF) and\nDifferential Distractor Functioning (DDF) by Non-Linear\nRegression Models","Description":"Detection of DIF among dichotomously scored items and DDF among unscored items with non-linear regression procedures based on generalized logistic regression models.","Published":"2017-06-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"difR","Version":"4.7","Title":"Collection of Methods to Detect Dichotomous Differential Item\nFunctioning (DIF)","Description":"Provides a collection of standard methods to detect differential item functioning among dichotomously scored items. Methods for uniform and non-uniform DIF, based on test-score or IRT methods, for comparing two or more than two groups of respondents, are available (Magis, Beland, Tuerlinckx and De Boeck,A General Framework and an R Package for the Detection of Dichotomous Differential Item Functioning, Behavior Research Methods, 42, 2010, 847-862 ).","Published":"2016-11-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DIFtree","Version":"2.1.4","Title":"Item Focused Trees for the Identification of Items in\nDifferential Item Functioning","Description":"Item focused recursive partitioning for simultaneous selection of\n items and variables that induce Differential Item Functioning (DIF) based on the\n Rasch Model or the Logistic Regression Approach for DIF detection.","Published":"2016-12-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"digest","Version":"0.6.12","Title":"Create Compact Hash Digests of R Objects","Description":"Implementation of a function 'digest()' for the creation \n of hash digests of arbitrary R objects (using the 'md5', 'sha-1', 'sha-256', \n 'crc32', 'xxhash' and 'murmurhash' algorithms) permitting easy comparison of R\n language objects, as well as a function 'hmac()' to 
create hash-based\n message authentication code. Please note that this package is not meant to\n be deployed for cryptographic purposes for which more comprehensive (and\n widely tested) libraries such as 'OpenSSL' should be used.","Published":"2017-01-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Digiroo2","Version":"0.6","Title":"An application programming interface for generating null models\nof social contacts based on individuals' space use","Description":"Digiroo2 is an R package developed by researchers at the\n University of Queensland to investigate association patterns\n and social structure in wild animal populations. Proximity\n between individuals is generally considered to be an\n appropriate proxy for associations and pairwise association\n indices are the most widely used technique for analysing animal\n social structure. However, little attention is given to\n identifying how patterns of spatial overlap affect these\n association patterns. For example, do individuals associate\n randomly with others with whom they share home ranges, or do\n some individuals go out of their way to associate with or avoid\n particular individuals? This program builds a null model of\n random associations based on an individual's space use\n determined using home range methodologies. Random points may be\n generated within a specified home range contour or according to\n the Utilization Distribution (UD). Expected associations of\n individuals are extracted based on probability of occurrence\n and the proximity between home range weighted random points.\n Association matrices can be generated from multiple\n permutations for analysis using SOCPROG 2.4 (Whitehead 2009) to\n create 'expected' pairwise half-weight association indices\n (HWIs). 
These may be compared with the 'observed' HWIs from\n field observations to reveal whether pairs of animals associate\n more (= attraction) or less (= avoidance) than expected by\n chance.","Published":"2013-02-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"digitalPCR","Version":"1.1.0","Title":"Estimate Copy Number for Digital PCR","Description":"The assay sensitivity is the minimum number of copies that the digital PCR assay can detect. Users provide serial dilution results in the format of counts of positive and total reaction wells. The output is the estimated assay sensitivity and the copy number per well in the initial dilution.","Published":"2016-03-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"digitize","Version":"0.0.4","Title":"Use Data from Published Plots in R","Description":"Import data from a digital image; it requires user input for\n calibration and to locate the data points. The end result is similar to\n 'DataThief' and other programs that 'digitize' published plots or\n graphs.","Published":"2016-08-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dils","Version":"0.8.1","Title":"Data-Informed Link Strength. Combine multiple-relationship\nnetworks into a single weighted network. Impute (fill-in)\nmissing network links","Description":"Combine multiple-relationship networks into a single weighted\n network. The approach is similar to factor analysis in that the\n contribution from each constituent network varies so as to maximize the\n information gleaned from the multiple-relationship networks.\n This implementation uses Principal Component Analysis calculated using\n 'prcomp' with bootstrap subsampling. 
Missing links are imputed using\n the method of Chen et al.\n (2012).","Published":"2013-11-25","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"DIME","Version":"1.2","Title":"DIME (Differential Identification using Mixture Ensemble)","Description":"A robust differential identification method that considers an ensemble of finite mixture models combined with a local false discovery rate (fdr) to analyze ChIP-seq (high-throughput genomic) data comparing two samples, allowing for flexible modeling of data.","Published":"2013-12-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dimRed","Version":"0.1.0","Title":"A Framework for Dimensionality Reduction","Description":"Collects dimensionality reduction\n techniques from R packages and provides a common\n interface for calling the methods.","Published":"2017-05-04","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"dina","Version":"1.0.2","Title":"Bayesian Estimation of DINA Model","Description":"Estimate the Deterministic Input, Noisy \"And\" Gate (DINA)\n cognitive diagnostic model parameters using the Gibbs sampler described\n by Culpepper (2015) .","Published":"2017-04-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dinamic","Version":"1.0","Title":"DiNAMIC: A Method To Analyze Recurrent DNA Copy Number\nAberrations in Tumors","Description":"This function implements the DiNAMIC procedure for\n assessing the statistical significance of recurrent DNA copy\n number aberrations (Bioinformatics (2011) 27(5) 678 - 685).","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"diptest","Version":"0.75-7","Title":"Hartigan's Dip Test Statistic for Unimodality - Corrected","Description":"Compute Hartigan's dip test statistic for unimodality /\n multimodality and provide a test with simulation-based p-values, where\n the original public code has been corrected.","Published":"2016-12-05","License":"GPL (>= 
2)","snapshot_date":"2017-06-23"} {"Package":"DIRECT","Version":"1.0.1","Title":"Bayesian Clustering of Multivariate Data Under the\nDirichlet-Process Prior","Description":"A Bayesian clustering method for replicated time series or replicated measurements from multiple experimental conditions, e.g., time-course gene expression data. It estimates the number of clusters directly from the data using a Dirichlet-process prior. See Fu, A. Q., Russell, S., Bray, S. and Tavare, S. (2013) Bayesian clustering of replicated time-course gene expression data with weak signals. The Annals of Applied Statistics. 7(3) 1334-1361. .","Published":"2016-04-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Directional","Version":"2.7","Title":"Directional Statistics","Description":"A collection of R functions for directional data analysis.","Published":"2017-05-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"directlabels","Version":"2017.03.31","Title":"Direct Labels for Multicolor Plots","Description":"An extensible framework\n for automatically placing direct labels onto multicolor 'lattice' or\n 'ggplot2' plots.\n Label positions are described using Positioning Methods\n which can be re-used across several different plots.\n There are heuristics for examining \"trellis\" and \"ggplot\" objects\n and inferring an appropriate Positioning Method.","Published":"2017-04-08","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"directPA","Version":"1.3","Title":"Direction Analysis for Pathways and Kinases","Description":"Direction analysis is a set of tools designed to identify\n combinatorial effects of multiple treatments and/or perturbations on pathways\n and kinases profiled by microarray, RNA-seq, proteomics, or phosphoproteomics\n data.","Published":"2016-03-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"DirectStandardisation","Version":"1.2","Title":"Adjusted Means and Proportions by Direct 
Standardisation","Description":"Calculate adjusted means and proportions of a variable by groups defined by another variable by direct standardisation, standardised to the structure of the dataset.","Published":"2016-12-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"DirichletReg","Version":"0.6-3","Title":"Dirichlet Regression in R","Description":"Implements Dirichlet regression models in R.","Published":"2015-11-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dirmcmc","Version":"1.3.3","Title":"Directional Metropolis Hastings Algorithm","Description":"Implementation of Directional Metropolis Hastings Algorithm for\n MCMC.","Published":"2017-02-27","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"dirmult","Version":"0.1.3-4","Title":"Estimation in Dirichlet-Multinomial distribution","Description":"Estimate parameters in Dirichlet-Multinomial and compute\n profile log-likelihoods.","Published":"2013-04-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Disake","Version":"1.5","Title":"Discrete associated kernel estimators","Description":"Discrete smoothing of a probability mass function is performed using three discrete associated kernels: Dirac Discrete Uniform (DiracDU), Binomial and Discrete Triangular. Two automatic bandwidth selection procedures are implemented: the cross-validation method for the three kernels and the local Bayesian approach for the Binomial kernel. Note that DiracDU is used for categorical data, the Binomial kernel is appropriate for count data with small or moderate sample sizes, and the Discrete Triangular kernel is recommended for count data with large sample sizes.","Published":"2015-01-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"discgolf","Version":"0.1.0","Title":"Discourse 'API' Client","Description":"Client for the Discourse 'API'. Discourse is an open source\n discussion forum platform (). It comes with 'RESTful'\n API access to an installation. 
This client requires that you are authorized\n to access a Discourse installation, either yours or another.","Published":"2016-04-22","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"disclap","Version":"1.5","Title":"Discrete Laplace Exponential Family","Description":"Discrete Laplace exponential family for models such as a generalized linear model","Published":"2014-04-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"disclapmix","Version":"1.6.2","Title":"Discrete Laplace Mixture Inference using the EM Algorithm","Description":"Make inference in a mixture of discrete Laplace distributions using the EM algorithm. This can e.g. be used for modelling the distribution of Y chromosomal haplotypes as described in [1, 2] (refer to the URL section).","Published":"2015-09-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"DiscML","Version":"1.0.1","Title":"DiscML: An R package for estimating evolutionary rates of\ndiscrete characters using maximum likelihood","Description":"DiscML performs rate estimation using maximum likelihood with the \n options to correct for unobservable data, to implement a Gamma-distribution \n for rate variation, and to estimate the prior root probabilities from the \n empirical data.","Published":"2014-07-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"disco","Version":"0.5","Title":"Discordance and Concordance of Transcriptomic Responses","Description":"Concordance and discordance of homologous gene regulation allows comparing\n reaction to stimuli in different organisms, \n for example human patients and animal models of a disease. 
The package\n contains functions to calculate discordance and concordance score\n for homologous gene pairs, identify concordantly or\n discordantly regulated transcriptional modules and visualize the results.\n It is intended for analysis of transcriptional data.","Published":"2017-03-03","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"discord","Version":"0.1","Title":"Functions for Discordant Kinship Modeling","Description":"Functions for discordant kinship modeling (and other sibling-based quasi-experimental designs). Currently, the package contains data restructuring functions; functions for generating genetically- and environmentally-informed data for kin pairs.","Published":"2017-05-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"discretecdAlgorithm","Version":"0.0.4","Title":"Coordinate-Descent Algorithm for Discrete Data","Description":"Structure learning of Bayesian network using coordinate-descent\n algorithm. This algorithm is designed for discrete network assuming a multinomial data set,\n and we use a multi-logit model to do the regression.\n The algorithm is described in Gu, Fu and Zhou (2016) .","Published":"2017-04-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DiscreteInverseWeibull","Version":"1.0.2","Title":"Discrete Inverse Weibull Distribution","Description":"Probability mass function, distribution function, quantile function, random generation and parameter estimation for the discrete inverse Weibull distribution.","Published":"2016-05-01","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"DiscreteLaplace","Version":"1.1.1","Title":"Discrete Laplace Distributions","Description":"Probability mass function, distribution function, quantile function, random generation and estimation for the skew discrete Laplace distributions.","Published":"2016-05-01","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"discreteMTP","Version":"0.1-2","Title":"Multiple testing procedures for 
discrete test statistics","Description":"Multiple testing procedures for discrete test statistics\n that use the known discrete null distribution of the p-values\n for simultaneous inference.","Published":"2012-05-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"discreteRV","Version":"1.2.2","Title":"Create and Manipulate Discrete Random Variables","Description":"Create, manipulate, transform, and simulate from discrete random\n variables. The syntax is modeled after that which is used in mathematical\n statistics and probability courses, but with powerful support for more\n advanced probability calculations. This includes the creation of joint\n random variables, and the derivation and manipulation of their conditional\n and marginal distributions.","Published":"2015-09-16","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"DiscreteWeibull","Version":"1.1","Title":"Discrete Weibull Distributions (Type 1 and 3)","Description":"Probability mass function, distribution function, quantile function, random generation and parameter estimation for the type I and III discrete Weibull distributions.","Published":"2015-10-18","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"discretization","Version":"1.0-1","Title":"Data preprocessing, discretization for classification","Description":"This package is a collection of supervised discretization\n algorithms. The algorithms can also be grouped as top-down or\n bottom-up approaches.","Published":"2012-10-29","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"discrimARTs","Version":"0.2","Title":"Discrimination of Alternative Reproductive Tactics (ARTs)","Description":"'discrimARTs' discriminates with explicit confidence the alternative reproductive tactics (ARTs) in dimorphic systems with bimodal traits by computing the maximum likelihood estimate of a mixture of distributions of a measured ARTs trait. 
Supported distributions include a mixture of 2 normal distributions and a mixture of 2 facing gamma distributions.","Published":"2013-12-08","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"DiscriMiner","Version":"0.1-29","Title":"Tools of the Trade for Discriminant Analysis","Description":"Functions for Discriminant Analysis and Classification purposes\n covering various methods such as descriptive, geometric, linear, quadratic,\n PLS, as well as qualitative discriminant analyses","Published":"2013-11-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"discSurv","Version":"1.1.7","Title":"Discrete Time Survival Analysis","Description":"Provides data transformations, estimation utilities,\n predictive evaluation measures and simulation functions for discrete time\n survival analysis.","Published":"2017-03-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"diseasemapping","Version":"1.4.2","Title":"Modelling Spatial Variation in Disease Risk for Areal Data","Description":"Formatting of population and case data, calculation of Standardized\n Incidence Ratios, and fitting the BYM model using INLA.","Published":"2016-09-20","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"DisHet","Version":"0.1.0","Title":"Estimate the Gene Expression Levels and Component Proportions of\nthe Normal, Stroma (Immune) and Tumor Components of Bulk Tumor\nSamples","Description":"Model cell type heterogeneity of bulk renal cell carcinoma. The observed gene expression in bulk tumor sample is modeled by a log-normal distribution with the location parameter structured as a linear combination of the component-specific gene expressions. 
","Published":"2017-03-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"DisimForMixed","Version":"0.2","Title":"Calculate Dissimilarity Matrix for Dataset with Mixed Attributes","Description":"Implement the methods proposed by Ahmad & Dey (2007) in calculating the dissimilarity matrix at the presence of mixed attributes. This Package includes functions to discretize quantitative variables, calculate conditional probability for each pair of attribute values, distance between every pair of attribute values, significance of attributes, calculate dissimilarity between each pair of objects.","Published":"2016-06-06","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"diskImageR","Version":"1.0.0","Title":"A Pipeline to Analyze Resistance and Tolerance from Drug Disk\nDiffusion Assays","Description":"A pipeline to analyze photographs of disk diffusion plates. This removes the need to analyze the plates themselves, and thus analysis can be done separate from the assay. Furthermore, diskImageR removes potential researcher bias, by quantitative assessment of drug resistance as the zone diameter at multiple cutoff values of growth inhibition. 
This method also extends the disk diffusion assay by measuring drug tolerance (in addition to drug resistance) as the fraction of the subpopulation that is able to grow above the resistance point (\"FoG\"), and drug sensitivity as the rate of change from no growth to full growth (\"slope\").","Published":"2016-04-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"dismo","Version":"1.1-4","Title":"Species Distribution Modeling","Description":"Functions for species distribution modeling, that is, predicting entire geographic distributions from occurrences at a number of sites and the environment at these sites.","Published":"2017-01-09","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"disp2D","Version":"1.0","Title":"2D Hausdorff and Simplex Dispersion Orderings","Description":"An implementation of two exact algorithms for testing the\n Hausdorff and simplex dispersion orderings.","Published":"2012-05-28","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"disparityfilter","Version":"2.2.3","Title":"Disparity Filter Algorithm for Weighted Networks","Description":"The disparity filter algorithm is a network reduction technique to\n identify the 'backbone' structure of a weighted network without destroying\n its multi-scale nature. The algorithm is documented in M. Angeles Serrano,\n Marian Boguna and Alessandro Vespignani in \"Extracting the multiscale\n backbone of complex weighted networks\", Proceedings of the National Academy\n of Sciences 106 (16), 2009. 
This implementation of the algorithm supports\n both directed and undirected networks.","Published":"2016-04-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"displayHTS","Version":"1.0","Title":"displayHTS","Description":"A package containing R functions for displaying data and\n results from high-throughput screening experiments.","Published":"2013-01-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dispmod","Version":"1.1","Title":"Dispersion models","Description":"Functions for modelling dispersion in GLMs.","Published":"2012-02-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"disposables","Version":"1.0.3","Title":"Create Disposable R Packages for Testing","Description":"Create disposable R packages for testing.\n You can create, install and load multiple R packages with a single\n function call, and then unload, uninstall and destroy them with another\n function call. This is handy when testing how some R code or an R package\n behaves with respect to other packages.","Published":"2017-03-20","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"dissUtils","Version":"1.0","Title":"Utilities for making pairwise comparisons of multivariate data","Description":"This package has extensible C++ code for computing dissimilarities between vectors. It also has a number of C++ functions for assembling collections of dissimilarities. In particular, it lets you find a matrix of dissimilarities between the rows of two input matrices. 
There are also functions for finding the nearest neighbors of each row of a matrix, either within the matrix itself or within another matrix.","Published":"2014-06-02","License":"GPL (>= 2) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"Distance","Version":"0.9.6","Title":"Distance Sampling Detection Function and Abundance Estimation","Description":"A simple way of fitting detection functions to distance sampling\n data for both line and point transects. Adjustment term selection, left and\n right truncation as well as monotonicity constraints and binning are\n supported. Abundance and density estimates can also be calculated (via a\n Horvitz-Thompson-like estimator) if survey area information is provided.","Published":"2016-08-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"distance.sample.size","Version":"0.0","Title":"Calculates Study Size Required for Distance Sampling","Description":"Calculates the study size (either number of detections, or\n proportion of region that should be covered) to achieve a target precision for\n the estimated abundance. The calculation allows for the penalty due to unknown\n detection function, and for overdispersion. The user must specify a guess at the\n true detection function.","Published":"2016-01-26","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"distances","Version":"0.1.2","Title":"Tools for Distance Metrics","Description":"Provides tools for constructing, manipulating and using distance metrics.","Published":"2017-05-23","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"DistatisR","Version":"1.0","Title":"DiSTATIS Three Way Metric Multidimensional Scaling","Description":"Implement DiSTATIS and CovSTATIS (three-way multidimensional scaling). 
For the analysis of multiple distance/covariance matrices collected on the same set of observations","Published":"2013-07-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"distcomp","Version":"1.0-1","Title":"Computations over Distributed Data without Aggregation","Description":"Implementing algorithms and fitting models when sites (possibly remote) share\n computation summaries rather than actual data over HTTP with a master R process (using\n 'opencpu', for example). A stratified Cox model and a singular value decomposition are\n provided. The former makes direct use of code from the R 'survival' package. (That is,\n the underlying Cox model code is derived from that in the R 'survival' package.)\n Sites may provide data via several means: CSV files, Redcap API, etc. An extensible\n design allows for new methods to be added in the future. Web applications are provided\n (via 'shiny') for the implemented methods to help in designing and deploying the\n computations.","Published":"2017-05-16","License":"LGPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"distdrawr","Version":"0.1.2","Title":"Download Occurrence Data of Vascular Plants in Germany from the\nFLORKART Database","Description":"Download data from the FlorKart database of the floristic field mapping in Germany in a convenient way. The database incorporates distribution data for plants in Germany on the basis of quadrants on a topographical map with a resolution of 1 : 25000 (TK 25). The data is owned and provided by the German Federal Agency for Nature Conservation (BfN) and the Network Phytodiversity in Germany (NetPhyD). For further information please visit . The author of this package is in no way associated with the BfN or NetPhyD. 
","Published":"2017-01-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"distfree.cr","Version":"1.0","Title":"Distribution-free confidence region (distfree.cr)","Description":"An R package that is developed for constructing confidence\n regions without the need to know the sampling distribution of\n bivariate data.","Published":"2012-11-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"distillery","Version":"1.0-4","Title":"Method Functions for Confidence Intervals and to Distill\nInformation from an Object","Description":"Some very simple method functions for confidence interval calculation, bootstrap resampling, and to distill pertinent information from a potentially complex object; primarily used in common with packages extRemes and SpatialVx.","Published":"2017-06-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"distory","Version":"1.4.3","Title":"Distance Between Phylogenetic Histories","Description":"Geodesic distance between phylogenetic trees and\n associated functions.","Published":"2017-03-21","License":"BSD","snapshot_date":"2017-06-23"} {"Package":"distr","Version":"2.6.2","Title":"Object Oriented Implementation of Distributions","Description":"S4-classes and methods for distributions.","Published":"2017-04-22","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"distrDoc","Version":"2.6","Title":"Documentation for 'distr' Family of R Packages","Description":"Provides documentation in form of a common vignette to packages 'distr', 'distrEx',\n 'distrMod', 'distrSim', 'distrTEst', 'distrTeach', and 'distrEllipse'.","Published":"2016-04-24","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"distrEllipse","Version":"2.6.2","Title":"S4 Classes for Elliptically Contoured Distributions","Description":"Distribution (S4-)classes for elliptically contoured distributions (based on\n package 'distr').","Published":"2016-09-04","License":"LGPL-3","snapshot_date":"2017-06-23"} 
{"Package":"distrEx","Version":"2.6.1","Title":"Extensions of Package 'distr'","Description":"Extends package 'distr' by functionals, distances, and conditional distributions.","Published":"2017-04-23","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"DISTRIB","Version":"1.0","Title":"Four Essential Functions for Statistical Distributions Analysis:\nA New Functional Approach","Description":"A different way of calculating the pdf/pmf, cdf, quantiles, and random data, in which the user passes the name of the relevant distribution as an argument, so the distribution can easily be changed by changing that argument. The computational core of package 'DISTRIB' is package 'stats'. Although similar functions already exist in package 'stats', package 'DISTRIB' has some special applications in certain computational programs.","Published":"2016-12-26","License":"LGPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"DistributionUtils","Version":"0.5-1","Title":"Distribution Utilities","Description":"This package contains utilities which are of use in the\n packages I have developed for dealing with distributions.\n Currently these packages are GeneralizedHyperbolic,\n VarianceGamma, SkewHyperbolic and NormalLaplace. Each of\n these packages requires DistributionUtils. Functionality\n includes sample skewness and kurtosis, log-histogram, tail\n plots, moments by integration, changing the point about which a\n moment is calculated, functions for testing distributions using\n inversion tests and the Massart inequality. 
Also includes an\n implementation of the incomplete Bessel K function.","Published":"2015-01-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"distrMod","Version":"2.6.1","Title":"Object Oriented Implementation of Probability Models","Description":"Implements S4 classes for probability models based on packages 'distr' and\n 'distrEx'.","Published":"2016-09-04","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"distrom","Version":"0.3-3","Title":"Distributed Multinomial Regression","Description":"Estimation for a multinomial logistic regression factorized into independent Poisson log regressions. See the textir package for applications in multinomial inverse regression analysis of text.","Published":"2015-08-19","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"distrRmetrics","Version":"2.6","Title":"Distribution Classes for Distributions from Rmetrics","Description":"S4-distribution classes based on package distr for distributions from packages\n 'fBasics' and 'fGarch'.","Published":"2016-04-23","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"distrSim","Version":"2.6","Title":"Simulation Classes Based on Package 'distr'","Description":"S4-classes for setting up a coherent framework for simulation within the distr\n family of packages.","Published":"2016-04-23","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"distrTeach","Version":"2.6.1","Title":"Extensions of Package 'distr' for Teaching\nStochastics/Statistics in Secondary School","Description":"Provides flexible examples of LLN and CLT for teaching purposes in secondary\n school.","Published":"2016-04-23","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"distrTEst","Version":"2.6","Title":"Estimation and Testing Classes Based on Package 'distr'","Description":"Evaluation (S4-)classes based on package distr for evaluating procedures\n (estimators/tests) at data/simulation in a unified 
way.","Published":"2016-04-23","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"divagis","Version":"1.0.0","Title":"Provides tools for quality checks of georeferenced plant species\naccessions","Description":"Provides tools for quality checks of georeferenced plant\n species accessions.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DivE","Version":"1.0","Title":"Diversity Estimator","Description":"R-package DivE contains functions for the DivE estimator (Laydon, D. et al., Quantification of HTLV-1 clonality and TCR diversity, PLOS Comput. Biol. 2014). The DivE estimator is a heuristic approach to estimate the number of classes or the number of species (species richness) in a population. ","Published":"2014-04-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"diveMove","Version":"1.4.3","Title":"Dive Analysis and Calibration","Description":"Utilities to represent, visualize, filter, analyse, and summarize\n\t time-depth recorder (TDR) data. Miscellaneous functions for\n\t handling location data are also provided.","Published":"2017-03-07","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"diverse","Version":"0.1.5","Title":"Diversity Measures for Complex Systems","Description":"Computes the most common diversity measures used in social and other sciences, and includes new measures from interdisciplinary research.","Published":"2017-03-23","License":"CC BY-SA 4.0","snapshot_date":"2017-06-23"} {"Package":"diversitree","Version":"0.9-10","Title":"Comparative 'Phylogenetic' Analyses of Diversification","Description":"Contains a number of comparative 'phylogenetic' methods,\n mostly focusing on analysing diversification and character\n evolution. 
Contains implementations of 'BiSSE' (Binary State\n 'Speciation' and Extinction) and its unresolved tree extensions,\n 'MuSSE' (Multiple State 'Speciation' and Extinction), 'QuaSSE',\n 'GeoSSE', and 'BiSSE-ness'. Other methods include Markov\n models of discrete and continuous trait evolution and constant rate\n 'speciation' and extinction.","Published":"2017-04-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"diveRsity","Version":"1.9.90","Title":"A Comprehensive, General Purpose Population Genetics Analysis\nPackage","Description":"Allows the calculation of genetic diversity partition \n statistics, genetic differentiation statistics, and locus informativeness \n for ancestry assignment. \n It also provides users with various options to calculate \n bootstrapped 95\% confidence intervals both across loci and \n for pairwise population comparisons, and to plot these results interactively. \n Parallel computing capabilities and pairwise results without \n bootstrapping are provided. \n Also calculates F-statistics from Weir and Cockerham (1984). \n Various plotting features are provided, as well as Chi-square tests of \n genetic heterogeneity. 
\n Functionality for the calculation of various diversity parameters is \n possible for RAD-seq derived SNP data sets containing thousands of marker loci.\n A shiny application for the development of microsatellite multiplexes is \n also available.","Published":"2017-04-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DiversityOccupancy","Version":"1.0.6","Title":"Building Diversity Models from Multiple Species Occupancy Models","Description":"Predictions of alpha diversity are fitted from presence data: first, abundance is modeled from occupancy models; then several diversity indices are calculated; finally, GLM models are used to predict diversity in different environments and select priority areas.","Published":"2017-03-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"DiversitySampler","Version":"2.1","Title":"Functions for re-sampling a community matrix to compute\ndiversity indices at different sampling levels","Description":"There are two functions in this package, which can be used\n together to estimate Shannon's diversity index at different\n levels of sample size. A Monte-Carlo procedure is used to\n re-sample a given observation at each level of sampling. The\n expectation is that the mean of the re-sampling will\n approach Shannon's diversity index at that sample level.","Published":"2012-12-22","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"divest","Version":"0.4.1","Title":"Get Images Out of DICOM Format Quickly","Description":"Provides tools to convert DICOM-format files to NIfTI-1 format.","Published":"2017-06-02","License":"BSD_3_clause + file LICENCE","snapshot_date":"2017-06-23"} {"Package":"DivMelt","Version":"1.0.3","Title":"HRM Diversity Assay Analysis Tool","Description":"This package has tools for analyzing DNA melting data to\n generate HRM scores, the DNA diversity measure output of the\n HRM Diversity Assay. 
For additional documentation visit\n http://code.google.com/p/divmelt/.","Published":"2013-01-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"divo","Version":"0.1.2","Title":"Tools for Analysis of Diversity and Similarity in Biological\nSystems","Description":"A set of tools for empirical analysis of diversity (the number and frequency of different types in a population) and similarity (the number and frequency of shared types in two populations) in biological or ecological systems. ","Published":"2016-01-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dixon","Version":"0.0-5","Title":"Nearest Neighbour Contingency Table Analysis","Description":"Function to test spatial segregation and association based\n on contingency table analysis of nearest neighbour counts\n following Dixon (2002). Some Fortran code has been added to\n the original dixon2002 function of the ecespa package to\n improve speed.","Published":"2013-03-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DJL","Version":"2.6","Title":"Distance Measure Based Judgment and Learning","Description":"Implements various decision support tools related to new product development.\n Subroutines include correlation reliability test, Mahalanobis distance measure for outlier detection, combinatorial search (all possible subset regression), non-parametric efficiency analysis measures: DDF (directional distance function), DEA (data envelopment analysis), HDF (hyperbolic distance function), SBM (slack-based measure), and SF (shortage function), benchmarking, risk analysis, technology adoption model, new product target setting, etc.","Published":"2016-09-28","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"dkDNA","Version":"0.1.1","Title":"Diffusion Kernels on a Set of Genotypes","Description":"Compute diffusion kernels on DNA polymorphisms, including SNP and bi-allelic genotypes. 
","Published":"2015-06-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"DLASSO","Version":"2.0.2","Title":"Implementation of Adaptive or Non-Adaptive Differentiable Lasso\nand SCAD Penalties in Linear Models","Description":"An implementation of the differentiable lasso (dlasso) and SCAD (dSCAD) using an iterative ridge algorithm. This package allows selecting the tuning parameter by AIC, BIC, and GIC.","Published":"2017-06-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dlib","Version":"1.0","Title":"Allow Access to the 'Dlib' C++ Library","Description":"Interface for 'Rcpp' users to 'dlib' which is a\n 'C++' toolkit containing machine learning algorithms and computer vision tools.\n It is used in a wide range of domains including robotics, embedded devices,\n mobile phones, and large high performance computing environments. This package\n allows R users to use 'dlib' through 'Rcpp'.","Published":"2017-02-20","License":"BSL-1.0","snapshot_date":"2017-06-23"} {"Package":"dlm","Version":"1.1-4","Title":"Bayesian and Likelihood Analysis of Dynamic Linear Models","Description":"Maximum likelihood, Kalman filtering and smoothing, and Bayesian\n analysis of Normal linear State Space models, also known as \n Dynamic Linear Models.","Published":"2014-09-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dlmap","Version":"1.13","Title":"Detection Localization Mapping for QTL","Description":"QTL mapping in a mixed model framework with separate\n detection and localization stages. The first stage detects the\n number of QTL on each chromosome based on the genetic variation\n due to grouped markers on the chromosome; the second stage uses\n this information to determine the most likely QTL positions.\n The mixed model can accommodate general fixed and random\n effects, including spatial effects in field trials and pedigree\n effects. 
Applicable to backcrosses, doubled haploids,\n recombinant inbred lines, F2 intercrosses, and association\n mapping populations.","Published":"2012-08-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"dlmodeler","Version":"1.4-2","Title":"Generalized Dynamic Linear Modeler","Description":"dlmodeler is a set of user-friendly functions to simplify the state-space modelling, fitting, analysis and forecasting of Generalized Dynamic Linear Models (DLMs). It includes functions to name and extract individual components of a DLM, build classical seasonal time-series models (monthly, quarterly, yearly, etc. with calendar adjustments) and provides a unified interface compatible with other state-space packages including: dlm, FKF and KFAS.","Published":"2014-02-11","License":"GPL (>= 2) | BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"DLMtool","Version":"4.2","Title":"Data-Limited Methods Toolkit","Description":"Development, simulation testing, and implementation of management\n procedures for data-limited fisheries \n (see Carruthers et al (2014) ).","Published":"2017-06-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"dlnm","Version":"2.3.2","Title":"Distributed Lag Non-Linear Models","Description":"Collection of functions for distributed lag linear and non-linear models.","Published":"2017-01-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dlsem","Version":"1.8","Title":"Distributed-Lag Linear Structural Equation Modelling","Description":"Inference functionalities for distributed-lag linear structural equation models with constrained lag shapes. 
Endpoint-constrained quadratic, quadratic decreasing and gamma lag shapes are available.","Published":"2017-06-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"dlstats","Version":"0.0.9","Title":"Download Stats of R Packages","Description":"Monthly download stats of 'CRAN' and 'Bioconductor' packages.\n\t Download stats of 'CRAN' packages are from the 'RStudio' 'CRAN mirror', see .\n\t 'Bioconductor' package download stats are at .","Published":"2016-09-20","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"dma","Version":"1.3-0","Title":"Dynamic Model Averaging","Description":"Dynamic model averaging for binary and continuous\n outcomes.","Published":"2017-06-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"dml","Version":"1.1.0","Title":"Distance Metric Learning in R","Description":"The state-of-the-art algorithms for distance metric learning, including global and local methods such as Relevant Component Analysis, Discriminative Component Analysis, Local Fisher Discriminant Analysis, etc. 
These distance metric learning methods are widely applied in feature extraction, dimensionality reduction, clustering, classification, information retrieval, and computer vision problems.","Published":"2015-08-29","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"dmm","Version":"1.7-1","Title":"Dyadic Mixed Model for Pedigree Data","Description":"Dyadic mixed model analysis with multi-trait responses and\n pedigree-based partitioning of individual variation into a range of\n environmental and genetic variance components for individual and \n maternal effects.","Published":"2016-04-12","License":"GPL-2 | GPL (>= 2) | GPL-3","snapshot_date":"2017-06-23"} {"Package":"DMMF","Version":"0.3.2.0","Title":"Daily Based Morgan-Morgan-Finney (DMMF) Soil Erosion Model","Description":"Implements the daily based Morgan-Morgan-Finney (DMMF) soil erosion model (Choi et al., 2017 ) for estimating surface runoff and sediment budgets from a field or a catchment on a daily basis.","Published":"2017-04-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dMod","Version":"0.3.2","Title":"Dynamic Modeling and Parameter Estimation in ODE Models","Description":"The framework provides functions to generate ODEs of reaction\n networks, parameter transformations, observation functions, residual functions,\n etc. The framework follows the paradigm that derivative information should be\n used for optimization whenever possible. Therefore, all major functions produce\n and can handle expressions for symbolic derivatives.","Published":"2016-09-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DMR","Version":"2.0","Title":"Delete or Merge Regressors for linear model selection","Description":"A backward selection procedure called delete or merge\n regressors (DMR) combines deleting continuous variables with\n merging levels of factors. 
The method uses a greedy search\n among linear models with a set of constraints of two types:\n either a parameter for a continuous variable is set to zero or\n parameters corresponding to two levels of a factor are\n compared. DMR is a stepwise regression procedure, where in each\n step a new constraint is added according to a ranking of the\n hypotheses based on squared t-statistics. As a result, a nested\n family of linear models is obtained and the final decision is\n made according to minimization of the generalized information\n criterion (GIC, default BIC). The main function of the package\n is DMR, which is based on hierarchical clustering. Moreover,\n other functions for extensions of the DMR method are given, such as\n stepDMR, which is based on recalculation of t-statistics in each\n step, and the function DMR4glm for generalized linear models.","Published":"2013-02-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"DMRMark","Version":"1.1.1","Title":"DMR Detection by Non-Homogeneous Hidden Markov Model from\nMethylation Array Data","Description":"Perform differential analysis for\n methylation array data. Detect differentially\n methylated regions (DMRs) from array M-values. \n The core is a Non-homogeneous Hidden Markov Model\n for estimating spatial correlation and a novel Constrained \n Gaussian Mixture Model for modeling the M-value pairs of each individual locus.","Published":"2017-04-21","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"dmt","Version":"0.8.20","Title":"Dependency Modeling Toolkit","Description":"Probabilistic dependency modeling toolkit. ","Published":"2013-12-12","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"dmutate","Version":"0.1.1","Title":"Mutate Data Frames with Random Variates","Description":"Work within the 'dplyr' workflow to add random variates to your data frame. \n Variates can be added at any level of an existing column. 
Also, bounds can be specified \n for simulated variates. ","Published":"2017-01-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DMwR","Version":"0.4.1","Title":"Functions and data for \"Data Mining with R\"","Description":"This package includes functions and data accompanying the book \n\t \"Data Mining with R, learning with case studies\" by Luis Torgo, CRC Press 2010.","Published":"2013-08-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DMwR2","Version":"0.0.2","Title":"Functions and Data for the Second Edition of \"Data Mining with\nR\"","Description":"Functions and data accompanying the second edition of the book \"Data Mining with R, learning with case studies\" by Luis Torgo, published by CRC Press.","Published":"2016-10-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dna","Version":"1.1-1","Title":"Differential Network Analysis","Description":"Package for conducting differential network analysis from\n microarray data.","Published":"2014-03-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"DNAseqtest","Version":"1.0","Title":"Generating and Testing DNA Sequences","Description":"Generates DNA sequences based on Markov model techniques for matched sequences. This can be generalized to several sequences. The sequences (taxa) are then arranged in an evolutionary tree (phylogenetic tree) depicting how taxa diverge from their common ancestors. This gives the tests and estimation methods for the parameters of different models. Standard phylogenetic methods assume stationarity, homogeneity and reversibility for the Markov processes, and often impose further restrictions on the parameters.","Published":"2016-03-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"DNAtools","Version":"0.1-22","Title":"Tools for Analysing Forensic Genetic DNA Data","Description":"Computationally efficient tools for comparing all pairs of profiles\n in a DNA database. 
The expectation and covariance of the summary statistic\n are implemented for fast computing. Routines for estimating proportions of\n closely related individuals are available. The use of wildcards (also called F-\n designation) is implemented. Dedicated functions ease plotting the results.","Published":"2017-02-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dnc","Version":"1.2","Title":"Dynamic Network Clustering","Description":"Community detection for dynamic networks, i.e., networks measured repeatedly over a sequence of discrete time points, using a latent space approach.","Published":"2016-05-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DnE","Version":"2.1.0","Title":"Distribution and Equation","Description":"The DnE package provides functions to analyse the distribution of a given data set. The basic idea of the analysis is the chi-squared test. Functions of the form \"is.xxdistribution\" are used to analyse whether the data obey the xx distribution. If you do not know which distribution to test, use function is.dt().","Published":"2014-12-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dnet","Version":"1.0.10","Title":"Integrative Analysis of Omics Data in Terms of Network,\nEvolution and Ontology","Description":"The focus of dnet is to make sense of omics data (such as gene expression and mutations) from different angles including: integration with molecular networks, enrichments using ontologies, and relevance to gene evolutionary ages. Integration is achieved to identify a gene subnetwork from the whole gene network whose nodes/genes are labelled with informative data (such as the significance levels of differential expression or survival risks). 
To help make sense of identified gene networks, enrichment analysis is also supported using a wide variety of pre-compiled ontologies and phylostratigraphic gene age information in major organisms including: human, mouse, rat, chicken, C.elegans, fruit fly, zebrafish and arabidopsis. Add-on functionalities include support for calculating semantic similarity between ontology terms (and between genes) and for calculating network affinity based on random walk; both can be done via high-performance parallel computing.","Published":"2017-01-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"DNLC","Version":"1.0.0","Title":"Differential Network Local Consistency Analysis","Description":"Using Local Moran's I for detection of differential network local consistency.","Published":"2016-12-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DNMF","Version":"1.3","Title":"Discriminant Non-Negative Matrix Factorization","Description":"Discriminant Non-Negative Matrix Factorization aims to extend the Non-negative Matrix Factorization algorithm in order to extract features that enforce not only the spatial locality, but also the separability between classes in a discriminant manner. It refers to three articles: Zafeiriou, Stefanos, et al. \"Exploiting discriminant information in nonnegative matrix factorization with application to frontal face verification.\" Neural Networks, IEEE Transactions on 17.3 (2006): 683-695. Kim, Bo-Kyeong, and Soo-Young Lee. \"Spectral Feature Extraction Using dNMF for Emotion Recognition in Vowel Sounds.\" Neural Information Processing. Springer Berlin Heidelberg, 2013. and Lee, Soo-Young, Hyun-Ah Song, and Shun-ichi Amari. 
\"A new discriminant NMF algorithm and its application to the extraction of subtle emotional differences in speech.\" Cognitive neurodynamics 6.6 (2012): 525-535.","Published":"2015-06-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DOBAD","Version":"1.0.5","Title":"Analysis of Discretely Observed Linear\nBirth-and-Death(-and-Immigration) Markov Chains","Description":"Provides Frequentist (EM) and Bayesian (MCMC) Methods for Inference of Birth-Death-Immigration Markov Chains.","Published":"2016-03-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"doBy","Version":"4.5-15","Title":"Groupwise Statistics, LSmeans, Linear Contrasts, Utilities","Description":"The facilities can roughly be grouped as: \n\n 1) Facilities for groupwise computations of summary statistics and\n other facilities for working with grouped data: 'do' something to data \n stratified 'by' some variables. \n\n 2) LSmeans (least-squares means), general linear contrasts.\n\n 3) Miscellaneous other utilities.","Published":"2016-03-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"docopt","Version":"0.4.5","Title":"Command-Line Interface Specification Language","Description":"Define a command-line interface by just giving it\n a description in the specific format.","Published":"2016-06-13","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"docopulae","Version":"0.3.3","Title":"Optimal Designs for Copula Models","Description":"A direct approach to optimal designs for copula models based on\n the Fisher information. Provides flexible functions for building joint PDFs,\n evaluating the Fisher information and finding optimal designs. 
It includes an\n extensible solution to summation and integration called 'nint', functions for\n transforming, plotting and comparing designs, as well as a set of tools for\n common low-level tasks.","Published":"2016-06-23","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"docstring","Version":"1.0.0","Title":"Provides Docstring Capabilities to R Functions","Description":"Provides the ability to display something analogous to\n Python's docstrings within R. By allowing the user to document\n their functions as comments at the beginning of their function\n without requiring putting the function into a package we allow\n more users to easily provide documentation for their functions.\n The documentation can be viewed just like any other help files\n for functions provided by packages as well.","Published":"2017-03-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"documair","Version":"0.6-0","Title":"Automatic Documentation for R packages","Description":"Production of R packages from tagged comments introduced within the code \n and a minimum of additional documentation files.","Published":"2014-09-22","License":"GPL (>= 2.15)","snapshot_date":"2017-06-23"} {"Package":"document","Version":"1.2.0","Title":"Run 'roxygen2' on (Chunks of) Single Code Files","Description":"Have you ever been tempted to create 'roxygen2'-style documentation\n comments for one of your functions that was not part of one of your\n packages (yet)?\n This is exactly what this package is about: running 'roxygen2' on\n (chunks of) a single code file.","Published":"2017-06-01","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"docuSignr","Version":"0.0.2","Title":"Connect to 'DocuSign' API","Description":"Connect to the 'DocuSign' Rest API , \n which supports embedded signing, and sending of documents. 
","Published":"2017-06-08","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"docxtools","Version":"0.1.1","Title":"Tools for R Markdown to Docx Documents","Description":"A set of helper functions for using R Markdown to create documents\n in docx format, especially documents for use in a classroom or workshop\n setting.","Published":"2017-03-10","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"docxtractr","Version":"0.2.0","Title":"Extract Data Tables and Comments from Microsoft Word Documents","Description":"Microsoft Word docx files provide an XML structure that is fairly\n straightforward to navigate, especially when it applies to Word tables and\n comments. Tools are provided to determine table count/structure, comment count\n and also to extract/clean tables and comments from Microsoft Word docx documents.","Published":"2016-07-20","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"Dodge","Version":"0.8","Title":"Functions for Acceptance Sampling Ideas originated by H.F. Dodge","Description":"Various sampling plans can be compared using evaluations of their OC, AOQ, ATI, etc.","Published":"2013-09-18","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"DODR","Version":"0.99.2","Title":"Detection of Differential Rhythmicity","Description":"Detect differences in rhythmic time series using linear\n least squares and the robust semi-parametric rfit() method. Differences in\n harmonic fitting can be detected, as well as differences in the scale of the\n noise distribution.","Published":"2016-05-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"DoE.base","Version":"0.30","Title":"Full Factorials, Orthogonal Arrays and Base Utilities for DoE\nPackages","Description":"Package DoE.base creates full factorial experimental designs and designs based on orthogonal arrays for (industrial) experiments. 
Additionally, it provides utility functions for the class design, which is also used by other packages for designed experiments.","Published":"2017-03-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DoE.wrapper","Version":"0.8-10","Title":"Wrapper package for design of experiments functionality","Description":"This package creates various kinds of designs for\n (industrial) experiments. It uses, and sometimes enhances,\n design generation routines from other packages. \n So far, response surface designs from package rsm, latin hypercube\n samples from packages lhs and DiceDesign, and \n D-optimal designs from package AlgDesign have been implemented.","Published":"2014-09-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"doFuture","Version":"0.5.0","Title":"A Universal Foreach Parallel Adaptor using the Future API of the\n'future' Package","Description":"Provides a '%dopar%' adaptor such that any type of futures can\n be used as backends for the 'foreach' framework.","Published":"2017-04-01","License":"LGPL (>= 2.1)","snapshot_date":"2017-06-23"} {"Package":"doMC","Version":"1.3.4","Title":"Foreach Parallel Adaptor for 'parallel'","Description":"Provides a parallel backend for the %dopar% function using\n the multicore functionality of the parallel package.","Published":"2015-10-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Dominance","Version":"1.0.17","Title":"ADI (Average Dominance Index), Social Network Graphs with Dual\nDirections, and Music Notation Graph","Description":"Can calculate ADI (Average Dominance Index) and FDI (Frequency based Dominance Index), can build social network graphs with dual directions, and can build a Music Notation Graph.","Published":"2016-06-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"domino","Version":"0.3.0","Title":"R Console Bindings for the 'Domino Command-Line Client'","Description":"A wrapper on top of the 'Domino Command-Line Client'. 
It lets you\n run 'Domino' commands (e.g., \"run\", \"upload\", \"download\") directly from your\n R environment. Under the hood, it uses R's system function to run the 'Domino'\n executable, which must be installed as a prerequisite. 'Domino' is a service\n that makes it easy to run your code on scalable hardware, with integrated\n version control and collaboration features designed for analytical workflows\n (see for more information).","Published":"2016-07-10","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"doMPI","Version":"0.2.2","Title":"Foreach Parallel Adaptor for the Rmpi Package","Description":"Provides a parallel backend for the %dopar% function using\n the Rmpi package.","Published":"2017-05-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"doParallel","Version":"1.0.10","Title":"Foreach Parallel Adaptor for the 'parallel' Package","Description":"Provides a parallel backend for the %dopar% function using\n the parallel package.","Published":"2015-10-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"doRedis","Version":"1.1.1","Title":"Foreach parallel adapter for the rredis package","Description":"A Redis parallel backend for the %dopar% function.","Published":"2014-04-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"doRNG","Version":"1.6.6","Title":"Generic Reproducible Parallel Backend for 'foreach' Loops","Description":"Provides functions to perform\n reproducible parallel foreach loops, using independent\n random streams as generated by L'Ecuyer's combined\n multiple-recursive generator [L'Ecuyer (1999), ].\n It makes it easy to convert standard %dopar% loops into\n fully reproducible loops, independently of the number\n of workers, the task scheduling strategy, or the chosen\n parallel environment and associated foreach backend.","Published":"2017-04-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DoseFinding","Version":"0.9-15","Title":"Planning and Analyzing 
Dose Finding Experiments","Description":"The DoseFinding package provides functions for the design and analysis\n\t of dose-finding experiments (with focus on pharmaceutical Phase\n\t II clinical trials). It provides functions for: multiple contrast\n\t tests, fitting non-linear dose-response models (using Bayesian and\n\t non-Bayesian estimation), calculating optimal designs and an\n\t implementation of the MCPMod methodology.","Published":"2016-07-27","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"doSNOW","Version":"1.0.14","Title":"Foreach Parallel Adaptor for the 'snow' Package","Description":"Provides a parallel backend for the %dopar% function using\n Luke Tierney's snow package.","Published":"2015-10-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"dosresmeta","Version":"1.3.3","Title":"Performing Multivariate Dose-Response Meta-Analysis","Description":"It estimates a dose-response relation from either a single set or\n multiple sets of summarized data. The trend estimation takes into account the\n correlation among sets of log relative risks and uses it to efficiently\n estimate the dose-response relation. 
To obtain a pooled functional\n relation, the study-specific trends are combined according to principles of\n multivariate random-effects meta-analysis.","Published":"2016-08-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"dostats","Version":"1.3.2","Title":"Compute Statistics Helper Functions","Description":"A small package containing helper utilities for creating functions\n for computing statistics.","Published":"2015-05-29","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"DOT","Version":"0.1","Title":"Render and Export DOT Graphs in R","Description":"Renders DOT diagram markup language in R and also provides the possibility to\n export the graphs in PostScript and SVG (Scalable Vector Graphics) formats.\n In addition, it supports literate programming packages such as 'knitr' and\n 'rmarkdown'.","Published":"2016-04-16","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"DoTC","Version":"0.2","Title":"Distribution of Typicality Coefficients","Description":"Calculation of cluster typicality coefficients as being generated by fuzzy k-means clustering. ","Published":"2016-06-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dotCall64","Version":"0.9-04","Title":"Enhanced Foreign Function Interface Supporting Long Vectors","Description":"\n An alternative version of .C() and .Fortran() supporting long vectors and 64-bit integer type arguments. The provided interface .C64() features mechanisms that avoid unnecessary copies of read-only or write-only arguments. 
This makes it a convenient and fast interface to C/C++ and Fortran code.","Published":"2016-10-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dotenv","Version":"1.0.2","Title":"Load Environment Variables from '.env'","Description":"Load configuration from a '.env' file that is\n in the current working directory into environment variables.","Published":"2017-03-01","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"dotwhisker","Version":"0.2.6","Title":"Dot-and-Whisker Plots of Regression Results","Description":"Quick and easy dot-and-whisker plots of regression results.","Published":"2017-04-20","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"DoubleCone","Version":"1.0","Title":"Test against parametric regression function","Description":"Performs hypothesis tests concerning a regression function in a least-squares model, where the null is a parametric function, and the alternative is the union of large-dimensional convex polyhedral cones. ","Published":"2013-11-22","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"DoubleExpSeq","Version":"1.1","Title":"Differential Exon Usage Test for RNA-Seq Data via Empirical\nBayes Shrinkage of the Dispersion Parameter","Description":"Differential exon usage test for RNA-Seq data via an empirical Bayes shrinkage method for the dispersion parameter that utilizes inclusion-exclusion data to analyze the propensity to skip an exon across groups. The input data consists of two matrices where each row represents an exon and the columns represent the biological samples. The first matrix is the count of the number of reads expressing the exon for each sample. The second matrix is the count of the number of reads that either express the exon or explicitly skip the exon across the samples, a.k.a. the total count matrix. 
Dividing the two matrices yields proportions representing the propensity to express the exon versus skipping the exon for each sample.","Published":"2015-09-04","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"DOvalidation","Version":"0.1.0","Title":"Local Linear Hazard Estimation with Do-Validated and\nCross-Validated Bandwidths","Description":"Local linear estimator for the univariate hazard (hazard rate) and bandwidth parameter selection using the do-validation method and the standard least squares cross-validation method.","Published":"2014-11-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Dowd","Version":"0.12","Title":"Functions Ported from 'MMR2' Toolbox Offered in Kevin Dowd's\nBook Measuring Market Risk","Description":"'Kevin Dowd's' book Measuring Market Risk is a widely read book \n in the area of risk measurement by students and \n practitioners alike. As he claims, 'MATLAB' indeed might have been the most \n suitable language when he originally wrote the functions, but,\n with the growing popularity of R, this is no longer entirely \n\t valid. As 'Dowd's' code was not intended to be error free and was mainly \n\t for reference, some functions in this package have inherited those \n\t errors. An attempt will be made in future releases to identify and correct \n\t them. 'Dowd's' original code can be downloaded from www.kevindowd.org/measuring-market-risk/. \n It should be noted that 'Dowd' offers both\n 'MMR2' and 'MMR1' toolboxes. Only 'MMR2' was ported to R. 'MMR2' is a more \n recent version of the 'MMR1' toolbox and they both have mostly similar \n functions. 
The toolbox mainly contains different parametric and non \n\t parametric methods for measurement of market risk as well as \n\t backtesting risk measurement methods.","Published":"2016-03-11","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"downloader","Version":"0.4","Title":"Download Files over HTTP and HTTPS","Description":"Provides a wrapper for the download.file function,\n making it possible to download files over HTTPS on Windows, Mac OS X, and\n other Unix-like platforms. The 'RCurl' package provides this functionality\n (and much more) but can be difficult to install because it must be compiled\n with external dependencies. This package has no external dependencies, so\n it is much easier to install.","Published":"2015-07-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"downscale","Version":"1.2-4","Title":"Downscaling Species Occupancy","Description":"A set of functions that downscales species occupancy at\n coarse grain sizes to predict species occupancy at fine grain sizes.","Published":"2016-11-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"downsize","Version":"0.2.2","Title":"A Tool to Downsize Large Workflows for Testing","Description":"Toggles the test and production versions of a large workflow.","Published":"2017-04-03","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"dpa","Version":"1.0-3","Title":"Dynamic Path Approach","Description":"A GUI or command-line operated data analysis tool for\n analyzing time-dependent simulation data in which multiple\n instantaneous or time-lagged relations are assumed. This\n package uses Structural Equation Modeling (the sem package). 
It\n aims to deal with time-dependent data and to estimate whether\n a causal diagram fits data from an (agent-based) simulation\n model.","Published":"2012-10-29","License":"LGPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"dparser","Version":"0.1.3","Title":"Port of Dparser Package","Description":"A Scannerless GLR parser/parser generator. Note that GLR stands for \"generalized LR\", where L stands for \"left-to-right\" and\n R stands for \"rightmost (derivation)\". For more information see . This parser is based on the Tomita\n (1987) algorithm. (Paper can be found at ).\n The original dparser package documentation can be found at . This allows you to add mini-languages to R (like\n RxODE's ODE mini-language Wang, Hallow, and James 2015 ) or to parse other languages like NONMEM to automatically translate\n them to R code. To use this in your code, add a LinkingTo 'dparser' in your DESCRIPTION file and instead of using '#include ' use\n '#include '. This also provides an R-based port of the make_dparser command called\n 'mkdparser'. Additionally, you can parse an arbitrary grammar within R using the 'dparse' function.","Published":"2017-04-29","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"DPBBM","Version":"0.2.5","Title":"Dirichlet Process Beta-Binomial Mixture","Description":"Beta-binomial Mixture Model is used to infer the pattern from count data.\n\t\tIt can be used for clustering of RNA methylation sequencing data. ","Published":"2016-09-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dpcR","Version":"0.4","Title":"Digital PCR Analysis","Description":"Analysis, visualisation and simulation of digital polymerase chain\n reaction (dPCR). 
Supports data formats of commercial systems (Bio-Rad QX100 and\n QX200; Fluidigm BioMark) and other systems.","Published":"2017-01-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"dpglasso","Version":"1.0","Title":"Primal Graphical Lasso","Description":"Fits the primal graphical lasso via one-at-a-time\n block-coordinate descent.","Published":"2012-10-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Dpit","Version":"1.0","Title":"Distribution Pitting","Description":"Compares distributions with one another in terms of their fit to each sample in a dataset that contains multiple samples, as described in Joo, Aguinis, and Bradley (in press). Users can examine the fit of seven distributions per sample: pure power law, lognormal, exponential, power law with an exponential cutoff, normal, Poisson, and Weibull. Automation features allow the user to compare all distributions for all samples with a single command line, which creates a separate row containing results for each sample until the entire dataset has been analyzed.","Published":"2017-03-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dplR","Version":"1.6.6","Title":"Dendrochronology Program Library in R","Description":"Perform tree-ring analyses such as detrending, chronology\n building, and cross dating. Read and write standard file formats\n used in dendrochronology.","Published":"2017-05-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dplRCon","Version":"1.0","Title":"Concordance for Dendroclimatology","Description":"The concordance method is a non-parametric method based on bootstrapping that is used to test the hypothesis that two subsets of time series are similar in terms of mean, variance or both. This method was developed to address a concern within dendroclimatology that young trees may produce a differing climate response from older, more established trees. Details of this method are available in Pirie, M. (2013). 
The Climate of New Zealand reconstructed from kauri tree rings: Enhancement through the use of novel statistical methodology. PhD. Dissertation, School of Environment and Department of Statistics, University of Auckland, New Zealand. This package also produces a figure with 3 panels, each for a different climate variable. An example of this figure is included in \"On the influence of tree size on the climate - growth relationship of New Zealand kauri (Agathis australis): insights from annual, monthly and daily growth patterns. J Wunder, AM Fowler, ER Cook, M Pirie, SPJ McCloskey. Trees 27 (4), 937-948\". For further R functions for loading your own dendroclimatology datasets and performing dendrochronology analysis refer to the R package \"dplR: Dendrochronology Program Library in R\". The concordance procedure is intended to add to the standard dendrochronology techniques provided in \"dplR\". ","Published":"2015-02-26","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"dplyr","Version":"0.7.1","Title":"A Grammar of Data Manipulation","Description":"A fast, consistent tool for working with data frame like objects,\n both in memory and out of memory.","Published":"2017-06-22","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"dpmixsim","Version":"0.0-8","Title":"Dirichlet Process Mixture model simulation for clustering and\nimage segmentation","Description":"The package implements a Dirichlet Process Mixture (DPM)\n model for clustering and image segmentation. The DPM model is\n a Bayesian nonparametric methodology that relies on MCMC\n simulations for exploring mixture models with an unknown number\n of components. 
The code implements conjugate models with\n normal structure (conjugate normal-normal DP mixture model).\n The package's applications are oriented towards the\n classification of magnetic resonance images according to tissue\n type or region of interest.","Published":"2012-07-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dpmr","Version":"0.1.9","Title":"Data Package Manager for R","Description":"Create, install, and summarise data packages that follow\n the Open Knowledge Foundation's Data Package Protocol.","Published":"2016-03-07","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"DPpackage","Version":"1.1-6","Title":"Bayesian nonparametric modeling in R","Description":"This package contains functions to perform inference via\n simulation from the posterior distributions for Bayesian\n nonparametric and semiparametric models. Although the name of\n the package was motivated by the Dirichlet Process prior, the\n package considers and will consider other priors on functional\n spaces. So far, DPpackage includes models considering Dirichlet\n Processes, Dependent Dirichlet Processes, Dependent Poisson-\n Dirichlet Processes, Hierarchical Dirichlet Processes, Polya\n Trees, Linear Dependent Tailfree Processes, Mixtures of\n Triangular distributions, Random Bernstein polynomial priors\n and Dependent Bernstein Polynomials. The package also includes\n models considering Penalized B-Splines. Currently the package\n includes semiparametric models for marginal and conditional\n density estimation, ROC curve analysis, interval censored data,\n binary regression models, generalized linear mixed models, IRT\n type models, and generalized additive models. The package also\n contains functions to compute Pseudo-Bayes factors for model\n comparison, and to elicit the precision parameter of the\n Dirichlet Process. To maximize computational efficiency, the\n actual sampling for each model is done in compiled FORTRAN. 
The\n functions return objects which can be subsequently analyzed\n with functions provided in the coda package.","Published":"2012-09-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dprep","Version":"3.0.2","Title":"Data Pre-Processing and Visualization Functions for\nClassification","Description":"Data preprocessing techniques for classification. Functions for normalization, handling of missing values, discretization, outlier detection, feature selection, and data visualization are included.","Published":"2015-11-24","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"DPWeibull","Version":"1.0","Title":"Dirichlet Process Weibull Mixture Model for Survival Data","Description":"Use Dirichlet process Weibull mixture model and dependent Dirichlet process Weibull mixture model for survival data with and without competing risks. Dirichlet process Weibull mixture model is used for data without covariates and dependent Dirichlet process model is used for regression data. The package is designed to handle exact/right-censored/interval-censored observations without competing risks and exact/right-censored observations for data with competing risks. Inside each cluster of Dirichlet process, we assume a multiplicative effect of covariates as in Cox model and Fine and Gray model. In addition, we provide a wrapper for the DPdensity() function from the R package 'DPpackage'. This wrapper automatically uses a Low Information Omnibus prior and can model one and two dimensional data with Dirichlet mixture of Gaussian distributions.","Published":"2017-06-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dr","Version":"3.0.10","Title":"Methods for Dimension Reduction for Regression","Description":"Functions, methods, and datasets for fitting dimension\n reduction regression, using slicing (methods SAVE and SIR), Principal\n Hessian Directions (phd, using residuals and the response), and an\n iterative IRE. 
Partial methods, which condition on categorical\n predictors, are also available. A variety of tests, and stepwise\n deletion of predictors, is also included. Also included is\n code for computing permutation tests of dimension. Adding additional\n methods of estimating dimension is straightforward.\n For documentation, see the vignette in the package. With version 3.0.4,\n the arguments for dr.step have been modified.","Published":"2015-08-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"drake","Version":"3.0.0","Title":"Data Frames in R for Make","Description":"A solution for reproducible code and \n high-performance computing.","Published":"2017-05-09","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"drat","Version":"0.1.2","Title":"Drat R Archive Template","Description":"Creation and use of R Repositories via helper functions \n to insert packages into a repository, and to add repository information \n to the current R session. Two primary types of repositories are supported:\n gh-pages at GitHub, as well as local repositories on either the same machine\n or a local network. Drat is a recursive acronym: Drat R Archive Template. ","Published":"2016-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"drawExpression","Version":"1.0","Title":"Visualising R syntax through graphics","Description":"Graphical display of R expressions, showing the\n interpretation of an expression by R and the various kinds of R\n data structures. The steps of the interpretation of an\n expression are obtained through the parsed tree.","Published":"2012-07-23","License":"BSD","snapshot_date":"2017-06-23"} {"Package":"DRaWR","Version":"1.0.1","Title":"Discriminative Random Walk with Restart","Description":"We present DRaWR, a network-based method for ranking genes or\n properties related to a given gene set. Such related genes or properties are\n identified from among the nodes of a large, heterogeneous network of biological\n information. 
Our method involves a random walk with restarts, performed on\n an initial network with multiple node and edge types, preserving more of the\n original, specific property information than current methods that operate\n on homogeneous networks. In this first stage of our algorithm, we find the\n properties that are the most relevant to the given gene set and extract a\n subnetwork of the original network, comprising only the relevant properties. We\n then rerank genes by their similarity to the given gene set, based on a second\n random walk with restarts, performed on the above subnetwork.","Published":"2016-02-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"DrBats","Version":"0.1.4","Title":"Data Representation: Bayesian Approach That's Sparse","Description":"Feed longitudinal data into a Bayesian Latent Factor Model to obtain a low-rank representation. Parameters are estimated using a Hamiltonian Monte Carlo algorithm with STAN. See G. Weinrott, B. Fontez, N. Hilgert and S. Holmes, \"Bayesian Latent Factor Model for Functional Data Analysis\", Actes des JdS 2016.","Published":"2016-12-05","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"drc","Version":"3.0-1","Title":"Analysis of Dose-Response Curves","Description":"Analysis of dose-response data is made available through a suite of flexible and versatile model fitting and after-fitting functions.","Published":"2016-08-30","License":"GPL-2 | file LICENCE","snapshot_date":"2017-06-23"} {"Package":"DREGAR","Version":"0.1.3.0","Title":"Regularized Estimation of Dynamic Linear Regression in the\nPresence of Autocorrelated Residuals (DREGAR)","Description":"A penalized/non-penalized implementation for dynamic regression in the presence of autocorrelated residuals (DREGAR) using iterative penalized/ordinary least squares. 
It applies Mallows CP, AIC, BIC and GCV to select the tuning parameters.","Published":"2017-03-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"drfit","Version":"0.6.7","Title":"Dose-Response Data Evaluation","Description":"A somewhat outdated package of basic and easy-to-use functions for\n fitting dose-response curves to continuous dose-response data, calculating some\n (eco)toxicological parameters and plotting the results. Please consider using\n the more powerful and actively developed 'drc' package. Functions that are\n fitted are the cumulative density function of the lognormal distribution\n (probit fit), of the logistic distribution (logit fit), of the weibull\n distribution (weibull fit) and a linear-logistic model ('linlogit' fit),\n derived from the latter, which is used to describe data showing stimulation at\n low doses (hormesis). In addition, functions checking, plotting and retrieving\n dose-response data retrieved from a database accessed via 'RODBC' are included.\n As an alternative to the original fitting methods, the algorithms from the 'drc'\n package can be used.","Published":"2016-09-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"drgee","Version":"1.1.6","Title":"Doubly Robust Generalized Estimating Equations","Description":"Fit restricted mean models for the conditional association\n between an exposure and an outcome, given covariates. Three methods\n are implemented: O-estimation, where a nuisance model for the\n association between the covariates and the outcome is used;\n E-estimation where a nuisance model for the association\n between the covariates and the exposure is used, and doubly robust (DR)\n estimation where both nuisance models are used. 
In DR-estimation,\n the estimates will be consistent when at least one of the nuisance\n models is correctly specified, not necessarily both.","Published":"2016-11-08","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"DrillR","Version":"0.1","Title":"R Driver for Apache Drill","Description":"Provides an R driver for Apache Drill, which can connect to an Apache Drill cluster or drillbit, get results (in a data frame) from a SQL query, and check the current configuration status. This link contains more information about Apache Drill.","Published":"2016-06-29","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"DRIP","Version":"1.1","Title":"Discontinuous Regression and Image Processing","Description":"This is a collection of functions for discontinuous regression\n\t analysis and image processing.","Published":"2015-09-26","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"drLumi","Version":"0.1.2","Title":"Multiplex Immunoassays Data Analysis","Description":"Contains quality control routines for multiplex immunoassay data, \n including several approaches for: treating the background noise of the assay, \n fitting the dose-response curves and estimating the limits of quantification.","Published":"2015-09-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"drm","Version":"0.5-8","Title":"Regression and association models for repeated categorical data","Description":"Likelihood-based marginal regression and association\n modelling for repeated, or otherwise clustered, categorical\n responses using dependence ratio as a measure of the\n association.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"drmdel","Version":"1.3.1","Title":"Dual Empirical Likelihood Inference under Density Ratio Models\nin the Presence of Multiple Samples","Description":"Dual empirical likelihood (DEL) inference under semiparametric density ratio models (DRM) in the presence of multiple samples, 
including population cumulative distribution function estimation, quantile estimation and comparison, density estimation, composite hypothesis testing for DRM parameters which encompasses testing for changes in population distribution functions as a special case, etc.","Published":"2015-01-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"dropR","Version":"0.1","Title":"Analyze Drop Out of an Experiment or Survey","Description":"Drop-out analysis for psychologists in an R-based web application.\n Shiny is used to visualize and analyze drop-outs, tailored to the methods of\n online survey methodology. Concept and app presented at the SCIP Conference\n in Long Beach, California.","Published":"2015-01-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"DRR","Version":"0.0.2","Title":"Dimensionality Reduction via Regression","Description":"An implementation of Dimensionality Reduction\n via Regression using Kernel Ridge Regression.","Published":"2016-09-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"drsmooth","Version":"1.9.0","Title":"Dose-Response Modeling with Smoothing Splines","Description":"Provides tools for assessing the shape of a dose-response\n curve by testing linearity and non-linearity at user-defined cut-offs. 
It\n also provides two methods of estimating a threshold dose, or the dose at\n which the dose-response function transitions to significantly increasing:\n bi-linear (based on pkg 'segmented') and smoothed with splines (based on\n pkg 'mgcv').","Published":"2015-09-25","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"DrugClust","Version":"0.2","Title":"Implementation of a Machine Learning Framework for Predicting\nDrugs Side Effects","Description":"An implementation of a Machine Learning Framework for the prediction of new drugs' side effects.\n Firstly, drugs are clustered with respect to their feature descriptions and secondly predictions are made according to Bayesian scores.\n Moreover, it can perform protein enrichment considering the proteins clustered together in the first step of the algorithm.\n This last tool is of extreme interest for biologists and drug discovery purposes, given that it can be used both as a validation of the clusters obtained and for the possible discovery of new interactions between certain side effects and non-targeted pathways.\n Clustering of the drugs in the feature space can be done using K-Means, PAM or K-Seeds (a novel clustering algorithm proposed by the author).","Published":"2016-04-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ds","Version":"3.0","Title":"Descriptive Statistics","Description":"The package performs various analyses of descriptive statistics, including correlations.","Published":"2014-09-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"DSAIDE","Version":"0.4.0","Title":"Dynamical Systems Approach to Infectious Disease Epidemiology","Description":"A collection of Shiny apps that allow for the simulation and\n exploration of various infectious disease transmission dynamics scenarios.\n The purpose of the package is to help individuals learn \n about infectious disease epidemiology from a dynamical systems perspective.\n All apps include explanations of 
the underlying models and instructions on\n what to do with the models. ","Published":"2017-03-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"dsample","Version":"0.91.2.2","Title":"Discretization-Based Direct Random Sample Generation","Description":"Two discretization-based Monte Carlo algorithms, namely the Fu-Wang algorithm and the Wang-Lee algorithm, are provided for random sample generation from a high dimensional distribution of complex structure. The normalizing constant of the target distribution need not be known.","Published":"2015-11-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DSBayes","Version":"1.1","Title":"Bayesian subgroup analysis in clinical trials","Description":"Calculate posterior modes and credible intervals of parameters of the Dixon-Simon model for subgroup analysis (with binary covariates) in clinical trials.","Published":"2014-01-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dse","Version":"2015.12-1","Title":"Dynamic Systems Estimation (Time Series Package)","Description":"Tools for multivariate, linear, time-invariant,\n\ttime series models. This includes ARMA and state-space representations,\n\tand methods for converting between them. It also includes simulation\n\tmethods and several estimation functions. The package has functions \n\tfor looking at model roots, stability, and forecasts at different \n\thorizons. The ARMA model representation is general, so that VAR, VARX, \n\tARIMA, ARMAX, ARIMAX can all be considered to be special cases. Kalman\n\tfilter and smoother estimates can be obtained from the state space\n\tmodel, and state-space model reduction techniques are implemented. 
\n\tAn introduction and User's Guide is available in a vignette.","Published":"2015-12-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"DSL","Version":"0.1-6","Title":"Distributed Storage and List","Description":"An abstract DList class helps store large list-type objects in a distributed manner. Corresponding high-level functions and methods for handling distributed storage (DStorage) and lists allow for processing such DLists on distributed systems efficiently. In doing so it uses a well-defined storage backend implemented based on the DStorage class.","Published":"2015-07-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"dslabs","Version":"0.0.1","Title":"Data Science Labs","Description":"Datasets and functions that facilitate data analysis labs in data science courses and workshops. ","Published":"2017-01-19","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"dslice","Version":"1.1.5","Title":"Dynamic Slicing","Description":"Dynamic slicing is a method designed for dependency detection between a categorical variable and a continuous variable. It can be applied to non-parametric hypothesis testing and gene set enrichment analysis.","Published":"2015-11-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dsm","Version":"2.2.14","Title":"Density Surface Modelling of Distance Sampling Data","Description":"Density surface modelling of line transect data. A Generalized\n Additive Model-based approach is used to calculate spatially-explicit estimates\n of animal abundance from distance sampling (also presence/absence and strip\n transect) data. 
Several utility functions are provided for model checking,\n plotting and variance estimation.","Published":"2017-01-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dsmodels","Version":"1.0.0","Title":"A Language to Facilitate the Creation and Visualization of Two-\nDimensional Dynamical Systems","Description":"An expressive language to facilitate the creation and visualization\n of two-dimensional dynamical systems. The basic elements of the language are\n a model wrapping around a function(x,y) which outputs a list(x\n = xprime, y = yprime), and a range. The language supports three\n types of visual objects: visualizations, features, and backgrounds. Visualizations, including dots and arrows,\n depict the behavior of the dynamical system over the entire range.\n Features display\n user-defined curves and points, and their images under the system.\n Backgrounds define and color regions of interest, such as areas of convergence and divergence.\n The language\n can also automatically guess attractors and regions of convergence and divergence.","Published":"2016-11-11","License":"GPL-2 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"DSpat","Version":"0.1.6","Title":"Spatial Modelling for Distance Sampling Data","Description":"Fits inhomogeneous Poisson process spatial models\n to line transect sampling data and provides estimates of\n abundance within a region.","Published":"2014-12-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dsrTest","Version":"0.2.1","Title":"Tests and Confidence Intervals on Directly Standardized Rates\nfor Several Methods","Description":"Perform a test of a simple null hypothesis about a \n directly standardized rate and obtain the matching confidence \n interval using a choice of methods.","Published":"2017-05-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DSsim","Version":"1.1.1","Title":"Distance Sampling Simulations","Description":"Performs distance sampling 
simulations. It repeatedly generates\n instances of a user defined population within a given survey region, generates\n realisations of a survey design (currently these must be generated using\n Distance software in advance) and simulates\n the detection process. The data are then analysed so that the results can\n be compared for accuracy and precision across all replications. This will\n allow users to select survey designs which will give them the best accuracy\n and precision given their expectations about population distribution. Any\n uncertainty in population distribution or population parameters can be\n included by running the different survey designs for a number of different\n population descriptions. An example simulation can be found in the help file for\n make.simulation.","Published":"2017-02-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dst","Version":"0.3","Title":"Using Dempster-Shafer Theory","Description":"This package allows you to make basic probability assignments on a set of\n possibilities (events) and combine these events with Dempster's rule of combination.","Published":"2015-01-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"DstarM","Version":"0.2.2","Title":"Analyze Two Choice Reaction Time Data with the D*M Method","Description":"A collection of functions to estimate parameters of a diffusion model via a D*M analysis. Built-in models are: the Ratcliff diffusion model, the RWiener diffusion model, and Linear Ballistic Accumulator models. Custom model functions can be specified as long as they have a density function.","Published":"2017-04-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DStree","Version":"1.0","Title":"Recursive Partitioning for Discrete-Time Survival Trees","Description":"Building discrete-time survival trees and bagged trees based on\n the functionalities of the rpart package. 
The splitting criterion maximizes the\n likelihood of a covariate-free logistic discrete time hazard model.","Published":"2016-12-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"dSVA","Version":"1.0","Title":"Direct Surrogate Variable Analysis","Description":"Functions for direct surrogate variable analysis, which can identify hidden factors in high-dimensional biomedical data.","Published":"2017-01-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DSviaDRM","Version":"1.0","Title":"Exploring Disease Similarity in Terms of Dysfunctional\nRegulatory Mechanisms","Description":"Elucidation of human disease similarities has emerged as an active research area, which is highly relevant to etiology, disease classification, and drug repositioning. This package was designed and implemented for identifying disease similarities. It contains five functions which are 'DCEA', 'DCpathway', 'DS', 'comDCGL' and 'comDCGLplot'. In the 'DCEA' function, differentially co-expressed genes and differentially co-expressed links are extracted from disease vs. health samples. Then the 'DCpathway' function assigns differential co-expression values of pathways to be the average differential co-expression value of their component genes. Then 'DS' employs the partial correlation coefficient of pathways as the disease similarity for each disease pair. 'DS' also contains a permutation process for evaluating the statistical significance of observed disease partial correlation coefficients. Finally, 'comDCGL' and 'comDCGLplot' sort out shared differentially co-expressed genes and differentially co-expressed links with regulation information and visualize them. ","Published":"2015-05-12","License":"GPL (> 2)","snapshot_date":"2017-06-23"} {"Package":"DT","Version":"0.2","Title":"A Wrapper of the JavaScript Library 'DataTables'","Description":"Data objects in R can be rendered as HTML tables using the\n JavaScript library 'DataTables' (typically via R Markdown or Shiny). 
The\n 'DataTables' library has been included in this R package. The package name\n 'DT' is an abbreviation of 'DataTables'.","Published":"2016-08-09","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"dtables","Version":"0.2.0","Title":"Simplifying Descriptive Frequencies and Statistics","Description":"Towards automation of descriptive frequencies and statistics tables.","Published":"2016-11-01","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"dtangle","Version":"0.1.0","Title":"Cell Type Deconvolution from Gene Expressions","Description":"Deconvolving cell types from high-throughput gene profiling data. ","Published":"2017-05-30","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"DTComPair","Version":"1.0.3","Title":"Comparison of Binary Diagnostic Tests in a Paired Study Design","Description":"This package contains functions to compare the accuracy of two binary diagnostic tests in a \"paired\" study design, i.e. when each test is applied to each subject in the study.","Published":"2014-02-16","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"DTDA","Version":"2.1-1","Title":"Doubly truncated data analysis","Description":"This package implements different algorithms for analyzing\n randomly truncated data, one-sided and two-sided (i.e. doubly)\n truncated data. Two real data sets are included.","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"dti","Version":"1.2-6.1","Title":"Analysis of Diffusion Weighted Imaging (DWI) Data","Description":"Diffusion Weighted Imaging (DWI) is a Magnetic Resonance Imaging\n modality that measures diffusion of water in tissues like the human \n brain. The package contains R-functions to process diffusion-weighted \n data. 
The functionality includes diffusion tensor imaging (DTI),\n diffusion kurtosis imaging (DKI), modeling for high angular resolution \n diffusion weighted imaging (HARDI) using Q-ball-reconstruction and \n tensor mixture models, several methods for structural adaptive \n smoothing including POAS and msPOAS, and streamline fiber tracking \n for tensor and tensor mixture models.\n The package provides functionality to manipulate and visualize results \n in 2D and 3D.","Published":"2016-10-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DTK","Version":"3.5","Title":"Dunnett-Tukey-Kramer Pairwise Multiple Comparison Test Adjusted\nfor Unequal Variances and Unequal Sample Sizes","Description":"This package was created to analyze multi-level one-way\n experimental designs. It is designed to handle vectorized\n observation and factor data where there are unequal sample\n sizes and population variance homogeneity cannot be assumed.\n To conduct the Dunnett modified Tukey-Kramer test (a.k.a. the\n T3 Procedure), create two vectors: one for your observations\n and one for the factor level of each observation. The function,\n gl.unequal, provides a means to more conveniently produce a\n factor vector with unequal sample sizes. Next, use the DTK.test\n function to conduct the test and save the output as an object\n to input into the DTK.plot function, which produces a\n confidence interval plot for each of the pairwise comparisons.\n Lastly, the function TK.test conducts the original Tukey-Kramer\n test.","Published":"2013-07-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DTMCPack","Version":"0.1-2","Title":"Suite of functions related to discrete-time discrete-state\nMarkov Chains","Description":"A series of functions which aid in both simulating and\n determining the properties of finite, discrete-time, discrete\n state Markov chains. 
Two functions (DTMC, MultDTMC) produce n\n iterations of a Markov Chain(s) based on transition\n probabilities and an initial distribution. The function FPTime\n determines the first passage time into each state. The\n function statdistr determines the stationary distribution of a\n Markov Chain.","Published":"2013-05-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dtplyr","Version":"0.0.2","Title":"Data Table Back-End for 'dplyr'","Description":"This implements the data table back-end for 'dplyr' so that you\n can seamlessly use data table and 'dplyr' together.","Published":"2017-04-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DTR","Version":"1.7","Title":"Estimation and Comparison of Dynamic Treatment Regimes","Description":"Estimation and comparison of survival distributions of dynamic treatment regimes (DTRs) from sequentially randomized clinical trials.","Published":"2015-12-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dtree","Version":"0.2.3","Title":"Decision Trees","Description":"Combines various decision tree algorithms, plus both\n linear regression and ensemble methods into one package.\n Allows for the use of both continuous and categorical outcomes.\n An optional feature is to quantify the (in)stability of the\n decision tree methods, indicating when results can be trusted\n and when ensemble methods may be preferable.","Published":"2017-04-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DTRlearn","Version":"1.2","Title":"Learning Algorithms for Dynamic Treatment Regimes","Description":"Dynamic treatment regimes (DTRs) are sequential decision rules tailored at each stage by time-varying subject-specific features and intermediate outcomes observed in previous stages. This package implements three methods: O-learning (Zhao et. al. 2012,2014), Q-learning (Murphy et. al. 2007; Zhao et.al. 2009) and P-learning (Liu et. al. 
2014, 2015) to estimate the optimal DTRs.","Published":"2015-12-28","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"DTRreg","Version":"1.2","Title":"DTR Estimation and Inference via G-Estimation, Dynamic WOLS, and\nQ-Learning","Description":"Dynamic treatment regime estimation and inference via G-estimation,\n dynamic weighted ordinary least squares (dWOLS) and Q-learning. Inference\n via bootstrap and (for G-estimation) recursive sandwich estimation.","Published":"2017-05-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"dtt","Version":"0.1-2","Title":"Discrete Trigonometric Transforms","Description":"This package provides functions for 1D and 2D Discrete\n Cosine Transform (DCT), Discrete Sine Transform (DST) and\n Discrete Hartley Transform (DHT).","Published":"2013-12-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dtw","Version":"1.18-1","Title":"Dynamic Time Warping Algorithms","Description":"A comprehensive implementation of dynamic time warping (DTW) algorithms in R. DTW computes the optimal (least cumulative distance) alignment between points of two time series. Common DTW variants covered include local (slope) and global (window) constraints, subsequence matches, arbitrary distance definitions, normalizations, minimum variance matching, and so on. Provides cumulative distances, alignments, specialized plot styles, etc.","Published":"2015-09-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dtwclust","Version":"4.0.1","Title":"Time Series Clustering Along with Optimizations for the Dynamic\nTime Warping Distance","Description":"Time series clustering along with optimized techniques related\n to the Dynamic Time Warping distance and its corresponding lower bounds.\n Implementations of partitional, hierarchical, fuzzy, k-Shape and TADPole\n clustering are available. 
Functionality can be easily extended with\n custom distance measures and centroid definitions.","Published":"2017-06-08","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"dtwSat","Version":"0.2.3","Title":"Time-Weighted Dynamic Time Warping for Satellite Image Time\nSeries Analysis","Description":"Provides an implementation of the Time-Weighted Dynamic Time\n Warping (TWDTW) method for land cover mapping using satellite image time series.\n TWDTW is based on the Dynamic Time Warping technique and has achieved high\n accuracy for land cover classification using satellite data. The method is\n based on comparing unclassified satellite image time series with a set of known\n temporal patterns (e.g. phenological cycles associated with the vegetation).\n Using 'dtwSat' the user can build temporal patterns for land cover types, apply\n the TWDTW analysis for satellite datasets, visualize the results of the time\n series analysis, produce land cover maps, create temporal plots for land cover\n change, and compute accuracy assessment metrics.","Published":"2017-05-16","License":"GPL (>= 2) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"dualScale","Version":"0.9.1","Title":"Dual Scaling Analysis of Multiple Choice Data","Description":"Functions to analyze multiple choice data using Dual Scaling","Published":"2014-01-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"duckduckr","Version":"1.0.0","Title":"Simple Client for the DuckDuckGo Instant Answer API","Description":"Programmatic access to the DuckDuckGo Instant Answer API .","Published":"2017-04-07","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"dummies","Version":"1.5.6","Title":"Create dummy/indicator variables flexibly and efficiently","Description":"Expands factors, characters and other eligible classes\n into dummy/indicator variables.","Published":"2012-06-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} 
{"Package":"dummy","Version":"0.1.3","Title":"Automatic Creation of Dummies with Support for Predictive\nModeling","Description":"Efficiently create dummies of all factors and character vectors in a data frame. Support is included for learning the categories on one data set (e.g., a training set) and deploying them on another (e.g., a test set).","Published":"2015-05-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dunn.test","Version":"1.3.4","Title":"Dunn's Test of Multiple Comparisons Using Rank Sums","Description":"Computes Dunn's test (1964) for stochastic dominance and reports the results among multiple pairwise comparisons after a Kruskal-Wallis test for stochastic dominance among k groups (Kruskal and Wallis, 1952). The interpretation of stochastic dominance requires an assumption that the CDF of one group does not cross the CDF of the other. 'dunn.test' makes k(k-1)/2 multiple pairwise comparisons based on Dunn's z-test-statistic approximations to the actual rank statistics. The null hypothesis for each pairwise comparison is that the probability of observing a randomly selected value from the first group that is larger than a randomly selected value from the second group equals one half; this null hypothesis corresponds to that of the Wilcoxon-Mann-Whitney rank-sum test. Like the rank-sum test, if the data can be assumed to be continuous, and the distributions are assumed identical except for a difference in location, Dunn's test may be understood as a test for median difference. 
'dunn.test' accounts for tied ranks.","Published":"2017-04-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"DunnettTests","Version":"2.0","Title":"Software implementation of step-down and step-up Dunnett test\nprocedures","Description":"For the implementation of the step-down or step-up Dunnett testing procedures, the package includes R functions to calculate critical constants and R functions to calculate adjusted P-values of the test statistics. In addition, the package also contains functions to evaluate testing powers and hence the necessary sample sizes, especially for the classical problem of comparisons of several treatments with a control.","Published":"2013-12-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"dupiR","Version":"1.2","Title":"Bayesian inference from count data using discrete uniform priors","Description":"Inference of population sizes using a binomial likelihood and least informative discrete uniform priors.","Published":"2014-12-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"dvfBm","Version":"1.0","Title":"Discrete variations of a fractional Brownian motion","Description":"Hurst exponent estimation of a fractional Brownian motion\n by using discrete variations methods in the presence of outliers\n and/or additive noise.","Published":"2009-11-22","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"DVHmetrics","Version":"0.3.6","Title":"Analyze Dose-Volume Histograms and Check Constraints","Description":"Functionality for analyzing dose-volume histograms (DVH)\n in radiation oncology: Read DVH text files, calculate DVH\n metrics as well as generalized equivalent uniform dose (gEUD),\n biologically effective dose (BED), equivalent dose in 2 Gy\n fractions (EQD2), normal tissue complication probability\n (NTCP), and tumor control probability (TCP). Show DVH\n diagrams, check and visualize quality assurance constraints\n for the DVH. 
Includes a web-based graphical user interface.","Published":"2016-07-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dvn","Version":"0.3.5","Title":"Access to Dataverse 3 APIs","Description":"Provides access to Dataverse version 3 APIs, enabling access to archived data (and metadata), and the ability to create and manipulate studies in a user's dataverse(s). For Dataverse server versions >= 4.0, please use the dataverse package instead.","Published":"2016-06-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"dwapi","Version":"0.1.1","Title":"A Client for Data.world's REST API","Description":"A set of wrapper functions for data.world's REST API endpoints.","Published":"2017-05-23","License":"Apache License 2.0","snapshot_date":"2017-06-23"} {"Package":"DWreg","Version":"2.0","Title":"Parametric Regression for Discrete Response","Description":"Regression for a discrete response, where the conditional distribution is modelled via a discrete Weibull distribution.","Published":"2016-07-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dygraphs","Version":"1.1.1.4","Title":"Interface to 'Dygraphs' Interactive Time Series Charting Library","Description":"An R interface to the 'dygraphs' JavaScript charting library\n (a copy of which is included in the package). Provides rich facilities\n for charting time-series data in R, including highly configurable\n series- and axis-display and interactive features like zoom/pan and\n series/point highlighting.","Published":"2017-01-04","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"DYM","Version":"0.2","Title":"Did You Mean?","Description":"Add a \"Did You Mean\" feature to the R interactive. 
With this\n package, error messages for misspelled input of variable names or package names\n suggest what you really want to do in addition to notification of the mistake.","Published":"2016-01-22","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"dyn","Version":"0.2-9.3","Title":"Time Series Regression","Description":"Time series regression. The dyn class interfaces ts,\n irts(), zoo() and zooreg() time series classes to lm(), glm(),\n loess(), quantreg::rq(), MASS::rlm(), MCMCpack::MCMCregress(),\n quantreg::rq(), randomForest::randomForest() and other regression\n functions allowing those functions to be used with time series\n including specifications that may contain lags, diffs and\n missing values.","Published":"2017-02-09","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"DynamicDistribution","Version":"1.1","Title":"Dynamically visualized probability distributions and their\nmoments","Description":"The package is aimed at dynamically visualizing probability\n distributions and their moments, and all the commonly used distributions are\n included.","Published":"2013-12-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dynamicGraph","Version":"0.2.2.6","Title":"dynamicGraph","Description":"Interactive graphical tool for manipulating graphs","Published":"2010-02-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dynamichazard","Version":"0.3.3","Title":"Dynamic Hazard Models using State Space Models","Description":"Contains functions that let you fit dynamic hazard models with binary \n outcomes using state space models. The methods are originally described in \n Fahrmeir (1992) and Fahrmeir (1994).\n The functions also provide an extension hereof where the \n Extended Kalman filter is replaced by an Unscented Kalman filter. 
Models are \n fitted with a regular coxph()-like formula.","Published":"2017-06-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"dynamicTreeCut","Version":"1.63-1","Title":"Methods for Detection of Clusters in Hierarchical Clustering\nDendrograms","Description":"Contains methods for detection of clusters in hierarchical clustering dendrograms.","Published":"2016-03-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dynatopmodel","Version":"1.1","Title":"Implementation of the Dynamic TOPMODEL Hydrological Model","Description":"A native R implementation and enhancement of the Dynamic TOPMODEL\n semi-distributed hydrological model. Includes some pre-processing and\n output routines.","Published":"2016-01-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"dynaTree","Version":"1.2-10","Title":"Dynamic Trees for Learning and Design","Description":"Inference by sequential Monte Carlo for \n dynamic tree regression and classification models\n with hooks provided for sequential design and optimization, \n fully online learning with drift, variable selection, and \n sensitivity analysis of inputs. Illustrative \n examples from the original dynamic trees paper are facilitated\n by demos in the package; see demo(package=\"dynaTree\").","Published":"2017-03-15","License":"LGPL","snapshot_date":"2017-06-23"} {"Package":"dynBiplotGUI","Version":"1.1.5","Title":"Full Interactive GUI for Dynamic Biplot in R","Description":"A GUI to compute dynamic biplots and classical biplots. Handles 2-way\n and 3-way matrices. The GUI can be run in multiple languages.","Published":"2017-02-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"DynClust","Version":"3.13","Title":"Denoising and clustering for dynamical image sequence (2D or\n3D)+T","Description":"DynClust is a two-stage procedure for the denoising and clustering of stacks of noisy images acquired over time. 
Clustering only assumes that the data contain an unknown but small number of dynamic features. The method first denoises the signals using local spatial and full temporal information. The clustering step uses the previous output to aggregate voxels based on the knowledge of their spatial neighborhood. Both steps use a single key tool based on the statistical comparison of the difference of two signals with the null signal. No assumption is therefore required on the shape of the signals. The data are assumed to be normally distributed (or at least follow a symmetric distribution) with a known constant variance. Working pixelwise, the method can be time-consuming depending on the size of the data-array but harnesses the power of multicore CPUs.","Published":"2014-04-25","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"dynCorr","Version":"1.0.0","Title":"Dynamic Correlation Package","Description":"Computes dynamical correlation estimates and percentile\n bootstrap confidence intervals for pairs of longitudinal\n responses, including consideration of lags and derivatives.","Published":"2017-06-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"dynetNLAResistance","Version":"0.1.0","Title":"Resisting Neighbor Label Attack in a Dynamic Network","Description":"An anonymization algorithm to resist neighbor label attack in a dynamic network.","Published":"2016-11-25","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"dynia","Version":"0.2","Title":"Fit Dynamic Intervention Model","Description":"Fit dynamic intervention model using the arima() function.","Published":"2014-05-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dynlm","Version":"0.3-5","Title":"Dynamic Linear Regression","Description":"Dynamic linear models and time series regression.","Published":"2016-08-06","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"DynNom","Version":"4.1.1","Title":"Dynamic Nomograms for 
Linear, Generalized Linear and\nProportional Hazard Models","Description":"Demonstrate the results of a statistical model object as a dynamic nomogram in an RStudio panel or web browser. Also, the generic DNbuilder() function in this package provides a simple and straightforward way to build and publish a dynamic nomogram on the web to use the app independent of R. 'DynNom' supports a variety of model objects; lm(), glm(), coxph() models and also ols(), Glm(), lrm(), cph() models in the 'rms' package.","Published":"2017-04-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"dynOmics","Version":"1.0","Title":"Fast Fourier Transform to Identify Associations Between Time\nCourse Omics Data","Description":"Implements the fast Fourier transform to estimate delays of expression initiation between trajectories to integrate and\n analyse time course omics data.","Published":"2016-11-10","License":"GPL (>= 2) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"dynpanel","Version":"0.1.0","Title":"Dynamic Panel Data Models","Description":"Computes the first stage GMM estimate of a dynamic linear model with p lags of the dependent variables.","Published":"2016-08-28","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"dynpred","Version":"0.1.2","Title":"Companion Package to \"Dynamic Prediction in Clinical Survival\nAnalysis\"","Description":"The dynpred package contains functions for dynamic prediction in survival analysis.","Published":"2015-07-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"dynr","Version":"0.1.11-2","Title":"Dynamic Modeling in R","Description":"Intensive longitudinal data have become increasingly prevalent in\n various scientific disciplines. Many such data sets are noisy, multivariate,\n and multi-subject in nature. The change functions may also be continuous, or\n continuous but interspersed with periods of discontinuities (i.e., showing\n regime switches). 
The package 'dynr' (Dynamic Modeling in R) is an R package\n that implements a set of computationally efficient algorithms for handling a\n broad class of linear and nonlinear discrete- and continuous-time models with\n regime-switching properties under the constraint of linear Gaussian measurement\n functions. The discrete-time models can generally take on the form of a state-\n space or difference equation model. The continuous-time models are generally\n expressed as a set of ordinary or stochastic differential equations. All\n estimation and computations are performed in C, but users are provided with the\n option to specify the model of interest via a set of simple and easy-to-learn\n model specification functions in R. Model fitting can be performed using single-\n subject time series data or multiple-subject longitudinal data.","Published":"2017-06-17","License":"Apache License (== 2.0)","snapshot_date":"2017-06-23"} {"Package":"dynRB","Version":"0.9","Title":"Dynamic Range Boxes","Description":"Improves the concept of multivariate range boxes, which is highly susceptible to outliers and does not consider the distribution of the data. 
The package uses dynamic range boxes to overcome these problems.","Published":"2016-10-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"dynsbm","Version":"0.4","Title":"Dynamic Stochastic Block Models","Description":"Dynamic stochastic block model that combines a stochastic block model (SBM) for its static part with independent Markov chains for the evolution of the nodes groups through time, developed in Matias and Miele (2016) .","Published":"2017-05-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"dynsim","Version":"1.2.1","Title":"Dynamic Simulations of Autoregressive Relationships","Description":"Dynamic simulations and graphical depictions of autoregressive\n relationships.","Published":"2015-12-19","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"dynsurv","Version":"0.3-5","Title":"Dynamic Models for Survival Data","Description":"Functions to fit time-varying coefficient models for interval\n censored and right censored survival data. Three major approaches are\n implemented: 1) Bayesian Cox model with time-independent, time-varying or\n dynamic coefficients for right censored and interval censored data; 2)\n Spline based time-varying coefficient Cox model for right censored data; 3)\n Transformation model with time-varying coefficients for right censored data\n using estimating equations.","Published":"2017-01-09","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"DynTxRegime","Version":"3.01","Title":"Methods for Estimating Optimal Dynamic Treatment Regimes","Description":"Methods to estimate dynamic treatment regimes using Interactive Q-Learning, Q-Learning, weighted learning, and value-search methods based on Augmented Inverse Probability Weighted Estimators and Inverse Probability Weighted Estimators.","Published":"2017-05-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"DySeq","Version":"0.22","Title":"Functions for Dyadic Sequence Analyses","Description":"Functions for dyadic 
binary/dichotomous sequence analyses are implemented in this contribution.\n The focus is on estimating actor-partner-interaction models using various approaches, for \n instance the approach of Bakeman & Gottman (1997),\n generalized multi-level models, and basic Markov models. Moreover, coefficients of one model \n can be translated into those of the other models. Finally, simulation-based power analyses are\n provided. ","Published":"2017-03-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"e1071","Version":"1.6-8","Title":"Misc Functions of the Department of Statistics, Probability\nTheory Group (Formerly: E1071), TU Wien","Description":"Functions for latent class analysis, short time Fourier\n\t transform, fuzzy clustering, support vector machines,\n\t shortest path computation, bagged clustering, naive Bayes\n\t classifier, ...","Published":"2017-02-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"eaf","Version":"1.07","Title":"Plots of the Empirical Attainment Function","Description":"Plots of the empirical attainment function for two objectives.","Published":"2015-01-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"eAnalytics","Version":"0.1.3","Title":"Dynamic Web-Based Analytics for the Energy Industry","Description":"A 'Shiny' web application for energy industry analytics.\n Take an overview of the industry, measure Key Performance Indicators,\n identify changes in the industry over time, and discover new relationships in the data.","Published":"2017-02-19","License":"Apache License","snapshot_date":"2017-06-23"} {"Package":"earlywarnings","Version":"1.0.59","Title":"Early Warning Signals Toolbox for Detecting Critical Transitions\nin Timeseries","Description":"The Early-Warning-Signals Toolbox provides methods for estimating\n statistical changes in timeseries that can be used for identifying nearby\n critical transitions. 
Based on Dakos et al (2012) Methods for Detecting\n Early Warnings of Critical Transitions in Time Series Illustrated Using\n Simulated Ecological Data. PLoS ONE 7(7):e41010","Published":"2014-04-12","License":"FreeBSD","snapshot_date":"2017-06-23"} {"Package":"earth","Version":"4.5.0","Title":"Multivariate Adaptive Regression Splines","Description":"Build regression models using the techniques in Friedman's\n papers \"Fast MARS\" and \"Multivariate Adaptive Regression\n Splines\". (The term \"MARS\" is trademarked and thus not used in\n the name of the package.)","Published":"2017-04-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"earthtones","Version":"0.1.0","Title":"Derive a Color Palette from a Particular Location on Earth","Description":"Downloads a satellite image via Google Maps/Earth (these are\n originally from a variety of aerial photography sources), \n translates the image into a perceptually uniform color space,\n runs one of a few different clustering algorithms on the colors in the image \n searching for a user-supplied number of colors,\n and returns the resulting color palette. ","Published":"2016-09-01","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"EasyABC","Version":"1.5","Title":"Efficient Approximate Bayesian Computation Sampling Schemes","Description":"Enables launching a series of simulations of a computer code from the R session, and retrieving the simulation outputs in an appropriate format for post-processing treatments. Five sequential sampling schemes and three coupled-to-MCMC schemes are implemented.","Published":"2015-09-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"easyanova","Version":"4.0","Title":"Analysis of variance and other important complementary analyses","Description":"Performs analysis of variance and other important complementary analyses. The functions are easy to use. 
Performs analysis in various designs, with balanced and unbalanced data.","Published":"2014-09-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"easyDes","Version":"3.0","Title":"An Easy Way to Descriptive Analysis","Description":"\n Descriptive analysis is essential for publishing medical articles.\n This package provides an easy way to conduct the descriptive analysis.\n 1. Both numeric and factor variables can be handled. For numeric variables, a normality test will be applied to choose between parametric and nonparametric tests.\n 2. Two or more groups can be handled. For more than two groups, a post hoc test will be applied, 'Tukey' for the numeric variables and 'FDR' for the factor variables.\n 3. The t test, ANOVA or Fisher test can be forced to apply.","Published":"2017-04-11","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"easyformatr","Version":"0.1.2","Title":"Tools for Building R Formats","Description":"Builds format strings that can be used with strptime() and sprintf().","Published":"2016-07-15","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"EasyHTMLReport","Version":"0.1.1","Title":"EasyHTMLReport","Description":"A package that can be used to send HTML reports easily.","Published":"2013-08-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"EasyMARK","Version":"1.0","Title":"Utility functions for working with mark-recapture data","Description":"Contains a few utility functions for working with capture-history data, a function for simulating capture-history data, and a function to fit these data using a Gibbs sampler. 
","Published":"2014-04-28","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"EasyMx","Version":"0.1-3","Title":"Easy Model-Builder Functions for OpenMx","Description":"Utilities for building certain kinds of common matrices and models in \n the extended structural equation modeling package, OpenMx.","Published":"2017-05-01","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"easyNCDF","Version":"0.0.4","Title":"Tools to Easily Read/Write NetCDF Files into/from\nMultidimensional R Arrays","Description":"Set of wrappers for the 'ncdf4' package to simplify and extend its reading/writing capabilities into/from multidimensional R arrays.","Published":"2017-05-17","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"easynls","Version":"4.0","Title":"Easy nonlinear model","Description":"Fits and plots some nonlinear models.","Published":"2014-02-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"easypackages","Version":"0.1.0","Title":"Easy Loading and Installing of Packages","Description":"Easily load and install multiple packages from different sources, \n including CRAN and GitHub. The libraries function allows you to load or attach \n multiple packages in the same function call. The packages function will load one \n or more packages, and install any packages that are not installed on your system \n (after prompting you). Also included is a from_import function that allows you \n to import specific functions from a package into the global environment.","Published":"2016-12-05","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"easypower","Version":"1.0.1","Title":"Sample Size Estimation for Experimental Designs","Description":"Power analysis is used in the estimation of sample sizes for\n experimental designs. Most programs and R packages will only output the highest\n recommended sample size to the user. 
Often the user input can be complicated\n and computing multiple power analyses for different treatment comparisons can\n be time consuming. This package simplifies the user input and allows the user\n to view all of the sample size recommendations or just the ones they want to see.\n The calculations of the recommended sample sizes are based on the\n 'pwr' package.","Published":"2015-11-06","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"easyPubMed","Version":"2.3","Title":"Search and Retrieve Scientific Publication Records from PubMed","Description":"Query NCBI Entrez and retrieve PubMed records in XML or text format. Process PubMed records by extracting and aggregating data from selected fields. A large number of records can be easily downloaded via this simple-to-use interface to the NCBI PubMed API. ","Published":"2017-02-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"easyreg","Version":"1.0","Title":"Easy Regression","Description":"Performs analysis of regression in simple designs with quantitative treatments, \n including mixed models and nonlinear models. Plots graphics (equations and data).","Published":"2016-10-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"easySdcTable","Version":"0.3.0","Title":"Easy Interface to the Statistical Disclosure Control Package\n'sdcTable'","Description":"The main function, ProtectTable(), performs table suppression according to a \n frequency rule with a data set as the only required input. Within this function, \n protectTable(), protectLinkedTables() or runArgusBatchFile() in package 'sdcTable' is called. \n Lists of level-hierarchy (parameter 'dimList') and other required input to these functions \n are created automatically. 
\n The function, PTgui(), starts a graphical user interface based on the shiny package.","Published":"2017-04-10","License":"Apache License 2.0 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"EasyStrata","Version":"8.6","Title":"Evaluation of stratified genome-wide association meta-analysis\nresults","Description":"This is a pipelining tool that facilitates \n evaluation and visualisation of stratified genome-wide \n association meta-analyses (GWAMAs) results data. It \n provides (i) statistical methods to test and to account \n for between-strata difference and to clump genome-wide\n results into independent loci and (ii) extended graphical \n features (e.g., Manhattan, Miami and QQ plots) tailored \n for stratified GWAMA results.","Published":"2014-06-24","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"easyVerification","Version":"0.4.2","Title":"Ensemble Forecast Verification for Large Data Sets","Description":"Set of tools to simplify application of atomic forecast\n verification metrics for (comparative) verification of ensemble forecasts\n to large data sets. The forecast metrics are imported from the\n 'SpecsVerification' package, and additional forecast metrics are provided\n with this package. 
Alternatively, new user-defined forecast scores can be\n implemented using the example scores provided and applied using the\n functionality of this package.","Published":"2017-05-18","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"eba","Version":"1.7-2","Title":"Elimination-by-Aspects Models","Description":"Fitting and testing multi-attribute probabilistic choice\n models, especially the Bradley-Terry-Luce (BTL) model (Bradley &\n Terry, 1952; Luce, 1959), elimination-by-aspects (EBA) models\n (Tversky, 1972), and preference tree (Pretree) models (Tversky &\n Sattath, 1979).","Published":"2016-04-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ebal","Version":"0.1-6","Title":"Entropy reweighting to create balanced samples","Description":"Implements entropy balancing, a data preprocessing procedure that allows users to reweight a dataset such that the covariate distributions in the reweighted data satisfy a set of user-specified moment conditions. This can be useful to create balanced samples in observational studies with a binary treatment where the control group data can be reweighted to match the covariate moments in the treatment group. Entropy balancing can also be used to reweight a survey sample to known characteristics from a target population.","Published":"2014-01-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"EBASS","Version":"0.1","Title":"Sample Size Calculation Method for Cost-Effectiveness Studies\nBased on Expected Value of Perfect Information","Description":"We propose a new sample size calculation method for trial-based\n cost-effectiveness analyses. 
Our strategy is based on the value of perfect\n information that would remain after the completion of the study.","Published":"2016-10-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"EbayesThresh","Version":"1.3.2","Title":"Empirical Bayes Thresholding and Related Methods","Description":"This package carries out Empirical Bayes thresholding\n using the methods developed by I. M. Johnstone and B. W.\n Silverman. The basic problem is to estimate a mean vector given\n a vector of observations of the mean vector plus white noise,\n taking advantage of possible sparsity in the mean vector.\n Within a Bayesian formulation, the elements of the mean vector\n are modelled as having, independently, a distribution that is a\n mixture of an atom of probability at zero and a suitable\n heavy-tailed distribution. The mixing parameter can be\n estimated by a marginal maximum likelihood approach. This\n leads to an adaptive thresholding approach on the original\n data. Extensions of the basic method, in particular to wavelet\n thresholding, are also implemented within the package.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ebdbNet","Version":"1.2.5","Title":"Empirical Bayes Estimation of Dynamic Bayesian Networks","Description":"Infer the adjacency matrix of a\n\tnetwork from time course data using an empirical Bayes\n\testimation procedure based on Dynamic Bayesian Networks.","Published":"2016-11-21","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"EBEN","Version":"4.6","Title":"Empirical Bayesian Elastic Net","Description":"Provides the Empirical Bayesian Elastic Net for handling multicollinearity in generalized linear regression models. As a special case of the 'EBglmnet'\n package (also available on CRAN), this package encourages a grouping effects to select relevant variables and estimate the corresponding non-zero effects. 
","Published":"2015-10-07","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"ebGenotyping","Version":"2.0.1","Title":"Genotyping and SNP Detection using Next Generation Sequencing\nData","Description":"Genotyping a population using next generation sequencing data is essential for rare variant detection. In order to distinguish the genomic structural variation from sequencing error, we propose a statistical model which involves the genotype effect through a latent variable to depict the distribution of non-reference allele frequency data among different samples and different genome loci, while decomposing the sequencing error into sample effect and positional effect. An ECM algorithm is implemented to estimate the model parameters, and then the genotypes and SNPs are inferred based on the empirical Bayes method.","Published":"2016-04-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"EBglmnet","Version":"4.1","Title":"Empirical Bayesian Lasso and Elastic Net Methods for Generalized\nLinear Models","Description":"Provides empirical Bayesian lasso and elastic net algorithms for variable selection and effect estimation. Key features include sparse variable selection and effect estimation via generalized linear regression models, high dimensionality with p>>n, and significance tests for nonzero effects. This package outperforms other popular methods such as lasso and elastic net methods in terms of power of detection, false discovery rate, and power of detecting grouping effects.","Published":"2016-01-30","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"ebimetagenomics","Version":"0.2","Title":"EBI Metagenomics Portal","Description":"Functions for querying the EBI Metagenomics Portal (https://www.ebi.ac.uk/metagenomics/). The current main focus is on taxa abundance data, but the intention is that this package should evolve into a general purpose package for working with EBI Metagenomics data using R. 
","Published":"2016-10-16","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"EBMAforecast","Version":"0.52","Title":"Ensemble BMA Forecasting","Description":"Ensemble BMA for social science data.","Published":"2016-02-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"EBrank","Version":"1.0.0","Title":"Empirical Bayes Ranking","Description":"Empirical Bayes ranking applicable to parallel-estimation settings where the estimated parameters are asymptotically unbiased and normal, with known standard errors. A mixture normal prior for each parameter is estimated using Empirical Bayes methods; subsequently, ranks for each parameter are simulated from the resulting joint posterior over all parameters (the marginal posterior densities for each parameter are assumed independent). Finally, experiments are ordered by expected posterior rank, although computations minimizing other plausible rank-loss functions are also given. ","Published":"2017-01-12","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"EBS","Version":"3.1","Title":"Exact Bayesian Segmentation","Description":"Performs an exact Bayesian segmentation on data and returns the probabilities of breakpoints, an ICL criterion, comparison of change-point location, etc.","Published":"2016-06-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ebSNP","Version":"1.0","Title":"Genotyping and SNP calling using single-sample next generation\nsequencing data","Description":"Genotyping and SNP calling tool for single-sample next generation sequencing data analysis using an empirical Bayes method.","Published":"2014-06-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ecb","Version":"0.2","Title":"Programmatic Access to the European Central Bank's Statistical\nData Warehouse (SDW)","Description":"Provides an interface to the European Central Bank's Statistical\n Data Warehouse API, allowing for programmatic retrieval of a vast quantity\n of statistical 
data.","Published":"2016-03-18","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"ECctmc","Version":"0.2.4","Title":"Simulation from Endpoint-Conditioned Continuous Time Markov\nChains","Description":"Draw sample paths for endpoint-conditioned continuous time Markov chains via modified rejection sampling or uniformization.","Published":"2017-06-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ecd","Version":"0.8.3","Title":"Elliptic Distribution and Lambda Option Pricing Model","Description":"An implementation of the univariate elliptic distribution, and\n lambda option pricing model. It provides detailed functionality and data\n sets for the distribution and modelling. Especially, it contains functions\n for the computation of density, probability, quantile, fitting procedures,\n option prices, volatility smile. It also comes with sample financial data,\n and plotting routines.","Published":"2017-01-06","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"Ecdat","Version":"0.3-1","Title":"Data Sets for Econometrics","Description":"Data sets for econometrics.","Published":"2016-12-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ecdfHT","Version":"0.1.1","Title":"Empirical CDF for Heavy Tailed Data","Description":"Computes and plots a transformed empirical CDF (ecdf) as a\n diagnostic for heavy tailed data, specifically data with power law decay on the\n tails. 
Routines for annotating the plot, comparing data to a model, fitting a\n nonparametric model, and some multivariate extensions are given.","Published":"2016-09-09","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"ecespa","Version":"1.1-8","Title":"Functions for Spatial Point Pattern Analysis","Description":"Some wrappers, functions and data sets for spatial point pattern analysis (mainly based on spatstat), used in the book \"Introduccion al Analisis Espacial de Datos en Ecologia y Ciencias Ambientales: Metodos y Aplicaciones\".","Published":"2015-07-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ECFsup","Version":"0.1-2","Title":"Equal Covariance Functions Testing by L2-Norm and Sup-Norm","Description":"Testing the equality of several covariance functions of functional data. Four different methods are implemented: L2-norm with W-S naive, L2-norm with W-S bias-reduced, L2-norm (Zhang 2013) , and sup-norm with resampling (Guo et al. 2017) .","Published":"2017-06-17","License":"GNU Lesser General Public License","snapshot_date":"2017-06-23"} {"Package":"Ecfun","Version":"0.1-7","Title":"Functions for Ecdat","Description":"Functions to update data sets in Ecdat and to create,\n manipulate, plot and analyze those and similar data sets.","Published":"2016-05-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ECharts2Shiny","Version":"0.2.11","Title":"Embedding Interactive Charts Generated with ECharts Library into\nShiny Applications","Description":"Embed interactive charts into Shiny applications. These charts are generated by the ECharts library developed by Baidu (). 
Current version supports line chart, bar chart, pie chart, scatter plot, gauge, word cloud, radar chart, tree map, and heat map.","Published":"2017-05-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"echogram","Version":"0.1.0","Title":"Echogram Visualisation and Analysis","Description":"Easily import multi-frequency acoustic data stored in 'HAC' files (see for more information on the format), and produce echogram visualisations with predefined or customized color palettes. It is also possible to merge consecutive echograms; mask or delete unwanted echogram areas; model and subtract background noise; and, more importantly, develop, test and interpret different combinations of frequencies in order to perform acoustic filtering of the echogram's data. ","Published":"2017-02-08","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ecipex","Version":"1.0","Title":"Efficient calculation of fine structure isotope patterns via\nFourier transforms of simplex-based elemental models","Description":"Provides a function that quickly computes the fine structure\n isotope patterns of a set of chemical formulas to a given degree of\n accuracy (up to the limit set by errors in floating point arithmetic). A\n data-set comprising the masses and isotopic abundances of individual\n elements is also provided.","Published":"2014-08-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"eclust","Version":"0.1.0","Title":"Environment Based Clustering for Interpretable Predictive Models\nin High Dimensional Data","Description":"Companion package to the paper: An analytic approach for \n interpretable predictive models in high dimensional data, in the presence of \n interactions with exposures. Bhatnagar, Yang, Khundrakpam, Evans, Blanchette, Bouchard, Greenwood (2017) . \n This package includes an algorithm for clustering high dimensional data that can be affected by an environmental factor. 
","Published":"2017-01-26","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ecm","Version":"2.0.0","Title":"Build Error Correction Models","Description":"Functions for easy building of error correction models (ECM) for time series regression. ","Published":"2017-04-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"eco","Version":"3.1-7","Title":"Ecological Inference in 2x2 Tables","Description":"We implement the Bayesian and likelihood methods proposed \n in Imai, Lu, and Strauss (2008, 2011) for ecological inference in 2 \n by 2 tables as well as the method of bounds introduced by Duncan and \n Davis (1953). The package fits both parametric and nonparametric \n models using either the Expectation-Maximization algorithms (for \n likelihood models) or the Markov chain Monte Carlo algorithms (for \n Bayesian models). For all models, the individual-level data can be \n directly incorporated into the estimation whenever such data are available.\n Along with in-sample and out-of-sample predictions, the package also\n provides a functionality which allows one to quantify the effect of data\n aggregation on parameter estimation and hypothesis testing under the\n parametric likelihood models.","Published":"2015-03-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ecodist","Version":"1.2.9","Title":"Dissimilarity-based functions for ecological analysis","Description":"Dissimilarity-based analysis functions including ordination and Mantel test functions, intended for use with spatial and community data.","Published":"2013-12-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ecoengine","Version":"1.10.0","Title":"Programmatic Interface to the API Serving UC Berkeley's Natural\nHistory Data","Description":"The ecoengine provides access to more than 5 million georeferenced\n specimen records from the University of California, Berkeley's Natural History\n 
Museums.","Published":"2016-05-25","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"EcoGenetics","Version":"1.2.1","Title":"Spatial Analysis of Phenotypic, Genotypic and Environmental Data","Description":"Management and exploratory analysis of spatial data in population biology. Easy integration of information from multiple sources with 'ecogen' objects.","Published":"2017-03-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"EcoHydRology","Version":"0.4.12","Title":"A community modeling foundation for Eco-Hydrology","Description":"This package provides a flexible foundation on which scientists, \n engineers, and policy makers can base teaching exercises, as well as for \n more applied modeling of complex eco-hydrological interactions. ","Published":"2014-04-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"EcoIndR","Version":"1.0","Title":"Ecological Indicators","Description":"Calculates several indices, such as diversity and fluctuation, which are used to estimate ecological indicators.","Published":"2017-03-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ecolMod","Version":"1.2.6","Title":"\"A practical guide to ecological modelling - using R as a\nsimulation platform\"","Description":"Figures, data sets and examples from the book \"A practical guide to ecological modelling - using R as a simulation platform\" by Karline Soetaert and Peter MJ Herman (2009). Springer.\n All figures from chapter x can be generated by \"demo(chapx)\", where x = 1 to 11. 
\n The R-scripts of the model examples discussed in the book are in subdirectory \"examples\", ordered per chapter.\n Solutions to model projects are in the same subdirectories.","Published":"2014-12-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"EconDemand","Version":"1.0","Title":"General Analysis of Various Economics Demand Systems","Description":"Tools for general properties including price, quantity, elasticity, convexity, marginal revenue and manifold of various economics demand systems including Linear, Translog, CES, LES and CREMR.","Published":"2016-07-16","License":"GNU General Public License version 2","snapshot_date":"2017-06-23"} {"Package":"ecoreg","Version":"0.2.1","Title":"Ecological Regression using Aggregate and Individual Data","Description":"Estimating individual-level covariate-outcome associations \n using aggregate data (\"ecological inference\") or a combination of \n aggregate and individual-level data (\"hierarchical related regression\").","Published":"2015-09-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ecoseries","Version":"0.1.3","Title":"An R Interface to Brazilian Central Bank and Sidra APIs and the\nIPEA Data","Description":"Creates an R interface to the Bacen and Sidra APIs and IPEA data .","Published":"2017-05-04","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ecosim","Version":"1.3","Title":"Toolbox for Aquatic Ecosystem Modeling","Description":"Classes and methods for implementing aquatic ecosystem models,\n for running these models, and for visualizing their results.","Published":"2017-02-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"EcoSimR","Version":"0.1.0","Title":"Null Model Analysis for Ecological Data","Description":"Given a site by species interaction matrix, users can make inferences about species interactions by performing hypothesis tests that compare test statistics against a null distribution. 
The current package provides algorithms and metrics for niche-overlap, body size ratios and species co-occurrence. Users can also integrate their own algorithms and metrics within these frameworks, or implement completely novel null models. Detailed explanations about the underlying assumptions of null model analysis in ecology can be found at http://ecosimr.org. ","Published":"2015-04-03","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ECOSolveR","Version":"0.3","Title":"Embedded Conic Solver in R","Description":"R interface to the Embedded COnic Solver (ECOS), an efficient\n\t and robust C library for convex problems. Conic and equality\n\t constraints can be specified in addition to integer and\n\t boolean variable constraints for mixed-integer problems. This\n\t R interface is inspired by the python interface and has\n\t similar calling conventions.","Published":"2017-05-16","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"ecospace","Version":"1.1.3","Title":"Simulating Community Assembly and Ecological Diversification\nUsing Ecospace Frameworks","Description":"Implements stochastic simulations of community assembly (ecological\n diversification) using customizable ecospace frameworks (functional trait\n spaces). Provides a wrapper to calculate common ecological disparity and\n functional ecology statistical dynamics as a function of species richness.\n Functions are written so they will work in a parallel-computing environment.","Published":"2017-02-20","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"ecospat","Version":"2.1.1","Title":"Spatial Ecology Miscellaneous Methods","Description":"Collection of R functions and data sets for the support of spatial ecology analyses with a focus on pre-, core and post- modelling analyses of species distribution, niche quantification and community assembly. 
Written by current and former members and collaborators of the ecospat group of Antoine Guisan, Department of Ecology and Evolution (DEE) & Institute of Earth Surface Dynamics (IDYST), University of Lausanne, Switzerland.","Published":"2016-11-02","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"ecotoxicology","Version":"1.0.1","Title":"Methods for Ecotoxicology","Description":"Implementation of the EPA's Ecological Exposure Research Division (EERD) tools (discontinued in 1999) for Probit and Trimmed Spearman-Karber Analysis.\n Probit and Spearman-Karber methods from Finney's book \"Probit analysis a statistical treatment of the sigmoid response curve\" with options for most accurate results or identical results to the book.\n Probit and all the tables from Finney's book (code-generated, not copied) with the generating functions included.\n Control correction: Abbott, Schneider-Orelli, Henderson-Tilton, Sun-Shepard.\n Toxicity scales: Horsfall-Barratt, Archer, Gauhl-Stover, Fullerton-Olsen, etc.","Published":"2015-10-14","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"EcoTroph","Version":"1.6","Title":"EcoTroph R package","Description":"EcoTroph is an approach and software for modelling marine and freshwater ecosystems. It is articulated entirely around trophic levels. EcoTroph's key displays are bivariate plots, with trophic levels as the abscissa, and biomass flows or related quantities as ordinates. Thus, trophic ecosystem functioning can be modelled as a continuous flow of biomass surging up the food web, from lower to higher trophic levels, due to predation and ontogenic processes. Such an approach, wherein species as such disappear, may be viewed as the ultimate stage in the use of the trophic level metric for ecosystem modelling, providing a simplified but potentially useful caricature of ecosystem functioning and impacts of fishing. 
This version contains the catch trophic spectrum analysis (CTSA) function and corrected versions of the mf.diagnosis and create.ETmain functions.","Published":"2013-09-13","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"ecoval","Version":"1.1","Title":"Procedures for Ecological Assessment of Surface Waters","Description":"Functions for evaluating and visualizing\n ecological assessment procedures for surface waters\n containing physical, chemical and biological assessments\n in the form of value functions.","Published":"2017-01-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"EcoVirtual","Version":"1.0","Title":"Simulation of Ecological Models","Description":"Computer simulations of classical ecological models as a\n learning resource.","Published":"2016-06-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ecp","Version":"3.0.0","Title":"Non-Parametric Multiple Change-Point Analysis of Multivariate\nData","Description":"Implements various procedures for finding \n\t multiple change-points. Two methods make use of dynamic \n\t programming and probabilistic pruning, with no distributional \n\t assumptions other than the existence of certain absolute \n\t moments in one method. Hierarchical and exact search methods \n\t are included. All methods return the set of estimated change-\n\t points as well as other summary information.","Published":"2016-09-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ecr","Version":"2.0.0","Title":"Evolutionary Computation in R","Description":"Framework for building evolutionary algorithms for both single- and multi-objective continuous or discrete optimization problems. A set of predefined evolutionary building blocks and operators is included. Moreover, the user can easily set up custom objective functions, operators, building blocks and representations sticking to a few conventions. 
The package allows both a black-box approach for standard tasks (plug-and-play style) and a much more flexible white-box approach where the evolutionary cycle is written by hand.","Published":"2017-04-26","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"edarf","Version":"1.1.1","Title":"Exploratory Data Analysis using Random Forests","Description":"Functions useful for exploratory data analysis\n using random forests which can be used to compute multivariate partial\n dependence, observation, class, and variable-wise marginal and joint permutation\n importance as well as observation-specific measures of distance \n (supervised or unsupervised). All of the aforementioned functions are\n accompanied by 'ggplot2' plotting functions.","Published":"2017-03-06","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"edcc","Version":"1.0-0","Title":"Economic Design of Control Charts","Description":"This package provides a unified approach for Economic\n Design of Control Charts. The main purpose of this package is\n to find out the optimal parameters to minimize the ECH\n (Expected Cost per Hour) of the process.","Published":"2013-01-18","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"edci","Version":"1.1-2","Title":"Edge Detection and Clustering in Images","Description":"Detection of edge points in images based on the difference\n of two asymmetric M-kernel estimators. Linear and circular\n regression clustering based on redescending M-estimators.\n Detection of linear edges in images.","Published":"2016-08-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"edeaR","Version":"0.6.0","Title":"Exploratory and Descriptive Event-Based Data Analysis","Description":"Functions for exploratory and descriptive analysis of event based data. Provides methods for describing and selecting process data, and for preparing event log data for process mining. 
Builds on the S3-class for event logs implemented in the package 'bupaR'.","Published":"2017-06-20","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"edeR","Version":"1.0.0","Title":"Email Data Extraction Using R","Description":"Connects to an email server through the Internet Message Access Protocol (IMAP) and extracts header information, e.g. from, to, cc, subject, date and time. Users supply their email address and password along with other options. Initially this package is developed only for Gmail; to run its functions, users must have an IMAP-enabled Gmail account.","Published":"2014-02-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"edesign","Version":"1.0-13","Title":"Maximum Entropy Sampling","Description":"An implementation of maximum entropy sampling for spatial\n data is provided. An exact branch-and-bound algorithm as well as greedy and\n dual greedy heuristics are included.","Published":"2015-09-04","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"edf","Version":"1.0.0","Title":"Read Data from European Data Format (EDF and EDF+) Files","Description":"Import physiologic data stored in\n the European Data Format (EDF and EDF+) into R.\n Both EDF and EDF+ files are supported. 
Discontinuous\n EDF+ files are not yet supported.","Published":"2016-04-22","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"EDFIR","Version":"1.0","Title":"Estimating Discrimination Factors","Description":"Functions for reading in data sets of prey and predator isotopic measurements and producing estimates for discrimination factors.","Published":"2015-07-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"edfReader","Version":"1.1.2","Title":"Reading EDF(+) and BDF(+) Files","Description":"Reads European Data Format files EDF and EDF+, see ,\n BioSemi Data Format files BDF, see ,\n and BDF+ files, see .\n The files are read in two steps: first the header is read\n and then the signals (using the header object as a parameter).","Published":"2017-05-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"edfun","Version":"0.2.0","Title":"Creating Empirical Distribution Functions","Description":"Easily creating empirical distribution functions from data: 'dfun', 'pfun',\n 'qfun' and 'rfun'.","Published":"2016-08-27","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"edgar","Version":"1.0.9","Title":"Platform for EDGAR Filing Management","Description":"In the USA, firms file different forms with the U.S. Securities and\n Exchange Commission (SEC) through EDGAR (Electronic Data Gathering, Analysis,\n and Retrieval system). EDGAR's automated system collects all the different\n necessary filings and makes them publicly available. It then validates\n collected filings and performs indexing and acceptance of these submitted\n forms. Investors, regulators, and researchers often require these forms for\n various purposes. This package helps in data gathering, management, and \n\tvisualization in this regard. 
It downloads SEC EDGAR quarterly master\n indexes, daily master indexes, and filings from the SEC.org site and performs \n sentiment analysis of these filings.","Published":"2017-06-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"edgebundleR","Version":"0.1.4","Title":"Circle Plot with Bundled Edges","Description":"Generates interactive circle plots with the nodes around the\n circumference and linkages between the connected nodes using hierarchical\n edge bundling via the D3 JavaScript library. See for more\n information on D3.","Published":"2016-03-19","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"edgeCorr","Version":"1.0","Title":"Spatial Edge Correction","Description":"Facilitates basic spatial edge correction to point pattern data.","Published":"2016-03-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"edgeRun","Version":"1.0.9","Title":"More Powerful Unconditional Testing of Negative Binomial Means\nfor Digital Gene Expression Data","Description":"Extends edgeR functionality by improving on exactTest using an unconditional exact test of negative binomial means. 
","Published":"2014-09-19","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"EDISON","Version":"1.1.1","Title":"Network Reconstruction and Changepoint Detection","Description":"Package EDISON (Estimation of Directed Interactions from\n Sequences Of Non-homogeneous gene expression) runs an MCMC\n simulation to reconstruct networks from time series data, using\n a non-homogeneous, time-varying dynamic Bayesian network.\n Network segments and changepoints are inferred concurrently,\n and information sharing priors provide a reduction of the\n inference uncertainty.","Published":"2016-03-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"EditImputeCont","Version":"1.0.2","Title":"Simultaneous Edit-Imputation for Continuous Microdata","Description":"An integrated editing and imputation method for continuous microdata under linear constraints is implemented. It relies on a Bayesian nonparametric hierarchical modeling approach as described in Kim et al. (2015) . In this approach, the joint distribution of the data is estimated by a flexible joint probability model. The generated edit-imputed data are guaranteed to satisfy all imposed edit rules, whose types include ratio edits, balance edits and range restrictions.","Published":"2016-10-31","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"editrules","Version":"2.9.0","Title":"Parsing, Applying, and Manipulating Data Cleaning Rules","Description":"Facilitates reading and manipulating (multivariate) data restrictions\n (edit rules) on numerical and categorical data. Rules can be defined with common R syntax\n and parsed to an internal (matrix-like) format. Rules can be manipulated with\n variable elimination and value substitution methods, allowing for feasibility checks\n and more. Data can be tested against the rules and erroneous fields can be found based\n on Fellegi and Holt's generalized principle. 
Rule dependencies can be visualized \n using the igraph package.","Published":"2015-06-11","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"eDMA","Version":"1.4-0","Title":"Dynamic Model Averaging with Grid Search","Description":"Perform Dynamic Model Averaging with grid search as in Dangl and Halling (2012) using parallel computing.","Published":"2017-02-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"edpclient","Version":"0.1.0","Title":"Empirical Data Platform Client","Description":"R client for Empirical Data Platform. More information is at . For support, contact support@empirical.com.","Published":"2017-05-30","License":"Apache License (== 2.0)","snapshot_date":"2017-06-23"} {"Package":"EDR","Version":"0.6-6","Title":"Estimation of the Effective Dimension Reduction ('EDR') Space","Description":"The library contains R-functions to estimate the effective\n dimension reduction space in 'multi-index' regression models.","Published":"2016-09-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"edrGraphicalTools","Version":"2.1","Title":"Provides tools for dimension reduction methods","Description":"This package illustrates the articles \"A graphical\n tool for selecting the number of slices and the dimension of\n the model in SIR and SAVE approaches\" and \"Comparison of sliced\n inverse regression approaches for underdetermined cases\".","Published":"2013-11-09","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"edstan","Version":"1.0.6","Title":"Stan Models for Item Response Theory","Description":"Provides convenience functions and pre-programmed Stan models\n related to item response theory. 
Its purpose is to make fitting\n common item response theory models using Stan easy.","Published":"2017-01-31","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"EdSurvey","Version":"1.0.6","Title":"Education Survey","Description":"Read in and analysis functions for education surveys and\n assessments data from the National Center for Education Statistics\n (NCES) , including the National Assessment\n of Educational Progress (NAEP)\n data .","Published":"2017-05-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"educineq","Version":"0.1.0","Title":"Compute and Decompose Inequality in Education","Description":"Easily compute education inequality measures and the distribution \n of educational attainments for any group of countries, using the data set \n developed in Jorda, V. and Alonso, JM. (2017) . \n The package offers the possibility to compute not only the Gini index, but \n also generalized entropy measures for different values of the sensitivity \n parameter. In particular, the package includes functions to compute the \n mean log deviation, which is more sensitive to the bottom part of the \n distribution; the Theil’s entropy measure, equally sensitive to all parts \n of the distribution; and finally, the GE measure when the sensitivity \n parameter is set equal to 2, which gives more weight to differences in \n higher education. The decomposition of these measures in the components \n between-country and within-country inequality is also provided. 
Two \n graphical tools are also provided, to analyse the evolution of the\n distribution of educational attainments: The cumulative distribution \n function and the Lorenz curve.","Published":"2017-02-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"eechidna","Version":"1.1","Title":"Exploring Election and Census Highly Informative Data Nationally\nfor Australia","Description":"Data from the 2013 and 2016 Australian Federal Election (House of\n Representatives) and the 2011 Australian Census. Includes tools for\n visualizing and analysing the data. This package incorporates\n data that is copyright Commonwealth of Australia (Australian\n Electoral Commission) 2016.","Published":"2017-05-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"eefAnalytics","Version":"1.0.6","Title":"Analysing Education Trials","Description":"Provides tools for analysing education trials. Making different\n methods accessible in a single place is essential for sensitivity analysis\n of education trials, particularly the implication of the different methods in\n analysing simple randomised trials, cluster randomised trials and multisite\n trials.","Published":"2017-02-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"eegkit","Version":"1.0-2","Title":"Toolkit for Electroencephalography Data","Description":"Analysis and visualization tools for electroencephalography (EEG) data. Includes functions for plotting (a) EEG caps, (b) single- and multi-channel EEG time courses, and (c) EEG spatial maps. Also includes smoothing and Independent Component Analysis functions for EEG data analysis, and a function for simulating event-related potential EEG data.","Published":"2015-02-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"eegkitdata","Version":"1.0","Title":"Data for package eegkit","Description":"Contains the example EEG data used in the package eegkit. 
Also contains code for easily creating larger EEG datasets from the EEG Database on the UCI Machine Learning Repository.","Published":"2014-09-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"eel","Version":"1.1","Title":"Extended Empirical Likelihood","Description":"Compute the extended empirical log likelihood ratio (Tsao & Wu, 2014) for the mean and parameters defined by estimating equations. ","Published":"2015-09-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"EEM","Version":"1.1.1","Title":"Read and Preprocess Fluorescence Excitation-Emission Matrix\n(EEM) Data","Description":"Read raw EEM data and prepares them for further analysis.","Published":"2016-04-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"eemR","Version":"0.1.5","Title":"Tools for Pre-Processing Emission-Excitation-Matrix (EEM)\nFluorescence Data","Description":"Provides various tools for preprocessing Emission-Excitation-Matrix (EEM) for Parallel Factor Analysis (PARAFAC). Different\n methods are also provided to calculate common metrics such as humification index and fluorescence index.","Published":"2017-05-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"eeptools","Version":"1.0.0","Title":"Convenience Functions for Education Data","Description":"Collection of convenience functions to make working with\n administrative records easier and more consistent. Includes functions to\n clean strings, identify cut points, and quickly combine shapefiles and\n data frames for plotting. 
Also includes three example data sets of \n administrative education records for learning how to process records with \n errors.","Published":"2016-11-01","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"eesim","Version":"0.1.0","Title":"Simulate and Evaluate Time Series for Environmental Epidemiology","Description":"Provides functions to create simulated time series of environmental\n exposures (e.g., temperature, air pollution) and health outcomes for use in\n power analysis and simulation studies in environmental epidemiology. This\n package also provides functions to evaluate the results of simulation studies\n based on these simulated time series. This work was supported by a grant\n from the National Institute of Environmental Health Sciences (R00ES022631) and\n a fellowship from the Colorado State University Programs for Research and\n Scholarly Excellence.","Published":"2017-06-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"EFAutilities","Version":"0.1.0","Title":"Utility Functions for Exploratory Factor Analysis","Description":"A number of utility functions for exploratory\n factor analysis are included in this package. In particular, it computes standard errors for parameter estimates and factor correlations under a variety of conditions.","Published":"2016-12-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"EFDR","Version":"0.1.1","Title":"Wavelet-Based Enhanced FDR for Signal Detection in Noisy Images","Description":"Enhanced False Discovery Rate (EFDR) is a tool to detect anomalies\n in an image. The image is first transformed into the wavelet domain in\n order to decorrelate any noise components, following which the coefficients\n at each resolution are standardised. Statistical tests (in a multiple\n hypothesis testing setting) are then carried out to find the anomalies. 
The\n power of EFDR exceeds that of standard FDR, which would carry out tests on\n every wavelet coefficient: EFDR chooses which wavelets to test based on a\n criterion described in Shen et al. (2002). The package also provides\n elementary tools to interpolate spatially irregular data onto a grid of the\n required size. The work is based on Shen, X., Huang, H.-C., and Cressie, N.\n 'Nonparametric hypothesis testing for a spatial signal.' Journal of the\n American Statistical Association 97.460 (2002): 1122-1140.","Published":"2015-04-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"effectFusion","Version":"1.0","Title":"Bayesian Effect Fusion for Categorical Predictors","Description":"Variable selection and Bayesian effect fusion for categorical predictors in linear regression models. Effect fusion addresses the question of which categories have a similar effect on the response and therefore can be fused to obtain a sparser representation of the model. Effect fusion and variable selection can be obtained either with a prior that has an interpretation as a spike and slab prior on the level effect differences or with a sparse finite mixture prior on the level effects. The regression coefficients are estimated with a flat uninformative prior after model selection or model averaged. 
For posterior inference, an MCMC sampling scheme is used that involves only Gibbs sampling steps.","Published":"2016-11-29","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"EffectLiteR","Version":"0.4-2","Title":"Average and Conditional Effects","Description":"Use structural equation modeling to estimate average and\n conditional effects of a treatment variable on an outcome variable, taking into\n account multiple continuous and categorical covariates.","Published":"2016-03-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"effects","Version":"3.1-2","Title":"Effect Displays for Linear, Generalized Linear, and Other Models","Description":"\n Graphical and tabular effect displays, e.g., of interactions, for \n various statistical models with linear predictors.","Published":"2016-10-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"EffectsRelBaseline","Version":"0.5","Title":"Test changes of a grouped response relative to baseline","Description":"Functions to test for changes of a response to a stimulus grouping relative \n to a background or baseline response.","Published":"2013-09-23","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"EffectStars","Version":"1.7","Title":"Visualization of Categorical Response Models","Description":"Notice: The package EffectStars2 provides a more up-to-date implementation of effect stars! EffectStars provides functions to visualize regression models with categorical response. The effects of the variables are plotted with star plots in order to allow for an optical impression of the fitted model.","Published":"2016-09-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"EffectStars2","Version":"0.1-1","Title":"Effect Stars","Description":"Provides functions for the method of effect star. Effect stars can be used to visualize estimates of parameters corresponding to different groups, for example in multinomial logit models. 
Besides the main function 'effectstars', there are methods for special objects, for example for 'vglm' objects from the 'VGAM' package.","Published":"2016-09-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"EffectTreat","Version":"0.3","Title":"Prediction of Therapeutic Success","Description":"In personalized medicine, one wants to know, for a given patient and his or her outcome for a predictor (pre-treatment variable), how likely it is that a treatment will be more beneficial than an alternative treatment. This package allows for the quantification of the predictive causal association (i.e., the association between the predictor variable and the individual causal effect of the treatment) and related metrics. Part of this software has been developed using funding provided from the European Union's 7th Framework Programme for research, technological development and demonstration under Grant Agreement no 602552.","Published":"2017-01-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"EfficientMaxEigenpair","Version":"0.1.1","Title":"Efficient Initials for Computing the Maximal Eigenpair","Description":"An implementation for using efficient initials to compute the\n maximal eigenpair in R. It provides two algorithms to find the efficient\n initials under two cases: the tridiagonal matrix case and the general matrix\n case. Besides, it also provides algorithms for the next to the maximal eigenpair under\n these two cases.","Published":"2017-06-15","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"efflog","Version":"1.0","Title":"The Causal Effects for a Causal Loglinear Model","Description":"Fitting a causal loglinear model and calculating the causal effects for a causal loglinear model with the multiplicative interaction or without the multiplicative interaction, obtaining the natural direct, indirect and the total effect. 
It also calculates the cell effect, which is a new interaction effect.","Published":"2015-07-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"effsize","Version":"0.7.1","Title":"Efficient Effect Size Computation","Description":"A collection of functions to compute the standardized \n effect sizes for experiments (Cohen d, Hedges g, Cliff delta, Vargha-Delaney A). \n The computation algorithms have been optimized to allow efficient computation even \n with very large data sets.","Published":"2017-03-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"efreadr","Version":"0.2.2","Title":"Read European Eddy Fluxes CSV Files","Description":"The European Eddy Fluxes Database Cluster distributes fluxes of different Green House Gases measured mainly using the eddy covariance technique acquired in sites involved in EU projects but also single sites in Europe, Africa and other continents that decided to share their measurements in the database . The package provides two functions to load and row-wise bind CSV files distributed by the database. Currently only L2, L3, and L4 (L=Level), half-hourly and daily (aggregation) files are supported.","Published":"2017-05-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"EFS","Version":"1.0.1","Title":"Tool for Ensemble Feature Selection","Description":"Provides a function to check the\n importance of a feature based on a dependent classification\n variable. An ensemble of feature selection methods\n is used to determine the normalized importance value of\n all features. Combining these methods in one function\n (building the cumulative importance values) provides a \n stable feature selection tool. 
This selection\n can also be viewed in a barplot using the barplot_fs() function\n and assessed using the evaluation function efs_eval().","Published":"2016-11-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ega","Version":"2.0.0","Title":"Error Grid Analysis","Description":"Functions for assigning Clarke or Parkes (Consensus) error grid\n zones to blood glucose values, and for plotting both types of error grids\n in both mg/mL and mmol/L units.","Published":"2017-03-21","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"egcm","Version":"1.0.8","Title":"Engle-Granger Cointegration Models","Description":"An easy-to-use implementation of the Engle-Granger\n two-step procedure for identifying pairs of cointegrated series. It is geared towards \n the analysis of pairs of securities. Summary and plot functions are provided, \n and the package is able to fetch closing prices of securities from Yahoo.\n A variety of unit root tests are supported, and an improved unit root test is included. ","Published":"2015-11-13","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"eggCounts","Version":"1.3","Title":"Hierarchical Modelling of Faecal Egg Counts","Description":"An implementation of hierarchical models\n for faecal egg count data to assess anthelmintic\n efficacy. Bayesian inference is done via MCMC sampling using Stan.","Published":"2017-01-25","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"egonet","Version":"1.2","Title":"Tool for ego-centric measures in Social Network Analysis","Description":"A small tool for Social Network Analysis, dealing with\n ego-centric network measures, including Burt's effective size\n and aggregate constraint and an import code suitable for a\n large number of adjacency matrices. 
A free web application is\n also available at http://www.egonet.associazionerospo.org","Published":"2012-11-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"EGRET","Version":"2.6.0","Title":"Exploration and Graphics for RivEr Trends (EGRET)","Description":"Statistics and graphics for streamflow history,\n water quality trends, and the statistical modeling algorithm: Weighted\n Regressions on Time, Discharge, and Season (WRTDS).","Published":"2016-07-27","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"EGRETci","Version":"1.0.2","Title":"Exploration and Graphics for RivEr Trends (EGRET) Confidence\nIntervals","Description":"Collection of functions to evaluate uncertainty of results from\n water quality analysis using the Weighted Regressions on Time Discharge and\n Season (WRTDS) method. This package is an add-on to the EGRET package that\n performs the WRTDS analysis.","Published":"2016-04-16","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"eha","Version":"2.4-5","Title":"Event History Analysis","Description":"Sampling of risk sets in Cox regression, selections in \n the Lexis diagram, bootstrapping. Parametric proportional \n hazards fitting with left truncation and right censoring for \n common families of distributions, piecewise constant hazards, \n and discrete models. AFT regression for left truncated and \n right censored data. Binary and Poisson regression for \n clustered data, fixed and random effects with bootstrapping.","Published":"2017-05-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"eHOF","Version":"1.8","Title":"Extended HOF (Huisman-Olff-Fresco) Models","Description":"Extended and enhanced hierarchical logistic regression models (called Huisman-Olff-Fresco models in biology, see Huisman et al. 1993 JVS). 
Response curves along one-dimensional gradients including no response, monotone, plateau, unimodal and bimodal models.","Published":"2017-01-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ei","Version":"1.3-3","Title":"Ecological Inference","Description":"Software accompanying Gary King's book: A Solution to the Ecological Inference Problem. (1997). Princeton University Press. ISBN 978-0691012407. ","Published":"2016-11-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"EIAdata","Version":"0.0.3","Title":"R Wrapper for the Energy Information Administration (EIA) API","Description":"An R wrapper to allow the user to query categories and Series IDs, and import data, from the EIA's API.","Published":"2015-03-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"eiCompare","Version":"2.1","Title":"Compares EI, Goodman, RxC Estimates","Description":"Compares estimates from three ecological inference routines, based on King (1997) , ; King et. al. (2004) , .","Published":"2017-06-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"eigeninv","Version":"2011.8-1","Title":"Generates (dense) matrices that have a given set of eigenvalues","Description":"Solves the ``inverse eigenvalue problem'' which is to\n generate a real-valued matrix that has the specified real\n eigenvalue spectrum. It can generate infinitely many dense\n matrices, symmetric or asymmetric, with the given set of\n eigenvalues. Algorithm can also generate stochastic and doubly\n stochastic matrices.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"eigenmodel","Version":"1.01","Title":"Semiparametric factor and regression models for symmetric\nrelational data","Description":"This package estimates the parameters of a model for\n symmetric relational data (e.g., the above-diagonal part of a\n square matrix), using a model-based eigenvalue decomposition\n and regression. 
Missing data is accommodated, and a posterior\n mean for missing data is calculated under the assumption that\n the data are missing at random. The marginal distribution of\n the relational data can be arbitrary, and is fit with an\n ordered probit specification.","Published":"2012-03-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"eigenprcomp","Version":"1.0","Title":"Computes confidence intervals for principal components","Description":"Computes confidence intervals for the proportion explained by the first 1,2,k principal components, and computes confidence intervals for each eigenvalue. Both computations are done via nonparametric bootstrap.","Published":"2013-07-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"EILA","Version":"0.1-2","Title":"Efficient Inference of Local Ancestry","Description":"Implementation of Efficient Inference of Local Ancestry \n\t using fused quantile regression and a k-means classifier","Published":"2013-09-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"eiPack","Version":"0.1-7","Title":"eiPack: Ecological Inference and Higher-Dimension Data\nManagement","Description":"Provides methods for analyzing RxC ecological contingency\n tables using the extreme case analysis, ecological regression,\n and Multinomial-Dirichlet ecological inference models. 
Also\n provides tools for manipulating higher-dimension data objects.","Published":"2012-01-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"eive","Version":"2.1","Title":"An algorithm for reducing errors-in-variables bias in simple\nlinear regression","Description":"EIVE performs a compact genetic algorithm search to reduce errors-in-variables bias in linear regression.","Published":"2014-07-05","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"eiwild","Version":"0.6.7","Title":"Ecological Inference with individual and aggregate data","Description":"Allows the use of the hybrid Multinomial-Dirichlet model\n of ecological inference for estimating the inner cells of RxC tables. This model was\n already implemented in the eiPack package; eiwild adds the\n possibility of using individual-level data to support the aggregate-level\n data and of using different hyperprior distributions.","Published":"2014-03-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"EL","Version":"1.0","Title":"Two-sample Empirical Likelihood","Description":"Empirical likelihood (EL) inference for two-sample problems. 
The following statistics are included: the difference of two-sample means, smooth Huber estimators, quantile (qdiff) and cumulative distribution functions (ddiff), probability-probability (P-P) and quantile-quantile (Q-Q) plots as well as receiver operating characteristic (ROC) curves.","Published":"2011-11-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"elasso","Version":"1.1","Title":"Enhanced Least Absolute Shrinkage and Selection Operator\nRegression Model","Description":"Performs some enhanced variable selection algorithms \n based on the least absolute shrinkage and selection operator for regression models.","Published":"2015-10-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ElastH","Version":"0.3.1","Title":"Replicate the SPE/MF Methodology for Computing Revenue\nElasticities","Description":"Provides simple ways to estimate general unobserved components models\n and to detect interventions automatically. It is especially useful to\n replicate the Brazilian Ministry of Finance (SPE/MF) methodology for estimating income-output gap\n elasticities.","Published":"2017-05-31","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"elastic","Version":"0.7.8","Title":"General Purpose Interface to 'Elasticsearch'","Description":"Connect to 'Elasticsearch', a 'NoSQL' database built on the 'Java'\n Virtual Machine. Interacts with the 'Elasticsearch' 'HTTP' 'API'\n (), including functions for\n setting connection details to 'Elasticsearch' instances, loading bulk data,\n searching for documents with both 'HTTP' query variables and 'JSON' based body\n requests. 
In addition, 'elastic' provides functions for interacting with 'APIs'\n for 'indices', documents, nodes, clusters, an interface to the cat 'API', and\n more.","Published":"2016-11-09","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"elasticIsing","Version":"0.2","Title":"Ising Network Estimation using Elastic Net and k-Fold\nCross-Validation","Description":"Uses k-fold cross-validation and elastic-net regularization to estimate the\n Ising model on binary data. Produces 3D plots of the cost function as a function\n of the tuning parameter in addition to the optimal network structure.","Published":"2016-09-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"elasticnet","Version":"1.1","Title":"Elastic-Net for Sparse Estimation and Sparse PCA","Description":"This package provides functions for fitting the entire\n solution path of the Elastic-Net and also provides functions\n for estimating sparse Principal Components. The Lasso solution\n paths can be computed by the same function. First version:\n 2005-10.","Published":"2012-06-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"elasticsearchr","Version":"0.2.0","Title":"A Lightweight Interface for Interacting with Elasticsearch from\nR","Description":"A lightweight R interface to 'Elasticsearch' - a NoSQL search-engine and \n column store database (see for more \n information). This package implements a simple Domain-Specific Language (DSL) for indexing, \n deleting, querying, sorting and aggregating data using 'Elasticsearch'.","Published":"2016-12-20","License":"Apache License 2.0","snapshot_date":"2017-06-23"} {"Package":"elec","Version":"0.1.2","Title":"Collection of functions for statistical election audits","Description":"This is a bizarre collection of functions written to do\n various sorts of statistical election audits. 
There are also\n functions to generate simulated voting data, and simulated\n \"truth\" so as to do simulations to check characteristics of\n these methods.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"elec.strat","Version":"0.1.1","Title":"Functions for election audits using stratified random samples","Description":"An extension of the elec package intended for use on\n election audits using stratified random samples. Includes\n functions to obtain conservative and exact p-values, and\n functions that give sample sizes that may make election audits\n more efficient.","Published":"2012-04-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"electionsBR","Version":"0.3.0","Title":"R Functions to Download and Clean Brazilian Electoral Data","Description":"Offers a set of functions to easily download and clean \n Brazilian electoral data from the Superior Electoral Court website. \n Among others, the package retrieves data on local and\n federal elections for all positions (city councilor, mayor, state deputy,\n federal deputy, governor, and president) aggregated by\n state, city, and electoral zones. ","Published":"2017-06-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"elementR","Version":"1.3.1","Title":"A Set of R6 Classes & a Shiny Application for Reducing Elemental\nLA-ICPMS Data from Solid Structures","Description":"Aims to facilitate the reduction of elemental microchemistry data from solid-phase LA-ICPMS analysis (laser ablation inductively coupled plasma mass spectrometry). 
The elementR package provides a reactive and user friendly interface for conducting all steps needed for an optimal data reduction while leaving maximum control for user.","Published":"2017-04-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ElemStatLearn","Version":"2015.6.26","Title":"Data Sets, Functions and Examples from the Book: \"The Elements\nof Statistical Learning, Data Mining, Inference, and\nPrediction\" by Trevor Hastie, Robert Tibshirani and Jerome\nFriedman","Description":"Useful when reading the book above mentioned, in the\n documentation referred to as `the book'.","Published":"2015-06-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"elevatr","Version":"0.1.3","Title":"Access Elevation Data from Various APIs","Description":"Several web services are available that provide access to elevation\n data. This package provides access to several of those services and \n returns elevation data either as a SpatialPointsDataFrame from \n point elevation services or as a raster object from raster \n elevation services. Currently, the package supports access to the\n Mapzen Elevation Service , \n Mapzen Terrain Service ,\n Amazon Web Services Terrain Tiles and the USGS\n Elevation Point Query Service .","Published":"2017-03-16","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"elexr","Version":"1.0","Title":"Load Associated Press Election Results with Elex","Description":"Provides R access to election results data. 
Wraps elex (https://github.com/newsdev/elex/), a Python package and command line tool for fetching and parsing Associated Press election results.","Published":"2016-02-04","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"elhmc","Version":"1.0.0","Title":"Sampling from an Empirical Likelihood Bayesian Posterior of\nParameters Using Hamiltonian Monte Carlo","Description":"A tool to draw samples from an Empirical Likelihood Bayesian posterior\n of parameters using Hamiltonian Monte Carlo.","Published":"2016-09-28","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"elliplot","Version":"1.1.1","Title":"Ellipse Summary Plot of Quantiles","Description":"Correlation chart of two sets (x and y) of data, \n using quantiles. Visualizes the effect of a factor. ","Published":"2013-09-29","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ellipse","Version":"0.3-8","Title":"Functions for drawing ellipses and ellipse-like confidence\nregions","Description":"This package contains various routines for drawing\n ellipses and ellipse-like confidence regions, implementing the\n plots described in Murdoch and Chow (1996), A graphical display\n of large correlation matrices, The American Statistician 50,\n 178-180. There are also routines implementing the profile plots\n described in Bates and Watts (1988), Nonlinear Regression\n Analysis and its Applications.","Published":"2013-04-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"elliptic","Version":"1.3-7","Title":"Elliptic Functions","Description":"\n A suite of elliptic and related functions including Weierstrass and\n Jacobi forms. 
Also includes various tools for manipulating and\n visualizing complex functions.","Published":"2016-05-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"elmNN","Version":"1.0","Title":"Implementation of the ELM (Extreme Learning Machine) algorithm for\nSLFN (Single Hidden Layer Feedforward Neural Networks)","Description":"Training and prediction functions for SLFN (Single\n Hidden-layer Feedforward Neural Networks) using the ELM\n algorithm. The ELM algorithm differs from traditional\n gradient-based algorithms in its very short training times (it\n needs no iterative tuning, which makes learning\n very fast), and there is no need to set other parameters\n such as learning rate, momentum, or epochs.","Published":"2012-07-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ELMR","Version":"1.0","Title":"Extreme Machine Learning (ELM)","Description":"Training and prediction functions are provided for the Extreme Learning Machine algorithm (ELM). The ELM uses a Single Hidden Layer Feedforward Neural Network (SLFN) with randomly generated weights and no gradient-based backpropagation. The training time is very short, and the online version allows updating the model with small chunks of the training set at each iteration. The only parameters to tune are the hidden layer size and the learning function.","Published":"2015-11-28","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"eLNNpaired","Version":"0.2.3","Title":"Model-Based Gene Clustering for Genomics Data from\nPaired/Matched Designs","Description":"Perform model-based gene clustering for genomics data generated from paired/matched designs based on a mixture of extended lognormal normal Bayesian hierarchical models (See Li Y, Morrow J, Raby B, Tantisira K, Weiss ST, Huang W, Qiu W. 
(2017), ).","Published":"2017-04-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"EloChoice","Version":"0.29","Title":"Preference Rating for Visual Stimuli Based on Elo Ratings","Description":"Allows calculating global scores for characteristics of visual stimuli. Stimuli are presented as sequence of pairwise comparisons ('contests'), during each of which a rater expresses preference for one stimulus over the other. The algorithm for calculating global scores is based on Elo rating, which updates individual scores after each single pairwise contest. Elo rating is widely used to rank chess players according to their performance. Its core feature is that dyadic contests with expected outcomes lead to smaller changes of participants' scores than outcomes that were unexpected. As such, Elo rating is an efficient tool to rate individual stimuli when a large number of such stimuli are paired against each other in the context of experiments where the goal is to rank stimuli according to some characteristic of interest.","Published":"2015-10-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"EloRating","Version":"0.43","Title":"Animal Dominance Hierarchies by Elo Rating","Description":"Calculate Elo ratings as means to describe animal dominance hierarchies","Published":"2014-11-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"elrm","Version":"1.2.2","Title":"Exact Logistic Regression via MCMC","Description":"elrm implements a Markov Chain Monte Carlo algorithm to\n approximate exact conditional inference for logistic regression\n models. Exact conditional inference is based on the\n distribution of the sufficient statistics for the parameters of\n interest given the sufficient statistics for the remaining\n nuisance parameters. 
Using model formula notation, users\n specify a logistic model and model terms of interest for exact\n inference.","Published":"2013-12-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ElstonStewart","Version":"1.1","Title":"Elston-Stewart Algorithm","Description":"Flexible implementation of Elston-Stewart algorithm","Published":"2014-06-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ELT","Version":"1.6","Title":"Experience Life Tables","Description":"Build experience life tables.","Published":"2016-04-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ELYP","Version":"0.7-3","Title":"Empirical Likelihood Analysis for the Cox Model and\nYang-Prentice (2005) Model","Description":"Empirical likelihood ratio tests for the Yang and Prentice (short/long term hazards ratio) models. \n Empirical likelihood tests within a Cox model, for parameters defined via \n\t\t\t both baseline hazard function and regression parameters.","Published":"2015-08-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"EMA","Version":"1.4.5","Title":"Easy Microarray Data Analysis","Description":"We propose both a clear analysis strategy and a selection of tools to investigate microarray gene expression data. The most usual and relevant existing R functions were discussed, validated and gathered in an easy-to-use R package (EMA) devoted to gene expression microarray analysis. 
These functions were improved for ease of use, enhanced visualisation and better interpretation of results.","Published":"2016-09-07","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"EMAtools","Version":"0.1.2","Title":"Data Management Tools for Real-Time Monitoring/Ecological\nMomentary Assessment Data","Description":"Provides data management functions common in real-time monitoring (also called: ecological momentary assessment, experience sampling, or micro-longitudinal) data, including creating power curves for multilevel data, centering on participant means, and merging event-level data into momentary data sets where the events need\n to correspond to the nearest data point in the momentary data. This is VERY early release software, and more features will be added over time. ","Published":"2017-06-06","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"EMbC","Version":"2.0.0","Title":"Expectation-Maximization Binary Clustering","Description":"Unsupervised, multivariate, binary clustering for meaningful annotation of data, taking into account the uncertainty in the data. 
A specific constructor for trajectory movement analysis yields behavioural annotation of trajectories based on estimated local measures of velocity and turning angle, eventually with solar position covariate as a daytime indicator, (\"Expectation-Maximization Binary Clustering for Behavioural Annotation\").","Published":"2016-11-10","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"embryogrowth","Version":"6.5","Title":"Tools to Analyze the Thermal Reaction Norm of Embryo Growth","Description":"Tools to analyze the embryo growth and the sexualisation thermal reaction norms.","Published":"2017-03-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"EMC","Version":"1.3","Title":"Evolutionary Monte Carlo (EMC) algorithm","Description":"random walk Metropolis, Metropolis Hasting, parallel tempering, evolutionary Monte Carlo, temperature ladder construction and placement","Published":"2011-12-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"EMCC","Version":"1.3","Title":"Evolutionary Monte Carlo (EMC) Methods for Clustering","Description":"Evolutionary Monte Carlo methods for clustering, temperature\n ladder construction and placement. This package implements methods\n introduced in Goswami, Liu and Wong (2007) .\n The paper above introduced probabilistic genetic-algorithm-style crossover\n moves for clustering. 
The paper applied the algorithm to several clustering\n problems including Bernoulli clustering, biological sequence motif\n clustering, BIC based variable selection, mixture of Normals clustering,\n and showed that the proposed algorithm performed better both as a sampler\n and as a stochastic optimizer than the existing tools, namely, Gibbs sampling,\n ``split-merge'' Metropolis-Hastings algorithm, K-means clustering, and the\n MCLUST algorithm (in the package 'mclust').","Published":"2017-05-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Emcdf","Version":"0.1.1","Title":"Computation and Visualization of Empirical Joint Distribution\n(Empirical Joint CDF)","Description":"Computes and visualizes empirical joint distribution of multivariate data with optimized algorithms and multi-thread computation. There is a faster algorithm using dynamic programming to compute the whole empirical joint distribution of a bivariate data. There are optimized algorithms for computing empirical joint CDF function values for other multivariate data. Visualization is focused on bivariate data. Levelplots and wireframes are included.","Published":"2017-06-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"EMCluster","Version":"0.2-7","Title":"EM Algorithm for Model-Based Clustering of Finite Mixture\nGaussian Distribution","Description":"EM algorithms and several efficient\n initialization methods for model-based clustering of finite\n mixture Gaussian distribution with unstructured dispersion\n in both of unsupervised and semi-supervised learning.","Published":"2017-04-28","License":"Mozilla Public License 2.0","snapshot_date":"2017-06-23"} {"Package":"EMD","Version":"1.5.7","Title":"Empirical Mode Decomposition and Hilbert Spectral Analysis","Description":"This package carries out empirical mode decomposition and Hilbert spectral\n analysis. For usage of EMD, see Kim and Oh, 2009 (Kim, D and Oh, H.-S. 
(2009) EMD: A Package for Empirical \n Mode Decomposition and Hilbert Spectrum, The R Journal, 1, 40-46). ","Published":"2014-01-31","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"emdbook","Version":"1.3.9","Title":"Support Functions and Data for \"Ecological Models and Data\"","Description":"Auxiliary functions and data sets for \"Ecological Models and Data\", a book presenting maximum likelihood estimation and related topics for ecologists (ISBN 978-0-691-12522-0).","Published":"2016-02-11","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"emdi","Version":"1.1.0","Title":"Estimating and Mapping Disaggregated Indicators","Description":"Functions that support estimating, assessing and mapping regional\n disaggregated indicators. So far, estimation methods comprise direct estimation\n and the model-based approach Empirical Best Prediction (see \"Small area\n estimation of poverty indicators\" by Molina and Rao (2010) ), \n as well as their precision estimates. The assessment of the used model\n is supported by a summary and diagnostic plots. For a suitable presentation of\n estimates, map plots can be easily created. Furthermore, results can easily be\n exported to excel.","Published":"2017-05-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"emdist","Version":"0.3-1","Title":"Earth Mover's Distance","Description":"Package providing calculation of Earth Mover's Distance\n (EMD).","Published":"2012-12-02","License":"MIT","snapshot_date":"2017-06-23"} {"Package":"emg","Version":"1.0.6","Title":"Exponentially Modified Gaussian (EMG) Distribution","Description":"Provides basic distribution functions for a mixture model of a Gaussian and exponential distribution.","Published":"2015-04-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"emil","Version":"2.2.6","Title":"Evaluation of Modeling without Information Leakage","Description":"A toolbox for designing and evaluating predictive models with\n resampling methods. 
The aim of this package is to provide a simple and\n efficient general framework for working with any type of prediction\n problem, be it classification, regression or survival analysis, that is\n easy to extend and adapt to your specific setting. Some commonly used\n methods for classification, regression and survival analysis are included.","Published":"2016-06-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"emIRT","Version":"0.0.8","Title":"EM Algorithms for Estimating Item Response Theory Models","Description":"Various Expectation-Maximization (EM) algorithms are implemented for item response theory\n\t\t\t\t(IRT) models. The current implementation includes IRT models for binary and ordinal\n\t\t\t\tresponses, along with dynamic and hierarchical IRT models with binary responses. The\n\t\t\t\tlatter two models are derived and implemented using variational EM. Subsequent edits\n\t\t\t\talso include variational network and text scaling models.","Published":"2017-02-14","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"EmiStatR","Version":"1.2.0","Title":"Estimation of Wastewater Emissions in Combined Sewer Systems","Description":"The EmiStatR provides a fast and parallelised calculator to estimate combined wastewater emissions. \n It supports the planning and design of urban drainage systems, without the requirement of \n extensive simulation tools. The EmiStatR package implements modular R methods. This enables \n to add new functionalities through the R framework. 
Furthermore, EmiStatR was implemented \n with an interactive user interface with sliders and input data exploration.","Published":"2016-07-22","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"EML","Version":"1.0.3","Title":"Read and Write Ecological Metadata Language Files","Description":"Parse and serialize Ecological Metadata Language ('EML', see\n for\n more information) files into S4 objects.","Published":"2017-05-01","License":"FreeBSD","snapshot_date":"2017-06-23"} {"Package":"eMLEloglin","Version":"1.0.1","Title":"Fitting log-Linear Models in Sparse Contingency Tables","Description":"Log-linear modeling is a popular method for the analysis of contingency table data. When the table is sparse, the data can fall on the boundary of the convex support, and we say that \"the MLE does not exist\" in the sense that some parameters cannot be estimated. However, an extended MLE always exists, and a subset of the original parameters will be estimable. The 'eMLEloglin' package determines which sampling zeros contribute to the non-existence of the MLE. These problematic zero cells can be removed from the contingency table and the model can then be fit (as far as is possible) using the glm() function.","Published":"2016-11-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"emma","Version":"0.1-0","Title":"Evolutionary model-based multiresponse approach","Description":"The evolutionary model-based multiresponse approach (EMMA)\n is a novel methodology to process optimisation and product\n improvement. The approach is suitable to contexts in which the\n experimental cost and/or time limit the number of implementable\n trials.","Published":"2011-10-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"EMMAgeo","Version":"0.9.4","Title":"End-Member Modelling of Grain-Size Data","Description":"End-member modelling analysis of grain-size data. 
","Published":"2016-03-19","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"emme2","Version":"0.9","Title":"Read and Write to an EMME/2 databank","Description":"This package includes functions to read and write to an\n EMME/2 databank","Published":"2013-01-26","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"EMMIXcontrasts2","Version":"0.1.2","Title":"Contrasts in Mixed Effects for EMMIX Model with Random Effects 2","Description":"For forming contrasts in the mixed effects for mixtures of linear mixed models fitted to the gene profiles. ","Published":"2017-05-05","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"EMMIXcskew","Version":"0.9-4","Title":"Fitting Mixtures of CFUST Distributions","Description":"Functions to fit finite mixture of multivariate canonical fundamental skew t (FM-CFUST) distributions, random sample generation, 2D and 3D contour plots. ","Published":"2017-02-08","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"EMMIXskew","Version":"1.0.2","Title":"The EM Algorithm and Skew Mixture Distribution","Description":"EM algorithm for Fitting Mixture of Multivariate Skew Normal and Skew t Distributions. An implementation of the algorithm described in Wang, Ng, and McLachlan (2009) .","Published":"2017-05-08","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"EMMIXuskew","Version":"0.11-6","Title":"Fitting Unrestricted Multivariate Skew t Mixture Models","Description":"Functions to fit finite mixture of unrestricted multivariate skew t (FM-uMST) model, random sample generation, discriminant analysis, 2D and 3D contour plots ","Published":"2014-08-05","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"EMMLi","Version":"0.0.3","Title":"A Maximum Likelihood Approach to the Analysis of Modularity","Description":"Fit models of modularity to morphological landmarks. Perform model \n selection on results. 
Fit models with a single within-module correlation or\n with separate within-module correlations fitted to each module.","Published":"2017-02-17","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"EMMREML","Version":"3.1","Title":"Fitting Mixed Models with Known Covariance Structures","Description":"The main functions are 'emmreml', and 'emmremlMultiKernel'. 'emmreml' solves a mixed model with known covariance structure using the 'EMMA' algorithm. 'emmremlMultiKernel' is a wrapper for 'emmreml' to handle multiple random components with known covariance structures. The function 'emmremlMultivariate' solves a multivariate gaussian mixed model with known covariance structure using the 'ECM' algorithm.","Published":"2015-07-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"emoa","Version":"0.5-0","Title":"Evolutionary Multiobjective Optimization Algorithms","Description":"Collection of building blocks for the design and analysis\n of evolutionary multiobjective optimization algorithms.","Published":"2012-09-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"emojifont","Version":"0.5.0","Title":"Emoji and Font Awesome in Graphics","Description":"An implementation of using emoji and fontawesome for using in both\n base and 'ggplot2' graphics.","Published":"2017-04-27","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"emon","Version":"1.3.2","Title":"Tools for Environmental and Ecological Survey Design","Description":"Statistical tools for environmental and ecological surveys.\n Simulation-based power and precision analysis; detection probabilities from\n different survey designs; visual fast count estimation.","Published":"2017-03-09","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"emov","Version":"0.1.1","Title":"Eye Movement Analysis Package for Fixation and Saccade Detection","Description":"Fixation and saccade detection in eye movement recordings. 
This package implements a dispersion-based algorithm (I-DT) proposed by Salvucci & Goldberg (2000) which detects fixation duration and position.","Published":"2016-04-04","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"EMP","Version":"2.0.2","Title":"Expected Maximum Profit Classification Performance Measure","Description":"Functions for estimating EMP (Expected Maximum Profit Measure) in Credit Risk Scoring and Customer Churn Prediction, according to Verbraken et al (2013, 2014) , .","Published":"2017-05-11","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"EmpiricalCalibration","Version":"1.3.1","Title":"Routines for Performing Empirical Calibration of Observational\nStudy Estimates","Description":"Routines for performing empirical calibration of observational\n study estimates. By using a set of negative control hypotheses we can\n estimate the empirical null distribution of a particular observational\n study setup. This empirical null distribution can be used to compute a\n calibrated p-value, which reflects the probability of observing an\n estimated effect size when the null hypothesis is true taking both random\n and systematic error into account.","Published":"2017-05-16","License":"Apache License 2.0","snapshot_date":"2017-06-23"} {"Package":"empiricalFDR.DESeq2","Version":"1.0.3","Title":"Simulation-Based False Discovery Rate in RNA-Seq","Description":"Auxiliary functions for the DESeq2 package to simulate read counts according to the null hypothesis (i.e., with empirical sample size factors, per-gene total counts and dispersions, but without effects of predictor variables) and to compute the empirical false discovery rate.","Published":"2015-05-27","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"emplik","Version":"1.0-3","Title":"Empirical Likelihood Ratio for Censored/Truncated Data","Description":"Empirical likelihood ratio tests for means/quantiles/hazards\n \tfrom possibly censored and/or truncated data. 
Now does regression too.\n\tThis version contains some C code.","Published":"2016-08-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"emplik2","Version":"1.20","Title":"Empirical Likelihood Ratio Test for Two Samples with Censored\nData","Description":"Calculates the p-value for a mean-type hypothesis (or multiple mean-type hypotheses) based on two samples with censored data.","Published":"2015-02-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ems","Version":"0.3.1.6","Title":"Epimed Solutions Collection for Data Editing, Analysis, and\nBenchmarking of Health Units","Description":"Collection of functions for data analysis and\n editing. Most of them are related to benchmarking with prediction models.","Published":"2017-04-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"EMSaov","Version":"2.2","Title":"The Analysis of Variance with EMS","Description":"Provides the analysis of variance table including the expected mean squares (EMS) for various types of experimental design. When some variables are random effects or we use special experimental design such as nested design, repeated-measures design, or split-plot design, it is not easy to find the appropriate test, especially denominator for F-statistic which depends on EMS. ","Published":"2017-02-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"EMSC","Version":"0.8","Title":"Extended Multiplicative Signal Correction","Description":"Background correction of spectral like data. 
Handles variations in\n scaling, polynomial baselines, interferents, constituents and replicate variation.\n Parameters for corrections are stored for further analysis, and spectra are corrected\n accordingly.","Published":"2016-04-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"EMT","Version":"1.1","Title":"Exact Multinomial Test: Goodness-of-Fit Test for Discrete\nMultivariate data","Description":"The package provides functions to carry out a\n Goodness-of-fit test for discrete multivariate data. It tests\n whether a given observation is likely to have occurred under\n the assumption of an ab-initio model. A p-value can be\n calculated using different distance measures between observed\n and expected frequencies. A Monte Carlo method is provided to\n make the package capable of solving high-dimensional problems.","Published":"2013-01-29","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"emulator","Version":"1.2-15","Title":"Bayesian emulation of computer programs","Description":"\n This package allows one to estimate the output of a computer program,\n as a function of the input parameters, without actually running it.\n The computer program is assumed to be a Gaussian process, whose\n parameters are estimated using Bayesian techniques that give a PDF of\n expected program output. This PDF is conditional on a ``training set''\n of runs, each consisting of a point in parameter space and the model\n output at that point. The emphasis is on complex codes that take\n weeks or months to run, and that have a large number of undetermined\n input parameters; many climate prediction models fall into this\n class. The emulator essentially determines Bayesian posterior\n estimates of the PDF of the output of a model, conditioned on results\n from previous runs and a user-specified prior linear model. 
A \n working example is given in the help page for function `interpolant()',\n which should be the first point of reference.","Published":"2014-09-08","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"emuR","Version":"0.2.2","Title":"Main Package of the EMU Speech Database Management System","Description":"Provides the next iteration of the EMU Speech \n Database Management System (EMU-SDMS) with database management, data \n extraction, data preparation and data visualization facilities.","Published":"2017-03-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"EMVC","Version":"0.3","Title":"Entropy Minimization over Variable Clusters","Description":"Contains logic for the data-driven optimization of annotations via minimization of the entropy of variable group members over discrete variable clusters (see Frost, HR and Moore, JH (2014) ).","Published":"2017-04-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"enaR","Version":"2.10.0","Title":"Tools for Ecological Network Analysis","Description":"Provides algorithms for the analysis of ecological networks.","Published":"2017-03-11","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"encode","Version":"0.3","Title":"Represent Ordered Lists and Pairs as Strings","Description":"Interconverts between ordered lists and compact string notation. \n Useful for capturing code lists, and pair-wise codes and decodes, for text storage.\n Analogous to factor levels and labels. Generics encode() and decode()\n perform interconversion, while codes() and decodes() extract components of an encoding.\n The function encoded() checks whether something is interpretable as an encoding.","Published":"2017-04-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"endogenous","Version":"1.0","Title":"Classical Simultaneous Equation Models","Description":"Likelihood-based approaches to estimate linear regression parameters and treatment effects in the presence of endogeneity. 
Specifically, this package includes James Heckman's classical simultaneous equation models: the sample selection model for outcome selection bias and the hybrid model with structural shift for endogenous treatment. For more information, see the seminal paper of Heckman (1978) in which the details of these models are provided. This package accommodates repeated measures on subjects with a working independence approach. The hybrid model further accommodates treatment effect modification.","Published":"2016-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"endogMNP","Version":"0.2-1","Title":"R Package for Fitting Multinomial Probit Models with Endogenous\nSelection","Description":"endogMNP is an R package that fits a Bayesian multinomial\n probit model with endogenous selection, which is sometimes\n called an endogenous switching model. This can be used to\n model discrete choice data when respondents select themselves\n into one of several groups. This package is based on the MNP\n package by Kosuke Imai and David A. van Dyk. This package\n modifies their code.","Published":"2010-04-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"endorse","Version":"1.5.2","Title":"Bayesian Measurement Models for Analyzing Endorsement\nExperiments","Description":"Fit the hierarchical and non-hierarchical Bayesian measurement models proposed by Bullock, Imai, and Shapiro (2011) to analyze endorsement experiments. Endorsement experiments are a survey methodology for eliciting truthful responses to sensitive questions. This methodology is helpful when measuring support for socially sensitive political actors such as militant groups. The model is fitted with a Markov chain Monte Carlo algorithm and produces the output containing draws from the posterior distribution. 
","Published":"2017-04-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"endtoend","Version":"1.0","Title":"Transmissions and Receptions in an End to End Network","Description":"Computes the expectation of the number of transmissions and receptions considering an End-to-End transport model with limited number of retransmissions per packet. It provides theoretical results and also estimated values based on Monte Carlo simulations.","Published":"2016-09-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"energy","Version":"1.7-0","Title":"E-Statistics: Multivariate Inference via the Energy of Data","Description":"E-statistics (energy) tests and statistics for multivariate and univariate inference,\n including distance correlation, one-sample, two-sample, and multi-sample tests for\n comparing multivariate distributions, are implemented. Measuring and testing\n multivariate independence based on distance correlation, partial distance correlation,\n multivariate goodness-of-fit tests, clustering based on energy distance, testing for\n multivariate normality, distance components (disco) for non-parametric analysis of\n structured data, and other energy statistics/methods are implemented.","Published":"2016-08-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"energyr","Version":"0.1.1","Title":"Data Published by the United States Federal Energy Regulatory\nCommission","Description":"Data published by the United States Federal Energy Regulatory Commission including\n electric company financial data, natural gas company financial data, \n hydropower plant data, liquefied natural gas plant data, oil company financial data,\n and natural gas storage field data.","Published":"2016-09-24","License":"Apache License","snapshot_date":"2017-06-23"} {"Package":"english","Version":"1.1-2","Title":"Translate Integers into English","Description":"Allow numbers to be presented in an English 
language\n version, one, two, three, ...","Published":"2017-03-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"EngrExpt","Version":"0.1-8","Title":"Data sets from \"Introductory Statistics for Engineering\nExperimentation\"","Description":"Datasets from Nelson, Coffin and Copeland \"Introductory\n Statistics for Engineering Experimentation\" (Elsevier, 2003)\n with sample code.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"engsoccerdata","Version":"0.1.5","Title":"English and European Soccer Results 1871-2016","Description":"Analyzing English & European soccer results data from 1871-2016.","Published":"2016-06-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"enigma","Version":"0.3.0","Title":"Client for the 'Enigma' 'API'","Description":"The company 'Enigma' () holds many public\n 'datasets' from governments, companies, universities, and organizations.\n 'Enigma' provides an 'API' for data, 'metadata', and statistics on each of\n the 'datasets'. 'enigma' is a client to interact with the 'Enigma' 'API',\n including getting the data and 'metadata' for 'datasets' in 'Enigma', as \n well as collecting statistics on 'datasets'. In addition, you can download \n a 'gzipped' 'csv' file of a 'dataset' if you want the whole 'dataset'. 
An \n 'API' key from 'Enigma' is required to use 'enigma'.","Published":"2017-02-17","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ENiRG","Version":"1.0.1","Title":"Ecological Niche in R and GRASS","Description":"A set of tools for the analysis of the ecological niche of species and calculation of habitat suitability maps.","Published":"2016-10-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ENMeval","Version":"0.2.2","Title":"Automated Runs and Evaluations of Ecological Niche Models","Description":"Automatically partitions data into evaluation bins, executes ecological niche models across a range of settings, and calculates a variety of evaluation statistics. Current version only implements ENMs with Maxent (Phillips et al. 2006).","Published":"2017-01-10","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"ENmisc","Version":"1.2-7","Title":"Neuwirth miscellaneous","Description":"The ENmisc package contains utility functions for different\n purposes: mtapply and mlapply (multivariate versions of tapply\n and lapply), wtd.boxplot (a boxplot with weights), and a visual\n interface to restructuring mosaic plots.","Published":"2013-04-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"enpls","Version":"5.8","Title":"Ensemble Partial Least Squares Regression","Description":"An algorithmic framework for measuring feature importance,\n outlier detection, model applicability domain evaluation,\n and ensemble predictive modeling with (sparse)\n partial least squares regressions.","Published":"2017-03-25","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"EnQuireR","Version":"0.10","Title":"A package dedicated to questionnaires","Description":"A package dedicated to questionnaires","Published":"2010-07-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"enRich","Version":"3.0","Title":"An R Package for the Analysis of Multiple ChIP-Seq Data","Description":"An R package for 
joint statistical modelling of ChIP-seq data, accounting for technical/biological replicates, multiple conditions and different ChIP efficiencies of the individual experiments.","Published":"2015-10-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"enrichR","Version":"1.0","Title":"Provides an R Interface to 'Enrichr'","Description":"Provides an R interface to all 'Enrichr' databases, a web-based tool for analysing gene sets, and returns any enrichment of common annotated biological functions.","Published":"2017-04-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"enrichvs","Version":"0.0.5","Title":"Enrichment assessment of virtual screening approaches","Description":"These programs are used for calculating enrichment\n factors and drawing enrichment curves to evaluate virtual\n screening approaches.","Published":"2011-06-29","License":"BSD","snapshot_date":"2017-06-23"} {"Package":"enrichwith","Version":"0.0.4","Title":"Methods to Enrich R Objects with Extra Components","Description":"Provides the \"enrich\" method to enrich list-like R objects with new, relevant components. The current version has methods for enriching objects of class 'family', 'link-glm', 'lm' and 'glm'. The resulting objects preserve their class, so all methods associated with them still apply. The package also provides the 'enriched_glm' function that has the same interface as 'glm' but results in objects of class 'enriched_glm'. In addition to the usual components in a `glm` object, 'enriched_glm' objects carry an object-specific simulate method and functions to compute the scores, the observed and expected information matrix, the first-order bias, as well as model densities, probabilities, and quantiles at arbitrary parameter values. 
The package can also be used to produce customizable source code templates for the structured implementation of methods to compute new components and enrich arbitrary objects.","Published":"2017-05-12","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"EnsCat","Version":"1.1","Title":"Clustering of Categorical Data","Description":"An implementation of the clustering methods of categorical data\n discussed in Amiri, S., Clarke, B., and Clarke, J. (2015). Clustering categorical \n data via ensembling dissimilarity matrices. Preprint .","Published":"2017-02-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"EnsembleBase","Version":"1.0.2","Title":"Extensible Package for Parallel, Batch Training of Base Learners\nfor Ensemble Modeling","Description":"Extensible S4 classes and methods for batch training of regression and classification algorithms such as Random Forest, Gradient Boosting Machine, Neural Network, Support Vector Machines, K-Nearest Neighbors, Penalized Regression (L1/L2), and Bayesian Additive Regression Trees. These algorithms constitute a set of 'base learners', which can subsequently be combined together to form ensemble predictions. This package provides cross-validation wrappers to allow for downstream application of ensemble integration techniques, including best-error selection. All base learner estimation objects are retained, allowing for repeated prediction calls without the need for re-training. For large problems, an option is provided to save estimation objects to disk, along with prediction methods that utilize these objects. 
This allows users to train and predict with large ensembles of base learners without being constrained by system RAM.","Published":"2016-09-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ensembleBMA","Version":"5.1.4","Title":"Probabilistic Forecasting using Ensembles and Bayesian Model\nAveraging","Description":"Bayesian Model Averaging to create probabilistic forecasts\n from ensemble forecasts and weather observations.","Published":"2017-03-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"EnsembleCV","Version":"0.8","Title":"Extensible Package for Cross-Validation-Based Integration of\nBase Learners","Description":"Extends the base classes and methods of EnsembleBase package for cross-validation-based integration of base learners. Default implementation calculates average of repeated CV errors, and selects the base learner / configuration with minimum average error. The package takes advantage of the file method provided in EnsembleBase package for writing estimation objects to disk in order to circumvent RAM bottleneck. Special save and load methods are provided to allow estimation objects to be saved to permanent files on disk, and to be loaded again into temporary files in a later R session. The package can be extended, e.g. 
by adding variants of the current implementation.","Published":"2016-09-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ensembleMOS","Version":"0.7","Title":"Ensemble Model Output Statistics","Description":"Ensemble Model Output Statistics to create probabilistic\n forecasts from ensemble forecasts and weather observations.","Published":"2013-03-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"EnsemblePCReg","Version":"1.1.1","Title":"Extensible Package for Principal-Component-Regression-Based\nHeterogeneous Ensemble Meta-Learning","Description":"Extends the base classes and methods of 'EnsembleBase' package for Principal-Components-Regression-based (PCR) integration of base learners. Default implementation uses cross-validation error to choose the optimal number of PC components for the final predictor. The package takes advantage of the file method provided in 'EnsembleBase' package for writing estimation objects to disk in order to circumvent RAM bottleneck. Special save and load methods are provided to allow estimation objects to be saved to permanent files on disk, and to be loaded again into temporary files in a later R session. Users and developers can extend the package by extending the generic methods and classes provided in 'EnsembleBase' package as well as this package.","Published":"2016-09-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"EnsemblePenReg","Version":"0.7","Title":"Extensible Classes and Methods for Penalized-Regression-Based\nIntegration of Base Learners","Description":"Extending the base classes and methods of EnsembleBase package for Penalized-Regression-based (Ridge and Lasso) integration of base learners. Default implementation uses cross-validation error to choose the optimal lambda (shrinkage parameter) for the final predictor. 
The package takes advantage of the file method provided in EnsembleBase package for writing estimation objects to disk in order to circumvent RAM bottleneck. Special save and load methods are provided to allow estimation objects to be saved to permanent files on disk, and to be loaded again into temporary files in a later R session. Users and developers can extend the package by extending the generic methods and classes provided in EnsembleBase package as well as this package.","Published":"2016-09-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ensembleR","Version":"0.1.0","Title":"Ensemble Models in R","Description":"Functions to use ensembles of several machine learning models\n specified in caret package.","Published":"2016-09-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ensurer","Version":"1.1","Title":"Ensure Values at Runtime","Description":"Add simple runtime contracts to R values. These ensure that values\n fulfil certain conditions and will raise appropriate errors if they do not.","Published":"2015-04-21","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"entropart","Version":"1.4-7","Title":"Entropy Partitioning to Measure Diversity","Description":"Measurement and partitioning of diversity, based on Tsallis entropy.","Published":"2017-03-29","License":"GNU General Public License","snapshot_date":"2017-06-23"} {"Package":"entropy","Version":"1.2.1","Title":"Estimation of Entropy, Mutual Information and Related Quantities","Description":"This package implements various estimators of entropy, such\n as the shrinkage estimator by Hausser and Strimmer, the maximum likelihood \n and the Miller-Madow estimator, various Bayesian estimators, and the \n Chao-Shen estimator. 
It also offers an R interface to the NSB estimator.\n Furthermore, it provides functions for estimating Kullback-Leibler divergence,\n chi-squared, mutual information, and chi-squared statistic of independence.\n In addition there are functions for discretizing continuous random variables.","Published":"2014-11-14","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"EntropyEstimation","Version":"1.2","Title":"Estimation of Entropy and Related Quantities","Description":"Contains methods for the estimation of Shannon's entropy, variants of Renyi's entropy, mutual information, Kullback-Leibler divergence, and generalized Simpson's indices. The estimators used have a bias that decays exponentially fast. ","Published":"2015-01-04","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"EntropyExplorer","Version":"1.1","Title":"Tools for Exploring Differential Shannon Entropy, Differential\nCoefficient of Variation and Differential Expression","Description":"Rows of two matrices are compared for Shannon entropy,\n coefficient of variation, and expression. P-values can be requested for all metrics.","Published":"2015-06-11","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"EnvCpt","Version":"0.1.1","Title":"Detection of Structural Changes in Climate and Environment Time\nSeries","Description":"Tools for automatic model selection and diagnostics for Climate and Environmental data. In particular the envcpt() function does automatic model selection between a variety of trend, changepoint and autocorrelation models. The envcpt() function should be your first port of call.","Published":"2016-10-13","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"envDocument","Version":"2.3.0","Title":"Document the R Working Environment","Description":"Prints out information about the R working environment\n (system, R version,loaded and attached packages and versions) from a single\n function \"env_doc()\". 
Optionally adds information on git repository,\n tags, commits and remotes (if available).","Published":"2017-03-11","License":"GPL (>= 3.0)","snapshot_date":"2017-06-23"} {"Package":"enveomics.R","Version":"1.1.5","Title":"Various Utilities for Microbial Genomics and Metagenomics","Description":"A collection of functions for microbial ecology and other\n applications of genomics and metagenomics. Companion package for the\n Enveomics Collection (Rodriguez-R, L.M. and Konstantinidis, K.T., 2016\n ).","Published":"2017-04-04","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"enviGCMS","Version":"0.1.1","Title":"GC-MS Data Analysis for Environmental Science","Description":"Gas Chromatography-Mass Spectrometer (GC-MS) data analysis for environmental science. This package covers topics such as raw data processing, molecular isotope ratios, matrix effects, etc. in environmental analysis.","Published":"2016-11-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"enviPat","Version":"2.2","Title":"Isotope Pattern, Profile and Centroid Calculation for Mass\nSpectrometry","Description":"Fast and very memory-efficient calculation of isotope patterns,\n subsequent convolution to theoretical envelopes (profiles) plus valley\n detection and centroidization or intensoid calculation. Batch processing,\n resolution interpolation, wrapper, adduct calculations and molecular\n formula parsing.","Published":"2016-10-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"enviPick","Version":"1.5","Title":"Peak Picking for High Resolution Mass Spectrometry Data","Description":"Sequential partitioning, clustering and peak detection of\n centroided LC-MS mass spectrometry data (.mzXML). 
Interactive result and raw\n data plot.","Published":"2016-06-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"envirem","Version":"1.1","Title":"Generation of ENVIREM Variables","Description":"Generation of bioclimatic rasters that will be particularly \n\tuseful for species distribution modeling. ","Published":"2017-02-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"EnviroPRA","Version":"1.0","Title":"Environmental Probabilistic Risk Assessment Tools","Description":"Methods to perform a Probabilistic Environmental Risk Assessment from exposure to toxic substances (i.e., USEPA 1997).","Published":"2017-02-23","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"EnviroStat","Version":"0.4-2","Title":"Statistical Analysis of Environmental Space-Time Processes","Description":"Functions and datasets to support the book by Nhu D Le and James V Zidek, Springer (2006).","Published":"2015-06-03","License":"AGPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"envlpaster","Version":"0.1-2","Title":"Enveloping the Aster Model","Description":"Envelope methodology and aster modeling are combined to provide users with precise estimation of expected Darwinian fitness.","Published":"2016-04-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"EnvNicheR","Version":"1.4","Title":"Niche Estimation","Description":"A plot overlaying the niche of multiple species is obtained: 1) to determine the niche conditions which favor a higher species richness, 2) to create a box plot with the range of environmental variables of the species, 3) to obtain a list of species in an area of the niche selected by the user and, 4) to estimate niche overlap among the species.","Published":"2016-08-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"EnvStats","Version":"2.2.1","Title":"Package for Environmental Statistics, Including US EPA Guidance","Description":"Graphical and statistical analyses of environmental data, with \n 
focus on analyzing chemical concentrations and physical parameters, usually in \n the context of mandated environmental monitoring. Major environmental \n statistical methods found in the literature and regulatory guidance documents, \n with extensive help that explains what these methods do, how to use them, \n and where to find them in the literature. Numerous built-in data sets from \n regulatory guidance documents and environmental statistics literature. Includes \n scripts reproducing analyses presented in the book \"EnvStats: An R Package for \n Environmental Statistics\" (Millard, 2013, Springer, ISBN 978-1-4614-8455-4, \n ).","Published":"2017-01-14","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"epade","Version":"0.3.8","Title":"Easy Plots","Description":"A collection of nice plotting functions directly from a\n data.frame with limited customisation possibilities.","Published":"2013-02-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"epandist","Version":"1.1.1","Title":"Statistical Functions for the Censored and Uncensored\nEpanechnikov Distribution","Description":"Analyzing censored variables usually requires the use of optimization algorithms. This package provides an alternative algebraic approach to the task of determining the expected value of a random censored variable with a known censoring point. Likewise this approach allows for the determination of the censoring point if the expected value is known. These results are derived under the assumption that the variable follows an Epanechnikov kernel distribution with known mean and range prior to censoring. 
Statistical functions related to the uncensored Epanechnikov distribution are also provided by this package.","Published":"2016-02-04","License":"LGPL","snapshot_date":"2017-06-23"} {"Package":"epanetReader","Version":"0.5.1","Title":"Read Epanet Files into R","Description":"Reads water network simulation data in 'Epanet' text-based\n '.inp' and '.rpt' formats into R. Also reads results from 'Epanet-msx'.\n Provides basic summary information and plots. The README file has a \n quick introduction. See \n for more information on the 'Epanet' software for modeling\n hydraulic and water quality behavior of water piping systems.","Published":"2017-05-26","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"EPGLM","Version":"1.1.2","Title":"Gaussian Approximation of Bayesian Binary Regression Models","Description":"The main functions compute the expectation propagation approximation of Bayesian probit/logit models with a Gaussian prior. More information can be found in Chopin and Ridgway (2015). More models and priors should follow.","Published":"2016-08-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Epi","Version":"2.15","Title":"A Package for Statistical Analysis in Epidemiology","Description":"Functions for demographic and epidemiological analysis in\n the Lexis diagram, i.e. register and cohort follow-up data, in\n particular representation, manipulation and simulation of multi state \n data - the Lexis suite of functions, which includes interfaces to\n 'mstate', 'etm' and 'cmprsk' packages. \n Also contains functions for Age-Period-Cohort and Lee-Carter\n modeling and a function for interval censored data and some useful\n functions for tabulation and plotting, as well as a number of\n epidemiological data sets. 
","Published":"2017-05-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"epibasix","Version":"1.3","Title":"Elementary Epidemiological Functions for Epidemiology and\nBiostatistics","Description":"This package contains elementary tools for analysis of\n common epidemiological problems, ranging from sample size\n estimation, through 2x2 contingency table analysis and basic\n measures of agreement (kappa, sensitivity/specificity).\n Appropriate print and summary statements are also written to\n facilitate interpretation wherever possible. Source code is\n commented throughout to facilitate modification. The target\n audience includes advanced undergraduate and graduate students\n in epidemiology or biostatistics courses, and clinical\n researchers.","Published":"2012-11-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"EpiBayes","Version":"0.1.2","Title":"Implements Hierarchical Bayesian Models for Epidemiological\nApplications","Description":"Hierarchical Bayesian models for use in disease \n\t\tfreedom and disease prevalence studies, designed \n\t\twith epidemiological applications in mind. The models \n\t\tthemselves are in the spirit of those presented in Branscum \n\t\tet al. (2006) (see package documentation for full \n\t\treference). The helper functions and methods were designed \n\t\tto make implementation and processing of the complex model \n\t\toutput relatively simple in application.","Published":"2015-06-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"epicontacts","Version":"1.0.1","Title":"Handling, Visualisation and Analysis of Epidemiological Contacts","Description":"A collection of tools for representing epidemiological contact data, composed of case line lists and contacts between cases. 
Also contains procedures for data handling, interactive graphics, and statistics.","Published":"2017-05-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"EpiContactTrace","Version":"0.10.0","Title":"Epidemiological Tool for Contact Tracing","Description":"Routines for epidemiological contact tracing\n and visualisation of network of contacts.","Published":"2017-01-27","License":"EUPL","snapshot_date":"2017-06-23"} {"Package":"EpiCurve","Version":"1.1-0","Title":"Plot an Epidemic Curve","Description":"Creates simple or stacked epidemic curves for hourly, daily, weekly or monthly incidence data.","Published":"2017-06-18","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"epidata","Version":"0.1.0","Title":"Tools to Retrieve Economic Policy Institute Data Library\nExtracts","Description":"The Economic Policy Institute () provides\n researchers, media, and the public with easily accessible, up-to-date, and\n comprehensive historical data on the American labor force. It is compiled\n from Economic Policy Institute analysis of government data sources. Use\n it to research wages, inequality, and other economic indicators over time\n and among demographic groups. 
Data is usually updated monthly.","Published":"2017-01-08","License":"AGPL","snapshot_date":"2017-06-23"} {"Package":"epiDisplay","Version":"3.2.2.0","Title":"Epidemiological Data Display Package","Description":"Package for data exploration and result presentation.\n Full 'epicalc' package with data management functions is available \n at the author's repository.","Published":"2015-11-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"EpiDynamics","Version":"0.3.0","Title":"Dynamic Models in Epidemiology","Description":"Mathematical models of infectious diseases in humans and animals.\n Both deterministic and stochastic models can be simulated and plotted.","Published":"2015-12-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"EpiEstim","Version":"1.1-2","Title":"EpiEstim: a package to estimate time varying reproduction\nnumbers from epidemic curves","Description":"This package provides tools to quantify transmissibility\n throughout an epidemic from the analysis of time series of\n incidence.","Published":"2013-03-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"epifit","Version":"0.1.2","Title":"Flexible Modelling Functions for Epidemiological Data Analysis","Description":"Provides flexible model fitting used in epidemiological data\n analysis by a unified model specification, along with some data manipulation\n functions. This package covers fitting of a variety of models, including Cox\n regression models, linear regression models, Poisson regression models, logistic\n models and others whose likelihood is expressed in negative binomial, gamma and\n Weibull distributions.","Published":"2017-01-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"EpiILM","Version":"1.2","Title":"Spatial and Network Based Individual Level Models for Epidemics","Description":"Provides tools for simulating from discrete-time individual level models for infectious disease data analysis. 
This epidemic model class contains spatial and contact-network based models with two disease types: Susceptible-Infectious (SI) and Susceptible-Infectious-Removed (SIR). ","Published":"2017-04-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"EpiModel","Version":"1.5.0","Title":"Mathematical Modeling of Infectious Disease","Description":"Tools for simulating mathematical models of infectious disease. \n Epidemic model classes include deterministic compartmental models, stochastic \n agent-based models, and stochastic network models. Network models use the\n robust statistical methods of exponential-family random graph models (ERGMs) \n from the Statnet suite of software packages in R. Standard templates for epidemic \n modeling include SI, SIR, and SIS disease types. EpiModel features \n an easy API for extending these templates to address novel scientific research aims.","Published":"2017-06-01","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"epinet","Version":"2.1.7","Title":"Epidemic/Network-Related Tools","Description":"A collection of epidemic/network-related tools. Simulates transmission of diseases through contact networks. Performs Bayesian inference on network and epidemic parameters, given epidemic data.","Published":"2016-02-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"epiR","Version":"0.9-82","Title":"Tools for the Analysis of Epidemiological Data","Description":"Tools for the analysis of epidemiological data. Contains functions for directly and indirectly adjusting measures of disease frequency, quantifying measures of association on the basis of single or multiple strata of count data presented in a contingency table, and computing confidence intervals around incidence risk and incidence rate estimates. 
Miscellaneous functions for use in meta-analysis, diagnostic test interpretation, and sample size calculations.","Published":"2017-04-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"episensr","Version":"0.8.0","Title":"Basic Sensitivity Analysis of Epidemiological Results","Description":"Basic sensitivity analysis of the observed relative risks adjusting\n for unmeasured confounding and misclassification of the\n exposure/outcome, or both. It follows the bias analysis methods and\n examples from the book by Lash T.L., Fox M.P., and Fink A.K.\n \"Applying Quantitative Bias Analysis to Epidemiologic Data\",\n ('Springer', 2009).","Published":"2017-01-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"episheet","Version":"0.2.0","Title":"Rothman's Episheet","Description":"A collection of R functions supporting the text book\n Modern Epidemiology, Second Edition, by Kenneth J. Rothman and Sander Greenland.\n ISBN 13: 978-0781755641. See for more information.","Published":"2016-09-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"episplineDensity","Version":"0.0-1","Title":"Density Estimation with Soft Information by Exponential\nEpi-splines","Description":"Produce one-dimensional density estimates using \n exponential epi-splines. 
The user may incorporate soft information by \n imposing constraints that (i) require unimodality; (ii) require that the \n density be monotone non-increasing or non-decreasing; (iii) put upper bounds\n on first or second moments; (iv) bound the density's values at mesh points;\n (v) require that the estimate be continuous or continuously differentiable;\n and more.","Published":"2014-11-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"epistasis","Version":"0.0.1-1","Title":"Detecting Epistatic Selection with Partially Observed Genotype\nData","Description":"An efficient multi-core package to reconstruct an underlying network of\n genomic signatures of high-dimensional epistatic selection from \n partially observed genotype data. The phenotype that we consider is viability. \n\t\t\t The network captures the conditionally dependent short- and long-range linkage \n\t\t\t disequilibrium structure of genomes and thus reveals aberrant marker-marker \n\t\t\t associations that are due to epistatic selection. We target high-dimensional\n\t\t\t genotype data where the number of variables (markers) is larger than the number of \n\t\t\t samples (p >> n). 
The computations are memory-optimized using the sparse \n\t\t\t matrix output.","Published":"2016-11-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"EpistemicGameTheory","Version":"0.1.2","Title":"Constructing an Epistemic Model for the Games with Two Players","Description":"Constructing an epistemic model such that, for every player i and for every choice c(i) which is optimal, there is one type that expresses common belief in rationality.","Published":"2017-05-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"epitools","Version":"0.5-9","Title":"Epidemiology Tools","Description":"Tools for training and practicing epidemiologists including methods for two-way and multi-way contingency tables.","Published":"2017-04-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"EpiWeek","Version":"1.1","Title":"Conversion Between Epidemiological Weeks and Calendar Dates","Description":"Users can easily derive the calendar dates from epidemiological weeks, and vice versa.","Published":"2016-04-28","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Eplot","Version":"1.0","Title":"Plotting longitudinal series","Description":"Aim: Adjust the graphical parameters to create nicer\n longitudinal series plots. The default set of graphical parameters is very\n general, and can be improved upon when we are interested in plotting data\n points observed over time. 
Functions facilitate plotting those kinds of\n series, univariate plots, bivariate plots (with vertical axis on both left\n and right hand sides), multivariate plots and plots which allow one to examine\n whether a new observation is 'unusual' via construction and visualization\n of prediction intervals around it.","Published":"2014-08-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"eply","Version":"0.1.0","Title":"Apply a Function Over Expressions","Description":"Evaluate a function over a data frame of expressions.","Published":"2017-01-27","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"epoc","Version":"0.2.5-1","Title":"EPoC (Endogenous Perturbation analysis of Cancer)","Description":"Estimates sparse matrices A or G using fast lasso regression from mRNA transcript levels Y and CNA profiles U. Two models are provided: EPoC A, where\n AY + U + R = 0,\n and EPoC G, where\n Y = GU + E;\n the matrices R and E are so far treated as noise. For details see the reference and the manual page of `lassoshooting'.","Published":"2013-08-26","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"epr","Version":"2.0","Title":"Easy polynomial regression","Description":"The package performs analysis of polynomial regression in simple designs with quantitative treatments.","Published":"2013-07-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"epxToR","Version":"0.2-0","Title":"Import 'Epidata' XML Files '.epx'","Description":"Import data from 'Epidata' XML files '.epx' and convert it to R data structures.","Published":"2017-03-07","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"EQL","Version":"1.0-0","Title":"Extended-Quasi-Likelihood-Function (EQL)","Description":"Computation of the EQL for a given family of variance\n functions, Saddlepoint-approximations and related auxiliary\n functions (e.g. 
Hermite polynomials).","Published":"2009-09-28","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"eqs2lavaan","Version":"3.0","Title":"EQS Output Conversion to lavaan Functions","Description":"Transitioning from EQS to R for structural equation modeling (SEM)\n is made easier with a set of functions to convert .out files into R code.\n The EQS output can be converted into lavaan syntax and run in the R\n environment. Other functions parse descriptive statistics and the covariance matrix\n from an EQS .out file. A heat map plot of a covariance matrix is also\n included.","Published":"2013-11-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"eqtl","Version":"1.1-7","Title":"Tools for analyzing eQTL experiments: A complement to Karl\nBroman's 'qtl' package for genome-wide analysis","Description":"Analysis of experimental crosses to identify genes (called\n quantitative trait loci, QTLs) contributing to variation in\n quantitative traits.","Published":"2012-03-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"equate","Version":"2.0.6","Title":"Observed-Score Linking and Equating","Description":"Contains methods for observed-score linking\n and equating under the single-group, equivalent-groups,\n and nonequivalent-groups with anchor test(s) designs.\n Equating types include identity, mean, linear, general\n linear, equipercentile, circle-arc, and composites of\n these. Equating methods include synthetic, nominal\n weights, Tucker, Levine observed score, Levine true\n score, Braun/Holland, frequency estimation, and chained\n equating. 
Plotting and summary methods, and methods for\n multivariate presmoothing and bootstrap error estimation\n are also provided.","Published":"2017-01-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"equateIRT","Version":"2.0-3","Title":"Direct, Chain and Average Equating Coefficients with Standard\nErrors Using IRT Methods","Description":"Computation of direct, chain and average (bisector) equating coefficients with standard errors using Item Response Theory (IRT) methods for dichotomous items. Test scoring can be performed by true score equating and observed score equating methods.","Published":"2017-06-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"equivalence","Version":"0.7.2","Title":"Provides Tests and Graphics for Assessing Tests of Equivalence","Description":"Provides statistical tests and graphics for assessing tests\n of equivalence. Such tests have similarity as the alternative\n\thypothesis instead of the null. Sample data sets are included.","Published":"2016-05-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"equSA","Version":"1.1.2","Title":"Estimate Graphical Models from Multiple Types of Datasets and\nConstruct Networks","Description":"Provides an equivalent measure of partial correlation coefficients for high-dimensional Gaussian Graphical Models to learn and visualize the underlying relationships between variables from single or multiple datasets. You can refer to for more detail. 
Based on this method, the package also provides a method for constructing networks for Next Generation Sequencing Data.","Published":"2017-06-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"erah","Version":"1.0.5","Title":"Automated Spectral Deconvolution, Alignment, and Metabolite\nIdentification in GC/MS-Based Untargeted Metabolomics","Description":"Automated compound deconvolution, alignment across samples, and identification of metabolites by spectral library matching in Gas Chromatography - Mass spectrometry (GC-MS) untargeted metabolomics. Outputs a table with compound names, matching scores and the integrated area of the compound for each sample.","Published":"2017-01-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"erboost","Version":"1.3","Title":"Nonparametric Multiple Expectile Regression via ER-Boost","Description":"Expectile regression is a useful tool for estimating the conditional expectiles of a response variable given a set of covariates. This package implements a regression tree based gradient boosting estimator for nonparametric multiple expectile regression. ","Published":"2015-08-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"erer","Version":"2.5","Title":"Empirical Research in Economics with R","Description":"Functions, datasets, and sample codes related to the book 'Empirical Research in Economics: Growing up with R' by Dr. Changyou Sun are included. Marginal effects for binary or ordered choice models can be calculated. Static and dynamic Almost Ideal Demand System (AIDS) models can be estimated. 
A typical event analysis in finance can be conducted with several functions included.","Published":"2016-02-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ergm","Version":"3.7.1","Title":"Fit, Simulate and Diagnose Exponential-Family Models for\nNetworks","Description":"An integrated set of tools to analyze and simulate networks based on exponential-family random graph models (ERGMs). 'ergm' is a part of the Statnet suite of packages for network analysis.","Published":"2017-03-21","License":"GPL-3 + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ergm.count","Version":"3.2.2","Title":"Fit, Simulate and Diagnose Exponential-Family Models for\nNetworks with Count Edges","Description":"A set of extensions for the 'ergm' package to fit weighted networks whose edge weights are counts.","Published":"2016-03-29","License":"GPL-3 + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ergm.ego","Version":"0.3.0","Title":"Fit, Simulate and Diagnose Exponential-Family Random Graph\nModels to Egocentrically Sampled Network Data","Description":"Utilities for managing egocentrically sampled network data and a wrapper around the 'ergm' package to facilitate ERGM inference and simulation from such data.","Published":"2016-04-19","License":"GPL-3 + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ergm.graphlets","Version":"1.0.3","Title":"ERG Modeling Based on Graphlet Properties","Description":"Integrates graphlet statistics based model terms for use in exponential-family random graph models ('ergm') as part of the 'statnet' suite of packages.","Published":"2015-06-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ergm.rank","Version":"1.0.1","Title":"Fit, Simulate and Diagnose Exponential-Family Models for\nRank-Order Relational Data","Description":"A set of extensions for the 'ergm' package to fit weighted networks whose edge weights are ranks.","Published":"2016-04-19","License":"GPL-3 + file 
LICENSE","snapshot_date":"2017-06-23"} {"Package":"ergm.userterms","Version":"3.1.1","Title":"User-specified terms for the statnet suite of packages","Description":"A template package to demonstrate the use of user-specified statistics for use in \"ergm\" models as part of the \"statnet\" suite of packages.","Published":"2013-11-28","License":"GPL-3 + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ergmharris","Version":"1.0","Title":"Local Health Department network data set","Description":"Data for use with the Sage Introduction to Exponential\n Random Graph Modeling text by Jenine K. Harris. Network data\n set consists of 1283 local health departments and the\n communication links among them along with several attributes.","Published":"2013-05-11","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"eRm","Version":"0.15-7","Title":"Extended Rasch Modeling","Description":"Fits Rasch models (RM), linear logistic test models (LLTM), rating scale model (RSM), linear rating scale models (LRSM), partial credit models (PCM), and linear partial credit models (LPCM). Missing values are allowed in the data matrix. Additional features are the ML estimation of the person parameters, Andersen's LR-test, item-specific Wald test, Martin-Löf-Test, nonparametric Monte-Carlo Tests, itemfit and personfit statistics including infit and outfit measures, various ICC and related plots, automated stepwise item elimination, simulation module for various binary data matrices.","Published":"2016-11-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ERP","Version":"1.1","Title":"Significance Analysis of Event-Related Potentials Data","Description":"The functions provided in the package ERP are designed for the significance analysis of ERP data in a linear model framework. 
The possible procedures are either the collection of FDR or FWER controlling methods available in the generic function p.adjust, the same collection combined with a factor modeling of the time dependence among tests (see Sheu, Perthame, Lee and Causeur, 2016), or the Guthrie and Buchwald (1991) test.","Published":"2015-12-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"erp.easy","Version":"1.1.0","Title":"Event-Related Potential (ERP) Data Exploration Made Easy","Description":"A set of user-friendly functions to aid in organizing, plotting\n and analyzing event-related potential (ERP) data. Provides an easy-to-learn\n method to explore ERP data. Should be useful to those without a background\n in computer programming, and to those who are new to ERPs (or new to the\n more advanced ERP software available). Emphasis has been placed on highly\n automated processes using functions with as few arguments as possible.\n Expects processed (cleaned) data.","Published":"2017-03-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"erpR","Version":"0.2.0","Title":"Event-related potentials (ERP) analysis, graphics and utility\nfunctions","Description":"This package is dedicated to the analysis of event-related potentials (ERPs). Event-related potentials are the measured brain responses associated with a specific sensory, cognitive, or motor event and are obtained from the electroencephalographic (EEG) signal. 
The erpR package contains a series of functions for importing ERP data, computing traditional ERP measures, exploratory ERP analyses and plotting.","Published":"2014-05-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"errint","Version":"1.0","Title":"Builds Error Intervals","Description":"Builds and analyzes error intervals for a particular model's predictions, assuming different distributions for the noise in the data.","Published":"2017-01-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"errorizer","Version":"0.2.1","Title":"Function Errorizer","Description":"Provides a function to convert existing R functions into \"errorized\" versions \n with added logging and handling functionality when encountering errors or warnings. \n The errorize function accepts an existing R function as its first argument and \n returns an R function with the exact same arguments and functionality. However, \n if an error or warning occurs when running that \"errorized\" R function, it will save a \n .Rds file to the current working directory with the relevant objects and information \n required to immediately recreate the error. 
","Published":"2016-12-11","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"errorlocate","Version":"0.1.2","Title":"Locate Errors with Validation Rules","Description":"Errors in data can be located and removed using validation rules from package 'validate'.","Published":"2016-12-30","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"errors","Version":"0.0.2","Title":"Error Propagation for R Vectors","Description":"Support for painless automatic error propagation in numerical operations.","Published":"2017-06-07","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ES","Version":"1.0","Title":"Edge Selection","Description":"Implementation of the Edge Selection Algorithm.","Published":"2013-08-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"esaBcv","Version":"1.2.1","Title":"Estimate Number of Latent Factors and Factor Matrix for Factor\nAnalysis","Description":"These functions estimate the latent factors of a given matrix, whether it is high-dimensional or not. They first estimate the number of factors using bi-cross-validation and then estimate the latent factor matrix and the noise variances. For more information about the method, see the 2015 archived article on factor models by Art B. Owen and Jingshu Wang (http://arxiv.org/abs/1503.03515). ","Published":"2015-05-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"esaddle","Version":"0.0.3","Title":"Extended Empirical Saddlepoint Density Approximation","Description":"Tools for fitting the Extended Empirical Saddlepoint (EES) density.","Published":"2017-01-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"esc","Version":"0.3.0","Title":"Effect Size Computation for Meta Analysis","Description":"Implementation of the web-based 'Practical Meta-Analysis Effect Size\n Calculator' from David B. Wilson ()\n in R. 
Based on the input, the effect size can be returned as standardized mean \n difference, Hedges' g, correlation coefficient r or Fisher's transformation z, \n odds ratio or log odds effect size.","Published":"2017-04-04","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ESEA","Version":"1.0","Title":"ESEA: Discovering the Dysregulated Pathways based on Edge Set\nEnrichment Analysis","Description":"The package can identify the dysregulated canonical pathways by investigating the changes of biological relationships of pathways in the context of gene expression data. (1) The ESEA package constructs a background set of edges by extracting pathway structure (e.g. interaction, regulation, modification, and binding etc.) from the seven public databases (KEGG; Reactome; Biocarta; NCI; SPIKE; HumanCyc; Panther) and the edge sets of pathways for each of the above databases. (2) The ESEA package can quantify the change of correlation between genes for each edge based on gene expression data with cases and controls. (3) The ESEA package uses the weighted Kolmogorov-Smirnov statistic to calculate an edge enrichment score (EES), which reflects the degree to which a given pathway is associated with the specific phenotype. (4) The ESEA package can provide the visualization of the results.","Published":"2015-01-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ESG","Version":"0.1","Title":"ESG - A package for asset projection","Description":"The package presents a \"Scenarios\" class containing\n general parameters, risk parameters and projection results.\n Risk parameters are gathered together into a ParamsScenarios\n sub-object. 
The general process for using this package is to\n set all needed parameters in a Scenarios object, use the\n customPathsGeneration method to proceed to the projection, then\n use xxx_PriceDistribution() methods to get asset prices.","Published":"2013-01-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ESGtoolkit","Version":"0.1","Title":"Toolkit for the simulation of financial assets and interest\nrates models","Description":"Toolkit for Monte Carlo simulations of financial assets and\n interest rates models, involved in an Economic Scenario Generator (ESG).\n The underlying simulation loops have been implemented in C++.","Published":"2014-06-13","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"eshrink","Version":"0.1.0","Title":"Shrinkage for Effect Estimation","Description":"Computes shrinkage estimators for regression problems. Selects\n penalty parameter by minimizing bias and variance in the effect estimate, where bias and variance are estimated from the posterior predictive distribution.","Published":"2017-04-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ESKNN","Version":"1.0","Title":"Ensemble of Subset of K-Nearest Neighbours Classifiers for\nClassification and Class Membership Probability Estimation","Description":"Functions for classification and group membership probability estimation are given. \n The issue of non-informative features in the data is addressed by utilizing the ensemble method. \n A few optimal models are selected in the ensemble from an initially large set of base k-nearest neighbours (KNN) models, generated on subsets of features from the training data.\n A two stage assessment is applied in selection of optimal models for the ensemble in the training function. \n The prediction functions for classification and class membership probability estimation return class outcomes and class membership probability estimates for the test data. 
\n The package includes measures of classification error and Brier score, for classification and probability estimation tasks respectively. ","Published":"2015-09-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"esmisc","Version":"0.0.3","Title":"Misc Functions of Eduard Szöcs","Description":"Misc functions programmed by Eduard Szöcs. \n Provides read_regnie() to read gridded precipitation data from German Weather \n Service (DWD, see for more information).","Published":"2017-01-11","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"estatapi","Version":"0.3.0","Title":"R Interface to e-Stat API","Description":"Provides an interface to e-Stat API, the one-stop service for official statistics of the Japanese government.","Published":"2016-08-11","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"EstCRM","Version":"1.4","Title":"Calibrating Parameters for Samejima's Continuous IRT Model","Description":"Estimates item and person parameters for Samejima's Continuous Response Model (CRM), computes item fit residual statistics, draws empirical 3D item category response curves, draws theoretical 3D item category response curves, and generates data under the CRM for simulation studies.","Published":"2015-07-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"EstHer","Version":"1.0","Title":"Estimation of Heritability in High Dimensional Sparse Linear\nMixed Models using Variable Selection","Description":"Our method is a variable selection method to select active components in sparse linear mixed models in order to estimate the heritability. The selection allows us to reduce the size of the data sets which improves the accuracy of the estimations. 
Our package also provides a confidence interval for the estimated heritability.","Published":"2015-07-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"estimability","Version":"1.2","Title":"Tools for Assessing Estimability of Linear Predictions","Description":"Provides tools for determining estimability of linear functions of regression coefficients, \n and 'epredict' methods that handle non-estimable cases correctly.","Published":"2016-11-19","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"EstimateGroupNetwork","Version":"0.1.2","Title":"Performs the Joint Graphical Lasso and Selects Tuning Parameters","Description":"Can be used to simultaneously estimate networks (Gaussian Graphical Models) in data from different groups or classes via Joint Graphical Lasso. Tuning parameters are selected via information criteria (AIC / BIC / eBIC) or crossvalidation.","Published":"2017-03-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"estout","Version":"1.2","Title":"Estimates Output","Description":"This package is intended to speed up the process of\n creating model-comparing tables common in Macroeconomics. The\n function collection stores the estimates of several models and\n formats them into a table with starred estimates and standard errors\n below. The default output is LaTeX but output to CSV for later\n editing in a spreadsheet tool is possible as well. It works for\n linear models (lm) and panel models from the \"plm\"-package\n (plm). 
Two further implemented functions \"descsto\" and\n \"desctab\" enable you to export descriptive statistics of\n data-frames and single variables to LaTeX and CSV.","Published":"2013-02-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"EstSimPDMP","Version":"1.2","Title":"Estimation and Simulation for PDMPs","Description":"This package deals with the estimation of the jump rate for piecewise-deterministic Markov processes (PDMPs), from a single observation of the process over a long time. The main functions provide an estimate of the jump rate function. The state space may be discrete or continuous. The associated paper has been published in Scandinavian Journal of Statistics and is given in references. Other functions provide a method to simulate random variables from their (conditional) hazard rate, and then to simulate PDMPs.","Published":"2014-11-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"etable","Version":"1.2.0","Title":"Easy Table","Description":"A table function for descriptive statistics in tabular format, using variables in a data.frame. You can create simple or highly customized tables.","Published":"2013-12-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ETAS","Version":"0.4.1","Title":"Modeling Earthquake Data Using ETAS Model","Description":"Fits the space-time Epidemic Type Aftershock Sequence\n (ETAS) model to earthquake catalogs using a stochastic declustering \n approach. The ETAS model is a spatio-temporal marked point process\n model and a special case of the Hawkes process. 
The package is based \n on a Fortran program by Jiancang Zhuang\n (available at ),\n which is modified and translated into C++ and C such that it \n can be called from R.","Published":"2017-02-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"etasFLP","Version":"1.4.0","Title":"Mixed FLP and ML Estimation of ETAS Space-Time Point Processes","Description":"Estimation of the components of an ETAS (Epidemic Type Aftershock Sequence) model for earthquake description. Non-parametric background seismicity can be estimated through FLP (Forward Likelihood Predictive), while parametric components are estimated through maximum likelihood. The two estimation steps are alternated until convergence is obtained. For each event the probability of being a background event is estimated and used as a weight for declustering steps. Many options to control the estimation process are present, together with some diagnostic tools. Some descriptive functions for earthquake catalogs are present; also plot, print, summary, profile methods are defined for main output (objects of class 'etasclass').","Published":"2017-01-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ETC","Version":"1.3","Title":"Equivalence to control","Description":"The package allows selecting those treatments of a one-way layout\n that are equivalent to a control. Bonferroni adjusted \"two one-sided t-tests\"\n (TOST) and related simultaneous confidence intervals are given for both\n differences and ratios of means of normally distributed data. For the case of\n equal variances and balanced sample sizes for the treatment groups, the\n single-step procedure of Bofinger and Bofinger (1995) can be chosen. 
For\n non-normal data, the Wilcoxon test is applied.","Published":"2009-01-30","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"EthSEQ","Version":"2.0.1","Title":"Ethnicity Annotation from Whole Exome Sequencing Data","Description":"Reliable and rapid ethnicity annotation from whole exome sequencing data.","Published":"2017-04-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"etl","Version":"0.3.5","Title":"Extract-Transform-Load Framework for Medium Data","Description":"A framework for loading medium-sized data from\n the Internet to a local or remote relational database management system.\n This package itself doesn't do much more than provide a toy example and set up\n the method structure. Packages that depend on this package will facilitate the\n construction and maintenance of their respective databases.","Published":"2016-11-29","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"ETLUtils","Version":"1.3","Title":"Utility Functions to Execute Standard Extract/Transform/Load\nOperations (using Package 'ff') on Large Data","Description":"Provides functions to facilitate the use of the 'ff' package in\n interaction with big data in 'SQL' databases (e.g. in\n 'Oracle', 'MySQL', 'PostgreSQL', 'Hive') by allowing easy importing directly into 'ffdf'\n objects using 'DBI', 'RODBC' and 'RJDBC'. 
Also contains some basic utility\n functions to do fast left outer join merging based on 'match', factorisation of data and a basic\n function for re-coding vectors.","Published":"2015-05-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"etm","Version":"0.6-2","Title":"Empirical Transition Matrix","Description":"Matrix of transition probabilities for any time-inhomogeneous multistate model with finite state space","Published":"2014-12-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"etma","Version":"1.1-1","Title":"Epistasis Test in Meta-Analysis","Description":"A traditional meta-regression based method has been developed for meta-analysis data, but it faces the challenge of inconsistent estimates. This package proposes a new statistical method to detect epistasis using incomplete summary information; it not only yields consistent evidence but also increases power compared with the traditional method (a detailed tutorial is shown on the website).","Published":"2016-04-10","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"etrunct","Version":"0.1","Title":"Computes Moments of Univariate Truncated t Distribution","Description":"Computes moments of univariate truncated t distribution.\n There is only one exported function, e_trunct(), which should be seen for details.","Published":"2016-07-04","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"etseed","Version":"0.1.0","Title":"Client for 'etcd', a 'Key-value' Database","Description":"Client to interact with the 'etcd' 'key-value' data store\n . Functions included for managing\n directories, keys, nodes, and getting statistics.","Published":"2016-09-08","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"eulerian","Version":"1.0","Title":"eulerian: A package to find eulerian paths from graphs","Description":"An Eulerian path is a path in a graph which visits every edge exactly once. 
This package provides methods to handle Eulerian paths or cycles.","Published":"2014-02-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"eulerr","Version":"2.0.0","Title":"Area-Proportional Euler Diagrams","Description":"If possible, generates exactly area-proportional Euler diagrams,\n or otherwise approximately proportional diagrams using numerical\n optimization. An Euler diagram is a generalization of a Venn diagram,\n relaxing the criterion that all interactions need to be represented.","Published":"2017-06-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"euroMix","Version":"1.1.1","Title":"Calculations for DNA Mixtures","Description":"Calculations for DNA mixtures accounting for possibly inbred pedigrees (simulations with conditioning, LR). Calculation of exact p-values. ","Published":"2015-12-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"europepmc","Version":"0.1.4","Title":"R Interface to the Europe PubMed Central RESTful Web Service","Description":"An R Client for the Europe PubMed Central RESTful Web Service \n (see for more information). It\n gives access to both metadata on life science literature and open access\n full texts. Europe PMC indexes all PubMed content and other literature\n sources including Agricola, a bibliographic database of citations to the\n agricultural literature, and Biological Patents. In addition to bibliographic\n metadata, the client allows users to fetch citations and reference lists.\n Links between life-science literature and other EBI databases, including\n ENA, PDB or ChEMBL are also accessible. No registration or API key is\n required. 
See the vignettes for usage examples.","Published":"2017-03-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"europop","Version":"0.3.1","Title":"Historical Populations of European Cities, 1500-1800","Description":"This dataset contains population estimates of all European cities \n with at least 10,000 inhabitants during the period 1500-1800. These data are\n adapted from Jan De Vries, \"European Urbanization, 1500-1800\" (1984).","Published":"2017-02-24","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"EurosarcBayes","Version":"1.0","Title":"Bayesian Single Arm Sample Size Calculation Software","Description":"Frequentist and Bayesian single arm trial design and sample size software. \n\tDesigns cover one and two binary endpoints with both single and multi-stage \n\tmethodology. The research leading to these results has received funding from the \n\tEuropean Union Seventh Framework Programme (FP7/2007-2013) under grant agreement \n\tnumber 278742 (Eurosarc).","Published":"2015-11-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"eurostat","Version":"3.1.1","Title":"Tools for Eurostat Open Data","Description":"Tools to download data from the Eurostat database\n together with search and\n manipulation utilities.","Published":"2017-03-16","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"eva","Version":"0.2.4","Title":"Extreme Value Analysis with Goodness-of-Fit Testing","Description":"Goodness-of-fit tests for selection of r in the r-largest order\n statistics (GEVr) model. Goodness-of-fit tests for threshold selection in the\n Generalized Pareto distribution (GPD). Random number generation and density functions\n for the GEVr distribution. Profile likelihood for return level estimation\n using the GEVr and Generalized Pareto distributions. P-value adjustments for\n sequential, multiple testing error control. 
Non-stationary fitting of GEVr and\n GPD.","Published":"2016-09-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"EvalEst","Version":"2015.4-2","Title":"Dynamic Systems Estimation - Extensions","Description":"Provides functions for evaluating (time series) model\n\testimation methods. These facilitate Monte Carlo experiments of repeated\n\tsimulations and estimations. Also provides methods for\n\tlooking at the distribution of the results from these experiments,\n\tincluding model roots (which are an equivalence class invariant).","Published":"2015-05-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"evaluate","Version":"0.10","Title":"Parsing and Evaluation Tools that Provide More Details than the\nDefault","Description":"Parsing and evaluation tools that make it easy to recreate the\n command line behaviour of R.","Published":"2016-10-11","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"EvaluationMeasures","Version":"1.1.0","Title":"Collection of Model Evaluation Measure Functions","Description":"Provides some of the most important evaluation measures for evaluating a model. Just by giving the real and predicted classes, measures such as accuracy, sensitivity, specificity, ppv, npv, fmeasure, mcc and more will be returned.","Published":"2016-07-27","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"evaluator","Version":"0.1.0","Title":"Information Security Quantified Risk Assessment Toolkit","Description":"An open source information security strategic risk analysis \n toolkit based on the OpenFAIR taxonomy \n and risk assessment standard \n . 
Empowers an organization to \n perform a quantifiable, repeatable, and data-driven review of its security \n program.","Published":"2017-02-26","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"Evapotranspiration","Version":"1.10","Title":"Modelling Actual, Potential and Reference Crop\nEvapotranspiration","Description":"Uses data and constants to calculate potential evapotranspiration (PET) and actual evapotranspiration (AET) from 21 different formulations including Penman, Penman-Monteith FAO 56, Priestley-Taylor and Morton formulations.","Published":"2016-12-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"evclass","Version":"1.1.1","Title":"Evidential Distance-Based Classification","Description":"Different evidential distance-based classifiers, which provide\n outputs in the form of Dempster-Shafer mass functions. The methods are: the\n evidential K-nearest neighbor rule and the evidential neural network.","Published":"2017-03-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"evclust","Version":"1.0.3","Title":"Evidential Clustering","Description":"Various clustering algorithms that produce a credal partition,\n i.e., a set of Dempster-Shafer mass functions representing the membership of objects\n to clusters. 
The mass functions quantify the cluster-membership uncertainty of the objects.\n The algorithms are: Evidential c-Means (ECM), Relational Evidential c-Means (RECM),\n Constrained Evidential c-Means (CECM), EVCLUS and EK-NNclus.","Published":"2016-09-04","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"EvCombR","Version":"0.1-2","Title":"Evidence Combination in R","Description":"Package for combining pieces of evidence.","Published":"2014-04-24","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"evd","Version":"2.3-2","Title":"Functions for Extreme Value Distributions","Description":"Extends simulation, distribution, quantile and density\n functions to univariate and multivariate parametric extreme\n value distributions, and provides fitting functions which\n calculate maximum likelihood estimates for univariate and\n bivariate maxima models, and for univariate and bivariate\n threshold models.","Published":"2015-12-25","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"evdbayes","Version":"1.1-1","Title":"Bayesian Analysis in Extreme Value Theory","Description":"Provides functions for the Bayesian analysis of extreme\n value models, using MCMC methods.","Published":"2014-03-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"eVenn","Version":"2.3.4","Title":"A Powerful Tool to Quickly Compare Huge Lists and Draw Venn\nDiagrams","Description":"Compare lists (from 2 to infinity) and plot the results in a Venn diagram if (N<=4) with regulation details. It allows one to produce a complete annotated file, merging the annotations of the compared lists. 
It is also possible to compute an overlap table to show the overlap proportions of all pairs of lists and draw proportional Venn diagrams.","Published":"2016-10-26","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"event","Version":"1.1.0","Title":"Event History Procedures and Models","Description":"Functions for setting up and analyzing event history data.","Published":"2017-02-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"eventdataR","Version":"0.1.0","Title":"Event Data Repository","Description":"Event dataset repository including both real-life and artificial event logs. They can be used in combination with functionalities provided by the 'bupaR' packages 'edeaR', 'processmapR', etc.","Published":"2017-06-18","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"eventInterval","Version":"1.3","Title":"Sequential Event Interval Analysis","Description":"Functions for analysis of rate changes in sequential events.","Published":"2015-10-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"events","Version":"0.5","Title":"Store and manipulate event data","Description":"Stores, manipulates, aggregates and otherwise messes with event\n data from KEDS/TABARI or any other extraction tool with similar output","Published":"2012-01-07","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"eventstudies","Version":"1.1","Title":"Event study and extreme event analysis","Description":"Implementation of short and long term event study\n methodology","Published":"2013-05-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"EventStudy","Version":"0.31","Title":"Event Study Analysis in R","Description":"Perform Event Studies through our Application Programming Interface, parse the results, visualize them, and/or use the results in further analysis.","Published":"2017-06-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"evidenceFactors","Version":"1.00","Title":"Reporting 
Tools for Sensitivity Analysis of Evidence Factors in\nObservational Studies","Description":"Integrated Sensitivity Analysis of Evidence Factors in Observational Studies.","Published":"2016-08-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"evir","Version":"1.7-3","Title":"Extreme Values in R","Description":"Functions for extreme value theory, which may be divided\n into the following groups: exploratory data analysis, block\n maxima, peaks over thresholds (univariate and bivariate), point\n processes, GEV/GPD distributions.","Published":"2012-07-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"evmix","Version":"2.7","Title":"Extreme Value Mixture Modelling, Threshold Estimation and\nBoundary Corrected Kernel Density Estimation","Description":"The usual distribution functions, maximum likelihood inference and\n model diagnostics for univariate stationary extreme value mixture models\n are provided. Kernel density estimation including various boundary\n corrected kernel density estimation methods and a wide choice of kernels,\n with cross-validation likelihood based bandwidth estimator.\n Reasonable consistency with the base functions in the 'evd' package is\n provided, so that users can safely interchange most code.","Published":"2017-04-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"evobiR","Version":"1.1","Title":"Comparative and Population Genetic Analyses","Description":"Comparative analysis of continuous traits influencing discrete states, and utility tools to facilitate comparative analyses. Implementations of ABBA/BABA type statistics to test for introgression in genomic data. 
Wright-Fisher, phylogenetic tree, and statistical distribution Shiny interactive simulations for use in teaching.","Published":"2015-09-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"evolqg","Version":"0.2-5","Title":"Tools for Evolutionary Quantitative Genetics","Description":"Provides functions for covariance matrix comparisons, estimation\n of repeatabilities in measurements and matrices, and general evolutionary\n quantitative genetics tools.","Published":"2017-02-02","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"evolvability","Version":"1.1.0","Title":"Calculation of Evolvability Parameters","Description":"An implementation of the evolvability parameters defined in Hansen and Houle (2008).","Published":"2015-06-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Evomorph","Version":"0.9","Title":"Evolutionary Morphometric Simulation","Description":"Evolutionary process simulation using geometric morphometric data. Manipulation of landmark data files (TPS), shape plotting and distances plotting functions.","Published":"2016-02-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"evoper","Version":"0.4.0","Title":"Evolutionary Parameter Estimation for 'Repast Simphony' Models","Description":"The EvoPER, Evolutionary Parameter Estimation for 'Repast Simphony'\n Agent-Based framework (), provides optimization\n driven parameter estimation methods based on evolutionary computation\n techniques which could be more efficient and require, in some cases,\n fewer model evaluations than other alternatives relying on experimental design.","Published":"2017-03-14","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"EvoRAG","Version":"2.0","Title":"Evolutionary Rates Across Gradients","Description":"Uses maximum likelihood to estimate rates of trait evolution across environmental gradients.","Published":"2014-07-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} 
{"Package":"evt0","Version":"1.1-3","Title":"Mean of order p, peaks over random threshold Hill and high\nquantile estimates","Description":"Computes extreme value index (EVI) estimates for heavy tailed models by Mean of order p (MOP) \n\t and peaks over random threshold (PORT) Hill methodologies. \n It also computes moment, generalised Hill and mixed moment estimates for the EVI, \n and computes high quantiles or value-at-risk (VaR) based on the above EVI estimates.","Published":"2013-12-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"evtree","Version":"1.0-5","Title":"Evolutionary Learning of Globally Optimal Trees","Description":"Commonly used classification and regression tree methods like the CART algorithm\n are recursive partitioning methods that build the model in a forward stepwise search.\n\t Although this approach is known to be an efficient heuristic, the results of recursive\n\t tree methods are only locally optimal, as splits are chosen to maximize homogeneity at\n\t the next step only. An alternative way to search over the parameter space of trees is\n\t to use global optimization methods like evolutionary algorithms. The 'evtree' package\n\t implements an evolutionary algorithm for learning globally optimal classification and\n\t regression trees in R. 
CPU and memory-intensive tasks are fully computed in C++ while\n\t the 'partykit' package is leveraged to represent the resulting trees in R, providing\n\t unified infrastructure for summaries, visualizations, and predictions.","Published":"2017-04-06","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"EW","Version":"1.1","Title":"Edgeworth Expansion","Description":"Edgeworth Expansion calculation.","Published":"2015-05-03","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"EWGoF","Version":"2.1","Title":"Goodness-of-Fit Tests for the Exponential and Two-Parameter\nWeibull Distributions","Description":"An implementation of a large number of the goodness-of-fit tests for the exponential and Weibull distributions classified into families: the tests based on the empirical distribution function, the tests based on the probability plot, the tests based on the normalized spacings, the tests based on the Laplace transform and the likelihood-based tests.","Published":"2015-05-29","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"Exact","Version":"1.7","Title":"Unconditional Exact Test","Description":"Performs unconditional exact tests and power calculations for 2x2 contingency tables. Unconditional exact tests are often more powerful than conditional exact tests and asymptotic tests.","Published":"2016-10-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"exact2x2","Version":"1.5.2","Title":"Exact Tests and Confidence Intervals for 2x2 Tables","Description":"Calculates conditional exact tests (Fisher's exact test, Blaker's exact test, or exact McNemar's test) and unconditional exact tests (including score-based tests on differences in proportions, ratios of proportions, and odds ratios, and Boschloo's test) with appropriate matching confidence intervals, and provides power and sample size calculations. Also gives melded confidence intervals for the binomial case. 
","Published":"2017-02-16","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"exactci","Version":"1.3-1","Title":"Exact P-Values and Matching Confidence Intervals for Simple\nDiscrete Parametric Cases","Description":"Calculates exact tests and confidence intervals for one-sample binomial and one- or two-sample Poisson cases. ","Published":"2015-07-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ExactCIdiff","Version":"1.3","Title":"Inductive Confidence Intervals for the difference between two\nproportions","Description":"This is a package for exact Confidence Intervals for the\n difference between two independent or dependent proportions.","Published":"2013-07-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"exactLoglinTest","Version":"1.4.2","Title":"Monte Carlo Exact Tests for Log-linear models","Description":"Monte Carlo and MCMC goodness of fit tests for log-linear\n models","Published":"2013-02-05","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"exactmeta","Version":"1.0-2","Title":"Exact fixed effect meta analysis","Description":"Perform exact fixed effect meta analysis for rare events data without the need of artificial continuity correction.","Published":"2014-09-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ExactPath","Version":"1.0","Title":"Exact solution paths for regularized LASSO regressions with L_1\npenalty","Description":"ExactPath implements an algorithm for exact LASSO\n solution. Two methods are provided to print and visualize the\n whole solution paths. 
Use ?ExactPath to see an introduction.\n Packages ncvreg and lars are required so that their data sets\n can be used in examples.","Published":"2013-02-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"exactRankTests","Version":"0.8-29","Title":"Exact Distributions for Rank and Permutation Tests","Description":"Computes exact conditional p-values and quantiles using an\n implementation of the Shift-Algorithm by Streitberg & Roehmel.","Published":"2017-03-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"exampletestr","Version":"0.4.0","Title":"Help for Writing Unit Tests Based on Function Examples","Description":"Take the examples written in your documentation of functions and \n use them to create shells (skeletons which must be manually completed by\n the user) of test files to be tested with the 'testthat' package. ","Published":"2017-04-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"exams","Version":"2.2-1","Title":"Automatic Generation of Exams in R","Description":"Automatic generation of exams based on exercises in Markdown or LaTeX format,\n\tpossibly including R code for dynamic generation of exercise elements.\n\tExercise types include single-choice and multiple-choice questions, arithmetic problems,\n\tstring questions, and combinations thereof (cloze). Output formats include standalone\n\tfiles (PDF, HTML, Docx, ODT, ...), Moodle XML, QTI 1.2 (for OLAT/OpenOLAT), QTI 2.1,\n\tBlackboard, ARSnova, and TCExam. 
In addition to fully customizable PDF exams, a\n\tstandardized PDF format is provided that can be printed, scanned, and automatically evaluated.","Published":"2017-03-16","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"ExceedanceTools","Version":"1.2.2","Title":"Confidence regions for exceedance sets and contour lines","Description":"Tools for constructing confidence regions for exceedance regions\n and contour lines.","Published":"2014-07-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"excel.link","Version":"0.9.7","Title":"Convenient Data Exchange with Microsoft Excel","Description":"Allows access to data in a running instance of Microsoft Excel\n (e.g. 'xl[a1] = xl[b2]*3' and so on). Graphics can be transferred with\n 'xl[a1] = current.graphics()'. There is an Excel workbook with examples of\n calling R from Excel in the 'doc' folder. It tries to keep things as\n simple as possible - there is no need for any additional\n installation besides R, only 'VBA' code in the Excel workbook.\n Microsoft Excel is required for this package.","Published":"2017-05-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"excerptr","Version":"1.3.0","Title":"Excerpt Structuring Comments from Your Code File and Set a Table\nof Contents","Description":"This is an R interface to the\n python package 'excerpts' ().","Published":"2017-06-22","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ExcessMass","Version":"1.0","Title":"Excess Mass Calculation and Plots","Description":"Implementation of a function which calculates the empirical excess mass \n\tfor given \\eqn{\\lambda} and given maximal number of modes (excessm()). Offering \n\tpowerful plot features to visualize empirical excess mass (exmplot()). 
This \n\tincludes the possibility of drawing several plots (with different maximal \n\tnumber of modes / cut off values) in a single graph.","Published":"2017-05-16","License":"LGPL","snapshot_date":"2017-06-23"} {"Package":"exCon","Version":"0.2.1","Title":"Interactive Exploration of Contour Data","Description":"Interactive tools to explore topographic-like data\n sets. Such data sets take the form of a matrix in which the rows and\n columns provide location/frequency information, and the matrix elements\n contain altitude/response information. Such data is found in cartography,\n 2D spectroscopy and chemometrics. The functions in this package create\n interactive web pages showing the contoured data, possibly with\n slices from the original matrix parallel to each dimension. The interactive\n behavior is created using the D3.js 'JavaScript' library by Mike Bostock.","Published":"2017-01-11","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"excursions","Version":"2.2.2","Title":"Excursion Sets and Contour Credibility Regions for Random Fields","Description":"Functions that compute probabilistic excursion sets, contour credibility regions, contour avoiding regions, and simultaneous confidence bands for latent Gaussian random processes and fields. The package also contains functions that calculate these quantities for models estimated with the INLA package.","Published":"2016-07-21","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"exif","Version":"0.1.0","Title":"Read EXIF Metadata from JPEGs","Description":"Extracts Exchangeable Image File Format (EXIF) metadata, such as camera make and model, ISO speed and the date-time\n the picture was taken on, from JPEG images. 
Incorporates the 'easyexif' (https://github.com/mayanklahiri/easyexif)\n library.","Published":"2015-12-14","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"exifr","Version":"0.1.1","Title":"EXIF Image Data in R","Description":"Reads EXIF data using ExifTool \n and returns results as a data frame.\n ExifTool is a platform-independent Perl library plus a command-line\n application for reading, writing and editing meta information in a wide variety\n of files. ExifTool supports many different metadata formats including EXIF,\n GPS, IPTC, XMP, JFIF, GeoTIFF, ICC Profile, Photoshop IRB, FlashPix, AFCP and\n ID3, as well as the maker notes of many digital cameras by Canon, Casio, FLIR,\n FujiFilm, GE, HP, JVC/Victor, Kodak, Leaf, Minolta/Konica-Minolta, Motorola, Nikon,\n Nintendo, Olympus/Epson, Panasonic/Leica, Pentax/Asahi, Phase One, Reconyx, Ricoh,\n Samsung, Sanyo, Sigma/Foveon and Sony.","Published":"2016-02-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ExomeDepth","Version":"1.1.10","Title":"Calls Copy Number Variants from Targeted Sequence Data","Description":"Calls copy number variants (CNVs) from targeted sequence data, typically exome sequencing experiments designed to identify the genetic basis of Mendelian disorders.","Published":"2016-05-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"exp2flux","Version":"0.1","Title":"Convert Gene EXPression Data to FBA FLUXes","Description":"For a given metabolic model with well-formed Gene-Protein-Reaction (GPR) associations and an expressionSet with its associated gene expression values, this package converts gene expression values to the FBA boundaries for each reaction based on the Boolean rules described in its associated GPR.","Published":"2016-10-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"expandFunctions","Version":"0.1.0","Title":"Feature Matrix Builder","Description":"Generates feature matrix outputs from R object inputs\n 
using a variety of expansion functions. The generated\n feature matrices have applications as inputs\n for a variety of machine learning algorithms.\n The expansion functions are based on coercing the input\n to a matrix, treating the columns as features and\n converting individual columns or combinations into blocks of\n columns.\n Currently these include expansion of columns by\n efficient sparse embedding by vectors of lags,\n quadratic expansion into squares and unique products,\n powers by vectors of degree,\n vectors of orthogonal polynomial functions,\n and block random affine projection transformations (RAPTs).\n The transformations are\n magrittr- and cbind-friendly, and can be used in a\n building block fashion. For instance, taking the cos() of\n the output of the RAPT transformation generates a\n stationary kernel expansion via Bochner's theorem, and this\n expansion can then be cbind-ed with other features.\n Additionally, there are utilities for replacing features,\n removing rows with NAs,\n creating matrix samples of a given distribution,\n a simple wrapper for LASSO with CV,\n a Freeman-Tukey transform,\n generalizations of the outer function,\n matrix size-preserving discrete difference by row,\n plotting, etc.","Published":"2016-10-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"expands","Version":"2.0.0","Title":"Expanding Ploidy and Allele-Frequency on Nested Subpopulations","Description":"Expanding Ploidy and Allele Frequency on Nested Subpopulations (expands) characterizes coexisting subpopulations in a single tumor sample using copy number and allele frequencies derived from exome- or whole genome sequencing input data (). The model detects coexisting genotypes by leveraging run-specific tradeoffs between depth of coverage and breadth of coverage. 
This package predicts the number of clonal expansions, the size of the resulting subpopulations in the tumor bulk, the mutations specific to each subpopulation, tumor purity and phylogeny. The main function runExPANdS() provides the complete functionality needed to predict coexisting subpopulations from single nucleotide variations (SNVs) and associated copy numbers. The robustness of subpopulation predictions increases with the number of mutations provided. It is recommended that at least 200 mutations are used as input to obtain stable results. Updates in version 2.0 include: (i) copy-neutral LOH is now modelled; (ii) more robust calculation of cell frequency probabilities from kernel density estimates instead of Gaussian mixtures. Further documentation and FAQ available at .","Published":"2017-04-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ExpDE","Version":"0.1.2","Title":"Modular Differential Evolution for Experimenting with Operators","Description":"Modular implementation of the Differential Evolution algorithm for\n experimenting with different types of operators.","Published":"2016-03-28","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ExpDes","Version":"1.1.2","Title":"Experimental Designs package","Description":"Package for analysis of simple experimental designs (CRD,\n RBD and LSD), experiments in double factorial schemes (in CRD\n and RBD), experiments in a split plot in time schemes (in CRD\n and RBD), experiments in double factorial schemes with an\n additional treatment (in CRD and RBD), experiments in triple\n factorial scheme (in CRD and RBD) and experiments in triple\n factorial schemes with an additional treatment (in CRD and\n RBD), performing the analysis of variance and means comparison\n by fitting regression models up to the third power\n (quantitative treatments) or by a multiple comparison test,\n Tukey test, test of Student-Newman-Keuls (SNK), Scott-Knott,\n Duncan test, t test (LSD) and Bonferroni t test 
(protected LSD)\n - for qualitative treatments.","Published":"2013-05-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ExpDes.pt","Version":"1.1.2","Title":"Pacote Experimental Designs (Portuguese)","Description":"Pacote destinado a analise de delineamentos experimentais\n simples (DIC, DBC e DQL), experimentos em esquema de fatorial\n duplo (em DIC e DBC), experimentos em esquema de parcelas\n subdivididas no tempo (em DIC e DBC), experimentos em esquema\n de fatorial duplo com um tratamento adicional (em DIC e DBC),\n experimentos em esquema de fatorial triplo (em DIC e DBC) e\n experimentos em esquema de fatorial triplo com um tratamento\n adicional (em DIC e DBC); realizando a analise de variancia e\n comparacao de medias pelo ajuste de modelos de regressao ate o\n terceiro grau (tratamentos quantitativos) ou por testes de\n comparacao multipla: teste de Tukey, teste de\n Student-Newman-Keuls (SNK), teste de Scott-Knott, teste de\n Duncan, teste t (LSD), teste t de Bonferroni (LSD protegido) e\n teste Bootstrap - tratamentos qualitativos.","Published":"2013-05-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"expectreg","Version":"0.39","Title":"Expectile and Quantile Regression","Description":"Expectile and quantile regression of models with nonlinear effects\n e.g. spatial, random, ridge using least asymmetric weighted squares / absolutes\n as well as boosting; also supplies expectiles for common distributions.","Published":"2014-03-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"experiment","Version":"1.1-1","Title":"experiment: R package for designing and analyzing randomized\nexperiments","Description":"The package provides various statistical methods for\n designing and analyzing randomized experiments. One main\n functionality of the package is the implementation of\n randomized-block and matched-pair designs based on possibly\n multivariate pre-treatment covariates. 
The package also\n provides the tools to analyze various randomized experiments\n including cluster randomized experiments, randomized\n experiments with noncompliance, and randomized experiments with\n missing data.","Published":"2013-03-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"expert","Version":"1.0-0","Title":"Modeling without data using expert opinion","Description":"Expert opinion (or judgment) is a body of techniques to\n estimate the distribution of a random variable when data is scarce\n or unavailable. Opinions on the quantiles of the distribution are\n sought from experts in the field and aggregated into a final\n estimate. The package supports aggregation by means of the Cooke,\n Mendel-Sheridan and predefined weights models.","Published":"2008-10-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"expint","Version":"0.1-4","Title":"Exponential Integral and Incomplete Gamma Function","Description":"The exponential integrals E_1(x), E_2(x), E_n(x) and\n Ei(x), and the incomplete gamma function G(a, x) defined for\n negative values of its first argument. The package also gives easy\n access to the underlying C routines through an API; see the package\n vignette for details. A test package included in sub-directory\n example_API provides an implementation. C routines derived from the\n GNU Scientific Library .","Published":"2017-05-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ExplainPrediction","Version":"1.1.8","Title":"Explanation of Predictions for Classification and Regression\nModels","Description":"Generates explanations for classification and regression models and visualizes them.\n Explanations are generated for individual predictions as well as for models as a whole. Two explanation methods\n are included, EXPLAIN and IME. The EXPLAIN method is fast but might miss explanations expressed redundantly\n in the model. 
The IME method is slower as it samples from all feature subsets.\n For the EXPLAIN method see Robnik-Sikonja and Kononenko (2008) , \n and the IME method is described in Strumbelj and Kononenko (2010, JMLR, vol. 11:1-18).\n All models in package 'CORElearn' are natively supported, for other prediction models a wrapper function is provided \n and illustrated for models from packages 'randomForest', 'nnet', and 'e1071'.","Published":"2017-04-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"explor","Version":"0.3.2","Title":"Interactive Interfaces for Results Exploration","Description":"Shiny interfaces and graphical functions for multivariate analysis results exploration.","Published":"2017-06-22","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"exploreR","Version":"0.1","Title":"Tools for Quickly Exploring Data","Description":"Simplifies some complicated and labor intensive processes involved in exploring and explaining data. Allows you to quickly and efficiently visualize the interaction between variables and simplifies the process of discovering covariation in your data. Also includes some convenience features designed to remove as much redundant typing as possible.","Published":"2016-02-10","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"expm","Version":"0.999-2","Title":"Matrix Exponential, Log, 'etc'","Description":"Computation of the matrix exponential, logarithm, sqrt,\n and related quantities.","Published":"2017-03-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"expoRkit","Version":"0.9","Title":"Expokit in R","Description":"An R-interface to the Fortran package Expokit.","Published":"2012-10-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ExPosition","Version":"2.8.19","Title":"Exploratory analysis with the singular value decomposition","Description":"ExPosition is for descriptive (i.e., fixed-effects) multivariate analysis with the singular value decomposition. 
","Published":"2013-12-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"expoTree","Version":"1.0.1","Title":"Calculate density dependent likelihood of a phylogenetic tree","Description":"Calculates the density dependent likelihood of a phylogenetic tree. It takes branching and sampling times as an argument and integrates the likelihood function over the whole tree.","Published":"2013-09-03","License":"BSD_3_clause + file LICENCE","snapshot_date":"2017-06-23"} {"Package":"expp","Version":"1.1","Title":"Spatial analysis of extra-pair paternity","Description":"Tools and data to accompany Schlicht, Valcu and Kempenaers \"Spatial patterns of extra-pair paternity: beyond paternity gains and losses\"","Published":"2014-08-05","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"exprso","Version":"0.1.8","Title":"Rapid Implementation of Machine Learning Algorithms for Genomic\nData","Description":"Supervised machine learning has an increasingly important role in biological\n studies. However, the sheer complexity of classification pipelines poses a significant\n barrier to the expert biologist unfamiliar with machine learning. Moreover,\n many biologists lack the time or technical skills necessary to establish their own\n pipelines. 
This package introduces a framework for the rapid implementation of\n high-throughput supervised machine learning built with the biologist user in mind.\n Written by biologists, for biologists, this package provides a user-friendly interface\n that empowers investigators to execute state-of-the-art binary and multi-class\n classification, including deep learning, with minimal programming\n experience necessary.","Published":"2016-12-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"expsmooth","Version":"2.3","Title":"Data Sets from \"Forecasting with Exponential Smoothing\"","Description":"Data sets from the book \"Forecasting with exponential smoothing: the state space approach\" by \n\tHyndman, Koehler, Ord and Snyder (Springer, 2008).","Published":"2015-04-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"expss","Version":"0.7.1","Title":"Tables with Labels and Some Useful Functions from Spreadsheets\nand 'SPSS' Statistics","Description":"Package provides tabulation functions with support of 'SPSS'-style labels, \n multiple / nested banners, weights and multiple-response variables. \n Additionally it offers useful functions for data processing in the social / \n marketing research surveys - popular data transformation functions from 'SPSS' Statistics\n ('RECODE', 'COUNT', 'COMPUTE', 'DO IF', etc.) 
and 'Excel' ('COUNTIF', 'VLOOKUP', etc.).\n Proper methods for labelled variables add value labels support to base R and other packages.\n The package aims to help people move data processing from 'Excel'/'SPSS' to R.","Published":"2017-04-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"exptest","Version":"1.2","Title":"Tests for Exponentiality","Description":"Tests for the composite hypothesis of exponentiality.","Published":"2013-12-01","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"exreport","Version":"0.4.1","Title":"Fast, Reliable and Elegant Reproducible Research","Description":"Analysis of experimental results and automatic report generation in both interactive HTML and LaTeX. This package ships with a rich interface for data modeling and built-in functions for the rapid application of statistical tests and generation of common plots and tables with publish-ready quality.","Published":"2016-02-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"EXRQ","Version":"1.0","Title":"Extreme Regression of Quantiles","Description":"Estimation for high conditional quantiles based on quantile regression.","Published":"2016-07-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"exsic","Version":"1.1.1","Title":"Convenience Functions for Botanists to Create Specimens Indices","Description":"The package provides tools for botanists, plant taxonomists,\n curators of plant genebanks and perhaps other biological collections.","Published":"2014-10-01","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ExtDist","Version":"0.6-3","Title":"Extending the Range of Functions for Probability Distributions","Description":"A consistent, unified and extensible\n framework for estimation of parameters for probability distributions, including \n parameter estimation procedures that allow for weighted samples; the distributions currently included are: the standard beta, the four-parameter beta, Burr, 
gamma, Gumbel, Johnson SB and SU, Laplace, logistic, normal, symmetric truncated normal, truncated normal, symmetric-reflected truncated beta, standard symmetric-reflected truncated beta, triangular, uniform, and Weibull distributions; decision criteria and selections based on these decision criteria.","Published":"2015-05-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"extdplyr","Version":"0.1.4","Title":"Data Manipulation Extensions of 'Dplyr' and 'Tidyr'","Description":"If 'dplyr' is a grammar for data manipulation, 'extdplyr' is like\n a short paragraph written in 'dplyr'. 'extdplyr' extends 'dplyr' and\n 'tidyr' verbs to some common \"routines\" that manipulate data sets. It uses\n the same interface and preserves all the features from 'dplyr', has good \n performance, and supports various data sources.","Published":"2017-02-27","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"exteriorMatch","Version":"1.0.0","Title":"Constructs the Exterior Match from Two Matched Control Groups","Description":"If one treated group is matched to one control reservoir in two different ways to produce two sets of treated-control matched pairs, then the two control groups may be entwined, in the sense that some control individuals are in both control groups. 
The exterior match is used to compare the two control groups.","Published":"2016-11-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"extfunnel","Version":"1.3","Title":"Additional Funnel Plot Augmentations","Description":"This is a package containing the function extfunnel()\n which produces a funnel plot including additional augmentations\n such as statistical significance contours and heterogeneity\n contours.","Published":"2013-12-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"extlasso","Version":"0.2","Title":"Maximum penalized likelihood estimation with extended lasso\npenalty","Description":"The package estimates coefficients of extended LASSO penalized linear regression and generalized linear models. Currently lasso and elastic net penalized linear regression and generalized linear models are considered. The package currently utilizes an accurate approximation of L1 penalty and then a modified Jacobi algorithm to estimate the coefficients. There is provision for plotting of the solutions and predictions of coefficients at given values of lambda. The package also contains functions for cross validation to select a suitable lambda value given the data. 
The package also provides a function for estimation in fused lasso penalized linear regression.","Published":"2014-08-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"extraBinomial","Version":"2.1","Title":"Extra-binomial approach for pooled sequencing data","Description":"This package tests for differences in minor allele\n frequency between groups and is based on an extra-binomial\n variation model for pooled sequencing data.","Published":"2012-07-09","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"extracat","Version":"1.7-4","Title":"Categorical Data Analysis and Visualization","Description":"Categorical Data Analysis and Visualization.","Published":"2015-11-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"extraDistr","Version":"1.8.5","Title":"Additional Univariate and Multivariate Distributions","Description":"Density, distribution function, quantile function\n and random generation for a number of univariate\n and multivariate distributions. 
This package implements the\n following distributions: Bernoulli, beta-binomial, beta-negative\n binomial, beta prime, Bhattacharjee, Birnbaum-Saunders,\n bivariate normal, bivariate Poisson, categorical, Dirichlet,\n Dirichlet-multinomial, discrete gamma, discrete Laplace,\n discrete normal, discrete uniform, discrete Weibull, Frechet,\n gamma-Poisson, generalized extreme value, Gompertz,\n generalized Pareto, Gumbel, half-Cauchy, half-normal, half-t,\n Huber density, inverse chi-squared, inverse-gamma, Kumaraswamy,\n Laplace, logarithmic, Lomax, multivariate hypergeometric,\n multinomial, negative hypergeometric, non-standard t,\n non-standard beta, normal mixture, Poisson mixture, Pareto,\n power, reparametrized beta, Rayleigh, shifted Gompertz, Skellam,\n slash, triangular, truncated binomial, truncated normal,\n truncated Poisson, Tukey lambda, Wald, zero-inflated binomial,\n zero-inflated negative binomial, zero-inflated Poisson.","Published":"2017-04-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"extrafont","Version":"0.17","Title":"Tools for using fonts","Description":"Tools for using fonts other than the standard PostScript fonts.\n This package makes it easy to use system TrueType fonts with PDF or\n PostScript output files, and with bitmap output files in Windows. extrafont\n can also be used with fonts packaged specifically to be used with it, such as\n the fontcm package, which has Computer Modern PostScript fonts with math\n symbols. 
See https://github.com/wch/extrafont for instructions and\n examples.","Published":"2014-12-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"extrafontdb","Version":"1.0","Title":"Package for holding the database for the extrafont package","Description":"Package for holding the database for the extrafont package","Published":"2012-06-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"extraTrees","Version":"1.0.5","Title":"Extremely Randomized Trees (ExtraTrees) Method for\nClassification and Regression","Description":"Classification and regression based on an ensemble of decision trees. The package also provides extensions of ExtraTrees to multi-task learning and quantile regression. Uses Java implementation of the method.","Published":"2014-12-27","License":"Apache License 2.0","snapshot_date":"2017-06-23"} {"Package":"ExtremeBounds","Version":"0.1.5.2","Title":"Extreme Bounds Analysis (EBA)","Description":"An implementation of Extreme Bounds Analysis (EBA), a global sensitivity analysis that examines the robustness of determinants in regression models. 
The package supports both Leamer's and Sala-i-Martin's versions of EBA, and allows users to customize all aspects of the analysis.","Published":"2016-08-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"extremefit","Version":"0.2.2","Title":"Estimation of Extreme Conditional Quantiles and Probabilities","Description":"Extreme value theory, nonparametric kernel estimation, tail\n conditional probabilities, extreme conditional quantile, adaptive estimation,\n quantile regression, survival probabilities.","Published":"2017-03-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"extRemes","Version":"2.0-8","Title":"Extreme Value Analysis","Description":"Functions for performing extreme value analysis.","Published":"2016-08-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"extremeStat","Version":"1.3.0","Title":"Extreme Value Statistics and Quantile Estimation","Description":"Code to fit, plot and compare several (extreme value)\n distribution functions. Can also compute (truncated) distribution quantile estimates and\n draw a plot with return periods on a linear scale.","Published":"2017-01-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"extremevalues","Version":"2.3.2","Title":"Univariate Outlier Detection","Description":"Detect outliers in one-dimensional data.","Published":"2016-01-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"extremogram","Version":"1.0.2","Title":"Estimation of Extreme Value Dependence for Time Series Data","Description":"Estimation of the sample univariate, cross and return time extremograms. The package can also add empirical confidence bands to each of the extremogram plots via a permutation procedure under the assumption that the data are independent. Finally, the stationary bootstrap allows us to construct credible confidence bands for the extremograms. 
","Published":"2016-10-08","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"extWeibQuant","Version":"1.1","Title":"Estimate Lower Extreme Quantile with the Censored Weibull MLE\nand Censored Weibull Mixture","Description":"It implements the subjectively censored Weibull MLE and censored Weibull mixture methods for the lower quantile estimation. Quantile estimates from these two methods are robust to model misspecification in the lower tail. It also includes functions to evaluate the standard error of the resulting quantile estimates. Also, the methods here can be used to fit the Weibull or Weibull mixture for Type-I or Type-II right censored data.","Published":"2014-12-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"eyelinker","Version":"0.1","Title":"Load Raw Data from Eyelink Eye Trackers","Description":"Eyelink eye trackers output a horrible mess, typically under\n the form of a '.asc' file. The file in question is an assorted collection of\n messages, events and raw data. This R package will attempt to make sense of it.","Published":"2016-04-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"eyetracking","Version":"1.1","Title":"Eyetracking Helper Functions","Description":"Misc functions for working with eyetracking data.","Published":"2012-10-29","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"eyetrackingR","Version":"0.1.6","Title":"Eye-Tracking Data Analysis","Description":"A set of tools that address tasks along the pipeline from raw\n data to analysis and visualization for eye-tracking data. 
Offers several\n popular types of analyses, including linear and growth curve time analyses,\n onset-contingent reaction time analyses, as well as several non-parametric\n bootstrapping approaches.","Published":"2016-03-17","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ez","Version":"4.4-0","Title":"Easy Analysis and Visualization of Factorial Experiments","Description":"Facilitates easy analysis of factorial experiments, including\n purely within-Ss designs (a.k.a. \"repeated measures\"), purely between-Ss\n designs, and mixed within-and-between-Ss designs. The functions in this package\n aim to provide simple, intuitive and consistent specification of data analysis\n and visualization. Visualization functions also include design visualization for\n pre-analysis data auditing, and correlation matrix visualization. Finally, this\n package includes functions for non-parametric analysis, including permutation\n tests and bootstrap resampling. The bootstrap function obtains predictions\n either by cell means or by more advanced/powerful mixed effects models, yielding\n predictions and confidence intervals that may be easily visualized at any level\n of the experiment's design.","Published":"2016-11-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ezec","Version":"1.0.1","Title":"Easy Interface to Effective Concentration Calculations","Description":"Because fungicide resistance is an important phenotypic trait for\n fungi and oomycetes, it is necessary to have a standardized method of\n statistically analyzing the Effective Concentration (EC) values. 
This\n package is designed for those who are not terribly familiar with R to be\n able to analyze and plot an entire set of isolates using the 'drc' package.","Published":"2016-12-05","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ezglm","Version":"1.0","Title":"selects significant non-additive interaction between two\nvariables using fast GLM implementation","Description":"This package implements a simplified version of least\n squares and logistic regression for efficiently selecting the\n significant non-additive interactions between two variables.","Published":"2012-06-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ezknitr","Version":"0.6","Title":"Avoid the Typical Working Directory Pain When Using 'knitr'","Description":"An extension of 'knitr' that adds flexibility in several\n ways. One common source of frustration with 'knitr' is that it assumes\n the directory where the source file lives should be the working directory,\n which is often not true. 
'ezknitr' addresses this problem by giving you\n complete control over where all the inputs and outputs are, and adds several\n other convenient features to make rendering markdown/HTML documents easier.","Published":"2016-09-16","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ezsim","Version":"0.5.5","Title":"provide an easy to use framework to conduct simulation","Description":"ezsim provides a handy way to run simulations and examine their results.","Published":"2014-06-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ezsummary","Version":"0.2.1","Title":"Generate Data Summary in a Tidy Format","Description":"Functions that simplify the process of generating print-ready data summaries using 'dplyr' syntax.","Published":"2016-07-11","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"fabCI","Version":"0.1","Title":"FAB Confidence Intervals","Description":"Frequentist assisted by Bayes (FAB) confidence interval\n construction. See 'Adaptive multigroup confidence intervals with constant\n coverage' by Yu and Hoff .","Published":"2017-01-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"face","Version":"0.1-3","Title":"Fast Covariance Estimation for Sparse Functional Data","Description":"Fast covariance estimation for sparse functional data or longitudinal data for the paper Xiao et al., Stat. Comput., .","Published":"2017-04-25","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"FacPad","Version":"3.0","Title":"Bayesian Sparse Factor Analysis model for the inference of\npathways responsive to drug treatment","Description":"This method tries to explain the gene-wise treatment response ratios in terms of the latent pathways. 
It uses Bayesian sparse factor modeling to infer the loadings (weights) of each pathway on its associated probesets as well as the latent factor activity levels for each treatment.","Published":"2014-03-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"FactMixtAnalysis","Version":"1.0","Title":"Factor Mixture Analysis with covariates","Description":"The package estimates Factor Mixture Analysis via the EM\n algorithm.","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"FACTMLE","Version":"1.1","Title":"Maximum Likelihood Factor Analysis","Description":"Perform Maximum Likelihood Factor analysis on a covariance matrix or data matrix.","Published":"2015-11-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"FactoClass","Version":"1.1.3","Title":"Combination of Factorial Methods and Cluster Analysis","Description":"Multivariate exploration of a data table with factorial\n analysis and cluster methods.","Published":"2017-01-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"factoextra","Version":"1.0.4","Title":"Extract and Visualize the Results of Multivariate Data Analyses","Description":"Provides some easy-to-use functions to extract and visualize the\n output of multivariate data analyses, including 'PCA' (Principal Component\n Analysis), 'CA' (Correspondence Analysis), 'MCA' (Multiple Correspondence\n Analysis), 'FAMD' (Factor Analysis of Mixed Data), 'MFA' (Multiple Factor Analysis) and 'HMFA' (Hierarchical Multiple\n Factor Analysis) functions from different R packages. 
It also contains functions\n for simplifying some clustering analysis steps and provides 'ggplot2'-based\n elegant data visualization.","Published":"2017-01-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"FactoInvestigate","Version":"1.0","Title":"Automatic Description of Factorial Analysis","Description":"Brings a set of tools to automatically describe the results of principal component analyses (from 'FactoMineR' functions). Detection of existing outliers, identification of the informative components, graphical views and dimension descriptions are performed through dedicated functions. The Investigate() function performs all of these steps in one and returns the result as a report document (Word, PDF or HTML).","Published":"2017-04-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"FactoMineR","Version":"1.36","Title":"Multivariate Exploratory Data Analysis and Data Mining","Description":"Exploratory data analysis methods to summarize, visualize and describe datasets. The main principal component methods are available, those with the largest potential in terms of applications: principal component analysis (PCA) when variables are quantitative, correspondence analysis (CA) and multiple correspondence analysis (MCA) when variables are categorical, Multiple Factor Analysis when variables are structured in groups, etc. 
and hierarchical cluster analysis.","Published":"2017-06-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"factorcpt","Version":"0.1.2","Title":"Simultaneous Change-Point and Factor Analysis","Description":"Identifies change-points in the common and the idiosyncratic components via factor modelling.","Published":"2016-12-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"FactoRizationMachines","Version":"0.11","Title":"Machine Learning with Higher-Order Factorization Machines","Description":"Implementation of three machine learning approaches: Support Vector Machines (SVM) with a linear kernel, second-order Factorization Machines (FM), and higher-order Factorization Machines (HoFM).","Published":"2017-03-21","License":"CC BY-NC-ND 4.0","snapshot_date":"2017-06-23"} {"Package":"factorplot","Version":"1.1-2","Title":"Graphical Presentation of Simple Contrasts","Description":"Methods to calculate, print, summarize and plot pairwise differences from GLMs, GLHT or Multinomial Logit models.","Published":"2016-06-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"factorQR","Version":"0.1-4","Title":"Bayesian quantile regression factor models","Description":"Package to fit Bayesian quantile regression models that\n assume a factor structure for at least part of the design\n matrix.","Published":"2010-09-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"FactorsR","Version":"1.1","Title":"Identification of the Factors Affecting Species Richness","Description":"It identifies the factors significantly related to species richness, and their relative contribution, using multiple regressions and support vector machine models. It uses an output file of 'ModestR' () with data of richness of the species and environmental variables in a cell size defined by the user. The residuals of the support vector machine model are shown on a map. 
Negative residuals may be potential areas with undiscovered and/or unregistered species, or areas with decreased species richness due to the negative effect of anthropogenic factors.","Published":"2017-02-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"factorstochvol","Version":"0.8.3","Title":"Bayesian Estimation of (Sparse) Latent Factor Stochastic\nVolatility Models","Description":"Markov chain Monte Carlo (MCMC) sampler for fully Bayesian\n estimation of latent factor stochastic volatility models.\n Sparsity can be achieved through the usage of Normal-Gamma priors\n on the factor loading matrix.","Published":"2016-12-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Factoshiny","Version":"1.0.5","Title":"Perform Factorial Analysis from 'FactoMineR' with a Shiny\nApplication","Description":"Perform factorial analysis with a menu and draw graphs interactively thanks to 'FactoMineR' and a Shiny application.","Published":"2016-11-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"FACTscorer","Version":"0.1.0","Title":"Scores the FACT and FACIT Family of Patient-Reported Outcome\nMeasures","Description":"Provides functions to score the Functional Assessment of Cancer\n Therapy (FACT) and Functional Assessment of Chronic Illness Therapy (FACIT)\n family of patient-reported outcome (PRO) measures. The questionnaires \n themselves can be downloaded from www.FACIT.org. For most of the FACIT \n questionnaires, FACIT.org provides scoring syntax for use with commercial \n statistical software (SAS and SPSS). The FACTscorer R package is intended \n to serve as a free, reliable alternative for those without access to SAS or \n SPSS. 
Additionally, it will allow R users to both score and analyze the \n FACT and FACIT scales in R, avoiding the time-consuming and error-prone \n process of transferring data back-and-forth between statistical software.\n Finally, use of the FACTscorer package will prevent many sources of scoring\n error common when using SAS and/or SPSS syntax (e.g., copy-paste errors and\n other accidental modifications to the syntax).","Published":"2015-01-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"factualR","Version":"0.5","Title":"thin wrapper for the Factual.com server API","Description":"Per the Factual.com website, \"Factual is a platform where\n anyone can share and mash open, living data on any subject.\"\n The data is in the form of tables and is accessible via REST\n API. The factualR package is a thin wrapper around the\n Factual.com API, to make it even easier for people working with\n R to explore Factual.com data sets.","Published":"2011-01-03","License":"Apache License 2.0","snapshot_date":"2017-06-23"} {"Package":"FADA","Version":"1.3.2","Title":"Variable Selection for Supervised Classification in High\nDimension","Description":"The functions provided in the FADA (Factor Adjusted Discriminant Analysis) package aim at performing supervised classification of high-dimensional and correlated profiles. The procedure combines a decorrelation step based on a \n factor modeling of the dependence among covariates and a classification method. The available methods are Lasso regularized logistic model\n (see Friedman et al. (2010)), sparse linear discriminant analysis (see\n Clemmensen et al. (2011)), shrinkage linear and diagonal discriminant\n analysis (see M. Ahdesmaki et al. (2010)). 
More methods of classification can be used on the decorrelated data provided by the package FADA.","Published":"2016-05-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"FAdist","Version":"2.2","Title":"Distributions that are Sometimes Used in Hydrology","Description":"Probability distributions that are sometimes useful in hydrology.","Published":"2015-09-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Fahrmeir","Version":"2016.5.31","Title":"Data from the Book \"Multivariate Statistical Modelling Based on\nGeneralized Linear Models\", First Edition, by Ludwig Fahrmeir\nand Gerhard Tutz","Description":"Data and functions for the book \"Multivariate Statistical \n Modelling Based on Generalized Linear Models\", first edition, by \n Ludwig Fahrmeir and Gerhard Tutz. Useful when using the book.","Published":"2016-05-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fail","Version":"1.3","Title":"File Abstraction Interface Layer (FAIL)","Description":"More comfortable interface to work with R data or source files\n in a key-value fashion.","Published":"2015-10-01","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"FAiR","Version":"0.4-15","Title":"Factor Analysis in R","Description":"This package estimates factor analysis models using a\n genetic algorithm, which permits a general mechanism for\n restricted optimization with arbitrary restrictions that are\n chosen at run time with the help of a GUI. Importantly,\n inequality restrictions can be imposed on functions of multiple\n parameters, which provides new avenues for testing and\n generating theories with factor analysis models. This package\n also includes an entirely new estimator of the common factor\n analysis model called semi-exploratory factor analysis, which\n is a general alternative to exploratory and confirmatory factor\n analysis. 
Finally, this package integrates many other\n packages that estimate sample covariance matrices and thus\n provides many alternatives to the traditional sample\n covariance calculation. Note that you need to have the Gtk run\n time library installed on your system to use this package; see\n the URL below for detailed installation instructions. Most\n users would only need to understand the first twenty-four pages\n of the PDF manual.","Published":"2014-02-08","License":"AGPL (>= 3) + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"faisalconjoint","Version":"1.15","Title":"Faisal Conjoint Model: A New Approach to Conjoint Analysis","Description":"It is used for systematic analysis of decisions based on attributes and their levels.","Published":"2015-02-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"fakeR","Version":"1.0","Title":"Simulates Data from a Data Frame of Different Variable Types","Description":"Generates fake data from a dataset of different variable types.\n The package contains the functions simulate_dataset and simulate_dataset_ts \n to simulate time-independent and time-dependent data. It randomly samples \n character and factor variables from contingency tables and numeric and \n ordered factors from a multivariate normal distribution. It currently supports the \n simulation of stationary and zero-inflated count time series. ","Published":"2016-05-26","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"falcon","Version":"0.2","Title":"Finding Allele-Specific Copy Number in Next-Generation\nSequencing Data","Description":"This is a method for Allele-specific DNA Copy Number Profiling using Next-Generation Sequencing. Given the allele-specific coverage at the variant loci, this program segments the genome into regions of homogeneous allele-specific copy number. It requires, as input, the read counts for each variant allele in a pair of case and control samples. 
For detection of somatic mutations, the case and control samples can be the tumor and normal sample from the same individual.","Published":"2016-04-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"falconx","Version":"0.2","Title":"Finding Allele-Specific Copy Number in Whole-Exome Sequencing\nData","Description":"This is a method for Allele-specific DNA Copy Number profiling for whole-Exome sequencing data. Given the allele-specific coverage and site biases at the variant loci, this program segments the genome into regions of homogeneous allele-specific copy number. It requires, as input, the read counts for each variant allele in a pair of case and control samples, as well as the site biases. For detection of somatic mutations, the case and control samples can be the tumor and normal sample from the same individual. The implemented method is based on the paper: Chen, H., Jiang, Y., Maxwell, K., Nathanson, K. and Zhang, N. (under review). Allele-specific copy number estimation by whole Exome sequencing.","Published":"2017-02-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fam2r","Version":"1.2","Title":"From 'Familias' to R","Description":"Functionality provided for conditional simulation, likelihoods and plotting of pedigrees, mostly as a wrapper for 'paramlink'. Users typically start by exporting from the Windows version of 'Familias'.","Published":"2017-03-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fame","Version":"2.21","Title":"Interface for FAME Time Series Database","Description":"Read and write FAME databases.","Published":"2015-07-12","License":"Unlimited","snapshot_date":"2017-06-23"} {"Package":"FamEvent","Version":"1.3","Title":"Family Age-at-Onset Data Simulation and Penetrance Estimation","Description":"Simulates age-at-onset traits associated with a segregating major gene in family data \n obtained from population-based, clinic-based, or multi-stage designs. 
Appropriate ascertainment \n correction is utilized to estimate age-dependent penetrance functions either parametrically from \n the fitted model or nonparametrically from the data. The Expectation-Maximization algorithm \n can infer missing genotypes and estimate carrier probabilities from the family's genotype and\n phenotype information or from a fitted model. Plot functions include pedigrees of simulated \n families and predicted penetrance curves based on specified parameter values.","Published":"2017-03-26","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"Familias","Version":"2.4","Title":"Probabilities for Pedigrees Given DNA Data","Description":"\n An interface to the core Familias functions (www.familias.name), \n which are programmed in C++.","Published":"2016-02-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"FAMILY","Version":"0.1.19","Title":"A Convex Formulation for Modeling Interactions with Strong\nHeredity","Description":"Fits penalized linear and logistic regression models with pairwise interaction terms.","Published":"2015-06-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"FAmle","Version":"1.3.5","Title":"Maximum Likelihood and Bayesian Estimation of Univariate\nProbability Distributions","Description":"Estimate parameters of univariate probability distributions \n with maximum likelihood and Bayesian methods.","Published":"2017-06-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"FAMT","Version":"2.5","Title":"Factor Analysis for Multiple Testing (FAMT) : simultaneous tests\nunder dependence in high-dimensional data","Description":"The method proposed in this package takes into account the impact of dependence on the multiple testing procedures for high-throughput data as proposed by Friguet et al. (2009). The common information shared by all the variables is modeled by a factor analysis structure. 
The number of factors considered in the model is chosen to reduce the variance of false discoveries in multiple tests. The model parameters are estimated with an EM algorithm. Adjusted test statistics are derived, as well as the associated p-values. The proportion of true null hypotheses (an important parameter when controlling the false discovery rate) is also estimated from the FAMT model. Graphics are proposed to interpret and describe the factors.","Published":"2014-01-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fanc","Version":"2.2","Title":"Penalized Likelihood Factor Analysis via Nonconvex Penalty","Description":"Computes the penalized maximum likelihood estimates of factor loadings and unique variances for various tuning parameters. Pathwise coordinate descent along with an EM algorithm is used. This package also includes a new graphical tool which outputs a path diagram, goodness-of-fit indices and model selection criteria for each regularization parameter. The user can change the regularization parameter by manipulating scrollbars, which is helpful for finding a suitable value of the regularization parameter.","Published":"2016-12-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fANCOVA","Version":"0.5-1","Title":"Nonparametric Analysis of Covariance","Description":"This package contains a collection of R functions to\n perform nonparametric analysis of covariance for regression\n curves or surfaces. Testing the equality or parallelism of\n nonparametric curves or surfaces is equivalent to analysis of\n variance (ANOVA) or analysis of covariance (ANCOVA) for\n one-sample functional data. 
Three different testing methods are\n available in the package, including one based on the L-2 distance,\n one based on an ANOVA statistic, and one based on variance\n estimators.","Published":"2010-10-20","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"fancycut","Version":"0.1.1","Title":"A Fancy Version of 'base::cut'","Description":"Provides the function fancycut() which is like cut() except\n you can mix left open and right open intervals with point values,\n intervals that are closed on both ends and intervals that are open on both ends.","Published":"2017-01-08","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"fanovaGraph","Version":"1.4.8","Title":"Building Kriging Models from FANOVA Graphs","Description":"Estimation and plotting of a function's FANOVA graph to identify the interaction structure, and fitting, prediction and simulation of a Kriging model modified by the identified structure. The interactive function plotManipulate() can only be run in the RStudio IDE with RStudio's package 'manipulate' loaded. RStudio is freely available (www.rstudio.org) and includes the package 'manipulate'. The equivalent function plotTk() relies on CRAN packages only.","Published":"2015-10-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"fanplot","Version":"3.4.1","Title":"Visualisation of Sequential Probability Distributions Using Fan\nCharts","Description":"Visualise sequential distributions using a range of plotting\n styles. Sequential distribution data can be input as either simulations or\n values corresponding to percentiles over time. Plots are added to\n existing graphic devices using the fan function. 
Users can choose from four\n different styles, including fan chart type plots, where a set of coloured\n polygons, with shadings corresponding to the percentile values, is layered\n to represent different uncertainty levels.","Published":"2015-10-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"FAOSTAT","Version":"2.0","Title":"Download Data from the FAOSTAT Database of the Food and\nAgricultural Organization (FAO) of the United Nations","Description":"A list of functions to download statistics from FAOSTAT (database\n of the Food and Agricultural Organization of the United Nations) and WDI\n (database of the World Bank), and to perform some harmonization operations.","Published":"2015-06-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"faoutlier","Version":"0.7.1","Title":"Influential Case Detection Methods for Factor Analysis and\nStructural Equation Models","Description":"Tools for detecting and summarizing influential cases that\n can affect exploratory and confirmatory factor analysis models as well as\n structural equation models more generally.","Published":"2017-06-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"far","Version":"0.6-5","Title":"Modelization for Functional AutoRegressive Processes","Description":"Modeling and prediction functions for\n Functional AutoRegressive processes using\n nonparametric methods: functional kernel,\n estimation of the covariance operator in\n a subspace, ...","Published":"2015-07-20","License":"LGPL-2.1","snapshot_date":"2017-06-23"} {"Package":"faraway","Version":"1.0.7","Title":"Functions and Datasets for Books by Julian Faraway","Description":"Books are \"Practical Regression and ANOVA in R\" on CRAN, \"Linear Models with R\" published 1st Ed. August 2004, 2nd Ed. July 2014 by CRC press, ISBN 9781439887332, and \"Extending the Linear Model with R\" published by CRC press in 1st Ed. December 2005 and 2nd Ed. 
March 2016, ISBN 9781584884248.","Published":"2016-02-15","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"farff","Version":"1.0","Title":"A Faster 'ARFF' File Reader and Writer","Description":"Reads and writes 'ARFF' files. 'ARFF' (Attribute-Relation File Format) files are like 'CSV' files, with a little bit of added meta information in a header and standardized NA values. They are quite often used for machine learning data sets and were introduced for the 'WEKA' machine learning 'Java' toolbox. See for further info on 'ARFF' and for more info on 'WEKA'. 'farff' gets rid of the 'Java' dependency that 'RWeka' enforces, and it is at least a faster reader (for bigger files). It uses 'readr' as parser back-end for the data section of the 'ARFF' file. Consistency with 'RWeka' is tested on 'Github' and 'Travis CI' with hundreds of 'ARFF' files from 'OpenML'. Note that the 'OpenML' package is currently only available from 'Github' at: .","Published":"2016-09-11","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"fArma","Version":"3010.79","Title":"ARMA Time Series Modelling","Description":"Environment for teaching \"Financial Engineering and\n Computational Finance\"","Published":"2013-06-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"farsi","Version":"1.0","Title":"Translate integers into persian","Description":"Allows numbers to be presented in a Persian language\n version","Published":"2013-03-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"fAsianOptions","Version":"3010.79","Title":"EBM and Asian Option Valuation","Description":"Environment for teaching \"Financial Engineering and\n Computational Finance\"","Published":"2013-06-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fasjem","Version":"1.1.0","Title":"A Fast and Scalable Joint Estimator for Learning Multiple\nRelated Sparse Gaussian Graphical Models","Description":"The FASJEM (A Fast and Scalable Joint 
Estimator for Learning Multiple Related Sparse Gaussian Graphical Models) is a joint estimator which is fast and scalable for learning multiple related sparse Gaussian graphical models. For more details, please see .","Published":"2017-05-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"fAssets","Version":"3011.83","Title":"Rmetrics - Analysing and Modelling Financial Assets","Description":"Environment for teaching \n \"Financial Engineering and Computational Finance\".","Published":"2014-10-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fast","Version":"0.64","Title":"Implementation of the Fourier Amplitude Sensitivity Test (FAST)","Description":"The Fourier Amplitude Sensitivity Test (FAST) is a method to determine global sensitivities of a model on parameter changes with relatively few model runs. This package implements this sensitivity analysis method.","Published":"2015-08-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"fastAdaboost","Version":"1.0.0","Title":"a Fast Implementation of Adaboost","Description":"Implements Adaboost based on C++ backend code.\n This is blazingly fast and especially useful for large, in memory data sets. \n The package uses decision trees as weak classifiers. Once the classifiers\n have been trained, they can be used to predict new data. \n Currently, we support only binary classification tasks.\n The package implements the Adaboost.M1 algorithm and the Real\n Adaboost (SAMME.R) algorithm.","Published":"2016-02-28","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"FastBandChol","Version":"0.1.1","Title":"Fast Estimation of a Covariance Matrix by Banding the Cholesky\nFactor","Description":"Fast and numerically stable estimation of a covariance matrix by banding the Cholesky factor using a modified Gram-Schmidt algorithm implemented in RcppArmadillo. See for details on the algorithm. 
","Published":"2015-08-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"fastclime","Version":"1.4.1","Title":"A Fast Solver for Parameterized LP Problems, Constrained L1\nMinimization Approach to Sparse Precision Matrix Estimation and\nDantzig Selector","Description":"Provides a method of recovering the precision matrix efficiently \n and solving for the dantzig selector by applying the parametric \n simplex method. The computation is based on a linear optimization\n solver. It also contains a generic LP solver and a parameterized LP \n solver using parametric simplex method.","Published":"2016-04-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"fastcluster","Version":"1.1.22","Title":"Fast Hierarchical Clustering Routines for R and Python","Description":"This is a two-in-one package which provides interfaces to\n both R and Python. It implements fast hierarchical, agglomerative\n clustering routines. Part of the functionality is designed as drop-in\n replacement for existing routines: linkage() in the SciPy package\n 'scipy.cluster.hierarchy', hclust() in R's 'stats' package, and the\n 'flashClust' package. It provides the same functionality with the\n benefit of a much faster implementation. Moreover, there are\n memory-saving routines for clustering of vector data, which go beyond\n what the existing packages provide. 
For information on how to install\n the Python files, see the file INSTALL in the source distribution.","Published":"2016-12-09","License":"FreeBSD | GPL-2 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"fastcmh","Version":"0.2.7","Title":"Significant Interval Discovery with Categorical Covariates","Description":"A method which uses the Cochran-Mantel-Haenszel test with significant pattern mining to detect intervals in binary genotype data which are significantly associated with a particular phenotype, while accounting for categorical covariates.","Published":"2016-09-13","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"fastdigest","Version":"0.6-3","Title":"Fast, Low Memory-Footprint Digests of R Objects","Description":"Provides an R interface to Bob Jenkins's streaming, \n non-cryptographic 'SpookyHash' hash algorithm for use in digest-based \n comparisons of R objects. 'fastdigest' plugs directly into R's internal \n serialization machinery, allowing digests of all R objects the serialize() \n function supports, including reference-style objects via custom hooks. Speed is\n high and scales linearly by object size; memory usage is constant and \n negligible.","Published":"2015-10-08","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"fastDummies","Version":"0.1.1","Title":"Fast Creation of Dummy (Binary) Columns from Categorical\nVariables","Description":"Creates dummy columns from columns that have categorical variables (character or factor types). You can also specify which columns to make dummies out of, or which columns to ignore. 
This package provides a significant speed increase over creating dummy variables through model.matrix().","Published":"2017-05-08","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"fasteraster","Version":"1.1.1","Title":"Raster Image Processing and Vector Recognition","Description":"If there is a need to recognise edges on a raster image, a bitmap, or any kind of matrix, one can find packages\n that do only 90-degree vectorization. Typically the nature of artefact images is linear and can be vectorized in a much more\n efficient way than drawing a series of 90-degree lines. The fasteraster package recognizes lines in only one pass.\n It also allows calculating the mass and the mass centers for the recognized zones or polygons. ","Published":"2017-03-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fastGHQuad","Version":"0.2","Title":"Fast Rcpp implementation of Gauss-Hermite quadrature","Description":"Fast, numerically-stable Gauss-Hermite quadrature","Published":"2014-08-14","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"FastGP","Version":"1.2","Title":"Efficiently Using Gaussian Processes with Rcpp and RcppEigen","Description":"Contains Rcpp and RcppEigen implementations of matrix operations useful for Gaussian process models, such as the inversion of a symmetric Toeplitz matrix, sampling from multivariate normal distributions, evaluation of the log-density of a multivariate normal vector, and Bayesian inference for latent variable Gaussian process models with elliptical slice sampling (Murray, Adams, and MacKay 2010).","Published":"2016-02-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"fastGraph","Version":"1.1","Title":"Fast Drawing and Shading of Graphs of Statistical Distributions","Description":"Provides functionality to produce graphs of probability density functions and cumulative distribution functions with few keystrokes, allows shading under the curve of the probability 
density function to illustrate concepts such as p-values and critical values, and fits a simple linear regression line on a scatter plot with the equation as the main title.","Published":"2016-07-07","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"FastHCS","Version":"0.0.5","Title":"Robust Algorithm for Principal Component Analysis","Description":"The FastHCS algorithm of Schmitt and Vakili (2014) for high-dimensional, robust PCA modelling and associated outlier detection and diagnostic tools.","Published":"2015-05-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fastHICA","Version":"1.0.2","Title":"Hierarchical Independent Component Analysis: a Multi-Scale\nSparse Non-Orthogonal Data-Driven Basis","Description":"It implements the HICA (Hierarchical Independent Component Analysis) algorithm. This approach, obtained through the integration between treelets and Independent Component Analysis, is able to provide a multi-scale non-orthogonal data-driven basis, whose elements have a phenomenological interpretation according to the problem under study.","Published":"2015-05-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fastHorseshoe","Version":"0.1.0","Title":"The Elliptical Slice Sampler for Bayesian Horseshoe Regression","Description":"The elliptical slice sampler for Bayesian shrinkage linear regression, such as the horseshoe, double-exponential and user-specified priors. 
","Published":"2016-11-29","License":"LGPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fastICA","Version":"1.2-1","Title":"FastICA Algorithms to Perform ICA and Projection Pursuit","Description":"Implementation of FastICA algorithm to perform Independent\n Component Analysis (ICA) and Projection Pursuit.","Published":"2017-06-12","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"FastImputation","Version":"2.0","Title":"Learn from Training Data then Quickly Fill in Missing Data","Description":"TrainFastImputation() uses training data to describe a\n multivariate normal distribution that the data approximates or\n can be transformed into approximating and stores this information\n as an object of class 'FastImputationPatterns'. FastImputation()\n function uses this 'FastImputationPatterns' object to impute (make\n a good guess at) missing data in a single line or a whole data frame\n of data. This approximates the process used by 'Amelia'\n but is much faster when\n filling in values for a single line of data.","Published":"2017-03-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fastJT","Version":"1.0.3","Title":"Efficient Jonckheere-Terpstra Test Statistics for Robust Machine\nLearning and Genome-Wide Association Studies","Description":"This 'Rcpp'-based package implements highly efficient functions for the calculation of the Jonckheere-Terpstra statistic. It can be used for a variety of applications, including feature selection in machine learning problems, or to conduct genome-wide association studies (GWAS) with multiple quantitative phenotypes. The code leverages 'OpenMP' directives for multi-core computing to reduce overall processing time. 
","Published":"2017-05-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"FastKM","Version":"1.0","Title":"A Fast Multiple-Kernel Method Based on a Low-Rank Approximation","Description":"A computationally efficient and statistically rigorous fast Kernel Machine method for multi-kernel analysis. The approach is based on a low-rank approximation to the nuisance effect kernel matrices. The algorithm is applicable to continuous, binary, and survival traits and is implemented using the existing single-kernel analysis software 'SKAT' and 'coxKM'. 'coxKM' can be obtained from http://www.hsph.harvard.edu/xlin/software.html.","Published":"2015-11-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"FastKNN","Version":"0.0.1","Title":"Fast k-Nearest Neighbors","Description":"Compute labels for a test set according to the k-Nearest Neighbors classification. This is a fast way to do k-Nearest Neighbors classification because the distance matrix -between the features of the observations- is an input to the function rather than being calculated in the function itself every time.","Published":"2015-02-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"fastLSU","Version":"0.1.0","Title":"Fast Linear Step Up Procedure of Benjamini–Hochberg FDR Method\nfor Huge-Scale Testing Problems","Description":"An efficient algorithm to apply the Benjamini–Hochberg Linear Step Up FDR controlling procedure in huge-scale testing problems (proposed in Vered Madar and Sandra Batista(2016) ). Unlike \"BH\" method, the package does not require any p value ordering. 
It also permits separating p values into computationally feasible chunks of arbitrary size and produces the same results as those from applying the linear step up BH procedure to the entire set of tests.","Published":"2016-09-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"fastM","Version":"0.0-2","Title":"Fast Computation of Multivariate M-estimators","Description":"The package implements a new algorithm for fast computation of M-scatter matrices using a partial Newton-Raphson procedure for several estimators. ","Published":"2014-04-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fastmatch","Version":"1.1-0","Title":"Fast match() function","Description":"Package providing a fast match() replacement for cases\n\tthat require repeated look-ups. It is slightly faster than R's\n\tbuilt-in match() function on first match against a table, but\n\textremely fast on any subsequent lookup as it keeps the hash\n\ttable in memory.","Published":"2017-01-28","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"fastnet","Version":"0.1.3","Title":"Large-Scale Social Network Analysis","Description":"We present an implementation of the algorithms required to simulate large-scale social networks and retrieve their most relevant metrics.","Published":"2017-06-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"FastPCS","Version":"0.1.2","Title":"FastPCS Robust Fit of Multivariate Location and Scatter","Description":"The FastPCS algorithm of Vakili and Schmitt (2014) for robust estimation of multivariate location and scatter and multivariate outlier detection. 
","Published":"2015-01-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fastpseudo","Version":"0.1","Title":"Fast Pseudo Observations","Description":"Computes pseudo-observations for survival analysis on right-censored data based on restricted mean survival time.","Published":"2015-02-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"fastqcr","Version":"0.1.0","Title":"Quality Control of Sequencing Data","Description":"'FASTQC' is the most widely used tool for evaluating the quality of high throughput sequencing data. \n It produces, for each sample, an html report and a compressed file containing the raw data. \n If you have hundreds of samples, you are not going to open up each 'HTML' page. \n You need some way of looking at these data in aggregate. \n 'fastqcr' Provides helper functions to easily parse, aggregate and analyze \n 'FastQC' reports for large numbers of samples. It provides a convenient solution for building \n a 'Multi-QC' report, as well as, a 'one-sample' report with result interpretations.","Published":"2017-04-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"fastR","Version":"0.10.2","Title":"Foundations and Applications of Statistics Using R","Description":"Data sets and utilities to accompany\n \"Foundations and Applications of Statistics: an Introduction\n using R\" (R Pruim, published by AMS, 2011), a text covering\n topics from probability and mathematical statistics at an advanced\n undergraduate level. 
R is integrated throughout, and access to all\n the R code in the book is provided via the snippet function.","Published":"2015-12-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"FastRCS","Version":"0.0.7","Title":"Fits the FastRCS Robust Multivariable Linear Regression Model","Description":"The FastRCS algorithm of Vakili and Schmitt (2014) for robust fit of the multivariable linear regression model and outlier detection.","Published":"2015-10-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"FastRWeb","Version":"1.1-1","Title":"Fast Interactive Framework for Web Scripting Using R","Description":"Infrastructure for creating rich, dynamic web content using R scripts while maintaining very fast response time.","Published":"2015-07-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"fastSOM","Version":"1.0.0","Title":"Fast Calculation of Spillover Measures","Description":"Functions for computing spillover measures, especially spillover\n tables and spillover indices, as well as their average, minimal, and maximal\n values.","Published":"2016-07-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fastTextR","Version":"1.0","Title":"An Interface to the 'fastText' Library","Description":"An interface to the 'fastText' library\n\t. 
The package\n\tcan be used for text classification and to learn word vectors.\n\tThe install folder contains the 'PATENTS' file.\n\tAn example of how to use 'fastTextR' can be found in the 'README' file.","Published":"2017-05-12","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"fasttime","Version":"1.0-2","Title":"Fast Utility Function for Time Parsing and Conversion","Description":"Fast functions for timestamp\n\tmanipulation that avoid system calls and take shortcuts\n\tto facilitate operations on very large data.","Published":"2016-10-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"fat2Lpoly","Version":"1.2.2","Title":"Two-Locus Family-Based Association Test with Polytomic Outcome","Description":"Performs family-based association tests with a polytomous outcome under 2-locus and 1-locus models\n defined by some design matrix. ","Published":"2015-10-21","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"FatTailsR","Version":"1.7-5","Title":"Kiener Distributions and Fat Tails in Finance","Description":"Kiener distributions K1, K2, K3, K4 and K7 to characterize\n distributions with left and right, symmetric or asymmetric fat tails in market\n finance, neuroscience and other disciplines. Two algorithms to estimate distribution\n parameters, quantiles, value-at-risk and expected\n shortfall with high accuracy. Includes power hyperbolas and power hyperbolic functions. ","Published":"2017-05-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"fauxpas","Version":"0.1.0","Title":"HTTP Error Helpers","Description":"HTTP error helpers. 
Methods included for general purpose HTTP \n error handling, as well as individual methods for every HTTP status\n code, both via status code numbers as well as their descriptive names.\n Supports the ability to adjust behavior to stop, message or warning.\n Includes the ability to use a custom whisker template to have any configuration\n of status code, short description, and verbose message. Currently \n supports integration with 'crul', 'curl', and 'httr'.","Published":"2016-11-16","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"favnums","Version":"1.0.0","Title":"A Dataset of Favourite Numbers","Description":"A dataset of favourite numbers, selected from an online poll of over 30,000 people by Alex Bellos\n (http://pages.bloomsbury.com/favouritenumber).","Published":"2015-07-22","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"FAwR","Version":"1.1.1","Title":"Functions and Datasets for \"Forest Analytics with R\"","Description":"Provides functions and datasets from the book \"Forest Analytics with R\".","Published":"2016-12-05","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"fbar","Version":"0.1.23","Title":"An Extensible Approach to Flux Balance Analysis","Description":"This is a simple package for Flux Balance Analysis and related\n metabolic modelling techniques. Functions are provided for: parsing\n models in tabular format, converting parsed metabolic models to input\n formats for common linear programming solvers, and\n evaluating and applying gene-protein-reaction mappings. In addition, there\n are wrappers to parse a model, select a solver, find the metabolic fluxes,\n and return the results applied to the original model. 
Compared to other\n packages in this field, this package puts a much heavier focus on\n providing reusable components that can be used in the design and\n implementation of new techniques, in particular those that involve large\n parameter sweeps.","Published":"2017-01-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"fBasics","Version":"3011.87","Title":"Rmetrics - Markets and Basic Statistics","Description":"Environment for teaching \n\t\"Financial Engineering and Computational Finance\".","Published":"2014-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fbati","Version":"1.0-1.1","Title":"Gene by Environment Interaction and Conditional Gene Tests for\nNuclear Families","Description":"Does family-based gene by environment interaction tests, joint gene, gene-environment interaction test, and a test of a set of genes conditional on another set of genes.","Published":"2016-07-10","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"FBFsearch","Version":"1.1","Title":"Algorithm for Searching the Space of Gaussian Directed Acyclic\nGraph Models Through Moment Fractional Bayes Factors","Description":"We propose an objective Bayesian algorithm for searching the space of Gaussian directed acyclic graph (DAG) models. The algorithm proposed makes use of moment fractional Bayes factors (MFBF) and thus it is suitable for learning sparse graphs. The algorithm is implemented by using Armadillo: an open-source C++ linear algebra library. 
","Published":"2016-11-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"FBN","Version":"1.5.1","Title":"FISH Based Normalization and Copy Number inference of SNP\nmicroarray data","Description":"Normalizes the data from a file containing the raw values\n of the SNP probes of microarray data by using the FISH probes\n and their corresponding CNs.","Published":"2012-03-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fBonds","Version":"3010.77","Title":"Bonds and Interest Rate Models","Description":"Environment for teaching \"Financial Engineering and\n Computational Finance\"","Published":"2013-12-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fbRads","Version":"0.2","Title":"Analyzing and Managing Facebook Ads from R","Description":"Wrapper functions around the Facebook Marketing 'API' to create, read, update and delete custom audiences, images, campaigns, ad sets, ads and related content.","Published":"2016-04-06","License":"AGPL-3","snapshot_date":"2017-06-23"} {"Package":"fbRanks","Version":"2.0","Title":"Association Football (Soccer) Ranking via Poisson Regression","Description":"This package uses time dependent Poisson regression and a record of goals scored in matches to rank teams via estimated attack and defense strengths. The statistical model is based on Dixon and Coles (1997) Modeling Association Football Scores and Inefficiencies in the Football Betting Market, Applied Statistics, Volume 46, Issue 2, 265-280. The package has some webscrapers to assist in the development and updating of a match database. If the match database contains unconnected clusters (i.e. sets of teams that have only played each other and not played teams from other sets), each cluster is ranked separately relative to the median team strength in the cluster. The package contains functions for predicting and simulating tournaments and leagues from estimated models. 
The package allows fitting via the glm(), speedglm(), and glmnet() functions. The latter allows fast and efficient fitting of very large numbers of teams. The fitting algorithm will analyze the match data and determine which teams form a cluster (a set of teams where there is a path of matches connecting every team) and fit each cluster separately.","Published":"2013-10-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"fbroc","Version":"0.4.0","Title":"Fast Algorithms to Bootstrap Receiver Operating Characteristics\nCurves","Description":"Implements a very fast C++ algorithm to quickly bootstrap receiver\n operating characteristics (ROC) curves and derived performance metrics,\n including the area under the curve (AUC) and the partial area under the curve as well as \n the true and false positive rate. The analysis of paired receiver operating curves is supported as well,\n so that a comparison of two predictors is possible. You can also plot the\n results and calculate confidence intervals. On a typical desktop computer, \n the calculation of 100000 bootstrap replicates given 500 observations takes on the\n order of one second.","Published":"2016-06-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"fcd","Version":"0.1","Title":"Fused Community Detection","Description":"Efficient procedures for community detection in network studies, especially for sparse networks with not very obvious community structure. 
The algorithms impose penalties on the differences of the coordinates which represent the community labels of the nodes.","Published":"2013-12-15","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"fCertificates","Version":"0.5-4","Title":"Basics of Certificates and Structured Products Valuation","Description":"Collection of pricing by duplication methods for popular structured products (\"Zertifikate\").","Published":"2015-04-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"FCGR","Version":"1.0-0","Title":"Fatigue Crack Growth in Reliability","Description":"Fatigue Crack Growth in Reliability estimates the distribution\n of material lifetime due to mechanical fatigue efforts. The FCGR\n package provides simultaneous fitting of crack growth curves to \n different specimens of materials under mechanical stress efforts. \n Linear mixed-effects models (LME) with smoothing B-Splines \n and the linearized Paris-Erdogan law are applied. Once failure is defined\n for a given crack length, the distribution function\n of failure times to fatigue is obtained. The density function is\n estimated by applying nonparametric binned kernel density estimate \n (bkde) and the kernel estimator of the distribution function (kde). \n The results of the Pinheiro and Bates method based on nonlinear \n mixed-effects regression (nlme) can also be retrieved. 
The package \n contains the crack.growth, PLOT.cg, IB.F, and Alea.A (database) functions.","Published":"2015-10-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fclust","Version":"1.1.2","Title":"Fuzzy Clustering","Description":"Algorithms for fuzzy clustering, cluster validity indices and plots for cluster validity and visualizing fuzzy clustering results.","Published":"2015-09-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fcm","Version":"0.1.1","Title":"Inference of Fuzzy Cognitive Maps (FCMs)","Description":"Provides a selection of 6 different inference rules and 4 threshold functions in order to obtain the inference of the FCM (Fuzzy Cognitive Map). Moreover, the 'fcm' package returns a data frame of the concepts' values of each state after the inference procedure. Fuzzy cognitive maps were introduced by Kosko (1986) providing ideal causal cognition tools for modeling and simulating dynamic systems.","Published":"2017-05-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"FCMapper","Version":"1.1","Title":"Fuzzy Cognitive Mapping","Description":"Provides several functions to create and manipulate fuzzy\n cognitive maps. It is based on 'FCMapper' for Excel, distributed at , developed by Michael Bachhofer and Martin Wildenberg.\n Maps are inputted as adjacency matrices. Attributes of the maps and the\n equilibrium values of the concepts (including with user-defined constrained\n values) can be calculated. The maps can be graphed with a function that calls\n 'igraph'. Multiple maps with shared concepts can be aggregated.","Published":"2016-02-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"FCNN4R","Version":"0.6.2","Title":"Fast Compressed Neural Networks for R","Description":"Provides an interface to kernel routines from the FCNN C++ library.\n FCNN is based on a completely new Artificial Neural Network representation that\n offers unmatched efficiency, modularity, and extensibility. 
FCNN4R provides\n standard teaching (backpropagation, Rprop, simulated annealing, stochastic\n gradient) and pruning algorithms (minimum magnitude, Optimal Brain Surgeon),\n but it is first and foremost an efficient computational engine. Users can\n easily implement their algorithms by taking advantage of fast gradient computing\n routines, as well as network reconstruction functionality (removing weights\n and redundant neurons, reordering inputs, merging networks). Networks can be\n exported to C functions in order to integrate them into virtually any software\n solution.","Published":"2016-03-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fCopulae","Version":"3011.81","Title":"Rmetrics - Bivariate Dependence Structures with Copulae","Description":"Environment for teaching\n\t\"Financial Engineering and Computational Finance\".","Published":"2014-09-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fcros","Version":"1.5.4","Title":"A Method to Search for Differentially Expressed Genes and to\nDetect Recurrent Chromosomal Copy Number Aberrations","Description":"A fold change rank based method is presented to search for genes with changing\n expression and to detect recurrent chromosomal copy number aberrations. This \n method may be useful for high-throughput biological data (micro-array, sequencing, ...).\n Probabilities are associated with genes or probes in the data set and there is no\n problem of multiple tests when using this method. For array-based comparative genomic\n hybridization data, segmentation results are obtained by merging the significant\n probes detected.","Published":"2017-04-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"FD","Version":"1.0-12","Title":"Measuring functional diversity (FD) from multiple traits, and\nother tools for functional ecology","Description":"FD is a package to compute different multidimensional FD indices. 
It implements a distance-based framework to measure FD that allows any number and type of functional traits, and can also consider species relative abundances. It also contains other useful tools for functional ecology.","Published":"2014-08-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"fda","Version":"2.4.4","Title":"Functional Data Analysis","Description":"These functions were developed to support functional data\n analysis as described in Ramsay, J. O. and Silverman, B. W.\n (2005) Functional Data Analysis. New York: Springer. They were\n ported from earlier versions in Matlab and S-PLUS. An\n introduction appears in Ramsay, J. O., Hooker, Giles, and\n Graves, Spencer (2009) Functional Data Analysis with R and\n Matlab (Springer). The package includes data sets and script\n files working many examples including all but one of the 76\n figures in this latter book. Matlab versions of the code and\n sample analyses are no longer distributed through CRAN, as they\n were when the book was published. For those, ftp from\n http://www.psych.mcgill.ca/misc/fda/downloads/FDAfuns/\n There you find a set of .zip files containing the functions and\n sample analyses, as well as two .txt files giving instructions for\n installation and some additional information.\n The changes from Version 2.4.1 are fixes of bugs in density.fd and\n removal of functions create.polynomial.basis, polynompen, and \n polynomial. 
These were deleted because the monomial basis\n does the same thing and because there were errors in the code.","Published":"2014-12-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fda.usc","Version":"1.3.0","Title":"Functional Data Analysis and Utilities for Statistical Computing","Description":"Routines for exploratory and descriptive analysis of functional data such as depth measurements, atypical curves detection, regression models, supervised classification, unsupervised classification and functional analysis of variance.","Published":"2016-11-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"fdakma","Version":"1.2.1","Title":"Functional Data Analysis: K-Mean Alignment","Description":"It performs simultaneously clustering and alignment of a multidimensional or unidimensional functional dataset by means of k-mean alignment.","Published":"2015-05-29","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"fdaMixed","Version":"0.5","Title":"Functional Data Analysis in a Mixed Model Framework","Description":"Likelihood based analysis of 1-dimension functional data\n in a mixed-effects model framework. Matrix computation are\n approximated by semi-explicit operator equivalents with linear\n computational complexity.","Published":"2017-03-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"fdapace","Version":"0.3.0","Title":"Functional Data Analysis and Empirical Dynamics","Description":"Provides implementation of various methods of Functional Data Analysis (FDA) and Empirical Dynamics. The core of this package is Functional Principal Component Analysis (FPCA), a key technique for functional data analysis, for sparsely or densely sampled random trajectories and time courses, via the Principal Analysis by Conditional Estimation (PACE) algorithm or numerical integration. PACE is useful for the analysis of data that have been generated by a sample of underlying (but usually not fully observed) random trajectories. 
It does not rely on pre-smoothing of trajectories, which is problematic if functional data are sparsely sampled. PACE provides options for functional regression and correlation, for Longitudinal Data Analysis, the analysis of stochastic processes from samples of realized trajectories, and for the analysis of underlying dynamics. The core computational algorithms are implemented using the 'Eigen' C++ library for numerical linear algebra and 'RcppEigen' \"glue\".","Published":"2017-01-25","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"fdaPDE","Version":"0.1-4","Title":"Functional Data Analysis and Partial Differential Equations;\nStatistical Analysis of Functional and Spatial Data, Based on\nRegression with Partial Differential Regularizations","Description":"An implementation of regression models with partial differential regularizations, making use of the Finite Element Method. The models efficiently handle data distributed over irregularly shaped domains and can comply with various conditions at the boundaries of the domain. A priori information about the spatial structure of the phenomenon under study can be incorporated in the model via the differential regularization.","Published":"2016-04-23","License":"CC BY-NC-SA 4.0","snapshot_date":"2017-06-23"} {"Package":"fdasrvf","Version":"1.8.1","Title":"Elastic Functional Data Analysis","Description":"Performs alignment, PCA, and modeling of multidimensional and\n unidimensional functions using the square-root velocity framework\n (Srivastava et al., 2011 and\n Tucker et al., 2014 ). 
This framework\n allows for elastic analysis of functional data through phase and\n amplitude separation.","Published":"2017-04-30","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"fdatest","Version":"2.1","Title":"Interval Testing Procedure for Functional Data","Description":"Implementation of the Interval Testing Procedure for functional data in different frameworks (i.e., one or two-population frameworks, functional linear models) by means of different basis expansions (i.e., B-spline, Fourier, and phase-amplitude Fourier). The current version of the package requires functional data evaluated on a uniform grid; it automatically projects each function on a chosen functional basis; it performs the entire family of multivariate tests; and, finally, it provides the matrix of the p-values of the previous tests and the vector of the corrected p-values. The functional basis, the coupled or uncoupled scenario, and the kind of test can be chosen by the user. The package provides also a plotting function creating a graphical output of the procedure: the p-value heat-map, the plot of the corrected p-values, and the plot of the functional data.","Published":"2015-02-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"FDboost","Version":"0.3-0","Title":"Boosting Functional Regression Models","Description":"Regression models for functional data, i.e., scalar-on-function,\n function-on-scalar and function-on-function regression models, are fitted\n by a component-wise gradient boosting algorithm.","Published":"2017-05-31","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"fdcov","Version":"1.0.0","Title":"Analysis of Covariance Operators","Description":"Provides a variety of tools for the analysis of covariance operators.","Published":"2016-06-16","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"FDGcopulas","Version":"1.0","Title":"Multivariate Dependence with FDG Copulas","Description":"FDG copulas are a class of copulas featuring an 
interesting balance between flexibility and tractability. This package provides tools to construct, calculate the pairwise dependence coefficients of, simulate from, and fit FDG copulas. The acronym FDG stands for 'one-Factor with Durante Generators', as an FDG copula is a one-factor copula -- that is, the variables are independent given a latent factor -- whose linking copulas belong to the Durante class of bivariate copulas (also referred to as exchangeable Marshall-Olkin or semilinear copulas).","Published":"2014-10-22","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"fdq","Version":"0.2","Title":"Forest Data Quality","Description":"Contains methods for the analysis of forest databases; the purpose of the analyses is to evaluate \n the quality of the data present in the databases, focusing on the dimensions of consistency, punctuality \n and completeness. Databases can range from forest inventory data to growth model data. The package has \n methods to work with large volumes of data quickly; in addition, for certain analyses it is possible to \n generate graphs for better understanding and reporting of the results.","Published":"2016-12-11","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"fdrci","Version":"2.1","Title":"Permutation-Based FDR Point and Confidence Interval Estimation","Description":"FDR functions for permutation-based estimators, including pi0 as well as FDR\n confidence intervals. 
The confidence intervals account for dependencies between\n tests by the incorporation of an overdispersion parameter, which is estimated\n from the permuted data.","Published":"2016-11-15","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"fdrDiscreteNull","Version":"1.0","Title":"False Discovery Rate Procedure Under Discrete Null Distributions","Description":"It is known that current false discovery rate (FDR) procedures can be very conservative\n when applied to p-values (and test statistics) with discrete (and heterogeneous) null distributions.\n This package implements the more powerful weighted generalized FDR procedure that adapts to these two features of the discrete paradigm for multiple testing. \n The package takes in the original data set rather than the p-values in order to carry out the adjustments needed for multiple testing in this paradigm. The methodology applies also to multiple testing where the null p-values are uniformly distributed.\n The package implements the method for three types of test statistics and their p-values:\n (a) the binomial test of whether two independent Poisson distributions have the same means, (b) Fisher's exact test of whether the conditional distribution is the same as the marginal distribution for two binomial distributions,\n (c) the exact negative binomial test of whether two independent negative binomial distributions with the same size parameter have the same means. 
\n It depends on the R package ``MCMCpack'' for its function ``dnoncenhypergeom'' for hypergeometric distributions, and on ``edgeR'' for its normalization techniques for data that follow negative binomial distributions.","Published":"2015-02-16","License":"LGPL","snapshot_date":"2017-06-23"} {"Package":"FDRreg","Version":"0.1","Title":"False discovery rate regression","Description":"Tools for FDR problems, including false discovery rate regression.\n See corresponding paper: \"False discovery rate regression: application to\n neural synchrony detection in primary visual cortex.\" James G. Scott, Ryan\n C. Kelly, Matthew A. Smith, Robert E. Kass.","Published":"2014-03-05","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"FDRsampsize","Version":"1.0","Title":"Compute Sample Size that Meets Requirements for Average Power\nand FDR","Description":"Defines a collection of functions to compute average power and sample size for studies that use the false discovery rate as the final measure of statistical significance.","Published":"2016-01-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"fdrtool","Version":"1.2.15","Title":"Estimation of (Local) False Discovery Rates and Higher Criticism","Description":"Estimates both tail area-based false \n discovery rates (Fdr) as well as local false discovery rates (fdr) for a \n variety of null models (p-values, z-scores, correlation coefficients,\n t-scores). The proportion of null values and the parameters of the null \n distribution are adaptively estimated from the data. 
In addition, the package \n contains functions for non-parametric density estimation (Grenander estimator), \n for monotone regression (isotonic regression and antitonic regression with weights),\n for computing the greatest convex minorant (GCM) and the least concave majorant (LCM), \n for the half-normal and correlation distributions, and for computing\n empirical higher criticism (HC) scores and the corresponding decision threshold.","Published":"2015-07-08","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"fds","Version":"1.7","Title":"Functional data sets","Description":"Functional data sets","Published":"2013-08-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fdth","Version":"1.2-1","Title":"Frequency Distribution Tables, Histograms and Polygons","Description":"Produces frequency distribution tables, associated histograms\n and polygons from vector, data.frame and matrix objects for\n numerical and categorical variables.","Published":"2015-04-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"FeaLect","Version":"1.10","Title":"Scores Features for Feature Selection","Description":"For each feature, a score is computed that can be useful\n for feature selection. Several random subsets are sampled from\n the input data and for each random subset, various linear\n models are fitted using the lars method. A score is assigned to\n each feature based on the tendency of LASSO to include that\n feature in the models. Finally, the average score and the models\n are returned as the output. The features with relatively low\n scores are recommended to be ignored because they can lead to\n overfitting of the model to the training data. Moreover, for\n each random subset, the best set of features in terms of global\n error is returned. 
They are useful for applying Bolasso, the\n alternative feature selection method that recommends the\n intersection of feature subsets.","Published":"2015-05-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"feather","Version":"0.3.1","Title":"R Bindings to the Feather 'API'","Description":"Read and write feather files, a lightweight binary columnar\n data store designed for maximum speed.","Published":"2016-11-09","License":"Apache License 2.0","snapshot_date":"2017-06-23"} {"Package":"feature","Version":"1.2.13","Title":"Local Inferential Feature Significance for Multivariate Kernel\nDensity Estimation","Description":"Local inferential feature significance for multivariate kernel density estimation.","Published":"2015-10-26","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"FeatureHashing","Version":"0.9.1.1","Title":"Creates a Model Matrix via Feature Hashing with a Formula\nInterface","Description":"Feature hashing, also called the hashing trick, is a method to transform \n features of an instance to a vector. Thus, it is a method to transform a real dataset to a matrix. \n Without looking up the indices in an associative array, \n it applies a hash function to the features and uses their hash values as indices directly.\n The method of feature hashing in this package was proposed in Weinberger et al. (2009). \n The hashing algorithm is the murmurhash3 from the digest package. \n Please see the README in https://github.com/wush978/FeatureHashing for more information.","Published":"2015-10-18","License":"GPL (>= 3) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"features","Version":"2015.12-1","Title":"Feature Extraction for Discretely-Sampled Functional Data","Description":"A discretely-sampled function is first smoothed. Features\n of the smoothed function are then extracted. Some of the key\n features include mean value, first and second derivatives,\n critical points (i.e. 
local maxima and minima), curvature of\n the function at critical points, wiggliness of the function, noise\n in data, and outliers in data.","Published":"2015-12-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"featurizer","Version":"0.2","Title":"Some Helper Functions that Help Create Features from Data","Description":"A collection of functions that would help one to build features based on external data. Very useful for Data Scientists in day to day work. Many functions create features using parallel computation. Since the nitty gritty of parallel computation is hidden under the hood, the user need not worry about creating clusters and shutting them down.","Published":"2017-06-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fechner","Version":"1.0-3","Title":"Fechnerian Scaling of Discrete Object Sets","Description":"Functions and example datasets for Fechnerian scaling of discrete\n object sets. Users can compute Fechnerian distances among objects representing\n subjective dissimilarities, and other related information. See\n package?fechner for an overview.","Published":"2016-06-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fecR","Version":"0.0.1","Title":"Fishing Effort Calculator in R","Description":"Calculates fishing effort following the DG MARE Ad-Hoc Workshops on Transversal Variables in Zagreb (2015) and Nicosia (2016).","Published":"2016-11-03","License":"EUPL","snapshot_date":"2017-06-23"} {"Package":"FedData","Version":"2.4.5","Title":"Functions to Automate Downloading Geospatial Data Available from\nSeveral Federated Data Sources","Description":"Functions to automate downloading geospatial data available from\n several federated data sources (mainly sources maintained by the US Federal\n government). 
Currently, the package enables extraction from six datasets:\n The National Elevation Dataset digital elevation models (1 and 1/3 arc-second;\n USGS); The National Hydrography Dataset (USGS); The Soil Survey Geographic\n (SSURGO) database from the National Cooperative Soil Survey (NCSS), which is\n led by the Natural Resources Conservation Service (NRCS) under the USDA; the\n Global Historical Climatology Network (GHCN), coordinated by National Climatic\n Data Center at NOAA; the Daymet gridded estimates of daily weather parameters \n for North America, version 3, available from the Oak Ridge National Laboratory's\n Distributed Active Archive Center (DAAC); and the International Tree Ring Data Bank.","Published":"2017-03-01","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"federalregister","Version":"0.2.0","Title":"Client Package for the U.S. Federal Register API","Description":"Access data from the Federal Register API .","Published":"2015-12-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"FeedbackTS","Version":"1.4","Title":"Analysis of Feedback in Time Series","Description":"Analysis of fragmented time directionality to investigate feedback in time series. 
Tools provided by the package allow the analysis of feedback for a single time series and the analysis of feedback for a set of time series collected across a spatial domain.","Published":"2016-05-10","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"feedeR","Version":"0.0.7","Title":"Read RSS/Atom Feeds from R","Description":"Retrieve data from RSS/Atom feeds.","Published":"2016-10-25","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"FENmlm","Version":"1.0","Title":"Fixed Effects Nonlinear Maximum Likelihood Models","Description":"Efficient estimation of fixed-effect maximum likelihood models with, possibly, non-linear right hand sides.","Published":"2015-10-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fermicatsR","Version":"1.4","Title":"Fermi Large Area Telescope Catalogs","Description":"Data from various catalogs of astrophysical gamma-ray sources\n detected by NASA's Large Area Telescope (The Astrophysical Journal, 697, 1071,\n 2009 June 1), on board the Fermi gamma-ray satellite. More information on\n Fermi and its data products is available from the Fermi Science Support Center\n (http://fermi.gsfc.nasa.gov/ssc/).","Published":"2016-03-12","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"fetchR","Version":"2.1-0","Title":"Calculate Wind Fetch","Description":"Wind fetch is the unobstructed length of water over which wind can\n blow from a certain direction. The wind fetch is typically calculated for many directions\n around the compass rose for a given location, which can then be incorporated \n into a larger model (such as the InVEST coastal vulnerability model;\n ),\n or simply averaged for a reasonable measure of \n the overall wind exposure for a specific marine location. The process of calculating\n wind fetch can be extremely time-consuming and tedious, particularly if a large\n number of fetch vectors are required at many locations. 
The 'fetchR' package\n calculates wind fetch and summarises the information efficiently. There are \n also plot methods to help visualise the wind exposure at the various\n locations, and methods to output the fetch vectors to a KML file for further\n investigation.","Published":"2017-04-28","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"fExoticOptions","Version":"2152.78","Title":"Exotic Option Valuation","Description":"Environment for teaching \"Financial Engineering and\n Computational Finance\"","Published":"2012-11-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fExpressCertificates","Version":"1.2","Title":"fExpressCertificates - Structured Products Valuation for\nExpressCertificates/Autocallables","Description":"Collection of pricing by duplication and Monte Carlo methods for Express Certificates products (also known as Autocallables)","Published":"2013-08-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fExtremes","Version":"3010.81","Title":"Rmetrics - Extreme Financial Market Data","Description":"Environment for teaching \"Financial Engineering and\n Computational Finance\"","Published":"2013-12-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ff","Version":"2.2-13","Title":"memory-efficient storage of large data on disk and fast access\nfunctions","Description":"The ff package provides data structures that are stored on\n\tdisk but behave (almost) as if they were in RAM by transparently \n\tmapping only a section (pagesize) in main memory - the effective \n\tvirtual memory consumption per ff object. ff supports R's standard \n\tatomic data types 'double', 'logical', 'raw' and 'integer' and \n\tnon-standard atomic types boolean (1 bit), quad (2 bit unsigned), \n\tnibble (4 bit unsigned), byte (1 byte signed with NAs), ubyte (1 byte \n\tunsigned), short (2 byte signed with NAs), ushort (2 byte unsigned), \n\tsingle (4 byte float with NAs). 
For example 'quad' allows efficient \n\tstorage of genomic data as an 'A','T','G','C' factor. The unsigned \n\ttypes support 'circular' arithmetic. There is also support for \n\tclose-to-atomic types 'factor', 'ordered', 'POSIXct', 'Date' and \n\tcustom close-to-atomic types. \n\tff not only has native C-support for vectors, matrices and arrays \n\twith flexible dimorder (major column-order, major row-order and \n\tgeneralizations for arrays). There is also a ffdf class not unlike \n\tdata.frames and import/export filters for csv files.\n\tff objects store raw data in binary flat files in native encoding,\n\tand complement this with metadata stored in R as physical and virtual\n\tattributes. ff objects have well-defined hybrid copying semantics, \n\twhich gives rise to certain performance improvements through \n\tvirtualization. ff objects can be stored and reopened across R \n\tsessions. ff files can be shared by multiple ff R objects \n\t(using different data en/de-coding schemes) in the same process \n\tor from multiple R processes to exploit parallelism. A wide choice of \n\tfinalizer options allows to work with 'permanent' files as well as \n\tcreating/removing 'temporary' ff files completely transparent to the \n\tuser. On certain OS/Filesystem combinations, creating the ff files\n\tworks without notable delay thanks to using sparse file allocation.\n\tSeveral access optimization techniques such as Hybrid Index \n\tPreprocessing and Virtualization are implemented to achieve good \n\tperformance even with large datasets, for example virtual matrix \n\ttranspose without touching a single byte on disk. Further, to reduce \n\tdisk I/O, 'logicals' and non-standard data types get stored native and \n\tcompact on binary flat files i.e. logicals take up exactly 2 bits to \n\trepresent TRUE, FALSE and NA. 
\n\tBeyond basic access functions, the ff package also provides \n\tcompatibility functions that facilitate writing code for ff and ram \n\tobjects and support for batch processing on ff objects (e.g. as.ram, \n\tas.ff, ffapply). ff interfaces closely with functionality from package \n\t'bit': chunked looping, fast bit operations and coercions between \n\tdifferent objects that can store subscript information ('bit', \n\t'bitwhich', ff 'boolean', ri range index, hi hybrid index). This allows\n\tto work interactively with selections of large datasets and quickly \n\tmodify selection criteria. \n\tFurther high-performance enhancements can be made available upon request. ","Published":"2014-04-09","License":"GPL-2 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ffbase","Version":"0.12.3","Title":"Basic Statistical Functions for Package 'ff'","Description":"Extends the out of memory vectors of 'ff' with\n statistical functions and other utilities to ease their usage.","Published":"2016-03-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"FFD","Version":"1.0-6","Title":"Freedom from Disease","Description":"Functions, S4 classes/methods and a graphical user interface (GUI) to design surveys to substantiate freedom from disease using a modified hypergeometric function (see Cameron and Baldock, 1997). Herd sensitivities are computed according to sampling strategies \"individual sampling\" or \"limited sampling\" (see M. Ziller, T. Selhorst, J. Teuffert, M. Kramer and H. Schlueter, 2002). Methods to compute the a-posteriori alpha-error are implemented. 
Risk-based targeted sampling is supported.","Published":"2015-02-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"FField","Version":"0.1.0","Title":"Force field simulation for a set of points","Description":"Force field simulation of the interaction of a set of points.\n Very useful for placing text labels on graphs, such as\n scatterplots.","Published":"2013-06-28","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ffmanova","Version":"0.2-2","Title":"Fifty-fifty MANOVA","Description":"This package performs general linear modeling with\n multiple responses (MANCOVA). An overall p-value for each\n model term is calculated by the 50-50 MANOVA method, which\n handles collinear responses. Rotation testing is used to\n compute adjusted single response p-values according to\n familywise error rates and false discovery rates.","Published":"2012-12-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ffstream","Version":"0.1.5","Title":"Forgetting Factor Methods for Change Detection in Streaming Data","Description":"An implementation of the adaptive forgetting factor scheme described in Bodenham and Adams (2016) which adaptively estimates the mean and variance of a stream in order to detect multiple changepoints in streaming data. The implementation is in C++ and uses Rcpp. Additionally, implementations of the fixed forgetting factor scheme from the same paper, as well as the classic CUSUM and EWMA methods, are included.","Published":"2016-11-22","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"FFTrees","Version":"1.2.3","Title":"Generate, Visualise, and Compare Fast and Frugal Decision Trees","Description":"Create, visualise, and test fast and frugal decision trees (FFTrees). FFTrees are very simple decision trees for\n classifying cases (e.g., breast cancer patients) into one of two classes (e.g.,\n no cancer vs. true cancer) based on a small number of cues (e.g., test results). 
FFTrees can be preferable to more complex algorithms because they are easy to communicate, require very little information, and are\n robust against overfitting.","Published":"2017-05-04","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"fftw","Version":"1.0-4","Title":"Fast FFT and DCT Based on the FFTW Library","Description":"Provides a simple and efficient wrapper around the fastest\n Fourier transform in the west (FFTW) library.","Published":"2017-05-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"fftwtools","Version":"0.9-8","Title":"Wrapper for 'FFTW3' Includes: One-Dimensional Univariate,\nOne-Dimensional Multivariate, and Two-Dimensional Transform","Description":"Provides a wrapper for several 'FFTW' functions. This package provides access to the two-dimensional 'FFT', the multivariate 'FFT', and the one-dimensional real to complex 'FFT' using the 'FFTW3' library. The package includes the functions fftw() and mvfftw() which are designed to mimic the functionality of the R functions fft() and mvfft(). The 'FFT' functions have a parameter that allows them to not return the redundant complex conjugate when the input is real data. ","Published":"2017-03-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fgac","Version":"0.6-1","Title":"Generalized Archimedean Copula","Description":"Bi-variate data fitting is done by two stochastic\n components: the marginal distributions and the dependency\n structure. The dependency structure is modeled through a\n copula. 
An algorithm was implemented considering seven families\n of copulas (Generalized Archimedean Copulas); the best fit\n can be obtained by examining all copula options (totally positive\n of order 2 and stochastically increasing models).","Published":"2012-10-29","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"FGalgorithm","Version":"1.0","Title":"Flury and Gautschi algorithms","Description":"This is a package for implementation of the Flury-Gautschi\n algorithms.","Published":"2013-06-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fGarch","Version":"3010.82.1","Title":"Rmetrics - Autoregressive Conditional Heteroskedastic Modelling","Description":"Environment for teaching \"Financial Engineering and\n Computational Finance\".","Published":"2016-08-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Fgmutils","Version":"0.9.4","Title":"Forest Growth Model Utilities","Description":"Growth models and forest production require existing data\n manipulation and the creation of new data, structured from basic forest\n inventory data. The purpose of this package is to provide functions to support\n these activities.","Published":"2016-10-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"FGN","Version":"2.0-12","Title":"Fractional Gaussian Noise and power law decay time series model\nfitting","Description":"Exact MLE and Whittle MLE estimation for power law decay models.","Published":"2014-05-16","License":"CC BY-NC-SA 3.0","snapshot_date":"2017-06-23"} {"Package":"fgof","Version":"0.2-1","Title":"Fast Goodness-of-fit Test","Description":"Goodness-of-fit test with multiplier or parametric\n bootstrap.","Published":"2012-05-25","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"fgpt","Version":"2.3","Title":"Floating Grid Permutation Technique","Description":"A permutation technique to explore and control for spatial autocorrelation. 
This package contains low level functions for performing permutations and calculating statistics as well as higher level functions. The higher level functions provide an easy-to-use interface for performing spatially restricted permutation tests and for summarizing and plotting results. ","Published":"2015-02-19","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"FGSG","Version":"1.0.2","Title":"Feature Grouping and Selection Over an Undirected Graph","Description":"Implements algorithms for feature grouping and selection over an undirected graph, solving problems such as the graph fused lasso and graph OSCAR.","Published":"2015-02-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"fgui","Version":"1.0-5","Title":"Function GUI","Description":"Rapidly create a GUI interface for a function you created\n by automatically creating widgets for arguments of the\n function. Automatically parses help routines for\n context-sensitive help to these arguments. The interface\n is essentially a wrapper to some tcltk routines to both simplify\n and facilitate GUI creation. 
More advanced tcltk routines/GUI\n objects can be incorporated into the interface for greater\n customization by more experienced users.","Published":"2012-12-27","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"FHDI","Version":"1.0","Title":"Fractional Hot Deck and Fully Efficient Fractional Imputation","Description":"Impute general multivariate missing data with fractional hot deck imputation.","Published":"2017-05-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"FHtest","Version":"1.3","Title":"Tests for Right and Interval-Censored Survival Data Based on the\nFleming-Harrington Class","Description":"Functions to compare two or more survival curves with:\n a) The Fleming-Harrington test for right-censored data based on permutations and on counting processes.\n b) An extension of the Fleming-Harrington test for interval-censored data based on a permutation distribution and on a score vector distribution.","Published":"2015-07-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"FI","Version":"1.0","Title":"Provide functions for forest inventory calculations","Description":"Provide functions for forest inventory calculations.\n Common volumetric equations (Smalian, Newton and Huber) as well\n as stacking factor and form","Published":"2013-01-25","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"FIACH","Version":"0.1.2","Title":"Retrospective Noise Control for fMRI","Description":"Useful functions for fMRI preprocessing.","Published":"2015-10-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"fICA","Version":"1.0-3","Title":"Classical, Reloaded and Adaptive FastICA Algorithms","Description":"Algorithms for classical symmetric and deflation-based FastICA, the reloaded deflation-based FastICA algorithm and an algorithm for adaptive deflation-based FastICA using multiple nonlinearities.","Published":"2015-08-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} 
{"Package":"fields","Version":"9.0","Title":"Tools for Spatial Data","Description":"For curve, surface and function fitting with an emphasis\n on splines, spatial data and spatial statistics. The major methods\n include cubic, and thin plate splines, Kriging, and compactly supported\n covariance functions for large data sets. The splines and Kriging methods are\n supported by functions that can determine the smoothing parameter\n (nugget and sill variance) and other covariance function parameters by cross\n validation and also by restricted maximum likelihood. For Kriging\n there is an easy to use function that also estimates the correlation\n scale (range parameter). A major feature is that any covariance function\n implemented in R and following a simple format can be used for\n spatial prediction. There are also many useful functions for plotting\n and working with spatial data as images. This package also contains\n an implementation of sparse matrix methods for large spatial data\n sets and currently requires the sparse matrix (spam) package. Use\n help(fields) to get started and for an overview. The fields source\n code is deliberately commented and provides useful explanations of\n numerical details as a companion to the manual pages. The commented\n source code can be viewed by expanding source code version\n and looking in the R subdirectory. The reference for fields can be generated\n by the citation function in R and has DOI . ","Published":"2017-06-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"FieldSim","Version":"3.2.1","Title":"Random Fields (and Bridges) Simulations","Description":"Tools for random fields and bridges simulations.","Published":"2015-03-03","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"fiery","Version":"0.2.2","Title":"A Lightweight and Flexible Web Framework","Description":"A very flexible framework for building server side logic in R. 
The \n framework is unopinionated when it comes to how HTTP requests and WebSocket\n messages are handled and supports all levels of app complexity; from serving\n static content to full-blown dynamic web-apps. Fiery does not hold your hand\n as much as e.g. the shiny package does, but instead sets you free to create\n your web app the way you want.","Published":"2017-01-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fifer","Version":"1.1","Title":"A Biostatisticians Toolbox for Various Activities, Including\nPlotting, Data Cleanup, and Data Analysis","Description":"Functions and datasets that can be used for data cleanup (e.g., functions for eliminating all but a few columns from a dataset, selecting a range of columns, quickly editing column names), plotting/presenting data (prism-like reproductions, spearman plots for ordinal data, making colored tables, plotting interactions with quantitative variables), and analyses common to biostatistics (e.g., random forest, multiple comparisons with chi square tests). See the package vignette for a brief introduction to many of the main functions. ","Published":"2017-02-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"fiftystater","Version":"1.0.1","Title":"Map Data to Visualize the Fifty U.S. States with Alaska and\nHawaii Insets","Description":"A simple data package to ease the process of creating choropleths\n in 'ggplot2' with all fifty U.S. states and Washington D.C.,\n including Alaska and Hawaii as insets.","Published":"2016-11-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"filehash","Version":"2.4-1","Title":"Simple Key-Value Database","Description":"Implements a simple key-value style database where character string keys\n are associated with data values that are stored on the disk. A simple interface is provided for inserting,\n retrieving, and deleting data from the database. 
Utilities are provided that allow 'filehash' databases to be\n treated much like the way environments and lists are already used in R. These utilities are intended to encourage\n interactive and exploratory analysis on large datasets. Three different file formats for representing the\n database are currently available and new formats can easily be incorporated by third parties for use in the\n 'filehash' framework.","Published":"2017-04-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"filehashSQLite","Version":"0.2-4","Title":"Simple key-value database using SQLite","Description":"Simple key-value database using SQLite as the backend.","Published":"2012-03-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"filematrix","Version":"1.1.0","Title":"File-Backed Matrix Class with Convenient Read and Write Access","Description":"Interface for working with large matrices stored in files,\n not in computer memory. Supports multiple non-character\n data types (double, integer, logical and raw) of\n various sizes (e.g. 8 and 4 byte real values).\n Access to parts of the matrix is done by indexing, \n exactly as with usual R matrices.\n Supports very large matrices.\n Tested on multi-terabyte matrices.\n Allows for more than 2^32 rows or columns.\n Allows for quick addition of extra columns to a filematrix.\n Cross-platform as the package has R code only.","Published":"2016-05-23","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"filenamer","Version":"0.2.1","Title":"Easy Management of File Names","Description":"Create descriptive file names with ease. New file names are\n automatically (but optionally) time stamped and placed in date stamped\n directories. 
Streamline your analysis pipeline with input and output file\n names that have informative tags and proper file extensions.","Published":"2016-04-04","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"fileplyr","Version":"0.2.0","Title":"Chunk Processing or Split-Apply-Combine on Delimited Files and\nDistributed Dataframes","Description":"Perform chunk processing or split-apply-combine on data in a\n delimited file (example: CSV) and Distributed Dataframes (DDF) across multiple\n cores of a single machine with low memory footprint. These functions are a\n convenient wrapper over the versatile package 'datadr'.","Published":"2017-02-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"files","Version":"0.0.1","Title":"Effective File Navigation from the R Console","Description":"Functions for printing the contents of a folder as columns in a ragged-bottom data.frame and for\n viewing the details (size, time created, time modified, etc.) of a folder's top level contents.","Published":"2016-07-28","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"filesstrings","Version":"1.0.0","Title":"Handy String and File Manipulation","Description":"Convenient functions for moving files, deleting directories, \n and a variety of string operations that facilitate manipulating file names \n and extracting information from strings.","Published":"2017-06-05","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"fImport","Version":"3000.82","Title":"Rmetrics - Economic and Financial Data Import","Description":"Environment for teaching \"Financial Engineering and\n Computational Finance\"","Published":"2013-03-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"FinAna","Version":"0.1.1","Title":"Financial Analysis and Regression Diagnostic Analysis","Description":"Functions for financial analysis and financial modeling, \n including batch graphs generation, beta calculation, \n descriptive statistics, annuity calculation, bond 
pricing \n and financial data download.","Published":"2017-02-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"financial","Version":"0.2","Title":"Solving financial problems in R","Description":"Time value of money, cash flows and other financial\n functions.","Published":"2013-12-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"FinancialInstrument","Version":"1.2.0","Title":"Financial Instrument Model Infrastructure for R","Description":"Infrastructure for defining meta-data and\n relationships for financial instruments.","Published":"2014-12-16","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"FinancialMath","Version":"0.1.1","Title":"Financial Mathematics for Actuaries","Description":"Contains financial math functions and introductory derivative functions included in the Society of Actuaries and Casualty Actuarial Society 'Financial Mathematics' exam, and some topics in the 'Models for Financial Economics' exam.","Published":"2016-12-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"FinAsym","Version":"1.0","Title":"Classifies implicit trading activity from market quotes and\ncomputes the probability of informed trading","Description":"This package accomplishes two tasks: a) it classifies\n implicit trading activity from quotes in OTC markets using the\n algorithm of Lee and Ready (1991); b) based on information for\n trade initiation, the package computes the probability of\n informed trading of Easley and O'Hara (1987).","Published":"2012-04-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"FinCal","Version":"0.6.3","Title":"Time Value of Money, Time Series Analysis and Computational\nFinance","Description":"Package for time value of money calculation, time series analysis and computational finance.","Published":"2016-07-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"finch","Version":"0.1.0","Title":"Parse Darwin Core Files","Description":"Parse and create Darwin 
Core () Simple \n and Archives. Functionality includes reading and parsing all the \n files in a Darwin Core Archive, including the datasets and metadata; \n reading and parsing simple Darwin Core files; and validating Darwin \n Core Archives.","Published":"2016-12-23","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"FinCovRegularization","Version":"1.1.0","Title":"Covariance Matrix Estimation and Regularization for Finance","Description":"Estimation and regularization for covariance matrix of asset\n returns. For covariance matrix estimation, three major types of factor\n models are included: macroeconomic factor model, fundamental factor model and\n statistical factor model. For covariance matrix regularization, four regularized\n estimators are included: banding, tapering, hard-thresholding and soft-\n thresholding. The tuning parameters of these regularized estimators are selected\n via cross-validation.","Published":"2016-04-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"FindAllRoots","Version":"1.0","Title":"Find all root(s) of the equation and Find root(s) of the\nequation by dichotomy","Description":"Find all root(s) of the equation, including complex\n roots; find root(s) of the equation by dichotomy. Besides, in\n dichotomy, more than one interval can be given at a time.","Published":"2012-07-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"FindIt","Version":"1.0","Title":"Finding Heterogeneous Treatment Effects","Description":"The heterogeneous treatment effect estimation procedure \n proposed by Imai and Ratkovic (2013). \n The proposed method is applicable, for\n example, when selecting a small number of most (or least)\n efficacious treatments from a large number of alternative\n treatments as well as when identifying subsets of the\n population who benefit (or are harmed by) a treatment of\n interest. 
The method adapts the Support Vector Machine\n classifier by placing separate LASSO constraints over the\n pre-treatment parameters and causal heterogeneity parameters of\n interest. This allows for the qualitative distinction between\n causal and other parameters, thereby making the variable\n selection suitable for the exploration of causal heterogeneity. \n\tThe package also contains the function CausalANOVA, which estimates \n\tthe average marginal interaction effects by a regularized ANOVA as \n\tproposed by Egami and Imai (2016+). ","Published":"2016-12-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"FindMinIC","Version":"1.6","Title":"Find Models with Minimum IC","Description":"Creates models from all combinations of a list of variables and sorts by minimum IC (information criterion).","Published":"2013-12-18","License":"LGPL (>= 3.0)","snapshot_date":"2017-06-23"} {"Package":"findpython","Version":"1.0.2","Title":"Python Tools to Find an Acceptable Python Binary","Description":"Package designed to find an acceptable python binary.","Published":"2017-03-15","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"findviews","Version":"0.1.3","Title":"A View Generator for Multidimensional Data","Description":"A tool to explore wide data sets, by detecting, ranking\n and plotting groups of statistically dependent columns.","Published":"2016-12-24","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"FinePop","Version":"1.4.0","Title":"Fine-Scale Population Analysis","Description":"Statistical tool set for population genetics. 
The package provides the following functions: 1) empirical Bayes estimator of Fst and other measures of genetic differentiation, 2) regression analysis of environmental effects on genetic differentiation using a bootstrap method, 3) interfaces to read and manipulate 'GENEPOP' format data files and allele/haplotype frequency format files.","Published":"2017-06-16","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"fingerprint","Version":"3.5.4","Title":"Functions for Processing Binary Fingerprint Data","Description":"An S4 class to represent binary 'fingerprints' and methods to manipulate \n fingerprint objects. Internally, the 'fingerprint' class models a binary fingerprint\n as a vector of integers, such\n that each element represents the position in the fingerprint that is set to 1.\n The bitwise logical functions in R are overridden so that they can be used directly\n with 'fingerprint' objects. A number of distance metrics are also\n available (many contributed by Michael Fadock). Fingerprints \n can be converted to Euclidean vectors (i.e., points on the unit hypersphere) and\n can also be folded using OR. Arbitrary fingerprint formats can be handled via line\n handlers. Currently handlers are provided for CDK, MOE and BCI fingerprint data.","Published":"2016-11-12","License":"LGPL","snapshot_date":"2017-06-23"} {"Package":"fingertipsR","Version":"0.1.0","Title":"Fingertips Data for Public Health","Description":"Fingertips () contains data for many indicators of public health in England. The\n underlying data is now more easily accessible by making use of the API.","Published":"2017-06-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"finiteruinprob","Version":"0.6","Title":"Computation of the Probability of Ruin Within a Finite Time\nHorizon","Description":"In the Cramér–Lundberg risk process perturbed by a Wiener\n process, this package provides approximations to the probability of\n ruin within a finite time horizon. 
Currently, there are three methods\n implemented: The first one uses saddlepoint approximation (two\n variants are provided), the second one uses importance sampling and\n the third one is based on the simulation of a dual process. This last\n method is not very accurate and only given here for completeness.","Published":"2016-12-30","License":"AGPL-3","snapshot_date":"2017-06-23"} {"Package":"finreportr","Version":"1.0.1","Title":"Financial Data from U.S. Securities and Exchange Commission","Description":"Download and display company financial data from the U.S. Securities\n and Exchange Commission's EDGAR database. It contains a suite of functions with\n web scraping and XBRL parsing capabilities that allows users to extract data from EDGAR \n in an automated and scalable manner. See \n for more information.","Published":"2016-10-31","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"FinTS","Version":"0.4-5","Title":"Companion to Tsay (2005) Analysis of Financial Time Series","Description":"R companion to Tsay (2005)\n Analysis of Financial Time Series, 2nd ed. (Wiley).\n Includes data sets, functions and script files\n required to work some of the examples. Version 0.3-x\n includes R objects for all data files used in the text\n and script files to recreate most of the analyses in\n chapters 1-3 and 9 plus parts of chapters 4 and 11.","Published":"2014-03-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"FisherEM","Version":"1.4","Title":"The Fisher-EM algorithm","Description":"The FisherEM package provides an efficient algorithm for\n the unsupervised classification of high-dimensional data. This\n FisherEM algorithm models and clusters the data in a\n discriminative and low-dimensional latent subspace. It also\n provides a low-dimensional representation of the clustered\n data. 
A sparse version of Fisher-EM algorithm is also provided.","Published":"2013-06-28","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"fisheyeR","Version":"0.9","Title":"Fisheye and Hyperbolic-space-alike Interactive Visualization\nTools in R","Description":"fisheyeR provides tools for creating Interactive Data\n Visualizations by implementing ideas from Furnas, Munzner,\n Costa and Venturini.","Published":"2010-01-28","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"FisHiCal","Version":"1.1","Title":"Iterative FISH-based Calibration of Hi-C Data","Description":"FisHiCal integrates Hi-C and FISH data, offering a modular and easy-to-use tool for chromosomal spatial analysis. ","Published":"2014-06-24","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"fishkirkko2015","Version":"1.0.0","Title":"Dataset of Measurements of Fish Species at Kirkkojarvi Lake,\nFinland","Description":"Dataset of 302 measurements of 11 fish species to accompany the\n manuscript \"Length-weight relationships of six freshwater fish species\n from lake Kirkkojarvi, Finland\".","Published":"2016-09-16","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"fishmethods","Version":"1.10-2","Title":"Fishery Science Methods and Models in R","Description":"Fishery science methods and models from published literature and contributions from colleagues.","Published":"2017-05-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fishMod","Version":"0.29","Title":"Fits Poisson-Sum-of-Gammas GLMs, Tweedie GLMs, and Delta\nLog-Normal Models","Description":"Fits models to catch and effort data. 
Single-species models are 1) delta log-normal, 2) Tweedie, or 3) Poisson-gamma (G)LMs.","Published":"2016-09-23","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"fishmove","Version":"0.3-3","Title":"Prediction of Fish Movement Parameters","Description":"Functions to predict fish movement parameters plotting leptokurtic fish dispersal kernels (see Radinger and Wolter, 2014: Patterns and predictors of fish dispersal in rivers. Fish and Fisheries. 15:456-473.)","Published":"2015-05-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"FishResp","Version":"0.1.0","Title":"An Analytical Tool for Aquatic Respirometry","Description":"Calculates metabolic rate of fish and other aquatic organisms measured\n using an intermittent-flow respirometry approach. The tool is used to\n run a set of graphical QC tests of raw respirometry data, correct it for\n background respiration and chamber effect, filter and extract target\n values of absolute and mass-specific metabolic rate. Experimental design\n should include background respiration tests and measuring of one or two\n metabolic rate traits. The package allows a user to import raw respirometry\n data obtained from 'AutoResp' (see for more\n information) or other oxygen logger software.","Published":"2017-05-08","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"FIT","Version":"0.0.4","Title":"Transcriptomic Dynamics Models in Field Conditions","Description":"Provides functionality for constructing\n statistical models of transcriptomic dynamics in field conditions.\n It further offers the function to predict expression of a gene given \n the attributes of samples and meteorological data. ","Published":"2016-11-28","License":"MPL (>= 2) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"fit.models","Version":"0.5-14","Title":"Compare Fitted Models","Description":"The fit.models function and its associated methods (coefficients,\n print, summary, plot, etc.) 
were originally provided in the robust package to\n compare robustly and classically fitted model objects. The aim of the fit.models\n package is to separate this fitted model object comparison functionality from\n the robust package and to extend it to support fitting methods (e.g., classical,\n robust, Bayesian, regularized, etc.) more generally.","Published":"2017-04-06","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"fit4NM","Version":"3.3.3","Title":"NONMEM platform","Description":"This package is for NONMEM users","Published":"2012-10-29","License":"LGPL-2.1","snapshot_date":"2017-06-23"} {"Package":"FitAR","Version":"1.94","Title":"Subset AR Model Fitting","Description":"Comprehensive model building function for identification,\n estimation and diagnostic checking for AR and subset AR models.\n Two types of subset AR models are supported. One family of\n subset AR models, denoted by ARp, is formed by taking subsets of\n the original AR coefficients and in the other, denoted by ARz,\n subsets of the partial autocorrelations are used. The main\n advantage of the ARz model is its applicability to very large\n order models.","Published":"2013-03-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"FitARMA","Version":"1.6","Title":"FitARMA: Fit ARMA or ARIMA using fast MLE algorithm","Description":"Implements a fast maximum likelihood algorithm for fitting ARMA time series. Uses S3 methods print, summary, fitted, residuals. Fast exact Gaussian ARMA simulation. ","Published":"2013-09-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fitbitScraper","Version":"0.1.8","Title":"Scrapes Data from Fitbit","Description":"Scrapes data from Fitbit . 
This does not use the official\n API, but instead uses the API that the web dashboard uses to generate the graphs\n displayed on the dashboard after login at .","Published":"2017-04-13","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"fitdc","Version":"0.0.1","Title":"Garmin FIT File Decoder","Description":"A pure R package for decoding activity files written in the FIT (\"Flexible and Interoperable Data Transfer\") format. A format that is fast becoming the standard for recording running and cycling data. Details of the FIT protocol can be found at .","Published":"2016-09-18","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"fitdistrplus","Version":"1.0-9","Title":"Help to Fit of a Parametric Distribution to Non-Censored or\nCensored Data","Description":"Extends the fitdistr() function (of the MASS package) with several functions to help the fit of a parametric distribution to non-censored or censored data. Censored data may contain left censored, right censored and interval censored values, with several lower and upper bounds. In addition to maximum likelihood estimation (MLE), the package provides moment matching (MME), quantile matching (QME) and maximum goodness-of-fit estimation (MGE) methods (available only for non-censored data). 
Weighted versions of MLE, MME and QME are available.","Published":"2017-03-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fitDRC","Version":"1.1","Title":"Fitting Density Ratio Classes","Description":"Fits Density Ratio Classes to elicited\n probability-quantile points or intervals","Published":"2013-12-31","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"fitplc","Version":"1.1-7","Title":"Fit Hydraulic Vulnerability Curves","Description":"Fits Weibull or sigmoidal models to percent loss conductivity (plc) curves as a function of plant water potential, computes confidence intervals of parameter estimates and predictions with bootstrap or parametric methods, and provides convenient plotting methods.","Published":"2017-03-02","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"FITSio","Version":"2.1-0","Title":"FITS (Flexible Image Transport System) Utilities","Description":"Utilities to read and write files in the FITS (Flexible\n Image Transport System) format, a standard format in astronomy (see\n e.g. for more information).\n Present low-level routines allow: reading, parsing, and modifying\n FITS headers; reading FITS images (multi-dimensional arrays);\n reading FITS binary and ASCII tables; and writing FITS images\n (multi-dimensional arrays). Higher-level functions allow: reading\n files composed of one or more headers and a single (perhaps\n multidimensional) image or single table; reading tables into\n data frames; generating vectors for image array axes; scaling and\n writing images as 16-bit integers. 
Known incompletenesses are\n reading random group extensions, as well as\n bit, complex, and array descriptor data types in binary tables.","Published":"2016-11-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fitTetra","Version":"1.0","Title":"fitTetra is an R package for assigning tetraploid genotype\nscores","Description":"Package fitTetra contains three functions that can be used\n to assign genotypes to a collection of tetraploid samples based\n on biallelic marker assays. Functions fitTetra (to fit several\n models for one marker from the data and select the best\n fitting) or saveMarkerModels (calls fitTetra for multiple\n markers and saves the results to files) will probably be the\n most convenient to use. Function CodomMarker offers more\n control and fits one specified model for a given marker.","Published":"2013-05-28","License":"GPL (>= 2.2)","snapshot_date":"2017-06-23"} {"Package":"fitur","Version":"0.3.0","Title":"Fit Univariate Distributions","Description":"Wrapper for computing parameters and then assigning to distribution\n function families.","Published":"2017-04-29","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"fivethirtyeight","Version":"0.2.0","Title":"Data and Code Behind the Stories and Interactives at\n'FiveThirtyEight'","Description":"An R library that provides access to the code and data sets\n published by FiveThirtyEight . Note\n that while we received guidance from editors at 538, this package is not\n officially published by 538.","Published":"2017-03-15","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"fixedTimeEvents","Version":"1.0","Title":"The Distribution of Distances Between Discrete Events in Fixed\nTime","Description":"Distribution functions and test for over-representation of short\n distances in the Liland distribution. 
Simulation functions are included for\n comparison.","Published":"2016-10-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"FixSeqMTP","Version":"0.1.2","Title":"Fixed Sequence Multiple Testing Procedures","Description":"Several generalized / directional Fixed Sequence Multiple Testing\n Procedures (FSMTPs) are developed for testing a sequence of pre-ordered\n hypotheses while controlling the FWER, FDR and Directional Error (mdFWER).\n All three FWER controlling generalized FSMTPs are designed under arbitrary\n dependence, which allow any number of acceptances. Two FDR controlling\n generalized FSMTPs are respectively designed under arbitrary dependence and\n independence, which allow more but a given number of acceptances. Two mdFWER\n controlling directional FSMTPs are respectively designed under arbitrary\n dependence and independence, which can also make directional decisions based\n on the signs of the test statistics. The main functions for each proposed\n generalized / directional FSMTPs are designed to calculate adjusted p-values\n and critical values, respectively. For users' convenience, the functions also\n provide the output option for printing decision rules.","Published":"2017-01-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fizzbuzzR","Version":"0.1.1","Title":"Fizz Buzz Implementation","Description":"An implementation of the Fizz Buzz algorithm, as defined e.g. in . \n It provides the standard algorithm with 3 replaced by Fizz and 5 replaced by Buzz, with the option of specifying start \n and end numbers, step size and the numbers being replaced by fizz and buzz, respectively. 
This package gives \n interviewers the optional answer of \"I use fizzbuzzR::fizzbuzz()\" when interviewing rather than having to write an algorithm\n themselves.","Published":"2016-10-27","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"FKF","Version":"0.1.3","Title":"Fast Kalman Filter","Description":"This is a fast and flexible implementation of the Kalman\n filter, which can deal with NAs. It is entirely written in C\n and relies fully on linear algebra subroutines contained in\n BLAS and LAPACK. Due to the speed of the filter, the fitting of\n high-dimensional linear state space models to large datasets\n becomes possible. This package also contains a plot function\n for the visualization of the state vector and graphical\n diagnostics of the residuals.","Published":"2014-02-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"flacco","Version":"1.7","Title":"Feature-Based Landscape Analysis of Continuous and Constrained\nOptimization Problems","Description":"Contains tools and features, which can be used for an Exploratory\n Landscape Analysis (ELA) of single-objective continuous optimization problems.\n Those features are able to quantify rather complex properties, such as the\n global structure, separability, etc., of the optimization problems.","Published":"2017-06-14","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"flam","Version":"3.1","Title":"Fits Piecewise Constant Models with Data-Adaptive Knots","Description":"Implements the fused lasso additive model as proposed in Petersen, A., Witten, D., and Simon, N. (2015). Fused Lasso Additive Model. 
To appear in the Journal of Computational and Graphical Statistics.","Published":"2016-11-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"flan","Version":"0.5","Title":"FLuctuation ANalysis on Mutation Models","Description":"Tools for fluctuation analysis of mutant cell counts.","Published":"2017-05-23","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"flare","Version":"1.5.0","Title":"Family of Lasso Regression","Description":"The package \"flare\" provides the implementation of a family of Lasso variants including Dantzig Selector, LAD Lasso, SQRT Lasso, Lq Lasso for estimating high-dimensional sparse linear models. We adopt the alternating direction method of multipliers and convert the original optimization problem into a sequential L1 penalized least square minimization problem, which can be efficiently solved by a linearization algorithm. A multi-stage screening approach is adopted for further acceleration. Besides the sparse linear model estimation, we also provide the extension of these Lasso variants to sparse Gaussian graphical model estimation including TIGER and CLIME using either L1 or adaptive penalty. Missing values can be tolerated for Dantzig selector and CLIME. The computation is memory-optimized using the sparse matrix output. ","Published":"2014-10-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"flars","Version":"1.0","Title":"Functional LARS","Description":"Variable selection algorithm for functional linear regression with scalar response variable and mixed scalar/functional predictors. 
","Published":"2016-06-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"flashClust","Version":"1.01-2","Title":"Implementation of optimal hierarchical clustering","Description":"Fast implementation of hierarchical clustering","Published":"2012-08-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"flexclust","Version":"1.3-4","Title":"Flexible Cluster Algorithms","Description":"The main function kcca implements a general framework for\n k-centroids cluster analysis supporting arbitrary distance\n measures and centroid computation. Further cluster methods\n include hard competitive learning, neural gas, and QT\n clustering. There are numerous visualization methods for\n cluster results (neighborhood graphs, convex cluster hulls,\n barcharts of centroids, ...), and bootstrap methods for the\n analysis of cluster stability.","Published":"2013-07-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"flexCWM","Version":"1.7","Title":"Flexible Cluster-Weighted Modeling","Description":"Allows for maximum likelihood fitting of cluster-weighted models, a class of mixtures of regression models with random covariates.","Published":"2017-02-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"flexdashboard","Version":"0.5","Title":"R Markdown Format for Flexible Dashboards","Description":"Format for converting an R Markdown document to a grid-oriented\n dashboard. The dashboard flexibly adapts the size of its components to the\n containing web page.","Published":"2017-03-13","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"FlexDir","Version":"1.0","Title":"Tools to Work with the Flexible Dirichlet Distribution","Description":"Provides tools to work with the Flexible Dirichlet\n distribution. 
The main features are an E-M algorithm for computing the maximum\n likelihood estimate of the parameter vector and a function based on conditional\n bootstrap to estimate its asymptotic variance-covariance matrix. It contains\n also functions to plot graphs, to generate random observations and to handle\n compositional data.","Published":"2017-03-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"flexmix","Version":"2.3-14","Title":"Flexible Mixture Modeling","Description":"A general framework for finite mixtures of regression models\n using the EM algorithm is implemented. The package provides the E-step\n and all data handling, while the M-step can be supplied by the user to\n easily define new models. Existing drivers implement mixtures of standard\n linear models, generalized linear models and model-based clustering.","Published":"2017-04-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"FlexParamCurve","Version":"1.5-3","Title":"Tools to Fit Flexible Parametric Curves","Description":"Model selection tools and 'selfStart' functions to fit parametric curves in 'nls', 'nlsList' and 'nlme' frameworks.","Published":"2015-08-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"flexPM","Version":"2.0","Title":"Flexible Parametric Models for Censored and Truncated Data","Description":"Estimation of flexible parametric models for survival data.","Published":"2015-11-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"flexrsurv","Version":"1.4.1","Title":"Flexible Relative Survival Analysis","Description":"Package for parametric relative survival analyses. It allows to model non-linear and \n non-proportional effects using splines (B-spline and truncated power basis). It also includes \n both non proportional and non linear effects of \n\t\t\tRemontet, L. et al. (2007) and \n\t\t\tMahboubi, A. et al. (2011) . 
","Published":"2017-05-18","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"flexsurv","Version":"1.1","Title":"Flexible Parametric Survival and Multi-State Models","Description":"Flexible parametric models for time-to-event data,\n including the Royston-Parmar spline model, generalized gamma and\n generalized F distributions. Any user-defined parametric\n distribution can be fitted, given at least an R function defining\n the probability density or hazard. There are also tools for\n fitting and predicting from fully parametric multi-state models.","Published":"2017-03-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"flexsurvcure","Version":"0.0.1","Title":"Flexible Parametric Cure Models","Description":"Flexible parametric mixture and non-mixture cure models for time-to-event data.","Published":"2017-05-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"flextable","Version":"0.2.0","Title":"Functions for Tabular Reporting","Description":"Create pretty tables for 'Microsoft Word', 'Microsoft PowerPoint' and 'HTML' documents. \n Functions are provided to let users create tables, modify and format their content. \n It extends package 'officer' that does not contain any feature for customized tabular reporting. 
\n Function tabwid() produces an 'htmlwidget' ready to be used in 'Shiny' or 'R Markdown (*.Rmd)' documents.","Published":"2017-06-07","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"flifo","Version":"0.1.4","Title":"Don't Get Stuck with Stacks in R","Description":"Functions to create and manipulate \n FIFO (First In First Out), LIFO (Last In First Out), and NINO (Not In or Never Out) \n stacks in R.","Published":"2017-01-14","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"FLightR","Version":"0.4.6","Title":"Hidden Markov Model for Solar Geolocation Archival Tags","Description":"Estimate positions of animal from data collected by solar geolocation archival tags.","Published":"2017-03-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"FLIM","Version":"1.2","Title":"Farewell’s Linear Increments Model","Description":"FLIM fits linear models for the observed increments in a longitudinal dataset, and imputes missing values according to the models.","Published":"2014-12-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"flip","Version":"2.4.3","Title":"Multivariate Permutation Tests","Description":"It implements many univariate and multivariate permutation (and rotation) tests. Allowed tests: the t one and two samples, ANOVA, linear models, Chi Squared test, rank tests (i.e. Wilcoxon, Mann-Whitney, Kruskal-Wallis), Sign test and McNemar. Test on Linear Models are performed also in presence of covariates (i.e. nuisance parameters). The permutation and the rotation methods to get the null distribution of the test statistics are available. It also implements methods for multiplicity control such as Westfall-Young minP procedure and Closed Testing (Marcus, 1976) and k-FWER. 
Moreover, it allows testing for fixed effects in mixed effects models.","Published":"2014-06-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"flippant","Version":"1.1.0","Title":"Dithionite Scramblase Assay Analysis","Description":"The lipid scrambling activity of protein extracts and purified\n scramblases is often determined using a fluorescence-based assay involving\n many manual steps. flippant offers an integrated solution for the analysis\n and publication-grade graphical presentation of dithionite scramblase\n assays, as well as a platform for review, dissemination and extension of the\n strategies it employs. The package's name derives from a play on the fact\n that lipid scrambling is also sometimes referred to as 'flipping'.","Published":"2017-02-04","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"FLLat","Version":"1.2-1","Title":"Fused Lasso Latent Feature Model","Description":"Fits the Fused Lasso Latent Feature model, which is used for modeling multi-sample aCGH data to identify regions of copy number variation (CNV). Produces a set of features that describe the patterns of CNV and a set of weights that describe the composition of each sample. Also provides functions for choosing the optimal tuning parameters and the appropriate number of features, and for estimating the false discovery rate.","Published":"2017-05-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"flock","Version":"0.7","Title":"Process Synchronization Using File Locks","Description":"Implements synchronization between R processes (spawned by using the \"parallel\" package for instance) using file locks. 
Supports both exclusive and shared locking.","Published":"2016-11-12","License":"Apache License 2.0","snapshot_date":"2017-06-23"} {"Package":"flood","Version":"0.1.1","Title":"Statistical Methods for the (Regional) Analysis of Flood\nFrequency","Description":"Includes several statistical methods for the estimation of parameters and high quantiles of river flow distributions. The focus is on regional estimation based on homogeneity assumptions and computed from multivariate observations (multiple measurement stations).\n\tFor details see Kinsvater et al. (2017) .","Published":"2017-01-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"flora","Version":"0.2.8","Title":"Tools for Interacting with the Brazilian Flora 2020","Description":"Tools to quickly compile taxonomic and distribution data from\n the Brazilian Flora 2020 at http://floradobrasil.jbrj.gov.br/.","Published":"2017-01-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"flowDiv","Version":"1.0","Title":"Cytometric Diversity Indices from 'FlowJo' Workspaces","Description":"Combines functionalities from the 'flowWorkspace', 'flowCore', 'vegan' and 'gdata' packages to import 'FlowJo' workspaces and calculates ecological diversity indices for gated populations, based on two-dimensional cytograms.","Published":"2015-10-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"flower","Version":"1.0","Title":"Tools for characterizing flowering traits","Description":"Flowering is an important life history trait of flowering plants. It has been mainly analyzed with respect to flowering onset and duration of flowering. These tools provide some functions to compute the temporal distribution of a flowering individual relative to other population members. fCV() measures the temporal variation in flowering. RIind() measures the rank order of flowering for individual plants within a population. 
SI(), SI2(), SI3(), and SI4() calculate flowering synchrony with different methods.","Published":"2015-01-28","License":"GPL (>= 1.0)","snapshot_date":"2017-06-23"} {"Package":"flowfield","Version":"1.0","Title":"Forecasts future values of a univariate time series","Description":"Flow field forecasting draws information from an interpolated flow field of the observed time series to incrementally build a forecast.","Published":"2014-03-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"flowr","Version":"0.9.10","Title":"Streamlining Design and Deployment of Complex Workflows","Description":"This framework allows you to design and implement complex\n pipelines, and deploy them on your institution's computing cluster. This has\n been built keeping in mind the needs of bioinformatics workflows. However, it is\n easily extendable to any field where a series of steps (shell commands) are to\n be executed in a (work)flow.","Published":"2016-04-19","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"flows","Version":"1.1.1","Title":"Flow Selection and Analysis","Description":"Selections on flow matrices, statistics on selected flows, map and\n graph visualisations.","Published":"2016-12-05","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"FlowScreen","Version":"1.2.2","Title":"Daily Streamflow Trend and Change Point Screening","Description":"Screens daily streamflow time series for temporal trends and \n change-points. This package has been primarily developed for assessing \n the quality of daily streamflow time series. It also contains tools for \n plotting and calculating many different streamflow metrics. The package can be \n used to produce summary screening plots showing change-points and significant \n temporal trends for high flow, low flow, and/or baseflow statistics, or it can \n be used to perform more detailed hydrological time series analyses. 
The \n package was designed for screening daily streamflow time series from Water \n Survey Canada and the United States Geological Survey but will also work \n with streamflow time series from many other agencies.","Published":"2017-06-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"FLR","Version":"1.0","Title":"Fuzzy Logic Rule Classifier","Description":"FLR algorithm for classification","Published":"2014-05-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"flsa","Version":"1.05","Title":"Path algorithm for the general Fused Lasso Signal Approximator","Description":"This package implements a path algorithm for the Fused\n Lasso Signal Approximator. For more details see the help files","Published":"2013-03-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"FLSSS","Version":"5.2","Title":"Multi-Threaded Multidimensional Fixed Size Subset Sum Solver and\nExtension to General-Purpose Knapsack Problem","Description":"A novel algorithm for solving the subset sum problem with bounded error in multidimensional real domain and its application to the general-purpose knapsack problem.","Published":"2016-12-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Flury","Version":"0.1-3","Title":"Data Sets from Flury, 1997","Description":"Contains data sets from Bernard Flury (1997) A First\n Course in Multivariate Statistics, Springer NY","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"flux","Version":"0.3-0","Title":"Flux rate calculation from dynamic closed chamber measurements","Description":"Functions for the calculation of greenhouse gas flux rates \n\tfrom closed chamber concentration measurements. The package follows \n\ta modular concept: Fluxes can be calculated in just two simple steps \n\tor in several steps if more control in details is wanted. 
Additionally \n\tplot and preparation functions as well as functions for modelling\n\tgpp and reco are provided.","Published":"2014-04-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"fma","Version":"2.3","Title":"Data Sets from \"Forecasting: Methods and Applications\" by\nMakridakis, Wheelwright & Hyndman (1998)","Description":"All data sets from \"Forecasting: methods and applications\" by\n Makridakis, Wheelwright & Hyndman (Wiley, 3rd ed., 1998).","Published":"2017-02-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fmbasics","Version":"0.2.0","Title":"Financial Market Building Blocks","Description":"Implements basic financial market objects like currencies, currency\n pairs, interest rates and interest rate indices. You will be able to use\n Benchmark instances of these objects which have been defined using their most\n common conventions or those defined by International Swap Dealer Association\n (ISDA, ) legal documentation.","Published":"2017-04-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"FMC","Version":"1.0.0","Title":"Factorial Experiments with Minimum Level Changes","Description":"Generate cost effective minimally changed run sequences \n for symmetrical as well as asymmetrical factorial \n designs.","Published":"2017-03-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fmdates","Version":"0.1.2","Title":"Financial Market Date Calculations","Description":"Implements common date calculations relevant for specifying\n the economic nature of financial market contracts that are typically defined\n by International Swap Dealer Association (ISDA, ) legal\n documentation. 
This includes methods to check whether dates are business\n days in certain locales, functions to adjust and shift dates and time length\n (or day counter) calculations.","Published":"2017-02-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"FME","Version":"1.3.5","Title":"A Flexible Modelling Environment for Inverse Modelling,\nSensitivity, Identifiability and Monte Carlo Analysis","Description":"Provides functions to help in fitting models to data, to\n perform Monte Carlo, sensitivity and identifiability analysis. It is\n intended to work with models be written as a set of differential\n equations that are solved either by an integration routine from\n package 'deSolve', or a steady-state solver from package\n 'rootSolve'. However, the methods can also be used with other types of\n functions.","Published":"2016-09-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"FMP","Version":"1.4","Title":"Filtered Monotonic Polynomial IRT Models","Description":"Estimates Filtered Monotonic Polynomial IRT Models as described by Liang and Browne (2015) . ","Published":"2016-02-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fmri","Version":"1.7-2","Title":"Analysis of fMRI Experiments","Description":"Contains R-functions to perform an fMRI analysis as described in\n Tabelow et al. (2006) ,\n Polzehl et al. 
(2010) ,\n Tabelow and Polzehl (2011) .","Published":"2016-09-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fmrs","Version":"1.0-9","Title":"Variable Selection in Finite Mixture of AFT Regression and FMR","Description":"Provides parameter estimation as well as variable selection in\n Finite Mixture of Accelerated Failure Time Regression and Finite\n Mixture of Regression Models.\n Furthermore, this package provides Ridge Regression and Elastic Net.","Published":"2016-07-02","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"fmsb","Version":"0.6.1","Title":"Functions for Medical Statistics Book with some Demographic Data","Description":"Several utility functions for the book entitled \n\t\"Practices of Medical and Health Data Analysis using R\"\n\t(Pearson Education Japan, 2007) with Japanese demographic\n\tdata and some demographic analysis related functions.","Published":"2017-05-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"FMsmsnReg","Version":"1.0","Title":"Regression Models with Finite Mixtures of Skew Heavy-Tailed\nErrors","Description":"Fits linear regression models where the random errors follow a finite mixture of Skew Heavy-Tailed Errors.","Published":"2016-03-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"FMStable","Version":"0.1-2","Title":"Finite Moment Stable Distributions","Description":"This package implements some basic procedures for dealing\n with log maximally skew stable distributions, which are also\n called finite moment log stable distributions.","Published":"2012-09-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"fmt","Version":"1.0","Title":"Variance estimation of FMT method (Fully Moderated t-statistic)","Description":"This package computes posterior residual variances to be\n used in the denominator of a moderated t-statistic from a\n linear model analysis of microarray data. 
It is an extension\n of the moderated t-statistic originally proposed by Smyth (2004).\n LOESS local regression and empirical Bayesian method are used\n to estimate gene specific prior degrees of freedom and prior\n variance based on average gene intensity level. The posterior\n residual variance in the denominator is a weighted average of\n prior and residual variance and the weights are prior degrees\n of freedom and residual variance degrees of freedom. The\n degrees of freedom of the moderated t-statistic is simply the\n sum of prior and residual variance degrees of freedom.","Published":"2012-08-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"fMultivar","Version":"3011.78","Title":"Rmetrics - Analysing and Modeling Multivariate Financial Return\nDistributions","Description":"Environment for teaching \n\t\"Financial Engineering and Computational Finance\"","Published":"2014-09-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"FNN","Version":"1.1","Title":"Fast Nearest Neighbor Search Algorithms and Applications","Description":"Cover-tree and kd-tree fast k-nearest neighbor search algorithms and related applications\n including KNN classification, regression and information measures are implemented.","Published":"2013-07-31","License":"GPL (>= 2.1)","snapshot_date":"2017-06-23"} {"Package":"fNonlinear","Version":"3010.78","Title":"Nonlinear and Chaotic Time Series Modelling","Description":"Environment for teaching \"Financial Engineering and\n Computational Finance\"","Published":"2013-06-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"foba","Version":"0.1","Title":"greedy variable selection","Description":"foba is a package that implements forward, backward, and foba sparse learning algorithms for ridge regression, described in the paper \"Adaptive Forward-Backward Greedy Algorithm for Learning Sparse Representations\".","Published":"2008-11-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} 
{"Package":"focusedMDS","Version":"1.3.3","Title":"Focused, Interactive Multidimensional Scaling","Description":"Takes a distance matrix and plots it as an \n interactive graph. One point is focused at the center of the graph,\n around which all other points are plotted in their exact distances as\n given in the distance matrix. All other non-focus points are plotted \n as best as possible in relation to one another. Double click on any \n point to choose a new focus point, and hover over points to see their\n ID labels. If color label categories are given, hover over colors in \n the legend to highlight only those points and click on colors to \n highlight multiple groups. For more information on the rationale \n and mathematical background, as well as an interactive introduction,\n see .","Published":"2017-03-31","License":"GNU General Public License","snapshot_date":"2017-06-23"} {"Package":"foghorn","Version":"0.4.4","Title":"Summarizes CRAN Check Results in the Terminal","Description":"The CRAN check results in your R terminal.","Published":"2017-05-23","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"fold","Version":"0.1.2","Title":"A Self-Describing Dataset Format and Interface","Description":"Defines a compact data format that includes metadata. \n The function fold() creates the format by converting \n from data.frame, and unfold() converts back. 
The predictability\n of the folded format supports reusability of data processing tools,\n while the presence of embedded metadata improves portability, \n interpretability, and efficiency.","Published":"2017-04-27","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"fontBitstreamVera","Version":"0.1.1","Title":"Fonts with 'Bitstream Vera Fonts' License","Description":"Provides fonts licensed under the 'Bitstream Vera Fonts'\n license for the 'fontquiver' package.","Published":"2017-02-01","License":"file LICENCE","snapshot_date":"2017-06-23"} {"Package":"fontcm","Version":"1.1","Title":"Computer Modern font for use with extrafont package","Description":"Computer Modern font for use with extrafont package","Published":"2014-03-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"fontHind","Version":"0.1.1","Title":"Additional 'ggplot2' Themes Using 'Hind' Fonts","Description":"Provides 'ggplot2' themes based on the 'Hind' fonts.\n 'Hind' is an open source 'typeface' supporting the 'Devanagari' and Latin scripts.\n Developed explicitly for use in User Interface design, the 'Hind' font family includes five styles.\n More information about the font can be found at .","Published":"2017-02-27","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"fontLiberation","Version":"0.1.0","Title":"Liberation Fonts","Description":"A placeholder for the Liberation fontset intended for the\n `fontquiver` package. This fontset covers the 12 combinations of\n families (sans, serif, mono) and faces (plain, bold, italic, bold\n italic) supported in R graphics devices.","Published":"2016-10-15","License":"file LICENSE","snapshot_date":"2017-06-23"} {"Package":"fontMPlus","Version":"0.1.1","Title":"Additional 'ggplot2' Themes Using 'M+' Fonts","Description":"Provides 'ggplot2' themes based on the 'M+' fonts.\n The 'M+' fonts are a font family under a free license. The font family provides\n multilingual glyphs. 
The fonts provide 'Kana', over 5,000 'Kanji', Basic Latin,\n Latin-1 Supplement, Latin Extended-A, and 'IPA' Extensions glyphs. Most of the Greek,\n Cyrillic, Vietnamese, and extended glyphs and symbols are included too.\n So the fonts are in conformity with ISO-8859-1, 2, 3, 4, 5, 7, 9, 10, 13, 14, 15, 16,\n Windows-1252, T1, and VISCII encoding.\n More information about the fonts can be found at .","Published":"2017-02-27","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"fontquiver","Version":"0.2.1","Title":"Set of Installed Fonts","Description":"Provides a set of fonts with permissive licences. This is\n useful when you want to avoid system fonts to make sure your\n outputs are reproducible.","Published":"2017-02-01","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"foodweb","Version":"1-0","Title":"visualisation and analysis of food web networks","Description":"Calculates twelve commonly-used, basic measures of food\n web network structure from binary, predator-prey matrices:\n species richness, connectance, total number of links, link\n density, number of trophic positions, predator:prey ratio, and\n fraction of carnivores, herbivores, top species and\n intermediate species. Employs food web language in the code\n and output, translates between a couple of common food web\n formats, can handle food webs consisting of multiple levels,\n and can automate the analysis for a large number of webs. 
The\n program produces 3-dimensional graphs of high quality that can\n be customized by the user.","Published":"2012-05-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fOptions","Version":"3022.85","Title":"Rmetrics - Pricing and Evaluating Basic Options","Description":"Environment for teaching \n\t\"Financial Engineering and Computational Finance\".\n\tPricing and Evaluating Basic Options.","Published":"2015-11-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"forams","Version":"2.0-5","Title":"Foraminifera and Community Ecology Analyses","Description":"SHE, FORAM Index and ABC Method analyses and custom plot\n functions for community data.","Published":"2015-07-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"forcats","Version":"0.2.0","Title":"Tools for Working with Categorical Variables (Factors)","Description":"Helpers for reordering factor levels (including moving\n specified levels to front, ordering by first appearance, reversing, and\n randomly shuffling), and tools for modifying factor levels (including\n collapsing rare levels into other, 'anonymising', and manually 'recoding').","Published":"2017-01-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"foreach","Version":"1.4.3","Title":"Provides Foreach Looping Construct for R","Description":"Support for the foreach looping construct. Foreach is an\n idiom that allows for iterating over elements in a collection,\n without the use of an explicit loop counter. This package in\n particular is intended to be used for its return value, rather\n than for its side effects. In that sense, it is similar to the\n standard lapply function, but doesn't require the evaluation\n of a function. 
Using foreach without side effects also\n facilitates executing the loop in parallel.","Published":"2015-10-13","License":"Apache License (== 2.0)","snapshot_date":"2017-06-23"} {"Package":"ForeCA","Version":"0.2.4","Title":"Forecastable Component Analysis","Description":"Implementation of Forecastable Component Analysis ('ForeCA'),\n including main algorithms and auxiliary function (summary, plotting, etc.) to\n apply 'ForeCA' to multivariate time series data. 'ForeCA' is a novel dimension\n reduction (DR) technique for temporally dependent signals. Contrary to other\n popular DR methods, such as 'PCA' or 'ICA', 'ForeCA' takes time dependency\n explicitly into account and searches for the most ''forecastable'' signal.\n The measure of forecastability is based on the Shannon entropy of the spectral\n density of the transformed signal.","Published":"2016-03-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"forecast","Version":"8.1","Title":"Forecasting Functions for Time Series and Linear Models","Description":"Methods and tools for displaying and analysing\n univariate time series forecasts including exponential smoothing\n via state space models and automatic ARIMA modelling.","Published":"2017-06-17","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"ForecastCombinations","Version":"1.1","Title":"Forecast Combinations","Description":"Aim: Supports the most frequently used methods to combine forecasts. Among others: Simple average, Ordinary Least Squares, Least Absolute Deviation, Constrained Least Squares, Variance-based, Best Individual model, Complete subset regressions and Information-theoretic (information criteria based).","Published":"2015-11-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"forecastHybrid","Version":"0.4.1","Title":"Convenient Functions for Ensemble Time Series Forecasts","Description":"Convenient functions for ensemble forecasts in R combining\n approaches from the 'forecast' package. 
Forecasts generated from auto.arima(), ets(),\n thetam(), nnetar(), stlm(), and tbats() can be combined with equal weights, weights\n based on in-sample errors, or CV weights. Cross validation for time series data\n and user-supplied models and forecasting functions is also supported to evaluate model accuracy.","Published":"2017-06-18","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"forecastSNSTS","Version":"1.2-0","Title":"Forecasting for Stationary and Non-Stationary Time Series","Description":"Methods to compute linear h-step ahead prediction coefficients based\n on localised and iterated Yule-Walker estimates and empirical mean squared\n and absolute prediction errors for the resulting predictors. Also, functions\n to compute autocovariances for AR(p) processes, to simulate tvARMA(p,q) time\n series, and to verify an assumption from Kley et al. (2017),\n Preprint .","Published":"2017-06-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"forecTheta","Version":"2.2","Title":"Forecasting Time Series by Theta Models","Description":"Routines for forecasting univariate time series using Theta Models. Contains several cross-validation routines. ","Published":"2016-05-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"forega","Version":"1.0","Title":"Floating-Point Genetic Algorithms with Statistical Forecast\nBased Inheritance Operator","Description":"The implemented algorithm performs a floating-point genetic algorithm search with a statistical forecasting operator that generates offspring that will probably be generated in future generations. Use of this operator enhances the search capabilities of floating-point genetic algorithms because offspring generated by the usual genetic operators are rapidly forecasted before performing more generations. 
","Published":"2016-01-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"foreign","Version":"0.8-69","Title":"Read Data Stored by 'Minitab', 'S', 'SAS', 'SPSS', 'Stata',\n'Systat', 'Weka', 'dBase', ...","Description":"Reading and writing data stored by some versions of\n\t'Epi Info', 'Minitab', 'S', 'SAS', 'SPSS', 'Stata', 'Systat', 'Weka',\n\tand for reading and writing some 'dBase' files.","Published":"2017-06-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"forensic","Version":"0.2","Title":"Statistical Methods in Forensic Genetics","Description":"The statistical evaluation of DNA mixtures, DNA profile\n match probability","Published":"2012-10-29","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"forensim","Version":"4.3","Title":"Statistical tools for the interpretation of forensic DNA\nmixtures","Description":"Statistical methods and simulation tools for the\n interpretation of forensic DNA mixtures","Published":"2013-09-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"forestFloor","Version":"1.11.1","Title":"Visualizes Random Forests with Feature Contributions","Description":"Form visualizations of high dimensional mapping structures of random forests and feature contributions.","Published":"2017-05-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"forestinventory","Version":"0.2.0","Title":"Design-Based Global and Small-Area Estimations for Multiphase\nForest Inventories","Description":"Extensive global and small-area estimation procedures for multiphase\n forest inventories under the design-based Monte-Carlo approach are provided.\n The implementation includes estimators for simple and cluster sampling\n published by Daniel Mandallaz in 2007 (),\n 2013 (, ,\n , )\n and 2016 (). 
It provides point estimates,\n their external- and design-based variances as well as confidence intervals.\n The procedures have also been optimized for the use of remote sensing data\n as auxiliary information.","Published":"2017-05-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"forestmodel","Version":"0.4.3","Title":"Forest Plots from Regression Models","Description":"Produces forest plots using 'ggplot2' from models produced by functions\n such as stats::lm(), stats::glm() and survival::coxph().","Published":"2017-04-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"forestplot","Version":"1.7","Title":"Advanced Forest Plot Using 'grid' Graphics","Description":"A forest plot that allows for\n multiple confidence intervals per row,\n custom fonts for each text element,\n custom confidence intervals,\n text mixed with expressions, and more.\n The aim is to extend the use of forest plots beyond meta-analyses.\n This is a more general version of the original 'rmeta' package's forestplot()\n function and relies heavily on the 'grid' package.","Published":"2017-01-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ForestTools","Version":"0.1.4","Title":"Analysing Remotely Sensed Forest Data","Description":"Provides tools for analyzing remotely sensed forest data, including functions for detecting treetops from canopy models, outlining tree crowns and generating spatial statistics.","Published":"2017-04-17","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"ForImp","Version":"1.0.3","Title":"Imputation of Missing Values Through a Forward Imputation\nAlgorithm","Description":"Imputation of missing values in datasets of ordinal variables through a forward imputation algorithm","Published":"2015-01-02","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"ForIT","Version":"1.0","Title":"Functions from the 2nd Italian Forest Inventory (INFC)","Description":"This package provides estimates of tree volume and 
biomass from\n Italian NFI models","Published":"2014-07-05","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"FormalSeries","Version":"1.0","Title":"Elementary arithmetic in formal series rings","Description":"Implements addition, subtraction, multiplication, and\n division in formal series rings of any number of variables\n (except division, which is limited to 3 variables). The\n \"[\" and \"[<-\" operators are also available.","Published":"2012-08-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"formatR","Version":"1.5","Title":"Format R Code Automatically","Description":"Provides a function tidy_source() to format R source code. Spaces\n and indent will be added to the code automatically, and comments will be\n preserved under certain conditions, so that R code will be more\n human-readable and tidy. There is also a Shiny app as a user interface in\n this package (see tidy_app()).","Published":"2017-04-25","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"formattable","Version":"0.2.0.1","Title":"Create 'Formattable' Data Structures","Description":"Provides functions to create formattable vectors and data frames.\n 'Formattable' vectors are printed with text formatting, and formattable\n data frames are printed with multiple types of formatting in HTML\n to improve the readability of data presented in tabular form rendered in\n web pages.","Published":"2016-08-05","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"Formula","Version":"1.2-1","Title":"Extended Model Formulas","Description":"Infrastructure for extended formulas with multiple parts on the\n right-hand side and/or multiple responses on the left-hand side.","Published":"2015-04-07","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"formula.tools","Version":"1.6.1","Title":"Utilities for Formulas, Expressions, Calls and Other Objects","Description":"These utilities facilitate the programmatic manipulations of\n formulas, expressions, calls, names, 
symbols and other objects. These \n objects all share the same structure: a left-hand side operator and \n right-hand side. This package provides methods for accessing and modifying\n the structures as well as extracting names and symbols from these objects.","Published":"2017-02-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fortunes","Version":"1.5-4","Title":"R Fortunes","Description":"A collection of fortunes from the R community.","Published":"2016-12-29","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"forward","Version":"1.0.3","Title":"Forward search","Description":"Forward search approach to robust analysis in linear and\n generalized linear regression models.","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ForwardSearch","Version":"1.0","Title":"Forward Search using asymptotic theory","Description":"Forward Search analysis of time series regressions. Implements the asymptotic theory developed in Johansen and Nielsen (2013, 2014).","Published":"2014-09-10","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"fossil","Version":"0.3.7","Title":"Palaeoecological and Palaeogeographical Analysis Tools","Description":"A set of analytical tools useful in analysing ecological\n and geographical data sets, both ancient and modern. The\n package includes functions for estimating species richness\n (Chao 1 and 2, ACE, ICE, Jackknife), shared species/beta\n diversity, species area curves and geographic distances and\n areas.","Published":"2012-03-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fourierin","Version":"0.2.2","Title":"Computes Numeric Fourier Integrals","Description":"Computes Fourier integrals of functions of one and two variables using the Fast Fourier transform. 
The Fourier transforms must be evaluated on a regular grid.","Published":"2017-05-15","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"fourPNO","Version":"1.0.4","Title":"Bayesian 4 Parameter Item Response Model","Description":"Estimate Barton & Lord's (1981) \n four parameter IRT model with lower and upper asymptotes using Bayesian\n formulation described by Culpepper (2016) .","Published":"2017-04-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"FourScores","Version":"1.0","Title":"FourScores - A game for two players","Description":"A game for two players: Whoever first gets four in a row\n (horizontal, vertical or diagonal) wins. Published by Milton\n Bradley, designed by Howard Wexler and Ned Strongin.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fpa","Version":"1.0","Title":"Spatio-Temporal Fixation Pattern Analysis","Description":"Spatio-temporal Fixation Pattern Analysis (FPA) is a new method of analyzing eye \n movement data, developed by Mr. Jinlu Cao under the supervision of Prof. Chen Hsuan-Chih at \n The Chinese University of Hong Kong, and Prof. Wang Suiping at the South China Normal \n University. The package \"fpa\" is an R implementation which makes FPA analysis much easier. \n There are four major functions in the package: ft2fp(), get_pattern(), plot_pattern(), and \n lineplot(). The function ft2fp() is the core function, which can complete all the preprocessing \n within moments. The other three functions are supportive functions which visualize the eye \n fixation patterns.","Published":"2016-08-31","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"fpc","Version":"2.1-10","Title":"Flexible Procedures for Clustering","Description":"Various methods for clustering and cluster validation.\n Fixed point clustering. Linear regression clustering. Clustering by \n merging Gaussian mixture components. 
Symmetric \n and asymmetric discriminant projections for visualisation of the \n separation of groupings. Cluster validation statistics\n for distance based clustering including corrected Rand index. \n Cluster-wise cluster stability assessment. Methods for estimation of \n the number of clusters: Calinski-Harabasz, Tibshirani and Walther's \n prediction strength, Fang and Wang's bootstrap stability. \n Gaussian/multinomial mixture fitting for mixed \n continuous/categorical variables. Variable-wise statistics for cluster\n interpretation. DBSCAN clustering. Interface functions for many \n clustering methods implemented in R, including estimating the number of\n clusters with kmeans, pam and clara. Modality diagnosis for Gaussian\n mixtures. For an overview see package?fpc.","Published":"2015-08-14","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"fpca","Version":"0.2-1","Title":"Restricted MLE for Functional Principal Components Analysis","Description":"A geometric approach to MLE for functional principal\n components","Published":"2011-02-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"FPCA2D","Version":"1.0","Title":"Two Dimensional Functional Principal Component Analysis","Description":"Compute the two dimension functional principal component scores for a series of two dimension images.","Published":"2016-09-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"fpCompare","Version":"0.2.1","Title":"Reliable Comparison of Floating Point Numbers","Description":"Comparisons of floating point numbers are problematic due to errors\n associated with the binary representation of decimal numbers.\n Despite being aware of these problems, people still use numerical methods\n that fail to account for these and other rounding errors (this pitfall is\n the first to be highlighted in Circle 1 of Burns (2012)\n [The R Inferno](http://www.burns-stat.com/pages/Tutor/R_inferno.pdf)).\n This package provides new relational operators useful 
for performing\n floating point number comparisons with a set tolerance.","Published":"2015-09-01","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"FPDclustering","Version":"1.1","Title":"PD-Clustering and Factor PD-Clustering","Description":"Probabilistic distance clustering (PD-clustering) is an iterative, distribution free, probabilistic clustering method. PD-clustering assigns units to a cluster according to their probability of membership, under the constraint that the product of the probability and the distance of each point to any cluster centre is a constant. PD-clustering is a flexible method that can be used with non-spherical clusters, outliers, or noisy data. Factor PD-clustering (FPDC) is a recently proposed factor clustering method that involves a linear transformation of variables and a cluster optimizing the PD-clustering criterion. It works on high dimensional datasets.","Published":"2016-05-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fpest","Version":"0.1.1","Title":"Estimating Finite Population Total","Description":"Given the values of sampled units and selection probabilities, \n the desraj function in the package computes the estimated value of the total\n as well as estimated variance.","Published":"2017-04-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"fPortfolio","Version":"3011.81","Title":"Rmetrics - Portfolio Selection and Optimization","Description":"Environment for teaching \n\t\"Financial Engineering and Computational Finance\".","Published":"2014-10-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fpow","Version":"0.0-2","Title":"Computing the noncentrality parameter of the noncentral F\ndistribution","Description":"Returns the noncentrality parameter of the noncentral F\n distribution if probability of type I and type II error,\n degrees of freedom of the numerator and the denominator are\n given. 
It may be useful for computing minimal detectable\n differences for general ANOVA models. This program is\n documented in the paper of A. Baharev, S. Kemeny, On the\n computation of the noncentral F and noncentral beta\n distribution; Statistics and Computing, 2008, 18 (3), 333-340.","Published":"2012-11-01","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"fpp","Version":"0.5","Title":"Data for \"Forecasting: principles and practice\"","Description":"All data sets required for the examples and exercises in\n the book \"Forecasting: principles and practice\" by Rob J\n Hyndman and George Athanasopoulos. All packages required to run\n the examples are also loaded.","Published":"2013-03-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fpp2","Version":"2.1","Title":"Data for \"Forecasting: Principles and Practice\" (2nd Edition)","Description":"All data sets required for the examples and exercises \n in the book \"Forecasting: principles and practice\" \n by Rob J Hyndman and George Athanasopoulos . 
\n All packages required to run the examples are also loaded.","Published":"2017-05-26","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"fptdApprox","Version":"2.1","Title":"Approximation of First-Passage-Time Densities for Diffusion\nProcesses","Description":"Efficient approximation of first-passage-time densities for diffusion processes based on the First-Passage-Time Location (FPTL) function.","Published":"2015-08-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"fracdiff","Version":"1.4-2","Title":"Fractionally differenced ARIMA aka ARFIMA(p,d,q) models","Description":"Maximum likelihood estimation of the parameters of a\n fractionally differenced ARIMA(p,d,q) model (Haslett and\n Raftery, Appl.Statistics, 1989).","Published":"2012-12-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fracprolif","Version":"1.0.6","Title":"Fraction Proliferation via a Quiescent Growth Model","Description":"Functions for fitting data to a quiescent growth model,\n i.e. 
a growth process that involves members of the population\n who stop dividing or propagating.","Published":"2015-04-27","License":"GPL-2 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"fractal","Version":"2.0-1","Title":"Fractal Time Series Modeling and Analysis","Description":"Stochastic fractal and deterministic chaotic time series\n analysis.","Published":"2016-05-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"fractaldim","Version":"0.8-4","Title":"Estimation of fractal dimensions","Description":"Implements various methods for estimating fractal dimension of time series and 2-dimensional data.","Published":"2014-02-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"FractalParameterEstimation","Version":"1.0","Title":"Estimation of Parameters p and q for Randomized Sierpinski\nCarpet for [p-p-p-q]-Model","Description":"The parameters p and q are estimated with the aid of a randomized Sierpinski Carpet, which is built on the [p-p-p-q]-model. Thereby, three simulations with the p-value and one with the q-value are assumed. Hence these parameters are estimated and displayed.","Published":"2015-07-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fractalrock","Version":"1.1.0","Title":"Generate fractal time series with non-normal returns\ndistribution","Description":"The basic principle driving fractal generation of time\n series is that data is generated iteratively based on\n increasing levels of resolution. The initial series is defined\n by a so-called initiator pattern and then generators are used\n to replace each segment of the initial pattern. Regular,\n repeatable patterns can be produced by using the same seed and\n generators. By using a set of generators, non-repeatable time\n series can be produced. 
This technique is the basis of the\n fractal time series process in this package.","Published":"2013-02-05","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"FRACTION","Version":"1.0","Title":"Numeric number into fraction","Description":"This package helps you turn\n numeric, data frame, and matrix objects into fraction form.","Published":"2012-07-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"fractional","Version":"0.1.3","Title":"Vulgar Fractions in R","Description":"The main function of this package allows numerical vector objects to\n be displayed with their values in vulgar fractional form. This is convenient if\n patterns can then be more easily detected. In some cases replacing the components\n of a numeric vector by a rational approximation can also be expected to remove\n some component of round-off error. The main functions form a re-implementation\n of the functions 'fractions' and 'rational' of the MASS package, but using a\n radically improved programming strategy.","Published":"2016-02-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fragilityindex","Version":"0.0.8","Title":"Fragility Index","Description":"Implements and extends the fragility index calculation for\n dichotomous results as described in Walsh, Srinathan, McAuley,\n Mrkobrada, Levine, Ribic, Molnar, Dattani, Burke, Guyatt,\n Thabane, Walter, Pogue, and Devereaux (2014)\n .","Published":"2016-11-08","License":"AGPL-3","snapshot_date":"2017-06-23"} {"Package":"Fragman","Version":"1.0.7","Title":"Fragment Analysis in R","Description":"Performs fragment analysis using genetic data coming from capillary electrophoresis machines. These are files with the FSA extension which stands for FASTA-type file, and .txt files from the Beckman CEQ 8000 system; both contain DNA fragment intensities read by machinery. 
In addition to visualization, it performs automatic scoring of SSRs (Simple Sequence Repeats; a type of genetic marker very common across the genome) and other types of PCR markers (standing for Polymerase Chain Reaction) in biparental populations such as F1, F2, BC (backcross), and diversity panels (collection of genetic diversity).","Published":"2016-09-01","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"frailtyEM","Version":"0.7.0-1","Title":"Fitting Frailty Models with the EM Algorithm","Description":"Contains functions for fitting shared frailty models with a semi-parametric\n baseline hazard with the Expectation-Maximization algorithm. Supported data formats \n include clustered failures with left truncation and recurrent events in gap-time\n or Andersen-Gill format. Several frailty distributions, such as the gamma, positive stable\n and the Power Variance Family are supported. ","Published":"2017-06-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"frailtyHL","Version":"1.1","Title":"Frailty Models via H-likelihood","Description":"The frailtyHL package implements the h-likelihood\n estimation procedures for frailty models. The package fits\n Cox's proportional hazards models with random effects (or\n frailties). For the frailty distribution, lognormal and gamma\n are allowed. 
The h-likelihood uses the Laplace approximation\n when the numerical integration is intractable, giving a\n statistically efficient estimation in frailty models.","Published":"2012-05-12","License":"Unlimited","snapshot_date":"2017-06-23"} {"Package":"frailtypack","Version":"2.12.1","Title":"General Frailty Models: Shared, Joint and Nested Frailty Models\nwith Prediction","Description":"The following several classes of frailty models using a penalized likelihood estimation on the hazard function but also a parametric estimation can be fit using this R package:\n 1) A shared frailty model (with gamma or log-normal frailty distribution) and Cox proportional hazard model. Clustered and recurrent survival times can be studied.\n 2) Additive frailty models for proportional hazard models with two correlated random effects (intercept random effect with random slope).\n 3) Nested frailty models for hierarchically clustered data (with 2 levels of clustering) by including two iid gamma random effects.\n 4) Joint frailty models in the context of the joint modelling for recurrent events with terminal event for clustered data or not. A joint frailty model for two semi-competing risks and clustered data is also proposed.\n\t\t5) Joint general frailty models in the context of the joint modelling for recurrent events with terminal event data with two independent frailty terms.\n\t\t6) Joint Nested frailty models in the context of the joint modelling for recurrent events with terminal event, for hierarchically clustered data (with two levels of clustering) by including two iid gamma random effects.\n\t\t7) Multivariate joint frailty models for two types of recurrent events and a terminal event.\n\t\t8) Joint models for longitudinal data and a terminal event.\n\t\t9) Trivariate joint models for longitudinal data, recurrent events and a terminal event. \n\t\tPrediction values are available (for a terminal event or for a new recurrent event). 
Left-truncated (not for Joint model), right-censored data, interval-censored data (only for Cox proportional hazard and shared frailty model) and strata are allowed. In each model, the random effects have the gamma or normal distribution. Time-varying covariate effects can now also be considered in Cox, shared and joint frailty models (1-5). The package includes concordance measures for Cox proportional hazards models and for shared frailty models.","Published":"2017-06-16","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"frailtySurv","Version":"1.3.2","Title":"General Semiparametric Shared Frailty Model","Description":"Simulates and fits semiparametric shared frailty models under a\n wide range of frailty distributions using a consistent and\n asymptotically-normal estimator. Currently supports: gamma, power variance\n function, log-normal, and inverse Gaussian frailty models.","Published":"2017-02-02","License":"LGPL-2","snapshot_date":"2017-06-23"} {"Package":"frair","Version":"0.5.100","Title":"Tools for Functional Response Analysis","Description":"Tools to support sensible statistics for functional response analysis.","Published":"2017-03-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Frames2","Version":"0.2.1","Title":"Estimation in Dual Frame Surveys","Description":"Point and interval estimation in dual frame surveys. In contrast\n to classic sampling theory, where only one sampling frame is considered,\n dual frame methodology assumes that there are two frames available for\n sampling and that, overall, they cover the entire target population. 
Then,\n two probability samples (one from each frame) are drawn and information\n collected is suitably combined to get estimators of the parameter of\n interest.","Published":"2015-12-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"franc","Version":"1.1.1","Title":"Detect the Language of Text","Description":"Detects the language of a text, with no external dependencies and\n support for 335 languages (all languages spoken by\n more than one million speakers). 'Franc' is a port\n of the 'JavaScript' project of the same name,\n see .","Published":"2015-11-12","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"FRAPO","Version":"0.4-1","Title":"Financial Risk Modelling and Portfolio Optimisation with R","Description":"Accompanying package of the book 'Financial Risk Modelling\n and Portfolio Optimisation with R', second edition. The data sets used in the book are contained in this package.","Published":"2016-12-12","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"FRB","Version":"1.8","Title":"Fast and Robust Bootstrap","Description":"This package performs robust inference by applying the\n Fast and Robust Bootstrap to robust estimators. Available\n methods are multivariate regression, PCA and Hotelling tests.","Published":"2013-04-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"frbs","Version":"3.1-0","Title":"Fuzzy Rule-Based Systems for Classification and Regression Tasks","Description":"An implementation of various learning algorithms based on fuzzy rule-based systems (FRBSs) for dealing with classification and regression tasks. Moreover, it allows constructing an FRBS model defined by human experts. \n FRBSs are based on the concept of fuzzy sets, proposed by Zadeh in 1965, which aims at\n representing the reasoning of human experts in a set of IF-THEN rules, to\n handle real-life problems in, e.g., control, prediction and inference, data\n mining, bioinformatics data processing, and robotics. 
FRBSs are also known\n as fuzzy inference systems and fuzzy models. During the modeling of an\n FRBS, there are two important steps that need to be conducted: structure\n identification and parameter estimation. Nowadays, there exists a wide\n variety of algorithms to generate fuzzy IF-THEN rules automatically from\n numerical data, covering both steps. Approaches that have been used in the\n past are, e.g., heuristic procedures, neuro-fuzzy techniques, clustering\n methods, genetic algorithms, least squares methods, etc. Furthermore, in this\n version we provide a universal framework named 'frbsPMML', which is adopted\n from the Predictive Model Markup Language (PMML), for representing FRBS\n models. PMML is an XML-based language to provide a standard for describing\n models produced by data mining and machine learning algorithms. Therefore,\n an FRBS model can be exported to and imported from 'frbsPMML'.\n Finally, this package aims to implement the most widely used standard\n procedures, thus offering a standard package for FRBS modeling to the R\n community.","Published":"2015-05-22","License":"GPL (>= 2) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"FRCC","Version":"1.0","Title":"Fast Regularized Canonical Correlation Analysis","Description":"This package implements the functions associated with Fast\n Regularized Canonical Correlation Analysis.","Published":"2012-10-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"FREddyPro","Version":"1.0","Title":"Post-Processing EddyPro Full Output File","Description":"Despike, ustar filtering, plotting, footprint modelling and general post-processing of a standard EddyPro full output file (LI-COR Inc 2011-2015 ).","Published":"2016-09-11","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"freeknotsplines","Version":"1.0","Title":"Free-Knot Splines","Description":"This package is for fitting free-knot splines for data\n with one independent variable and one dependent variable. 
Four\n free-knot spline algorithms are provided for the case where the\n number of knots is known in advance. A knot-search algorithm\n is provided for the case where the number of knots is not known\n in advance. In addition, methods are available to compute the\n fitted values, the residuals, and the coefficients of the\n splines, and to plot the results, along with a method to\n summarize the results.","Published":"2012-10-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"FreeSortR","Version":"1.2","Title":"Free Sorting Data Analysis","Description":"Provides tools for describing and analysing free sorting data. Main methods are computation of consensus partition and factorial analysis of the dissimilarity matrix between stimuli (using multidimensional scaling approach).","Published":"2016-05-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"freestats","Version":"0.0.3","Title":"Statistical algorithms used in common data mining course","Description":"A collection of useful statistical functions used in the Columbia\n course W4240/W4400.","Published":"2014-05-07","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"freesurfer","Version":"1.0","Title":"Wrapper Functions for 'Freesurfer'","Description":"Wrapper functions that interface with 'Freesurfer' (https://\n surfer.nmr.mgh.harvard.edu/), a powerful and commonly-used 'neuroimaging'\n software, using system commands. 
The goal is to be able to interface with\n 'Freesurfer' completely in R, where you pass R objects of class 'nifti',\n implemented by package 'oro.nifti', and the function executes a 'Freesurfer'\n command and returns an R object of class 'nifti' or necessary output.","Published":"2016-09-28","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"FREGAT","Version":"1.0.3","Title":"Family REGional Association Tests","Description":"Fast regional association analysis of quantitative traits for family-based and population studies.","Published":"2017-04-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"fRegression","Version":"3011.81","Title":"Rmetrics - Regression Based Decision and Prediction","Description":"Environment for teaching \n\t\"Financial Engineering and Computational Finance\".","Published":"2014-09-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"FREQ","Version":"1.0","Title":"FREQ: Estimate population size from capture frequencies","Description":"Real capture frequencies will be fitted to various distributions which provide the basis for estimating population sizes, their standard error, and symmetric as well as asymmetric confidence intervals. ","Published":"2013-09-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"freqdist","Version":"0.1","Title":"Frequency Distribution","Description":"Generates a frequency distribution. The frequency\n distribution includes raw frequencies, percentages in each category, and\n cumulative frequencies. The frequency distribution can be stored as a data\n frame.","Published":"2016-12-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"freqdom","Version":"1.0.4","Title":"Frequency Domain Analysis for Multivariate Time Series","Description":"Methods for the analysis of multivariate time series using frequency domain techniques. Implementations of dynamic principal components analysis (DPCA) and estimators of operators in lagged regression. 
Examples of usage in a functional data analysis setup. ","Published":"2015-09-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"freqMAP","Version":"0.2","Title":"Frequency Moving Average Plots (MAP) of Multinomial Data by a\nContinuous Covariate","Description":"A frequency moving average plot (MAP) is estimated from\n multinomial data and a continuous covariate. The frequency MAP\n is a moving average estimate of category frequencies, where\n frequency means and posterior bounds are estimated. Comparisons\n of two frequency MAPs as well as odds ratios can be plotted.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"freqparcoord","Version":"1.0.1","Title":"Novel Methods for Parallel Coordinates","Description":"New approaches to parallel coordinates plots for\n multivariate data visualization, including applications to clustering,\n outlier hunting and regression diagnostics. Includes general functions\n for multivariate nonparametric density and regression estimation, \n using parallel computation. 
","Published":"2016-01-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"FreqProf","Version":"0.0.1","Title":"Frequency Profiles Computing and Plotting","Description":"Tools for generating an informative type of line graph, the frequency profile, \n which allows single behaviors, multiple behaviors, or the specific behavioral patterns \n of individual subjects to be graphed from occurrence/nonoccurrence behavioral data.","Published":"2016-01-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"frequencies","Version":"0.1.1","Title":"Create Frequency Tables with Counts and Rates","Description":"Provides functions to create frequency tables which display both counts\n and rates.","Published":"2017-06-08","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"frequencyConnectedness","Version":"0.1.6","Title":"Spectral Decomposition of Connectedness Measures","Description":"Accompanies a paper (Barunik, Krehlik (2017) ) dedicated to spectral decomposition of connectedness measures and their interpretation. We implement all the developed estimators as well as the historical counterparts. For more information, see the help or GitHub page () for relevant information.","Published":"2017-06-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"freqweights","Version":"1.0.4","Title":"Working with Frequency Tables","Description":"The frequency of a particular data value is the number of times it\n occurs. A frequency table is a table of values with their corresponding\n frequencies. Frequency weights are integer numbers that indicate how many\n cases each case represents. 
This package provides some functions to work\n with such types of collected data.","Published":"2017-05-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"FRESA.CAD","Version":"2.2.1","Title":"Feature Selection Algorithms for Computer Aided Diagnosis","Description":"Contains a set of utilities for building and testing formula-based models (linear, logistic or COX) for Computer Aided Diagnosis/Prognosis applications. Utilities include data adjustment, univariate analysis, model building, model-validation, longitudinal analysis, reporting and visualization.","Published":"2016-09-10","License":"LGPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"FrF2","Version":"1.7-2","Title":"Fractional Factorial Designs with 2-Level Factors","Description":"Regular and non-regular Fractional Factorial 2-level designs \n can be created. Furthermore, analysis tools for Fractional\n Factorial designs with 2-level factors are offered (main\n effects and interaction plots for all factors simultaneously,\n cube plot for looking at the simultaneous effects of three\n factors, full or half normal plot, alias structure in a more\n readable format than with the built-in function alias). ","Published":"2016-09-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"FrF2.catlg128","Version":"1.2-1","Title":"Catalogues of resolution IV 128 run 2-level fractional\nfactorials up to 33 factors that do have 5-letter words","Description":"This package provides catalogues of resolution IV regular\n fractional factorial designs in 128 runs for up to 33 2-level\n factors. The catalogues are complete, excluding resolution IV\n designs without 5-letter words, because these do not add value\n for a search for clear designs. 
The previous package version\n 1.0 with complete catalogues up to 24 runs (24 runs and a\n namespace added later) can be downloaded from the author's\n website.","Published":"2013-04-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"FRK","Version":"0.1.4","Title":"Fixed Rank Kriging","Description":"Fixed Rank Kriging is a tool for spatial/spatio-temporal modelling\n and prediction with large datasets. The approach, discussed in Cressie and\n Johannesson (2008) , decomposes the field, \n and hence the covariance function, using a fixed set of n basis functions, \n where n is typically much smaller than the number of data points (or polygons) m. \n The method naturally allows for non-stationary, anisotropic covariance functions \n and the use of observations with varying support (with known error variance). The \n projected field is a key building block of the Spatial Random Effects (SRE) model, \n on which this package is based. The package FRK provides helper functions to model, \n fit, and predict using an SRE with relative ease.","Published":"2017-06-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"frm","Version":"1.2.2","Title":"Regression Analysis of Fractional Responses","Description":"Estimation and specification analysis of one- and two-part fractional regression models and calculation of partial effects.","Published":"2015-02-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"frmhet","Version":"1.1.3","Title":"Regression Analysis of Fractional Responses Under Unobserved\nHeterogeneity","Description":"Estimation and specification analysis of fractional regression models with neglected heterogeneity and/or endogenous covariates.","Published":"2016-08-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"frmpd","Version":"1.1.0","Title":"Regression Analysis of Panel Fractional Responses","Description":"Estimation of panel data regression models for fractional 
responses.","Published":"2016-08-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"frmqa","Version":"0.1-5","Title":"The Generalized Hyperbolic Distribution, Related Distributions\nand Their Applications in Finance","Description":"A collection of R and C++ functions to work with the\n generalized hyperbolic distribution, related distributions and\n their applications in financial risk management and\n quantitative analysis.","Published":"2012-11-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fromo","Version":"0.1.3","Title":"Fast Robust Moments","Description":"Fast computation of moments via 'Rcpp'. Supports computation on\n vectors and matrices, and Monoidal append of moments.","Published":"2016-04-05","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"frontier","Version":"1.1-2","Title":"Stochastic Frontier Analysis","Description":"Maximum Likelihood Estimation of\n Stochastic Frontier Production and Cost Functions.\n Two specifications are available:\n the error components specification with time-varying efficiencies\n (Battese and Coelli, 1992)\n and a model specification in which the firm effects are directly \n influenced by a number of variables (Battese and Coelli, 1995).","Published":"2017-03-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"frontiles","Version":"1.2","Title":"Partial Frontier Efficiency Analysis","Description":"It calculates the alpha-quantile and order-m efficiency score in multiple dimensions and computes several summaries and representations of the associated frontiers in 2d and 3d.","Published":"2013-11-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"frt","Version":"0.1","Title":"Full Randomization Test","Description":"Perform full randomization tests.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"FSA","Version":"0.8.13","Title":"Simple Fisheries Stock Assessment Methods","Description":"A variety of simple fish 
stock assessment methods.\n Detailed vignettes are available on the fishR website .","Published":"2017-04-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"FSAdata","Version":"0.3.6","Title":"Data to Support Fish Stock Assessment ('FSA') Package","Description":"The datasets to support the Fish Stock Assessment ('FSA') package.","Published":"2017-03-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fscaret","Version":"0.9.4.1","Title":"Automated Feature Selection from 'caret'","Description":"Automated feature selection using a variety of models\n provided by the 'caret' package.\n This work was funded by Poland-Singapore bilateral cooperation\n project no 2/3/POL-SIN/2012.","Published":"2016-10-18","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"FSelector","Version":"0.21","Title":"Selecting Attributes","Description":"Functions for selecting attributes from a given dataset. Attribute\n subset selection is the process of identifying and removing as much of the\n irrelevant and redundant information as possible.","Published":"2016-06-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"FSelectorRcpp","Version":"0.1.3","Title":"'Rcpp' Implementation of 'FSelector' Entropy-Based Feature\nSelection Algorithms with a Sparse Matrix Support","Description":"'Rcpp' (free of 'Java'/'Weka') implementation of 'FSelector' entropy-based feature selection algorithms with a sparse matrix support. It is also equipped with a parallel backend.","Published":"2017-04-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"fsia","Version":"1.1.1","Title":"Import and Analysis of OMR Data from FormScanner","Description":"Import data of tests and questionnaires from FormScanner. FormScanner is open-source software that converts scanned images to data using optical mark recognition (OMR) and it can be downloaded from . 
The spreadsheet file created by FormScanner is imported in a convenient format to perform the analyses provided by the package. These analyses include the conversion of multiple responses to binary (correct/incorrect) data, the computation of the number of correct responses for each subject or item, scoring using weights, the computation and the graphical representation of the frequencies of the responses to each item and the report of the responses of a few subjects.","Published":"2017-06-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"FSInteract","Version":"0.1.2","Title":"Fast Searches for Interactions","Description":"Performs fast detection of interactions in large-scale data using the\n method of random intersection trees introduced in\n Shah, R. D. and Meinshausen, N. (2014) . \n The algorithm finds potentially high-order interactions in high-dimensional binary\n two-class classification data, without requiring lower order interactions\n to be informative. The search is particularly fast when the matrices of\n predictors are sparse. It can also be used to perform market basket analysis\n when supplied with a single binary data matrix. Here it will find collections\n of columns which for many rows contain all 1's.","Published":"2017-04-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"fslr","Version":"2.12","Title":"Wrapper Functions for FSL ('FMRIB' Software Library) from\nFunctional MRI of the Brain ('FMRIB')","Description":"Wrapper functions that interface with 'FSL' \n , a powerful and commonly-used 'neuroimaging'\n software, using system commands. 
The goal is to be able to interface with 'FSL'\n completely in R, where you pass R objects of class 'nifti', implemented by\n package 'oro.nifti', and the function executes an 'FSL' command and returns an R\n object of class 'nifti' if desired.","Published":"2017-03-07","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"fso","Version":"2.0-1","Title":"Fuzzy Set Ordination","Description":"Fuzzy set ordination is a multivariate analysis used in\n ecology to relate the composition of samples to possible\n explanatory variables. While differing in theory and method,\n in practice, the use is similar to 'constrained ordination.'\n The package contains plotting and summary functions as well as\n the analyses.","Published":"2013-02-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fSRM","Version":"0.6.4","Title":"Social Relations Analyses with Roles (\"Family SRM\")","Description":"Social Relations Analyses with roles (\"Family SRM\") are computed,\n using a structural equation modeling approach. Groups ranging from three members\n up to an unlimited number of members are supported and the mean structure can\n be computed. Means and variances can be compared between different groups of\n families and between roles.","Published":"2016-01-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fst","Version":"0.7.2","Title":"Lightning Fast Serialization of Data Frames for R","Description":"Read and write data frames at high speed. Compress your data with fast and efficient type-optimized algorithms that allow for random access of stored data frames (columns and rows).","Published":"2017-01-12","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ftDK","Version":"1.0","Title":"A Wrapper for the API of the Danish Parliament","Description":"A wrapper for the API of the Danish Parliament. It makes it \n possible to get data from the API easily into a data frame. 
Learn more at \n .","Published":"2017-06-21","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"FTICRMS","Version":"0.8","Title":"Programs for Analyzing Fourier Transform-Ion Cyclotron Resonance\nMass Spectrometry Data","Description":"This package was developed partially with funding from the\n NIH Training Program in Biomolecular Technology\n (2-T32-GM08799).","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ftnonpar","Version":"0.1-88","Title":"Features and Strings for Nonparametric Regression","Description":"The package contains R-functions to perform the methods in\n nonparametric regression and density estimation described in:\n Davies, P. L. and Kovac, A. (2001) Local Extremes, Runs,\n Strings and Multiresolution (with discussion), Annals of\n Statistics, 29, 1-65; Davies, P. L. and Kovac, A. (2004)\n Densities, Spectral Densities and Modality, Annals of\n Statistics, 32, 1093-1136; Kovac, A.\n (2006) Smooth functions and local extreme values, Computational\n Statistics and Data Analysis (to appear); Dümbgen, L. and\n Kovac, A. (2006) Extensions of smoothing via taut strings;\n Davies, P. L. (1995) Data features, Statistica Neerlandica,\n 49, 185-245.","Published":"2012-12-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fTrading","Version":"3010.78","Title":"Technical Trading Analysis","Description":"Environment for teaching \"Financial Engineering and\n Computational Finance\".","Published":"2013-12-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"FTRLProximal","Version":"0.3.0","Title":"FTRL Proximal Implementation for Elastic Net Regression","Description":"Implementation of Follow The Regularized Leader (FTRL) Proximal algorithm, proposed by McMahan et al. 
(2013) , used for online training of large-scale regression models using a mixture of L1 and L2 regularization.","Published":"2017-01-03","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"fts","Version":"0.9.9","Title":"R interface to tslib (a time series library in C++)","Description":"Fast operations for time series objects.","Published":"2014-04-04","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ftsa","Version":"4.8","Title":"Functional Time Series Analysis","Description":"Functions for visualizing, modeling, forecasting and hypothesis testing of functional time series.","Published":"2017-04-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ftsspec","Version":"1.0.0","Title":"Spectral Density Estimation and Comparison for Functional Time\nSeries","Description":"Functions for estimating spectral density operator of functional\n time series (FTS) and comparing the spectral density operator of two\n functional time series, in a way that allows detection of differences of\n the spectral density operator in frequencies and along the curve length.","Published":"2015-09-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"fueleconomy","Version":"0.1","Title":"EPA fuel economy data","Description":"Fuel economy data from the EPA, 1985-2015, conveniently\n packaged for consumption by R users.","Published":"2014-07-22","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"fugeR","Version":"0.1.2","Title":"FUzzy GEnetic, a machine learning algorithm to construct\nprediction model based on fuzzy logic","Description":"This is an evolutionary algorithm for fuzzy systems: a\n genetic algorithm is used to construct a fuzzy system able to\n fit the given training data. 
This fuzzy system can then be\n used as a prediction model; it is composed of fuzzy logic rules\n that provide a good linguistic representation.","Published":"2012-08-08","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"fullfact","Version":"1.2","Title":"Full Factorial Breeding Analysis","Description":"We facilitate the analysis of full factorial mating designs with mixed-effects models. The observed data functions extract the variance explained by random and fixed effects and provide their significance. We then calculate the additive genetic, nonadditive genetic, and maternal variance components explaining the phenotype. In particular, we integrate nonnormal error structures for estimating these components for nonnormal data types. The resampled data functions are used to produce bootstrap confidence intervals, which can then be plotted using a simple function. This package will facilitate the analyses of full factorial mating designs in R, especially for the analysis of binary, proportion, and/or count data types and for the ability to incorporate additional random and fixed effects and power analyses. The paper associated with the package including worked examples is: Houde ALS, Pitcher TE (2016) .","Published":"2017-04-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fulltext","Version":"0.1.8","Title":"Full Text of 'Scholarly' Articles Across Many Data Sources","Description":"Provides a single interface to many sources of full text\n 'scholarly' data, including 'Biomed Central', Public Library of\n Science, 'Pubmed Central', 'eLife', 'F1000Research', 'PeerJ',\n 'Pensoft', 'Hindawi', 'arXiv' 'preprints', and more. 
Functionality\n is included for searching for articles, downloading full or partial\n text, downloading supplementary materials, and converting to various\n data formats used in and outside of R.","Published":"2016-07-23","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"fun","Version":"0.1-0","Title":"Use R for Fun","Description":"This is a collection of R games and other funny stuff,\n such as the classical Mine sweeper and sliding puzzles.","Published":"2011-08-12","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"funchir","Version":"0.1.4","Title":"Convenience Functions by Michael Chirico","Description":"A set of functions, some subset of which I use in every .R file I write. Examples are table2(), which adds useful functionalities to base table (sorting, built-in proportion argument, etc.); lyx.xtable(), which converts xtable() output to a format more easily copy-pasted into LyX; pdf2(), which writes a plot to file while also displaying it in the RStudio plot window; and abbr_to_colClass(), which is a much more concise way of feeding many types to a colClass argument in a data reader.","Published":"2017-03-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"FunChisq","Version":"2.4.3","Title":"Chi-Square and Exact Tests for Non-Parametric Functional\nDependencies","Description":"Statistical hypothesis testing methods for\n non-parametric functional dependencies using asymptotic\n chi-square or exact distributions. Functional chi-squares are\n asymmetric and functionally optimal, unique from other related\n statistics. Tests in this package reveal evidence for causality \n based on the causality-by-functionality principle. They include\n asymptotic functional chi-square tests, an exact functional\n test, a comparative functional chi-square test, and also a\n comparative chi-square test. 
The normalized non-constant\n functional chi-square test was used by Best Performer\n NMSUSongLab in HPN-DREAM (DREAM8) Breast Cancer Network\n Inference Challenges. For continuous data, these tests offer an\n advantage over regression analysis when a parametric functional\n form cannot be assumed; for categorical data, they provide a\n novel means to assess directional dependencies not possible with\n symmetrical Pearson's chi-square or Fisher's exact tests.","Published":"2017-05-02","License":"LGPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"FunCluster","Version":"1.09","Title":"Functional Profiling of Microarray Expression Data","Description":"FunCluster performs a functional analysis of microarray\n expression data based on Gene Ontology & KEGG functional\n annotations. From expression data and functional annotations\n FunCluster builds classes of putatively co-regulated biological\n processes through a specially designed clustering procedure.","Published":"2012-07-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Funclustering","Version":"1.0.1","Title":"A package for functional data clustering","Description":"This package proposes a model-based clustering algorithm for\n multivariate functional data. The parametric mixture model, based on the\n assumption of normality of the principal components resulting from a\n multivariate functional PCA, is estimated by an EM-like algorithm. 
The main\n advantage of the proposed algorithm is its ability to take into account the\n dependence among curves.","Published":"2014-01-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"FuncMap","Version":"1.0.8","Title":"Hive Plots of R Package Function Calls","Description":"Analyzes the function calls in an R package and creates a hive plot of the calls, dividing them among functions that only make outgoing calls (sources), functions that have only incoming calls (sinks), and those that have both incoming calls and make outgoing calls (managers). Function calls can be mapped by their absolute numbers, their normalized absolute numbers, or their rank. FuncMap should be useful for comparing packages at a high level for their overall design. Plus, it's just plain fun. The hive plot concept was developed by Martin Krzywinski (www.hiveplot.com) and inspired this package. Note: this package is maintained for historical reasons. HiveR is a full package for creating hive plots.","Published":"2015-07-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"functional","Version":"0.6","Title":"Curry, Compose, and other higher-order functions","Description":"Curry, Compose, and other higher-order functions","Published":"2014-07-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"FunctionalNetworks","Version":"1.0.0","Title":"An algorithm for gene and gene set network inference","Description":"R package providing functions to perform gene and gene set network inference.","Published":"2014-07-02","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"functools","Version":"0.2.0","Title":"Functional Programming in R","Description":"Extends functional programming in R by\n providing support to the usual higher order functional\n suspects (Map, Reduce, Filter, etc.).","Published":"2015-09-02","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"funcy","Version":"0.8.6","Title":"Functional Clustering 
Algorithms","Description":"Unified framework to cluster functional data according to one of\n seven models. All models are based on the projection of the curves onto a basis.\n The main function funcit() calls wrapper functions for the existing algorithms,\n so that input parameters are the same. A list is returned with each entry\n representing the same or extended output for the corresponding method. Method\n specific as well as general visualization tools are available.","Published":"2017-02-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"funData","Version":"1.1","Title":"An S4 Class for Functional Data","Description":"S4 classes for univariate and multivariate functional data with\n utility functions.","Published":"2017-06-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"funFEM","Version":"1.1","Title":"Clustering in the Discriminative Functional Subspace","Description":"The funFEM algorithm (Bouveyron et al., 2014) allows clustering of functional data by modeling the curves within a common and discriminative functional subspace.","Published":"2015-03-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"fungible","Version":"1.5","Title":"Fungible Coefficients and Monte Carlo Functions","Description":"Computes fungible coefficients and Monte Carlo data. \n Underlying theory for these functions is described in the following publications:\n Waller, N. (2008). Fungible Weights in Multiple Regression. Psychometrika, 73(4), 691-703, . \n Waller, N. & Jones, J. (2009). Locating the Extrema of Fungible Regression Weights. \n Psychometrika, 74(4), 589-602, .\n Waller, N. G. (2016). Fungible Correlation Matrices: \n A Method for Generating Nonsingular, Singular, and Improper Correlation Matrices for \n Monte Carlo Research. Multivariate Behavioral Research, 51(4), 554-568, . \n Jones, J. A. & Waller, N. G. (2015). 
The normal-theory and asymptotic distribution-free (ADF) \n covariance matrix of standardized regression coefficients: theoretical extensions \n and finite sample behavior. Psychometrika, 80, 365-378, . ","Published":"2016-11-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"funHDDC","Version":"1.0","Title":"Model-based clustering in group-specific functional subspaces","Description":"The package provides the funHDDC algorithm (Bouveyron & Jacques, 2011) which allows clustering of functional data by modeling each group within a specific functional subspace. ","Published":"2014-09-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"funModeling","Version":"1.6.4","Title":"Exploratory Data Analysis and Data Preparation Tool-Box Book","Description":"Around 10% of almost any predictive modeling project is spent on predictive modeling; 'funModeling' and the book Data Science Live Book () are intended to cover the remaining 90%: data preparation, profiling, selecting best variables 'dataViz', assessing model performance and other functions.","Published":"2017-06-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"funr","Version":"0.3.2","Title":"Simple Utility Providing Terminal Access to all R Functions","Description":"A small utility which wraps Rscript and provides access to all R\n functions from the shell.","Published":"2016-04-19","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"funrar","Version":"1.1.0","Title":"Functional Rarity Indices Computation","Description":"Computes functional rarity indices as proposed by Violle et al.\n (2017) . Various indices can be computed\n using both regional and local information. 
Functional Rarity combines both\n the functional aspect of rarity as well as the extent aspect of rarity.","Published":"2017-05-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"funreg","Version":"1.2","Title":"Functional Regression for Irregularly Timed Data","Description":"Performs functional regression, and some related\n approaches, for intensive longitudinal data (see the book by Walls & Schafer, \n 2006, Models for Intensive Longitudinal Data, Oxford) when such data is not\n necessarily observed on an equally spaced grid of times. The\n approach generally follows the ideas of Goldsmith, Bobb, Crainiceanu,\n Caffo, and Reich (2011) and the approach taken in their sample code, but\n with some modifications to make it more feasible to use with long rather\n than wide, non-rectangular longitudinal datasets with unequal and\n potentially random measurement times. It also allows easy plotting of the\n correlation between the smoothed covariate and the outcome as a function of\n time, which can add additional insights on how to interpret a functional\n regression. Additionally, it also provides several permutation tests for\n the significance of the functional predictor. The heuristic interpretation\n of ``time'' is used to describe the index of the functional predictor, but\n the same methods can equally be used for another unidimensional continuous\n index, such as space along a north-south axis. The development of this\n package was part of a research project supported by Award R03 CA171809-01\n from the National Cancer Institute and Award P50 DA010075 from the National\n Institute on Drug Abuse. 
The content is solely the responsibility of the\n authors and does not necessarily represent the official views of the\n National Institute on Drug Abuse, the National Cancer Institute, or the\n National Institutes of Health.","Published":"2016-08-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"FUNTA","Version":"0.1.0","Title":"Functional Tangential Angle Pseudo-Depth","Description":"Computes the functional tangential angle pseudo-depth and its robustified version from the paper by Kuhnt and Rehage (2016). See Kuhnt, S.; Rehage, A. (2016): An angle-based multivariate functional pseudo-depth for shape outlier detection, JMVA 146, 325-340, for details. ","Published":"2016-04-01","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"funtimes","Version":"4.0","Title":"Functions for Time Series Analysis","Description":"Includes non-parametric estimators and tests for time series analysis. The functions are to test for presence of possibly non-monotonic trends and for synchronism of trends in multiple time series, using modern bootstrap techniques and robust non-parametric difference-based estimators.","Published":"2017-02-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"furniture","Version":"1.5.0","Title":"Furniture for Quantitative Scientists","Description":"Contains two main functions (i.e., two pieces of furniture):\n table1() which produces a well-formatted table of descriptives common as Table 1\n in research articles and washer() which is helpful in cleaning up your data. \n These furniture functions are designed to simplify common tasks in \n quantitative analysis.","Published":"2017-03-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"FusedPCA","Version":"0.2","Title":"Community Detection via Fused Principal Component Analysis","Description":"Efficient procedures for community detection in network studies, especially for sparse networks. 
The algorithms impose penalties on the differences of the coordinates which represent the community labels of the nodes.","Published":"2013-11-10","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"futile.any","Version":"1.3.2","Title":"A Tiny Utility Providing Polymorphic Operations","Description":"This utility package provides polymorphism over common operations and is now subsumed by lambda.tools.","Published":"2015-07-07","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"futile.logger","Version":"1.4.3","Title":"A Logging Utility for R","Description":"Provides a simple yet powerful logging utility. Based loosely on\n log4j, futile.logger takes advantage of R idioms to make logging a\n convenient and easy to use replacement for cat and print statements.","Published":"2016-07-10","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"futile.matrix","Version":"1.2.6","Title":"Random Matrix Generation and Manipulation","Description":"A collection of functions for manipulating matrices and generating\n ensembles of random matrices. Used primarily to identify the cutoff point\n for the noise portion of the eigenvalue spectrum.","Published":"2016-07-10","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"futile.options","Version":"1.0.0","Title":"Futile options management","Description":"A scoped options management framework","Published":"2010-04-06","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"futile.paradigm","Version":"2.0.4","Title":"A framework for working in a functional programming paradigm in\nR","Description":"Provides dispatching implementations suitable for\n functional programming paradigms. 
The framework provides a\n mechanism for attaching guards to functions similar to Erlang,\n while also providing the safety of assertions reminiscent of\n Eiffel.","Published":"2012-02-06","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"future","Version":"1.5.0","Title":"Unified Parallel and Distributed Processing in R for Everyone","Description":"The purpose of this package is to provide a lightweight and\n unified Future API for sequential and parallel processing of R\n expressions via futures. The simplest way to evaluate an expression\n in parallel is to use `x %<-% { expression }` with `plan(multiprocess)`.\n This package implements sequential, multicore, multisession, and\n cluster futures. With these, R expressions can be evaluated on the\n local machine, in parallel on a set of local machines, or distributed\n on a mix of local and remote machines.\n Extensions to this package implement additional backends for\n processing futures via compute cluster schedulers etc.\n Because of its unified API, there is no need to modify code in order to\n switch from sequential on the local machine to, say, distributed\n processing on a remote compute cluster.\n Another strength of this package is that global variables and functions\n are automatically identified and exported as needed, making it\n straightforward to tweak existing code to make use of futures.","Published":"2017-05-26","License":"LGPL (>= 2.1)","snapshot_date":"2017-06-23"} {"Package":"future.BatchJobs","Version":"0.14.1","Title":"A Future API for Parallel and Distributed Processing using\nBatchJobs","Description":"Implements the Future API on top of the 'BatchJobs' package.\n This allows you to process futures, as defined by the 'future' package,\n in parallel out of the box, not only on your local machine or ad-hoc\n cluster of machines, but also via high-performance compute ('HPC') job\n schedulers such as 'LSF', 'OpenLava', 'Slurm', 'SGE', and 'TORQUE' / 'PBS',\n e.g. 
'y <- future_lapply(files, FUN = process)'.","Published":"2017-05-31","License":"LGPL (>= 2.1)","snapshot_date":"2017-06-23"} {"Package":"future.batchtools","Version":"0.5.0","Title":"A Future API for Parallel and Distributed Processing using\n'batchtools'","Description":"Implements the Future API on top of the 'batchtools' package.\n This allows you to process futures, as defined by the 'future' package,\n in parallel out of the box, not only on your local machine or ad-hoc\n cluster of machines, but also via high-performance compute ('HPC') job\n schedulers such as 'LSF', 'OpenLava', 'Slurm', 'SGE', and 'TORQUE' / 'PBS',\n e.g. 'y <- future_lapply(files, FUN = process)'.","Published":"2017-06-03","License":"LGPL (>= 2.1)","snapshot_date":"2017-06-23"} {"Package":"futureheatwaves","Version":"1.0.3","Title":"Find, Characterize, and Explore Extreme Events in Climate\nProjections","Description":"Inputs a directory of climate projection files and, for each,\n identifies and characterizes heat waves for specified study locations. The\n definition used to identify heat waves can be customized. Heat wave\n characterizations include several metrics of heat wave length, intensity,\n and timing in the year. 
The heat waves that are identified can be\n explored using a function to apply user-created functions across all\n generated heat wave files. This work was supported in part by grants from\n the National Institute of Environmental Health Sciences (R00ES022631), the\n National Science Foundation (1331399), and the Colorado State University\n Vice President for Research.","Published":"2016-12-31","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"fuzzr","Version":"0.2.1","Title":"Fuzz-Test R Functions","Description":"Test function arguments with a wide array of inputs, and produce\n reports summarizing messages, warnings, errors, and returned values.","Published":"2017-05-01","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"Fuzzy.p.value","Version":"1.0","Title":"Computing Fuzzy p-Value","Description":"The main goal of this package is drawing the membership function of the fuzzy p-value which is defined as a fuzzy set on the unit interval for the three following problems: (1) testing crisp hypotheses based on fuzzy data, (2) testing fuzzy hypotheses based on crisp data, and (3) testing fuzzy hypotheses based on fuzzy data. In all cases, the fuzziness of the data and/or the fuzziness of the boundary of the null fuzzy hypothesis is transported via the p-value function and produces the fuzzy p-value. If the p-value is fuzzy, it is more appropriate to consider a fuzzy significance level for the problem. 
Therefore, the comparison of the fuzzy p-value and the fuzzy significance level is evaluated by a fuzzy ranking method in this package.","Published":"2016-08-07","License":"LGPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"FuzzyAHP","Version":"0.9.0","Title":"(Fuzzy) AHP Calculation","Description":"Calculation of AHP (Analytic Hierarchy Process -\n )\n with classic and fuzzy weights based on Saaty's pairwise\n comparison method for determination of weights.","Published":"2017-03-09","License":"LGPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"fuzzyFDR","Version":"1.0","Title":"Exact calculation of fuzzy decision rules for multiple testing","Description":"Exact calculation of fuzzy decision rules for multiple\n\ttesting. Choose to control FDR (false discovery rate) using the\n\tBenjamini and Hochberg method, or FWER (family wise error rate)\n\tusing the Bonferroni method. Kulinskaya and Lewin (2007).","Published":"2007-10-16","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"fuzzyforest","Version":"1.0.3","Title":"Fuzzy Forests","Description":"Fuzzy forests, a new algorithm based on random forests,\n is designed to reduce the bias seen in random forest feature selection\n caused by the presence of correlated features. 
Fuzzy forests uses\n recursive feature elimination random forests to select\n features from separate blocks of correlated features where the\n correlation within each block of features is high\n and the correlation between blocks of features is low.\n One final random forest is fit using the surviving features.\n This package fits random forests using the 'randomForest' package and\n allows for easy use of 'WGCNA' to split features into distinct blocks.","Published":"2017-06-11","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"fuzzyjoin","Version":"0.1.3","Title":"Join Tables Together on Inexact Matching","Description":"Join tables together based not on whether columns\n match exactly, but whether they are similar by some comparison.\n Implementations include string distance and regular expression\n matching.","Published":"2017-06-19","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"FuzzyLP","Version":"0.1-3","Title":"Fuzzy Linear Programming","Description":"Methods to solve Fuzzy Linear Programming Problems with \n\tfuzzy constraints (by Verdegay, Zimmermann, Werner, Tanaka), \n\tfuzzy costs (multiobjective, interval arithmetic, stratified piecewise reduction,\n\tdefuzzification-based), and fuzzy technological matrix.","Published":"2015-07-08","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"FuzzyMCDM","Version":"1.1","Title":"Multi-Criteria Decision Making Methods for Fuzzy Data","Description":"Implementation of several MCDM methods for fuzzy data (triangular\n fuzzy numbers) for decision making problems. The methods that are implemented in\n this package are Fuzzy TOPSIS (with two normalization procedures), Fuzzy VIKOR,\n Fuzzy Multi-MOORA and Fuzzy WASPAS. 
In addition, function MetaRanking() calculates\n a new ranking from the sum of the rankings calculated, as well as an aggregated ranking.","Published":"2016-09-22","License":"LGPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"FuzzyNumbers","Version":"0.4-1","Title":"Tools to Deal with Fuzzy Numbers","Description":"S4 classes and methods\n to deal with fuzzy numbers. With them you can compute any arithmetic\n operations (e.g. by using the Zadeh extension principle),\n perform approximation of arbitrary FNs by trapezoidal and piecewise\n linear FNs, prepare plots of FNs for publications, calculate \n possibility and necessity values for comparisons, etc.","Published":"2015-02-26","License":"LGPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"FuzzyNumbers.Ext.2","Version":"1.0","Title":"Apply Two Fuzzy Numbers on a Monotone Function","Description":"One can easily draw the membership function of f(x,y) by package 'FuzzyNumbers.Ext.2' in which f(.,.) is assumed to be monotone and x and y are two fuzzy numbers. This work is possible using function f2apply() which is an extension of function fapply() from Package 'FuzzyNumbers' for two-variable monotone functions.","Published":"2017-03-18","License":"LGPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"FuzzyR","Version":"2.1","Title":"Fuzzy Logic Toolkit for R","Description":"Design and simulate fuzzy logic systems using Type 1 Fuzzy Logic.\n This toolkit includes a graphical user interface (GUI) and an adaptive neuro-\n fuzzy inference system (ANFIS). This toolkit is a continuation of the previous\n package ('FuzzyToolkitUoN'). 
Produced by the Intelligent Modelling & Analysis\n Group, University of Nottingham.","Published":"2017-03-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fuzzyRankTests","Version":"0.3-10","Title":"Fuzzy Rank Tests and Confidence Intervals","Description":"Does fuzzy tests and confidence intervals (following Geyer\n and Meeden, Statistical Science, 2005, )\n for sign test and Wilcoxon signed rank and rank sum tests.","Published":"2017-03-15","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"FuzzyStatProb","Version":"2.0.2","Title":"Fuzzy Stationary Probabilities from a Sequence of Observations\nof an Unknown Markov Chain","Description":"An implementation of a method for computing fuzzy numbers representing stationary probabilities of an unknown Markov chain,\n from which a sequence of observations along time has been obtained. The algorithm is based on the proposal presented by James Buckley \n in his book on Fuzzy probabilities (Springer, 2005), chapter 6. Package 'FuzzyNumbers' is used to represent the output probabilities.","Published":"2016-07-30","License":"LGPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"FuzzyStatTra","Version":"1.0","Title":"Statistical Methods for Trapezoidal Fuzzy Numbers","Description":"The aim of the package is to provide some basic functions\n for doing statistics with trapezoidal fuzzy numbers. In particular,\n the package contains several functions for simulating trapezoidal \n fuzzy numbers, as well as for calculating some central tendency \n measures (mean and two types of median), some scale measures \n (variance, ADD, MDD, Sn, Qn, Tn and some M-estimators) and \n one diversity index and one inequality index. 
Moreover, \n functions for calculating the 1-norm distance, the mid/spr \n distance and the (phi,theta)-wabl/ldev/rdev distance between \n fuzzy numbers are included, and a function to calculate the \n value phi-wabl given a sample of trapezoidal fuzzy numbers.","Published":"2017-02-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"FuzzyToolkitUoN","Version":"1.0","Title":"Type 1 Fuzzy Logic Toolkit","Description":"A custom framework for working with Type 1 Fuzzy Logic,\n produced by the University of Nottingham IMA Group.","Published":"2013-04-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"fuzzywuzzyR","Version":"1.0.0","Title":"Fuzzy String Matching","Description":"Fuzzy string matching implementation of the 'fuzzywuzzy' 'python' package. It uses the Levenshtein Distance to calculate the differences between sequences. ","Published":"2017-04-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"fwdmsa","Version":"0.2","Title":"Forward search for Mokken scale analysis","Description":"fwdmsa performs the Forward Search for Mokken scale\n analysis. 
It detects outliers and produces several types of\n diagnostic plots.","Published":"2011-07-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"FWDselect","Version":"2.1.0","Title":"Selecting Variables in Regression Models","Description":"A simple method\n to select the best model or best subset of variables using\n different types of data (binary, Gaussian or Poisson) and\n applying it in different contexts (parametric or non-parametric).","Published":"2015-12-19","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"fwi.fbp","Version":"1.7","Title":"Fire Weather Index System and Fire Behaviour Prediction System\nCalculations","Description":"Provides three functions to calculate the outputs of the two main components of the Canadian Forest Fire Danger Rating System (CFFDRS): the Fire Weather Index (FWI) System and the Fire Behaviour Prediction (FBP) System.","Published":"2016-01-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"fwsim","Version":"0.3.3","Title":"Fisher-Wright Population Simulation","Description":"Simulates a population under the Fisher-Wright model (fixed or stochastic population size) with a one-step neutral mutation process (stepwise mutation model, logistic mutation model and exponential mutation model supported). The stochastic population sizes are random Poisson distributed and different kinds of population growth are supported. For the stepwise mutation model, it is possible to specify locus and direction specific mutation rate (in terms of upwards and downwards mutation rate). Intermediate generations can be saved in order to study e.g. 
drift.","Published":"2015-01-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"fxregime","Version":"1.0-3","Title":"Exchange Rate Regime Analysis","Description":"Exchange rate regression and structural change tools\n for estimating, testing, dating, and monitoring\n\t (de facto) exchange rate regimes.","Published":"2013-12-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"g.data","Version":"2.4","Title":"Delayed-Data Packages","Description":"Create and maintain delayed-data packages (ddp's). Data stored in\n a ddp are available on demand, but do not take up memory until requested.\n You attach a ddp with g.data.attach(), then read from it and assign to it in\n a manner similar to S-PLUS, except that you must run g.data.save() to\n actually commit to disk.","Published":"2013-12-17","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"G1DBN","Version":"3.1.1","Title":"A package performing Dynamic Bayesian Network inference","Description":"G1DBN performs DBN inference using 1st order conditional\n dependencies.","Published":"2013-09-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"g2f","Version":"0.1","Title":"Find and Fill Gaps in Metabolic Networks","Description":"For a given metabolic network, this package finds the gaps (metabolites not produced or not consumed in any other reaction), and fills them from the stoichiometric reactions of a reference metabolic reconstruction using a weighting function. 
The option to download the full set of gene-associated stoichiometric reactions for a specific organism from the KEGG database is also available.","Published":"2016-10-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"G2Sd","Version":"2.1.5","Title":"Grain-Size Statistics and Description of Sediment","Description":"Full descriptive statistics, physical description of sediment,\n metric or phi sieves.","Published":"2015-12-07","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"GA","Version":"3.0.2","Title":"Genetic Algorithms","Description":"An R package for optimisation using genetic algorithms. The package provides a flexible general-purpose set of tools for implementing genetic algorithm search in both the continuous and discrete case, whether constrained or not. Users can easily define their own objective function depending on the problem at hand. Several genetic operators are available and can be combined to explore the best settings for the current task. Furthermore, users can define new genetic operators and easily evaluate their performances. Local search using general-purpose optimisation algorithms can be applied stochastically to exploit interesting regions. 
GAs can be run sequentially or in parallel, using an explicit master-slave parallelisation or a coarse-grain islands approach.","Published":"2016-06-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GA4Stratification","Version":"1.0","Title":"A genetic algorithm approach to determine stratum boundaries and\nsample sizes of each stratum in stratified sampling","Description":"This is a Genetic Algorithm package for the determination\n of the stratum boundaries and sample sizes of each stratum in\n stratified sampling","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"GAabbreviate","Version":"1.3","Title":"Abbreviating Items Measures using Genetic Algorithms","Description":"Scale abbreviation using Genetic Algorithms that maximally capture the variance in the original data.","Published":"2016-06-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GABi","Version":"0.1","Title":"Framework for Generalized Subspace Pattern Mining","Description":"Generalized subspace pattern mining in data arrays, using a genetic algorithm framework. ","Published":"2013-09-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GAD","Version":"1.1.1","Title":"GAD: Analysis of variance from general principles","Description":"This package analyses complex ANOVA models with any\n combination of orthogonal/nested and fixed/random factors, as\n described by Underwood (1997). There are two restrictions: (i)\n data must be balanced; (ii) fixed nested factors are not\n allowed. 
Homogeneity of variances is checked using Cochran's C\n test and 'a posteriori' comparisons of means are done using\n Student-Newman-Keuls (SNK) procedure.","Published":"2012-10-29","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"GADAG","Version":"0.99.0","Title":"A Genetic Algorithm for Learning Directed Acyclic Graphs","Description":"Sparse large Directed Acyclic Graphs learning with a combination of a convex program and a tailored genetic algorithm (see Champion et al. (2017) ). ","Published":"2017-04-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"GaDiFPT","Version":"1.0","Title":"First Passage Time Simulation for Gaussian Diffusion Processes","Description":"In this package we consider Gaussian Diffusion processes and smooth thresholds. After evaluating the mean of the process to check the subthreshold regimen hypothesis, the FPT density function is reconstructed via the numerical quadrature of the integral equation in (Buonocore 1987); first passage times are also generated by the method in (Buonocore 2014) and results are compared. The timestep of the simulations can iteratively be refined. User should provide the functional form for the drift and the infinitesimal variance in the script 'userfunc.R' and for the threshold in the script 'userthresh.R'. All the parameters required by the implementation are to be set in the script 'userparam.R'. Example scripts for common drifts and thresholds are given. ","Published":"2015-02-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"GADMTools","Version":"2.1-1","Title":"Easy Use of 'GADM' Shapefiles","Description":"Manipulate, assemble, export shapefiles. 
Create choropleth maps, heatmaps, dot plots, proportional dots and more.","Published":"2017-04-29","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"gafit","Version":"0.5.1","Title":"Genetic Algorithm for Curve Fitting","Description":"A group of sample points are evaluated against a\n user-defined expression; the sample points are lists of\n parameters with values that may be substituted into that\n expression. The genetic algorithm attempts to make the result\n of the expression as low as possible (usually this would be the\n sum of residuals squared).","Published":"2016-12-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"gaiah","Version":"0.0.2","Title":"Genetic and Isotopic Assignment Accounting for Habitat\nSuitability","Description":"Tools for using genetic markers, stable isotope data, and habitat\n suitability data to calculate posterior probabilities of breeding origin of\n migrating birds.","Published":"2017-03-02","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"gains","Version":"1.1","Title":"Gains Table Package","Description":"This package constructs gains tables and lift charts for prediction algorithms. Gains tables and lift charts are commonly used in direct marketing applications.","Published":"2013-07-25","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"GAIPE","Version":"1.0","Title":"Graphical Extension with Accuracy in Parameter Estimation\n(GAIPE)","Description":"GAIPE implements graphical representation of accuracy in\n parameter estimation (AIPE) on RMSEA for sample size planning\n in structural equation modeling. 
Sample sizes suggested by\n RMSEA with the AIPE method and power analysis approach can also be\n obtained separately using the provided functions.","Published":"2013-04-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"galts","Version":"1.3","Title":"Genetic algorithms and C-steps based LTS (Least Trimmed Squares)\nestimation","Description":"This package includes the ga.lts function that estimates\n LTS (Least Trimmed Squares) parameters using genetic algorithms\n and C-steps. ga.lts() constructs a genetic algorithm to form a\n basic subset and iterates C-steps as defined in Rousseeuw and\n van-Driessen (2006) to calculate the cost value of the LTS\n criterion. OLS (Ordinary Least Squares) regression is known to\n be sensitive to outliers. A single outlying observation can\n change the values of estimated parameters. LTS is a resistant\n estimator even when the number of outliers is up to half of the\n data. This package is for estimating the LTS parameters with\n lower bias and variance in a reasonable time. 
Version 1.3\n included the function medmad for fast outlier detection in\n linear regression.","Published":"2013-02-07","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"gam","Version":"1.14-4","Title":"Generalized Additive Models","Description":"Functions for fitting and working with generalized\n\t\tadditive models, as described in chapter 7 of \"Statistical Models in\n\t\tS\" (Chambers and Hastie (eds), 1991), and \"Generalized Additive\n\t\tModels\" (Hastie and Tibshirani, 1990).","Published":"2017-04-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"gamair","Version":"1.0-0","Title":"Data for \"GAMs: An Introduction with R\"","Description":"Data sets and scripts used in the book \"Generalized\n Additive Models: An Introduction with R\", Wood (2006) CRC.","Published":"2016-10-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gambin","Version":"1.4","Title":"Fit the GamBin Model to Species Abundance Distributions","Description":"Fits the gambin distribution to species-abundance distributions from \n ecological data. 'gambin' is short for 'gamma-binomial'. The main function is \n fitGambin, which estimates the 'alpha' parameter of the gambin distribution using \n maximum likelihood. 
Functions are also provided to generate the gambin distribution \n and for calculating likelihood statistics.","Published":"2016-06-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"GAMBoost","Version":"1.2-3","Title":"Generalized linear and additive models by likelihood based\nboosting","Description":"This package provides routines for fitting generalized\n linear and generalized additive models by likelihood based\n boosting, using penalized B-splines.","Published":"2013-05-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gamboostLSS","Version":"2.0-0","Title":"Boosting Methods for 'GAMLSS'","Description":"Boosting models for fitting generalized additive models for\n location, shape and scale ('GAMLSS') to potentially high dimensional\n data.","Published":"2017-05-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"gamboostMSM","Version":"1.1.87","Title":"Estimating multistate models using gamboost()","Description":"Provides features to use function gamboost() from package mboost for estimation of multistate models.","Published":"2014-12-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gamclass","Version":"0.56","Title":"Functions and Data for a Course on Modern Regression and\nClassification","Description":"Functions and data are provided that support a course that emphasizes statistical\n issues of inference and generalizability. 
Attention is restricted to a relatively small\n number of methods, often (misleadingly in my view) referred to as algorithms.","Published":"2015-08-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gamCopula","Version":"0.0-2","Title":"Generalized Additive Models for Bivariate Conditional Dependence\nStructures and Vine Copulas","Description":"Implementation of various inference and simulation tools to\n apply generalized additive models to bivariate dependence structures and\n non-simplified vine copulas.","Published":"2017-01-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"GAMens","Version":"1.2","Title":"Applies GAMbag, GAMrsm and GAMens Ensemble Classifiers for\nBinary Classification","Description":"Ensemble classifiers based upon generalized additive models for binary\n classification (De Bock et al. (2010)). The ensembles\n implement Bagging (Breiman (1996)), the Random Subspace Method (Ho (1998)), or\n both, and use Hastie and Tibshirani's (1990) generalized additive models (GAMs)\n as base classifiers. Once an ensemble classifier has been trained, it can\n be used for predictions on new data. A function for cross validation is also\n included.","Published":"2016-03-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"games","Version":"1.1.2","Title":"Statistical Estimation of Game-Theoretic Models","Description":"Provides estimation and analysis functions for\n strategic statistical models.","Published":"2015-02-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gamesGA","Version":"1.1.3.2","Title":"Genetic Algorithm for Sequential Symmetric Games","Description":"Finds adaptive strategies for sequential symmetric \n games using a genetic algorithm. Currently, any symmetric two by two matrix\n is allowed, and strategies can remember the history of an opponent's play\n from the previous three rounds of moves in iterated interactions between\n players. 
The genetic algorithm returns a list of adaptive strategies given\n payoffs, and the mean fitness of strategies in each generation.","Published":"2017-06-13","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"GameTheory","Version":"2.5","Title":"Cooperative Game Theory","Description":"Implementation of a common set of punctual solutions for Cooperative Game Theory.","Published":"2017-02-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GameTheoryAllocation","Version":"1.0","Title":"Tools for Calculating Allocations in Game Theory","Description":"Many situations can be modeled as game theoretic situations. Some procedures are included in this package to calculate the most important allocation rules in Game Theory: the Shapley value, the Owen value or the nucleolus, among others. First, we must define as an argument the value of the unions of the involved agents with the characteristic function. ","Published":"2016-07-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"gamlr","Version":"1.13-3","Title":"Gamma Lasso Regression","Description":"The gamma lasso algorithm provides regularization paths corresponding to a range of non-convex cost functions between L0 and L1 norms. As much as possible, usage for this package is analogous to that for the glmnet package (which does the same thing for penalization between L1 and L2 norms). 
For details see: Taddy (2015), One-Step Estimator Paths for Concave Regularization, http://arxiv.org/abs/1308.5623.","Published":"2015-08-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"gamlss","Version":"5.0-2","Title":"Generalised Additive Models for Location Scale and Shape","Description":"Functions for fitting Generalized Additive Models for Location Scale and Shape.","Published":"2017-05-25","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"gamlss.add","Version":"5.0-1","Title":"Extra Additive Terms for GAMLSS Models","Description":"Interface for extra smooth functions including tensor products, neural networks and decision trees.","Published":"2016-10-19","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"gamlss.cens","Version":"4.3-5","Title":"Fitting an Interval Response Variable Using `gamlss.family'\nDistributions","Description":"This is an add-on package to GAMLSS. The purpose of this\n package is to allow users to fit interval response variables in\n GAMLSS models. 
The main function gen.cens() generates a\n censored version of an existing GAMLSS family distribution.","Published":"2016-05-18","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"gamlss.data","Version":"5.0-0","Title":"GAMLSS Data","Description":"Data for GAMLSS models.","Published":"2016-10-18","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"gamlss.demo","Version":"4.3-3","Title":"Demos for GAMLSS","Description":"Demos for smoothing and gamlss.family distributions.","Published":"2015-07-17","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"gamlss.dist","Version":"5.0-2","Title":"Distributions for Generalized Additive Models for Location Scale\nand Shape","Description":"The different distributions used for the response variables in Generalized Additive Models for Location Scale and Shape.","Published":"2017-06-09","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"gamlss.inf","Version":"1.0-0","Title":"Fitting Mixed (Inflated and Adjusted) Distributions","Description":"This is an add-on package to 'gamlss'. The purpose of this package is to allow users to fit GAMLSS (Generalised Additive Models for Location Scale and Shape) models when the response variable is defined either in the intervals [0,1), (0,1] and [0,1] (inflated at zero and/or one distributions), or in the positive real line including zero (zero-adjusted distributions). The mass points at zero and/or one are treated as extra parameters with the possibility to include a linear predictor for both. The package also allows transformed or truncated distributions from the GAMLSS family to be used for the continuous part of the distribution. Standard methods and GAMLSS diagnostics can be used with the resulting fitted object. 
","Published":"2017-05-02","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"gamlss.mx","Version":"4.3-5","Title":"Fitting Mixture Distributions with GAMLSS","Description":"The main purpose of this package is to allow fitting of\n mixture distributions with GAMLSS models.","Published":"2016-05-18","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"gamlss.nl","Version":"4.1-0","Title":"Fitting non-linear parametric GAMLSS models","Description":"This is an add-on package to GAMLSS. It allows one extra\n method for fitting GAMLSS models. The main function nlgamlss()\n can fit any parametric (up to four parameter) GAMLSS\n distribution.","Published":"2012-02-15","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"gamlss.spatial","Version":"1.3.2","Title":"Spatial Terms in Generalized Additive Models for Location Scale\nand Shape Models","Description":"It allows us to fit Gaussian Markov Random Fields within the\n Generalized Additive Models for Location Scale and Shape algorithms.","Published":"2017-05-30","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"gamlss.tr","Version":"5.0-0","Title":"Generating and Fitting Truncated `gamlss.family' Distributions","Description":"This is an add-on package to GAMLSS. The purpose of this\n package is to allow users to define truncated distributions in\n GAMLSS models. 
The main function gen.trun() generates a truncated\n version of an existing GAMLSS family distribution.","Published":"2016-11-21","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"gamlss.util","Version":"4.3-4","Title":"GAMLSS Utilities","Description":"Extra functions for GAMLSS and other models.","Published":"2016-05-18","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"gamlssbssn","Version":"0.1.0","Title":"Bimodal Skew Symmetric Normal Distribution","Description":"Density, distribution function, quantile function and random generation for the bimodal skew symmetric normal distribution of Hassan and El-Bassiouni (2016).","Published":"2017-06-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"gamm4","Version":"0.2-4","Title":"Generalized Additive Mixed Models using 'mgcv' and 'lme4'","Description":"Estimate generalized additive mixed models via a version of\n function gamm() from 'mgcv', using 'lme4' for estimation.","Published":"2016-09-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Gammareg","Version":"1.0","Title":"classic gamma regression: joint modeling of mean and shape\nparameters","Description":"This package performs gamma regression, where both mean and shape parameters follow linear regression structures. ","Published":"2014-01-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gammSlice","Version":"1.3","Title":"Generalized additive mixed model analysis via slice sampling","Description":"Uses a slice sampling-based Markov chain Monte Carlo to\n conduct Bayesian fitting and inference for generalized additive\n mixed models (GAMM). 
Generalized linear mixed models and\n generalized additive models are also handled as special cases\n of GAMM.","Published":"2015-01-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gamreg","Version":"0.2","Title":"Robust and Sparse Regression via Gamma-Divergence","Description":"Robust regression via gamma-divergence with L1, elastic net and ridge.","Published":"2016-10-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gamRR","Version":"0.1.0","Title":"Calculate the RR for the GAM","Description":"Calculates the relative risk (RR) for a generalized additive model.","Published":"2017-05-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"gamsel","Version":"1.7-3","Title":"Fit Regularization Path for Generalized Additive Models","Description":"Using overlap grouped lasso penalties, gamsel selects whether a term in a gam is nonzero, linear, or a non-linear spline (up to a specified max df per variable). It fits the entire regularization path on a grid of values for the overall penalty lambda, for both gaussian and binomial families. ","Published":"2015-06-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"GANPA","Version":"1.0","Title":"Gene Association Network-based Pathway Analysis","Description":"This package implements a network-based gene weighting\n algorithm for pathways, as well as a gene-weighted gene set\n analysis approach for microarray data pathway analysis.","Published":"2011-05-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"GANPAdata","Version":"1.0","Title":"The GANPA Datasets Package","Description":"This is a dataset package for GANPA, which implements a\n network-based gene weighting approach to pathway analysis. 
This\n package includes data useful for GANPA, such as a functional\n association network, pathways, an expression dataset and\n multi-subunit proteins.","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"gaoptim","Version":"1.1","Title":"Genetic Algorithm optimization for real-based and\npermutation-based problems","Description":"Performs a Genetic Algorithm Optimization, given a\n real-based or permutation-based function and the associated\n search space.","Published":"2013-03-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"gap","Version":"1.1-17","Title":"Genetic Analysis Package","Description":"It is designed as an integrated package for genetic data\n analysis of both population and family data. Currently, it\n contains functions for sample size calculations of both\n population-based and family-based designs, probability of\n familial disease aggregation, kinship calculation, statistics\n in linkage analysis, and association analysis involving genetic\n markers including haplotype analysis with or without environmental\n covariates.","Published":"2017-04-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gapfill","Version":"0.9.5-3","Title":"Fill Missing Values in Satellite Data","Description":"Tools to fill missing values in satellite data and to develop new\n gap-fill algorithms. The methods are tailored to data (images) observed\n at equally-spaced points in time. The package is illustrated with MODIS\n NDVI data.","Published":"2017-06-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gapmap","Version":"0.0.4","Title":"Functions for Drawing Gapped Cluster Heatmap with ggplot2","Description":"The gap encodes the distance between clusters and improves\n interpretation of cluster heatmaps. The gaps can be of the same\n distance based on a height threshold to cut the dendrogram. 
Another\n option is to vary the size of gaps based on the distance between\n clusters.","Published":"2016-12-05","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"gapminder","Version":"0.2.0","Title":"Data from Gapminder","Description":"An excerpt of the data available at Gapminder.org. For each of 142\n countries, the package provides values for life expectancy, GDP per capita,\n and population, every five years, from 1952 to 2007.","Published":"2015-12-31","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"GAR","Version":"1.1","Title":"Authorize and Request Google Analytics Data","Description":"The functions included are used to obtain initial authentication with Google Analytics as well as simple and organized data retrieval from the API. Allows for retrieval from multiple profiles at once.","Published":"2015-09-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"GAS","Version":"0.2.1","Title":"Generalized Autoregressive Score Models","Description":"Simulate, estimate and forecast using univariate and multivariate GAS models.","Published":"2017-06-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"gaselect","Version":"1.0.5","Title":"Genetic Algorithm (GA) for Variable Selection from\nHigh-Dimensional Data","Description":"Provides a genetic algorithm for finding variable\n subsets in high dimensional data with high prediction performance. The\n genetic algorithm can use ordinary least squares (OLS) regression models or\n partial least squares (PLS) regression models to evaluate the prediction\n power of variable subsets. 
By supporting different cross-validation\n schemes, the user can fine-tune the tradeoff between speed and quality of\n the solution.","Published":"2015-02-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gasfluxes","Version":"0.2-1","Title":"Greenhouse Gas Flux Calculation from Chamber Measurements","Description":"Functions for greenhouse gas flux calculation from chamber\n measurements.","Published":"2017-03-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gastempt","Version":"0.4.01","Title":"Analyzing Gastric Emptying from MRI or Scintigraphy","Description":"Fits gastric emptying time series from MRI or scintigraphic measurements\n using nonlinear mixed-model population fits with 'nlme' and Bayesian methods with \n Stan; computes derived parameters such as t50 and AUC.","Published":"2017-05-10","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"gaston","Version":"1.5","Title":"Genetic Data Handling (QC, GRM, LD, PCA) & Linear Mixed Models","Description":"Manipulation of genetic data (SNPs), computation of Genetic Relationship Matrix, Linkage Disequilibrium, etc. Efficient algorithms for Linear Mixed Model (AIREML, diagonalization trick).","Published":"2017-05-25","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"gatepoints","Version":"0.1.3","Title":"Easily Gate or Select Points on a Scatter Plot","Description":"Allows user to choose/gate a region on the plot and returns points\n within it.","Published":"2017-03-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GauPro","Version":"0.1.0","Title":"Gaussian Process Fitting","Description":"Fits a Gaussian process model to data. Gaussian processes\n are commonly used in computer experiments to fit an interpolating model.\n The model is stored as an 'R6' object and can be easily updated with new \n data. There are options to run in parallel (not for Windows), and 'Rcpp'\n has been used to speed up calculations. 
Other R packages that perform\n similar calculations include 'laGP', 'DiceKriging', 'GPfit', and 'mlegp'.","Published":"2016-10-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"gaussDiff","Version":"1.1","Title":"Difference measures for multivariate Gaussian probability\ndensity functions","Description":"A collection of difference measures for multivariate Gaussian\n probability density functions, such as the Euclidean mean, the\n Mahalanobis distance, the Kullback-Leibler divergence, the\n J-Coefficient, the Minkowski L2-distance, the Chi-square\n divergence and the Hellinger Coefficient.","Published":"2012-08-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gaussfacts","Version":"0.0.2","Title":"The Greatest Mathematician Since Antiquity","Description":"Display a random fact about Carl Friedrich Gauss\n based on the collection curated by Mike Cavers via the\n site.","Published":"2016-08-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gaussquad","Version":"1.0-2","Title":"Collection of functions for Gaussian quadrature","Description":"A collection of functions to perform Gaussian quadrature\n with different weight functions corresponding to the orthogonal\n polynomials in package orthopolynom. Examples verify the\n orthogonality and inner products of the polynomials.","Published":"2013-02-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gazepath","Version":"1.2","Title":"Parse Eye-Tracking Data into Fixations","Description":"Eye-tracking data must be transformed into fixations and saccades before it can be analyzed. This package provides a non-parametric speed-based approach to do this on a trial basis. The method is especially useful when there are large differences in data quality, as the thresholds are adjusted accordingly. 
The same pre-processing procedure can be applied to all participants, while accounting for individual differences in data quality.","Published":"2017-03-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"gb","Version":"1.1.8-8","Title":"Generalized Lambda Distribution and Generalized Bootstrapping","Description":"This package collects algorithms and functions for fitting data to a generalized lambda distribution via moment matching methods, and for generalized bootstrapping.","Published":"2013-08-08","License":"Unlimited","snapshot_date":"2017-06-23"} {"Package":"GB2","Version":"2.1","Title":"Generalized Beta Distribution of the Second Kind: Properties,\nLikelihood, Estimation","Description":"Package GB2 explores the Generalized Beta distribution of the second kind. Density, cumulative distribution function, quantiles and moments of the distributions are given. Functions for the full log-likelihood, the profile log-likelihood and the scores are provided. Formulas for various indicators of inequality and poverty under the GB2 are implemented. The GB2 is fitted by the methods of maximum pseudo-likelihood estimation using the full and profile log-likelihood, and non-linear least squares estimation of the model parameters. Various plots for the visualization and analysis of the results are provided. Variance estimation of the parameters is provided for the method of maximum pseudo-likelihood estimation. A mixture distribution based on the compounding property of the GB2 is presented (denoted as \"compound\" in the documentation). This mixture distribution is based on the discretization of the distribution of the underlying random scale parameter. The discretization can be left or right tail. Density, cumulative distribution function, moments and quantiles for the mixture distribution are provided. The compound mixture distribution is fitted using the method of maximum pseudo-likelihood estimation. The fit can also incorporate the use of auxiliary information. 
In this new version of the package, the mixture case is complemented with new functions for variance estimation by linearization and comparative density plots. ","Published":"2015-05-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gbm","Version":"2.1.3","Title":"Generalized Boosted Regression Models","Description":"An implementation of extensions to Freund and\n Schapire's AdaBoost algorithm and Friedman's gradient boosting\n machine. Includes regression methods for least squares,\n absolute loss, t-distribution loss, quantile regression,\n logistic, multinomial logistic, Poisson, Cox proportional\n hazards partial likelihood, AdaBoost exponential loss,\n Huberized hinge loss, and Learning to Rank measures\n (LambdaMart).","Published":"2017-03-21","License":"GPL (>= 2) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"gbm2sas","Version":"2.1","Title":"Convert GBM Object Trees to SAS Code","Description":"Writes SAS code to get predicted values from every tree of a gbm.object.","Published":"2015-11-11","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"gbp","Version":"0.1.0.4","Title":"A Bin Packing Problem Solver","Description":"Basic infrastructure and several algorithms for the 1d-4d bin packing\n problem. This package provides a set of c-level classes and solvers for\n the 1d-4d bin packing problem, and an r-level solver for the 4d bin packing problem,\n which is a wrapper over the c-level 4d bin packing problem solver.\n The 4d bin packing problem solver aims to solve the bin packing problem, a.k.a.\n the container loading problem, with an additional constraint on weight.\n Given a set of rectangular-shaped items, and a set of rectangular-shaped bins\n with weight limits, the solver looks for an orthogonal packing solution\n that minimizes the number of bins and maximizes volume utilization.\n Each rectangular-shaped item i = 1, .. 
, n is characterized by length l_i,\n depth d_i, height h_i, and weight w_i, and each rectangular-shaped bin\n j = 1, .. , m is specified similarly by length l_j, depth d_j, height h_j,\n and weight limit w_j.\n Each item can be rotated into any orthogonal direction, and no further\n restrictions are implied.","Published":"2017-01-28","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"gbRd","Version":"0.4-11","Title":"Utilities for processing Rd objects and files","Description":"Provides utilities for processing Rd objects and files.\n Extract argument descriptions and other parts of the help pages\n of functions.","Published":"2012-10-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gbs2ploidy","Version":"1.0","Title":"Inference of Ploidy from GBS (Genotyping-by-Sequencing) Data","Description":"Functions for inference of ploidy from GBS (genotyping-by-sequencing) data, including a function to infer allelic ratios and allelic proportions in a Bayesian framework. ","Published":"2016-12-01","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"gbts","Version":"1.2.0","Title":"Hyperparameter Search for Gradient Boosted Trees","Description":"An implementation of hyperparameter optimization for Gradient\n Boosted Trees on binary classification and regression problems. The current\n version provides two optimization methods: Bayesian optimization and random\n search. Instead of giving the single best model, the final output is an \n ensemble of Gradient Boosted Trees constructed via the method of ensemble \n selection.","Published":"2017-02-27","License":"GPL (>= 2) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"gbutils","Version":"0.2-0","Title":"Simulation of Real and Complex Numbers and Small Programming\nUtilities","Description":"Simulate real and complex numbers from distributions of\n their magnitude and arguments. Optionally, the magnitudes and/or\n arguments may be fixed in almost arbitrary ways. 
Small\n programming utilities: check if an object is identical to NA,\n count positional arguments in a call, set intersection of more\n than two sets, check if an argument is unnamed.","Published":"2016-01-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GCAI.bias","Version":"1.0","Title":"Guided Correction Approach for Inherited bias (GCAI.bias)","Description":"Many inherited biases and effects exist in RNA-seq due to both biological and technical effects. We observed that the biological variance of tested target transcripts can influence the yield of sequencing reads, which might indicate resource competition in RNA-seq. We developed this package to capture bias that depends on the local sequence and to correct this type of bias by borrowing information from spike-in measurements.","Published":"2014-07-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GCalignR","Version":"0.1.0","Title":"Simple Peak Alignment for Gas-Chromatography Data","Description":"Aligns chromatography peaks with a three-step algorithm: (1) linear\n transformation of retention times to maximise shared peaks among samples,\n (2) alignment of peaks within a certain error interval, and (3) merging of rows that are likely\n representing the same substance (i.e. no sample shows peaks in both rows and\n the rows have similar retention time means).\n The method was first described in Stoffel et al. (2015).","Published":"2017-02-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gCat","Version":"0.1","Title":"Graph-based two-sample tests for categorical data","Description":"These are two-sample tests for categorical data utilizing similarity information among the categories. 
They are useful when there is underlying structure on the categories.","Published":"2014-08-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gcbd","Version":"0.2.6","Title":"'GPU'/CPU Benchmarking in Debian-Based Systems","Description":"'GPU'/CPU benchmarking on Debian-package based systems.\n This package benchmarks the performance of a few standard linear algebra\n operations (such as a matrix product and QR, SVD and LU decompositions)\n across a number of different 'BLAS' libraries as well as a 'GPU' implementation.\n To do so, it takes advantage of the ability to 'plug and play' different\n 'BLAS' implementations easily on a Debian and/or Ubuntu system. The current\n version supports:\n - 'Reference BLAS' ('refblas'), which is un-accelerated, as a baseline;\n - Atlas, which is tuned but typically configured single-threaded;\n - Atlas39, which is tuned and configured for multi-threaded mode;\n - 'Goto Blas', which is accelerated and multi-threaded;\n - 'Intel MKL', which is a commercial accelerated and multithreaded version.\n As for 'GPU' computing, we use the CRAN package\n - 'gputools'.\n For 'Goto Blas', the 'gotoblas2-helper' script from the ISM in Tokyo can be\n used. For 'Intel MKL' we use the Revolution R packages from Ubuntu 9.10.","Published":"2016-09-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GCD","Version":"3.0.5","Title":"Global Charcoal Database","Description":"Contains the Global Charcoal Database data. 
Data include charcoal\n series (age, depth, charcoal quantity, associated units and methods) and\n information on sedimentary sites (location, depositional environment, biome, etc.).","Published":"2015-04-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gcdnet","Version":"1.0.4","Title":"LASSO and (adaptive) Elastic-Net penalized least squares,\nlogistic regression, HHSVM and squared hinge loss SVM using a\nfast GCD algorithm","Description":"This package implements a generalized coordinate descent (GCD) algorithm for computing the solution path of the hybrid Huberized support vector machine (HHSVM) and its generalization, including the elastic net penalized least squares, the elastic net penalized SVM with the squared hinge loss and the elastic net penalized logistic regression.","Published":"2013-11-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"gcerisk","Version":"16.1.3","Title":"Generalized Competing Event Model","Description":"Generalized competing event model based on the Cox PH model and the Fine-Gray model.\n This function is designed to develop optimized risk-stratification methods for competing\n risks data, such as described in:\n 1. Carmona R, Gulaya S, Murphy JD, Rose BS, Wu J, Noticewala S, McHale MT, Yashar CM, Vaida F,\n and Mell LK. (2014). Validated competing event model for the stage I-II\n endometrial cancer population. Int J Radiat Oncol Biol Phys. 89:888-98.\n 2. Carmona R, Zakeri K, Green G, Hwang L, Gulaya S, Xu B, Verma R, Williamson CW, Triplett DP, Rose\n BS, Shen H, Vaida F, Murphy JD, and Mell LK. (2016). Improved method to stratify\n elderly cancer patients at risk for competing events. 
J Clin Oncol., in press.","Published":"2016-04-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gcite","Version":"0.6","Title":"Google Citation Parser","Description":"Scrapes Google Citation pages and creates data frames of \n citations over time.","Published":"2017-05-14","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"gcKrig","Version":"1.0.2","Title":"Analysis of Geostatistical Count Data using Gaussian Copulas","Description":"Provides a variety of functions to analyze and model\n geostatistical count data with Gaussian copulas, including\n 1) data simulation and visualization; \n 2) correlation structure assessment (here also known as the Normal To Anything); \n 3) calculation of multivariate normal rectangle probabilities; \n 4) likelihood inference and parallel prediction at predictive locations.","Published":"2017-04-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gclus","Version":"1.3.1","Title":"Clustering Graphics","Description":"Orders panels in scatterplot matrices and parallel\n coordinate displays by some merit index. 
Package contains\n various indices of merit, ordering functions, and enhanced\n versions of pairs and parcoord which color panels according to\n their merit level.","Published":"2012-06-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gcmr","Version":"1.0.0","Title":"Gaussian Copula Marginal Regression","Description":"Likelihood inference in Gaussian copula marginal\n regression models.","Published":"2017-04-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gconcord","Version":"0.41","Title":"Concord method for Graphical Model Selection","Description":"Estimates a sparse inverse covariance matrix from a convex\n pseudo-likelihood function with an L1 penalty.","Published":"2014-01-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gcookbook","Version":"1.0","Title":"Data for \"R Graphics Cookbook\"","Description":"This package contains data sets used in the book \"R\n Graphics Cookbook\" by Winston Chang, published by O'Reilly\n Media.","Published":"2012-11-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"GCPM","Version":"1.2.2","Title":"Generalized Credit Portfolio Model","Description":"Analyze the default risk of credit portfolios. Commonly known models, \n\t\tlike CreditRisk+ or the CreditMetrics model, are implemented in their very basic settings.\n\t\tThe portfolio loss distribution can be obtained either by simulation or analytically \n\t\tin the case of the classic CreditRisk+ model. Models are only implemented to account for losses\n\t\tcaused by defaults, i.e. migration risk is not included. The package structure is kept\n\t\tflexible especially with respect to distributional assumptions in order to quantify the\n\t\tsensitivity of risk figures with respect to several assumptions. 
Therefore the package\n\t\tcan be used to determine the credit risk of a given portfolio as well as to quantify\n\t\tmodel sensitivities.","Published":"2016-12-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"GDAdata","Version":"0.93","Title":"Datasets for the Book Graphical Data Analysis with R","Description":"Datasets used in the book 'Graphical Data Analysis with R' (Antony Unwin, CRC Press 2015).","Published":"2015-05-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gdalUtils","Version":"2.0.1.7","Title":"Wrappers for the Geospatial Data Abstraction Library (GDAL)\nUtilities","Description":"Wrappers for the Geospatial Data Abstraction Library (GDAL)\n Utilities.","Published":"2015-10-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gdata","Version":"2.18.0","Title":"Various R Programming Tools for Data Manipulation","Description":"Various R programming tools for data manipulation, including:\n - medical unit conversions ('ConvertMedUnits', 'MedUnits'),\n - combining objects ('bindData', 'cbindX', 'combine', 'interleave'),\n - character vector operations ('centerText', 'startsWith', 'trim'),\n - factor manipulation ('levels', 'reorder.factor', 'mapLevels'),\n - obtaining information about R objects ('object.size', 'elem', 'env',\n 'humanReadable', 'is.what', 'll', 'keep', 'ls.funs',\n 'Args','nPairs', 'nobs'),\n - manipulating MS-Excel formatted files ('read.xls',\n 'installXLSXsupport', 'sheetCount', 'xlsFormats'),\n - generating fixed-width format files ('write.fwf'),\n - extricating components of date & time objects ('getYear', 'getMonth',\n 'getDay', 'getHour', 'getMin', 'getSec'),\n - operations on columns of data frames ('matchcols', 'rename.vars'),\n - matrix operations ('unmatrix', 'upperTriangle', 'lowerTriangle'),\n - operations on vectors ('case', 'unknownToNA', 'duplicated2', 'trimSum'),\n - operations on data frames ('frameApply', 'wideByFactor'),\n - value of last evaluated expression ('ans'), 
and\n - wrapper for 'sample' that ensures consistent behavior for both\n scalar and vector arguments ('resample').","Published":"2017-06-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"GDAtools","Version":"1.4","Title":"A Toolbox for the Analysis of Categorical Data in Social\nSciences, and Especially Geometric Data Analysis","Description":"Contains functions for 'specific' MCA (Multiple Correspondence Analysis), \n\t'class specific' MCA, computing and plotting structuring factors and concentration ellipses, \n\tMultiple Factor Analysis, 'standardized' MCA, inductive tests and others tools for Geometric Data Analysis. It also provides functions\n\tfor the translation of logit models coefficients into percentages, weighted contingency tables and an association \n measure - i.e. Percentages of Maximum Deviation from Independence (PEM).","Published":"2017-03-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GDELTtools","Version":"1.2","Title":"Download, slice, and normalize GDELT data","Description":"The GDELT data set is over 60 GB now and growing 100 MB a month.\n The number of source articles has increased over time and unevenly across\n countries. This package makes it easy to download a subset of that data,\n then normalize that data to facilitate valid time series analysis.","Published":"2014-02-27","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"gdimap","Version":"0.1-9","Title":"Generalized Diffusion Magnetic Resonance Imaging","Description":"Diffusion anisotropy has been used to characterize\n white matter neuronal pathways in the human brain, and infer global\n connectivity in the central nervous system. The package implements\n algorithms to estimate and visualize the orientation of neuronal\n pathways in model-free methods (q-space imaging methods).\n For estimating fibre orientations two methods have been\n implemented. 
One method implements fibre orientation detection\n through local maxima extraction. A second more robust method\n is based on directional statistical clustering of ODF voxel data.\n Fibre orientations in multiple fibre voxels are estimated using\n a mixture of von Mises-Fisher (vMF) distributions. This statistical\n estimation procedure is used to resolve crossing fibre\n configurations.\n Reconstruction of orientation distribution function (ODF)\n profiles may be performed using the standard generalized\n q-sampling imaging (GQI) approach, Garyfallidis' GQI (GQI2)\n approach, or Aganj's variant of the Q-ball imaging (CSA-QBI)\n approach. Procedures for the visualization of RGB-maps,\n line-maps and glyph-maps of real diffusion magnetic resonance\n imaging (dMRI) data-sets are included in the package.","Published":"2015-01-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GDINA","Version":"1.4.2","Title":"The Generalized DINA Model Framework","Description":"A set of psychometric tools for cognitive diagnostic analyses for both dichotomous and polytomous responses. Various cognitive diagnosis models can be estimated, include the generalized deterministic inputs, noisy and gate (G-DINA) model by de la Torre (2011) , the sequential G-DINA model by Ma and de la Torre (2016) , and many other models they subsume. Joint attribute distribution can be saturated, higher-order or structured. Q-matrix validation, item and model fit statistics, model comparison at test and item level and differential item functioning can also be conducted. 
A graphical user interface is also provided.","Published":"2017-04-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"gdistance","Version":"1.2-1","Title":"Distances and Routes on Geographical Grids","Description":"Calculate distances and routes on geographic grids.","Published":"2017-02-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gdm","Version":"1.3.2","Title":"Generalized Dissimilarity Modeling","Description":"A toolkit with functions to fit, plot, and summarize Generalized Dissimilarity Models (Ferrier et al. 2007, ).","Published":"2017-03-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"gdmp","Version":"0.1.0","Title":"Genomic Data Management","Description":"Manage and analyze high-dimensional SNP data from chips with multiple densities.","Published":"2016-09-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gdns","Version":"0.2.0","Title":"Tools to Work with Google DNS Over HTTPS API","Description":"To address the problem of insecurity of UDP-based DNS requests,\n Google Public DNS offers DNS resolution over an encrypted HTTPS\n connection. DNS-over-HTTPS greatly enhances privacy and security\n between a client and a recursive resolver, and complements DNSSEC\n to provide end-to-end authenticated DNS lookups. Functions that enable\n querying individual requests that bulk requests that return detailed\n responses and bulk requests are both provided. Support for reverse\n lookups is also provided. 
See \n for more information.","Published":"2016-10-01","License":"AGPL + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"gdpc","Version":"1.0.2","Title":"Generalized Dynamic Principal Components","Description":"Functions to compute the Generalized Dynamic Principal Components\n introduced in Peña and Yohai (2016) .","Published":"2017-02-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gds","Version":"0.1.0","Title":"Descriptive Statistics of Grouped Data","Description":"Contains a function called gds() which accepts three input\n parameters like lower limits, upper limits and the frequencies of the\n corresponding classes. The gds() function calculate and return the values\n of mean ('gmean'), median ('gmedian'), mode ('gmode'), variance ('gvar'), standard\n deviation ('gstdev'), coefficient of variance ('gcv'), quartiles ('gq1', 'gq2', 'gq3'),\n inter-quartile range ('gIQR'), skewness ('g1'), and kurtosis ('g2') which facilitate\n effective data analysis. For skewness and kurtosis calculations we use moments.","Published":"2016-04-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gdtools","Version":"0.1.4","Title":"Utilities for Graphical Rendering","Description":"Useful tools for writing vector graphics devices.","Published":"2017-03-17","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"gear","Version":"0.1.1","Title":"Geostatistical Analysis in R","Description":"Implements common geostatistical methods in a clean,\n straightforward, efficient manner. 
A quasi reboot of the SpatialTools R package.","Published":"2015-12-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gee","Version":"4.13-19","Title":"Generalized Estimation Equation Solver","Description":"Generalized Estimation Equation solver.","Published":"2015-06-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"gee4","Version":"0.1.0.0","Title":"Generalised Estimating Equations (GEE/WGEE) using 'Armadillo'\nand S4","Description":"Fit joint mean-covariance models for longitudinal data within the \n framework of (weighted) generalised estimating equations (GEE/WGEE). The \n models and their components are represented using S4 classes and methods. \n The core computational algorithms are implemented using the 'Armadillo' C++ \n library for numerical linear algebra and 'RcppArmadillo' glue.","Published":"2017-03-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GEEaSPU","Version":"1.0.2","Title":"Adaptive Association Tests for Multiple Phenotypes using\nGeneralized Estimating Equations (GEE)","Description":"Provides adaptive association tests for SNP level, gene level and pathway level analyses.","Published":"2016-08-04","License":"GNU General Public License (>= 3)","snapshot_date":"2017-06-23"} {"Package":"geeM","Version":"0.10.0","Title":"Solve Generalized Estimating Equations","Description":"GEE estimation of the parameters in mean structures with possible\n correlation between the outcomes. User-specified mean link and variance\n functions are allowed, along with observation weighting. 
The \"M\" in the name\n \"geeM\" is meant to emphasize the use of the Matrix package, which allows for an\n implementation based fully in R.","Published":"2016-12-05","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"GEEmediate","Version":"1.1.1","Title":"Mediation Analysis for Generalized Linear Models Using the\nDifference Method","Description":"Causal mediation analysis for a single exposure/treatment and a\n single mediator, both allowed to be either continuous or binary. The package\n implements the difference method and provide point and interval estimates as\n well as testing for the natural direct and indirect effects and the mediation\n proportion.","Published":"2017-05-13","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"geepack","Version":"1.2-1","Title":"Generalized Estimating Equation Package","Description":"Generalized estimating equations solver for parameters in\n mean, scale, and correlation structures, through mean link,\n scale link, and correlation link. 
Can also handle clustered\n categorical responses.","Published":"2016-09-24","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"geesmv","Version":"1.3","Title":"Modified Variance Estimators for Generalized Estimating\nEquations","Description":"Generalized estimating equations with the original sandwich variance estimator proposed by Liang and Zeger (1986), and eight types of more recent modified variance estimators for improving the finite small-sample performance.","Published":"2015-10-12","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"geigen","Version":"2.0","Title":"Calculate Generalized Eigenvalues, the Generalized Schur\nDecomposition and the Generalized Singular Value Decomposition\nof a Matrix Pair with Lapack","Description":"Functions to compute generalized eigenvalues and eigenvectors,\n the generalized Schur decomposition and\n the generalized Singular Value Decomposition of a matrix pair,\n using Lapack routines.","Published":"2017-02-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"geiger","Version":"2.0.6","Title":"Analysis of Evolutionary Diversification","Description":"Methods for fitting macroevolutionary models to phylogenetic trees.","Published":"2015-09-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GEint","Version":"0.1.2","Title":"Misspecified Models for Gene-Environment Interaction","Description":"First major functionality is to compute the bias in misspecified linear gene-environment interaction models. The most \n\tgeneralized function for this objective is GE_bias(). However GE_bias() requires specification of many\n\thigher order moments of covariates in the model. If users are unsure about how to calculate/estimate\n\tthese higher order moments, it may be easier to use GE_bias_normal_squaredmis(). 
This function places\n\tmany more assumptions on the covariates (most notably that they are all jointly generated from a multivariate\n\tnormal distribution) and is thus able to automatically calculate many of the higher order moments automatically,\n\tnecessitating only that the user specify some covariances. There are also functions to solve for the bias \n\tthrough simulation and non-linear equation solvers, these can be used to check your work. Second major functionality\n\tis to implement the Bootstrap Inference with Correct Sandwich (BICS) testing procedure, which we have found to provide better finite-sample\n\tperformance than other inference procedures for testing GxE interaction. More details on these functions \n\tare available in Sun, Carroll, Christiani, and Lin, \"Testing for Gene-Environment Interaction Under Exposure Misspecification\"\n\t(Submitted).","Published":"2017-01-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"gelnet","Version":"1.2.1","Title":"Generalized Elastic Nets","Description":"Implements several extensions of the elastic net regularization\n scheme. These extensions include individual feature penalties for the L1 term,\n feature-feature penalties for the L2 term, as well as translation coefficients\n for the latter.","Published":"2016-04-05","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"gem","Version":"0.19","Title":"File Conversion for 'Gem Infrasound Logger'","Description":"Reads data files from the 'Gem infrasound logger' for analysis and converts to segy format (which is convenient for reading with traditional seismic analysis software). 
The Gem infrasound logger is an in-development low-cost, lightweight, low-power instrument for recording infrasound in field campaigns; email the maintainer for more information.","Published":"2016-08-05","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"gemlog","Version":"0.20","Title":"File Conversion for 'Gem Infrasound Logger'","Description":"Reads data files from the 'Gem infrasound logger' for analysis and converts to segy format (which is convenient for reading with traditional seismic analysis software). The Gem infrasound logger is an in-development low-cost, lightweight, low-power instrument for recording infrasound in field campaigns; email the maintainer for more information.","Published":"2016-11-10","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"gems","Version":"1.1.1","Title":"Generalized Multistate Simulation Model","Description":"Simulate and analyze multistate models with general hazard\n functions. gems provides functionality for the preparation of hazard functions\n and parameters, simulation from a general multistate model and predicting future\n events. The multistate model is not required to be a Markov model and may take\n the history of previous events into account. In the basic version, it allows\n to simulate from transition-specific hazard function, whose parameters are\n multivariable normally distributed.","Published":"2017-03-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"gemtc","Version":"0.8-2","Title":"Network Meta-Analysis Using Bayesian Methods","Description":"Network meta-analyses (mixed treatment comparisons) in the Bayesian\n framework using JAGS. 
Includes methods to assess heterogeneity and\n inconsistency, and a number of standard visualizations.","Published":"2016-12-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"gemtc.jar","Version":"0.14.3","Title":"GeMTC Java binary","Description":"An R package providing the Java JAR for the gemtc package","Published":"2013-01-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"GenABEL","Version":"1.8-0","Title":"genome-wide SNP association analysis","Description":"a package for genome-wide association analysis between \n quantitative or binary traits and single-nucleotide\n polymorphisms (SNPs). ","Published":"2013-12-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GenABEL.data","Version":"1.0.0","Title":"Package contains data which is used by GenABEL example and test\nfunctions","Description":"GenABEL.data package consists of a data set used by GenABEL functions","Published":"2013-12-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"genalg","Version":"0.2.0","Title":"R Based Genetic Algorithm","Description":"R based genetic algorithm for binary and floating point\n chromosomes.","Published":"2015-03-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"genasis","Version":"1.0","Title":"Global ENvironmental ASsessment Information System (GENASIS)\ncomputational tools","Description":"genasis package contains methods for air pollution assessment. Concerned on persistent organic pollutants, the package allows to compute trends of their concentrations, compare different datasets and estimate relations between values from active and passive air samplers.","Published":"2014-06-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GenBinomApps","Version":"1.0-2","Title":"Clopper-Pearson Confidence Interval and Generalized Binomial\nDistribution","Description":"Density, distribution function, quantile function and random generation for the Generalized Binomial Distribution. 
Functions to compute the Clopper-Pearson Confidence Interval and the required sample size. Enhanced model for burn-in studies, where failures are tackled by countermeasures.","Published":"2014-06-30","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"GenCAT","Version":"1.0.3","Title":"Genetic Class Association Testing","Description":"Implementation of the genetic class level association testing (GenCAT) method from SNP level association data. Refer to: \"Qian J, Nunez S, Reed E, Reilly MP, Foulkes AS (2016) A Simple Test of Class-Level Genetic Association Can Reveal Novel Cardiometabolic Trait Loci. PLoS ONE 11(2): e0148218\".","Published":"2016-06-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"gencve","Version":"0.3","Title":"General Cross Validation Engine","Description":"Engines for cross-validation of many types of regression and class prediction models are provided. These engines include built-in support for 'glmnet', 'lars', 'plus', 'MASS', 'rpart', 'C50' and 'randomforest'. It is easy for the user to add other regression or classification algorithms. The 'parallel' package is used to improve speed. Several data generation algorithms for problems in regression and classification are provided.","Published":"2016-04-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gendata","Version":"1.1","Title":"Generate and Modify Synthetic Datasets","Description":"Set of functions to create datasets using a correlation matrix. ","Published":"2015-03-01","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"gender","Version":"0.5.1","Title":"Predict Gender from Names Using Historical Data","Description":"Encodes gender based on names and dates of birth using historical\n datasets. 
By using these datasets instead of lists of male and female names,\n this package is able to more accurately guess the gender of a name, and it\n is able to report the probability that a name was male or female.","Published":"2016-12-05","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"genderizeR","Version":"2.0.0","Title":"Gender Prediction Based on First Names","Description":"Utilizes the 'genderize.io' Application Programming Interface \n to predict gender from first names extracted from a text vector. \n The accuracy of prediction could be controlled by two parameters: \n counts of a first name in the database and probability of prediction.","Published":"2016-05-11","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"gendist","Version":"1.0","Title":"Generated Probability Distribution Models","Description":"Computes the probability density function (pdf), cumulative distribution function (cdf), quantile function (qf) and generates random values (rg) for the following general models : mixture models, composite models, folded models, skewed symmetric models and arc tan models.","Published":"2015-08-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GENEAread","Version":"1.1.1","Title":"Package For Reading Binary files","Description":"Functions and analytics for GENEA-compatible accelerometer\n data into R objects. 
See topic 'GENEAread' for an introduction\n to the package.","Published":"2013-04-23","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"GeneClusterNet","Version":"1.0.1","Title":"Gene Expression Clustering and Gene Network","Description":"Functions for clustering time-course gene expression and reconstructing of gene regulatory network based on Dynamic Bayesian Network.","Published":"2017-01-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"GeneCycle","Version":"1.1.2","Title":"Identification of Periodically Expressed Genes","Description":"The GeneCycle package implements the approaches of Wichert\n et al. (2004), Ahdesmaki et al. (2005) and Ahdesmaki et al.\n (2007) for detecting periodically expressed genes from gene\n expression time series data.","Published":"2012-04-16","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"GeneF","Version":"1.0","Title":"Package for Generalized F-statistics","Description":"This package implements several generalized F-statistics.\n The current version includes a generalized F-statistic based on\n the flexible isotonic/monotonic regression or order restricted\n hypothesis testing.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GeneFeST","Version":"1.0.1","Title":"Bayesian calculation of gene-specific FST from genomic SNP data","Description":"GeneFeST is a genome scan method to detect outlier loci (genes) which are under balancing or directional selection.","Published":"2014-05-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"geneListPie","Version":"1.0","Title":"Profiling a gene list into GOslim or KEGG function pie","Description":"\"geneListPie\" package is for mapping a gene list to\n function categories defined in GOSlim or Kegg. The results can\n be plotted as a pie chart to provide a quick view of the genes\n distribution of the gene list among the function categories.\n The gene list must contain a list of gene symbols. 
The package\n contains a set of pre-processed gene sets obtained from Gene\n Ontology and MSigDB including human, mouse, rat and yeast. To\n provide a high level concise view, only GO slim and kegg are\n provided. The gene sets are regulared updated. User can also\n use customized gene sets. User can use the R Pie() or Pie3D()\n function for plotting the pie chart. Users can also choose to\n output the gene function mapping results and use external\n software such as Excel(R) for ploting.","Published":"2012-07-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"genemodel","Version":"1.1.0","Title":"Gene Model Plotting in R","Description":"Using simple input, this package creates plots of gene models. Users can create plots of alternatively spliced gene variants and the positions of mutations and other gene features.","Published":"2017-02-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"GeneNet","Version":"1.2.13","Title":"Modeling and Inferring Gene Networks","Description":"Analyzes gene expression\n (time series) data with focus on the inference of gene networks.\n In particular, GeneNet implements the methods of Schaefer and \n Strimmer (2005a,b,c) and Opgen-Rhein and Strimmer (2006, 2007)\n for learning large-scale gene association networks (including\n assignment of putative directions). 
","Published":"2015-08-02","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"geneNetBP","Version":"2.0.1","Title":"Belief Propagation in Genotype-Phenotype Networks","Description":"Belief propagation methods in genotype-phenotype networks (Conditional Gaussian and Discrete Bayesian Networks) to propagate phenotypic evidence through the network.","Published":"2016-08-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"genepi","Version":"1.0.1","Title":"Genetic Epidemiology Design and Inference","Description":"Functions for Genetic Epi Methods Developed at MSKCC","Published":"2010-10-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"genepop","Version":"1.0","Title":"Population Genetic Data Analysis Using Genepop","Description":"Makes the Genepop software available in R. This software implements a mixture of traditional population genetic methods and some more focused developments: it computes exact tests for Hardy-Weinberg equilibrium, for population differentiation and for genotypic disequilibrium among pairs of loci; it computes estimates of F-statistics, null allele frequencies, allele size-based statistics for microsatellites, etc.; and it performs analyses of isolation by distance from pairwise comparisons of individuals or population samples. ","Published":"2017-06-14","License":"CeCILL-2","snapshot_date":"2017-06-23"} {"Package":"generalCorr","Version":"1.0.5","Title":"Generalized Correlations and Initial Causal Path","Description":"Since causal paths from data are important for all sciences, the\n package provides sophisticated functions. The idea is simply \n that if X causes Y (path: X to Y) then non-deterministic variation in X\n is more \"original or independent\" than similar variation in Y. We compare \n two flipped kernel regressions: X=f(Y, Z) and Y=g(X,Z), where Z are control \n variables. 
Our first two criteria compare absolute gradients (Cr1) and \n absolute residuals (Cr2), both quantified by stochastic dominance of four \n orders (SD1 to SD4). Our third criterion (Cr3) expects X to be better able \n to predict Y than the other way around using generalized partial correlation \n If |r*(x|y)|> |r*(y|x)| it suggests that y is more likely\n the \"kernel cause\" of x. The usual partial correlations are generalized for \n the asymmetric matrix of r*'s developed here.\n Partial correlations help asses effect of x on y after removing the effect of a\n set of variables. \n The package provides additional tools for causal assessment,\n for printing the causal directions in a clear, comprehensive compact summary form,\n for matrix algebra, for \"outlier detection\", and for numerical integration by the\n trapezoidal rule, stochastic dominance, etc. \n The package has functions for bootstrap-based statistical inference and one \n for a heuristic t-test.","Published":"2017-06-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"generalhoslem","Version":"1.2.4","Title":"Goodness of Fit Tests for Logistic Regression Models","Description":"Functions to assess the goodness of fit of binary, multinomial and ordinal logistic models.\n\tIncluded are the Hosmer-Lemeshow tests (binary, multinomial and ordinal) and the Lipsitz and \n\tPulkstenis-Robinson tests (ordinal).","Published":"2016-12-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"GeneralizedHyperbolic","Version":"0.8-1","Title":"The Generalized Hyperbolic Distribution","Description":"This package provides functions for the hyperbolic and\n related distributions. Density, distribution and quantile\n functions and random number generation are provided for the\n hyperbolic distribution, the generalized hyperbolic\n distribution, the generalized inverse Gaussian distribution and\n the skew-Laplace distribution. 
Additional functionality is\n provided for the hyperbolic distribution, normal inverse\n Gaussian distribution and generalized inverse Gaussian\n distribution, including fitting of these distributions to data.\n Linear models with hyperbolic errors may be fitted using\n hyperblmFit.","Published":"2015-01-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GeneralizedUmatrix","Version":"0.9.4","Title":"Credible Visualization for Two-Dimensional Projections of Data","Description":"Projections from a high-dimensional data space onto a two-dimensional plane are used to detect structures, such as clusters, in multivariate data. The generalized Umatrix is able to visualize errors of these two-dimensional scatter plots by using a 3D topographic map.","Published":"2017-05-18","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"GeneralOaxaca","Version":"1.0","Title":"Blinder-Oaxaca Decomposition for Generalized Linear Model","Description":"Perform the Blinder-Oaxaca decomposition for generalized linear \n model with bootstrapped standard errors. 
The twofold and threefold \n decomposition are given, even the generalized linear model output in each group.","Published":"2015-08-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"generator","Version":"0.1.0","Title":"Generate Data Containing Fake Personally Identifiable\nInformation","Description":"Allows users to quickly and easily generate fake data containing\n Personally Identifiable Information (PII) through convenience functions.","Published":"2015-08-26","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"GeneReg","Version":"1.1.2","Title":"Construct time delay gene regulatory network","Description":"GeneReg is an R package for inferring time delay gene\n regulatory network using time course gene expression profiles.\n The main idea of time delay linear model is to fit a linear\n regression model using a set of putative regulators to estimate\n the transcription pattern of a specific target gene.","Published":"2012-10-29","License":"LGPL","snapshot_date":"2017-06-23"} {"Package":"geneSignatureFinder","Version":"2014.02.17","Title":"A Gene-signatures finder tools","Description":"A tool for finding an ensemble gene-signature by a steepest ascending algorithm partially supervised by survival time data. ","Published":"2014-03-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"geneSLOPE","Version":"0.37.0","Title":"Genome-Wide Association Study with SLOPE","Description":"Genome-wide association study (GWAS) performed with SLOPE,\n short for Sorted L-One Penalized Estimation, a\n method for estimating the vector of coefficients in linear model.\n In the first step of GWAS, SNPs are clumped according to their correlations and\n distances. 
Then, SLOPE is performed on data where each clump has\n one representative.","Published":"2016-10-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"genetics","Version":"1.3.8.1","Title":"Population Genetics","Description":"Classes and methods for handling genetic data. Includes\n classes to represent genotypes and haplotypes at single markers\n up to multiple markers on multiple chromosomes. Function\n include allele frequencies, flagging homo/heterozygotes,\n flagging carriers of certain alleles, estimating and testing\n for Hardy-Weinberg disequilibrium, estimating and testing for\n linkage disequilibrium, ...","Published":"2013-09-03","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"GeneticSubsetter","Version":"0.8","Title":"Identify Favorable Subsets of Germplasm Collections","Description":"Finds subsets of sets of genotypes with a high Heterozygosity, and Mean of Transformed Kinships (MTK), measures that can indicate a subset would be beneficial for rare-trait discovery and genome-wide association scanning, respectively.","Published":"2016-10-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GeNetIt","Version":"0.1-0","Title":"Spatial Graph-Theoretic Genetic Gravity Modelling","Description":"Implementation of spatial graph-theoretic genetic gravity models.\n The model framework is applicable for other types of spatial flow questions.\n Includes functions for constructing spatial graphs, sampling and summarizing\n associated raster variables and building unconstrained and singly constrained\n gravity models.","Published":"2016-03-10","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"GenForImp","Version":"1.0","Title":"The Forward Imputation: A Sequential Distance-Based Approach for\nImputing Missing Data","Description":"Two methods based on the Forward Imputation approach are implemented for the imputation of quantitative missing data. 
One method alternates Nearest Neighbour Imputation and Principal Component Analysis (function 'ForImp.PCA'), the other uses Nearest Neighbour Imputation with the Mahalanobis distance (function 'ForImp.Mahala').","Published":"2015-02-27","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"genie","Version":"1.0.4","Title":"A New, Fast, and Outlier Resistant Hierarchical Clustering\nAlgorithm","Description":"A new hierarchical clustering linkage criterion:\n the Genie algorithm links two clusters in such a way that a chosen\n economic inequity measure (e.g., the Gini index) of the cluster\n sizes does not increase drastically above a given threshold. Benchmarks\n indicate a high practical usefulness of the introduced method:\n it most often outperforms the Ward or average linkage in terms of\n the clustering quality while retaining the single linkage speed,\n see (Gagolewski et al. 2016a ,\n 2016b )\n for more details.","Published":"2017-04-27","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"GenKern","Version":"1.2-60","Title":"Functions for generating and manipulating binned kernel density\nestimates","Description":"Computes generalised KDEs","Published":"2013-11-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"genlasso","Version":"1.3","Title":"Path algorithm for generalized lasso problems","Description":"This package computes the solution path for generalized lasso problems. Important use cases are the fused lasso over an arbitrary graph, and trend fitting of any given polynomial order. 
Specialized implementations for these two subproblems are given to improve stability and speed.","Published":"2014-09-15","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"GENLIB","Version":"1.0.4","Title":"Genealogical Data Analysis","Description":"Genealogical data analysis including descriptive statistics (e.g., kinship and inbreeding coefficients) and gene-dropping simulations.","Published":"2015-04-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"genMOSS","Version":"1.2","Title":"Functions for the Bayesian Analysis of GWAS Data","Description":"Implements the Mode Oriented Stochastic Search (MOSS) algorithm as well as a simple moving window approach to look for combinations of SNPs that are associated with a response.","Published":"2014-12-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"genMOSSplus","Version":"1.0","Title":"Application of MOSS algorithm to genome-wide association study\n(GWAS)","Description":"This is the genMOSS package with additional data-file preprocessing functions. Performs genome-wide analysis of dense SNP array data using the mode oriented stochastic search (MOSS) algorithm in a case-control design. The MOSS algorithm is a Bayesian variable selection procedure that is applicable to GWAS data. It identifies combinations of the best predictive SNPs associated with the response. It also performs a hierarchical log-linear model search to identify the most relevant associations among the resulting subsets of SNPs. This package also includes preprocessing of the data from Plink format to the format required by the MOSS algorithm.","Published":"2013-08-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"genomeplot","Version":"1.0","Title":"'Plot genome wide values for all chromosomes'","Description":"Plot values of markers (SNPs, expression, genes, RNA, ...) 
for all chromosomes.","Published":"2016-04-25","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"GenomicMating","Version":"1.2","Title":"Efficient Breeding by Genomic Mating","Description":"Implements the genomic mating approach in the recently published article: Akdemir, D., & Sanchez, J. I. (2016). Efficient Breeding by Genomic Mating. Frontiers in Genetics, 7.","Published":"2017-05-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"genomicper","Version":"1.6","Title":"Circular Genomic Permutation using Gwas p-Values of Association","Description":"The circular genomic permutation approach uses GWAS results to establish the significance of pathway/gene-set associations whilst accounting for genomic structure. All SNPs in the GWAS are placed in a 'circular genome' according to their location. Then the complete set of SNP association p-values is permuted by rotation with respect to the SNPs' genomic locations. Two testing frameworks are available: permutations at the gene level, and permutations at the SNP level. The permutation at the gene level uses Fisher's combination test to calculate a single gene p-value, followed by the hypergeometric test. The SNP count methodology maps each SNP to pathways/gene-sets and calculates the proportion of SNPs for the real and the permuted datasets above a pre-defined threshold. Genomicper requires a matrix of GWAS association p-values. 
The SNP and pathway annotations can be generated within the package or provided by the user.","Published":"2016-07-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"GenomicTools","Version":"0.2.4","Title":"Collection of Tools for Genomic Data Analysis","Description":"A loose collection of tools for the analysis of expression and genotype data, currently with the main focus on (e)QTL and MDR analysis.","Published":"2017-04-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"genoPlotR","Version":"0.8.6","Title":"Plot Publication-Grade Gene and Genome Maps","Description":"Draws gene or genome maps and comparisons between these, in a \n publication-grade manner. Starting from simple, common files, it will \n draw postscript or PDF files that can be sent as such to journals.","Published":"2017-05-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GenOrd","Version":"1.4.0","Title":"Simulation of Discrete Random Variables with Given Correlation\nMatrix and Marginal Distributions","Description":"A Gaussian copula based procedure for generating samples from discrete random variables with prescribed correlation matrix and marginal distributions.","Published":"2015-09-12","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"genpathmox","Version":"0.3","Title":"Generalized Pathmox Approach Segmentation Tree Analysis","Description":"Provides a solution for\n handling segmentation variables in complex statistical methodology. It\n contains an extended version of the \"Pathmox\" algorithm (Lamberti, Sanchez\n and Aluja, 2016) in the context of\n Partial Least Squares Path Modeling including the F-block\n test (to detect the latent endogenous equations responsible for the\n difference), the F-coefficient (to detect the path coefficients responsible\n for the difference) and the \"invariance\" test (to compare\n the sub-models' latent variables). 
Furthermore, the package\n contains a generalized version of the \"Pathmox\" algorithm to approach\n different methodologies: linear regression and least absolute regression\n models.","Published":"2017-05-31","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"genridge","Version":"0.6-5","Title":"Generalized Ridge Trace Plots for Ridge Regression","Description":"\n\tThe genridge package introduces generalizations of the standard univariate\n\tridge trace plot used in ridge regression and related methods. These graphical methods\n\tshow both bias (actually, shrinkage) and precision, by plotting the covariance ellipsoids of the estimated\n\tcoefficients, rather than just the estimates themselves. 2D and 3D plotting methods are provided,\n\tboth in the space of the predictor variables and in the transformed space of the PCA/SVD of the\n\tpredictors. ","Published":"2014-11-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GenSA","Version":"1.1.6","Title":"R Functions for Generalized Simulated Annealing","Description":"Performs search for global minimum of a very complex non-linear objective function with a very large number of optima.","Published":"2016-02-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"gensemble","Version":"1.0","Title":"generalized ensemble methods","Description":"Generalized ensemble methods allowing arbitrary underlying\n models to be used. 
Currently only bagging is supported.","Published":"2013-02-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"gensphere","Version":"1.0","Title":"Generalized Spherical Distributions","Description":"Define and compute with generalized spherical distributions - multivariate probability\n laws that are specified by a star shaped contour (directional behavior) and a radial component.","Published":"2016-05-18","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"genSurv","Version":"1.0.3","Title":"Generating Multi-State Survival Data","Description":"Generation of survival data with one (binary)\n time-dependent covariate. Generation of survival data arising\n from a progressive illness-death model.","Published":"2015-11-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GenWin","Version":"0.1","Title":"Spline Based Window Boundaries for Genomic Analyses","Description":"Defines window or bin boundaries for the analysis of genomic data.\n Boundaries are based on the inflection points of a cubic smoothing spline\n fitted to the raw data. Along with defining boundaries, a technique to\n evaluate results obtained from unequally-sized windows is provided.\n Applications are particularly pertinent for, though not limited to, genome\n scans for selection based on variability between populations (e.g. 
using\n Wright's fixation index, Fst, which measures variability in subpopulations\n relative to the total population).","Published":"2014-09-26","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"geo","Version":"1.4-3","Title":"Draw and Annotate Maps, Especially Charts of the North Atlantic","Description":"Used by Hafro staff to draw maps showing the distribution of\n fishing intensity and catches, and of survey data for Icelandic fish stocks.\n Potentially useful for others.","Published":"2015-07-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"geoaxe","Version":"0.1.0","Title":"Split 'Geospatial' Objects into Pieces","Description":"Split 'geospatial' objects into pieces. Includes\n support for some spatial object inputs, 'Well-Known Text', and\n 'GeoJSON'.","Published":"2016-02-19","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"geoBayes","Version":"0.3.3","Title":"Analysis of Geostatistical Data using Bayes and Empirical Bayes\nMethods","Description":"Functions to fit geostatistical data. The data can be\n continuous, binary or count data and the models implemented are\n flexible. Conjugate priors are assumed on some parameters while\n inference on the other parameters can be done through a full\n Bayesian analysis or by empirical Bayes methods.","Published":"2015-11-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GeoBoxplot","Version":"1.0","Title":"Geographic Box Plot","Description":"Make geographic box plots as detailed in Willmott et al. 
(2007).","Published":"2015-10-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"geoCount","Version":"1.150120","Title":"Analysis and Modeling for Geostatistical Count Data","Description":"This package provides a variety of functions to analyze and model geostatistical count data with generalized linear spatial models, including\n 1) simulating and visualizing the data;\n 2) posterior sampling with robust MCMC algorithms (in serial or in parallel);\n 3) prediction for unsampled locations;\n 4) Bayesian model checking to evaluate goodness of fit;\n 5) transformed residual checking.\n In the package, seamlessly embedded C++ programs and parallel computing techniques are implemented to speed up the computing processes.","Published":"2015-01-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GeoDE","Version":"1.0","Title":"A geometrical Approach to Differential expression and gene-set\nenrichment","Description":"Given expression data, this package calculates a multivariate\n geometrical characterization of the differential expression and can also\n perform gene-set enrichment.","Published":"2014-07-31","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"geoelectrics","Version":"0.1.5","Title":"3D-Visualization of Geoelectric Resistivity Measurement Profiles","Description":"Visualizes two-dimensional geoelectric resistivity measurement profiles in three dimensions.","Published":"2015-12-30","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"geofacet","Version":"0.1.4","Title":"'ggplot2' Faceting Utilities for Geographical Data","Description":"Provides geofaceting functionality for 'ggplot2'. 
Geofaceting arranges a sequence of plots of data for different geographical entities into a grid that preserves some of the geographical orientation.","Published":"2017-06-20","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"geofd","Version":"1.0","Title":"Spatial Prediction for Function Value Data","Description":"Kriging based methods are used for predicting functional data \n (curves) with spatial dependence.","Published":"2015-10-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"geoGAM","Version":"0.1-1","Title":"Select Sparse Geoadditive Models for Spatial Prediction","Description":"A model building procedure to select a sparse geoadditive model from a large number of covariates. Continuous, binary and ordered categorical responses are supported. The model building is based on component wise gradient boosting with linear effects and smoothing splines. The resulting covariate set after gradient boosting is further reduced through cross validated backward selection and aggregation of factor levels. The package provides a model based bootstrap method to simulate prediction intervals for point predictions. A test data set of a soil mapping case study is provided. 
","Published":"2016-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GeoGenetix","Version":"0.0.2","Title":"Quantification of the effect of geographic versus environmental\nisolation on genetic differentiation","Description":"Quantification of the effect of geographic versus environmental isolation on genetic differentiation","Published":"2014-07-21","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"geohash","Version":"0.1.2","Title":"Tools for Geohash Creation and Manipulation","Description":"Provides tools to encode lat/long pairs into geohashes, decode those geohashes,\n and identify their neighbours.","Published":"2016-06-15","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"geojson","Version":"0.1.2","Title":"Classes for 'GeoJSON'","Description":"Classes for 'GeoJSON' to make working with 'GeoJSON' easier.\n Includes S3 classes for 'GeoJSON' classes with brief summary output,\n and a few methods such as extracting and adding bounding boxes, \n properties, and coordinate reference systems; linting through \n the geojsonlint package; and serializing to/from Geobuf binary 'GeoJSON'\n format.","Published":"2017-02-28","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"geojsonio","Version":"0.3.2","Title":"Convert Data from and to 'GeoJSON' or 'TopoJSON'","Description":"Convert data to 'GeoJSON' or 'TopoJSON' from various R classes,\n including vectors, lists, data frames, shape files, and spatial classes.\n geojsonio does not aim to replace packages like 'sp', 'rgdal', 'rgeos', \n but rather aims to be a high level client to simplify conversions of data \n from and to 'GeoJSON' and 'TopoJSON'.","Published":"2017-02-06","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"geojsonlint","Version":"0.2.0","Title":"Tools for Validating 'GeoJSON'","Description":"Tools for linting 'GeoJSON'. 
Includes tools for interacting with the\n online tool , the 'Javascript' library 'geojsonhint'\n (), and validating against a\n 'GeoJSON' schema via the 'Javascript' library\n (). Some tools work locally\n while others require an internet connection.","Published":"2016-11-03","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"geojsonR","Version":"1.0.1","Title":"A GeoJson Processing Toolkit","Description":"Includes functions for processing GeoJson objects relying on 'RFC 7946' . The geojson encoding is based on 'json11', a tiny JSON library for 'C++11' . Furthermore, the source code is exported in R through the 'Rcpp' and 'RcppArmadillo' packages.","Published":"2017-03-30","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"geoknife","Version":"1.5.5","Title":"Web-Processing of Large Gridded Datasets","Description":"Processes gridded datasets found on the U.S. Geological Survey\n Geo Data Portal web application or elsewhere, using a web-enabled workflow\n that eliminates the need to download and store large datasets that are reliably\n hosted on the Internet. 
The package provides access to several data subset and\n summarization algorithms that are available on remote web processing servers.","Published":"2017-06-06","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"GeoLight","Version":"2.0.0","Title":"Analysis of Light Based Geolocator Data","Description":"Provides basic functions for global\n positioning based on light intensity measurements over time.\n Positioning process includes the determination of sun events, a\n discrimination of residency and movement periods, the\n calibration of period-specific data and, finally, the\n calculation of positions.","Published":"2015-09-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GEOmap","Version":"2.4-0","Title":"Topographic and Geologic Mapping","Description":"Set of routines for making Map Projections (forward and inverse), Topographic Maps, Perspective plots, Geological Maps, geological map symbols, geological databases, interactive plotting and selection of focus regions.","Published":"2017-03-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"geomapdata","Version":"1.0-4","Title":"Data for topographic and Geologic Mapping","Description":"Set of data for use in package GEOmap. Includes world\n map, USA map, Coso map, Japan Map, ETOPO5","Published":"2012-10-29","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"GeomComb","Version":"1.0","Title":"(Geometric) Forecast Combination Methods","Description":"Provides eigenvector-based (geometric) forecast\n combination methods; also includes simple approaches (simple average,\n median, trimmed and winsorized mean, inverse rank method) and regression-based\n combination. 
Tools for data pre-processing are available in order to deal with \n common problems in forecast combination (missingness, collinearity).","Published":"2016-11-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"geomedb","Version":"0.1","Title":"Fetch 'GeOMe-db' FIMS Data","Description":"The Genomic Observatory Metadatabase (GeOMe Database) is an open access repository for\n geographic and ecological metadata associated with sequenced samples. This package is used to retrieve\n GeOMe data for analysis. See for more information regarding GeOMe.","Published":"2017-05-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"geometa","Version":"0.1-0","Title":"Tools for Reading and Writing ISO/OGC Geographic Metadata","Description":"Provides facilities to handle reading and writing of geographic metadata \n defined with OGC/ISO 19115 and 19139 (XML) standards.","Published":"2017-05-25","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"geometry","Version":"0.3-6","Title":"Mesh Generation and Surface Tesselation","Description":"Makes the qhull library (www.qhull.org)\n available in R, in a similar manner as in Octave and MATLAB. Qhull\n computes convex hulls, Delaunay triangulations, halfspace\n intersections about a point, Voronoi diagrams, furthest-site\n Delaunay triangulations, and furthest-site Voronoi diagrams. It\n runs in 2-d, 3-d, 4-d, and higher dimensions. It implements the\n Quickhull algorithm for computing the convex hull. Qhull does not\n support constrained Delaunay triangulations, or mesh generation of\n non-convex objects, but the package does include some R functions\n that allow for this. 
Currently the package only gives access to\n Delaunay triangulation and convex hull computation.","Published":"2015-09-09","License":"GPL (>= 3) + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"geomnet","Version":"0.2.0","Title":"Network Visualization in the 'ggplot2' Framework","Description":"Network visualization in the 'ggplot2' framework. Network\n functionality is provided in a single 'ggplot2' layer by calling the geom 'net'.\n Layouts are calculated using the 'sna' package, example networks are included.","Published":"2016-12-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"geomorph","Version":"3.0.4","Title":"Geometric Morphometric Analyses of 2D/3D Landmark Data","Description":"Read, manipulate, and digitize landmark data, generate shape\n variables via Procrustes analysis for points, curves and surfaces, perform\n shape analyses, and provide graphical depictions of shapes and patterns of\n shape variation.","Published":"2017-06-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"geonames","Version":"0.998","Title":"Interface to www.geonames.org web service","Description":"Code for querying the web service at www.geonames.org","Published":"2014-12-19","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"geoparser","Version":"0.1.1","Title":"Interface to the Geoparser.io API for Identifying and\nDisambiguating Places Mentioned in Text","Description":"A wrapper for the Geoparser.io API version 0.4.0 (see ), which is a web service\n that identifies places mentioned in text, disambiguates those places, and\n returns detailed data about the places found in the text. 
Basic, limited\n API access is free with paid plans to accommodate larger workloads.","Published":"2017-05-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"geophys","Version":"1.3-9","Title":"Geophysics, Continuum Mechanics, Mogi Models, Gravity","Description":"Codes for analyzing various problems of Geophysics, Continuum Mechanics and Mogi Models.","Published":"2017-03-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"geoR","Version":"1.7-5.2","Title":"Analysis of Geostatistical Data","Description":"Geostatistical analysis including traditional, likelihood-based and Bayesian methods.","Published":"2016-05-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GeoRange","Version":"0.1.0","Title":"Calculating Geographic Range from Occurrence Data","Description":"Calculates and analyzes six measures of geographic range from a set of longitudinal and latitudinal occurrence data. Measures included are minimum convex hull area, minimum spanning tree distance, longitudinal range, latitudinal range, maximum pairwise great circle distance, and number of X by X degree cells occupied.","Published":"2017-06-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"geoRglm","Version":"0.9-8","Title":"A Package for Generalised Linear Spatial Models","Description":"Functions for inference in generalised linear spatial models. The posterior and predictive inference is based on Markov chain Monte Carlo methods. 
Package geoRglm is an extension to the package geoR, which must be installed first.","Published":"2015-04-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"georob","Version":"0.3-4","Title":"Robust Geostatistical Analysis of Spatial Data","Description":"Provides functions for efficiently fitting linear models with \n spatially correlated errors by robust and Gaussian (Restricted) \n Maximum Likelihood and for computing robust and customary point \n and block external-drift Kriging predictions, along with utility\n functions for variogram modelling in ad hoc geostatistical analyses, \n model building, model evaluation by cross-validation and for \n unbiased back-transformation of Kriging predictions of \n log-transformed data.","Published":"2017-05-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"geosapi","Version":"0.1-0","Title":"GeoServer REST API R Interface","Description":"Provides an R interface to the GeoServer REST API, allowing users to upload \n and publish data in a GeoServer web-application and expose data to OGC Web-Services. \n The package currently supports all CRUD (Create,Read,Update,Delete) operations\n on GeoServer workspaces, namespaces, datastores (stores of vector data), featuretypes,\n layers, styles, as well as vector data upload operations. 
For more information about \n the GeoServer REST API, see .","Published":"2017-02-22","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"geoscale","Version":"2.0","Title":"Geological Time Scale Plotting","Description":"Function for adding the geological timescale to bivariate plots.","Published":"2015-05-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"geospacom","Version":"0.5-8","Title":"Facilitate Generating of Distance Matrices Used in Package\n'spacom' and Plotting Data on Maps","Description":"Generates distance matrices from shape files and represents spatially weighted multilevel analysis results (see 'spacom')","Published":"2015-06-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"geoSpectral","Version":"0.17.3","Title":"Classes and Methods for Working with Spectral Data with\nSpace-Time Attributes","Description":"Provides S4 classes and data import, preprocessing, graphing, \n manipulation and export methods for geo-Spectral datasets (datasets with space/time/spectral \n dimensions). These types of data are frequently collected within earth observation projects \n (remote sensing, spectroscopy, bio-optical oceanography, mining, agricultural, atmospheric, \n environmental or similar branches of science).","Published":"2017-04-14","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"geosphere","Version":"1.5-5","Title":"Spherical Trigonometry","Description":"Spherical trigonometry for geographic applications. That is, compute distances and related measures for angular (longitude/latitude) locations. 
","Published":"2016-06-15","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"geospt","Version":"1.0-2","Title":"Geostatistical Analysis and Design of Optimal Spatial Sampling\nNetworks","Description":"Estimation of the variogram through trimmed mean, radial basis \n functions (optimization, prediction and cross-validation), summary\n statistics from cross-validation, pocket plot, and design of\n optimal sampling networks through sequential and simultaneous\n points methods.","Published":"2015-08-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"geosptdb","Version":"0.5-0","Title":"Spatio-Temporal; Inverse Distance Weighting and Radial Basis\nFunctions with Distance-Based Regression","Description":"Spatio-temporal: Inverse Distance Weighting (IDW) and radial basis functions; optimization, prediction, summary statistics from leave-one-out cross-validation, adjusting distance-based linear regression model and generation of the principal coordinates of a new individual from Gower's distance.","Published":"2015-06-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"geostatsp","Version":"1.4.4","Title":"Geostatistical Modelling with Likelihood and Bayes","Description":"Geostatistical modelling facilities using Raster and SpatialPoints objects are provided. Non-Gaussian models are fit using INLA, and Gaussian geostatistical models use Maximum Likelihood Estimation.","Published":"2016-07-07","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"geotech","Version":"1.0","Title":"Geotechnical Engineering","Description":"A compilation of functions for performing calculations and\n creating plots that commonly arise in geotechnical engineering and soil\n mechanics. 
The types of calculations that are currently included are:\n (1) phase diagrams and index parameters, (2) grain-size distributions, \n (3) plasticity, (4) soil classification, (5) compaction, (6) groundwater,\n (7) subsurface stresses (geostatic and induced), (8) Mohr circle analyses,\n (9) consolidation settlement and rate, (10) shear strength, (11) bearing\n capacity, (12) lateral earth pressures, (13) slope stability, and (14)\n subsurface explorations. Geotechnical engineering students, educators,\n researchers, and practitioners will find this package useful.","Published":"2016-02-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"geotools","Version":"0.1","Title":"Geo tools","Description":"Tools","Published":"2012-01-28","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"geotopbricks","Version":"1.4","Title":"An R Plug-in for the Distributed Hydrological Model GEOtop","Description":"It analyzes raster maps and other information as input/output\n files from the Distributed Hydrological Model GEOtop. It contains functions\n and methods to import maps and other keywords from the geotop.inpts file. Some\n examples with simulation cases of GEOtop 2.0 are presented in the package.\n Any information about the GEOtop Distributed Hydrological Model source code\n is available on https://github.com/geotopmodel or\n https://github.com/se27xx/GEOtop. Technical details about the model are\n available in Endrizzi et al., 2014\n (http://www.geosci-model-dev.net/7/2831/2014/gmd-7-2831-2014.html).","Published":"2016-07-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GeoXp","Version":"1.6.2","Title":"Interactive exploratory spatial data analysis","Description":"GeoXp is a tool for researchers in spatial statistics,\n spatial econometrics, geography, ecology, etc., allowing them to dynamically link\n statistical plots with elementary maps. 
This\n coupling means that selecting a zone on\n the map automatically highlights the\n corresponding points on the statistical graph and, conversely,\n selecting a portion of the graph automatically highlights\n the corresponding points on the map. GeoXp\n includes tools from different areas of spatial statistics\n including geostatistics as well as spatial econometrics and\n point processes. Besides elementary plots like boxplots,\n histograms or simple scatterplots, GeoXp also couples maps with\n Moran scatterplots, variogram clouds, Lorenz curves, etc. In order\n to make the most of the multidimensionality of the data, GeoXp\n includes some dimension reduction techniques such as PCA.","Published":"2013-08-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"geozoo","Version":"0.5.1","Title":"Zoo of Geometric Objects","Description":"Geometric objects defined in 'geozoo' can be simulated or displayed in the R package 'tourr'.","Published":"2016-05-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"gepaf","Version":"0.1.0","Title":"Google Encoded Polyline Algorithm Format","Description":"Encode and decode the Google Encoded Polyline Algorithm Format ().","Published":"2016-05-04","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"GERGM","Version":"0.11.2","Title":"Estimation and Fit Diagnostics for Generalized Exponential\nRandom Graph Models","Description":"Estimation and diagnosis of the convergence of Generalized\n Exponential Random Graph Models via Gibbs sampling or Metropolis\n Hastings with exponential down weighting.","Published":"2017-03-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"GerminaR","Version":"1.1","Title":"Germination Indexes for Seed Germination Variables for\nEcophysiological Studies","Description":"Different types of seed indexes, rates and visualization techniques\n are used to provide a robust approach for germination data analysis. 
The package\n aims to make available seed germination indexes and graphical functions to\n analyze seed germination data.","Published":"2017-03-09","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"gesca","Version":"1.0.3","Title":"Generalized Structured Component Analysis (GSCA)","Description":"Fit a variety of component-based structural equation models.","Published":"2016-07-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GESE","Version":"2.0.1","Title":"Gene-Based Segregation Test","Description":"Implements the gene-based segregation test (GESE) and the weighted GESE test for identifying genes with causal variants of large effects for family-based sequencing data. The methods are described in Qiao, D., Lange, C., Laird, N.M., Won, S., Hersh, C.P., et al. (2017). Gene-based segregation method for identifying rare variants for family-based sequencing studies. Genet Epidemiol 41(4):309-319. More details can be found at .","Published":"2017-05-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"gesis","Version":"0.2.1","Title":"R Client for GESIS Data Catalogue (DBK)","Description":"Provides programmatic access to the GESIS - Leibniz-Institute for\n the Social Sciences Data Catalogue/Datenbestandkatalog (DBK), which\n maintains a large repository of data sets related to the social sciences.\n See for more information.","Published":"2017-01-30","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"GESTr","Version":"0.1","Title":"Gene Expression State Transformation","Description":"The Gene Expression State Transformation (GESTr) models\n the states of expression of genes across a compendium of\n samples in order to provide a universal scale of gene\n expression for all genes. 
TranSAM is a modification of the SAM\n approach designed to utilise GESTr-transformed gene expression\n data.","Published":"2013-03-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"getCRUCLdata","Version":"0.1.6","Title":"Use and Explore CRU CL v. 2.0 Climatology Elements in R","Description":"Provides functions that automate downloading and importing\n University of East Anglia Climate Research Unit (CRU) CL v. 2.0 climatology\n data into R, facilitates the calculation of minimum temperature and maximum\n temperature and formats the data into a tidy data frame as a tibble or a \n list of raster stack objects for use in an R session. CRU CL v. 2.0 data \n are a gridded climatology of 1961-1990 monthly means released in 2002 and\n cover all land areas (excluding Antarctica) at 10 arcminutes\n (0.1666667 degree) resolution. For more information see the description of\n the data provided by the University of East Anglia Climate Research Unit,\n .","Published":"2017-06-13","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"GetHFData","Version":"1.3","Title":"Download and Aggregate High Frequency Trading Data from Bovespa","Description":"Downloads and aggregates high frequency trading data for Brazilian instruments directly from Bovespa ftp site .","Published":"2017-06-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"getlandsat","Version":"0.1.0","Title":"Get Landsat 8 Data from Amazon Public Data Sets","Description":"Get Landsat 8 Data from Amazon Web Services ('AWS')\n public data sets ().\n Includes functions for listing images and fetching them, and handles\n caching to prevent unnecessary additional requests.","Published":"2016-08-16","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"getMet","Version":"0.3.2","Title":"Get Meteorological Data for Hydrologic Models","Description":"Hydrologic models often require users to collect and format input meteorological data. 
This package contains functions for sourcing, formatting, and\n editing meteorological data for hydrologic models.","Published":"2016-03-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"getmstatistic","Version":"0.1.1","Title":"Quantifying Systematic Heterogeneity in Meta-Analysis","Description":"Quantifying systematic heterogeneity in meta-analysis using R.\n The M statistic aggregates heterogeneity information across multiple\n variants to identify systematic heterogeneity patterns and their direction\n of effect in meta-analysis. Its primary use is to identify outlier studies,\n which either show \"null\" effects or consistently show stronger or weaker\n genetic effects than average across the panel of variants examined in a\n GWAS meta-analysis. In contrast to conventional heterogeneity metrics\n (Q-statistic, I-squared and tau-squared) which measure random heterogeneity\n at individual variants, M measures systematic (non-random)\n heterogeneity across multiple independently associated variants. Systematic\n heterogeneity can arise in a meta-analysis due to differences in the study\n characteristics of participating studies. Some of the differences may\n include: ancestry, allele frequencies, phenotype definition, age-of-disease\n onset, family-history, gender, linkage disequilibrium and quality control\n thresholds. 
See for statistical\n theory, documentation and examples.","Published":"2017-06-08","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"getopt","Version":"1.20.0","Title":"C-like getopt behavior","Description":"Package designed to be used with Rscript to write\n ``#!'' shebang scripts that accept short and long flags/options.\n Many users will instead prefer the packages optparse or argparse,\n which add extra features like automatically generated help option and usage,\n support for default values, positional argument support, etc.","Published":"2013-08-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GetoptLong","Version":"0.1.6","Title":"Parsing Command-Line Arguments and Variable Interpolation","Description":"This is yet another command-line argument parser which wraps the \n powerful Perl module Getopt::Long, with some adaptation for easier use\n\tin R. It also provides a simple way for variable interpolation in R.","Published":"2017-03-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"getPass","Version":"0.1-1","Title":"Masked User Input","Description":"A micro-package for reading \"passwords\", i.e. reading\n user input with masking, so that the input is not displayed as it \n is typed. 
Currently we have support for 'RStudio', the command line\n (every OS), and any platform where 'tcltk' is present.","Published":"2016-04-26","License":"BSD 2-clause License + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"GetR","Version":"0.1","Title":"GetR: Calculate Guttman error trees in R","Description":"The GetR package calculates Guttman error trees, which can\n be used to find homogeneous subgroups regarding Guttman errors.","Published":"2013-02-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"gets","Version":"0.12","Title":"General-to-Specific (GETS) Modelling and Indicator Saturation\nMethods","Description":"Automated General-to-Specific (GETS) modelling of the mean and variance of a regression, and indicator saturation methods for detecting and testing for structural breaks in the mean.","Published":"2017-02-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GetTDData","Version":"1.2.5","Title":"Get Data for Brazilian Bonds (Tesouro Direto)","Description":"Downloads and aggregates data for Brazilian government issued bonds directly from the website of Tesouro Direto .","Published":"2016-11-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"gettingtothebottom","Version":"3.2","Title":"Learning Optimization and Machine Learning for Statistics","Description":"Getting to the Bottom accompanies the \"Getting to\n the Bottom\" optimization methods series at Statisticsviews.com. It\n contains data and code to reproduce the examples in the articles.","Published":"2014-12-05","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"gettz","Version":"0.0.3","Title":"Get the Timezone Information","Description":"A function to retrieve the system timezone on Unix systems\n which has been found to find an answer when 'Sys.timezone()' has failed.\n It is based on an answer by Duane McCully posted on 'StackOverflow', and\n adapted to be callable from R. 
The package also builds on Windows, but\n just returns NULL.","Published":"2016-11-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GEVcdn","Version":"1.1.5","Title":"GEV Conditional Density Estimation Network","Description":"Implements a flexible nonlinear modelling framework for nonstationary\n generalized extreme value analysis in hydroclimatology.","Published":"2017-03-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"GEVStableGarch","Version":"1.1","Title":"ARMA-GARCH/APARCH Models with GEV and Stable Distributions","Description":"Package for simulation and estimation of ARMA-GARCH/APARCH models with GEV and stable distributions.","Published":"2015-08-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GExMap","Version":"1.1.3","Title":"A visual, intuitive, easy to use software giving access to a new\ntype of information buried into your microarray data","Description":"Performs statistical tests to unveil genomic clusters,\n produces graphical interpretations of the statistical results\n in pdf files, performs a Gene Ontology analysis and produces\n graphic results in pdf files.","Published":"2012-07-23","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"GFA","Version":"1.0.1","Title":"Group Factor Analysis","Description":"Factor analysis implementation for multiple data sources, i.e., for groups of variables. The whole data analysis pipeline is provided, including functions and recommendations for data normalization and model definition, as well as missing value prediction and model visualization. The model group factor analysis (GFA) is inferred with Gibbs sampling, and it has been presented originally by Virtanen et al. (2012), and extended in Klami et al. (2015) and Bunte et al. (2016); for details, see the citation info.","Published":"2017-03-17","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"gfcanalysis","Version":"1.4","Title":"Tools for Working with Hansen et al. 
Global Forest Change\nDataset","Description":"The gfcanalysis package supports analyses using the Global\n Forest Change dataset released by Hansen et al. gfcanalysis was\n written for the Tropical Ecology Assessment and Monitoring (TEAM) Network\n (http://www.teamnetwork.org). For additional details on the Global Forest\n Change dataset, see: Hansen, M. et al. 2013. \"High-Resolution Global Maps\n of 21st-Century Forest Cover Change.\" Science 342 (15 November): 850-53.\n The forest change data and more information on the product are available at\n http://earthenginepartners.appspot.com.","Published":"2015-11-21","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"GFD","Version":"0.2.2","Title":"Tests for General Factorial Designs","Description":"Implemented are the Wald-type statistic,\n a permuted version thereof as well as the ANOVA-type statistic\n for general factorial designs, even with non-normal error terms\n and/or heteroscedastic variances, for crossed designs with an\n arbitrary number of factors and nested designs with up to three factors.","Published":"2016-04-28","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"gfer","Version":"0.1.6","Title":"Green Finance and Environmental Risk","Description":"Focuses on data collection and analysis in green finance and environmental \n risk research. Main functions include environmental data collection from \n official websites like MEP (Ministry of Environmental Protection of China), water-related \n project identification and environmental data visualization.","Published":"2017-03-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"gfmR","Version":"1.0-1","Title":"Implements Group Fused Multinomial Regression","Description":"Software to implement methodology to perform automatic response category\n combinations in multinomial logistic regression. There are functions for both cross validation\n and AIC for model selection. 
The method provides regression coefficient estimates\n that may be useful for better understanding the true probability distribution of\n multinomial logistic regression when category probabilities are similar. These methods are not\n recommended for a large number of predictor variables. ","Published":"2017-05-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"GGally","Version":"1.3.1","Title":"Extension to 'ggplot2'","Description":"\n The R package 'ggplot2' is a plotting system based on the grammar of graphics.\n 'GGally' extends 'ggplot2' by adding several functions\n to reduce the complexity of combining geometric objects with transformed data.\n Some of these functions include a pairwise plot matrix, a two group pairwise plot\n matrix, a parallel coordinates plot, a survival plot, and several functions to\n plot networks.","Published":"2017-06-08","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"ggalt","Version":"0.4.0","Title":"Extra Coordinate Systems, 'Geoms', Statistical Transformations,\nScales and Fonts for 'ggplot2'","Description":"A compendium of new geometries, coordinate systems, statistical \n transformations, scales and fonts for 'ggplot2', including splines, 1d and 2d densities, \n univariate average shifted histograms, a new map coordinate system based on the \n 'PROJ.4'-library along with geom_cartogram() that mimics the original functionality of \n geom_map(), formatters for \"bytes\", a stat_stepribbon() function, increased 'plotly'\n compatibility and the 'StateFace' open source font 'ProPublica'. 
Further new \n functionality includes lollipop charts, dumbbell charts, the ability to encircle\n points and coordinate-system-based text annotations.","Published":"2017-02-15","License":"AGPL + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ggbeeswarm","Version":"0.5.3","Title":"Categorical Scatter (Violin Point) Plots","Description":"Provides two methods of plotting categorical scatter plots such\n that the arrangement of points within a category reflects the density of\n data in that region, and avoids over-plotting.","Published":"2016-12-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ggCompNet","Version":"0.1.0","Title":"Compare Timing of Network Visualizations","Description":"We provide two primary resources in 'ggCompNet'. The first is a function to compare the speed of network drawing using several different packages. The second is the vignette folder which contains two vignettes that provide code for reproducing examples comparing the three network visualization packages 'geomnet', 'ggnetwork', and the ggnet2() function from the 'GGally' package. ","Published":"2016-12-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ggcorrplot","Version":"0.1.1","Title":"Visualization of a Correlation Matrix using 'ggplot2'","Description":"The 'ggcorrplot' package can be used to easily visualize a\n correlation matrix using 'ggplot2'. It provides a solution for reordering the\n correlation matrix and displays the significance level on the plot. It also\n includes a function for computing a matrix of correlation p-values.","Published":"2016-01-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ggdendro","Version":"0.1-20","Title":"Create Dendrograms and Tree Diagrams Using 'ggplot2'","Description":"This is a set of tools for dendrograms and\n tree plots using 'ggplot2'. 
The 'ggplot2' philosophy is to\n clearly separate data from the presentation.\n Unfortunately, the plot method for dendrograms plots\n directly to a plot device without exposing the data.\n The 'ggdendro' package resolves this by making available\n functions that extract the dendrogram plot data. The package\n provides implementations for tree, rpart, as well as diana and agnes\n cluster diagrams.","Published":"2016-04-27","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"ggdmc","Version":"0.1.3.9","Title":"Dynamic Model of Choice with Parallel Computation, and C++\nCapabilities","Description":"A fast engine for computing hierarchical Bayesian models\n implemented in the Dynamic Model of Choice.","Published":"2017-03-31","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"gge","Version":"1.2","Title":"Genotype Plus Genotype-by-Environment Biplots","Description":"Create biplots for GGE (genotype plus genotype-by-environment) and\n GGB (genotype plus genotype-by-block-of-environments) models.","Published":"2017-05-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"GGEBiplotGUI","Version":"1.0-9","Title":"Interactive GGE Biplots in R","Description":"A GUI with which to construct and interact with GGE biplots.","Published":"2016-02-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ggedit","Version":"0.2.1","Title":"Interactive 'ggplot2' Layer and Theme Aesthetic Editor","Description":"Interactively edit 'ggplot2' layer and theme aesthetics definitions.","Published":"2017-04-06","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"ggeffects","Version":"0.1.2","Title":"Create Tidy Data Frames of Marginal Effects for 'ggplot' from\nModel Outputs","Description":"Compute marginal effects at the mean or average marginal effects from \n statistical models and return the result as tidy data frames. 
These \n data frames are ready to use with the 'ggplot2'-package.\n Marginal effects can be calculated for many different models. Interaction\n terms, splines and polynomial terms are also supported. The two main\n functions are 'ggpredict()' and 'ggaverage()'; however, there are\n some convenient wrapper-functions especially for polynomials or\n interactions. There is a generic 'plot()'-method to plot the results\n using 'ggplot2'.","Published":"2017-06-19","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ggenealogy","Version":"0.3.0","Title":"Visualization Tools for Genealogical Data","Description":"Methods for searching through genealogical data and displaying the results. Plotting algorithms assist with data exploration and publication-quality image generation. Includes interactive genealogy visualization tools. Provides parsing and calculation methods for variables in descendant branches of interest. Uses the Grammar of Graphics.","Published":"2016-12-12","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"ggExtra","Version":"0.6","Title":"Add Marginal Histograms to 'ggplot2', and More 'ggplot2'\nEnhancements","Description":"Collection of functions and layers to enhance 'ggplot2'. The main\n function is ggMarginal(), which can be used to add marginal\n histograms/boxplots/density plots to 'ggplot2' scatterplots.","Published":"2016-11-12","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ggforce","Version":"0.1.1","Title":"Accelerating 'ggplot2'","Description":"The aim of 'ggplot2' is to aid in visual data investigations. This\n focus has led to a lack of facilities for composing specialised plots.\n 'ggforce' aims to be a collection of mainly new stats and geoms that fills\n this gap. 
All additional functionality is aimed to come through the official\n extension system so using 'ggforce' should be a stable experience.","Published":"2016-11-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ggformula","Version":"0.4.0","Title":"Formula Interface to the Grammar of Graphics","Description":"Provides a formula interface to 'ggplot2' graphics.","Published":"2017-06-21","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ggfortify","Version":"0.4.1","Title":"Data Visualization Tools for Statistical Analysis Results","Description":"Unified plotting tools for statistics commonly used, such as GLM,\n time series, PCA families, clustering and survival analysis. The package offers\n a single plotting interface for these analysis results and plots in a unified\n style using 'ggplot2'.","Published":"2017-02-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ggghost","Version":"0.2.1","Title":"Capture the Spirit of Your 'ggplot2' Calls","Description":"Creates a reproducible 'ggplot2' object by storing the data and calls. 
","Published":"2016-08-20","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"ggguitar","Version":"0.1.1","Title":"Utilities for Creating Guitar Tablature","Description":"Utilities for Creating Guitar Tablature using tidyverse packages.","Published":"2016-12-24","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"gghalfnorm","Version":"1.1.2","Title":"Create a Half Normal Plot Using 'ggplot2'","Description":"Reproduce the halfnorm() function found in the 'faraway' package \n using the 'ggplot2' API.","Published":"2017-06-06","License":"AGPL-3","snapshot_date":"2017-06-23"} {"Package":"ggimage","Version":"0.0.4","Title":"Use Image in 'ggplot2'","Description":"Supports image files and graphic objects to be visualized in\n 'ggplot2' graphic system.","Published":"2017-03-29","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"GGIR","Version":"1.5-9","Title":"Raw Accelerometer Data Analysis","Description":"A tool to process and analyse data collected with wearable raw acceleration sensors. The package has been developed and tested for binary data from GENEActiv and GENEA devices, .csv-export data from Actigraph devices, and .cwa and .wav-format data from Axivity. These devices are currently widely used in research on human daily physical activity.","Published":"2017-05-21","License":"LGPL (>= 2.0, < 3) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ggiraph","Version":"0.3.3","Title":"Make 'ggplot2' Graphics Interactive","Description":"Create interactive 'ggplot2' graphics using 'htmlwidgets'.","Published":"2017-03-24","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ggiraphExtra","Version":"0.1.0","Title":"Make Interactive 'ggplot2'. Extension to 'ggplot2' and 'ggiraph'","Description":"Collection of functions to enhance 'ggplot2' and 'ggiraph'. 
Provides functions for exploratory plots.\n All plots can be 'static' or 'interactive' plots using 'ggiraph'.","Published":"2016-12-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"gglasso","Version":"1.3","Title":"Group Lasso Penalized Learning Using A Unified BMD Algorithm","Description":"This package implements a unified algorithm, blockwise-majorization-descent (BMD), for efficiently computing the solution paths of the group-lasso penalized least squares, logistic regression, Huberized SVM and squared SVM.","Published":"2014-08-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"gglogo","Version":"0.1.3","Title":"Geom for Logo Sequence Plots","Description":"Visualize sequences in (modified) logo plots. The design choices\n used by these logo plots allow sequencing data to be more easily analyzed.\n Because it is integrated into the 'ggplot2' geom framework, these logo plots\n support native features such as faceting.","Published":"2017-02-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ggloop","Version":"0.1.0","Title":"Create 'ggplot2' Plots in a Loop","Description":"Pass a data frame and mapping aesthetics to ggloop() in order\n to create a list of 'ggplot2' plots. The way x-y and dots are paired together\n is controlled by the remapping arguments. 
Geoms, themes, facets, and other\n features can be added with the special %L+% (L-plus) operator.","Published":"2016-10-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ggm","Version":"2.3","Title":"Functions for graphical Markov models","Description":"Functions and datasets for maximum likelihood fitting of some classes of graphical Markov models.","Published":"2015-01-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ggmap","Version":"2.6.1","Title":"Spatial Visualization with ggplot2","Description":"A collection of functions to visualize spatial data and models\n on top of static maps from various online sources (e.g. Google Maps and Stamen\n Maps). It includes tools common to those tasks, including functions for\n geolocation and routing.","Published":"2016-01-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ggmcmc","Version":"1.1","Title":"Tools for Analyzing MCMC Simulations from Bayesian Inference","Description":"Tools for assessing and diagnosing convergence of\n Markov Chain Monte Carlo simulations, as well as for graphically displaying\n results from full MCMC analysis. The package also facilitates the graphical\n interpretation of models by providing flexible functions to plot the\n results against observed variables.","Published":"2016-06-28","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ggmosaic","Version":"0.1.2","Title":"Mosaic Plots in the 'ggplot2' Framework","Description":"Mosaic plots in the 'ggplot2' framework. Mosaic plot functionality\n is provided in a single 'ggplot2' layer by calling the geom 'mosaic'.","Published":"2017-02-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GGMridge","Version":"1.1","Title":"Gaussian Graphical Models Using Ridge Penalty Followed by\nThresholding and Reestimation","Description":"Estimation of partial correlation matrix using ridge penalty followed by thresholding and reestimation. 
Under a multivariate Gaussian assumption, the matrix constitutes a Gaussian graphical model (GGM).","Published":"2016-02-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"GGMselect","Version":"0.1-12","Title":"Gaussian Graphs Models Selection","Description":"Graph estimation in Gaussian Graphical Models. The main functions return the adjacency matrix of an undirected graph estimated from a data matrix. ","Published":"2017-04-19","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"ggnetwork","Version":"0.5.1","Title":"Geometries to Plot Networks with 'ggplot2'","Description":"Geometries to plot network objects with 'ggplot2'.","Published":"2016-03-25","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ggparallel","Version":"0.2.0","Title":"Variations of Parallel Coordinate Plots for Categorical Data","Description":"Create hammock plots, parallel sets, and common angle plots\n with 'ggplot2'.","Published":"2016-12-05","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ggplot2","Version":"2.2.1","Title":"Create Elegant Data Visualisations Using the Grammar of Graphics","Description":"A system for 'declaratively' creating graphics,\n based on \"The Grammar of Graphics\". You provide the data, tell 'ggplot2'\n how to map variables to aesthetics, what graphical primitives to use,\n and it takes care of the details.","Published":"2016-12-30","License":"GPL-2 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ggplot2movies","Version":"0.0.1","Title":"Movies Data","Description":"A dataset about movies. This was previously contained in ggplot2,\n but has been moved to its own package to reduce the download size of ggplot2.","Published":"2015-08-25","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ggpmisc","Version":"0.2.15","Title":"Miscellaneous Extensions to 'ggplot2'","Description":"Extensions to 'ggplot2' respecting the grammar of graphics\n paradigm. 
Provides new statistics to locate and tag peaks and valleys in 2D\n plots, a statistic to add a label with the equation of a polynomial fitted\n with lm(), or R^2 or adjusted R^2 or information criteria for any model\n fitted with function lm(). Additional statistics give access to functions\n in package 'broom'. Provides a function for flexibly converting time\n series to data frames suitable for plotting with ggplot(). In addition\n provides statistics and ggplot geometries useful for diagnosing what data\n are passed to compute_group() and compute_panel() functions and to\n geometries.","Published":"2017-05-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ggpolypath","Version":"0.1.0","Title":"Polygons with Holes for the Grammar of Graphics","Description":"Tools for working with polygons with holes in 'ggplot2', with a\n new 'geom' for drawing a 'polypath' applying the 'evenodd' or 'winding'\n rules.","Published":"2016-08-10","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ggpubr","Version":"0.1.3","Title":"'ggplot2' Based Publication Ready Plots","Description":"'ggplot2' is an excellent and flexible package for elegant data\n visualization in R. However, the default generated plots require some formatting\n before we can send them for publication. Furthermore, to customize a 'ggplot',\n the syntax is opaque and this raises the level of difficulty for researchers\n with no advanced R programming skills. 'ggpubr' provides some easy-to-use\n functions for creating and customizing 'ggplot2'-based publication ready plots.","Published":"2017-06-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ggpval","Version":"0.1.0","Title":"Annotate Statistical Tests for 'ggplot2'","Description":"Automatically perform desired statistical tests (e.g. wilcox.test(), t.test()) to compare between groups, and add test p-values to the plot with an annotation bar. 
\n Group differences are frequently visualized with boxplots, violin plots, etc. \n Statistical test results often need to be annotated on these plots. This package provides a convenient function that works on 'ggplot2' objects, \n performs the desired statistical test between the groups of interest and annotates the test results on the plot. ","Published":"2017-06-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ggQC","Version":"0.0.1","Title":"Quality Control Charts for the Grammar of Graphics Plotting\nSystem","Description":"Plot single and faceted type quality control charts\n within the grammar of graphics plotting framework. ","Published":"2017-03-29","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ggRandomForests","Version":"2.0.1","Title":"Visually Exploring Random Forests","Description":"Graphic elements for exploring Random Forests using the 'randomForest' or\n 'randomForestSRC' package for survival, regression and classification forests and\n 'ggplot2' package plotting.","Published":"2016-09-07","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"ggraph","Version":"1.0.0","Title":"An Implementation of Grammar of Graphics for Graphs and Networks","Description":"The grammar of graphics as implemented in ggplot2 is a poor fit for\n graph and network visualizations due to its reliance on tabular data input.\n ggraph is an extension of the ggplot2 API tailored to graph visualizations\n and provides the same flexible approach to building up plots layer by layer.","Published":"2017-02-24","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ggraptR","Version":"0.1","Title":"Allows Interactive Visualization of Data Through a Web Browser\nGUI","Description":"Intended for both technical and non-technical users to create\n interactive data visualizations through a web browser GUI without writing any\n code.","Published":"2016-03-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} 
{"Package":"ggrepel","Version":"0.6.5","Title":"Repulsive Text and Label Geoms for 'ggplot2'","Description":"\n Provides text and label geoms for 'ggplot2' that help to avoid overlapping\n text labels. Labels repel away from each other and away from the data\n points.","Published":"2016-11-24","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ggROC","Version":"1.0","Title":"package for roc curve plot with ggplot2","Description":"Plot ROC curves with 'ggplot2'.","Published":"2013-05-26","License":"GPL (> 2)","snapshot_date":"2017-06-23"} {"Package":"ggsci","Version":"2.7","Title":"Scientific Journal and Sci-Fi Themed Color Palettes for\n'ggplot2'","Description":"A collection of 'ggplot2' color palettes inspired by\n plots in scientific journals, data visualization libraries,\n science fiction movies, and TV shows.","Published":"2017-06-12","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ggseas","Version":"0.5.1","Title":"'stats' for Seasonal Adjustment on the Fly with 'ggplot2'","Description":"Provides 'ggplot2' 'stats' that estimate seasonally adjusted series \n and rolling summaries such as rolling average on the fly for time series.","Published":"2016-10-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ggseqlogo","Version":"0.0.1","Title":"A 'ggplot2' Extension for Drawing Publication-Ready Sequence\nLogos","Description":"The extensive range of functions provided by this package makes it possible to draw highly versatile sequence logos. Features include, but are not limited to, modifying colour schemes and fonts used to draw the logo, generating multiple logo plots, and aiding the visualisation with annotations. 
Sequence logos can easily be combined with other 'ggplot2' plots.","Published":"2017-06-13","License":"LGPL","snapshot_date":"2017-06-23"} {"Package":"ggsignif","Version":"0.2.0","Title":"Significance Bars for 'ggplot2'","Description":"Enrich your ggplots with group-wise comparisons.\n This package provides an easy way to indicate if two groups are significantly different.\n Commonly this is shown by a bar on top connecting the groups of interest which itself is annotated with the level of significance (NS, *, **, ***).\n The package provides a single layer (geom_signif()) that takes the groups for comparison and the test (t.test(), wilcox.test() etc.) as arguments and adds the annotation\n to the plot.","Published":"2017-04-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ggsn","Version":"0.4.0","Title":"North Symbols and Scale Bars for Maps Created with 'ggplot2' or\n'ggmap'","Description":"Adds north symbols (18 options) and scale bars in kilometers to\n maps in geographic or metric coordinates created with 'ggplot2' or 'ggmap'.","Published":"2017-03-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ggspatial","Version":"0.2.1","Title":"Spatial Data Framework for ggplot2","Description":"Spatial data plus the power of the ggplot2 framework means easier mapping when input \n data are already in the form of Spatial* objects.","Published":"2017-04-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ggspectra","Version":"0.2.1","Title":"Extensions to 'ggplot2' for Radiation Spectra","Description":"Additional annotations and stats for plotting \"light\"\n spectra with 'ggplot2', together with specializations of ggplot()\n and plot() methods for spectral data stored in objects of the classes\n defined in package 'photobiology' and a plot() method for objects of\n class \"waveband\", also defined in package 'photobiology'.","Published":"2017-03-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} 
{"Package":"ggstance","Version":"0.3","Title":"Horizontal 'ggplot2' Components","Description":"A 'ggplot2' extension that provides flipped components:\n horizontal versions of 'Stats' and 'Geoms', and vertical versions\n of 'Positions'.","Published":"2016-11-16","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ggswissmaps","Version":"0.1.1","Title":"Offers Various Swiss Maps as Data Frames and 'ggplot2' Objects","Description":"Offers various Swiss maps as data frames and 'ggplot2' objects and gives the\n possibility to add layers of data on the maps. Data are publicly available\n from the Swiss Federal Statistical Office.","Published":"2016-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ggtern","Version":"2.2.0","Title":"An Extension to 'ggplot2', for the Creation of Ternary Diagrams","Description":"Extends the functionality of 'ggplot2', providing the capability\n to plot ternary diagrams for (a subset of) the 'ggplot2' geometries. Additionally,\n 'ggtern' has implemented several NEW geometries which are unavailable to the\n standard 'ggplot2' release. 
For further examples and documentation, please\n proceed to the 'ggtern' website.","Published":"2016-11-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ggThemeAssist","Version":"0.1.5","Title":"Add-in to Customize 'ggplot2' Themes","Description":"Rstudio add-in that delivers a graphical interface for editing 'ggplot2' theme elements.","Published":"2016-08-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ggthemes","Version":"3.4.0","Title":"Extra Themes, Scales and Geoms for 'ggplot2'","Description":"Some extra themes, geoms, and scales for 'ggplot2'.\n Provides 'ggplot2' themes and scales that replicate the look of plots\n by Edward Tufte, Stephen Few, 'Fivethirtyeight', 'The Economist', 'Stata',\n 'Excel', and 'The Wall Street Journal', among others.\n Provides 'geoms' for Tufte's box plot and range frame.","Published":"2017-02-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ggvis","Version":"0.4.3","Title":"Interactive Grammar of Graphics","Description":"An implementation of an interactive grammar of graphics, taking the\n best parts of 'ggplot2', combining them with the reactive framework of\n 'shiny' and drawing web graphics using 'vega'.","Published":"2016-07-22","License":"GPL-2 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"GHap","Version":"1.2.2","Title":"Genome-Wide Haplotyping","Description":"Haplotype calling from phased SNP data.","Published":"2017-02-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ghit","Version":"0.2.17","Title":"Lightweight GitHub Package Installer","Description":"A lightweight, vectorized drop-in replacement for\n 'devtools::install_github()' that uses native git and R methods to clone and\n install a package from GitHub. 
From v0.2.15, also includes an analogue for \n 'install_bitbucket()'.","Published":"2017-02-28","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"GHQp","Version":"1.0","Title":"Gauss Hermite Quadrature with pruning","Description":"The GHQ function can be used to obtain the quadrature points and weights to approximate an integral in two or more dimensions. This function uses the pruning approach to eliminate the points that do not contribute to the approximation of the integral but increase computational cost. The advantage of conducting this elimination of points is the decrease in the number of times that the function of interest is evaluated. This advantage is crucial in mixed models in which we must address several integrations within an iterative process to obtain model parameters.","Published":"2014-04-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ghyp","Version":"1.5.7","Title":"A Package on Generalized Hyperbolic Distribution and Its Special\nCases","Description":"Detailed functionality for working\n with the univariate and multivariate Generalized Hyperbolic\n distribution and its special cases (Hyperbolic (hyp), Normal\n Inverse Gaussian (NIG), Variance Gamma (VG), skewed Student-t\n and Gaussian distribution). In particular, it contains fitting\n procedures, an AIC-based model selection routine, and functions\n for the computation of density, quantile, probability, random\n variates, expected shortfall and some portfolio optimization\n and plotting routines as well as the likelihood ratio test. 
In\n addition, it contains the Generalized Inverse Gaussian\n distribution.","Published":"2016-08-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GiANT","Version":"1.2","Title":"Gene Set Uncertainty in Enrichment Analysis","Description":"Toolbox for various enrichment analysis methods and quantification of uncertainty of gene sets.","Published":"2015-12-29","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"gibbs.met","Version":"1.1-3","Title":"Naive Gibbs Sampling with Metropolis Steps","Description":"This package provides two generic functions for performing\n Markov chain sampling in a naive way for a user-defined target\n distribution, which involves only continuous variables. The\n function \"gibbs_met\" performs Gibbs sampling with each\n 1-dimensional distribution sampled with Metropolis update using\n Gaussian proposal distribution centered at the previous state.\n The function \"met_gaussian\" updates the whole state with\n Metropolis method using independent Gaussian proposal\n distribution centered at the previous state. The sampling is\n carried out without considering any special tricks for\n improving efficiency. This package is aimed at only routine\n applications of MCMC in moderate-dimensional problems.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GibbsACOV","Version":"1.1","Title":"Gibbs Sampler for One-Way Mixed-Effects ANOVA and ANCOVA Models","Description":"Gibbs sampler for one-way linear mixed-effects models\n (ANOVA, ANCOVA) with homoscedasticity of errors and uniform\n priors.","Published":"2013-05-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gifti","Version":"0.7.1","Title":"Reads in Neuroimaging 'GIFTI' Files with Geometry Information","Description":"Functions to read in the geometry format under the \n Neuroimaging Informatics Technology Initiative ('NIfTI'), called \n 'GIFTI' . 
\n These files contain surfaces of brain imaging data.","Published":"2017-05-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"GIGrvg","Version":"0.5","Title":"Random Variate Generator for the GIG Distribution","Description":"\n Generator and density function for the\n Generalized Inverse Gaussian (GIG) distribution.","Published":"2017-06-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GillespieSSA","Version":"0.5-4","Title":"Gillespie's Stochastic Simulation Algorithm (SSA)","Description":"GillespieSSA provides a simple-to-use, intuitive, and\n extensible interface to several stochastic simulation\n algorithms for generating simulated trajectories of finite\n population continuous-time models. Currently it implements\n Gillespie's exact stochastic simulation algorithm (Direct\n method) and several approximate methods (Explicit tau-leap,\n Binomial tau-leap, and Optimized tau-leap). The package also\n contains a library of template models that can be run as demo\n models and can easily be customized and extended. 
Currently the\n following models are included: decaying-dimerization reaction\n set, linear chain system, logistic growth model, Lotka\n predator-prey model, Rosenzweig-MacArthur predator-prey model,\n Kermack-McKendrick SIR model, and a metapopulation SIRS model.","Published":"2012-01-15","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"gimme","Version":"0.2-2","Title":"Group Iterative Multiple Model Estimation","Description":"Automated identification and estimation of group- and\n individual-level relations in time series data from within a structural\n equation modeling framework.","Published":"2017-05-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"gimms","Version":"1.0.0","Title":"Download and Process GIMMS NDVI3g Data","Description":"We provide a set of functions to retrieve information about GIMMS NDVI3g files currently available online; download (and re-arrange, in the case of NDVI3g.v0) the half-monthly data sets; import downloaded files from ENVI binary (NDVI3g.v0) or NetCDF format (NDVI3g.v1) directly into R based on the widespread 'raster' package; conduct quality control; and generate monthly composites (e.g., maximum values) from the half-monthly input data. 
As a special gimmick, a method is included to conveniently apply the Mann-Kendall trend test upon 'Raster*' images, optionally featuring trend-free pre-whitening to account for lag-1 autocorrelation.","Published":"2016-12-16","License":"GPL (>= 3) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"GiNA","Version":"1.0.1","Title":"High Throughput Phenotyping","Description":"Performs image segmentation in fruit or seed pictures in order to measure physical features in a high-throughput manner for genome-wide association (GWAS) and genomic selection programs.","Published":"2016-04-10","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"GiniWegNeg","Version":"1.0.1","Title":"Computing the Gini-Based Coefficients for Weighted and Negative\nAttributes","Description":"Gini-based coefficients and plot of the ordinary and generalized curve of maximum inequality in the presence of weighted and negative attributes. ","Published":"2016-05-24","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"gIPFrm","Version":"2.0","Title":"Generalized Iterative Proportional Fitting for Relational Models","Description":"Maximum likelihood estimation under relational models, with or without the overall effect.","Published":"2014-03-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"giphyr","Version":"0.1.1","Title":"R Interface to the Giphy API","Description":"An interface to the 'API' of 'Giphy', a popular index-based search \n engine for 'GIFs' and animated stickers (see and \n for more information about 'Giphy' and \n its 'API') . This package also provides an 'RStudio Addin', which can help \n users easily search and download 'GIFs' and insert them into a 'rmarkdown' \n presentation. 
","Published":"2017-05-01","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"GiRaF","Version":"1.0","Title":"Gibbs Random Fields Analysis","Description":"Allows calculation on, and\n sampling from, Gibbs Random Fields, and more precisely the general \n homogeneous Potts model. The primary tool is the exact computation of\n the intractable normalising constant for small rectangular lattices. \n Besides the latter function, it contains methods that give exact samples from the likelihood\n for small enough rectangular lattices or approximate samples from the \n likelihood using MCMC samplers for large lattices.","Published":"2016-02-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"giRaph","Version":"0.1.2","Title":"The giRaph package for graph representation in R","Description":"Supplies classes and methods to represent and manipulate\n graphs.","Published":"2013-12-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GISTools","Version":"0.7-4","Title":"Some further GIS capabilities for R","Description":"Some mapping and spatial data manipulation tools - in particular\n drawing choropleth maps with nice-looking legends, and aggregation of point\n data to polygons.","Published":"2014-10-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gistr","Version":"0.4.0","Title":"Work with 'GitHub' 'Gists'","Description":"Work with 'GitHub' 'gists' from 'R' (e.g., \n , \n ). A 'gist'\n is simply one or more files with code/text/images/etc. This package allows\n the user to create new 'gists', update 'gists' with new files, rename files,\n delete files, get and delete 'gists', star and 'un-star' 'gists', fork 'gists',\n open a 'gist' in your default browser, get embed code for a 'gist', list\n 'gist' 'commits', and get rate limit information when 'authenticated'. Some\n requests require authentication and some do not. 
'Gists' website: \n .","Published":"2017-04-20","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"git2r","Version":"0.18.0","Title":"Provides Access to Git Repositories","Description":"Interface to the 'libgit2' library, which is a pure C\n implementation of the 'Git' core methods. Provides access to 'Git'\n repositories to extract data and run some basic 'Git'\n commands.","Published":"2017-01-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"gitgadget","Version":"0.2.1","Title":"Rstudio Addin for Version Control and Assignment Management\nusing Git","Description":"An Rstudio addin for version control that allows users to clone\n repositories, create and delete branches, and sync forks on GitHub, GitLab, etc.\n Furthermore, the addin uses the GitLab API to allow instructors to create\n forks and merge requests for all students/teams with one click of a button.","Published":"2016-10-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"githubinstall","Version":"0.2.1","Title":"A Helpful Way to Install R Packages Hosted on GitHub","Description":"Provides a helpful way to install packages hosted on GitHub.","Published":"2016-11-21","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"gitlabr","Version":"0.9","Title":"Access to the Gitlab API","Description":"Provides R functions to access the API of the project and\n repository management web application gitlab. For many common tasks (repository\n file access, issue assignment and status, commenting) convenience wrappers\n are provided, and in addition the full API can be used by specifying request\n locations. 
Gitlab is open-source software and can be self-hosted or used on\n gitlab.com.","Published":"2017-04-25","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"gitter","Version":"1.1.1","Title":"Quantification of Pinned Microbial Cultures","Description":"The goal of this package is to allow users to robustly\n and quickly grid and quantify sizes of pinned colonies in plate images.\n gitter works by first finding the grid of colonies from a preprocessed image and then locating the bounds of each colony separately.\n It includes several image pre-processing techniques, such\n as autorotation of plates, noise removal, contrast adjustment and image\n resizing.","Published":"2015-10-11","License":"LGPL","snapshot_date":"2017-06-23"} {"Package":"givitiR","Version":"1.3","Title":"The GiViTI Calibration Test and Belt","Description":"Functions to assess the calibration of logistic regression models with the GiViTI (Gruppo Italiano per la Valutazione degli interventi in Terapia Intensiva, Italian Group for the Evaluation of the Interventions in Intensive Care Units - see ) approach. The approach consists of a graphical tool, namely the GiViTI calibration belt, and the associated statistical test. These tools can be used both to evaluate the internal calibration (i.e. the goodness of fit) and to assess the validity of an externally developed model.","Published":"2017-01-24","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"Giza","Version":"1.0","Title":"Constructing panels of population pyramid plots based on lattice","Description":"`Giza' offers a simple way of creating multiple pyramid\n plots in one graphics window, exploiting the power of the\n lattice package. 
It is a handy way of visualizing longitudinal\n grouped (i.e.: age- and education-structured) data.","Published":"2012-09-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"gjam","Version":"2.1.4","Title":"Generalized Joint Attribute Modeling","Description":"Analyzes joint attribute data (e.g., species abundance) that are combinations of continuous and discrete data with Gibbs sampling.","Published":"2017-05-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gk","Version":"0.5.0","Title":"g-and-k and g-and-h Distribution Functions","Description":"Functions for the g-and-k and generalised g-and-h distributions.","Published":"2017-06-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"GK2011","Version":"0.1.3","Title":"Gaines and Kuklinski (2011) Estimators for Hybrid Experiments","Description":"Implementations of the treatment effect estimators for hybrid (self-selection) experiments, as developed by Brian J. Gaines and James H. Kuklinski, (2011), \"Experimental Estimation of Heterogeneous Treatment Effects Related to Self-Selection,\" American Journal of Political Science 55(3): 724-736.","Published":"2016-05-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gkmSVM","Version":"0.71.0","Title":"Gapped-Kmer Support Vector Machine","Description":"Imports the 'gkmSVM' v2.0 functionalities into R (www.beerlab.org/gkmsvm). It also uses the 'kernlab' library (separate R package by different authors) for various SVM algorithms. ","Published":"2016-07-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"glamlasso","Version":"2.0.1","Title":"Penalization in Large Scale Generalized Linear Array Models","Description":"Efficient design matrix free procedure for penalized estimation\n in large scale 2 and 3-dimensional generalized linear array models. 
Currently\n either Lasso or SCAD penalized estimation is possible for the following models:\n The Gaussian model with identity link, the Binomial model with logit link, the\n Poisson model with log link and the Gamma model with log link.","Published":"2016-08-19","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"glarma","Version":"1.5-0","Title":"Generalized Linear Autoregressive Moving Average Models","Description":"Functions are provided for estimation, testing, diagnostic checking and forecasting of generalized linear autoregressive moving average (GLARMA) models for discrete valued time series with regression variables. These are a class of observation driven non-linear non-Gaussian state space models. The state vector consists of a linear regression component plus an observation driven component consisting of an autoregressive-moving average (ARMA) filter of past predictive residuals. Currently three distributions (Poisson, negative binomial and binomial) can be used for the response series. Three options (Pearson, score-type and unscaled) for the residuals in the observation driven component are available. Estimation is via maximum likelihood (conditional on initializing values for the ARMA process) optimized using Fisher scoring or Newton-Raphson iterative methods. Likelihood ratio and Wald tests for the observation driven component allow testing for serial dependence in generalized linear model settings. Graphical diagnostics including model fits, autocorrelation functions and probability integral transform residuals are included in the package. 
Several standard data sets are included in the package.","Published":"2017-01-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"glasso","Version":"1.8","Title":"Graphical lasso- estimation of Gaussian graphical models","Description":"Graphical lasso","Published":"2014-07-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"glba","Version":"0.2","Title":"General Linear Ballistic Accumulator Models","Description":"Analyses response times and accuracies from psychological experiments with the linear ballistic accumulator (LBA) model from Brown and Heathcote (2008). The LBA model is optionally fitted with explanatory variables on the parameters such as the drift rate, the boundary and the starting point parameters. A log-link function on the linear predictors can be used to ensure that parameters remain positive when needed. ","Published":"2015-02-16","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"glcm","Version":"1.6.1","Title":"Calculate Textures from Grey-Level Co-Occurrence Matrices\n(GLCMs)","Description":"Enables calculation of image textures derived from grey-level\n co-occurrence matrices (GLCMs). Supports processing images that cannot\n fit in memory.","Published":"2016-03-09","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"gld","Version":"2.4.1","Title":"Estimation and Use of the Generalised (Tukey) Lambda\nDistribution","Description":"The generalised lambda distribution, or Tukey lambda distribution, provides a wide variety of shapes with one functional form. \n This package provides random numbers, quantiles, probabilities, densities and density quantiles for four different parameterisations of the distribution. \n It provides the density function, distribution function, and Quantile-Quantile plots. \n It implements a variety of estimation methods for the distribution, including diagnostic plots. 
\n Estimation methods include the starship (all 4 parameterisations) and a number of methods for only the FKML parameterisation. \n These include maximum likelihood, maximum product of spacings, Titterington's method, Moments, L-Moments, Trimmed L-Moments and Distributional Least Absolutes. ","Published":"2016-12-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GLDEX","Version":"2.0.0.5","Title":"Fitting Single and Mixture of Generalised Lambda Distributions\n(RS and FMKL) using Various Methods","Description":"The fitting algorithms considered in this package have two major objectives. One is to provide a smoothing device to fit distributions to data using the weighted and unweighted discretised approach based on the bin width of the histogram. The other is to provide a definitive fit to the data set using the maximum likelihood and quantile matching estimation. Other methods such as moment matching, starship method, L moment matching are also provided. Diagnostics on goodness of fit can be done via qqplots, KS-resample tests and comparing mean, variance, skewness and kurtosis of the data with the fitted distribution.","Published":"2016-12-26","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"GLDreg","Version":"1.0.7","Title":"Fit GLD Regression Model and GLD Quantile Regression Model to\nEmpirical Data","Description":"Owing to the rich shapes of Generalised Lambda Distributions (GLDs), GLD standard/quantile regression is a competitive flexible model compared to standard/quantile regression. The proposed method has some major advantages: 1) it provides a reference line which is very robust to outliers with the attractive property of zero mean residuals and 2) it gives a unified, elegant quantile regression model from the reference line with smooth regression coefficients across different quantiles. 
The goodness of fit of the proposed model can be assessed via QQ plots, Kolmogorov-Smirnov tests and the data-driven smooth test, to ensure the appropriateness of the statistical inference under consideration. Statistical distributions of coefficients of the GLD regression line are obtained using simulation, and interval estimates are obtained directly from simulated data. ","Published":"2017-02-28","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"GLIDE","Version":"1.0.1","Title":"Global and Individual Tests for Direct Effects","Description":"Functions evaluate global and individual tests for direct effects in Mendelian randomization studies.","Published":"2017-04-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"glinternet","Version":"1.0.4","Title":"Learning Interactions via Hierarchical Group-Lasso\nRegularization","Description":"Group-Lasso INTERaction-NET. Fits linear pairwise-interaction models that satisfy strong hierarchy: if an interaction coefficient is estimated to be nonzero, then its two associated main effects also have nonzero estimated coefficients. Accommodates categorical variables (factors) with arbitrary numbers of levels, continuous variables, and combinations thereof. Implements the machinery described in the paper \"Learning interactions via hierarchical group-lasso regularization\" (JCGS 2015, Volume 24, Issue 3). Michael Lim & Trevor Hastie (2015) .","Published":"2017-06-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"gllm","Version":"0.35","Title":"Generalised log-linear model","Description":"Routines for log-linear models of incomplete contingency tables,\n including some latent class models, via EM and Fisher scoring\n approaches. 
Allows bootstrapping.","Published":"2013-10-31","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"glm.ddR","Version":"0.1.1","Title":"Distributed 'glm' for Big Data using 'ddR' API","Description":"Distributed training and prediction of generalized linear models using 'ddR' (Distributed Data Structures) API in the 'ddR' package.","Published":"2017-02-28","License":"GPL (>= 2) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"glm.predict","Version":"2.5-1","Title":"Predicted Values and Discrete Changes for GLM","Description":"Functions to calculate predicted values and the difference between\n the two cases with confidence interval for glm(), glm.nb(), polr() and multinom().","Published":"2017-05-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"glm2","Version":"1.1.2","Title":"Fitting Generalized Linear Models","Description":"Fits generalized linear models using the same model specification as glm in the stats package, but with a modified default fitting method that provides greater stability for models that may fail to converge using glm.","Published":"2014-05-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GLMaSPU","Version":"1.0","Title":"An Adaptive Test on High Dimensional Parameters in Generalized\nLinear Models","Description":"Several tests for high dimensional generalized linear models have been proposed recently. In this package, we implemented a new test called adaptive sum of powered score (aSPU) for high dimensional generalized linear models, which is often more powerful than the existing methods in a wide range of scenarios. We also implemented permutation-based versions of several existing methods for research purposes. We recommend users use the aSPU test for their real testing problems. You can learn more about the tests implemented in the package via the following papers: 1. Pan, W., Kim, J., Zhang, Y., Shen, X. and Wei, P. 
(2014) A powerful and adaptive association test for rare variants, Genetics, 197(4). 2. Guo, B., and Chen, S. X. (2016) . Tests for high dimensional generalized linear models. Journal of the Royal Statistical Society: Series B. 3. Goeman, J. J., Van Houwelingen, H. C., and Finos, L. (2011) . Testing against a high-dimensional alternative in the generalized linear model: asymptotic type I error control. Biometrika, 98(2).","Published":"2016-12-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"glmbb","Version":"0.3","Title":"All Hierarchical or Graphical Models for Generalized Linear\nModel","Description":"Finds all hierarchical models of a specified generalized linear\n model with information criterion (AIC, BIC, or AICc) within a specified\n cutoff of the minimum value. Alternatively, finds all such graphical models.\n Uses a branch and bound algorithm so that not all models have to be fitted.","Published":"2017-06-02","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"glmBfp","Version":"0.0-48","Title":"Bayesian Fractional Polynomials for GLMs","Description":"Implements the Bayesian paradigm\n for fractional polynomials in generalized linear\n models. See package 'bfp' for the treatment of normal\n models.","Published":"2016-07-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"glmc","Version":"0.2-4","Title":"Fitting Generalized Linear Models Subject to Constraints","Description":"Fits generalized linear models where the parameters are\n subject to linear constraints. 
The model is specified by giving\n a symbolic description of the linear predictor, a description\n of the error distribution, and a matrix of constraints on the\n parameters.","Published":"2012-12-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"glmdm","Version":"2.60","Title":"R Code for Simulation of GLMDM","Description":"This package contains functions to fit generalized\n linear mixed Dirichlet models using posterior simulation.","Published":"2013-04-18","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"glmertree","Version":"0.1-1","Title":"Generalized Linear Mixed Model Trees","Description":"Recursive partitioning based on (generalized) linear mixed models\n (GLMMs) combining lmer()/glmer() from lme4 and lmtree()/glmtree() from partykit.","Published":"2017-06-03","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"glmgraph","Version":"1.0.3","Title":"Graph-Constrained Regularization for Sparse Generalized Linear\nModels","Description":"We propose to use a sparse regression model to achieve variable selection while accounting for graph constraints among coefficients. A linear combination of a sparsity penalty (L1) and a smoothness penalty (MCP) is used, which induces both sparsity of the solution and a certain smoothness of the linear coefficients.","Published":"2015-07-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"glmlep","Version":"0.1","Title":"Fit GLM with LEP-based penalized maximum likelihood","Description":"Efficient algorithms for fitting regularization paths for\n linear or logistic regression models penalized by LEP.","Published":"2013-06-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"glmm","Version":"1.1.1","Title":"Generalized Linear Mixed Models via Monte Carlo Likelihood\nApproximation","Description":"Approximates the likelihood of a generalized linear mixed model using Monte Carlo likelihood approximation. 
Then maximizes the likelihood approximation to return maximum likelihood estimates, observed Fisher information, and other model information.","Published":"2016-08-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"glmmBUGS","Version":"2.4.0","Title":"Generalised Linear Mixed Models with BUGS and JAGS","Description":"Automates running Generalized Linear Mixed Models, including\n spatial models, with WinBUGS, OpenBUGS and JAGS. Models are specified with\n formulas, with the package writing model files, arranging unbalanced data\n in ragged arrays, and creating starting values. The model is re-parameterized,\n and functions are provided for converting model outputs to the original\n parameterization.","Published":"2016-09-22","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"glmmLasso","Version":"1.5.1","Title":"Variable Selection for Generalized Linear Mixed Models by\nL1-Penalized Estimation","Description":"A variable selection approach for generalized linear mixed models by L1-penalized estimation is provided.","Published":"2017-05-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"glmmML","Version":"1.0.2","Title":"Generalized Linear Models with Clustering","Description":"Binomial and Poisson regression for clustered data, fixed\n and random effects with bootstrapping.","Published":"2017-05-03","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"GLMMRR","Version":"0.2.0","Title":"Generalized Linear Mixed Model (GLMM) for Binary Randomized\nResponse Data","Description":"Generalized Linear Mixed Model (GLMM) for Binary Randomized Response Data.\n Includes Cauchit, Compl. 
Log-Log, Logistic, and Probit link functions for Bernoulli Distributed RR data.\n RR Designs: Warner, Forced Response, Unrelated Question, Kuk, Crosswise, and Triangular.","Published":"2016-08-09","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"glmmsr","Version":"0.1.1","Title":"Fit a Generalized Linear Mixed Model","Description":"Conduct inference about generalized linear mixed models, with a\n choice about which method to use to approximate the likelihood. In addition\n to the Laplace and adaptive Gaussian quadrature approximations, which are\n borrowed from 'lme4', the likelihood may be approximated by the sequential\n reduction approximation, or an importance sampling approximation. These\n methods provide an accurate approximation to the likelihood in some\n situations where it is not possible to use adaptive Gaussian quadrature.","Published":"2016-03-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"glmmTMB","Version":"0.1.1","Title":"Generalized Linear Mixed Models using Template Model Builder","Description":"Fit linear and generalized linear mixed models with various\n extensions, including zero-inflation. The models are fitted using maximum\n likelihood estimation via 'TMB' (Template Model Builder). Random effects are\n assumed to be Gaussian on the scale of the linear predictor and are integrated\n out using the Laplace approximation. Gradients are calculated using automatic\n differentiation.","Published":"2017-02-20","License":"AGPL-3","snapshot_date":"2017-06-23"} {"Package":"glmnet","Version":"2.0-10","Title":"Lasso and Elastic-Net Regularized Generalized Linear Models","Description":"Extremely efficient procedures for fitting the entire lasso or elastic-net regularization path for linear regression, logistic and multinomial regression models, Poisson regression and the Cox model. Two recent additions are the multiple-response Gaussian, and the grouped multinomial regression. 
The algorithm uses cyclical coordinate descent in a path-wise fashion, as described in the paper linked to via the URL below.","Published":"2017-05-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"glmnetcr","Version":"1.0.2","Title":"Fit a penalized constrained continuation ratio model for\npredicting an ordinal response","Description":"This package includes functions for restructuring an ordinal response dataset for fitting continuation ratio models for datasets \n where the number of covariates exceeds the sample size or when there is collinearity among the covariates. This package uses the glmnet package \n fitting algorithm. ","Published":"2014-01-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"glmnetUtils","Version":"1.0.2","Title":"Utilities for 'Glmnet'","Description":"Provides a formula interface for the 'glmnet' package for\n elasticnet regression, a method for cross-validating the alpha parameter,\n and other quality-of-life tools.","Published":"2017-04-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"glmpath","Version":"0.97","Title":"L1 Regularization Path for Generalized Linear Models and Cox\nProportional Hazards Model","Description":"A path-following algorithm for L1 regularized generalized\n linear models and Cox proportional hazards model.","Published":"2013-02-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"glmpathcr","Version":"1.0.3","Title":"Fit a penalized continuation ratio model for predicting an\nordinal response","Description":"Provides a function for fitting a penalized constrained continuation ratio model using the glmpath algorithm and methods for extracting coefficient estimates, predicted class, class probabilities, and plots.","Published":"2014-01-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"glmulti","Version":"1.0.7","Title":"Model selection and multimodel inference made easy","Description":"Automated model selection and model-averaging. 
Provides a\n wrapper for glm and other functions, automatically generating\n all possible models (under constraints set by the user) with\n the specified response and explanatory variables, and finding\n the best models in terms of some Information Criterion (AIC,\n AICc or BIC). Can handle very large numbers of candidate\n models. Features a Genetic Algorithm to find the best models\n when an exhaustive screening of the candidates is not feasible.","Published":"2013-04-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"glmvsd","Version":"1.4","Title":"Variable Selection Deviation Measures and Instability Tests for\nHigh-Dimensional Generalized Linear Models","Description":"Variable selection deviation (VSD) measures and instability tests for high-dimensional model selection methods such as LASSO, SCAD and MCP, etc., to decide whether the sparse patterns identified by those methods are reliable. ","Published":"2016-01-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"glmx","Version":"0.1-1","Title":"Generalized Linear Models Extended","Description":"Extended techniques for generalized linear models (GLMs), especially for binary responses,\n including parametric links and heteroskedastic latent variables.","Published":"2015-11-19","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"globalboosttest","Version":"1.1-0","Title":"Testing the additional predictive value of high-dimensional data","Description":"'globalboosttest' implements a permutation-based testing\n procedure to globally test the (additional) predictive value of\n a large set of predictors given that a small set of predictors\n is already available. Currently, 'globalboosttest' supports\n binary outcomes (via logistic regression) and survival outcomes\n (via Cox regression). 
It is based on boosting regression as\n implemented in the package 'mboost'.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GlobalDeviance","Version":"0.4","Title":"Global Deviance Permutation Tests","Description":"Permutation-based global test with deviance as test statistic.","Published":"2013-10-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GlobalFit","Version":"1.2","Title":"Bi-Level Optimization of Metabolic Network Models","Description":"Initial metabolic networks often inaccurately predict in-silico growth or non-growth if compared to in-vivo data. This package refines metabolic network models by making network changes (i.e., removing, adding, changing reversibility of reactions; adding and removing biomass metabolites) and simultaneously matching sets of experimental growth and non-growth data (e.g., KO-mutants, mutants grown under different media conditions,...)","Published":"2016-08-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"globalGSA","Version":"1.0","Title":"Global Gene-Set Analysis for Association Studies","Description":"Implementation of three different gene set analysis (GSA) algorithms for combining the individual p-values of a set of genetic variants (SNPs) into a gene-level p-value. 
The implementation includes the selection of the best inheritance model for each SNP.","Published":"2013-10-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GlobalOptions","Version":"0.0.12","Title":"Generate Functions to Get or Set Global Options","Description":"It provides more control over option values, such as validation\n and filtering of values, and making options invisible or private.","Published":"2017-05-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"globalOptTests","Version":"1.1","Title":"Objective functions for benchmarking the performance of global\noptimization algorithms","Description":"This package makes available 50 objective functions for benchmarking the performance of global optimization algorithms.","Published":"2014-09-09","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"globals","Version":"0.10.0","Title":"Identify Global Objects in R Expressions","Description":"Identifies global (\"unknown\" or \"free\") objects in R expressions\n by code inspection using various strategies, e.g. conservative or liberal.\n The objective of this package is to make it as simple as possible to\n identify global objects for the purpose of exporting them in distributed\n compute environments.","Published":"2017-04-17","License":"LGPL (>= 2.1)","snapshot_date":"2017-06-23"} {"Package":"globe","Version":"1.2-0","Title":"Plot 2D and 3D Views of the Earth, Including Major Coastline","Description":"Basic functions for plotting 2D and 3D views of a sphere, by default the Earth with its major coastline, and additional lines and points. 
","Published":"2017-05-12","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"glogis","Version":"1.0-0","Title":"Fitting and Testing Generalized Logistic Distributions","Description":"Tools for the generalized logistic distribution (Type I,\n also known as skew-logistic distribution), encompassing\n\t basic distribution functions (p, q, d, r, score), maximum\n\t likelihood estimation, and structural change methods.","Published":"2014-11-11","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"glpkAPI","Version":"1.3.0","Title":"R Interface to C API of GLPK","Description":"R Interface to C API of GLPK, needs GLPK Version >= 4.42","Published":"2015-01-07","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"glrt","Version":"2.0","Title":"Generalized Logrank Tests for Interval-censored Failure Time\nData","Description":"Functions to conduct four generalized logrank tests and a score test under a proportional hazards model.","Published":"2015-01-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"gLRTH","Version":"0.1.0","Title":"Likelihood Ratio Test for Genome-Wide Association under Genetic\nHeterogeneity","Description":"Implements the likelihood ratio test for genome-wide association\n under genetic heterogeneity as described in Qian and Shao (2013). 
","Published":"2017-01-05","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"GLSME","Version":"1.0.3","Title":"Generalized Least Squares with Measurement Error","Description":"Performs linear regression with correlated predictors, responses and correlated measurement errors in predictors and responses, correcting for bias caused by these.","Published":"2015-07-20","License":"GPL (>= 2) | file LICENCE","snapshot_date":"2017-06-23"} {"Package":"glue","Version":"1.1.1","Title":"Interpreted String Literals","Description":"An implementation of interpreted string literals, inspired by\n Python's Literal String Interpolation and Docstrings\n and Julia's Triple-Quoted String Literals.","Published":"2017-06-21","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"glycanr","Version":"0.3.0","Title":"Tools for Analysing N-Glycan Data","Description":"Useful utilities in N-glycan data analysis. This package tries\n to fill the gap in N-glycan data analysis by providing easy\n to use functions for basic operations on data\n (see https://en.wikipedia.org/wiki/Glycomics for more\n details on Glycomics). At the moment glycanr is mostly oriented\n to data obtained by UPLC and LCMS analysis of Plasma and IgG glycome.","Published":"2016-04-07","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"GMAC","Version":"1.0","Title":"Genomic Mediation Analysis with Adaptive Confounding Adjustment","Description":"Performs genomic mediation\n analysis with adaptive confounding adjustment (GMAC) proposed by Yang et al. (2017). It implements large scale\n mediation analysis and adaptively selects potential confounding variables to\n adjust for each mediation test from a pool of candidate confounders. 
The package\n is tailored for but not limited to genomic mediation analysis (e.g., cis-gene\n mediating trans-gene regulation pattern where an eQTL, its cis-linking gene\n transcript, and its trans-gene transcript play the roles of treatment, mediator,\n and outcome, respectively), restricting to scenarios with the presence of\n cis-association (i.e., treatment-mediator association) and random eQTL (i.e.,\n treatment).","Published":"2017-04-09","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"gmailr","Version":"0.7.1","Title":"Access the Gmail RESTful API","Description":"An interface to the Gmail RESTful API. Allows access to your\n Gmail messages, threads, drafts and labels.","Published":"2016-04-12","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"gmapsdistance","Version":"3.1","Title":"Distance and Travel Time Between Two Points from Google Maps","Description":"Get distance and travel time between two points from Google Maps.\n Four possible modes of transportation (bicycling, walking, driving and\n public transportation).","Published":"2016-08-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gmatrix","Version":"0.3","Title":"GPU Computing in R","Description":"A general framework for utilizing R to harness the power of NVIDIA GPUs. The \"gmatrix\" and \"gvector\" classes allow for easy management of the separate device and host memory spaces. Numerous numerical operations are implemented for these objects on the GPU. 
These operations include matrix multiplication, addition, subtraction, the Kronecker product, the outer product, comparison operators, logical operators, trigonometric functions, indexing, sorting, random number generation and many more.","Published":"2015-12-01","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"GMCM","Version":"1.2.4","Title":"Fast Estimation of Gaussian Mixture Copula Models","Description":"Unsupervised Clustering and Meta-analysis using Gaussian Mixture\n Copula Models.","Published":"2017-01-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gMCP","Version":"0.8-10","Title":"Graph Based Multiple Comparison Procedures","Description":"Functions and a graphical user interface for graphically described multiple test procedures.","Published":"2015-08-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GMD","Version":"0.3.3","Title":"Generalized Minimum Distance of distributions","Description":"GMD is a package for non-parametric distance measurement between\n two discrete frequency distributions.","Published":"2016-12-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gmDatabase","Version":"0.5.0","Title":"Accessing a Geometallurgical Database with R","Description":"A template for a geometallurgical database and a fast and easy\n interface for accessing it is provided in this package.","Published":"2016-06-16","License":"GPL (>= 2) | LGPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GMDH","Version":"1.6","Title":"Short Term Forecasting via GMDH-Type Neural Network Algorithms","Description":"The group method of data handling (GMDH)-type neural network algorithm is a heuristic self-organization method for modelling complex systems. In this package, GMDH-type neural network algorithms are applied to make short-term forecasts for a univariate time series. 
","Published":"2016-09-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Gmedian","Version":"1.2.3","Title":"Geometric Median, k-Median Clustering and Robust Median PCA","Description":"Fast algorithms for robust estimation with large samples of multivariate observations. Estimation of the geometric median, robust k-Gmedian clustering, and robust PCA based on the Gmedian covariation matrix.","Published":"2016-09-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gmediation","Version":"0.1.0","Title":"Mediation Analysis for Multiple and Multi-Stage Mediators","Description":"Current version of this R package conducts mediation path analysis for multiple mediators in two stages.","Published":"2017-05-11","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"gmeta","Version":"2.2-6","Title":"Meta-Analysis via a Unified Framework of Confidence Distribution","Description":"An implementation of an all-in-one function for a wide range of meta-analysis problems. It contains a single function gmeta() that unifies all standard meta-analysis methods and also several newly developed ones under a framework of combining confidence distributions (CDs). Specifically, the package can perform classical p-value combination methods (such as methods of Fisher, Stouffer, Tippett, etc.), fit meta-analysis fixed-effect and random-effects models, and synthesize 2x2 tables. Furthermore, it can perform robust meta-analysis, which provides protection against model misspecification, and limits the impact of any unknown outlying studies. In addition, the package implements two exact meta-analysis methods from synthesizing 2x2 tables with rare events (e.g., zero total event). 
A plot function to visualize individual and combined CDs through extended forest plots is also available.","Published":"2016-02-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Gmisc","Version":"1.4.1","Title":"Descriptive Statistics, Transition Plots, and More","Description":"Tools for making the descriptive \"Table 1\" used in medical\n articles, a transition plot for showing changes between categories, a method for\n variable selection based on the SVD, Bézier lines with arrows complementing the\n ones in the 'grid' package, and more.","Published":"2016-12-27","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"gmm","Version":"1.6-1","Title":"Generalized Method of Moments and Generalized Empirical\nLikelihood","Description":"It is a complete suite to estimate models based on moment\n conditions. It includes the two-step generalized method of\n moments (Hansen 1982), the iterated GMM and continuously\n updated estimator (Hansen, Eaton and Yaron 1996) and several\n methods that belong to the Generalized Empirical Likelihood\n family of estimators (Smith 1997,\n Kitamura 1997, Newey and Smith 2004,\n\tand Anatolyev 2005).","Published":"2017-06-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GMMBoost","Version":"1.1.2","Title":"Likelihood-based Boosting for Generalized mixed models","Description":"Likelihood-based boosting for generalized mixed models.","Published":"2013-11-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"gmnl","Version":"1.1-1","Title":"Multinomial Logit Models with Random Parameters","Description":"An implementation of maximum simulated likelihood method for the\n estimation of multinomial logit models with random coefficients.\n Specifically, it allows estimating models with continuous heterogeneity\n such as the mixed multinomial logit and the generalized multinomial logit.\n It also allows estimating models with discrete heterogeneity such as the\n latent class and the 
mixed-mixed multinomial logit model.","Published":"2015-11-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gmodels","Version":"2.16.2","Title":"Various R Programming Tools for Model Fitting","Description":"Various R programming tools for model fitting.","Published":"2015-07-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"gMOIP","Version":"1.1.0","Title":"'2D plots of linear or integer programming models'","Description":"Make 2D plots of the polytope of an LP or IP problem, including\n integer points and the iso-profit curve. Can also make a plot of a bi-objective\n criterion space.","Published":"2017-02-20","License":"GPL (>= 3.3.2)","snapshot_date":"2017-06-23"} {"Package":"gmp","Version":"0.5-13.1","Title":"Multiple Precision Arithmetic","Description":"Multiple Precision Arithmetic (big integers and rationals,\n prime number tests, matrix computation), \"arithmetic without limitations\"\n using the C library GMP (GNU Multiple Precision Arithmetic).","Published":"2017-03-10","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"gmt","Version":"2.0-0","Title":"Interface Between GMT Map-Making Software and R","Description":"Interface between the GMT map-making software and R, enabling the\n user to manipulate geographic data within R and call GMT commands to draw and\n annotate maps in postscript format. 
The gmt package is about interactive data\n analysis, rapidly visualizing subsets and summaries of geographic data, while\n performing statistical analysis in the R console.","Published":"2017-06-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gmum.r","Version":"0.2.1","Title":"GMUM Machine Learning Group Package","Description":"Direct R interface to Support Vector Machine libraries ('LIBSVM' and 'SVMLight') and efficient C++ implementations of Growing Neural Gas and models developed by 'GMUM' group (Cross Entropy Clustering and 2eSVM).","Published":"2015-10-04","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"gMWT","Version":"1.1","Title":"Generalized Mann-Whitney Type Tests","Description":"Generalized Mann-Whitney type tests based on probabilistic\n indices and new diagnostic plots.","Published":"2016-08-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GNE","Version":"0.99-1","Title":"Computation of Generalized Nash Equilibria","Description":"Provides functions to compute standard and generalized Nash equilibria. Available optimization methods include nonsmooth reformulation, fixed-point formulation, minimization problem and constrained-equation reformulation. ","Published":"2015-07-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gnlm","Version":"1.1.0","Title":"Generalized Nonlinear Regression Models","Description":"A variety of functions to fit linear and nonlinear\n regression models with a large selection of distributions.","Published":"2017-02-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gnm","Version":"1.0-8","Title":"Generalized Nonlinear Models","Description":"Functions to specify and fit generalized nonlinear models, including models with multiplicative interaction terms such as the UNIDIFF model from sociology and the AMMI model from crop science, and many others. 
Over-parameterized representations of models are used throughout; functions are provided for inference on estimable parameter combinations, as well as standard methods for diagnostics etc.","Published":"2015-04-22","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"gnmf","Version":"0.7.1","Title":"Generalized Non-negative Matrix Factorization Based on Renyi\nDivergence","Description":"This package performs generalized non-negative matrix factorization based on Renyi divergence.","Published":"2016-07-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"gnumeric","Version":"0.7-8","Title":"Read Data from Files Readable by 'gnumeric'","Description":"Read data files readable by 'gnumeric' into 'R'. Can read a\n whole sheet or a range, from several file formats, including\n the native format of 'gnumeric'. Reading is done by using\n 'ssconvert' (a file converter utility included in the 'gnumeric'\n distribution) to convert\n the requested part to CSV. From 'gnumeric' files (but not other\n formats) it can list sheet names and sheet sizes or read all\n sheets.","Published":"2017-03-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"goalprog","Version":"1.0-2","Title":"Weighted and lexicographical goal programming and optimization","Description":"A collection of functions to solve weighted and lexicographical\n goal programming problems as specified by Lee (1972) and Ignizio (1976).","Published":"2008-09-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"goeveg","Version":"0.3.3","Title":"Functions for Community Data and Ordinations","Description":"A collection of functions useful in (vegetation) community analyses and ordinations, mainly to facilitate plotting and interpretation. 
Includes automatic species selection for ordination diagrams, species response curves and rank-abundance curves.","Published":"2017-01-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gof","Version":"0.9.1","Title":"Model-diagnostics based on cumulative residuals","Description":"Implementation of model-checking techniques for generalized linear\n models and linear structural equation models based on cumulative residuals.","Published":"2014-03-14","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"gofastr","Version":"0.2.1","Title":"Fast DocumentTermMatrix and TermDocumentMatrix Creation","Description":"Harness the power of 'quanteda', 'data.table' & 'stringi'\n to quickly generate 'tm' DocumentTermMatrix and\n TermDocumentMatrix data structures.","Published":"2017-02-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"gofCopula","Version":"0.2-3","Title":"Goodness-of-Fit Tests for Copulae","Description":"Several GoF tests for Copulae are provided. A new hybrid test is implemented which supports all of the individual tests. Estimation methods for the margins are provided. All the tests support parameter estimation and predefined values. The parameters are estimated by pseudo maximum likelihood but if it fails the estimation switches automatically to inversion of Kendall's tau. 
All the tests support automated parallelization of the bootstrapping tasks.","Published":"2016-10-02","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"GoFKernel","Version":"2.1-0","Title":"Testing Goodness-of-Fit with the Kernel Density Estimator","Description":"Tests of goodness-of-fit based on a kernel smoothing of the data.","Published":"2016-01-04","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"GofKmt","Version":"1.0","Title":"Khmaladze Martingale Transformation Goodness-of-Fit Test","Description":"Considers the goodness-of-fit (GOF) problem of testing whether a random sample comes from a location-scale model where the location and scale parameters are unknown. It is well known that the Khmaladze martingale transformation method provides an asymptotically distribution-free test for the GOF problem. This package contains one function: KhmaladzeTrans(). In this version, KhmaladzeTrans() provides the test statistic and critical value of the GOF test for normal, Cauchy, and logistic distributions.","Published":"2016-04-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"gofMC","Version":"1.1.2","Title":"Goodness of Fit Noise Analysis Using Monte Carlo Techniques","Description":"Goodness-of-fit metrics, such as R-Squared, RMSE, etc., share a sensitivity to noise, dependent on the degrees of freedom. Some metrics, such as R-Squared, decrease with increasing dof and some, such as RMSE, increase with increasing dof. This package calculates the noise baseline (ceiling) by random sampling, calculating the metric’s value for each sample and counting the number of samples below a desired level, 95% by default. If one’s measure is above (below) the calculation corresponding to the desired level, then the measurement is distinguishable from noise. In addition, the ratio of the measurement to the calculated level provides a way to compare measurements of different degrees of freedom. 
","Published":"2016-12-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"goft","Version":"1.3.1","Title":"Tests of Fit for some Probability Distributions","Description":"Goodness-of-fit tests for gamma, inverse Gaussian, lognormal, Weibull, Frechet, Gumbel, univariate normal, multivariate normal, Cauchy, Laplace or double exponential, exponential and generalized Pareto distributions. Parameter estimators for gamma, inverse Gaussian and generalized Pareto distributions.","Published":"2016-05-24","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"goftest","Version":"1.1-1","Title":"Classical Goodness-of-Fit Tests for Univariate Distributions","Description":"Cramer-Von Mises and Anderson-Darling tests of goodness-of-fit\n\t for continuous univariate distributions, using\n\t efficient algorithms.","Published":"2017-04-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"goftte","Version":"1.0.3","Title":"Goodness-of-Fit for Time-to-Event Data","Description":"Extension of 'gof' package to survival models.","Published":"2017-05-31","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"gogamer","Version":"0.4.3","Title":"Go Game Data Parser","Description":"\n Easy and flexible interface for manipulating go game (weiqi, baduk) data.\n The package features a reader function for SGF (smart go format) text files,\n and a set of plotting functions that draw go board images.","Published":"2016-09-08","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"GOGANPA","Version":"1.0","Title":"GO-Functional-Network-based Gene-Set-Analysis","Description":"Accounting for genes' functional-non-equivalence within pathways in classical Gene-set-analysis.","Published":"2012-01-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"gogarch","Version":"0.7-2","Title":"Generalized Orthogonal GARCH (GO-GARCH) models","Description":"Implementation of the GO-GARCH model class","Published":"2012-07-28","License":"GPL (>= 
2)","snapshot_date":"2017-06-23"} {"Package":"gomms","Version":"1.0","Title":"GLM-Based Ordination Method","Description":"A zero-inflated quasi-Poisson factor model to display similarity between samples visually in a low (2 or 3) dimensional space.","Published":"2017-05-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GoodmanKruskal","Version":"0.0.2","Title":"Association Analysis for Categorical Variables","Description":"Association analysis between categorical\n variables using the Goodman and Kruskal tau measure. This asymmetric association\n measure allows the detection of asymmetric relations between categorical\n variables (e.g., one variable obtained by re-grouping another).","Published":"2016-04-18","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"googleAnalyticsR","Version":"0.4.1","Title":"Google Analytics API into R","Description":"R library for interacting with the Google Analytics \n Reporting API v3 and v4.","Published":"2017-06-19","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"googleAuthR","Version":"0.5.1","Title":"Easy Authentication with Google OAuth2 API","Description":"Create R functions that interact with OAuth2 Google APIs easily,\n with auto-refresh and Shiny compatibility.","Published":"2017-03-31","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"googleCloudStorageR","Version":"0.3.0","Title":"R Interface with Google Cloud Storage","Description":"Interact with Google Cloud Storage API in R. Part of the 'cloudyr' project.","Published":"2017-05-27","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"googleComputeEngineR","Version":"0.1.0","Title":"R Interface with Google Compute Engine","Description":"Interact with the Google Compute Engine API in R. Lets you create, \n start and stop instances in the Google Cloud. Support for preconfigured instances, \n with templates for common R needs. 
","Published":"2016-11-19","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"googleformr","Version":"0.0.3","Title":"Collect Data Programmatically by POST Methods to Google Forms","Description":"GET and POST data to Google Forms; an API to Google Forms,\n allowing users to POST data securely to Google Forms without needing authentication or permissioning.","Published":"2016-04-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"googlePublicData","Version":"0.15.7.28","Title":"Working with Google Public Data Explorer DSPL Metadata Files","Description":"Provides a collection of functions designed for working with 'Google Public Data Explorer'. Automatically builds up the corresponding DSPL (XML) metadata files and CSV files; compressing all the files and leaving them ready to be published on the 'Public Data Explorer'.","Published":"2015-07-29","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"googlesheets","Version":"0.2.2","Title":"Manage Google Spreadsheets from R","Description":"Interact with Google Sheets from R.","Published":"2017-05-07","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"googleVis","Version":"0.6.2","Title":"R Interface to Google Charts","Description":"R interface to Google's chart tools, allowing users\n to create interactive charts based on data frames. Charts\n are displayed locally via the R HTTP help server. A modern\n browser with an Internet connection is required and for some\n charts a Flash player. The data remains local and is not\n uploaded to Google.","Published":"2017-01-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"googleway","Version":"2.0.0","Title":"Accesses Google Maps APIs to Retrieve Data and Plot Maps","Description":"Provides a mechanism to plot a Google Map from R and overlay\n it with shapes and markers. 
Also provides access to Google Maps APIs,\n including places, directions, roads, distances, geocoding, elevation and\n timezone.","Published":"2017-05-01","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"GOplot","Version":"1.0.2","Title":"Visualization of Functional Analysis Data","Description":"Implementation of multilayered visualizations for enhanced\n graphical representation of functional analysis data. It combines and integrates\n omics data derived from expression and functional annotation enrichment\n analyses. Its plotting functions have been developed with an hierarchical\n structure in mind: starting from a general overview to identify the most\n enriched categories (modified bar plot, bubble plot) to a more detailed one\n displaying different types of relevant information for the molecules in a given\n set of categories (circle plot, chord plot, cluster plot, Venn diagram, heatmap).","Published":"2016-03-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"GORCure","Version":"2.0","Title":"Fit Generalized Odds Rate Mixture Cure Model with Interval\nCensored Data","Description":"Generalized Odds Rate Mixture Cure (GORMC) model is a flexible model of fitting survival data with a cure fraction, including the Proportional Hazards Mixture Cure (PHMC) model and the Proportional Odds Mixture Cure Model as special cases. 
This package fits the GORMC model with interval censored data.","Published":"2017-01-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"goric","Version":"0.0-95","Title":"Generalized Order-Restricted Information Criterion for Selecting\nOrder-Restricted (Multivariate) Linear Models","Description":"Generalized Order-Restricted Information Criterion (GORIC) value for a set of hypotheses in multivariate regression models.","Published":"2017-01-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"govStatJPN","Version":"0.1","Title":"functions to get public survey data in Japan","Description":"This package provides functions to deal with public survey data of\n the Japanese government via their Application Programming Interface\n (http://statdb.nstac.go.jp/).","Published":"2013-06-13","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"gower","Version":"0.1.2","Title":"Gower's Distance","Description":"Compute Gower's distance (or similarity) coefficient between records. Compute \n the top-n matches between records. Core algorithms are executed in parallel on systems\n supporting OpenMP.","Published":"2017-02-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"gpairs","Version":"1.2","Title":"gpairs: The Generalized Pairs Plot","Description":"Produces a generalized pairs (gpairs) plot.","Published":"2014-03-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GPareto","Version":"1.0.3","Title":"Gaussian Processes for Pareto Front Estimation and Optimization","Description":"Gaussian process regression models, a.k.a. Kriging models, are\n applied to global multi-objective optimization of black-box functions.\n Multi-objective Expected Improvement and Step-wise Uncertainty Reduction\n sequential infill criteria are available. 
A quantification of uncertainty\n on Pareto fronts is provided using conditional simulations.","Published":"2016-11-11","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"GPArotation","Version":"2014.11-1","Title":"GPA Factor Rotation","Description":"Gradient Projection Algorithm Rotation for Factor Analysis. See ?GPArotation.Intro for more details.","Published":"2014-11-25","License":"GPL (>= 2) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"GPB","Version":"1.0","Title":"Generalized Poisson Binomial Distribution","Description":"Functions that compute the distribution functions for the Generalized Poisson Binomial distribution, which provides the cdf, pmf, quantile function, and random number generation for the distribution.","Published":"2017-02-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"GPC","Version":"0.1","Title":"Generalized Polynomial Chaos","Description":"A generalized polynomial chaos expansion of a model taking as input independent random variables is achieved. 
A statistical and a global sensitivity analysis of the model are also carried out.","Published":"2014-12-18","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"gPCA","Version":"1.0","Title":"Batch Effect Detection via Guided Principal Components Analysis","Description":"This package implements guided principal components analysis for the detection of batch effects in high-throughput data.","Published":"2013-07-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gpclib","Version":"1.5-5","Title":"General Polygon Clipping Library for R","Description":"General polygon clipping routines for R based on Alan\n Murta's C library","Published":"2013-04-01","License":"file LICENSE","snapshot_date":"2017-06-23"} {"Package":"GPCSIV","Version":"0.1.0","Title":"GPCSIV, Generalized Principal Component of Symbolic Interval\nvariables","Description":"This package implements an extension of principal component analysis (PCA) tailored to handle multiple data tables. It can handle Big Data in the sense that the variation in massive data can be described by intervals [a, b] and multiple tables. ","Published":"2013-09-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gpDDE","Version":"0.8.2","Title":"General Profiling Method for Delay Differential Equation","Description":"Functions implement collocation-inference for\n stochastic process driven by distributed delay differential equations.\n They also provide tools for selecting the lags for distributed delay\n using shrinkage methods, estimating time-varying coefficients,\n and tools for inference and prediction.","Published":"2015-12-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gPdtest","Version":"0.4","Title":"Bootstrap goodness-of-fit test for the generalized Pareto\ndistribution","Description":"This package computes the bootstrap goodness-of-fit test\n for the generalized Pareto distribution by Villasenor-Alva and\n Gonzalez-Estrada (2009). 
The null hypothesis includes heavy and\n non-heavy tailed gPd's. A function for fitting the gPd to data\n using the parameter estimation methods proposed in the same\n article is also provided.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GPFDA","Version":"2.2","Title":"Apply Gaussian Process in Functional data analysis","Description":"Use functional regression as the mean structure and Gaussian Process as the covariance structure.","Published":"2014-09-29","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"GPfit","Version":"1.0-0","Title":"Gaussian Processes Modeling","Description":"A computationally stable approach of fitting a Gaussian Process (GP) model to a deterministic simulator. ","Published":"2015-04-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"gpg","Version":"0.5","Title":"GNU Privacy Guard for R","Description":"Bindings to GnuPG for working with OpenGPG (RFC4880) cryptographic methods.\n Includes utilities for public key encryption, creating and verifying digital signatures,\n and managing your local keyring. Note that some functionality depends on the version of \n GnuPG that is installed on the system. On Windows this package can be used together with\n 'GPG4Win' which provides a GUI for managing keys and entering passphrases.","Published":"2017-03-21","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"GPGame","Version":"1.0.0","Title":"Solving Complex Game Problems using Gaussian Processes","Description":"Sequential strategies for finding game equilibria are proposed in a black-box setting (expensive pay-off evaluations, no derivatives). The algorithm handles noiseless or noisy evaluations. Two acquisition functions are available. Graphical outputs can be generated automatically. 
","Published":"2017-06-10","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"gpk","Version":"1.0","Title":"100 Data Sets for Statistics Education","Description":"Collection of datasets as prepared by Profs. A.P. Gore, S.A. Paranjape, and M.B. Kulkarni of the Department of Statistics, Poona University, India. With their permission, the first letters of their names form the name of this package; the package has been built by me and made available for the benefit of R users. This collection requires a rich class of models and can be a very useful building block for a beginner. ","Published":"2013-07-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"gplm","Version":"0.7-4","Title":"Generalized Partial Linear Models (GPLM)","Description":"Provides functions for estimating a generalized partial\n\t linear model, a semiparametric variant of the generalized linear model\n\t (GLM) which replaces the linear predictor by the sum of a linear\n\t and a nonparametric function.","Published":"2016-08-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gplots","Version":"3.0.1","Title":"Various R Programming Tools for Plotting Data","Description":"Various R programming tools for plotting data, including:\n - calculating and plotting locally smoothed summary functions\n ('bandplot', 'wapply'),\n - enhanced versions of standard plots ('barplot2', 'boxplot2',\n 'heatmap.2', 'smartlegend'),\n - manipulating colors ('col2hex', 'colorpanel', 'redgreen',\n 'greenred', 'bluered', 'redblue', 'rich.colors'),\n - calculating and plotting two-dimensional data summaries ('ci2d',\n 'hist2d'),\n - enhanced regression diagnostic plots ('lmplot2', 'residplot'),\n - formula-enabled interface to 'stats::lowess' function ('lowess'),\n - displaying textual data in plots ('textplot', 'sinkplot'),\n - plotting a matrix where each cell contains a dot whose size\n reflects the relative magnitude of the elements ('balloonplot'),\n - plotting \"Venn\" diagrams ('venn'),\n - 
displaying Open-Office style plots ('ooplot'),\n - plotting multiple data on same region, with separate axes\n ('overplot'),\n - plotting means and confidence intervals ('plotCI', 'plotmeans'),\n - spacing points in an x-y plot so they don't overlap ('space').","Published":"2016-03-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"GPLTR","Version":"1.2","Title":"Generalized Partially Linear Tree-Based Regression Model","Description":"Combining a generalized linear model with an additional tree part \n on the same scale. A four-step procedure is proposed to fit the model and test \n the joint effect of the selected tree part while adjusting on confounding factors. \n We also proposed an ensemble procedure based on bagging to improve prediction \n accuracy and computed several scores of importance for variable selection.","Published":"2015-06-18","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"gpmap","Version":"0.1.1","Title":"Analysing and plotting genotype-phenotype maps","Description":"This package contains tools for studying genotype-phenotype (GP) maps for bi-allelic loci underlying quantitative phenotypes. The 0.1 version is released in connection with the publication of Gjuvsland et al. (2013) and implements basic line plots and the monotonicity measures for GP maps presented in the paper. Reference: Gjuvsland AB, Wang Y, Plahte E and Omholt SW (2013) Monotonicity is a key feature of genotype-phenotype maps. Front. Genet. 4:216. 
doi: 10.3389/fgene.2013.00216 [\\href{http://www.frontiersin.org/Journal/10.3389/fgene.2013.00216/full}{link}]","Published":"2014-01-09","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"GPoM","Version":"1.0","Title":"Generalized Polynomial Modelling","Description":"Platform dedicated to the Global Modelling technique.","Published":"2017-04-04","License":"CeCILL-2","snapshot_date":"2017-06-23"} {"Package":"gpr","Version":"1.1","Title":"A Minimalistic package to apply Gaussian Process in R","Description":"This package provides the minimal functionality necessary to apply Gaussian Processes in R. It provides a selection of functionalities of the GPML Matlab library.","Published":"2014-02-07","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"GPrank","Version":"0.1.2","Title":"Gaussian Process Ranking of Multiple Time Series","Description":"Implements a Gaussian process (GP)-based ranking method\n which can be used to rank multiple time series according to their\n temporal activity levels. An example is the case when expression\n levels of all genes are measured over a time course and the main\n concern is to identify the most active genes, i.e. genes which\n show significant non-random variation in their expression levels.\n This is achieved by computing Bayes factors for each time series\n by comparing the marginal likelihoods under time-dependent and\n time-independent GP models. Additional variance information from\n pre-processing of the observations is incorporated into the GP\n models, which makes the ranking more robust against model\n overfitting. 
The package supports exporting the results to\n 'tigreBrowser' for visualisation, filtering or ranking.","Published":"2016-12-21","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"gProfileR","Version":"0.6.1","Title":"Interface to the 'g:Profiler' Toolkit","Description":"Functional enrichment analysis, gene identifier conversion and\n mapping homologous genes across related organisms via the 'g:Profiler' toolkit\n (http://biit.cs.ut.ee/gprofiler/).","Published":"2016-04-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GPseq","Version":"0.5","Title":"gpseq: Using the generalized Poisson distribution to model\nsequence read counts from high throughput sequencing\nexperiments","Description":"Some functions for modeling sequence read counts as a\n generalized Poisson model and to use this model for detecting\n differentially expressed genes in different conditions and\n differentially spliced exons.","Published":"2011-07-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"gptk","Version":"1.08","Title":"Gaussian Processes Tool-Kit","Description":"The gptk package implements a general-purpose toolkit for Gaussian\n process regression with a variety of covariance functions (e.g. RBF, Matern, polynomial, etc.).\n Based on a MATLAB implementation by Neil D. Lawrence. See inst/doc/index.html for more details.","Published":"2014-03-07","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"gpuR","Version":"1.2.1","Title":"GPU Functions for R Objects","Description":"Provides GPU enabled functions for R objects in a simple and\n approachable manner. New gpu* and vcl* classes have been provided to\n wrap typical R objects (e.g. 
vector, matrix), in both host and device\n spaces, to mirror typical R syntax without the need to know OpenCL.","Published":"2017-01-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gputools","Version":"1.1","Title":"A Few GPU Enabled Functions","Description":"Provides R interfaces to a handful of common\n functions implemented using the Nvidia CUDA toolkit. Some of the\n functions require at least GPU Compute Capability 1.3. \n Thanks to Craig Stark at UC Irvine for donating time on his lab's Mac.","Published":"2016-10-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"GPvam","Version":"3.0-4","Title":"Maximum Likelihood Estimation of Multiple Membership Mixed\nModels Used in Value-Added Modeling","Description":"An EM algorithm, Karl et al. (2013) , is used to estimate the generalized, variable, and complete persistence models, Mariano et al. (2010) . These are multiple-membership linear mixed models with teachers modeled as \"G-side\" effects and students modeled with either \"G-side\" or \"R-side\" effects.","Published":"2017-03-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"gqlr","Version":"0.0.1","Title":"'GraphQL' Server in R","Description":"Server implementation of 'GraphQL' ,\n a query language created by Facebook for describing data requirements on complex application\n data models. Visit to learn more about 'GraphQL'.","Published":"2017-06-07","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"gquad","Version":"2.1-1","Title":"Prediction of G Quadruplexes and Other Non-B DNA Motifs","Description":"Genomic biology is not limited to the confines of the canonical B-\n forming DNA duplex, but includes over ten different types of other secondary\n structures that are collectively termed non-B DNA structures. Of these non-B\n DNA structures, the G-quadruplexes are highly stable four-stranded structures\n that are recognized by distinct subsets of nuclear factors. 
This package\n provides functions for predicting intramolecular G quadruplexes. In addition, \n functions for predicting other intramolecular non-B DNA structures are included.","Published":"2017-06-06","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"Grace","Version":"0.5.3","Title":"Graph-Constrained Estimation and Hypothesis Tests","Description":"Use the graph-constrained estimation (Grace) procedure (Zhao and Shojaie, 2016 ) to estimate graph-guided linear regression coefficients and use the Grace/GraceI/GraceR tests to perform graph-guided hypothesis tests on the association between the response and the predictors.","Published":"2017-04-09","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"gradDescent","Version":"2.0.1","Title":"Gradient Descent for Regression Tasks","Description":"An implementation of various learning algorithms based on Gradient Descent for dealing with regression tasks. \n\tThe variants of the gradient descent algorithm are:\n\tMini-Batch Gradient Descent (MBGD), which is an optimization to use training data partially to reduce the computation load.\n\tStochastic Gradient Descent (SGD), which is an optimization to use random data in learning to reduce the computation load drastically.\n\tStochastic Average Gradient (SAG), which is a SGD-based algorithm to minimize stochastic step to average.\n\tMomentum Gradient Descent (MGD), which is an optimization to speed-up gradient descent learning.\n\tAccelerated Gradient Descent (AGD), which is an optimization to accelerate gradient descent learning.\n\tAdagrad, which is a gradient-descent-based algorithm that accumulates previous cost to do adaptive learning.\n\tAdadelta, which is a gradient-descent-based algorithm that uses Hessian approximation to do adaptive learning.\n\tRMSprop, which is a gradient-descent-based algorithm that combines the Adagrad and Adadelta adaptive learning abilities.\n\tAdam, which is a gradient-descent-based algorithm that uses mean and variance moments to do 
adaptive learning.","Published":"2017-03-11","License":"GPL (>= 2) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"grade","Version":"0.2-1","Title":"Binary Grading functions for R","Description":"Provides functions for matching student-answers to teacher answers for a variety of data types.","Published":"2013-11-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"GRaF","Version":"0.1-12","Title":"Species distribution modelling using latent Gaussian random\nfields","Description":"Functions to fit, visualise and compare Gaussian random field species distribution models.","Published":"2014-01-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gRain","Version":"1.3-0","Title":"Graphical Independence Networks","Description":"Probability propagation in graphical independence networks, also\n known as Bayesian networks or probabilistic expert systems.","Published":"2016-10-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gramEvol","Version":"2.1-3","Title":"Grammatical Evolution for R","Description":"A native R implementation of grammatical evolution (GE). GE facilitates the discovery of programs that can achieve a desired goal. This is done by performing an evolutionary optimisation over a population of R expressions generated via a user-defined context-free grammar (CFG) and cost function.","Published":"2016-07-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GrammR","Version":"1.1.0","Title":"Graphical Representation and Modeling of Metagenomic Reads","Description":"Represents metagenomic samples on the Euclidean space to examine similarity amongst samples by studying clusters in the model. 
Given the matrix of metagenomic counts for samples, this package (1) quantifies dissimilarity between samples using Kendall's tau-distance, (2) constructs multidimensional models of different dimension, and (3) plots the models for visualization and comparison.","Published":"2016-02-14","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"GRANBase","Version":"1.2.1","Title":"Creating Continuously Integrated Package Repositories from\nManifests","Description":"Repository based tools for department and analysis level\n reproducibility. 'GRANBase' allows creation of custom branched, continuous\n integration-ready R repositories, including incremental testing of only packages\n which have changed versions since the last repository build.","Published":"2017-02-09","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"granova","Version":"2.1","Title":"Graphical Analysis of Variance","Description":"This small collection of functions provides what we call elemental graphics for display of anova\n results. The term elemental derives from the fact that each function is aimed at construction of\n graphical displays that afford direct visualizations of data with respect to the fundamental \n questions that drive the particular anova methods. The two main functions are granova.1w \n (a graphic for one way anova) and granova.2w (a corresponding graphic for two way anova). These \n functions were written to display data for any number of groups, regardless of their sizes \n (however, very large data sets or numbers of groups can be problematic). For these two functions \n a specialized approach is used to construct data-based contrast vectors for which anova data are\n displayed. The result is that the graphics use straight lines, and when appropriate flat surfaces,\n to facilitate clear interpretations while being faithful to the standard effect tests in anova. 
\n The graphic results are complementary to standard summary tables for these two basic kinds of \n analysis of variance; numerical summary results of analyses are also provided as side effects.\n Two additional functions are granova.ds (for comparing two dependent samples), and granova.contr\n (which provides graphic displays for a priori contrasts). All functions provide relevant\n numerical results to supplement the graphic displays of anova data.\n The graphics based on these functions should be especially helpful for learning how the methods have \n been applied to answer the question(s) posed. This means they can be \n particularly helpful for students and non-statistician analysts. But these methods should be\n quite generally helpful for work-a-day applications of all kinds, as they can help to identify\n outliers, clusters or patterns, as well as highlight the role of non-linear transformations of data. In the case \n of granova.1w and granova.ds especially, several arguments are provided to facilitate flexibility\n in the construction of graphics that accommodate diverse features of data, according to their \n corresponding display requirements. See the help files for individual functions.","Published":"2014-08-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"granovaGG","Version":"1.4.0","Title":"Graphical Analysis of Variance Using ggplot2","Description":"This collection of functions in 'granovaGG'\n provides what we call elemental graphics for display of\n anova results. The term elemental derives from the fact\n that each function is aimed at construction of\n graphical displays that afford direct visualizations of\n data with respect to the fundamental questions that\n drive the particular anova methods. This package\n represents a modification of the original granova\n package; the key change is to use 'ggplot2', Hadley\n Wickham's package based on Grammar of Graphics concepts\n (due to Wilkinson). 
The main function is granovagg.1w()\n (a graphic for one way ANOVA); two other functions\n (granovagg.ds() and granovagg.contr()) are to construct\n graphics for dependent sample analyses and\n contrast-based analyses respectively. (The function\n granova.2w(), which entails dynamic displays of data, is\n not currently part of 'granovaGG'.) The 'granovaGG'\n functions are to display data for any number of groups,\n regardless of their sizes (however, very large data\n sets or numbers of groups can be problematic). For\n granovagg.1w() a specialized approach is used to\n construct data-based contrast vectors for which anova\n data are displayed. The result is that the graphics use\n a straight line to facilitate clear interpretations\n while being faithful to the standard effect test in\n anova. The graphic results are complementary to\n standard summary tables; indeed, numerical summary\n statistics are provided as side effects of the graphic\n constructions. granovagg.ds() and granovagg.contr() provide\n graphic displays and numerical outputs for a dependent\n sample and contrast-based analyses. The graphics based\n on these functions can be especially helpful for\n learning how the respective methods work to answer the\n basic question(s) that drive the analyses. This means\n they can be particularly helpful for students and\n non-statistician analysts. But these methods can be of\n assistance for work-a-day applications of many kinds,\n as they can help to identify outliers, clusters or\n patterns, as well as highlight the role of non-linear\n transformations of data. In the case of granovagg.1w()\n and granovagg.ds() several arguments are provided to\n facilitate flexibility in the construction of graphics\n that accommodate diverse features of data, according to\n their corresponding display requirements. 
See the help\n files for individual functions.","Published":"2015-12-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GRAPE","Version":"0.1.0","Title":"Gene-Ranking Analysis of Pathway Expression","Description":"Gene-Ranking Analysis of Pathway Expression (GRAPE) is a tool for\n summarizing the consensus behavior of biological pathways in the form of a\n template, and for quantifying the extent to which individual samples deviate\n from the template. GRAPE templates are based only on the relative rankings\n of the genes within the pathway and can be used for classification of tissue\n types or disease subtypes. GRAPE can be used to represent gene-expression\n samples as vectors of pathway scores, where each pathway score indicates the\n departure from a given collection of reference samples. The resulting pathway-\n space representation can be used as the feature set for various applications,\n including survival analysis and drug-response prediction.\n ----------------------------------------------------------------------------------------------\n GRAPE is a generalization and extension of DIRAC, originally implemented in:\n Eddy, J.A., et al. (2010) . As a result, some\n of the software below may have been previously published in Matlab by the DIRAC\n authors and can be found at:\n https://price.systemsbiology.org/pricelab-resources/software/.","Published":"2016-08-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"grapes","Version":"1.0.0","Title":"Make Binary Operators","Description":"Turn arbitrary functions into binary operators.","Published":"2017-04-01","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"gRapfa","Version":"1.0","Title":"Acyclic Probabilistic Finite Automata","Description":"gRapfa is for modelling discrete longitudinal data using acyclic probabilistic finite automata (APFA). The package contains functions for constructing APFA models from a given data using penalized likelihood methods. 
For graphical display of APFA models, gRapfa depends on the 'igraph' package. gRapfa also contains an interface function to the Beagle software that implements an efficient model selection algorithm. ","Published":"2014-04-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gRapHD","Version":"0.2.4","Title":"Efficient selection of undirected graphical models for\nhigh-dimensional datasets","Description":"gRapHD is designed for efficient selection of high-dimensional undirected \n graphical models. The package provides tools for selecting trees, forests \n and decomposable models minimizing information criteria such as AIC or BIC, \n and for displaying the independence graphs of the models. It also has some \n useful tools for analysing graphical structures. It supports the use of \n discrete, continuous, or both types of variables.","Published":"2014-03-09","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"GrapheR","Version":"1.9-86","Title":"A Multi-Platform GUI for Drawing Customizable Graphs in R","Description":"A multi-platform user interface for drawing highly customizable graphs in R. It aims to help users quickly draw publishable graphs without any knowledge of R commands. Six kinds of graph are available: histogram, box-and-whisker plot, bar plot, pie chart, curve and scatter plot.","Published":"2016-12-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"GraphFactor","Version":"1.1","Title":"Network Topology of Intravariable Clusters with Intervariable\nLinks","Description":"A Network Implementation of Fuzzy Sets: Build Network Objects from Multivariate Flat Files. For more information on fuzzy sets, refer to: Zadeh, L.A. 
(1964) .","Published":"2016-10-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"graphicalVAR","Version":"0.2","Title":"Graphical VAR for Experience Sampling Data","Description":"Estimates within and between time point interactions in experience sampling data, using the Graphical VAR model in combination with LASSO and EBIC.","Published":"2017-03-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"graphicsQC","Version":"1.0-8","Title":"Quality Control for Graphics in R","Description":"Functions to generate\n graphics files, compare them with \"model\" files,\n and report the results, including visual and textual\n diffs of any differences.","Published":"2016-10-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"graphkernels","Version":"1.2","Title":"Graph Kernels","Description":"A fast C++ implementation for computing various graph kernels including (1) simple kernels between vertex and/or edge label histograms, (2) random walk kernels (popular baselines), and (3) the Weisfeiler-Lehman graph kernel (state-of-the-art).","Published":"2017-04-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GraphKit","Version":"0.5","Title":"Estimating Structural Invariants of Graphical Models","Description":"Efficient methods for constructing confidence intervals of monotone\n graph invariants, as well as testing for monotone graph properties. Many\n packages are available to estimate precision matrices; this package serves as a\n tool to extract structural properties from their induced graphs. By iteratively\n bootstrapping on only the relevant edge set, we are able to obtain the optimal\n interval size.","Published":"2017-03-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"graphql","Version":"1.3","Title":"A GraphQL Query Parser","Description":"Bindings to the 'libgraphqlparser' C++ library. 
Currently parses\n GraphQL and exports the AST in JSON format.","Published":"2017-06-03","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"graphscan","Version":"1.1.1","Title":"Cluster Detection with Hypothesis Free Scan Statistic","Description":"Multiple scan statistic with variable window for one-dimensional data and scan statistic based on connected components in 2D or 3D.","Published":"2016-10-14","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"graphTweets","Version":"0.3.2","Title":"Visualise Twitter Interactions","Description":"Allows building an edge table from a data frame of tweets; \n also provides a function to build nodes and another to create a temporal graph.","Published":"2016-05-08","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"GrassmannOptim","Version":"2.0","Title":"Grassmann Manifold Optimization","Description":"Optimizing a function F(U), where U is a semi-orthogonal matrix and F is invariant under an orthogonal transformation of U.","Published":"2013-12-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"graticule","Version":"0.1.2","Title":"Meridional and Parallel Lines for Maps","Description":"Create graticule lines and labels for maps. Control the creation\n of lines by setting their placement (at particular meridians and parallels)\n and extent (along parallels and meridians). Labels are created independently of\n lines.","Published":"2016-02-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"grattan","Version":"1.5.0.0","Title":"Perform Common Quantitative Tasks for Australian Analysts and to\nSupport Grattan Institute Analysis","Description":"A series of functions focused on costing and evaluating Australian tax policy in support of the Grattan Institute's Australian Perspectives program. For access to the taxstats package, please run install.packages(\"taxstats\", repos = \"https://hughparsonage.github.io/drat/\", type = \"source\"). N.B. 
The taxstats package is approximately 50 MB.","Published":"2017-05-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"gravity","Version":"0.3","Title":"A Compilation of Different Estimation Methods for Gravity Models","Description":"One can use gravity models to explain bilateral flows related to the sizes of bilateral partners, a measure of distance between them and other influences on interaction costs. The underlying idea is rather simple. The greater the masses of two bodies and the smaller the distance between them, the stronger they attract each other. This concept is applied to several research topics such as trade, migration or foreign direct investment. Even though the basic idea of gravity models is rather simple, they can become very complex when it comes to the choice of models or estimation methods. The package gravity aims to provide R users with the functions necessary to execute the most common estimation methods for gravity models, especially for cross-sectional data. It contains the functions Ordinary Least Squares (OLS), Fixed Effects, Double Demeaning (DDM), Bonus vetus OLS with simple averages (BVU) and with GDP-weights (BVW), Structural Iterated Least Squares (SILS), Tetrads as well as Poisson Pseudo Maximum Likelihood (PPML). By considering the descriptions of the estimation methods, users can see which method and data may be suited for a certain research question. In order to illustrate the estimation methods, this package includes a dataset called Gravity (see the description of the dataset for more information). On the Gravity Cookbook website () Keith Head and Thierry Mayer provide Stata code for the most common estimation methods for gravity models when using cross-sectional data. In order to get comparable results in R, the methods presented in the package gravity are designed to be consistent with this Stata code when choosing the option of robust variance estimation. 
However, compared to the Stata code available, the functions presented in this package provide users with more flexibility regarding the type of estimation (robust or not robust), the number and type of independent variables as well as the possible data. The functions all estimate gravity models, but they differ in whether they estimate them in their multiplicative or additive form, their requirements with respect to the data, their handling of Multilateral Resistance terms as well as their possibilities concerning the inclusion of unilateral independent variables. Therefore, they normally lead to different estimation results. We refer the user to the Gravity Cookbook website () for more information on gravity models in general. Head, K. and Mayer, T. (2014) provide a comprehensive and accessible overview of the theoretical and empirical development of the gravity literature as well as the use of gravity models and the various estimation methods, especially their merits and potential problems regarding applicability as well as different gravity datasets. All functions were tested to work on cross-sectional data and are consistent with the Stata code mentioned above. No tests were performed for use with panel data. Therefore, it is up to the user to ensure that the functions can be applied to panel data. For a comprehensive overview of gravity models for panel data see Egger, P., & Pfaffermayr, M. (2003) , Gomez-Herrera, E. (2013) and Head, K., Mayer, T., & Ries, J. (2010) as well as the references therein (see also the references included in the descriptions of the different functions). Depending on the panel dataset and the variables - specifically the type of fixed effects - included in the model, it may easily occur that the model is not computable. Also, note that by including bilateral fixed effects such as country-pair effects, the coefficients of time-invariant observables such as distance can no longer be estimated. 
Depending on the specific model, the code of the respective function may have to be changed in order to exclude the distance variable from the estimation. At the very least, the user should take special care with respect to the meaning of the estimated coefficients and variances as well as the decision about which effects to include in the estimation. As, to our knowledge, there is at the moment no explicit literature covering the estimation of a gravity equation by Double Demeaning, Structural Iterated Least Squares or Bonus Vetus OLS using panel data, we do not recommend applying these methods in this case. Contributions, extensions and error corrections are very welcome. Please do not hesitate to contact us.","Published":"2017-01-31","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"gRbase","Version":"1.8-3","Title":"A Package for Graphical Modelling in R","Description":"The 'gRbase' package provides general features\n which are used by other graphical modelling packages, in particular\n by the packages 'gRain', 'gRim' and 'gRc'.\n 'gRbase' contains several data sets relevant for use in connection with\n graphical models. Almost all data sets used in the book Graphical\n Models with R (2012) are contained in 'gRbase'.\n 'gRbase' implements several graph algorithms (based mainly on\n representing graphs as adjacency matrices - either in the form\n of a standard matrix or a sparse matrix). Some graph\n algorithms are:\n (i) maximum cardinality search (for marked and unmarked graphs).\n (ii) moralize.\n (iii) triangulate.\n (iv) junction tree.\n 'gRbase' facilitates array operations.\n 'gRbase' implements functions for testing for conditional independence.\n 'gRbase' illustrates how hierarchical log-linear models may be\n implemented and describes the concept of graphical meta\n data. 
These features, however, are not maintained anymore and\n remain in 'gRbase' only because there exists a paper describing\n these facilities: A Common Platform for Graphical Models in R:\n The 'gRbase' Package, Journal of Statistical Software, Vol 14,\n No 17, 2005.\n NOTICE Proper functionality of 'gRbase' requires that the packages graph,\n 'Rgraphviz' and 'RBGL' are installed from 'bioconductor'; for\n installation instructions please refer to the web page given below.","Published":"2017-03-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gRc","Version":"0.4-2","Title":"Inference in Graphical Gaussian Models with Edge and Vertex\nSymmetries","Description":"Estimation, model selection and other aspects of\n statistical inference in Graphical Gaussian models with edge\n and vertex symmetries (Graphical Gaussian models with colours).","Published":"2016-12-01","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"GreedyExperimentalDesign","Version":"1.0","Title":"Greedy Experimental Design Construction","Description":"Computes experimental designs for a\n two-arm experiment with covariates by greedily optimizing a\n balance objective function. This optimization provides lower\n variance for the treatment effect estimator (and higher power) \n while preserving a design that is close to complete randomization.\n We return all iterations of the designs for use in a permutation test.","Published":"2016-12-08","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"Greg","Version":"1.2","Title":"Regression Helper Functions","Description":"Methods for manipulating regression models and for describing these in a style adapted for medical journals. \n Contains functions for generating an HTML table with crude and adjusted estimates, plotting hazard ratios, plotting model \n estimates and confidence intervals using forest plots, and extending this to comparing multiple models in a single forest plot. 
\n In addition to the descriptive methods, there are add-ons for the robust covariance matrix provided by the sandwich\n package, a function for adding non-linearities to a model, and a wrapper around the Epi package's Lexis functions for\n time-splitting a dataset when modeling non-proportional hazards in Cox regressions.","Published":"2016-03-28","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"greport","Version":"0.7-1","Title":"Graphical Reporting for Clinical Trials","Description":"Contains many functions useful for\n monitoring and reporting the results of clinical trials and other\n experiments in which treatments are compared. LaTeX is\n used to typeset the resulting reports, recommended to be in the\n context of 'knitr'. The 'Hmisc', 'ggplot2', and 'lattice' packages are used\n by 'greport' for high-level graphics.","Published":"2016-04-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"grex","Version":"1.5","Title":"Gene ID Mapping for Genotype-Tissue Expression (GTEx) Data","Description":"Convert 'Ensembl' gene identifiers from Genotype-Tissue\n Expression (GTEx) data to identifiers in other annotation systems,\n including 'Entrez', 'HGNC', and 'UniProt'.","Published":"2017-06-05","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"greyzoneSurv","Version":"1.0","Title":"Fit a Grey-Zone Model with Survival Data","Description":"Allows one to classify patients into low, intermediate, and high risk groups for disease progression based on a continuous marker that is associated with progression-free survival. It uses a latent class model to link the marker and survival outcome and produces two cutoffs for the marker to divide patients into three groups. 
See the References section for more details.","Published":"2015-05-19","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"Grid2Polygons","Version":"0.1.6","Title":"Convert Spatial Grids to Polygons","Description":"Converts a spatial object from class SpatialGridDataFrame to\n SpatialPolygonsDataFrame.","Published":"2017-02-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gridBase","Version":"0.4-7","Title":"Integration of base and grid graphics","Description":"Integration of base and grid graphics","Published":"2014-02-24","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"gridDebug","Version":"0.5-0","Title":"Debugging 'grid' Graphics","Description":"Functions for drawing scene trees representing \n scenes that have been drawn using grid graphics.","Published":"2015-11-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gridExtra","Version":"2.2.1","Title":"Miscellaneous Functions for \"Grid\" Graphics","Description":"Provides a number of user-level functions to work with \"grid\" graphics, notably to arrange multiple grid-based plots on a page, and draw tables.","Published":"2016-02-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gridGraphics","Version":"0.2","Title":"Redraw Base Graphics Using 'grid' Graphics","Description":"Functions to convert a page of plots drawn with the \n graphics package into identical output drawn with the grid package.\n The result looks like the original graphics-based plot, but consists\n of grid grobs and viewports that can then be manipulated with \n grid functions (e.g., edit grobs and revisit viewports).","Published":"2017-06-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gridGraphviz","Version":"0.3","Title":"Drawing Graphs with 'grid'","Description":"Functions for drawing node-and-edge graphs that have been \n laid out by graphviz. 
This provides an alternative \n rendering to that provided by the 'Rgraphviz' package, with\n two main advantages: the rendering provided by 'gridGraphviz'\n should be more similar to what 'graphviz' itself would draw;\n and rendering with 'grid' allows for post-hoc customisations\n using the named viewports and grobs that 'gridGraphviz'\n produces.","Published":"2015-06-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gridsample","Version":"0.2.0","Title":"Tools for Grid-Based Survey Sampling Design","Description":"Multi-stage cluster surveys of households are commonly performed by \n\tgovernments and programmes to monitor population-level demographic, social, \n\teconomic, and health outcomes. Generally, communities are sampled from \n\tsubpopulations (strata) in a first stage, and then households are listed and \n\tsampled in a second stage. In this typical two-stage design, sampled communities \n\tare the Primary Sampling Units (PSUs) and households are the Secondary Sampling \n\tUnits (SSUs). Census data typically serve as the sample frame from which PSUs \n\tare selected. However, if census data are outdated, inaccurate, or too \n\tgeographically coarse, gridded population data (such as ) \n\tcan be used as a sample frame instead. GridSample generates PSUs from gridded \n\tpopulation data according to user-specified complex survey design characteristics \n\tand household sample size. In gridded population sampling, like census sampling, \n\tPSUs are selected within each stratum using a serpentine sampling method, and can \n\tbe oversampled in urban or rural areas to ensure a minimum sample size in each of \n\tthese important sub-domains. Furthermore, because grid cells are uniform in size \n\tand shape, gridded population sampling allows for samples to be representative of\n\tboth the population and of space, which is not possible with a census sample frame. 
","Published":"2017-04-09","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"gridsampler","Version":"0.6","Title":"A Simulation Tool to Determine the Required Sample Size for\nRepertory Grid Studies","Description":"Simulation tool to facilitate determination of\n required sample size to achieve category saturation\n for studies using multiple repertory grids in conjunction with\n content analysis.","Published":"2016-11-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"gridSVG","Version":"1.5-1","Title":"Export 'grid' Graphics as SVG","Description":"Functions to export graphics drawn with package grid to SVG\n format. Additional functions provide access to SVG features that\n are not available in standard R graphics, such as hyperlinks, \n animation, filters, masks, clipping paths, and gradient and pattern fills.","Published":"2017-05-04","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"GriegSmith","Version":"1.0","Title":"Uses Grieg-Smith method on 2-dimensional spatial data","Description":"The function GriegSmith accepts either quadrat count data,\n a point process object (ppp) or a matrix of x and y coordinates.\n The function calculates a nested analysis of variance and\n simulation envelopes.","Published":"2013-03-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"gRim","Version":"0.2-0","Title":"Graphical Interaction Models","Description":"Provides the following types of models: models for contingency\n tables (i.e. log-linear models), Graphical Gaussian models for multivariate\n normal data (i.e. 
covariance selection models), and mixed interaction models.","Published":"2017-03-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"grImport","Version":"0.9-0","Title":"Importing Vector Graphics","Description":"Functions for converting, importing, and drawing PostScript \n pictures in R plots.","Published":"2013-12-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"grnn","Version":"0.1.0","Title":"General regression neural network","Description":"The program GRNN implements the algorithm proposed by\n Specht (1991).","Published":"2013-05-16","License":"AGPL","snapshot_date":"2017-06-23"} {"Package":"groc","Version":"1.0.5","Title":"Generalized Regression on Orthogonal Components","Description":"Robust multiple or multivariate linear regression, nonparametric regression on orthogonal components, classical or robust partial least squares models.","Published":"2015-05-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"grofit","Version":"1.1.1-1","Title":"The package was developed to fit many growth curves obtained\nunder different conditions","Description":"The package was developed to fit many growth curves\n obtained under different conditions in order to derive a\n conclusive dose-response curve, for instance for a compound\n that potentially affects growth. grofit fits data to different\n parametric models (function gcFitModel) and in addition\n provides a model free spline fit (function gcFitSpline) to\n circumvent systematic errors that might occur within\n application of parametric methods.","Published":"2014-10-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"gromovlab","Version":"0.7-6","Title":"Gromov-Hausdorff Type Distances for Labeled Metric Spaces","Description":"Computing Gromov-Hausdorff type l^p distances for labeled metric spaces. These distances were introduced in V.Liebscher, Gromov meets Phylogenetics - new Animals for the Zoo of Metrics on Tree Space. 
preprint arXiv:1504.05795, for phylogenetic trees, but may apply to many more situations. ","Published":"2015-07-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"groupdata2","Version":"0.1.0","Title":"Creating Groups from Data","Description":"Subsetting methods for balanced cross-validation, time series windowing,\n and general grouping and splitting of data.","Published":"2017-01-28","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"grouped","Version":"0.6-0","Title":"Regression Analysis of Grouped and Coarse Data","Description":"Regression models for grouped and coarse data, under the\n Coarsened At Random assumption.","Published":"2009-10-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"groupRemMap","Version":"0.1-0","Title":"Regularized Multivariate Regression for Identifying Master\nPredictors Using the GroupRemMap Penalty","Description":"An implementation of the GroupRemMap penalty for fitting regularized multivariate response regression models under the high-dimension-low-sample-size setting. When the predictors naturally fall into groups, the GroupRemMap penalty encourages the procedure to select groups of predictors, while controlling for the overall sparsity of the final model.","Published":"2015-04-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GroupSeq","Version":"1.3.4","Title":"A GUI-Based Program to Compute Probabilities Regarding Group\nSequential Designs","Description":"A graphical user interface to compute group sequential designs\n based on normally distributed test statistics, particularly critical\n boundaries, power, drift, and confidence intervals of such designs. 
All\n computations are based on the alpha spending approach by Lan-DeMets with\n various alpha spending functions being available to choose among.","Published":"2017-02-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"groupsubsetselection","Version":"1.0.3","Title":"Group Subset Selection","Description":"Group subset selection for linear regression models is provided in this package. Given a response variable and explanatory variables organised in groups, group subset selection selects a small number of groups to explain the response variable linearly using least squares. ","Published":"2017-04-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"GroupTest","Version":"1.0.1","Title":"Multiple Testing Procedure for Grouped Hypotheses","Description":"Contains functions for a two-stage multiple testing procedure for grouped hypotheses, aiming at controlling both the total posterior false discovery rate and within-group false discovery rate. ","Published":"2015-11-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"grove","Version":"1.0","Title":"Wavelet Functional ANOVA Through Markov Groves","Description":"Functional denoising and functional ANOVA through wavelet-domain \n Markov groves. For more details see: Ma L. and Soriano J. (2016) \n Efficient functional ANOVA through wavelet-domain Markov groves. 
\n .","Published":"2017-02-05","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"growcurves","Version":"0.2.4.1","Title":"Bayesian Semi and Nonparametric Growth Curve Models that\nAdditionally Include Multiple Membership Random Effects","Description":"Employs a non-parametric formulation for by-subject random effect\n parameters to borrow strength over a constrained number of repeated\n measurement waves in a fashion that permits multiple effects per subject.\n One class of models employs a Dirichlet process (DP) prior for the subject\n random effects and includes an additional set of random effects that\n utilize a different grouping factor and are mapped back to clients through\n a multiple membership weight matrix; e.g. treatment(s) exposure or dosage.\n A second class of models employs a dependent DP (DDP) prior for the subject\n random effects that directly incorporates the multiple membership pattern.","Published":"2016-12-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"growfunctions","Version":"0.13","Title":"Bayesian Non-Parametric Dependent Models for Time-Indexed\nFunctional Data","Description":"Estimates a collection of time-indexed functions under\n either of Gaussian process (GP) or intrinsic Gaussian Markov\n random field (iGMRF) prior formulations where a Dirichlet process\n mixture allows sub-groupings of the functions to share the same\n covariance or precision parameters. 
The GP and iGMRF formulations\n both support any number of additive covariance or precision terms,\n respectively, expressing either or both of multiple trend and\n seasonality.","Published":"2016-08-16","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"GrowingSOM","Version":"0.1.1","Title":"Growing Self-Organizing Maps","Description":"A growing self-organizing map (GrowingSOM, GSOM) is a growing variant of the popular self-organizing map (SOM).\n A growing self-organizing map is a type of artificial neural network (ANN) that is trained using unsupervised learning to produce a \n two-dimensional representation of the input space of the training samples, called a map. ","Published":"2017-02-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"growth","Version":"1.1.0","Title":"Multivariate Normal and Elliptically-Contoured Repeated\nMeasurements Models","Description":"Functions for fitting various normal theory (growth\n curve) and elliptically-contoured repeated measurements models\n with ARMA and random effects dependence.","Published":"2017-02-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"growthcurver","Version":"0.2.1","Title":"Simple Metrics to Summarize Growth Curves","Description":"This is a simple package that fits the logistic equation to\n microbial growth curve data (e.g., repeated absorbance measurements\n taken from a plate reader over time). 
From this fit, a variety of\n metrics are provided, including the maximum growth rate,\n the doubling time, the carrying capacity, the area under the logistic\n curve, and the time to the inflection point.","Published":"2016-03-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"growthmodels","Version":"1.2.0","Title":"Nonlinear Growth Models","Description":"A compilation of nonlinear growth models used in many areas.","Published":"2013-11-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"growthrate","Version":"1.3","Title":"Bayesian reconstruction of growth velocity","Description":"A nonparametric empirical Bayes method for recovering\n gradients (or growth velocities) from observations of smooth\n functions (e.g., growth curves) at isolated time points.","Published":"2014-08-13","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"growthrates","Version":"0.6.5","Title":"Estimate Growth Rates from Experimental Data","Description":"A collection of methods to determine growth rates from\n experimental data, in particular from batch experiments and\n plate reader trials.","Published":"2016-03-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"grplasso","Version":"0.4-5","Title":"Fitting user-specified models with Group Lasso penalty","Description":"Fits user-specified (GLM-) models with Group Lasso penalty.","Published":"2015-01-23","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"grppenalty","Version":"2.1-0","Title":"Concave 1-norm and 2-norm group penalty in linear and logistic\nregression","Description":"The package implements the concave 1-norm and 2-norm group penalty in linear and logistic regression.\t \n\tThe concave 1-norm group penalty includes 1-norm group SCAD and 1-norm group MCP. \n\tThe concave 1-norm group penalty has bi-level selection features. That is, it selects variables at group and individual levels with proper tuning parameters. 
\n\tThe concave 1-norm group penalty is robust to mis-specified group information.\n\tThe concave 2-norm group penalty includes 2-norm group SCAD and 2-norm group MCP. The concave 2-norm group penalty selects variables at the group level only. \n\tThe package can also fit the group Lasso, which is a special case of the concave 2-norm group penalty when the regularization parameter kappa equals zero. \n\tThe highly efficient (block) coordinate descent algorithm (CDA) is used to compute the solutions for both penalties in linear models. \n\tThe highly stable and efficient (block) CDA and minimization-majorization approach are used to compute the solution for both penalties in logistic models. \n\tIn the computation of the solution surface, the solution path along kappa is implemented. \n\tThis provides a better solution path compared to the solution path along lambda. \n\tThe package also provides a tuning parameter selection method based on cross-validation for both linear and logistic models.","Published":"2014-02-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"grpreg","Version":"3.1-1","Title":"Regularization Paths for Regression Models with Grouped\nCovariates","Description":"Efficient algorithms for fitting the regularization path of\n linear or logistic regression models with grouped penalties. This\n includes group selection methods such as group lasso, group MCP, and\n group SCAD as well as bi-level selection methods such as the group\n exponential lasso, the composite MCP, and the group bridge.","Published":"2017-06-08","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"grpregOverlap","Version":"2.2-0","Title":"Penalized Regression Models with Overlapping Grouped Covariates","Description":"Fit the regularization path of linear, logistic or Cox models with \n\toverlapping grouped covariates based on the latent group lasso approach. 
Latent \n\tgroup MCP/SCAD, as well as bi-level selection methods, namely the group exponential \n\tlasso and the composite MCP, are also available. This package serves as an \n\textension of R package 'grpreg' (by Dr. Patrick Breheny )\n\tfor grouped variable selection involving overlaps between groups.","Published":"2016-12-31","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"grpSLOPE","Version":"0.2.1","Title":"Group Sorted L1 Penalized Estimation","Description":"Group SLOPE is a penalized linear regression method that is used\n for adaptive selection of groups of significant predictors in a\n high-dimensional linear model.\n The Group SLOPE method can control the (group) false discovery rate at a\n user-specified level (i.e., control the expected proportion of irrelevant\n among all selected groups of predictors).","Published":"2016-11-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"grpss","Version":"3.0.1","Title":"Group Screening and Selection","Description":"Contains tools to screen grouped variables, and to select screened grouped variables afterwards. The main function grpss() can perform the grouped variables screening as well as selection for ultra-high dimensional data with group structure. 
The screening step is primarily used to reduce the dimensions of data so that the selection procedure can easily handle the moderate or low dimensions instead of ultra-high dimensions.","Published":"2016-01-03","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"GrpString","Version":"0.3.1","Title":"Patterns and Statistical Differences Between Two Groups of\nStrings","Description":"Methods include converting series of event names to strings, finding common patterns\n in a group of strings, discovering \"unique\" patterns when comparing two groups of strings as well\n as the number and starting position of each pattern in each string, obtaining the transition matrix, \n computing transition entropy, statistically comparing the difference between two groups of strings,\n and clustering string groups. Event names can be any action names or labels such as events in log files\n or areas of interest (AOIs) in eye tracking research.","Published":"2017-04-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"grr","Version":"0.9.5","Title":"Alternative Implementations of Base R Functions","Description":"Alternative implementations of some base R functions, including sort, order, and match. 
Functions are simplified but can be faster or have other advantages.","Published":"2016-08-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"GRS.test","Version":"1.0","Title":"GRS Test for Portfolio Efficiency and Its Statistical Power\nAnalysis","Description":"Computational resources for the test proposed by Gibbons, Ross, and Shanken (1989).","Published":"2016-09-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"grt","Version":"0.2","Title":"General Recognition Theory","Description":"Functions to generate and analyze data for psychology\n experiments based on the General Recognition Theory.","Published":"2014-04-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GRTo","Version":"1.3","Title":"Tools for the Analysis of Gutenberg-Richter Distributions of\nEarthquake Magnitudes","Description":"Offers functions for the comparison of Gutenberg-Richter \n\tb-values. Several functions in GRTo are helpful for the assessment of the\n\tquality of seismicity catalogs. ","Published":"2015-09-18","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"GSA","Version":"1.03","Title":"Gene set analysis","Description":"Gene set analysis","Published":"2010-01-04","License":"LGPL","snapshot_date":"2017-06-23"} {"Package":"GSAgm","Version":"1.0","Title":"Gene Set Analysis using the Gamma Method","Description":"GSAgm is an R package that completes a self-contained gene set analysis (GSA) for RNA-seq and SNP data using the Gamma Method. ","Published":"2014-04-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"gsalib","Version":"2.1","Title":"Utility Functions For GATK","Description":"This package contains utility functions used by the Genome Analysis Toolkit (GATK) to load tables and plot data. 
The GATK is a toolkit for variant discovery in high-throughput sequencing data.","Published":"2014-12-10","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"GSAQ","Version":"1.0","Title":"Gene Set Analysis with QTL","Description":"Computation of Quantitative Trait Loci hits in the selected gene set. Performing gene set validation with Quantitative Trait Loci information. Performing gene set enrichment analysis with available Quantitative Trait Loci data and computation of statistical significance value from gene set analysis. Obtaining the list of Quantitative Trait Loci hit genes along with their overlapped Quantitative Trait Loci names.","Published":"2016-09-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gsarima","Version":"0.1-4","Title":"Two functions for Generalized SARIMA time series simulation","Description":"Write SARIMA models in (finite) AR representation and simulate \n\tgeneralized multiplicative seasonal autoregressive moving average (time) series \n\twith Normal / Gaussian, Poisson or negative binomial distribution. 
","Published":"2014-11-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gsbDesign","Version":"1.00","Title":"Group Sequential Bayes Design","Description":"Group Sequential Operating Characteristics for Clinical\n Bayesian two-arm Trials with known Sigma and Normal Endpoints.","Published":"2016-03-29","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"gsDesign","Version":"3.0-1","Title":"Group Sequential Design","Description":"Derives group sequential designs and describes their properties.","Published":"2016-02-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GSE","Version":"4.1","Title":"Robust Estimation in the Presence of Cellwise and Casewise\nContamination and Missing Data","Description":"Robust Estimation of Multivariate Location and Scatter in the\n Presence of Cellwise and Casewise Contamination and Missing Data.","Published":"2016-12-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gsEasy","Version":"1.1","Title":"Gene Set Enrichment Analysis in R","Description":"R-interface to C++ implementation of the rank/score permutation based GSEA test (Subramanian et al 2005 ).","Published":"2016-08-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GSED","Version":"1.5","Title":"Group Sequential Enrichment Design","Description":"Provides function to apply \"Group sequential enrichment design incorporating subgroup selection\" (GSED) method proposed by Magnusson and Turnbull (2013) .","Published":"2017-05-18","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"gSeg","Version":"0.3","Title":"Graph-Based Change-Point Detection (g-Segmentation)","Description":"Uses an approach based on a similarity graph to estimate change-point(s) and the corresponding p-values. Can be applied to any type of data (high-dimensional, non-Euclidean, etc.) 
as long as a reasonable similarity measure is available.","Published":"2016-03-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gSEM","Version":"0.4.3.4","Title":"Semi-Supervised Generalized Structural Equation Modeling","Description":"Conducts a semi-gSEM statistical analysis (semi-supervised generalized structural equation modeling) on a data frame of coincident observations of multiple predictive or intermediate variables and a final continuous outcome variable, via two functions sgSEMp1() and sgSEMp2(), representing fittings based on two statistical principles. Principle 1 determines all sensible univariate relationships in the spirit of the Markovian process. The relationship between each pair of variables, including predictors and the final outcome variable, is determined with the Markovian property that the value of the current predictor is sufficient in relating to the next level variable, i.e., the relationship is independent of the specific value of the preceding-level variables to the current predictor, given the current value. Principle 2 resembles the multiple regression principle in the way multiple predictors are considered simultaneously. Specifically, the relationship of the first-level predictors (such as time and irradiance) to the outcome variable (such as module degradation or yellowing) is fit by a supervised additive model. Then each significant intermediate variable is taken as the new outcome variable and the other variables (except the final outcome variable) as the predictors in investigating the next-level multivariate relationship by a supervised additive model. 
This fitting process is continued until all sensible models are investigated.","Published":"2016-01-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gset","Version":"1.1.0","Title":"Group Sequential Design in Equivalence Studies","Description":"Calculates equivalence and futility boundaries based on the exact bivariate t test statistics for group sequential designs in studies with equivalence hypotheses.","Published":"2014-11-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gsg","Version":"2.0","Title":"Calculation of selection coefficients","Description":"gsg (gam selection gradients) provides a unified approach to the regression analysis of selection from longitudinal data collected from natural populations.","Published":"2014-10-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"gsheet","Version":"0.4.2","Title":"Download Google Sheets Using Just the URL","Description":"Simple package to download Google Sheets using just the sharing\n link. Spreadsheets can be downloaded as a data frame, or as plain text to parse\n manually. Google Sheets is the new name for Google Docs Spreadsheets.","Published":"2016-12-30","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"GSIF","Version":"0.5-4","Title":"Global Soil Information Facilities","Description":"Global Soil Information Facilities - tools (standards and functions) and sample datasets for global soil mapping.","Published":"2017-05-14","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"gskat","Version":"1.0","Title":"GEE_KM","Description":"Family-based association test via GEE Kernel Machine score test","Published":"2013-11-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gsl","Version":"1.9-10.3","Title":"Wrapper for the Gnu Scientific Library","Description":"\n An R wrapper for the special functions and quasi-random number\n generators of the Gnu Scientific Library\n (http://www.gnu.org/software/gsl/). 
See gsl-package.Rd for details of \n overall package organization, and Misc.Rd for some functions that are\n widely used in the package, and some tips on installation.","Published":"2017-01-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GSM","Version":"1.3.2","Title":"Gamma Shape Mixture","Description":"Implementation of a Bayesian approach for estimating a mixture of gamma distributions in which the mixing occurs over the shape parameter. This family provides a flexible and novel approach for modeling heavy-tailed distributions, it is computationally efficient, and it only requires specifying a prior distribution for a single parameter.","Published":"2015-07-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gsmoothr","Version":"0.1.7","Title":"Smoothing tools","Description":"Tools rewritten in C for various smoothing tasks","Published":"2014-06-10","License":"LGPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"GSMX","Version":"0.1","Title":"Multivariate Genomic Selection","Description":"Estimating trait heritability and handling overfitting. This package includes a collection of functions for (1) estimating genetic variance-covariances and calculating trait heritability; and (2) handling overfitting by calculating the variance components and the heritability through cross validation.","Published":"2016-10-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GSODR","Version":"1.0.3","Title":"Global Summary Daily Weather Data in R","Description":"Provides automated downloading, parsing, cleaning, unit conversion\n and formatting of Global Surface Summary of the Day (GSOD) weather data from\n the USA National Centers for Environmental Information (NCEI) for\n use in R. Units are converted from United States Customary System\n (USCS) units to International System of Units (SI). 
Stations may be \n individually checked for the number of missing days defined by the user, where\n stations with too many missing observations are omitted. Only stations with \n valid reported latitude and longitude values are permitted in the final \n data. Additional useful elements, saturation vapour pressure (es), actual \n vapour pressure (ea) and relative humidity are calculated from the original \n data and included in the final data set. The resulting data include station\n identification information, state, country, latitude, longitude, elevation,\n weather observations and associated flags. Data may be automatically saved \n to disk. File output may be returned as a comma-separated values (CSV) or \n GeoPackage (GPKG) file. Additional data are included with this R package: a \n list of elevation values for stations between -60 and 60 degrees latitude \n derived from the Shuttle Radar Topography Mission (SRTM). For \n information on the GSOD data from NCEI, please see the GSOD readme.txt file\n available from .","Published":"2017-06-13","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"GSparO","Version":"1.0","Title":"Group Sparse Optimization","Description":"Approaches a group sparse solution of an underdetermined linear system. It implements the proximal gradient algorithm to solve a lower regularization model of group sparse learning. For details, please refer to the paper \"Y. Hu, C. Li, K. Meng, J. Qin and X. Yang. Group sparse optimization via l_{p,q} regularization. 
Journal of Machine Learning Research, to appear, 2017\".","Published":"2017-02-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gsrc","Version":"1.1","Title":"Genome Structure Rearrangement Calling in Genomes with High\nSynteny","Description":"Pipeline to read and analyze raw SNP array data.\n The data is preprocessed and normalized.\n Genotypes and CNVs are called.\n Synteny blocks are calculated and translocations detected.\n The results can be plotted with special functions.","Published":"2016-10-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"gsrsb","Version":"1.0.3","Title":"Group Sequential Refined Secondary Boundary","Description":"A gate-keeping procedure to test a primary and a secondary endpoint in a group sequential design with multiple interim looks. Computations related to group sequential primary and secondary boundaries. Refined secondary boundaries are calculated for a gate-keeping test on a primary and a secondary endpoint in a group sequential design with multiple interim looks. The choices include both the standard boundaries and the boundaries using error spending functions. Version 1.0.0 was released on April 12, 2017. See Tamhane et al. 
(2017+) \"A gatekeeping procedure to test a primary and a secondary endpoint in a group sequential design with multiple interim looks\", Biometrics, to appear.","Published":"2017-04-28","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"gss","Version":"2.1-7","Title":"General Smoothing Splines","Description":"A comprehensive package for structural multivariate\n function estimation using smoothing splines.","Published":"2017-02-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gsscopu","Version":"0.9-3","Title":"Copula Density and 2-D Hazard Estimation using Smoothing Splines","Description":"A collection of routines for the estimation of copula density\n and 2-D hazard function using smoothing splines.","Published":"2015-07-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GSSE","Version":"0.1","Title":"Genotype-Specific Survival Estimation","Description":"We propose a fully efficient sieve maximum likelihood method to estimate genotype-specific distribution of time-to-event outcomes under a nonparametric model. We can handle missing genotypes in pedigrees. We estimate the time-dependent hazard ratio between two genetic mutation groups using B-splines, while applying nonparametric maximum likelihood estimation to the reference baseline hazard function. 
The estimators are calculated via an expectation-maximization algorithm.","Published":"2015-10-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gstat","Version":"1.1-5","Title":"Spatial and Spatio-Temporal Geostatistical Modelling, Prediction\nand Simulation","Description":"Variogram modelling; simple, ordinary and universal point or block (co)kriging; spatio-temporal kriging; sequential Gaussian or indicator (co)simulation; variogram and variogram map plotting utility functions.","Published":"2017-03-12","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"gsubfn","Version":"0.6-6","Title":"Utilities for strings and function arguments","Description":"gsubfn is like gsub but can take a replacement function\n or certain other objects instead of the replacement string.\n Matches and back references are input to the replacement function and \n replaced by the function output. gsubfn can be used to split strings \n based on content rather than delimiters and for quasi-perl-style string \n interpolation. The package also has facilities for translating formulas \n to functions and allowing such formulas in function calls instead of \n functions. 
This can be used with R functions such as apply, sapply,\n lapply, optim, integrate, xyplot, Filter and any other function that \n expects another function as an input argument or functions like cat\n or sql calls that may involve strings where substitution is desirable.","Published":"2014-08-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gsw","Version":"1.0-3","Title":"Gibbs Sea Water Functions","Description":"Provides an interface to the Gibbs SeaWater (TEOS-10) C library, which derives from Matlab and other code written by WG127 (Working Group 127) of SCOR/IAPSO (Scientific Committee on Oceanic Research / International Association for the Physical Sciences of the Oceans).","Published":"2015-01-19","License":"GPL (>= 2) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"GsymPoint","Version":"1.1.1","Title":"Estimation of the Generalized Symmetry Point, an Optimal\nCutpoint in Continuous Diagnostic Tests","Description":"Estimation of the cutpoint defined by the Generalized Symmetry point in a binary classification setting based on a continuous diagnostic test or marker. Two methods have been implemented to construct confidence intervals for this optimal cutpoint, one based on the Generalized Pivotal Quantity and the other based on Empirical Likelihood. Numerical and graphical outputs for these two methods are easily obtained.","Published":"2017-02-23","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"gsynth","Version":"1.0.3","Title":"Generalized Synthetic Control Method","Description":"Generalized synthetic control method: causal inference with interactive fixed-effect models. It imputes counterfactuals for each treated unit using control group information based on a linear interactive fixed effects model that incorporates unit-specific intercepts interacted with time-varying coefficients. 
This method generalizes the synthetic control method to the case of multiple treated units and variable treatment periods, and improves efficiency and interpretability. Data must be in the form of a balanced panel with a dichotomous treatment.","Published":"2017-03-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"gt4ireval","Version":"2.0","Title":"Generalizability Theory for Information Retrieval Evaluation","Description":"Provides tools to measure the reliability of an Information Retrieval test collection.\n It allows users to estimate reliability using Generalizability Theory and map those estimates onto\n well-known indicators such as Kendall tau correlation or sensitivity.","Published":"2017-03-06","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"gtable","Version":"0.2.0","Title":"Arrange 'Grobs' in Tables","Description":"Tools to make it easier to work with \"tables\" of 'grobs'.","Published":"2016-02-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"gtcorr","Version":"0.2-1","Title":"Calculate efficiencies of group testing algorithms with\ncorrelated responses","Description":"This package provides functions to calculate the\n efficiencies (expected tests per unit) of hierarchical and\n matrix group testing procedures. Efficiencies can be\n calculated in the presence of correlated responses under\n multiple arrangements of clusters. 
Efficiencies can also be\n evaluated in the presence of test error.","Published":"2011-05-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"gte","Version":"1.2-2","Title":"Generalized Turnbull's Estimator","Description":"Generalized Turnbull's estimator proposed by Dehghan and Duchesne\n (2011).","Published":"2015-02-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"gTests","Version":"0.1","Title":"Graph-Based Two-Sample Tests","Description":"Three graph-based tests are provided for testing whether two samples are from the same distribution.","Published":"2016-09-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gtheory","Version":"0.1.2","Title":"Apply Generalizability Theory with R","Description":"Estimates variance components, generalizability coefficients,\n universe scores, and standard errors when observed scores contain variation from\n one or more measurement facets (e.g., items and raters).","Published":"2016-10-30","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"gtools","Version":"3.5.0","Title":"Various R Programming Tools","Description":"Functions to assist in R programming, including:\n - assist in developing, updating, and maintaining R and R packages ('ask', 'checkRVersion',\n 'getDependencies', 'keywords', 'scat'),\n - calculate the logit and inverse logit transformations ('logit', 'inv.logit'),\n - test if a value is missing, empty or contains only NA and NULL values ('invalid'),\n - manipulate R's .Last function ('addLast'),\n - define macros ('defmacro'),\n - detect odd and even integers ('odd', 'even'),\n - convert strings containing non-ASCII characters (like single quotes) to plain ASCII ('ASCIIfy'),\n - perform a binary search ('binsearch'),\n - sort strings containing both numeric and character components ('mixedsort'),\n - create a factor variable from the quantiles of a continuous variable ('quantcut'),\n - enumerate permutations and combinations ('combinations', 'permutations'),\n 
- calculate and convert between fold-change and log-ratio ('foldchange',\n 'logratio2foldchange', 'foldchange2logratio'),\n - calculate probabilities and generate random numbers from Dirichlet distributions\n ('rdirichlet', 'ddirichlet'),\n - apply a function over adjacent subsets of a vector ('running'),\n - modify the TCP\\_NODELAY ('de-Nagle') flag for socket objects,\n - efficient 'rbind' of data frames, even if the column names don't match ('smartbind'),\n - generate significance stars from p-values ('stars.pval'),\n - convert characters to/from ASCII codes.","Published":"2015-05-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"gtop","Version":"0.2.0","Title":"Game-Theoretically OPtimal (GTOP) Reconciliation Method","Description":"In hierarchical time series (HTS) forecasting, the hierarchical relation between multiple time series is exploited to make better forecasts. This hierarchical relation implies one or more aggregate consistency constraints that the series are known to satisfy. Many existing approaches, such as bottom-up or top-down forecasting, therefore attempt to achieve this goal in a way that guarantees that the forecasts will also be aggregate consistent. This package provides an implementation of the Game-Theoretically OPtimal (GTOP) reconciliation method proposed in van Erven and Cugliari (2015), which is guaranteed to only improve any given set of forecasts. This opens up new possibilities for constructing the forecasts. 
For example, it is not necessary to assume that bottom-level forecasts are unbiased, and aggregate forecasts may be constructed by regressing both on bottom-level forecasts and on other covariates that may only be available at the aggregate level.","Published":"2015-03-05","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"gtrendsR","Version":"1.3.5","Title":"Perform and Display Google Trends Queries","Description":"An interface for retrieving and displaying the information returned\n online by Google Trends is provided. Trends (number of hits) over time as\n well as geographic representation of the results can be displayed.","Published":"2016-11-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gtx","Version":"0.0.8","Title":"Genetics ToolboX","Description":"Assorted tools for genetic association analyses. The\n current focus is on implementing (either exactly or\n approximately) regression analyses using summary statistics\n instead of using subject-specific data. So far, functions\n exist to support multi-SNP risk score analyses, multi-SNP\n conditional regression analyses, and multi-phenotype analyses,\n using summary statistics. There are helper functions for\n reading and manipulating subject-specific genotype data, which\n provide a platform for calculating the summary statistics, or\n for using R to conduct other analyses not supported by specific\n GWAS analysis tools.","Published":"2013-01-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GuardianR","Version":"0.8","Title":"The Guardian API Wrapper","Description":"Provides an interface to the Open Platform's Content API of the Guardian Media Group. 
It retrieves content from news outlets The Observer, The Guardian, and guardian.co.uk from 1999 to the current day.","Published":"2016-10-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Guerry","Version":"1.6-1","Title":"Maps, data and methods related to Guerry (1833) \"Moral\nStatistics of France\"","Description":"This package comprises maps of France in 1830, multivariate data from A.-M. Guerry and others, and statistical and \n\tgraphic methods related to Guerry's \"Moral Statistics of France\". The goal is to facilitate the exploration and\n\tdevelopment of statistical and graphic methods for multivariate data in a geo-spatial context of historical interest.","Published":"2014-09-26","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"guess","Version":"0.1","Title":"Adjust Estimates of Learning for Guessing","Description":"Adjust Estimates of Learning for Guessing. The package provides \n standard guessing correction, and a latent class model that leverages\n informative pre-post transitions. For details of the latent class model,\n see .","Published":"2016-02-08","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"GUIDE","Version":"1.2.3.1","Title":"GUI for DErivatives in R","Description":"A nice GUI for financial DErivatives in R.","Published":"2016-09-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"GUIgems","Version":"0.1","Title":"Graphical User Interface for Generalized Multistate Simulation\nModel","Description":"A graphical user interface for the R package Gems. 
\n Apart from the functionality of the Gems package in the graphical user interface, GUIgems\n allows adding states to a defined model, merging states for the analysis, and plotting \n progression paths between states based on the simulated cohort.\n There is also a module in GUIgems which allows comparing costs and QALYs between different cohorts.","Published":"2017-05-18","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"GUILDS","Version":"1.3","Title":"Implementation of Sampling Formulas for the Unified Neutral\nModel of Biodiversity and Biogeography, with or without Guild\nStructure","Description":"A collection of sampling formulas for the unified neutral model of biogeography and biodiversity. Alongside the sampling formulas, it includes methods to perform maximum likelihood optimization of the sampling formulas, methods to generate data given the neutral model, and methods to estimate the expected species abundance distribution. Sampling formulas included in the GUILDS package are the Etienne Sampling Formula (Etienne 2005), the guild sampling formula, where guilds are assumed to differ in dispersal ability (Janzen et al. 2015), and the guilds sampling formula conditioned on guild size (Janzen et al. 
2015).","Published":"2016-09-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"GUIProfiler","Version":"2.0.1","Title":"Graphical User Interface for Rprof()","Description":"Show graphically the results of profiling R functions by tracking their execution time.","Published":"2015-08-23","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"gumbel","Version":"1.10-1","Title":"The Gumbel-Hougaard Copula","Description":"Provides probability functions (cumulative distribution and density functions), simulation function (Gumbel copula multivariate simulation) and estimation functions (Maximum Likelihood Estimation, Inference For Margins, Moment Based Estimation and Canonical Maximum Likelihood).","Published":"2015-07-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GUniFrac","Version":"1.0","Title":"Generalized UniFrac distances","Description":"Generalized UniFrac distance for comparing microbial\n communities. Permutational multivariate analysis of variance\n using multiple distance matrices.","Published":"2012-04-28","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"gunsales","Version":"0.1.2","Title":"Statistical Analysis of Monthly Background Checks of Gun\nPurchases","Description":"Statistical analysis of monthly background checks of gun purchases for the New York Times \n story \"What Drives Gun Sales: Terrorism, Obama and Calls for Restrictions\" at \n is provided.","Published":"2017-01-30","License":"Apache License (== 2)","snapshot_date":"2017-06-23"} {"Package":"gutenbergr","Version":"0.1.3","Title":"Download and Process Public Domain Works from Project Gutenberg","Description":"Download and process public domain works in the Project\n Gutenberg collection . 
Includes metadata for\n all Project Gutenberg works, so that they can be searched and retrieved.","Published":"2017-06-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"GUTS","Version":"1.0.0","Title":"Fast Calculation of the Likelihood of a Stochastic Survival\nModel","Description":"Given exposure and survival time series as well as parameter values, GUTS allows for the fast calculation of the survival probabilities as well as the logarithm of the corresponding likelihood.","Published":"2015-06-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gvc","Version":"0.5.2","Title":"Global Value Chains Tools","Description":"Several tools for Global Value Chain ('GVC') analysis are\n implemented.","Published":"2015-11-09","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"gvcm.cat","Version":"1.9","Title":"Regularized Categorical Effects/Categorical Effect\nModifiers/Continuous/Smooth Effects in GLMs","Description":"Generalized structured regression models with regularized categorical effects, categorical effect modifiers, continuous effects and smooth effects. ","Published":"2015-03-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gvlma","Version":"1.0.0.2","Title":"Global Validation of Linear Models Assumptions","Description":"Methods from the paper: Pena, EA and Slate, EH, \"Global Validation of Linear Model Assumptions,\" J. American Statistical Association, 101(473):341-354, 2006.","Published":"2014-01-21","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"GWAF","Version":"2.2","Title":"Genome-Wide Association/Interaction Analysis and Rare Variant\nAnalysis with Family Data","Description":"Functions for genome-wide association/interaction analysis and rare variant analysis on a continuous/dichotomous trait using family data, and for making genome-wide p-value plot and QQ plot. 
","Published":"2015-03-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GWASExactHW","Version":"1.01","Title":"Exact Hardy-Weinberg testing for Genome Wide Association Studies","Description":"This package contains a function to do exact\n Hardy-Weinberg testing (using Fisher's test) for SNP genotypes\n as typically obtained in a Genome Wide Association Study\n (GWAS).","Published":"2013-01-05","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"gwdegree","Version":"0.1.1","Title":"A Shiny App to Aid Interpretation of Geometrically-Weighted\nDegree Estimates in Exponential Random Graph Models","Description":"This is a Shiny application intended to provide better understanding of how geometrically-weighted degree terms function in exponential random graph models of networks. It contains just one user function, gwdegree(), which launches the Shiny application.","Published":"2016-07-09","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"gwerAM","Version":"1.0","Title":"Controlling the genome-wide type I error rate in association\nmapping experiments","Description":"This package provides functions to calculate the\n significance threshold for controlling the type I error rate in\n mixed-model association mapping analyses.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gwfa","Version":"0.0.4","Title":"Geographically Weighted Fractal Analysis","Description":"Performs Geographically Weighted Fractal Analysis (GWFA) to calculate the local fractal dimension of a set of points. GWFA mixes the Sandbox multifractal algorithm and the Geographically Weighted Regression. Unlike the fractal box-counting algorithm, the sandbox algorithm avoids border effects because the boxes are adjusted on the set of points. 
The Geographically Weighted approach consists of applying a kernel that describes the way the neighbourhood of each estimated point is taken into account to estimate its fractal dimension. GWFA can be used to discriminate built patterns of a city, a region, or a whole country.","Published":"2016-11-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GWG","Version":"1.0","Title":"Calculation of probabilities for inadequate and excessive\ngestational weight gain","Description":"Based on calculations of 758 women, this package calculates\n positive predictive values (PPV) and negative predictive values\n (NPV) for inadequate and excessive gestational weight gain\n (GWG) for different prevalences for different BMI categories.","Published":"2013-01-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gWidgets","Version":"0.0-54","Title":"gWidgets API for building toolkit-independent, interactive GUIs","Description":"gWidgets provides a toolkit-independent API for building interactive GUIs. At least one of the 'gWidgetsXXX packages', such as gWidgetstcltk, needs to be installed. Some icons are on loan from the scigraphica project http://scigraphica.sourceforge.net.","Published":"2014-08-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gWidgets2","Version":"1.0-7","Title":"Rewrite of gWidgets API for Simplified GUI Construction","Description":"Re-implementation of the 'gWidgets' API. The API is defined in this\n package. A second, toolkit-specific package is required to use it. 
There\n are three in development: 'gWidgets2RGtk2', 'gWidgets2Qt', and 'gWidgets2tcltk'.","Published":"2016-06-01","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"gWidgets2RGtk2","Version":"1.0-5","Title":"Implementation of gWidgets2 for the RGtk2 Package","Description":"Implements the 'gWidgets2' API for 'RGtk2.'","Published":"2016-06-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"gWidgets2tcltk","Version":"1.0-5","Title":"Toolkit Implementation of gWidgets2 for tcltk","Description":"Port of the 'gWidgets2' API for the 'tcltk' package.","Published":"2016-06-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gWidgetsRGtk2","Version":"0.0-83","Title":"Toolkit implementation of gWidgets for RGtk2","Description":"Port of gWidgets API to RGtk2","Published":"2014-11-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gWidgetstcltk","Version":"0.0-55","Title":"Toolkit implementation of gWidgets for tcltk package","Description":"Port of the gWidgets API to the tcltk package. Requires Tk 8.5 or greater.","Published":"2014-07-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GWLelast","Version":"1.1","Title":"Geographically Weighted Logistic Elastic Net Regression","Description":"Fit a geographically weighted logistic elastic net regression.","Published":"2015-04-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"GWmodel","Version":"2.0-4","Title":"Geographically-Weighted Models","Description":"In GWmodel, we introduce techniques from a particular branch of spatial statistics, termed geographically-weighted (GW) models. GW models suit situations when data are not described well by some global model, but where there are spatial regions where a suitably localised calibration provides a better description. 
GWmodel includes functions to calibrate: GW summary statistics, GW principal components analysis, GW discriminant analysis and various forms of GW regression; some of which are provided in basic and robust (outlier resistant) forms.","Published":"2017-05-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"gWQS","Version":"1.0.0","Title":"Generalized Weighted Quantile Sum Regression","Description":"Fits Weighted Quantile Sum (WQS) regressions for continuous or binomial outcomes.","Published":"2016-10-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GWRM","Version":"2.1.0.2","Title":"Generalized Waring Regression Model for Count Data","Description":"Statistical functions to fit, validate and describe a Generalized\n Waring Regression Model (GWRM).","Published":"2016-04-13","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"gwrr","Version":"0.2-1","Title":"Fits geographically weighted regression models with diagnostic\ntools","Description":"Fits geographically weighted regression (GWR) models and\n has tools to diagnose and remediate collinearity in the GWR\n models. Also fits geographically weighted ridge regression\n (GWRR) and geographically weighted lasso (GWL) models.","Published":"2013-06-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GWsignif","Version":"1.2","Title":"Estimating Genome-Wide Significance for Whole Genome Sequencing\nStudies, Either Single SNP Tests or Region-Based Tests","Description":"The correlations and linkage disequilibrium between tests can vary as a function of minor allele frequency thresholds used to filter variants, and also varies with different choices of test statistic for region-based tests. Appropriate genome-wide significance thresholds can be estimated empirically through permutation on only a small proportion of the whole genome. 
","Published":"2016-09-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"GxM","Version":"1.1","Title":"Maximum Likelihood Estimation for Gene-by-Measured Environment\nInteraction Models","Description":"Quantifying and testing gene-by-measured-environment interaction in behavior genetic designs.","Published":"2014-09-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"gym","Version":"0.1.0","Title":"Provides Access to the OpenAI Gym API","Description":"OpenAI Gym is an open-source Python toolkit for developing and comparing\n reinforcement learning algorithms. This is a wrapper for the OpenAI Gym API,\n and enables access to an ever-growing variety of environments.\n For more details on OpenAI Gym, please see here: .\n For more details on the OpenAI Gym API specification, please see here:\n .","Published":"2016-10-25","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"gyriq","Version":"1.0.2","Title":"Kinship-Adjusted Survival SNP-Set Analysis","Description":"SNP-set association testing for censored phenotypes in the presence of intrafamilial correlation.","Published":"2016-01-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"h2o","Version":"3.10.5.2","Title":"R Interface for H2O","Description":"R scripting functionality for H2O, the open source\n math engine for big data that computes parallel distributed\n machine learning algorithms such as generalized linear models,\n gradient boosting machines, random forests, and neural networks\n (deep learning) within various cluster environments.","Published":"2017-06-23","License":"Apache License (== 2.0)","snapshot_date":"2017-06-23"} {"Package":"h5","Version":"0.9.8","Title":"Interface to the 'HDF5' Library","Description":"S4 Interface to the 'HDF5' library supporting fast storage and\n retrieval of R-objects like vectors, matrices and arrays to binary files in\n a language independent format. 
The 'HDF5' format can therefore be used as\n an alternative to R's save/load mechanism. Since h5 is able to access only\n subsets of stored data it can also handle data sets which do not fit into\n memory.","Published":"2016-07-16","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"haarfisz","Version":"4.5","Title":"Software to perform Haar Fisz transforms","Description":"A Haar-Fisz algorithm for Poisson intensity estimation","Published":"2010-03-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"HAC","Version":"1.0-5","Title":"Estimation, Simulation and Visualization of Hierarchical\nArchimedean Copulae (HAC)","Description":"Package provides the estimation of the structure and the parameters, sampling methods and structural plots of Hierarchical Archimedean Copulae (HAC).","Published":"2016-11-21","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"HadoopStreaming","Version":"0.2","Title":"Utilities for using R scripts in Hadoop streaming","Description":"Provides a framework for writing map/reduce scripts for\n use in Hadoop Streaming. Also facilitates operating on data in\n a streaming fashion, without Hadoop.","Published":"2012-10-29","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"hail","Version":"0.1.1","Title":"Read HYDRA Rainfall Data","Description":"Read data from the City of Portland's 'HYDRA' rainfall datasets within R.","Published":"2017-01-05","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"hamlet","Version":"0.9.5","Title":"Hierarchical Optimal Matching and Machine Learning Toolbox","Description":"Various functions and algorithms are provided here for solving optimal matching tasks in the context of preclinical cancer studies. 
Further, various helper and plotting functions are provided for unsupervised and supervised machine learning as well as longitudinal mixed-effects modeling of tumor growth response patterns.","Published":"2016-10-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"HandTill2001","Version":"0.2-12","Title":"Multiple Class Area under ROC Curve","Description":"An S4 implementation of Eq. (3) and Eq. (7) by David J. Hand and \n Robert J. Till (2001) .","Published":"2016-05-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Hankel","Version":"0.0-1","Title":"Univariate non-parametric two-sample test based on empirical\nHankel transforms","Description":"Provides an R routine for a Cramer-von Mises type two sample test which is based on empirical Hankel transforms of the non-negative sample variables. The test is non-parametric and not distribution free. The exact value of the test statistic for univariate data as well as the p-value and the critical value are computed.","Published":"2014-05-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"hansard","Version":"0.4.6","Title":"Provides Easy Downloading Capabilities for the UK Parliament API","Description":"Provides functions to download data from the APIs. Because of the structure of the API, there is a named function for each type of available data for ease of use, as well as some functions designed to retrieve specific pieces of commonly used data. 
Functions for each new API will be added as and when they become available.","Published":"2017-06-05","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"HAP.ROR","Version":"1.0","Title":"Recursive Organizer (ROR)","Description":"Functions to perform ROR for sequence-based association\n analysis","Published":"2013-03-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"hapassoc","Version":"1.2-8","Title":"Inference of Trait Associations with SNP Haplotypes and Other\nAttributes using the EM Algorithm","Description":"The following R functions are used for inference of trait\n associations with haplotypes and other covariates in\n generalized linear models. The functions are developed\n primarily for data collected in cohort or cross-sectional\n studies. They can accommodate uncertain haplotype phase and\n handle missing genotypes at some SNPs.","Published":"2015-05-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"HapEstXXR","Version":"0.1-8","Title":"Multi-Locus Stepwise Regression","Description":"The multi-locus stepwise regression (MSR) combines the advantages of stepwise regression and haplotype-based analysis. The MSR can be used to identify informative combinations of single nucleotide polymorphisms (SNPs) from unlinked SNPs (allele combinations) or SNPs within a chromosomal region (haplotypes).","Published":"2015-06-02","License":"LGPL (>= 2.1)","snapshot_date":"2017-06-23"} {"Package":"HAPim","Version":"1.3","Title":"HapIM","Description":"The package provides a set of functions whose aim is to\n propose 4 methods of QTL detection. HAPimLD is an\n interval-mapping method designed for unrelated individuals with\n no family information that makes use of linkage disequilibrium.\n HAPimLDL is an interval-mapping method for design of half-sib\n families. It combines linkage analysis and linkage\n disequilibrium. HaploMax is based on an analysis of variance\n with a dose haplotype effect. 
HaploMaxHS is based on an\n analysis of variance with a sire effect and a dose haplotype\n effect in half-sib family design. Funding for the package\n development was provided to the LDLmapQTL project by the ANR\n GENANIMAL program and APIS-GENE.","Published":"2012-10-29","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"Haplin","Version":"6.2.0","Title":"Analyzing Case-Parent Triad and/or Case-Control Data with SNP\nHaplotypes","Description":"Performs genetic association analyses of case-parent triad (trio) data with multiple markers. It can also incorporate complete or incomplete control triads, for instance independent control children. Estimation is based on haplotypes, for instance SNP haplotypes, even though phase is not known from the genetic data. Haplin estimates relative risk (RR + conf.int.) and p-value associated with each haplotype. It uses maximum likelihood estimation to make optimal use of data from triads with missing genotypic data, for instance if some SNPs have not been typed for some individuals. Haplin also allows estimation of effects of maternal haplotypes and parent-of-origin effects, particularly appropriate in perinatal epidemiology. Haplin allows special models, like X-inactivation, to be fitted on the X-chromosome. 
A GxE analysis allows testing interactions between environment and all estimated genetic effects.","Published":"2017-02-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"haplo.ccs","Version":"1.3.1","Title":"Estimate Haplotype Relative Risks in Case-Control Data","Description":"'haplo.ccs' estimates haplotype and covariate relative\n risks in case-control data by weighted logistic regression.\n Diplotype probabilities, which are estimated by EM computation\n with progressive insertion of loci, are utilized as weights.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"haplo.stats","Version":"1.7.7","Title":"Statistical Analysis of Haplotypes with Traits and Covariates\nwhen Linkage Phase is Ambiguous","Description":"Routines for the analysis of indirectly measured haplotypes. The statistical methods assume that all subjects are unrelated and that haplotypes are ambiguous (due to unknown linkage phase of the genetic markers). The main functions are: haplo.em(), haplo.glm(), haplo.score(), and haplo.power(); all of which have detailed examples in the vignette.","Published":"2016-04-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"haploR","Version":"1.4.6","Title":"Query HaploReg and RegulomeDB","Description":"A set of utilities for querying \n HaploReg \n and RegulomeDB web-based tools. The package connects to \n HaploReg or RegulomeDB, searches and downloads results, without \n opening web pages, directly from R environment. 
\n Results are stored in a data frame that can be directly used in various \n kinds of downstream analyses.","Published":"2017-06-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"haploReconstruct","Version":"0.1.2","Title":"Reconstruction of Haplotype-Blocks from Time Series Data","Description":"Reconstruction of founder haplotype blocks from time series data.","Published":"2016-10-10","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"HaploSim","Version":"1.8.4","Title":"Functions to simulate haplotypes","Description":"Simulate haplotypes through meioses. Allows specification\n of population parameters.","Published":"2013-11-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"haplotyper","Version":"0.1","Title":"Tool for Clustering Genotypes in Haplotypes","Description":"Function to identify haplotypes\n within QTL (Quantitative Trait Loci). One haplotype is a combination of SNP\n (Single Nucleotide Polymorphisms) within the QTL. This function groups\n together all individuals of a population with the same haplotype.\n Each group contains individuals with the same allele in each SNP,\n with or without missing data. Thus, haplotyper groups individuals\n that, once imputed, have a non-zero probability of having the same alleles\n in the entire sequence of SNPs. Moreover, haplotyper calculates such\n probability from relative frequencies.","Published":"2016-04-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"haplotypes","Version":"1.0","Title":"Haplotype Inference and Statistical Analysis of Genetic\nVariation","Description":"Provides S4 classes and methods for reading and manipulating aligned DNA sequences, supporting indel coding methods (only the simple indel coding method is available in the current version), showing base substitutions and indels, calculating absolute pairwise distances between DNA sequences, and inferring haplotypes from DNA sequences or a user-provided absolute character difference matrix. 
This package also includes S4 classes and methods for estimating genealogical relationships among haplotypes using statistical parsimony. ","Published":"2015-04-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"hapsim","Version":"0.31","Title":"Haplotype Data Simulation","Description":"Package for haplotype-based genotype simulations. Haplotypes are\n generated such that their allele frequencies and linkage\n disequilibrium coefficients match those estimated from an input\n data set.","Published":"2017-06-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"HardyWeinberg","Version":"1.5.8","Title":"Statistical Tests and Graphics for Hardy-Weinberg Equilibrium","Description":"Contains tools for exploring Hardy-Weinberg equilibrium for\n diallelic genetic marker data. All classical tests (chi-square, exact,\n likelihood-ratio and permutation tests) for Hardy-Weinberg equilibrium\n are included in the package, as well as functions for power computation and\n for the simulation of marker data under equilibrium and disequilibrium.\n Routines for dealing with markers on the X-chromosome are included.\n Functions for testing equilibrium in the presence of missing data by\n using multiple imputation are also provided. Implements several graphics\n for exploring the equilibrium status of a large set of diallelic markers: \n ternary plots with acceptance regions, log-ratio plots and Q-Q plots. ","Published":"2017-05-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"HarmonicRegression","Version":"1.0","Title":"Harmonic Regression to One or more Time Series","Description":"Fits the first harmonics in a Fourier expansion to one or more time series. Trend elimination can be performed. 
Computed values include estimates of amplitudes and phases, as well as confidence intervals and p-values for the null hypothesis of Gaussian noise.","Published":"2015-04-01","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"harrietr","Version":"0.2.2","Title":"Wrangle Phylogenetic Distance Matrices and Other Utilities","Description":"Harriet was Charles Darwin's pet tortoise (possibly). 'harrietr'\n implements some functions to manipulate distance matrices and phylogenetic trees\n to make it easier to plot with 'ggplot2' and to manipulate using 'tidyverse'\n tools.","Published":"2017-02-22","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"HARtools","Version":"0.0.5","Title":"Read HTTP Archive ('HAR') Data","Description":"The goal of 'HARtools' is to provide a simple set of functions\n to read/parse, write and visualise HTTP Archive ('HAR') files in R.","Published":"2016-11-09","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"Harvest.Tree","Version":"1.1","Title":"Harvest the Classification Tree","Description":"Aimed at applying the Harvest classification tree algorithm, a modified version of the classic classification tree. The harvested tree has the advantage of deleting redundant rules, leading to a simpler and more efficient tree model. It was first used in the drug discovery field, but it also performs well on other kinds of data, especially when the region of a class is disconnected. This package also improves the basic harvest classification tree algorithm by extending the algorithm to both continuous and categorical variables. To learn more about the harvest classification tree algorithm, you can go to http://www.stat.ubc.ca/Research/TechReports/techreports/220.pdf for more information. 
","Published":"2015-07-31","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"harvestr","Version":"0.7.1","Title":"A Parallel Simulation Framework","Description":"Functions for easy and reproducible simulation.","Published":"2016-08-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"hash","Version":"2.2.6","Title":"Full feature implementation of hash/associated\narrays/dictionaries","Description":"This package implements a data structure similar to hashes\n in Perl and dictionaries in Python but with a purposefully R\n flavor. For objects of appreciable size, access using hashes\n outperforms native named lists and vectors.","Published":"2013-02-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"hashFunction","Version":"1.0","Title":"A collection of non-cryptographic hash functions","Description":"This package provides common non-cryptographic hash\n functions for R. For example, SpookyHash, Murmur3Hash, Google\n CityHash.","Published":"2013-03-25","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"hashids","Version":"0.9.0","Title":"Generate Short Unique YouTube-Like IDs (Hashes) from Integers","Description":"An R port of the hashids library. hashids generates YouTube-like hashes from integers or a vector of integers. Hashes generated from integers are relatively short, unique and non-sequential. hashids can be used to generate unique ids for URLs and hide database row numbers from the user. By default hashids will avoid generating common English cursewords by preventing certain letters being next to each other. 
hashids are not one-way: it is easy to encode an integer to a hashid and decode a hashid back into an integer.","Published":"2015-09-11","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"hashmap","Version":"0.2.0","Title":"The Faster Hash Map","Description":"Provides a hash table class for fast\n key-value storage of atomic vector types.\n Internally, 'hashmap' makes extensive use of 'Rcpp', 'boost::variant',\n and 'boost::unordered_map' to achieve high performance, type-safety,\n and versatility, while maintaining compliance with the C++98 standard.","Published":"2017-03-06","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"hashr","Version":"0.1.0","Title":"Hash R Objects to Integers Fast","Description":"Apply the SuperFastHash algorithm to any R object. Hash whole R objects or, \n for vectors or lists, hash R objects to obtain a set of hash values that is stored \n in a structure equivalent to the input. ","Published":"2015-08-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"hasseDiagram","Version":"0.1.3","Title":"Drawing Hasse Diagram","Description":"Drawing Hasse diagram - visualization of transitive reduction of a finite partially ordered set.","Published":"2017-02-24","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"haven","Version":"1.0.0","Title":"Import and Export 'SPSS', 'Stata' and 'SAS' Files","Description":"Import foreign statistical formats into R via the embedded\n 'ReadStat' C library (https://github.com/WizardMac/ReadStat).","Published":"2016-09-23","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"hawkes","Version":"0.0-4","Title":"Hawkes process simulation and calibration toolkit","Description":"The package allows simulation of Hawkes processes in both univariate and multivariate settings. 
It gives functions to compute different moments of the number of jumps of the process on a given interval, such as mean, variance or autocorrelation of process jumps on time intervals separated by a lag.","Published":"2014-05-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"hazus","Version":"0.1","Title":"Damage functions from FEMA's HAZUS software for use in modeling\nfinancial losses from natural disasters","Description":"Damage Functions (DFs), also known as\n Vulnerability Functions, associate the physical damage\n to a building or a structure (and also its contents and\n inventory) from natural disasters to financial damage.\n The Federal Emergency Management Agency (FEMA) in the USA\n developed several thousand DFs and these serve as a\n benchmark in natural catastrophe modeling, both in\n academia and industry. However, these DFs and their\n documentation are buried within the HAZUS software and are\n not easily accessible for analysis and visualization.\n This package provides more than 1300 raw DFs used by FEMA's\n HAZUS software and also functionality to extract and\n visualize DFs specific to the flood hazard. The vignette\n included with this package demonstrates its use.","Published":"2014-06-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"hBayesDM","Version":"0.4.0","Title":"Hierarchical Bayesian Modeling of Decision-Making Tasks","Description":"Fit an array of decision-making tasks with computational models in\n a hierarchical Bayesian framework. Can perform hierarchical Bayesian analysis of\n various computational models with a single line of coding.","Published":"2017-05-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"HBglm","Version":"0.1","Title":"Hierarchical Bayesian Regression for GLMs","Description":"Convenient and efficient functions for performing 2-level hierarchical Bayesian regression analysis for multi-group data. 
The lowest level may belong to the generalized linear model (GLM) family while the prior level, which effects pooling, allows for linear regression on lower level covariates. Constraints on all or part of the parameter set may be specified with ease. A rich set of methods is included to visualize and analyze results.","Published":"2015-07-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"hbim","Version":"1.0.3","Title":"Hill/Bliss Independence Model for Combination Vaccines","Description":"Calculate expected relative risk and proportion protected assuming normally distributed log10 transformed antibody dose for several component vaccines. Uses Hill models for each component which are combined under Bliss independence. ","Published":"2014-03-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"hbm","Version":"1.0","Title":"Hierarchical Block Matrix Analysis","Description":"A package for building hierarchical block matrices from association matrices and for performing multi-scale analysis. It specifically targets chromatin contact maps, generated from high-throughput chromosome conformation capture data, such as 5C and Hi-C, and provides methods for detecting movements and for computing chain hierarchy and region communicability across scales.","Published":"2015-02-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"hbmem","Version":"0.3","Title":"Hierarchical Bayesian Analysis of Recognition Memory","Description":"Contains functions for fitting hierarchical versions of\n EVSD, UVSD, DPSD, DPSD with d' restricted to be positive, and\n our gamma signal detection model to recognition memory\n confidence-ratings data.","Published":"2012-12-20","License":"LGPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"hbsae","Version":"1.0","Title":"Hierarchical Bayesian Small Area Estimation","Description":"Functions to compute small area estimates based on a basic\n area or unit-level model. 
The model is fit using restricted\n maximum likelihood, or in a hierarchical Bayesian way. In the\n latter case numerical integration is used to average over the\n posterior density for the between-area variance. The output\n includes the model fit, small area estimates and corresponding\n MSEs, as well as some model selection measures. Additional\n functions provide means to compute aggregate estimates and\n MSEs, to minimally adjust the small area estimates to\n benchmarks at a higher aggregation level, and to graphically\n compare different sets of small area estimates.","Published":"2012-09-05","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"HBSTM","Version":"1.0.1","Title":"Hierarchical Bayesian Space-Time models for Gaussian space-time\ndata","Description":"This package fits Hierarchical Bayesian Space-Time models for Gaussian data. Furthermore, its functions have been implemented for analysing the fitting quality of those models.","Published":"2014-01-18","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"hcc","Version":"0.54","Title":"Hidden correlation check","Description":"A new diagnostic check for model adequacy in regression\n and generalized linear models is implemented.","Published":"2013-03-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"hcci","Version":"1.0.0","Title":"Interval estimation for the parameters of linear models with\nheteroskedasticity (Wild Bootstrap)","Description":"This package calculates interval estimates for the parameters of \n heteroscedastic linear regression models using the bootstrap (Wild Bootstrap) and the double\n bootstrap-t (Wild Bootstrap). It is also possible to calculate confidence intervals using\n the percentile bootstrap and the double percentile bootstrap. It is possible to calculate\n consistent estimates of the covariance matrix of the parameters of linear regression models\n with heteroskedasticity of unknown form. 
The package also provides a function to consistently calculate\n the covariance matrix of the parameters of linear models with heteroskedasticity\n of unknown form.","Published":"2014-02-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"hcp","Version":"0.1","Title":"Change Point Estimation for Regression with Heteroscedastic Data","Description":"Estimation of parameters in 3-segment (i.e. 2 change-point)\n regression models with heteroscedastic variances is provided based on both\n likelihood and hybrid Bayesian approaches, with and without continuity\n constraints at the change points.","Published":"2014-11-05","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"hda","Version":"0.2-14","Title":"Heteroscedastic Discriminant Analysis","Description":"Functions to perform dimensionality reduction for classification if the covariance matrices of the classes are unequal. ","Published":"2016-03-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"HDCI","Version":"1.0-2","Title":"High Dimensional Confidence Interval Based on Lasso and\nBootstrap","Description":"Fits regression models on high dimensional data to estimate coefficients and uses bootstrap methods to obtain confidence intervals. Choices for regression models are Lasso, Lasso+OLS, Lasso partial ridge, and Lasso+OLS partial ridge. 
","Published":"2017-06-06","License":"GNU General Public License version 2","snapshot_date":"2017-06-23"} {"Package":"HDclassif","Version":"2.0.2","Title":"High Dimensional Supervised Classification and Clustering","Description":"Discriminant analysis and data clustering methods for high\n dimensional data, based on the assumption that high-dimensional data live in\n different subspaces with low dimensionality, proposing a new parametrization of\n the Gaussian mixture model which combines the ideas of dimension reduction and\n constraints on the model.","Published":"2016-12-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"HDDesign","Version":"1.1","Title":"Sample Size Calculation for High Dimensional Classification\nStudy","Description":"Determine the sample size requirement to achieve the target probability of correct classification (PCC) for studies employing high-dimensional features. The package implements functions to 1) determine the asymptotic feasibility of the classification problem; 2) compute the upper bounds of the PCC for any linear classifier; 3) estimate the PCC of three design methods given design assumptions; 4) determine the sample size requirement to achieve the target PCC for three design methods. ","Published":"2016-06-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"hddplot","Version":"0.57-2","Title":"Use Known Groups in High-Dimensional Data to Derive Scores for\nPlots","Description":"Cross-validated linear discriminant calculations determine\n the optimum number of features. Test and training scores from\n successive cross-validation steps determine, via a principal\n components calculation, a low-dimensional global space onto which test\n scores are projected, in order to plot them. 
Further functions are\n included that serve didactic purposes.","Published":"2016-11-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"hddtools","Version":"0.7","Title":"Hydrological Data Discovery Tools","Description":"Facilitates discovery and handling of hydrological data, access to catalogues and databases.","Published":"2017-04-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"hdeco","Version":"0.4.1","Title":"Hierarchical DECOmposition of Entropy for Categorical Map\nComparisons","Description":"A flexible and hierarchical framework for comparing categorical map composition and configuration (spatial pattern) along spatial, thematic, or external grouping variables. Comparisons are based on measures of mutual information between thematic classes (colours) and location (spatial partitioning). Results are returned in textual, tabular, and graphical forms.","Published":"2009-02-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"HDGLM","Version":"0.1","Title":"Tests for High Dimensional Generalized Linear Models","Description":"Test the significance of coefficients in high dimensional generalized linear models.","Published":"2015-10-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"hdi","Version":"0.1-6","Title":"High-Dimensional Inference","Description":"Implementation of multiple approaches to perform inference in high-dimensional models.","Published":"2016-03-21","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"HDInterval","Version":"0.1.3","Title":"Highest (Posterior) Density Intervals","Description":"A generic function and a set of methods to calculate highest density intervals for a variety of classes of objects which can specify a probability density distribution, including MCMC output, fitted density objects, and functions.","Published":"2016-05-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"hdlm","Version":"1.3.1","Title":"Fitting High Dimensional Linear 
Models","Description":"Mimics the lm() function found in the package stats to fit\n high dimensional regression models with point estimates,\n standard errors, and p-values. Methods for printing and\n summarizing the results are given.","Published":"2016-09-20","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"hdm","Version":"0.2.0","Title":"High-Dimensional Metrics","Description":"Implementation of selected high-dimensional statistical and\n econometric methods for estimation and inference. Efficient estimators and\n uniformly valid confidence intervals for various low-dimensional causal/\n structural parameters are provided which appear in high-dimensional\n approximately sparse models. Including functions for fitting heteroscedastic\n robust Lasso regressions with non-Gaussian errors and for instrumental variable\n (IV) and treatment effect estimation in a high-dimensional setting. Moreover,\n the methods enable valid post-selection inference and rely on a theoretically\n grounded, data-driven choice of the penalty.","Published":"2016-06-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"HDMD","Version":"1.2","Title":"Statistical Analysis Tools for High Dimension Molecular Data\n(HDMD)","Description":"High Dimensional Molecular Data (HDMD) typically have many\n more variables or dimensions than observations or replicates\n (D>>N). This can cause many statistical procedures to fail,\n become intractable, or produce misleading results. This\n package provides several tools to reduce dimensionality and\n analyze biological data for meaningful interpretation of\n results. Factor Analysis (FA), Principal Components Analysis\n (PCA) and Discriminant Analysis (DA) are frequently used\n multivariate techniques. However, PCA methods prcomp and\n princomp do not reflect the proportion of total variation of\n each principal component. 
Loadings.variation displays the\n relative and cumulative contribution of variation for each\n component by accounting for all variability in data. When D>>N,\n the maximum likelihood method cannot be applied in FA and the\n principal axes method must be used instead, as in factor.pa\n of the psych package. The factor.pa.ginv function in this\n package further allows for a singular covariance matrix by\n applying a general inverse method to estimate factor scores.\n Moreover, factor.pa.ginv removes and warns of any variables\n that are constant, which would otherwise create an invalid\n covariance matrix. Promax.only further allows users to define\n rotation parameters during factor estimation. Similar to the\n Euclidean distance, the Mahalanobis distance estimates the\n relationship among groups. pairwise.mahalanobis computes all\n such pairwise Mahalanobis distances among groups and is useful\n for quantifying the separation of groups in DA. Genetic\n sequences are composed of discrete alphabetic characters, which\n makes estimates of variability difficult. MolecularEntropy and\n MolecularMI calculate the entropy and mutual information to\n estimate variability and covariability, respectively, of DNA or\n Amino Acid sequences. Functional grouping of amino acids\n (Atchley et al 1999) is also available for entropy and mutual\n information estimation. Mutual information values can be\n normalized by NMI to account for the background distribution\n arising from the stochastic pairing of independent, random\n sites. Alternatively, discrete alphabetic sequences can be\n transformed into biologically informative metrics to be used in\n various multivariate procedures. 
FactorTransform converts\n amino acid sequences using the amino acid indices determined by\n Atchley et al 2005.","Published":"2013-02-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"hdnom","Version":"4.8","Title":"Benchmarking and Visualization Toolkit for Penalized Cox Models","Description":"Creates nomogram visualizations for penalized Cox regression\n models, with the support of reproducible survival model building,\n validation, calibration, and comparison for high-dimensional data.","Published":"2017-03-25","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"HDoutliers","Version":"0.15","Title":"Leland Wilkinson's Algorithm for Detecting Multidimensional\nOutliers","Description":"An implementation of an algorithm for outlier detection that can handle a) data with mixed categorical and continuous variables, b) many columns of data, c) many rows of data, d) outliers that mask other outliers, and e) both unidimensional and multidimensional datasets. Unlike ad hoc methods found in many machine learning papers, HDoutliers is based on a distributional model that uses probabilities to determine outliers. 
See .","Published":"2016-12-24","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"hdpca","Version":"1.0.0","Title":"Principal Component Analysis in High-Dimensional Data","Description":"In high-dimensional settings:\n\tEstimate the number of distant spikes based on the Generalized Spiked Population (GSP) model.\n\tEstimate the population eigenvalues, angles between the sample and population eigenvectors, correlations between the sample and population PC scores, and the asymptotic shrinkage factors.\n\tAdjust the shrinkage bias in the predicted PC scores.","Published":"2016-08-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"HDPenReg","Version":"0.93.1","Title":"High-Dimensional Penalized Regression","Description":"Algorithms for lasso and fused-lasso problems: implementation of\n the lars algorithm for lasso and fusion penalization and EM-based\n algorithms for (logistic) lasso and fused-lasso penalization.","Published":"2016-01-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"hdr","Version":"0.1","Title":"Interface to the UNDR Human Development Report API","Description":"Provides a complete interface to the United Nations\n Development Programme Human Development Report API (). 
The API\n includes a large amount of human development data, including all the series used\n to compute the Human Development Index (HDI), as well as the HDI itself.","Published":"2015-12-31","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"hdrcde","Version":"3.1","Title":"Highest density regions and conditional density estimation","Description":"Computation of highest density regions in one and two dimensions,\n kernel estimation of univariate density functions conditional on one covariate,\n and multimodal regression.","Published":"2013-10-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"hds","Version":"0.8.1","Title":"Hazard Discrimination Summary","Description":"Functions for calculating the hazard discrimination summary and its\n standard errors, as described in Liang and Heagerty (2016) .","Published":"2016-12-31","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"HDtest","Version":"0.1","Title":"High Dimensional Hypothesis Testing for Mean Vectors, Covariance\nMatrices, and White Noise of Vector Time Series","Description":"High dimensional testing procedures on mean, covariance and white noises.","Published":"2016-12-30","License":"Apache License (== 2.0)","snapshot_date":"2017-06-23"} {"Package":"HDtweedie","Version":"1.1","Title":"The Lasso for the Tweedie's Compound Poisson Model Using an\nIRLS-BMD Algorithm","Description":"This package implements an iteratively reweighted least squares (IRLS) strategy that incorporates a blockwise majorization descent (BMD) method, for efficiently computing the solution paths of the (grouped) lasso and the (grouped) elastic net for the Tweedie model.","Published":"2013-10-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"healthcareai","Version":"0.1.12","Title":"Tools for Healthcare Machine Learning","Description":"A machine learning toolbox tailored to healthcare data.\n Aids in data cleaning, model development, hyperparameter tuning, and model\n deployment in a production 
SQL environment. Algorithms currently supported\n are Lasso, Random Forest, and Linear Mixed Model.","Published":"2017-05-05","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"HEAT","Version":"1.2","Title":"Health Effects of Air Pollution and Temperature (HEAT)","Description":"Timeseries analysis is conducted using Korean mortality and environmental variables","Published":"2013-10-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"heatex","Version":"1.0","Title":"Heat exchange calculations during physical activity","Description":"The heatex package calculates heat storage in the body and\n the components of heat exchange (conductive, convective,\n radiative, and evaporative) between the body and the\n environment during physical activity based on the principles of\n partitional calorimetry. The program enables heat exchange\n calculations for a range of environmental conditions when\n wearing various clothing ensembles.","Published":"2013-02-28","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"heatmap.plus","Version":"1.3","Title":"Heatmap with more sensible behavior","Description":"Allows heatmap matrix to have non-identical X- and\n Y-dimensions. Allows multiple tracks of annotation for\n RowSideColors and ColSideColors.","Published":"2012-10-29","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"heatmap3","Version":"1.1.1","Title":"An Improved Heatmap Package","Description":"An improved heatmap package. 
Completely\n compatible with the original R function 'heatmap',\n and provides more powerful and convenient features.","Published":"2015-04-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"heatmapFit","Version":"2.0.4","Title":"Fit Statistic for Binary Dependent Variable Models","Description":"Generates a fit plot for diagnosing misspecification in models of\n binary dependent variables, and calculates the related heatmap fit\n statistic described in Esarey and Pierce (2012) .","Published":"2016-06-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"heatmaply","Version":"0.10.1","Title":"Interactive Cluster Heat Maps Using 'plotly'","Description":"Create interactive cluster 'heatmaps' that can be saved as a stand-\n alone HTML file, embedded in 'R Markdown' documents or in a 'Shiny' app, and\n available in the 'RStudio' viewer pane. Hover the mouse pointer over a cell to\n show details or drag a rectangle to zoom. A 'heatmap' is a popular graphical\n method for visualizing high-dimensional data, in which a table of numbers\n is encoded as a grid of colored cells. The rows and columns of the matrix\n are ordered to highlight patterns and are often accompanied by 'dendrograms'.\n 'Heatmaps' are used in many fields for visualizing observations, correlations,\n missing values patterns, and more. Interactive 'heatmaps' allow the inspection\n of specific values by hovering the mouse over a cell, as well as zooming into\n a region of the 'heatmap' by dragging a rectangle around the relevant area.\n This work is based on the 'ggplot2' and 'plotly.js' engine. 
It produces\n similar 'heatmaps' as 'heatmap.2' or 'd3heatmap', with the advantage of speed\n ('plotly.js' is able to handle larger size matrix), the ability to zoom from\n the 'dendrogram' panes, and the placing of factor variables in the sides of the\n 'heatmap'.","Published":"2017-05-27","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"heavy","Version":"0.38.1","Title":"Robust Estimation Using Heavy-Tailed Distributions","Description":"Functions to perform robust estimation considering heavy-tailed distributions.\n Currently, the package includes routines for linear regression, linear mixed-effect models,\n multivariate location and scatter estimation, multivariate regression, penalized splines\n and random variate generation.","Published":"2017-04-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"heemod","Version":"0.9.1","Title":"Models for Health Economic Evaluation","Description":"Health Economic Evaluation Modelling:\n decision trees and cohort simulations. Provides a simple\n and consistent interface for Markov models specification,\n comparison, sensitivity and probabilistic analysis, input of\n survival models, etc. Models with time varying properties\n (non-homogeneous Markov models and semi-Markov models)\n are supported.","Published":"2017-05-09","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"heims","Version":"0.2.4","Title":"Decode and Validate HEIMS Data from Department of Education,\nAustralia","Description":"Decode elements of the Australian Higher Education Information Management System (HEIMS) data for clarity and performance. HEIMS is the record system of the Department of Education, Australia to record enrolments and completions in Australia's higher education system, as well as a range of relevant information. 
For more information, including the source of the data dictionary, see .","Published":"2017-06-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"hellno","Version":"0.0.1","Title":"Providing 'stringsAsFactors=FALSE' Variants of 'data.frame()'\nand 'as.data.frame()'","Description":"Base R's default setting for 'stringsAsFactors' within\n 'data.frame()' and 'as.data.frame()' is supposedly the most often complained\n about piece of code in the R infrastructure. The 'hellno' package provides\n an explicit solution without changing R itself or having to mess around with\n options. It tries to solve this problem by providing alternative\n 'data.frame()' and 'as.data.frame()' functions that are in fact simple\n wrappers around base R's 'data.frame()' and 'as.data.frame()' with\n 'stringsAsFactors' option set to 'HELLNO' ( which in turn equals FALSE )\n by default.","Published":"2015-12-14","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"helloJavaWorld","Version":"0.0-9","Title":"Hello Java World","Description":"A dummy package to demonstrate how to interface to a jar\n file that resides inside an R package.","Published":"2014-09-04","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"HelpersMG","Version":"1.9","Title":"Tools for Earth Meteorological Analysis","Description":"Contains many functions useful for managing 'NetCDF' files (see ), get tide levels on any point of the globe, get moon phase and time for sun rise and fall, analyse and reconstruct periodic time series of temperature with irregular sinusoidal pattern, show scales and wind rose in plot with change of color of text, Metropolis-Hastings algorithm for Bayesian MCMC analysis, plot graphs or boxplot with error bars, search files on disk by their names or their content, read the contents of all files from a folder at one time.","Published":"2017-03-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"helsinki","Version":"0.9.29","Title":"R Tools for 
Helsinki Open Data","Description":"Tools for accessing various open data sources in the Helsinki\n region in Finland. Current data sources include\n the Real Estate Department (),\n Service Map API (),\n Linked Events API (),\n Helsinki Region Infoshare statistics API ().","Published":"2017-02-18","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"heplots","Version":"1.3-3","Title":"Visualizing Hypothesis Tests in Multivariate Linear Models","Description":"Provides HE plot and other functions for visualizing hypothesis\n tests in multivariate linear models. HE plots represent sums-of-squares-and-\n products matrices for linear hypotheses and for error using ellipses (in two\n dimensions) and ellipsoids (in three dimensions). The related 'candisc' package\n provides visualizations in a reduced-rank canonical discriminant space when\n there are more than a few response variables.","Published":"2016-10-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"here","Version":"0.1","Title":"A Simpler Way to Find Your Files","Description":"Constructs paths to your project's files.\n The 'here()' function uses reasonable heuristics to find your project's\n files, based on the current working directory at the time when the package\n is loaded. 
Use it as a drop-in replacement for 'file.path()'; it will always\n locate the files relative to your project root.","Published":"2017-05-28","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"hergm","Version":"3.1-0","Title":"Hierarchical Exponential-Family Random Graph Models","Description":"Hierarchical exponential-family random graph models with local dependence.","Published":"2017-01-10","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"heritability","Version":"1.2","Title":"Marker-Based Estimation of Heritability Using Individual Plant\nor Plot Data","Description":"Implements marker-based estimation of heritability when observations on genetically identical replicates are available. These can be either observations on individual plants or plot-level data in a field trial. Heritability can then be estimated using a mixed model for the individual plant or plot data. For comparison, also mixed-model based estimation using genotypic means and estimation of repeatability with ANOVA are implemented. For illustration the package contains several datasets for the model species Arabidopsis thaliana.","Published":"2016-12-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"HeritSeq","Version":"1.0.0","Title":"Heritability of Gene Expression for Next-Generation Sequencing","Description":"Statistical framework to analyze heritability of gene expression \n based on next-generation sequencing data and simulating sequencing reads. \n Variance partition coefficients (VPC) are computed using linear mixed effects \n and generalized linear mixed effects models. 
Compound Poisson and negative \n binomial models are included.","Published":"2017-01-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"hermite","Version":"1.1.1","Title":"Generalized Hermite Distribution","Description":"Probability functions and other utilities for the generalized Hermite distribution.","Published":"2015-11-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"het.test","Version":"0.1","Title":"White's Test for Heteroskedasticity","Description":"An implementation of White's Test for Heteroskedasticity\n as outlined in Doornik (1996).","Published":"2013-02-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"hetmeta","Version":"0.1.0","Title":"Heterogeneity Measures in Meta-Analysis","Description":"Assess the presence of statistical heterogeneity and quantify its impact in the context of meta-analysis. It includes test for heterogeneity as well as other statistical measures (R_b, I^2, R_I).","Published":"2016-05-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"hett","Version":"0.3-1","Title":"Heteroscedastic t-regression","Description":"Functions for the fitting and summarizing of heteroscedastic t-regression.","Published":"2012-01-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"heuristica","Version":"1.0.1","Title":"Heuristics Including Take the Best and Unit-Weight Linear","Description":"Implements various heuristics like Take The Best and\n unit-weight linear, which do two-alternative choice: which of\n two objects will have a higher criterion? Also offers functions\n to assess performance, e.g. 
percent correct across all row pairs\n in a data set and finding row pairs where models disagree.\n New models can be added by implementing a fit and predict function--\n see vignette.","Published":"2016-07-17","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"hexbin","Version":"1.27.1","Title":"Hexagonal Binning Routines","Description":"Binning and plotting functions for hexagonal bins. Now\n uses and relies on grid graphics and formal (S4) classes and\n methods.","Published":"2016-12-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"hexSticker","Version":"0.4.1","Title":"Create Hexagon Sticker in R","Description":"Helper functions for creating reproducible hexagon sticker purely\n in R.","Published":"2017-06-19","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"hextri","Version":"0.6","Title":"Hexbin Plots with Triangles","Description":"Display hexagonally binned scatterplots for multi-class data, using coloured triangles to show class proportions.","Published":"2016-04-22","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"hexView","Version":"0.3-3","Title":"Viewing Binary Files","Description":"Functions to view files in raw binary form like in a hex editor. 
Additional functions to specify and read arbitrary binary formats.","Published":"2014-12-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"hflights","Version":"0.1","Title":"Flights that departed Houston in 2011","Description":"A data only package containing commercial domestic flights that\n departed Houston (IAH and HOU) in 2011.","Published":"2013-12-07","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"hgam","Version":"0.1-2","Title":"High-dimensional Additive Modelling","Description":"High-dimensional additive models as introduced by Meier,\n van de Geer and Buehlmann (2009).","Published":"2013-05-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"hglasso","Version":"1.2","Title":"Learning graphical models with hubs","Description":"Implements the hub graphical lasso and hub covariance graph proposal by Tan, KM., London, P., Mohan, K., Lee, S-I., Fazel, M., and Witten, D. (2014). Learning graphical models with hubs. To appear in Journal of Machine Learning Research. arXiv.org/pdf/1402.7349.pdf.","Published":"2014-08-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"hglm","Version":"2.1-1","Title":"Hierarchical Generalized Linear Models","Description":"Procedures for fitting hierarchical generalized linear models (HGLM). It can be used for linear mixed models and generalized linear mixed models with random effects for a variety of links and a variety of distributions for both the outcomes and the random effects. 
Fixed effects can also be fitted in the dispersion part of the mean model.","Published":"2015-08-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"hglm.data","Version":"1.0-0","Title":"Data for The hglm Package","Description":"This data-only package was created for distributing data used in the examples of the hglm package.","Published":"2014-07-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"hgm","Version":"1.17","Title":"Holonomic Gradient Method and Gradient Descent","Description":"The holonomic gradient method (HGM, hgm) gives a way to evaluate normalization\n constants of unnormalized probability distributions by utilizing holonomic \n systems of differential or difference equations. The holonomic gradient descent (HGD, hgd) gives a method\n to find maximal likelihood estimates by utilizing the HGM.","Published":"2017-04-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"HGNChelper","Version":"0.3.5","Title":"Handy Functions for Working with HGNC Gene Symbols and\nAffymetrix Probeset Identifiers","Description":"Contains functions for\n identifying and correcting HGNC gene symbols which have been converted\n to date format by Excel, for reversibly converting between HGNC\n symbols and valid R names, identifying invalid HGNC symbols and\n correcting synonyms and outdated symbols which can be mapped to an\n official symbol.","Published":"2017-06-13","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"HH","Version":"3.1-34","Title":"Statistical Analysis and Data Display: Heiberger and Holland","Description":"Support software for Statistical Analysis and Data Display (Second Edition, Springer, ISBN 978-1-4939-2121-8, 2015) and (First Edition, Springer, ISBN 0-387-40270-5, 2004) by Richard M. Heiberger and Burt Holland. This contemporary presentation of statistical methods features extensive use of graphical displays for exploring data and for displaying the analysis. 
The second edition includes redesigned graphics and additional chapters. The authors emphasize how to construct and interpret graphs, discuss principles of graphical design, and show how accompanying traditional tabular results are used to confirm the visual impressions derived directly from the graphs. Many of the graphical formats are novel and appear here for the first time in print. All chapters have exercises. All functions introduced in the book are in the package. R code for all examples, both graphs and tables, in the book is included in the scripts directory of the package.","Published":"2017-01-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"HHG","Version":"2.0","Title":"Heller-Heller-Gorfine Tests of Independence and Equality of\nDistributions","Description":"Heller-Heller-Gorfine ('HHG') tests are a set of powerful statistical\n tests of multivariate k-sample homogeneity and independence. For the univariate\n case, the package also offers implementations of the 'MinP DDP' and 'MinP ADP'\n tests, which are consistent against all continuous alternatives but are\n distribution-free, and are thus much faster to apply.","Published":"2016-10-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"hht","Version":"2.1.3","Title":"The Hilbert-Huang Transform: Tools and Methods","Description":"Builds on the EMD package to provide additional tools for empirical mode decomposition (EMD) and Hilbert spectral analysis. It also implements the ensemble empirical mode decomposition (EEMD) and the complete ensemble empirical mode decomposition (CEEMD) methods to avoid mode mixing and intermittency problems found in EMD analysis. The package comes with several plotting methods that can be used to view intrinsic mode functions, the HHT spectrum, and the Fourier spectrum. 
","Published":"2016-05-18","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"HI","Version":"0.4","Title":"Simulation from distributions supported by nested hyperplanes","Description":"Simulation from distributions supported by nested\n hyperplanes, using the algorithm described in Petris &\n Tardella, \"A geometric approach to transdimensional Markov\n chain Monte Carlo\", Canadian Journal of Statistics, v.31, n.4,\n (2003). Also random direction multivariate Adaptive Rejection\n Metropolis Sampling.","Published":"2013-03-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"HIBPwned","Version":"0.1.6","Title":"Bindings for the 'HaveIBeenPwned.com' Data Breach API","Description":"Utilising the 'Have I been pwned?' API (see \n for more information), check whether email addresses and/or user names have been present\n in a publicly disclosed data breach.","Published":"2017-05-30","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"HiCblock","Version":"1.0","Title":"Systematic Analysis of Architectural Proteins and Functional\nElements in Blocking Long-Range Contacts Between Loci","Description":"Here we propose a model to systematically analyze the roles of architectural proteins and functional elements in blocking long-range contacts between loci. The proposed model does not rely on topologically associating domain (TAD) mapping from Hi-C data. 
Instead of testing the enrichment or influence of protein binding at TAD borders, the model directly estimates the blocking effect of proteins on long-range contacts between flanking loci, making the model intuitive and biologically meaningful.","Published":"2017-06-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"HiCfeat","Version":"1.2","Title":"Multiple Logistic Regression for 3D Chromatin Domain Border\nAnalysis","Description":"We propose a multiple logistic regression model to assess the influences of genomic features such as DNA-binding proteins and functional elements on topological domain borders. ","Published":"2016-09-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"HiCglmi","Version":"1.1","Title":"Probing Factor-Dependent Long-Range Contacts using Regression\nwith Higher-Order Interaction Terms","Description":"We propose a generalized linear regression with higher-order interaction terms to assess the influences of genomic features such as DNA-binding proteins and functional elements on long-range contacts from Hi-C experiments. 
","Published":"2017-06-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"HiClimR","Version":"1.2.3","Title":"Hierarchical Climate Regionalization","Description":"A tool for Hierarchical Climate Regionalization applicable to any correlation-based clustering.\n It adds several features and a new clustering method (called, 'regional' linkage) to hierarchical \n clustering in R ('hclust' function in 'stats' library): data regridding, coarsening spatial resolution,\n geographic masking (by continents, regions, or countries), data filtering by mean and/or variance \n thresholds, data preprocessing (detrending, standardization, and PCA), faster correlation function\n with preliminary big data support, different clustering methods, hybrid hierarchical clustering, \n multi-variate clustering (MVC), cluster validation, and visualization of region maps.","Published":"2015-08-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"HiCseg","Version":"1.1","Title":"Detection of domains in HiC data","Description":"This package allows you to detect domains in HiC data by rephrasing this problem as a two-dimensional segmentation issue.","Published":"2014-06-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"hiddenf","Version":"2.0","Title":"The All-Configurations, Maximum-Interaction F-Test for Hidden\nAdditivity","Description":"Computes the ACMIF test and Bonferroni-adjusted p-value of interaction in two-factor studies. Produces corresponding interaction plot and analysis of variance tables and p-values from several other tests of non-additivity.","Published":"2016-01-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"HiddenMarkov","Version":"1.8-8","Title":"Hidden Markov Models","Description":"Contains functions for the analysis of Discrete Time Hidden Markov Models, Markov Modulated GLMs and the Markov Modulated Poisson Process. It includes functions for simulation, parameter estimation, and the Viterbi algorithm. 
See the topic \"HiddenMarkov\" for an introduction to the package, and \"Change Log\" for a list of recent changes. The algorithms are based on those of Walter Zucchini.","Published":"2017-06-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"HiDimDA","Version":"0.2-4","Title":"High Dimensional Discriminant Analysis","Description":"Performs linear discriminant analysis in high dimensional\n problems based on reliable covariance estimators for problems\n with (many) more variables than observations. Includes routines\n for classifier training, prediction, cross-validation and\n variable selection.","Published":"2015-10-19","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"HiDimMaxStable","Version":"0.1.1","Title":"Inference on High Dimensional Max-Stable Distributions","Description":"Inference of high dimensional max-stable\n distributions, from the paper \"Likelihood based inference for\n high-dimensional extreme value distributions\", by A. Bienvenüe\n and C. Robert, arXiv:1403.0065 [stat.AP].","Published":"2015-01-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"hier.part","Version":"1.0-4","Title":"Hierarchical Partitioning","Description":"Variance partition of a multivariate data set","Published":"2013-01-08","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"hierarchicalDS","Version":"2.9","Title":"Functions For Performing Hierarchical Analysis of Distance\nSampling Data","Description":"Functions for performing hierarchical analysis of distance\n sampling data, with ability to use an areal spatial ICAR model on\n top of user supplied covariates to get at variation in abundance\n intensity. The detection model can be specified as a function of\n observer and individual covariates, where a parametric model is\n supposed for the population level distribution of covariate values.\n The model uses data augmentation and a reversible jump MCMC\n algorithm to sample animals that were never observed. 
Also\n included is the ability to include point independence (increasing\n correlation of multiple observers' observations as a function of\n distance, with independence assumed for distance=0 or first\n distance bin), as well as the ability to model species\n misclassification rates using a multinomial logit formulation on data\n from double observers. New in version 2.1 is the ability to\n include zero inflation, but this is only recommended for cases where\n sample sizes and spatial coverage of the survey are high.","Published":"2014-11-26","License":"Unlimited","snapshot_date":"2017-06-23"} {"Package":"hierarchicalSets","Version":"1.0.2","Title":"Set Data Visualization Using Hierarchies","Description":"Pure set data visualization approaches are often limited in\n scalability due to the combinatorial explosion of distinct set families as\n the number of sets under investigation increases. hierarchicalSets applies\n a set centric hierarchical clustering of the sets under investigation and\n uses this hierarchy as a basis for a range of scalable visual\n representations. hierarchicalSets is especially well suited for collections\n of sets that describe comparable entities as it relies on the\n sets to have a meaningful relational structure.","Published":"2016-04-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"hierband","Version":"1.0","Title":"Convex Banding of the Covariance Matrix","Description":"Implementation of the convex banding procedure (using a\n hierarchical group lasso penalty) for covariance estimation that is\n introduced in Bien, Bunea, Xiao (2015) Convex Banding of the Covariance\n Matrix. 
Accepted for publication in JASA.","Published":"2015-06-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"hierDiversity","Version":"0.1","Title":"Hierarchical Multiplicative Partitioning of Complex Phenotypes","Description":"Hierarchical group-wise partitioning of phenotypic diversity into \n within-group (alpha), among-group (beta), and pooled-total (gamma) \n components using Hill numbers.\n Turnover and overlap are also calculated as standardized alternatives to\n beta diversity. Hierarchical bootstrapping is used to approximate \n uncertainty around each diversity component. ","Published":"2015-03-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"hierformR","Version":"0.1.0","Title":"Analysis of Dynamics Hierarchy Formation","Description":"Determine paths and states that social networks develop over\n time to form social hierarchies. Based upon algorithms described in\n W. Brent Lindquist & Ivan D. Chase (2009) .","Published":"2016-06-10","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"hierfstat","Version":"0.04-22","Title":"Estimation and Tests of Hierarchical F-Statistics","Description":"Allows the estimation of hierarchical F-statistics from haploid or diploid genetic data \n with any numbers of levels in the hierarchy, following the algorithm of Yang (Evolution, 1998, 52(4):950-956; \n . Functions are also given to test via randomisations the significance of each F and variance components, \n\tusing the likelihood-ratio statistics G.","Published":"2015-12-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"hierNet","Version":"1.6","Title":"A Lasso for Hierarchical Interactions","Description":"Fits sparse interaction models for continuous and binary responses subject to the strong (or weak) hierarchy restriction that an interaction between two variables only be included if both (or at least one of) the variables is included as a main effect. 
For more details, see Bien, J., Taylor, J., Tibshirani, R. (2013) \"A Lasso for Hierarchical Interactions.\" Annals of Statistics, 41(3), 1111-1141.","Published":"2014-04-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"HierO","Version":"0.2","Title":"A graphical user interface for calculating power and sample size\nfor hierarchical data","Description":"HierO is a graphical user interface (GUI) tool for calculating optimal statistical power and sample size for hierarchical data structures. HierO constructs a user-defined sample size optimization problem in GAMS (General Algebraic Modeling System) form and uses the Rneos package to send the problem to the NEOS server for solving.\t\t","Published":"2015-01-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"hiertest","Version":"1.1","Title":"Convex Hierarchical Testing of Interactions","Description":"Implementation of the convex hierarchical testing (CHT) procedure\n introduced in Bien, Simon, and Tibshirani (2015) Convex Hierarchical Testing\n of Interactions. Annals of Applied Statistics. Vol. 9, No. 1, 27-42.","Published":"2015-05-05","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"HIest","Version":"2.0","Title":"Hybrid index estimation","Description":"Uses likelihood to estimate ancestry and heterozygosity.\n Evaluates simple hybrid classifications (parentals, F1, F2,\n backcrosses). Estimates genomic clines.","Published":"2013-02-16","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"highcharter","Version":"0.5.0","Title":"A Wrapper for the 'Highcharts' Library","Description":"A wrapper for the 'Highcharts' library including\n shortcut functions to plot R objects. 
'Highcharts' \n is a charting library offering\n numerous chart types with a simple configuration syntax.","Published":"2017-01-17","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"highD2pop","Version":"1.0","Title":"Two-Sample Tests for Equality of Means in High Dimension","Description":"Performs the generalized component test from Gregory et al. (2015), as well as the tests from Chen and Qin (2010), Srivastava and Kubokawa (2013), and Cai, Liu, and Xia (2014) for equality of two population mean vectors when the length of the vectors exceeds the sample size.","Published":"2014-10-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"HighDimOut","Version":"1.0.0","Title":"Outlier Detection Algorithms for High-Dimensional Data","Description":"Three high-dimensional outlier detection algorithms and an outlier unification scheme are implemented in this package. The angle-based outlier detection (ABOD) algorithm is based on the work of Kriegel, Schubert, and Zimek [2008]. The subspace outlier detection (SOD) algorithm is based on the work of Kriegel, Kroger, Schubert, and Zimek [2009]. The feature bagging-based outlier detection (FBOD) algorithm is based on the work of Lazarevic and Kumar [2005]. The outlier unification scheme is based on the work of Kriegel, Kroger, Schubert, and Zimek [2011].","Published":"2015-04-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"highfrequency","Version":"0.5.1","Title":"Tools for Highfrequency Data Analysis","Description":"Provides functionality to manage, clean and match highfrequency\n trades and quotes data, calculate various liquidity measures, estimate and\n forecast volatility, and investigate microstructure noise and intraday\n periodicity.","Published":"2017-05-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"highlight","Version":"0.4.7.1","Title":"Syntax Highlighter","Description":"Syntax highlighter for R code based \n\ton the results of the R parser. 
Rendering in HTML and LaTeX \n\tmarkup. Custom Sweave driver performing syntax highlighting \n\tof R code chunks.","Published":"2017-03-21","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"highlightHTML","Version":"0.1.1","Title":"Highlight HTML Text and Tables","Description":"A tool to highlight specific cells in an HTML table or more \n generally text from an HTML document. This may be helpful for those \n using markdown to create reproducible documents. In addition, the ability\n to compile directly from R markdown files is also possible using the 'knitr' \n package.","Published":"2017-01-02","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"highmean","Version":"3.0","Title":"Two-Sample Tests for High-Dimensional Mean Vectors","Description":"Provides various tests for comparing high-dimensional mean vectors in two sample populations.","Published":"2016-10-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"highr","Version":"0.6","Title":"Syntax Highlighting for R Source Code","Description":"Provides syntax highlighting for R source code. Currently it\n supports LaTeX and HTML output. 
Source code of other languages is supported\n via Andre Simon's highlight package (http://www.andre-simon.de).","Published":"2016-05-09","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"highriskzone","Version":"1.3-1","Title":"Determining and Evaluating High-Risk Zones","Description":"Functions for determining and evaluating high-risk zones and\n simulating and thinning point process data, as described in 'Determining\n high risk zones using point process methodology - Realization by building\n an R package' (Seibold, 2012) and 'Determining high-risk zones for\n unexploded World War II bombs by using point process methodology' (Mahling\n et al., 2013).","Published":"2016-03-11","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"highSCREEN","Version":"0.1","Title":"High Throughput Screening for Plate-Based Assays","Description":"Can be used to carry out extraction, normalization, quality control (QC), candidate hit identification and visualization for plate-based assays in drug discovery. This project was funded by the Division of Allergy, Immunology, and Transplantation, National Institute of Allergy and Infectious Diseases, National Institutes of Health, Department of Health and Human Services, under contract No. HHSN272201400054C entitled \"Adjuvant Discovery For Vaccines Against West Nile Virus and Influenza\", awarded to Duke University and led by Drs. Herman Staats and Soman Abraham.","Published":"2016-05-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"highTtest","Version":"1.1","Title":"Simultaneous Critical Values for t-Tests in Very High Dimensions","Description":"Implements the method developed by Cao and Kosorok (2011) for the significance analysis of thousands of features in high-dimensional biological studies. 
It is an asymptotically valid data-driven procedure to find critical values for rejection regions controlling the k-familywise error rate, false discovery rate, and the tail probability of false discovery proportion.","Published":"2015-02-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"hillmakeR","Version":"0.2","Title":"Perform occupancy analysis","Description":"Generate occupancy patterns based on arrival and departure timestamps","Published":"2014-07-16","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"HiLMM","Version":"1.1","Title":"Estimation of Heritability in Linear Mixed Models","Description":"Estimation of heritability with confidence intervals in linear mixed models.","Published":"2015-04-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"hindexcalculator","Version":"1.0.0","Title":"H-Index Calculator using Data from a Web of Science (WoS)\nCitation Report","Description":"H(x) is the h-index for the past x years. Here, the h(x) of a scientist/department/etc. can be calculated using the exported Excel file from a Web of Science citation report of a search. Also calculated is the year of first publication, total number of publications, and sum of times cited for the specified period. Therefore, for h-10: the date of first publication, total number of publications, and sum of times cited in the past 10 years are calculated. 
Note: the Excel file must first be saved in .csv format.","Published":"2015-09-11","License":"AGPL","snapshot_date":"2017-06-23"} {"Package":"hint","Version":"0.1-1","Title":"Tools for hypothesis testing based on Hypergeometric\nIntersection distributions","Description":"Hypergeometric Intersection distributions are a broad group of distributions that describe the probability of picking intersections when drawing independently from two (or more) urns containing variable numbers of balls belonging to the same n categories.","Published":"2013-07-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"HiPLARM","Version":"0.1","Title":"High Performance Linear Algebra in R","Description":"Provides multi-core or GPU support (or both if the system\n has GPU and multi-core CPU) for the recommended R package,\n Matrix.","Published":"2012-10-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"hiPOD","Version":"1.0","Title":"hierarchical Pooled Optimal Design","Description":"Based on hierarchical modeling, this package provides a\n few practical functions to find and present the optimal designs\n for a pooled NGS design.","Published":"2012-06-01","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"hisemi","Version":"1.0-319","Title":"Hierarchical Semiparametric Regression of Test Statistics","Description":"This package implements methods for hierarchical semiparametric regression models for test statistics. Specifically, test statistics given the null/alternative hypotheses are modeled parametrically, whereas the unobservable status of null/alternative hypotheses are modeled using nonparametric additive logistic regression over covariates. 
","Published":"2013-08-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"hisse","Version":"1.8.2","Title":"Hidden State Speciation and Extinction","Description":"Sets up and executes a HiSSE model (Hidden State Speciation and Extinction) on a phylogeny and character sets to test for hidden shifts in trait dependent rates of diversification.","Published":"2017-04-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"HistData","Version":"0.8-1","Title":"Data Sets from the History of Statistics and Data Visualization","Description":"The 'HistData' package provides a collection of small data sets\n that are interesting and important in the history of statistics and data\n visualization. The goal of the package is to make these available, both for\n instructional use and for historical research. Some of these present interesting\n challenges for graphics or analysis in R.","Published":"2017-01-09","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"HistDAWass","Version":"0.1.6","Title":"Histogram-Valued Data Analysis","Description":"In the framework of Symbolic Data Analysis, a relatively new\n approach to the statistical analysis of multi-valued data, we consider\n histogram-valued data, i.e., data described by univariate histograms. The\n methods and the basic statistics for histogram-valued data are mainly based\n on the L2 Wasserstein metric between distributions, i.e., a Euclidean metric\n between quantile functions. The package contains unsupervised classification\n techniques, least square regression and tools for histogram-valued data and for\n histogram time series.","Published":"2017-02-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"histmdl","Version":"0.6-1","Title":"A Most Informative Histogram-Like Model","Description":"Using the MDL principle, it is possible to estimate\n\tparameters for a histogram-like model. 
The package contains\n\tthe implementation of such an estimation method.","Published":"2016-09-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"histogram","Version":"0.0-24","Title":"Construction of Regular and Irregular Histograms with Different\nOptions for Automatic Choice of Bins","Description":"Automatic construction of regular and irregular histograms as described in Rozenholc/Mildenberger/Gather (2010).","Published":"2016-10-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"HistogramTools","Version":"0.3.2","Title":"Utility Functions for R Histograms","Description":"Provides a number of utility functions useful for manipulating large histograms. This includes methods to trim, subset, merge buckets, merge histograms, convert to CDF, and calculate information loss due to binning. It also provides a protocol buffer representation of the default R histogram class to allow histograms over large data sets to be computed and manipulated in a MapReduce environment.","Published":"2015-07-29","License":"Apache License 2.0","snapshot_date":"2017-06-23"} {"Package":"historydata","Version":"0.1","Title":"Data Sets for Historians","Description":"These sample data sets are intended for historians\n learning R. 
They include population, institutional, religious,\n military, and prosopographical data suitable for mapping,\n quantitative analysis, and network analysis.","Published":"2014-12-24","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"hit","Version":"0.4.0","Title":"Hierarchical Inference Testing","Description":"Hierarchical inference testing (HIT) for (generalized) linear models with \n correlated covariates applicable to high-dimensional settings.","Published":"2016-11-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"hitandrun","Version":"0.5-3","Title":"\"Hit and Run\" and \"Shake and Bake\" for Sampling Uniformly from\nConvex Shapes","Description":"The \"Hit and Run\" Markov Chain Monte Carlo method for sampling uniformly from convex shapes defined by linear constraints, and the \"Shake and Bake\" method for sampling from the boundary of such shapes. Includes specialized functions for sampling normalized weights with arbitrary linear constraints.","Published":"2016-12-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"HIV.LifeTables","Version":"0.1","Title":"HIV calibrated model life tables for countries with generalized\nHIV epidemics","Description":"The functions in this package produce a complete set of mortality rates as a function of a combination of HIV prevalence and either life expectancy at birth (e0), child mortality (5q0), or child mortality with adult mortality (45q15) ","Published":"2013-12-13","License":"Unlimited","snapshot_date":"2017-06-23"} {"Package":"hive","Version":"0.2-0","Title":"Hadoop InteractiVE","Description":"Hadoop InteractiVE facilitates distributed \n computing via the MapReduce paradigm through R and Hadoop. 
An easy to use \n interface to Hadoop, the Hadoop Distributed File System (HDFS), \n\t and Hadoop Streaming is provided.","Published":"2015-07-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"HiveR","Version":"0.2.55","Title":"2D and 3D Hive Plots for R","Description":"Creates and plots 2D and 3D hive plots. Hive plots are a unique method of displaying networks of many types in which node properties are mapped to axes using meaningful properties rather than being arbitrarily positioned. The hive plot concept was invented by Martin Krzywinski at the Genome Science Center (www.hiveplot.net/). Keywords: networks, food webs, linnet, systems biology, bioinformatics.","Published":"2016-03-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"HK80","Version":"0.0.2","Title":"Conversion Tools for HK80 Geographical Coordinate System","Description":"This is a collection of functions for converting coordinates between WGS84UTM, WGS84GEO, HK80UTM, HK80GEO and HK1980GRID Coordinate Systems used in Hong Kong SAR, based on the algorithms described in Explanatory Notes on Geodetic Datums in Hong Kong by Survey and Mapping Office Lands Department, Hong Kong Government (1995).","Published":"2016-07-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"hkclustering","Version":"1.0","Title":"Ensemble Clustering using K Means and Hierarchical Clustering","Description":"Implements an ensemble algorithm for clustering combining a k-means and a hierarchical clustering approach.","Published":"2016-09-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"hkevp","Version":"1.1.4","Title":"Spatial Extreme Value Analysis with the Hierarchical Model of\nReich and Shaby (2012)","Description":"Several procedures around a particular hierarchical model for extreme value: the HKEVP of Reich and Shaby (2012) . Simulation, estimation and spatial extrapolation of this model are available for extreme value data. 
A special case of this process is also handled: the Latent Variable Model of Davison et al. (2012) .","Published":"2016-09-23","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"hkex.api","Version":"0.1","Title":"API to Retrieve Data from Hong Kong Stock Exchange","Description":"A set of functions that help retrieve data from HKEX (Hong Kong Stock Exchange), see for more information. In addition, a function generates insert SQL statements from a data frame. ","Published":"2016-06-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"HKprocess","Version":"0.0-2","Title":"Hurst-Kolmogorov Process","Description":"Methods to make inference about the Hurst-Kolmogorov and the AR(1) process.","Published":"2016-01-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"HLMdiag","Version":"0.3.1","Title":"Diagnostic Tools for Hierarchical (Multilevel) Linear Models","Description":"A suite of diagnostic tools for hierarchical\n (multilevel) linear models. The tools include\n not only leverage and traditional deletion diagnostics (Cook's\n distance, covratio, covtrace, and MDFFITS) but also \n convenience functions and graphics for residual analysis. Models\n can be fit using either lmer in the 'lme4' package or lme in the 'nlme' package,\n but only two-level models fit using lme are currently supported.","Published":"2015-12-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"HLSM","Version":"0.7","Title":"Hierarchical Latent Space Network Model","Description":"Implements the Hierarchical Latent Space Network Model (HLSM) for an ensemble of networks as described in Sweet et al. (2012). ","Published":"2016-11-25","License":"GPL (> 3)","snapshot_date":"2017-06-23"} {"Package":"HMDHFDplus","Version":"1.1.8","Title":"Read HMD and HFD Data from the Web","Description":"Utilities for reading data from the Human Mortality Database (), Human Fertility Database (), and similar databases from the web or locally into an R session as data.frame objects. 
These are the two most widely used sources of demographic data to study basic demographic change, trends, and develop new demographic methods. Other supported databases at this time include the Human Fertility Collection (), The Japanese Mortality Database (), and the Canadian Human Mortality Database (). Arguments and data are standardized.","Published":"2015-09-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"hmeasure","Version":"1.0","Title":"The H-measure and other scalar classification performance\nmetrics","Description":"Scalar performance metrics, including the H-measure, based\n on classification scores for several classifiers applied to the\n same dataset.","Published":"2012-09-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"hmi","Version":"0.7.4","Title":"Hierarchical Multiple Imputation","Description":"Runs single level and multilevel imputation models. The user just has to pass the data to the main function and, optionally, his analysis model. 
Basically the package then translates this analysis model into commands to impute the data according to it with functions from 'mice', 'MCMCglmm' or routines built for this package.","Published":"2017-04-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"Hmisc","Version":"4.0-3","Title":"Harrell Miscellaneous","Description":"Contains many functions useful for data\n\tanalysis, high-level graphics, utility operations, functions for\n\tcomputing sample size and power, importing and annotating datasets,\n\timputing missing values, advanced table making, variable clustering,\n\tcharacter string manipulation, conversion of R objects to LaTeX and HTML code,\n\tand recoding variables.","Published":"2017-05-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"HMM","Version":"1.0","Title":"HMM - Hidden Markov Models","Description":"Easy-to-use library to set up, apply and make inference\n with discrete time and discrete space Hidden Markov Models","Published":"2010-01-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"hmm.discnp","Version":"0.2-4","Title":"Hidden Markov Models with Discrete Non-Parametric Observation\nDistributions","Description":"Fits hidden Markov models with discrete non-parametric \n observation distributions to data sets. Simulates data\n\tfrom such models. Finds most probable underlying hidden\n\tstates, the most probable sequences of such states, and the\n\tlog likelihood of a collection of observations given the\n\tparameters of the model.","Published":"2016-04-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"HMMCont","Version":"1.0","Title":"Hidden Markov Model for Continuous Observations Processes","Description":"The package includes the functions designed to analyse continuous observations processes with the Hidden Markov Model approach. They include Baum-Welch and Viterbi algorithms and additional visualisation functions. 
The observations are assumed to have Gaussian distribution and to be weakly stationary processes. The package was created for analyses of financial time series, but can also be applied to any continuous observations processes.","Published":"2014-02-11","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"hmmm","Version":"1.0-3","Title":"hierarchical multinomial marginal models","Description":"Functions for specifying and fitting marginal models for contingency tables proposed \n\tby Bergsma and Rudas (2002) here called hierarchical multinomial marginal models (hmmm) and their extensions presented by Bartolucci et al. \n\t(2007); multinomial Poisson homogeneous (mph) models and homogeneous linear predictor (hlp) models for contingency\n \ttables proposed by Lang (2004) and (2005); hidden Markov models where the distribution of the observed variables \n\tis described by a marginal model. \n\tInequality constraints on the parameters are allowed and can be tested.","Published":"2014-08-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"HMMpa","Version":"1.0","Title":"Analysing accelerometer data using hidden Markov models","Description":"Analysing time-series accelerometer data to quantify length and intensity of physical activity.","Published":"2014-02-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"HMP","Version":"1.4.3","Title":"Hypothesis Testing and Power Calculations for Comparing\nMetagenomic Samples from HMP","Description":"Using Dirichlet-Multinomial distribution to provide several functions for formal hypothesis testing, power and sample size calculations for human microbiome experiments.","Published":"2016-03-04","License":"Apache License (== 2.0)","snapshot_date":"2017-06-23"} {"Package":"HMPTrees","Version":"1.3","Title":"Statistical Object Oriented Data Analysis of RDP-Based Taxonomic\nTrees from Human Microbiome Data","Description":"Tools to model, compare, and visualize populations of taxonomic tree 
objects.","Published":"2016-01-19","License":"Apache License (== 2.0)","snapshot_date":"2017-06-23"} {"Package":"HMR","Version":"0.4.2","Title":"Flux Estimation with Static Chamber Data","Description":"Statistical analysis of static chamber concentration data for trace gas flux estimation.","Published":"2017-04-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"hms","Version":"0.3","Title":"Pretty Time of Day","Description":"Implements an S3 class for storing and formatting time-of-day\n values, based on the 'difftime' class.","Published":"2016-11-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"HMVD","Version":"1.0","Title":"Group Association Test using a Hidden Markov Model","Description":"Perform association test between a group of variable and the outcome. ","Published":"2016-05-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"hNMF","Version":"0.3","Title":"Hierarchical Non-Negative Matrix Factorization","Description":"Hierarchical non-negative matrix factorization\n\tfor tumor segmentation based on multi-parametric MRI data. \n\tSeveral NMF algorithms are available.","Published":"2017-06-07","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"hnp","Version":"1.2-2","Title":"Half-Normal Plots with Simulation Envelopes","Description":"Generates (half-)normal plots with simulation envelopes using different diagnostics from a range of different fitted models. A few example datasets are included.","Published":"2016-12-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"hoa","Version":"2.1.4","Title":"Higher Order Likelihood Inference","Description":"Performs likelihood-based inference for a wide range of regression models. 
Provides higher-order approximations for inference based on extensions of saddlepoint type arguments as discussed in the book Applied Asymptotics: Case Studies in Small-Sample Statistics by Brazzale, Davison, and Reid (2007).","Published":"2015-08-12","License":"GPL (>= 2) | file LICENCE","snapshot_date":"2017-06-23"} {"Package":"hoardeR","Version":"0.9.2","Title":"Collect and Retrieve Annotation Data for Various Genomic Data\nUsing Different Webservices","Description":"Cross-species identification of novel gene candidates using the NCBI web service is provided. Further, sets of miRNA target genes can be identified by using the targetscan.org API.","Published":"2016-10-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"hoardr","Version":"0.2.0","Title":"Manage Cached Files","Description":"Suite of tools for managing cached files, targeting\n use in other R packages. Uses 'rappdirs' for cross-platform paths.\n Provides utilities to manage cache directories, including targeting\n files by path or by key; cached directories can be compressed and\n uncompressed easily to save disk space.","Published":"2017-05-10","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"holdem","Version":"1.1","Title":"Texas Holdem simulator","Description":"Simulates hands and tournaments of Texas Holdem.","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Holidays","Version":"1.0-7","Title":"Holiday and Half-Day Data, for Use with the 'TimeWarp' Package","Description":"Contains trading holiday and half-day data that is automatically registered when loaded.","Published":"2016-07-22","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"homals","Version":"1.0-6","Title":"Gifi Methods for Optimal Scaling","Description":"Performs a homogeneity analysis (multiple correspondence analysis) and various extensions. Rank restrictions on the category quantifications can be imposed (nonlinear PCA). 
The categories are transformed by means of optimal scaling with options for nominal, ordinal, and numerical scale levels (for rank-1 restrictions). Variables can be grouped into sets, in order to emulate regression analysis and canonical correlation analysis. ","Published":"2015-07-23","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"homeR","Version":"0.3.0","Title":"Useful Functions for Building Physics","Description":"A collection of functions useful for the analysis of\n building physics experiments.","Published":"2016-10-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Homeric","Version":"0.1-3","Title":"Doughnut Plots","Description":"A simple implementation of doughnut plots - pie charts with a blank center. The package is named after Homer Simpson - arguably the best-known lover of doughnuts.","Published":"2016-07-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"hommel","Version":"1.0","Title":"Methods for Closed Testing with Simes Inequality, in Particular\nHommel's Method","Description":"Provides methods for closed testing using Simes local tests. In particular, calculates adjusted p-values for Hommel's multiple testing method, and provides lower confidence bounds for true discovery proportions. A robust but more conservative variant of the closed testing procedure that does not require the assumption of Simes inequality is also implemented.","Published":"2017-05-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"homomorpheR","Version":"0.2-1","Title":"Homomorphic Computations in R","Description":"Homomorphic computations in R for privacy-preserving applications. Currently only\n the Paillier Scheme is implemented.","Published":"2017-04-28","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"HomoPolymer","Version":"1.0","Title":"Theoretical Model to Simulate Radical Polymerization","Description":"A theoretical model to simulate radical polymerization. 
Material, energy and population balances are integrated for batch, semi-batch and continuous processes in an ideally mixed reactor. Limitations: single monomer (i.e., homopolymer), one phase (organic, aqueous). Datasets with chemical-physical data for the most common monomers are included. Some background in polymer science is suggested for its use. A graphical interface for quick and friendly use is available.","Published":"2015-01-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"homtest","Version":"1.0-5","Title":"Homogeneity tests for Regional Frequency Analysis","Description":"A collection of homogeneity tests described in: Viglione\n A., Laio F., Claps P. (2007) ``A comparison of homogeneity\n tests for regional frequency analysis'', Water Resources\n Research, 43, W03428, doi:10.1029/2006WR005095. More on\n Regional Frequency Analysis can be found in package nsRFA.","Published":"2012-11-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"hopbyhop","Version":"2.1","Title":"Transmissions and Receptions in a Hop by Hop Network","Description":"Computes the expectation of the number of transmissions and receptions considering a Hop-by-Hop transport model with a limited number of retransmissions per packet. It provides the theoretical results shown in Palma et al. (2016) and also estimated values based on Monte Carlo simulations.","Published":"2016-09-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"horizon","Version":"1.0","Title":"Horizon Search Algorithm","Description":"Calculates horizon elevation angle and sky view factor from a digital terrain model.","Published":"2016-09-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"HoRM","Version":"0.1.1","Title":"Supplemental Functions and Datasets for \"Handbook of Regression\nMethods\"","Description":"Supplement for the book \"Handbook of Regression Methods\" by D. S. Young. Some datasets used in the book are included and documented. 
Wrapper functions are included that simplify the examples in the textbook, such as code for constructing a regressogram and expanding ANOVA tables to reflect the total sum of squares.","Published":"2017-03-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"hornpa","Version":"1.0","Title":"Horn's (1965) Test to Determine the Number of Components/Factors","Description":"A stand-alone function that generates a user-specified number of random datasets and computes eigenvalues using the random datasets (i.e., implements Horn's [1965, Psychometrika] parallel analysis). Users then compare the resulting eigenvalues (the mean or the specified percentile) from the random datasets (i.e., eigenvalues resulting from noise) to the eigenvalues generated with the user's data. Can be used for both principal components analysis (PCA) and common/exploratory factor analysis (EFA). The output table shows how large eigenvalues can be as a result of merely using randomly generated datasets. If the user's own dataset has actual eigenvalues greater than the corresponding random-data eigenvalues, that lends support to retain that factor/component. In other words, if the i-th eigenvalue from the actual data was larger than the percentile of the i-th eigenvalue generated using randomly generated data, empirical support is provided to retain that factor/component. \n Horn, J. (1965). A rationale and test for the number of factors in factor analysis. 
Psychometrika, 30, 179-185.","Published":"2015-03-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"horserule","Version":"0.1.0","Title":"Flexible Non-Linear Regression with the HorseRule Algorithm","Description":"Implementation of the HorseRule model, a flexible tree-based Bayesian regression method for linear and nonlinear regression and classification, described in Nalenz & Villani (2017).","Published":"2017-04-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"horseshoe","Version":"0.1.0","Title":"Implementation of the Horseshoe Prior","Description":"Contains functions for applying the horseshoe prior to high-\n dimensional linear regression, yielding the posterior mean and credible\n intervals, amongst other things. The key parameter tau can be equipped with\n a prior or estimated via maximum marginal likelihood estimation (MMLE).\n The main function, horseshoe, is for linear regression. In addition, there\n are functions specifically for the sparse normal means problem, allowing\n for faster computation of, for example, the posterior mean and posterior\n variance. Finally, there is a function available to perform variable\n selection, using either a form of thresholding or credible intervals.","Published":"2016-11-08","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"hot.deck","Version":"1.1","Title":"Multiple Hot-Deck Imputation","Description":"Performs multiple hot-deck imputation of categorical and continuous variables in a data frame.
","Published":"2016-01-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"HotDeckImputation","Version":"1.1.0","Title":"Hot Deck Imputation Methods for Missing Data","Description":"Hot deck imputation methods to resolve missing data.","Published":"2015-10-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"Hotelling","Version":"1.0-3","Title":"Hotelling's T^2 Test and Variants","Description":"A set of R functions which implements Hotelling's T^2 test and some variants of it. Functions are also included for Aitchison's additive log ratio and centred log ratio transformations.","Published":"2017-04-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"hotspot","Version":"1.0","Title":"Software Hotspot Analysis","Description":"Contains data for software hotspot analysis, along with a function performing the analysis itself.","Published":"2015-10-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"hotspots","Version":"1.0.2","Title":"Hot spots","Description":"Calculates a hot spot cutoff and associated analyses for\n statistical populations.","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"housingData","Version":"0.3.0","Title":"U.S. Housing Data from 2008 to 2016","Description":"Monthly median home listing, sale price per square foot, and number of units sold for 2984 counties in the contiguous United States from 2008 to January 2016.
Additional data sets containing geographical information and links to Wikipedia are also included.","Published":"2016-03-17","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"howmany","Version":"0.3-1","Title":"A lower bound for the number of correct rejections","Description":"When testing multiple hypotheses simultaneously, this\n package provides functionality to calculate a lower bound for\n the number of correct rejections (as a function of the number\n of rejected hypotheses), which holds simultaneously (with high\n probability) for all possible numbers of rejections. As a\n special case, a lower bound for the total number of false null\n hypotheses can be inferred. Dependent test statistics can be\n handled for multiple tests of associations. For independent\n test statistics, it is sufficient to provide a list of\n p-values.","Published":"2012-06-01","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"HPbayes","Version":"0.1","Title":"Heligman Pollard mortality model parameter estimation using\nBayesian Melding with Incremental Mixture Importance Sampling","Description":"This package provides all the functions necessary to\n estimate the 8 parameters of the Heligman Pollard mortality\n model using a Bayesian Melding procedure with IMIS as well as\n to convert those parameters into age-specific probabilities of\n death and a corresponding life table.","Published":"2012-10-29","License":"Unlimited","snapshot_date":"2017-06-23"} {"Package":"hpcwld","Version":"0.5","Title":"High Performance Cluster Models Based on Kiefer-Wolfowitz\nRecursion","Description":"Probabilistic models describing the behavior \n\tof workload and queue on a High Performance Cluster and computing GRID \n\tunder a FIFO service discipline based on a modified Kiefer-Wolfowitz \n\trecursion.
Also sample data for inter-arrival times, service times, \n\tnumber of cores per task and waiting times of the HPC of the Karelian \n\tResearch Centre are included; measurements took place from 06/03/2009 to 02/30/2011.\n\tFunctions are provided to import/export workload traces in Standard Workload Format (swf).\n\tThe stability condition of the model may be verified either exactly or approximately.","Published":"2015-02-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"hpoPlot","Version":"2.4","Title":"Functions for Plotting HPO Terms","Description":"Collection of functions for manipulating sets of HPO terms and\n plotting them with various options.","Published":"2015-12-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"hqmisc","Version":"0.1-1","Title":"Miscellaneous convenience functions and dataset","Description":"This package contains some miscellaneous convenience functions, \n\tto create a matrix of dummy columns from a factor, \n\tto determine whether x lies in the range [a,b], \n\tto add a rectangular bracket to an existing plot, \n\tand to convert frequencies between Hz, semitones, mel and Bark. \n\tThis package also contains an example data set of a stratified sample\n\tof 80 talkers of Dutch.
","Published":"2014-03-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"hqreg","Version":"1.4","Title":"Regularization Paths for Lasso or Elastic-Net Penalized Huber\nLoss Regression and Quantile Regression","Description":"Efficient algorithms for fitting regularization paths for lasso or elastic-net penalized regression models with Huber loss, quantile loss or squared loss.","Published":"2017-02-16","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"hrbrthemes","Version":"0.1.0","Title":"Additional Themes, Theme Components and Utilities for 'ggplot2'","Description":"A compilation of extra 'ggplot2' themes, scales and utilities, including a \n spell check function for plot label fields and an overall emphasis on typography. \n A copy of the 'Google' font 'Roboto Condensed' \n is also included to support one of the typography-oriented themes.","Published":"2017-02-26","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"HRM","Version":"0.5.1","Title":"High-Dimensional Repeated Measures","Description":"Methods for testing main and interaction effects in possibly\n high-dimensional repeated measures in factorial designs.
The observations\n of the subjects are assumed to be multivariate normal.\n It is possible to use up to 2 whole- and 3 subplot factors.","Published":"2017-03-21","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"HRQoL","Version":"1.0","Title":"Health Related Quality of Life Analysis","Description":"Offers tools and modelling approaches for binomial data with overdispersion, with particular interest in Health Related Quality of Life (HRQoL) questionnaire regression analysis.","Published":"2017-02-12","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"hrr","Version":"1.1.1","Title":"Horizontal rule for the R language","Description":"Print beautiful horizontal rules in your R scripts.","Published":"2014-03-10","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"HSAR","Version":"0.4.0","Title":"Hierarchical Spatial Autoregressive Model (HSAR)","Description":"A library of the Hierarchical Spatial Autoregressive Model (HSAR), based on a Bayesian Markov Chain Monte Carlo (MCMC) algorithm. ","Published":"2016-05-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"HSAUR","Version":"1.3-8","Title":"A Handbook of Statistical Analyses Using R (1st Edition)","Description":"Functions, data sets, analyses and examples from the book \n `A Handbook of Statistical Analyses Using R' (Brian S. Everitt and Torsten\n Hothorn, Chapman & Hall/CRC, 2006). The first chapter\n of the book, which is entitled `An Introduction to R', \n is completely included in this package; for all other chapters,\n a vignette containing all data analyses is available.","Published":"2017-06-21","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"HSAUR2","Version":"1.1-16","Title":"A Handbook of Statistical Analyses Using R (2nd Edition)","Description":"Functions, data sets, analyses and examples from the \n second edition of the book \n `A Handbook of Statistical Analyses Using R' (Brian S.
Everitt and Torsten\n Hothorn, Chapman & Hall/CRC, 2008). The first chapter\n of the book, which is entitled `An Introduction to R', \n is completely included in this package; for all other chapters,\n a vignette containing all data analyses is available. In addition,\n the package contains Sweave code for producing slides for selected\n chapters (see HSAUR2/inst/slides).","Published":"2017-06-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"HSAUR3","Version":"1.0-7","Title":"A Handbook of Statistical Analyses Using R (3rd Edition)","Description":"Functions, data sets, analyses and examples from the \n third edition of the book \n `A Handbook of Statistical Analyses Using R' (Torsten Hothorn and Brian S.\n Everitt, Chapman & Hall/CRC, 2014). The first chapter\n of the book, which is entitled `An Introduction to R', \n is completely included in this package; for all other chapters,\n a vignette containing all data analyses is available. In addition,\n Sweave source code for slides of selected chapters is included in \n this package (see HSAUR3/inst/slides).","Published":"2017-06-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"hsdar","Version":"0.5.1","Title":"Manage, Analyse and Simulate Hyperspectral Data","Description":"Transformation of reflectance spectra, calculation of vegetation indices and red edge parameters, spectral resampling for hyperspectral remote sensing, simulation of reflectance and transmittance using the leaf reflectance model PROSPECT and the canopy reflectance model PROSAIL.","Published":"2016-12-09","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"hSDM","Version":"1.4","Title":"hierarchical Bayesian species distribution models","Description":"hSDM is an R package for estimating parameters of hierarchical Bayesian species distribution models.
Such models allow interpreting the observations (occurrence and abundance of a species) as a result of several hierarchical processes including ecological processes (habitat suitability, spatial dependence and anthropogenic disturbance) and observation processes (species detectability). Hierarchical species distribution models are essential for accurately characterizing the environmental response of species, predicting their probability of occurrence, and assessing uncertainty in the model results.","Published":"2014-07-02","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"hsicCCA","Version":"1.0","Title":"Canonical Correlation Analysis based on Kernel Independence\nMeasures","Description":"Canonical correlation analysis that extracts nonlinear\n correlation through the use of Hilbert Schmidt Independence\n Criterion and Centered Kernel Target Alignment.","Published":"2013-03-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"hsm","Version":"0.1","Title":"A Path-Based BCD for Proximal Function of Latent Group Lasso","Description":"Implementation of the block coordinate descent procedure for\n solving the proximal function of latent group Lasso, highlighted by\n decomposing a DAG into several non-overlapping path graphs, and getting\n closed-form solution for each path graph. 
The procedure was introduced\n as Algorithm 4 in Yan and Bien (2015) \n \"Hierarchical Sparse Modeling: A Choice of Two Regularizers\", and the\n closed-form solution for each path graph is solved in Algorithm 3 of\n that paper.","Published":"2016-06-09","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"hsmm","Version":"0.4","Title":"Hidden Semi Markov Models","Description":"A package for computation of hidden semi-Markov models.","Published":"2013-04-25","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"hsphase","Version":"2.0.1","Title":"Phasing, Pedigree Reconstruction, Sire Imputation and\nRecombination Events Identification of Half-sib Families Using\nSNP Data","Description":"Identification of recombination events, haplotype reconstruction, sire imputation and pedigree reconstruction using half-sib family SNP data.","Published":"2016-12-05","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"HSROC","Version":"2.1.8","Title":"Meta-Analysis of Diagnostic Test Accuracy when Reference Test is\nImperfect","Description":"Implements a model for joint meta-analysis of sensitivity and specificity of the diagnostic test under evaluation, while taking into account the possibly imperfect sensitivity and specificity of the reference test. This hierarchical model accounts for both within- and between-study variability. Estimation is carried out using a Bayesian approach, implemented via a Gibbs sampler. The model can be applied in situations where more than one reference test is used in the selected studies.","Published":"2015-02-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"HSSVD","Version":"1.2","Title":"Biclustering with Heterogeneous Variance","Description":"A data mining tool for discovering subgroups of patients and genes that simultaneously display unusual levels of variability compared to other genes and patients.
Based on sparse singular value decomposition (SSVD), the method can detect both mean and variance biclusters in the presence of heterogeneous residual variance.","Published":"2014-12-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"htdp","Version":"0.1.4","Title":"Horizontal Time Dependent Positioning","Description":"Provides bindings to the National Geodetic Survey (NGS) Horizontal\n Time Dependent Positioning (HTDP) utility, v3.2.5, written by Richard Snay,\n Chris Pearson, and Jarir Saleh of NGS. HTDP is a utility that allows users to\n transform positional coordinates across time and between spatial reference\n frames. See for more\n information.","Published":"2016-09-19","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"htmltab","Version":"0.7.1","Title":"Assemble Data Frames from HTML Tables","Description":"HTML tables are a valuable data source but extracting and recasting\n these data into a useful format can be tedious. This package allows one to collect\n structured information from HTML tables. It is similar to readHTMLTable()\n of the XML package but provides three major advantages. First, the function\n automatically expands row and column spans in the header and body cells.\n Second, users are given more control over the identification of header and body\n rows which will end up in the R table, including semantic header information\n that appears throughout the body. Third, the function preprocesses table code,\n corrects common types of malformations, removes unneeded parts and so helps to\n alleviate the need for tedious post-processing.","Published":"2016-12-29","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"htmlTable","Version":"1.9","Title":"Advanced Tables for Markdown/HTML","Description":"Tables with state-of-the-art layout elements such as row spanners,\n column spanners, table spanners, zebra striping, and more.
While allowing\n advanced layout, the underlying css-structure is simple in order to maximize\n compatibility with word processors such as 'MS Word' or 'LibreOffice'. The package\n also contains a few text formatting functions that help outputting text\n compatible with HTML/'LaTeX'.","Published":"2017-01-26","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"htmltidy","Version":"0.3.1","Title":"Tidy Up and Test XPath Queries on HTML and XML Content","Description":"HTML documents can be beautiful and pristine. They can also be\n wretched, evil, malformed demon-spawn. Now, you can tidy up that HTML and XHTML\n before processing it with your favorite angle-bracket crunching tools, going beyond\n the limited tidying that 'libxml2' affords in the 'XML' and 'xml2' packages and\n taming even the ugliest HTML code generated by the likes of Google Docs and Microsoft\n Word. It's also possible to use the functions provided to format or \"pretty print\"\n HTML content as it is being tidied. Utilities are also included that make it \n possible to view formatted and \"pretty printed\" HTML/XML\n content from HTML/XML document objects, nodes, node sets and plain character HTML/XML\n using 'vkbeautify' (by Vadim Kiryukhin) and 'highlight.js' (by Ivan Sagalaev).\n Also (optionally) enables filtering of nodes via XPath or viewing an HTML/XML document\n in \"tree\" view using 'xml-viewer' (by Julian Gruber). 
See\n and \n for more information about 'vkbeautify'\n and 'xml-viewer', respectively.","Published":"2017-02-02","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"htmltools","Version":"0.3.6","Title":"Tools for HTML","Description":"Tools for HTML generation and output.","Published":"2017-04-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"HTMLUtils","Version":"0.1.7","Title":"Facilitates Automated HTML Report Creation","Description":"Facilitates automated HTML report creation, in particular\n framed HTML pages and dynamically sortable tables.","Published":"2015-01-17","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"htmlwidgets","Version":"0.8","Title":"HTML Widgets for R","Description":"A framework for creating HTML widgets that render in various\n contexts including the R console, 'R Markdown' documents, and 'Shiny'\n web applications.","Published":"2016-11-09","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"htree","Version":"0.1.1","Title":"Historical Tree Ensembles for Longitudinal Data","Description":"Historical regression trees are an extension of standard trees, \n\tproducing a non-parametric estimate of how the response depends on \n\tall of its prior realizations as well as that of any time-varying predictor \n\tvariables. The method applies equally to regularly as well as irregularly \n\tsampled data. The package implements random forest and boosting ensembles \n\tbased on historical regression trees, suitable for longitudinal data. 
","Published":"2017-04-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"hts","Version":"5.1.4","Title":"Hierarchical and Grouped Time Series","Description":"Methods for analysing and forecasting hierarchical and grouped time\n series.","Published":"2017-06-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"HTSCluster","Version":"2.0.8","Title":"Clustering High-Throughput Transcriptome Sequencing (HTS) Data","Description":"A Poisson mixture model is implemented to cluster genes from high-\n throughput transcriptome sequencing (RNA-seq) data. Parameter estimation is\n performed using either the EM or CEM algorithm, and the slope heuristics are\n used for model selection (i.e., to choose the number of clusters).","Published":"2016-05-26","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"HTSSIP","Version":"1.1.1","Title":"High Throughput Sequencing of Stable Isotope Probing Data\nAnalysis","Description":"Functions for analyzing high throughput sequencing \n stable isotope probing (HTS-SIP) data.\n Analyses include high resolution stable isotope probing (HR-SIP),\n multi-window high resolution stable isotope probing (MW-HR-SIP), \n and quantitative stable isotope probing (q-SIP). ","Published":"2017-05-23","License":"GPL-2 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"httk","Version":"1.6","Title":"High-Throughput Toxicokinetics","Description":"Functions and data tables for simulation and statistical analysis of chemical toxicokinetics (\"TK\") using data obtained from relatively high throughput, in vitro studies. Both physiologically-based (\"PBTK\") and empirical (e.g., one compartment) \"TK\" models can be parameterized for several hundred chemicals and multiple species. These models are solved efficiently, often using compiled (C-based) code. A Monte Carlo sampler is included for simulating biological variability and measurement limitations. 
Functions are also provided for exporting \"PBTK\" models to \"SBML\" and \"JARNAC\" for use with other simulation software. These functions and data provide a set of tools for in vitro-in vivo extrapolation (\"IVIVE\") of high throughput screening data (e.g., ToxCast) to real-world exposures via reverse dosimetry (also known as \"RTK\").","Published":"2017-06-08","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"httpcache","Version":"1.0.0","Title":"Query Cache for HTTP Clients","Description":"In order to improve performance for HTTP API clients, 'httpcache'\n provides simple tools for caching and invalidating cache. It includes the\n HTTP verb functions GET, PUT, PATCH, POST, and DELETE, which are drop-in\n replacements for those in the 'httr' package. These functions are cache-aware\n and provide default settings for cache invalidation suitable for RESTful\n APIs; the package also enables custom cache-management strategies.\n Finally, 'httpcache' includes a basic logging framework to facilitate the\n measurement of HTTP request time and cache performance.","Published":"2017-01-03","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"httpcode","Version":"0.2.0","Title":"'HTTP' Status Code Helper","Description":"Find and explain the meaning of 'HTTP' status codes.\n Functions included for searching for codes by full or partial number,\n by message, and get appropriate dog and cat images for many\n status codes.","Published":"2016-11-14","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"httping","Version":"0.1.0","Title":"'Ping' 'URLs' to Time 'Requests'","Description":"A suite of functions to ping 'URLs' and to time\n 'HTTP' 'requests'. Designed to work with 'httr'.","Published":"2016-01-03","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"httpRequest","Version":"0.0.10","Title":"Basic HTTP Request","Description":"HTTP Request protocols. 
Implements the GET, POST and multipart POST request.","Published":"2014-09-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"httptest","Version":"2.0.0","Title":"A Test Environment for HTTP Requests","Description":"Testing code and packages that communicate with remote servers can\n be painful. Dealing with authentication, bootstrapping server state,\n cleaning up objects that may get created during the test run, network\n flakiness, and other complications can make testing seem too costly to\n bother with. But it doesn't need to be that hard. This package enables one\n to test all of the logic on the R sides of the API in your package without\n requiring access to the remote service. Importantly, it provides three test\n contexts that mock the network connection in different ways, and it offers\n additional expectations to assert that HTTP requests were--or were\n not--made. Using these tools, one can test that code is making the intended\n requests and that it handles the expected responses correctly, all without\n depending on a connection to a remote API.","Published":"2017-06-05","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"httpuv","Version":"1.3.3","Title":"HTTP and WebSocket Server Library","Description":"Provides low-level socket and protocol support for handling\n HTTP and WebSocket requests directly from within R. It is primarily\n intended as a building block for other packages, rather than making it\n particularly easy to create complete web applications using httpuv alone.\n httpuv is built on top of the libuv and http-parser C libraries, both of\n which were developed by Joyent, Inc. 
(See LICENSE file for libuv and\n http-parser license information.)","Published":"2015-08-04","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"httr","Version":"1.2.1","Title":"Tools for Working with URLs and HTTP","Description":"Useful tools for working with HTTP organised by HTTP verbs\n (GET(), POST(), etc). Configuration functions make it easy to control\n additional request components (authenticate(), add_headers() and so on).","Published":"2016-07-03","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"huge","Version":"1.2.7","Title":"High-Dimensional Undirected Graph Estimation","Description":"Provides a general framework for\n high-dimensional undirected graph estimation. It integrates\n data preprocessing, neighborhood screening, graph estimation,\n and model selection techniques into a pipeline. In the\n preprocessing stage, the nonparanormal (npn) transformation is\n applied to help relax the normality assumption. In the graph\n estimation stage, the graph structure is estimated by\n Meinshausen-Buhlmann graph estimation or the graphical lasso,\n and both methods can be further accelerated by the lossy\n screening rule preselecting the neighborhood of each variable\n by correlation thresholding. We target high-dimensional data\n analysis, usually with d >> n, and the computation is\n memory-optimized using the sparse matrix output. We also\n provide a computationally efficient approach, correlation\n thresholding graph estimation.
Three\n regularization/thresholding parameter selection methods are\n included in this package: (1) stability approach for\n regularization selection, (2) rotation information criterion, and (3)\n extended Bayesian information criterion, which is only available\n for the graphical lasso.","Published":"2015-09-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"HUM","Version":"1.0","Title":"compute HUM value and visualize ROC curves","Description":"Tools for computing HUM (Hypervolume Under the Manifold) value to estimate features' ability\n to discriminate the class labels, and for visualizing the ROC curve for two or three class labels.","Published":"2014-01-20","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"humanFormat","Version":"1.0","Title":"Human-friendly formatting functions","Description":"Format quantities of time or bytes into human-friendly strings.","Published":"2013-11-25","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"humaniformat","Version":"0.6.0","Title":"A Parser for Human Names","Description":"Human names are complicated and nonstandard things. Humaniformat,\n which is based on Anthony Ettinger's 'humanparser' project (https://github.com/\n chovy/humanparser) provides functions for parsing human names, making a best-\n guess attempt to distinguish sub-components such as prefixes, suffixes, middle\n names and salutations.","Published":"2016-04-24","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"humarray","Version":"1.1","Title":"Simplify Analysis and Annotation of Human Microarray Datasets","Description":"Utilises GRanges, data.frame or IRanges objects. Integrates gene annotation for ImmunoChip (or your custom chip) with function calls. Intuitive wrappers for annotation lookup (gene lists, exon ranges, etc) and conversion (e.g., between build 36 and 37 coordinates). Conversion between ensembl and HGNC gene ids, chip ids to rs-ids for SNP-arrays.
Retrieval of chromosome and position for gene, band or SNP-ids, or reverse lookup. Simulation functions for ranges objects. ","Published":"2017-01-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"humidity","Version":"0.1.1","Title":"An R Package for Calculating Water Vapor Measures from\nTemperature and Relative Humidity","Description":"Vapor pressure, absolute humidity, specific humidity, and mixing ratio are commonly used water vapor measures in meteorology. This R package provides functions for calculating saturation vapor pressure (hPa), partial water vapor pressure (Pa), absolute humidity (kg/m^3), specific humidity (kg/kg), and mixing ratio (kg/kg) from temperature (K) and relative humidity (%).","Published":"2016-07-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"hunspell","Version":"2.5","Title":"High-Performance Stemmer, Tokenizer, and Spell Checker for R","Description":"A spell checker and morphological analyzer library designed for\n languages with rich morphology and complex word compounding or character\n encoding. The package can check and analyze individual words as well as\n search for incorrect words within a text, latex, html or xml document. Use\n the 'devtools' package to spell check R documentation with 'hunspell'.","Published":"2017-05-21","License":"GPL-2 | LGPL-2.1 | MPL-1.1","snapshot_date":"2017-06-23"} {"Package":"HURDAT","Version":"0.1.0","Title":"Hurricane Re-Analysis Project","Description":"Scraped dataset of the Hurricane Research Division's Hurricane \n Re-Analysis Project known as HURDAT. Storm details are available for most \n known hurricanes and tropical storms for the Atlantic and northeastern \n Pacific ocean (northwestern hemisphere). 
See \n for more information.","Published":"2017-05-15","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"hurricaneexposure","Version":"0.0.1","Title":"Explore and Map County-Level Hurricane Exposure in the United\nStates","Description":"Allows users to create time series of tropical storm\n exposure histories for chosen counties for a number of hazard metrics (wind, \n rain, distance from the storm, etc.). This package interacts with data available \n through the 'hurricaneexposuredata' package, which is available in a 'drat'\n repository. To access this data package, run 'install.packages(\"hurricaneexposuredata\", \n repos = \"https://geanders.github.io/drat/\", type = \"source\")'. The size of the \n 'hurricaneexposuredata' package is approximately 25 MB. This work was supported in \n part by grants from the National Institute of Environmental Health Sciences \n (R00ES022631), the National Science Foundation (1331399), and a NASA Applied \n Sciences Program/Public Health Program Grant (NNX09AV81G).","Published":"2017-01-28","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"huxtable","Version":"0.3.0","Title":"Simply Create LaTeX and HTML Tables","Description":"Creates HTML and LaTeX tables. Provides similar \n functionality to 'xtable', but does more, with a simpler interface.","Published":"2017-05-18","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"HW.pval","Version":"1.0","Title":"Testing Hardy-Weinberg Equilibrium for Multiallelic Genes","Description":"HW.pval calculates plain and fully conditional\n root-mean-square, chi-square, and log likelihood-ratio P-values\n for the user-provided genotypic counts to be consistent with\n the Hardy-Weinberg equilibrium model. 
For further information\n on the Hardy-Weinberg equilibrium model and the pseudocode,\n refer to the paper \"Testing Hardy-Weinberg equilibrium with a\n simple root-mean-square statistic\" by Rachel Ward.","Published":"2012-07-11","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"hwde","Version":"0.67","Title":"Models and Tests for Departure from Hardy-Weinberg Equilibrium\nand Independence Between Loci","Description":"Fits models for genotypic disequilibria, as described in\n Huttley and Wilson (2000), Weir (1996) and Weir and Wilson (1986).\n Contrast terms are available that account for first order interactions\n between loci. Also implements, for a single locus in a single\n population, a conditional exact test for Hardy-Weinberg equilibrium.","Published":"2016-01-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"HWEBayes","Version":"1.4","Title":"Bayesian investigation of Hardy-Weinberg Equilibrium via\nestimation and testing","Description":"Estimation and testing of HWE using Bayesian methods.\n Three models are currently considered: HWE, a model\n parameterized in terms of the allele frequencies and a single\n inbreeding coefficient f, and the saturated model. Testing is\n based on Bayes factors.","Published":"2013-12-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"HWEintrinsic","Version":"1.2.2","Title":"Objective Bayesian Testing for the Hardy-Weinberg Equilibrium\nProblem","Description":"General (multi-allelic) Hardy-Weinberg equilibrium problem from an objective Bayesian testing standpoint. This aim is achieved through the identification of a class of priors specifically designed for this testing problem. A class of intrinsic priors under the full model is considered. This class is indexed by a tuning quantity, the training sample size, as discussed in Consonni, Moreno and Venturini (2010). 
These priors are objective, satisfy Savage's continuity condition and have proved to behave extremely well for many statistical testing problems.","Published":"2015-07-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"hwriter","Version":"1.3.2","Title":"HTML Writer - Outputs R objects in HTML format","Description":"Easy-to-use and versatile functions to output R objects in\n HTML format.","Published":"2014-09-10","License":"LGPL-2.1","snapshot_date":"2017-06-23"} {"Package":"hwwntest","Version":"1.3","Title":"Tests of White Noise using Wavelets","Description":"Provides methods to test whether a time series is consistent\n\twith white noise.","Published":"2015-01-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"HWxtest","Version":"1.1.7","Title":"Exact Tests for Hardy-Weinberg Proportions","Description":"Tests whether a set of genotype counts fits the HW expectations.\n Exact tests performed by an efficient algorithm. Included test statistics\n are likelihood ratio, probability, U-score and Pearson's X2.","Published":"2016-01-13","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"hybridEnsemble","Version":"1.0.0","Title":"Build, Deploy and Evaluate Hybrid Ensembles","Description":"Functions to build and deploy a hybrid ensemble consisting of eight different sub-ensembles: bagged logistic regressions, random forest, stochastic boosting, kernel factory, bagged neural networks, bagged support vector machines, rotation forest, and bagged k-nearest neighbors. Functions to cross-validate the hybrid ensemble and plot and summarize the results are also provided. There is also a function to assess the importance of the predictors.","Published":"2015-05-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"hybridHclust","Version":"1.0-5","Title":"Hybrid Hierarchical Clustering","Description":"Hybrid hierarchical clustering via mutual clusters.
A mutual cluster is a set of points closer to each other than to all other points. Mutual clusters are used to enrich top-down hierarchical clustering.","Published":"2015-07-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"HybridMC","Version":"0.2","Title":"Implementation of the Hybrid Monte Carlo and Multipoint Hybrid\nMonte Carlo sampling techniques","Description":"This package is an R implementation of the Hybrid Monte\n Carlo and Multipoint Hybrid Monte Carlo sampling techniques\n described in Liu (2001): \"Monte Carlo Strategies in Computing\".","Published":"2009-06-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"hybridModels","Version":"0.2.9","Title":"Stochastic Hybrid Models in Dynamic Networks","Description":"Simulates stochastic hybrid models for transmission of infectious\n diseases in dynamic networks.","Published":"2016-11-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"HydeNet","Version":"0.10.5","Title":"Hybrid Bayesian Networks Using R and JAGS","Description":"Facilities for easy implementation of hybrid Bayesian networks\n using R. Bayesian networks are directed acyclic graphs representing joint\n probability distributions, where each node represents a random variable and\n each edge represents conditionality. The full joint distribution is therefore\n factorized as a product of conditional densities, where each node is assumed\n to be independent of its non-descendents given information on its parent nodes.\n Since exact, closed-form algorithms are computationally burdensome for inference\n within hybrid networks that contain a combination of continuous and discrete\n nodes, particle-based approximation techniques like Markov Chain Monte Carlo\n are popular. We provide a user-friendly interface to constructing these networks\n and running inference using the 'rjags' package. 
Econometric analyses (maximum\n expected utility under competing policies, value of information) involving\n decision and utility nodes are also supported.","Published":"2017-01-13","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"hydroApps","Version":"0.1-1","Title":"Tools and models for hydrological applications","Description":"Package providing tools for hydrological applications and models developed for regional analysis in Northwestern Italy.","Published":"2014-05-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"hydrogeo","Version":"0.6-1","Title":"Groundwater Data Presentation and Interpretation","Description":"Contains one function for drawing Piper diagrams (also\n called Piper-Hill diagrams) of water analyses for major ions.","Published":"2017-03-12","License":"BSD_2_clause + file LICENCE","snapshot_date":"2017-06-23"} {"Package":"hydroGOF","Version":"0.3-8","Title":"Goodness-of-fit functions for comparison of simulated and\nobserved hydrological time series","Description":"S3 functions implementing both statistical and graphical goodness-of-fit measures between observed and simulated values, mainly oriented to be used during the calibration, validation, and application of hydrological models. Missing values in observed and/or simulated values can be removed before computations. Comments / questions / collaboration of any kind are very welcome.","Published":"2014-02-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"HydroMe","Version":"2.0","Title":"R codes for estimating water retention and infiltration model\nparameters using experimental data","Description":"This package is version 2 of the HydroMe v.1 package. It\n estimates the parameters of infiltration and water retention\n models by curve fitting. The models considered are those\n that are commonly used in soil science.
It has new models for\n the water retention characteristic curve and fixes errors present in\n HydroMe v.1.","Published":"2013-05-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"hydroPSO","Version":"0.3-4","Title":"Particle Swarm Optimisation, with focus on Environmental Models","Description":"This package implements a state-of-the-art version of the Particle Swarm Optimisation (PSO) algorithm (SPSO-2011 and SPSO-2007 capable). hydroPSO can be used as a replacement for the 'optim' R function for (global) optimization of non-smooth and non-linear functions. However, the main focus of hydroPSO is the calibration of environmental and other real-world models that need to be executed from the system console. hydroPSO is model-independent, allowing the user to easily interface any computer simulation model with the calibration engine (PSO). hydroPSO communicates with the model through the model's own input and output files, without requiring access to the model's source code. Several PSO variants and controlling options are included to fine-tune the performance of the calibration engine to different calibration problems. An advanced sensitivity analysis function together with user-friendly plotting summaries facilitate the interpretation and assessment of the calibration results. hydroPSO is parallel-capable, to alleviate the computational burden of complex models with \"long\" execution time.
Bug reports/comments/questions are very welcome (in English, Spanish or Italian).","Published":"2014-04-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"hydrostats","Version":"0.2.5","Title":"Hydrologic Indices for Daily Time Series Data","Description":"Calculates a suite of hydrologic indices for daily time series data that are widely used in hydrology and stream ecology.","Published":"2016-11-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"hydroTSM","Version":"0.4-2-1","Title":"Time series management, analysis and interpolation for\nhydrological modelling","Description":"S3 functions for management, analysis, interpolation and plotting of time series used in hydrology and related environmental sciences. In particular, this package is highly oriented to hydrological modelling tasks. The focus of this package is on providing a collection of tools useful for the daily work of hydrologists (although an effort was made to optimise each function as much as possible, functionality has had priority over speed). Bugs / comments / questions / collaboration of any kind are very welcome, and in particular, datasets that can be included in this package for academic purposes.","Published":"2014-01-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"hyfo","Version":"1.3.9","Title":"Hydrology and Climate Forecasting","Description":"Focuses on data processing and visualization in hydrology and\n climate forecasting. Main functions include data extraction, data downscaling,\n data resampling, gap filling of precipitation, bias correction of forecasting\n data, flexible time series plots, and spatial map generation.
It is a good pre-processing\n and post-processing tool for hydrological and hydraulic modellers.","Published":"2017-03-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"hyper.fit","Version":"1.0.3","Title":"Generic N-Dimensional Hyperplane Fitting with Heteroscedastic\nCovariant Errors and Intrinsic Scatter","Description":"Includes two main high level codes for hyperplane fitting (hyper.fit) and visualising (hyper.plot2d / hyper.plot3d). In simple terms this allows the user to produce robust 1D linear fits for 2D x vs y type data, and robust 2D plane fits to 3D x vs y vs z type data. This hyperplane fitting works generically for any N-1 hyperplane model being fit to an N-dimensional dataset. All fits include intrinsic scatter in the generative model orthogonal to the hyperplane.","Published":"2016-08-01","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"HyperbolicDist","Version":"0.6-2","Title":"The hyperbolic distribution","Description":"This package provides functions for the hyperbolic and\n related distributions. Density, distribution and quantile\n functions and random number generation are provided for the\n hyperbolic distribution, the generalized hyperbolic\n distribution, the generalized inverse Gaussian distribution and\n the skew-Laplace distribution.
Additional functionality is\n provided for the hyperbolic distribution, including fitting of\n the hyperbolic to data.","Published":"2009-09-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"hyperdirichlet","Version":"1.5-1","Title":"A Generalization of the Dirichlet Distribution","Description":"A suite of routines for the hyperdirichlet distribution.","Published":"2017-05-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"hypergea","Version":"1.3.3","Title":"Hypergeometric Tests","Description":"Performs (exact) hypergeometric tests on IxJ and 2x2x2 contingency tables using parallelised C code.","Published":"2016-09-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"hypergeo","Version":"1.2-13","Title":"The Gauss Hypergeometric Function","Description":"The Gaussian hypergeometric function for complex numbers.","Published":"2016-04-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"hypersampleplan","Version":"0.1.1","Title":"Attribute Sampling Plan with Exact Hypergeometric Probabilities\nusing Chebyshev Polynomials","Description":"Implements an algorithm for efficient and exact calculation of hypergeometric \n and binomial probabilities using Chebyshev polynomials, while other algorithm use an \n approximation when N is large. A useful applications is also considered in this package \n for the construction of attribute sampling plans which is an important field of statistical\n quality control. The quantile, and the confidence limit for the attribute sampling plan are\n also implemented in this package. The hypergeometric distribution can be represented in \n terms of Chebyshev polynomials. This representation is particularly useful in the calculation\n of exact values of hypergeometric variables. 
","Published":"2017-06-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"hyperSMURF","Version":"1.1.2","Title":"Hyper-Ensemble Smote Undersampled Random Forests","Description":"Machine learning supervised method to learn rare genomic features in imbalanced genetic data sets. This method can be also applied to classify or rank examples characterized by a high imbalance between the minority and majority class. hyperSMURF adopts a hyper-ensemble (ensemble of ensembles) approach, undersampling of the majority class and oversampling of the minority class to learn highly imbalanced data. Both single-core and parallel multi-core version of hyperSMURF are implemented.","Published":"2016-08-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"hyperSpec","Version":"0.98-20161118","Title":"Work with Hyperspectral Data, i.e. Spectra + Meta Information\n(Spatial, Time, Concentration, ...)","Description":"Comfortable ways to work with hyperspectral data sets.\n I.e. spatially or time-resolved spectra, or spectra with any other kind\n of information associated with each of the spectra. The spectra can be data\n as obtained in XRF, UV/VIS, Fluorescence, AES, NIR, IR, Raman, NMR, MS,\n etc. More generally, any data that is recorded over a discretized variable,\n e.g. absorbance = f (wavelength), stored as a vector of absorbance values\n for discrete wavelengths is suitable.","Published":"2016-11-19","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"hypervolume","Version":"1.4.1","Title":"High-Dimensional Kernel Density Estimation and Geometry\nOperations","Description":"Estimates the shape and volume of high-dimensional datasets and performs set operations: intersection / overlap, union, unique components, inclusion test, and hole detection. Uses stochastic geometry approach to high-dimensional kernel density estimation. Builds n-dimensional convex hulls (polytopes). 
Can measure the n-dimensional ecological hypervolume and perform species distribution modeling.","Published":"2015-11-18","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"hyphenatr","Version":"0.3.0","Title":"Tools to Hyphenate Strings Using the 'Hunspell' Hyphenation\nLibrary","Description":"Identifying hyphenation points in strings can be useful for both\n text processing and display functions. The 'Hunspell' hyphenation library\n provides tools to perform hyphenation\n using custom language rule dictionaries. Many hyphenation rules\n dictionaries are included. Words can be hyphenated directly or split into\n hyphenated component strings for further processing.","Published":"2016-03-18","License":"GPL-3 + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"HyPhy","Version":"1.0","Title":"Macroevolutionary phylogenetic analysis of species trees and gene\ntrees","Description":"A Bay Area high level phylogenetic analysis package mostly\n using the birth-death process. Analysis of species tree\n branching times and simulation of species trees under a number\n of different time variable birth-death processes. Analysis of\n gene tree species tree reconciliations and simulations of gene\n trees in species trees.","Published":"2012-07-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"hypoparsr","Version":"0.1.0","Title":"Multi-Hypothesis CSV Parser","Description":"A Multi-Hypothesis CSV Parser.
Stresses your computer, not you.","Published":"2016-09-06","License":"MPL (== 2.0)","snapshot_date":"2017-06-23"} {"Package":"hypothesestest","Version":"1.0","Title":"Confidence Intervals and Tests of Statistical Hypotheses","Description":"Compute the confidence interval of the population mean\n with one sample, or of the difference of population means of two\n samples from normal distributions or t-distributions. Compute\n the confidence interval of the population variance with one sample,\n or of the difference of population variances of two samples, by\n chi-square tests. Test for the population mean, or the difference of\n two normal samples, under normality with a user-specified null\n hypothesis H0, so that one knows whether\n H0 can be rejected at significance level alpha. Perform\n chi-square tests with one or two samples that have multinomial\n distributions, using an approximate chi-square distribution\n when n is large enough.","Published":"2012-07-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"hypothesisr","Version":"0.1.1","Title":"Wrapper for the 'Hypothes.is' Web Annotation Service","Description":"Interact with the application programming interface for the web\n annotation service 'Hypothes.is' (See for more\n information.) Allows users to download data about public annotations, and\n create, retrieve, update, and delete their own annotations.","Published":"2016-07-01","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"HYRISK","Version":"1.2","Title":"Hybrid Methods for Addressing Uncertainty in RISK Assessments","Description":"Methods for addressing uncertainty in risk assessments using hybrid representations of uncertainty (probability distributions, fuzzy numbers, intervals, probability distributions with imprecise parameters). The uncertainty propagation procedure combines random sampling using the Monte Carlo method with fuzzy interval analysis of Baudrit et al. (2007).
The sensitivity analysis is based on the pinching method of Ferson and Tucker (2006) .","Published":"2017-04-04","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"hysteresis","Version":"2.5","Title":"Tools for Modeling Rate-Dependent Hysteretic Processes and\nEllipses","Description":"Fit, summarize and plot sinusoidal hysteretic processes using:\n two-step simple harmonic least squares, ellipse-specific non-linear least\n squares, the direct method, geometric least squares or linear least squares.","Published":"2015-01-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"hzar","Version":"0.2-5","Title":"Hybrid Zone Analysis using R","Description":"A collection of tools for modeling the shape of 1 dimensional clines.","Published":"2013-09-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"iadf","Version":"0.1.0","Title":"Analysis of Intra Annual Density Fluctuations","Description":"Calculate false ring proportions from data frames of intra annual \n density fluctuations.","Published":"2017-06-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"IalsaSynthesis","Version":"0.1.6","Title":"Synthesizing Information Across Collaborating Research","Description":"Synthesizes information across collaborating research. 
Created specifically for Integrative Analysis of Longitudinal Studies of Aging (IALSA).","Published":"2015-09-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"IAPWS95","Version":"1.0.0","Title":"Thermophysical Properties of Water and Steam","Description":"Functions for Water and Steam Properties based on the IAPWS Formulation\n 1995 for the Thermodynamic Properties of Ordinary Water Substance for General and\n Scientific Use and on the releases for viscosity, conductivity, surface tension and\n melting pressure.","Published":"2016-09-08","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"IASD","Version":"1.1","Title":"Model Selection for Index of Asymmetry Distribution","Description":"Calculate AICs and AICcs of a unimodal model (one normal distribution) and a bimodal model (a mixture of two normal distributions) fitted to the distribution of indices of asymmetry (IAS), and plot their densities, to help determine whether the IAS distribution is unimodal or bimodal.","Published":"2015-09-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"IAT","Version":"0.3","Title":"Cleaning and Visualizing Implicit Association Test (IAT) Data","Description":"Implements the standard D-Scoring algorithm\n (Greenwald, Banaji, & Nosek, 2003) for Implicit Association Test (IAT)\n data and includes plotting capabilities for exploring raw IAT data.","Published":"2016-04-30","License":"Apache License 2.0","snapshot_date":"2017-06-23"} {"Package":"IATScore","Version":"0.1.0","Title":"Scoring Algorithm for the Implicit Association Test (IAT)","Description":"This minimalist package is designed to quickly score raw data outputted from an Implicit Association Test (IAT; Greenwald, McGhee, & Schwartz, 1998). IAT scores are calculated as specified by Greenwald, Nosek, and Banaji (2003). Outputted values can be interpreted as effect sizes. The input function consists of three arguments. First, indicate the name of the dataset to be analyzed.
This is the only required input. Second, indicate the number of trials in your entire IAT (the default is set to 220, which is typical for most IATs). Last, indicate whether congruent trials (e.g., flowers and pleasant) or incongruent trials (e.g., guns and pleasant) were presented first for this participant (the default is set to congruent). The script will tell you how long it took to run the code, the effect size for the participant, and whether that participant should be excluded based on the criteria outlined by Greenwald et al. (2003). Data files should consist of six columns organized in order as follows: Block (0-6), trial (0-19 for training blocks, 0-39 for test blocks), category (dependent on your IAT), the type of item within that category (dependent on your IAT), a dummy variable indicating whether the participant was correct or incorrect on that trial (0=correct, 1=incorrect), and the participant’s reaction time (in milliseconds). Three sample datasets are included in this package (labeled 'IAT', 'TooFastIAT', and 'BriefIAT') to practice with.","Published":"2017-04-26","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"IATscores","Version":"0.1-2","Title":"Implicit Association Test Scores Using Robust Statistics","Description":"Compute several variations of the Implicit Association Test (IAT) scores, including the D scores (Greenwald, Nosek, Banaji, 2003) and the new scores that were developed using robust statistics (Richetin, Costantini, Perugini, and Schonbrodt, 2015).","Published":"2015-05-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"iBATCGH","Version":"1.3","Title":"Integrative Bayesian Analysis of Transcriptomic and CGH Data","Description":"Bayesian integrative models of gene expression and comparative genomic hybridization data. 
The package provides inference on copy number variations and their association with gene expression.","Published":"2015-07-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ibd","Version":"1.2","Title":"INCOMPLETE BLOCK DESIGNS","Description":"This package contains several utility functions related to incomplete block designs. The package contains a function to generate efficient incomplete block designs with given numbers of treatments, blocks and block size. The package also contains a function to generate an incomplete block design with a specified concurrence matrix. There are functions to generate balanced treatment incomplete block designs and incomplete block designs for test versus control treatment comparisons with a specified concurrence matrix. The package also allows performing analysis of variance of data and computing least square means of factors from experiments using a connected incomplete block design. Tests of hypotheses of treatment contrasts in the incomplete block design setup are supported.","Published":"2014-09-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"IBDhaploRtools","Version":"1.8","Title":"Functions for the Analysis of IBD Haplo Output","Description":"Functions to analyze, plot, and store the output of running the IBD_Haplo software package. More information regarding IBD_Haplo can be found at http://www.stat.washington.edu/thompson/Genepi/pangaea.shtml.","Published":"2015-02-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"IBDLabels","Version":"1.1","Title":"Convert Between Different IBD-State Labelling Schemes","Description":"Convert between the \"label\", \"lexicographic\", \"jacquard\" and \"vec\" (full state description vector) labelling schemes. All conversions are done to and from \"label\", as used in IBD_Haplo.
More information regarding IBD_Haplo can be found at http://www.stat.washington.edu/thompson/Genepi/pangaea.shtml.","Published":"2015-02-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ibdreg","Version":"0.2.5","Title":"Regression Methods for IBD Linkage With Covariates","Description":"A method to test genetic linkage with covariates by\n regression methods with response IBD sharing for relative\n pairs. Account for correlations of IBD statistics and\n covariates for relative pairs within the same pedigree.","Published":"2013-04-20","License":"GPL-2 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"IBDsim","Version":"0.9-7","Title":"Simulation of Chromosomal Regions Shared by Family Members","Description":"Simulation of segments shared identical-by-descent (IBD) by \n pedigree members. Using sex specific recombination rates along the human\n genome (Kong et. al (2010) ), phased chromosomes\n are simulated for all pedigree members, either by unconditional gene \n dropping or conditional on a specified IBD pattern. Additional functions \n provide summaries and further analysis of the simulated genomes.","Published":"2017-04-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ibeemd","Version":"1.0.1","Title":"Irregular-lattice based ensemble empirical mode decomposition","Description":"A data-driven and adaptive hierarchical-scale decomposition method for irregular-lattice field (represented by polygons).","Published":"2014-08-11","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"ibelief","Version":"1.2","Title":"Belief Function Implementation","Description":"Some basic functions to implement belief functions including:\n transformation between belief functions using the method introduced by\n Philippe Smets (arXiv:1304.1122 [cs.AI]), evidence combination, evidence\n discounting, decision-making, and constructing masses. Currently, thirteen\n combination rules and five decision rules are supported. 
It can also be\n used to generate different types of random masses when working on belief\n combination and conflict management.","Published":"2015-07-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"IBHM","Version":"1.1-11","Title":"Approximation using the IBHM method","Description":"Implementation of an incremental model construction method called IBHM which\n stands for Incrementally Built Heterogeneous Model. The method is designed for solving\n real number approximation problems in a highly automated fashion.","Published":"2014-01-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ibm","Version":"0.1.0","Title":"Individual Based Models in R","Description":"Implementation of some (simple) Individual Based Models and methods\n to create new ones, particularly for population dynamics models (reproduction, \n mortality and movement). The basic operations for the simulations are \n implemented in Rcpp for speed.","Published":"2016-06-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ibmcraftr","Version":"1.0.0","Title":"Toolkits to Develop Individual-Based Models in Infectious\nDisease","Description":"It provides a generic set of tools for initializing a synthetic\n population with each individual in specific disease states, and\n making transitions between those disease states according to the rates\n calculated on each timestep. The new version 1.0.0 has C++ code \n integration to make the functions run faster. It has also a higher level\n function to actually run the transitions for the number of timesteps\n that users specify. 
Additional functions will follow for changing\n attributes on demographic, health belief and movement.","Published":"2016-11-16","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ibmdbR","Version":"1.48.0","Title":"IBM in-Database Analytics for R","Description":"Functionality required to efficiently use R with\n IBM DB2(C) for Linux, Unix and Windows, IBM dashDB(C) as well as \n DB2 for z/OS (C) in conjunction with IBM DB2 Analytics Accelerator (C).\n Many basic and complex R operations are pushed down into the database, \n which removes the main memory boundary of R and allows to make full \n use of parallel processing in the underlying database. ","Published":"2016-10-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"Iboot","Version":"0.1-1","Title":"Iboot: iterated bootstrap tests and confidence sets","Description":"The package implements a general algorithm to obtain\n iterated bootstrap tests and confidence sets for a\n p-dimensional parameter based on the unstudentized version of\n the Rao statistic.","Published":"2013-02-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ibr","Version":"2.0-3","Title":"Iterative Bias Reduction","Description":"Multivariate smoothing using iterative bias reduction with kernel, thin plate splines, Duchon splines or low rank splines. ","Published":"2017-05-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"IBrokers","Version":"0.9-12","Title":"R API to Interactive Brokers Trader Workstation","Description":"Provides native R access to Interactive Brokers Trader Workstation API.","Published":"2014-09-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"iBST","Version":"1.0","Title":"Improper Bagging Survival Tree","Description":"Fit a bagging survival tree on a mixture of population (susceptible and nonsusceptible)\n using either a pseudo R2 criterion or an adjusted Logrank criterion. 
The predictor is \n evaluated using the Out Of Bag Integrated Brier Score (IBS) and several scores of importance\n are computed for variable selection. The threshold values for variable selection are \n computed using a nonparametric permutation test.","Published":"2017-01-31","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"iBUGS","Version":"0.1.4","Title":"An Interface to WinBUGS/OpenBUGS/JAGS by gWidgets","Description":"This package provides an interface to WinBUGS/OpenBUGS/JAGS\n via R2WinBUGS and R2jags. Some options will be intelligently guessed, e.g.\n the path to WinBUGS/OpenBUGS/JAGS and the data/parameter list. Users can\n specify all the options in a GUI.","Published":"2013-12-10","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"ic.infer","Version":"1.1-5","Title":"Inequality constrained inference in linear normal situations","Description":"This package implements parameter estimation in normal (linear) models under linear equality and inequality constraints and implements normal likelihood ratio tests involving inequality-constrained hypotheses. For inequality-constrained linear models, averaging over R-squared for different orderings of regressors is also included.","Published":"2014-08-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"iC10","Version":"1.1.3","Title":"A Copy Number and Expression-Based Classifier for Breast Tumours","Description":"Implementation of the classifier described in the paper 'Genome-driven integrated classification of breast cancer validated in over 7,500 samples' (Ali HR et al., Genome Biology 2014). It uses copy number and/or expression from breast cancer data, trains a pamr classifier (Tibshirani et al.)
with the features available and predicts the iC10 group.","Published":"2015-09-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"iC10TrainingData","Version":"1.0.1","Title":"Training datasets for iC10 package","Description":"Training datasets for iC10, which implements the classifier described in the paper 'Genome-driven integrated classification of breast cancer validated in over 7,500 samples' (Ali HR et al., Genome Biology 2014). It uses copy number and/or expression from breast cancer data, trains a pamr classifier (Tibshirani et al.) with the features available and predicts the iC10 group. Genomic annotation for the training dataset has been obtained from Mark Dunning's illuminaHumanv3.db package.","Published":"2014-09-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"IC2","Version":"1.0-1","Title":"Inequality and Concentration Indices and Curves","Description":"Lorenz and concentration curves; Atkinson, Generalized\n entropy and SGini indices (with decomposition).","Published":"2012-08-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ic50","Version":"1.4.2","Title":"Standardized high-throughput evaluation of cell-based compound\nscreens","Description":"Calculation of IC50 values, automatic drawing of\n dose-response curves and validation of compound screens on 96-\n and 384-well plates.","Published":"2010-02-16","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"ica","Version":"1.0-1","Title":"Independent Component Analysis","Description":"Independent Component Analysis (ICA) using various algorithms: FastICA, Information-Maximization (Infomax), and Joint Approximate Diagonalization of Eigenmatrices (JADE).","Published":"2015-08-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ICAFF","Version":"1.0.1","Title":"Imperialist Competitive Algorithm","Description":"Imperialist Competitive Algorithm (ICA)\n is a computational method that is used to solve optimization\n\t problems of different
types and it is the mathematical model\n\t and the computer simulation of human social evolution. \n The package provides a minimum value for the cost function\n\t and the best value for the optimization variables by\n\t Imperialist Competitive Algorithm. \n Users can easily define their own objective function\n\t depending on the problem at hand. \n This version has been successfully applied to solve\n\t optimization problems for continuous functions. ","Published":"2015-02-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"icamix","Version":"1.0.6","Title":"Estimation of ICA Mixture Models","Description":"Provides R functions which facilitate the estimation of ICA mixture models. We have developed and implemented the NSMM-ICA algorithm that currently integrates npEM and Fast-ICA for non-parametric estimation of ICA mixture models (Zhu, X., & Hunter, D.R., 2016 and Zhu, X., & Hunter, D.R., 2015 ). This is a new tool for unsupervised clustering.","Published":"2017-04-17","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"icaOcularCorrection","Version":"3.0.0","Title":"Independent Components Analysis (ICA) based artifact correction","Description":"Removes eye-movement and other types of known (i.e., recorded) or unknown (i.e., not recorded) artifacts using the fastICA package. The correction method proposed in this package is largely based on the method described in Flexer, Bauer, Pripfl, and Dorffner (2005). The process of correcting electro- and magneto-encephalographic data (EEG/MEG) begins by running function ``icac'', which first performs independent components analysis (ICA) to decompose the data frame into independent components (ICs) using function ``fastICA'' from the package of the same name. 
It then calculates for each trial the correlation between each IC and each one of the noise signals -- there can be one or more, e.g., vertical and horizontal electro-oculograms (VEOG and HEOG), electro-myograms (EMG), electro-cardiograms (ECG), galvanic skin responses (GSR), and other noise signals. Subsequently, portions of an IC corresponding to trials at which the correlation between it and a noise signal was at or above threshold (set to 0.4 by default; Flexer et al., 2005, p. 1001) are zeroed-out in the source matrix, ``S''. The user can then identify which ICs correlate with the noise signals the most by looking at the summary of the ``icac'' object (using function ``summary.icac''), the scalp topography of the ICs (using function ``topo_ic''), the time courses of the ICs (using functions ``plot_tric'' and ``plot_nic''), and other diagnostic plots. Once these ICs have been identified, they can be completely zeroed-out using function ``update.icac'' and the resulting correction checked using functions ``plot_avgba'' and ``plot_trba''. Some worked-out examples with R code are provided in the package vignette.","Published":"2013-07-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ICAOD","Version":"0.9.2","Title":"Imperialist Competitive Algorithm for Optimal Designs","Description":"Finding locally D-optimal, minimax D-optimal, standardized maximin D-optimal, optim-on-the-average and multiple objective optimal designs for nonlinear models. Different Fisher information matrices can also be set by the user. There are also useful functions for verifying the optimality of the designs with respect to different criteria by the equivalence theorem. ICA is a meta-heuristic evolutionary algorithm inspired by the socio-political process of humans. See Masoudi et al. 
(2016) .","Published":"2017-01-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"icapca","Version":"1.1","Title":"Mixed ICA/PCA","Description":"Implements a mixed ICA/PCA model for blind source separation, potentially with the inclusion of Gaussian sources","Published":"2014-10-20","License":"Unlimited","snapshot_date":"2017-06-23"} {"Package":"icarus","Version":"0.3.0","Title":"Calibrates and Reweights Units in Samples","Description":"Provides user-friendly tools for calibration in survey sampling.\n The package is production-oriented, and its interface is inspired by the\n popular macro 'Calmar' for SAS, so that 'Calmar' users can quickly get used to\n 'icarus'. In addition to calibration (with linear, raking and logit methods),\n 'icarus' features functions for calibration on tight bounds and penalized\n calibration.","Published":"2017-03-04","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ICBayes","Version":"1.0","Title":"Bayesian Semiparametric Models for Interval-Censored Data","Description":"Contains functions to fit Bayesian semiparametric regression survival models (proportional hazards model, proportional odds model, and probit model) to interval-censored time-to-event data.","Published":"2015-11-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ICC","Version":"2.3.0","Title":"Facilitating Estimation of the Intraclass Correlation\nCoefficient","Description":"Assist in the estimation of the Intraclass Correlation Coefficient (ICC) from variance components of a one-way analysis of variance and also estimate the number of individuals or groups necessary to obtain an ICC estimate with a desired confidence interval width.","Published":"2015-06-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ICC.Sample.Size","Version":"1.0","Title":"Calculation of Sample Size and Power for ICC","Description":"Provides functions to calculate the requisite sample size for studies where ICC is \n the primary 
outcome. It can also be used for calculation of power. In both cases it\n allows the user to test the impact of changing input variables by calculating the outcome\n for several different values of input variables. Based on the work of Zou.\n Zou, G. Y. (2012). Sample size formulas for estimating intraclass correlation coefficients with\n precision and assurance. Statistics in medicine, 31(29), 3972-3981.","Published":"2015-09-04","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"iccbeta","Version":"1.0.1","Title":"Multilevel Model Intraclass Correlation for Slope Heterogeneity","Description":"A function and vignettes for computing an intraclass correlation\n described in Aguinis & Culpepper (2015) .\n This package quantifies the share of variance in a dependent variable that\n is attributed to group heterogeneity in slopes.","Published":"2017-04-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ICCbin","Version":"1.0","Title":"Facilitates Clustered Binary Data Generation, and Estimation of\nIntracluster Correlation Coefficient (ICC) for Binary Data","Description":"Assists in generating binary clustered data, and provides estimates of the Intracluster Correlation Coefficient (ICC) for binary responses using 14 different methods and 4 different types of confidence intervals.","Published":"2016-12-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"icd","Version":"2.2","Title":"Tools for Working with ICD-9 and ICD-10 Codes, and Finding\nComorbidities","Description":"Calculate comorbidities, Charlson scores, perform fast and accurate\n validation, conversion, manipulation, filtering and comparison of ICD-9 and\n ICD-10 codes. Common ambiguities and code formats are handled. This\n package enables a work flow from raw lists of ICD codes in hospital\n billing databases to comorbidities. ICD-9 and ICD-10 comorbidity mappings\n from Quan (Deyo and Elixhauser versions), Elixhauser and AHRQ included. 
This\n package replaces 'icd9', which should be uninstalled.","Published":"2017-05-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"icd9","Version":"1.3.1","Title":"Tools for Working with ICD-9 Codes, and Finding Comorbidities","Description":"Obsolete: 'icd9' is replaced by CRAN package 'icd'.\n Calculate comorbidities, Charlson scores, perform fast and accurate\n validation, conversion, manipulation, filtering and comparison of ICD-9-CM\n (clinical modification) codes. ICD-9 codes appear numeric but leading and\n trailing zeroes, and both decimal and non-decimal \"short\" format codes\n exist. The package enables a work flow from raw lists of ICD-9 codes from\n hospital billing databases to comorbidities. ICD-9 to comorbidity mappings\n from Quan (Deyo and Elixhauser versions), Elixhauser and AHRQ included. Any\n other mapping of codes, such as ICD-10, to comorbidities can be used.","Published":"2016-04-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"icdGLM","Version":"1.0.0","Title":"EM by the Method of Weights for Incomplete Categorical Data in\nGeneralized Linear Models","Description":"Provides an estimator for generalized linear models with incomplete\n data for discrete covariates. The estimation is based on the EM algorithm by the\n method of weights by Ibrahim (1990) .","Published":"2016-07-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ICE","Version":"0.69","Title":"Iterated Conditional Expectation","Description":"Kernel Estimators for Interval-Censored Data","Published":"2013-03-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ICEbox","Version":"1.1.1","Title":"Individual Conditional Expectation Plot Toolbox","Description":"Implements Individual Conditional Expectation (ICE) plots, a tool for visualizing the model estimated by any supervised learning algorithm. 
ICE plots refine Friedman's partial dependence plot by graphing the functional relationship between the predicted response and a covariate of interest for individual observations. Specifically, ICE plots highlight the variation in the fitted values across the range of a covariate of interest, suggesting where and to what extent heterogeneities may exist.","Published":"2017-03-29","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"IceCast","Version":"1.1.0","Title":"Apply Statistical Post-Processing to Improve Sea Ice Predictions","Description":"Tools for modeling and correcting biases in sea ice predictions obtained from dynamical models.","Published":"2017-06-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ICEinfer","Version":"1.0-1","Title":"Incremental Cost-Effectiveness (ICE) Statistical Inference from\nTwo Unbiased Samples","Description":"Given two unbiased samples of patient level data on cost and effectiveness\n for a pair of treatments, make head-to-head treatment comparisons by (i) generating the\n bivariate bootstrap resampling distribution of ICE uncertainty for a specified value of\n the shadow price of health, lambda, (ii) forming the wedge-shaped ICE confidence region with\n specified confidence fraction within [0.50, 0.99] that is equivariant with respect to\n changes in lambda, (iii) coloring the bootstrap outcomes within the above confidence wedge\n with economic preferences from an ICE map with specified values of lambda, beta and gamma\n parameters, (iv) displaying VAGR and ALICE acceptability curves, and (v) illustrating variation\n in ICE preferences by displaying potentially non-linear indifference (iso-preference) curves\n from an ICE map with specified values of lambda, beta and gamma or eta parameters. 
","Published":"2014-01-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"icenReg","Version":"2.0.1","Title":"Regression Models for Interval Censored Data","Description":"Regression models for interval censored data. Currently supports\n Cox-PH, proportional odds, and accelerated failure time models. Allows for\n semi and fully parametric models (parametric only for accelerated failure\n time models) and Bayesian parametric models. Includes functions for easy visual\n diagnostics of model fits and imputation of censored data.","Published":"2017-04-19","License":"LGPL (>= 2.0, < 3)","snapshot_date":"2017-06-23"} {"Package":"icensmis","Version":"1.3.1","Title":"Study Design and Data Analysis in the Presence of Error-Prone\nDiagnostic Tests and Self-Reported Outcomes","Description":"We consider studies in which information from error-prone\n diagnostic tests or self-reports is gathered sequentially to determine the\n occurrence of a silent event. Using a likelihood-based approach\n incorporating the proportional hazards assumption, we provide functions to\n estimate the survival distribution and covariate effects. 
We also provide\n functions for power and sample size calculations for this setting.","Published":"2016-01-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"icesAdvice","Version":"1.3-1","Title":"Functions Related to ICES Advice","Description":"Functions that are related to the ICES advisory process.","Published":"2017-05-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"icesDatras","Version":"1.2-0","Title":"DATRAS Trawl Database Web Services","Description":"R interface to access the web services of the ICES (International\n Council for the Exploration of the Sea) DATRAS trawl survey\n database .","Published":"2017-01-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"icesSAG","Version":"1.3-2","Title":"Stock Assessment Graphs Database Web Services","Description":"R interface to access the web services of the ICES Stock Assessment\n Graphs database .","Published":"2017-05-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"icesTAF","Version":"1.3-2","Title":"Functions to Support the ICES Transparent Assessment Framework","Description":"Functions to support the ICES Transparent Assessment Framework\n to organize data, methods, and results used in ICES\n assessments. ICES is an organization facilitating international collaboration\n in marine science.","Published":"2017-06-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"icesVocab","Version":"1.1-2","Title":"ICES Vocabularies Database Web Services","Description":"R interface to access the RECO POX web services of the ICES (International\n Council for the Exploration of the Sea) Vocabularies database .","Published":"2016-12-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ICGE","Version":"0.3","Title":"Estimation of number of clusters and identification of atypical\nunits","Description":"ICGE is a package that helps to estimate the number of real clusters in data as well as to identify atypical units. 
The underlying methods are based on distances rather than on unit x variables.","Published":"2014-03-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ICGOR","Version":"2.0","Title":"Fit Generalized Odds Rate Hazards Model with Interval Censored\nData","Description":"The Generalized Odds Rate Hazards (GORH) model is a flexible model for fitting survival data, including the Proportional Hazards (PH) model and the Proportional Odds (PO) model as special cases. This package fits the GORH model with interval censored data.","Published":"2017-01-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"iClick","Version":"1.2","Title":"A Button-Based GUI for Financial and Economic Data Analysis","Description":"A GUI designed to support the analysis of financial-economic time\n series data.","Published":"2016-02-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"iCluster","Version":"2.1.0","Title":"Integrative clustering of multiple genomic data types","Description":"Integrative clustering of multiple genomic data types\n using a joint latent variable model.","Published":"2012-05-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"icosa","Version":"0.9.81","Title":"Global Triangular and Penta-Hexagonal Grids Based on Tessellated\nIcosahedra","Description":"Employs triangular tessellation to refine icosahedra\n defined in 3d space. The procedures can be set to provide a grid with a\n custom resolution. Both the primary triangular and their inverted penta-\n hexagonal grids are available for implementation. 
Additional functions\n are provided to position points (latitude-longitude data) on the grids,\n to allow 2D and 3D plotting, and to use raster data and shapefiles.","Published":"2017-04-18","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"icpsrdata","Version":"0.3.0","Title":"Reproducible Data Retrieval from the ICPSR Archive","Description":"Reproducible, programmatic retrieval of datasets from the\n Inter-university Consortium for Political and Social Research archive.","Published":"2016-12-08","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"icRSF","Version":"1.1","Title":"A Modified Random Survival Forest Algorithm","Description":"Implements a modification to the Random Survival Forests algorithm for obtaining variable importance in high dimensional datasets. The proposed algorithm is appropriate for settings in which a silent event is observed through sequentially administered, error-prone self-reports or laboratory based diagnostic tests. The modified algorithm incorporates a formal likelihood framework that accommodates sequentially administered, error-prone self-reports or laboratory based diagnostic tests. The original Random Survival Forests algorithm is modified by the introduction of a new splitting criterion based on a likelihood ratio test statistic.","Published":"2017-05-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ICS","Version":"1.3-0","Title":"Tools for Exploring Multivariate Data via ICS/ICA","Description":"Implementation of Tyler, Critchley, Duembgen and Oja's (JRSS B, 2009,\n ) and Oja, Sirkia and Eriksson's\n (AJS, 2006, ) method of two different\n scatter matrices to obtain an invariant coordinate system or independent\n components, depending on the underlying assumptions. 
","Published":"2016-11-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ICSNP","Version":"1.1-0","Title":"Tools for Multivariate Nonparametrics","Description":"Tools for multivariate nonparametrics: location tests based on marginal ranks, spatial median and spatial signs computation, Hotelling's T-test, and estimates of shape are implemented.","Published":"2015-09-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ICSOutlier","Version":"0.2-0","Title":"Outlier Detection Using Invariant Coordinate Selection","Description":"Multivariate outlier detection is performed using invariant coordinates where the package offers different methods to choose the appropriate components.","Published":"2016-12-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ICsurv","Version":"1.0","Title":"A package for semiparametric regression analysis of\ninterval-censored data","Description":"Currently implements the proportional hazards (PH) model. More methods under other semiparametric regression models will be included in later versions. ","Published":"2014-06-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"icsw","Version":"0.9","Title":"Inverse Compliance Score Weighting","Description":"Provides the necessary tools to estimate average treatment effects with an instrumental variable by re-weighting observations using a model of compliance. 
","Published":"2015-07-08","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ICtest","Version":"0.2","Title":"Estimating and Testing the Number of Interesting Components in\nLinear Dimension Reduction","Description":"For different linear dimension reduction methods like principal components analysis (PCA), independent components analysis (ICA) and supervised linear dimension reduction, tests and estimates for the number of interesting components (ICs) are provided.","Published":"2016-12-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ICV","Version":"1.0","Title":"Indirect Cross-Validation (ICV) for Kernel Density Estimation","Description":"Functions for computing the global and local Gaussian density estimates based on the ICV bandwidth. See the article of Savchuk, O.Y., Hart, J.D., Sheather, S.J. (2010). Indirect cross-validation for density estimation. Journal of the American Statistical Association, 105(489), 415-423 .","Published":"2017-01-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"idar","Version":"1.0","Title":"Individual Diversity-Area Relationships","Description":"Computes and tests individual (species, phylogenetic and functional) diversity-area relationships, i.e., how species-, phylogenetic- and functional-diversity varies with spatial scale around the individuals of some species in a community. See applications of these methods in Wiegand et al. (2007) or Chacon-Labella et al. (2016) .","Published":"2017-05-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"idbg","Version":"1.0","Title":"R debugger","Description":"An interactive R debugger","Published":"2012-02-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"idbr","Version":"0.2","Title":"R Interface to the US Census Bureau International Data Base API","Description":"Use R to make requests to the US Census Bureau's International Data Base API.\n Results are returned as R data frames. 
For more information about the IDB API, visit\n .","Published":"2016-07-07","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"idem","Version":"2.0","Title":"Inference in Randomized Controlled Trials with Death and\nMissingness","Description":"In randomized studies involving severely ill patients, functional\n outcomes are often unobserved due to missed clinic visits, premature\n withdrawal or death. It is well known that if these unobserved functional\n outcomes are not handled properly, biased treatment comparisons can be\n produced. In this package, we implement a procedure for comparing treatments\n that is based on the composite endpoint of both the functional outcome and\n survival. The procedure was proposed in Wang et al. (2016) .\n It considers missing data imputation with a sensitivity\n analysis strategy to handle the unobserved functional outcomes not due to\n death.","Published":"2017-04-23","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"idendr0","Version":"1.5.3","Title":"Interactive Dendrograms","Description":"Interactive dendrogram that enables the user to select and\n color clusters, to zoom and pan the dendrogram, and to visualize\n the clustered data not only in a built-in heat map, but also in\n 'GGobi' interactive plots and user-supplied plots. \n This is a backport of Qt-based 'idendro' \n () to base R graphics and \n Tcl/Tk GUI.","Published":"2017-02-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"identity","Version":"0.2-1","Title":"Jacquard Condensed Coefficients of Identity","Description":"Calculate identity coefficients, based on Mark Abney's C\n code.","Published":"2010-04-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ider","Version":"0.1.0","Title":"Various Methods for Estimating Intrinsic Dimension","Description":"An implementation of various methods for estimating the intrinsic\n dimension of a vector-valued dataset or distance matrix. 
Most methods implemented\n are based on different notions of fractal dimension such as the capacity\n dimension, the box-counting dimension, and the information dimension.","Published":"2017-04-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"idm","Version":"1.8.1","Title":"Incremental Decomposition Methods","Description":"Incremental Multiple Correspondence Analysis and Principal\n Component Analysis.","Published":"2017-04-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"IDmining","Version":"1.0.1","Title":"Intrinsic Dimension for Data Mining","Description":"Contains techniques for mining large high-dimensional data sets \n by using the concept of Intrinsic Dimension (ID). Here the ID is \n not necessarily an integer; it is extended to fractal dimensions. \n The Morisita estimator is used for the ID estimation, but other \n tools are included as well.","Published":"2017-06-15","License":"CC BY-NC-SA 4.0","snapshot_date":"2017-06-23"} {"Package":"iDOS","Version":"1.0.0","Title":"Integrated Discovery of Oncogenic Signatures","Description":"Integrate molecular profiles to discover candidate oncogenic drivers.","Published":"2016-07-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"IDPmisc","Version":"1.1.17","Title":"Utilities of Institute of Data Analyses and Process Design\n(www.idp.zhaw.ch)","Description":"The IDPmisc package contains different high-level graphics\n functions for displaying large datasets, displaying circular\n data in a very flexible way, finding local maxima, brewing\n color ramps, drawing nice arrows, zooming 2D-plots, creating\n figures with differently colored margin and plot region. In\n addition, the package contains auxiliary functions for data\n manipulation like omitting observations with irregular values\n or selecting data by logical vectors, which include NAs. 
Other\n functions are especially useful in spectroscopy and analyses of\n environmental data: robust baseline fitting, finding peaks in\n spectra.","Published":"2012-11-02","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"IDPSurvival","Version":"1.2","Title":"Imprecise Dirichlet Process for Survival Analysis","Description":"Functions to perform robust\n\t\t nonparametric survival analysis with right censored \n\t\t data using a prior near-ignorant Dirichlet Process.\n\t\t Mangili, F., Benavoli, A., de Campos, C.P., Zaffalon, M. (2015) .","Published":"2017-02-26","License":"GPL (>= 3) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"idr","Version":"1.2","Title":"Irreproducible discovery rate","Description":"This is a package for estimating the copula mixture model and plotting correspondence curves in \"Measuring reproducibility of high-throughput experiments\" (2011), Annals of Applied Statistics, Vol. 5, No. 3, 1752-1779, by Li, Brown, Huang, and Bickel ","Published":"2014-09-04","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"ids","Version":"1.0.1","Title":"Generate Random Identifiers","Description":"Generate random or human readable and pronounceable identifiers.","Published":"2017-05-31","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"IDSpatialStats","Version":"0.2.2","Title":"Estimate Global Clustering in Infectious Disease","Description":"Implements various novel and standard\n clustering statistics and other analyses useful for understanding the\n spread of infectious disease.","Published":"2016-09-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"IDTurtle","Version":"1.2","Title":"Identify Turtles by their Plastral Biometries","Description":"It is a method to identify turtles individually using their plastral biometries. It is also useful to detect errors in the identification of turtles, as explained in Valdeon & Longares 2015. 
","Published":"2015-07-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"iDynoR","Version":"1.0","Title":"R Analysis package for iDynoMiCS Simulation Results","Description":"iDynoMiCS is a computer program, developed by an international team of researchers, whose purpose is to model and simulate microbial communities in an individual-based way. It is described in detail in the paper \"iDynoMiCS: next-generation individual-based modelling of biofilms\" by Lardon et al, published in Environmental Microbiology in 2011. The simulation produces results in XML file format, describing the state of each species in each timestep (agent_State), a summary of the species statistics for a timepoint (agent_Sum), the state of each solute grid in each timestep (env_State) and a summary of the solutes for a timestep (env_Sum). This R package provides a means of reading this XML data into R such that the simulation response can be statistically analysed. iDynoMiCS is available from the website iDynoMiCS.org, where a full tutorial on using both the simulation and this R package is provided.","Published":"2014-01-14","License":"CeCILL","snapshot_date":"2017-06-23"} {"Package":"ie2misc","Version":"0.8.5","Title":"Irucka Embry's Miscellaneous USGS Functions","Description":"A collection of Irucka Embry's miscellaneous USGS functions\n (processing .exp and .psf files, statistical error functions,\n \"+\" dyadic operator for use with NA, creating ADAPS and QW\n spreadsheet files, calculating saturated enthalpy). Irucka created these\n functions while a Cherokee Nation Technology Solutions (CNTS) United States\n Geological Survey (USGS) Contractor and/or USGS employee.","Published":"2016-08-09","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"iECAT","Version":"0.8","Title":"Integrating External Controls into Association Test","Description":"Functions for single-variant and region-based tests with external control samples. 
These methods use external study samples as control samples while adjusting for possible batch effects. ","Published":"2017-05-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ieeeround","Version":"0.2-0","Title":"Functions to set and get the IEEE rounding mode","Description":"A pair of functions for getting and setting the IEEE\n rounding mode for floating point computations.","Published":"2011-08-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"iemisc","Version":"0.9.6","Title":"Irucka Embry's Miscellaneous Functions","Description":"A collection of Irucka Embry's miscellaneous functions\n (Engineering Economics, Civil & Environmental/Water Resources Engineering,\n Geometry, Statistics, GNU Octave length functions, Trigonometric functions\n in degrees, etc.).","Published":"2016-10-17","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"iemiscdata","Version":"0.6.1","Title":"Irucka Embry's Miscellaneous Data Collection","Description":"Miscellaneous data sets [Engineering Economics, Environmental/\n Water Resources Engineering, US Presidential Elections].","Published":"2016-07-22","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"iemisctext","Version":"0.9.99","Title":"Irucka Embry's Miscellaneous Text Collection","Description":"The eclectic collection includes the following written pieces:\n \"The War Prayer\" by Mark Twain, \"War Is A Racket\" by Major General\n Smedley Butler, \"The Mask of Anarchy: Written on the Occasion of the\n Massacre at Manchester\" by Percy Bysshe Shelley, \"Connect the D.O.T.S.\" by\n Obiora Embry, \"Untitled: Climate Strange\" by Irucka Ajani Embry, and\n \"Untitled: Us versus Them or People Screwing over Other People (as we all\n live on one Earth and there is no \"us versus them\" in the actual Ultimate\n Reality)\" by Irucka Ajani Embry.","Published":"2016-09-22","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"ifa","Version":"7.0","Title":"Independent 
Factor Analysis","Description":"The package performs Independent Factor Analysis.","Published":"2012-01-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"iFad","Version":"3.0","Title":"An integrative factor analysis model for drug-pathway\nassociation inference","Description":"This package implements a Bayesian sparse factor model for the joint analysis of paired datasets, one is the gene expression dataset and the other is the drug sensitivity profiles, measured across the same panel of samples, e.g., cell lines. Prior knowledge about gene-pathway associations can be easily incorporated in the model to aid the inference of drug-pathway associations.","Published":"2014-03-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ifaTools","Version":"0.14","Title":"Toolkit for Item Factor Analysis with 'OpenMx'","Description":"Tools, tutorials, and demos of Item Factor Analysis using 'OpenMx'.","Published":"2017-04-17","License":"AGPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"ifctools","Version":"0.3.2","Title":"Italian Fiscal Code ('Codice Fiscale') Utilities","Description":"Provides utility functions to deal with Italian fiscal\n code ('codice fiscale').","Published":"2015-12-30","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"IFP","Version":"0.2.1","Title":"Identifying Functional Polymorphisms","Description":"A suite for identifying causal models using relative concordances and identifying causal polymorphisms in case-control genetic association data, especially with large controls re-sequenced data.","Published":"2016-02-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ifs","Version":"0.1.5","Title":"Iterated Function Systems","Description":"Iterated Function Systems Estimator.","Published":"2015-08-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ifultools","Version":"2.0-4","Title":"Insightful Research Tools","Description":"Insightful Research 
Tools.","Published":"2016-05-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ig.vancouver.2014.topcolour","Version":"0.1.2.0","Title":"Instagram 2014 Vancouver Top Colour Dataset","Description":"A dataset of the top colours of photos from Instagram \n taken in 2014 in the city of Vancouver, British Columbia, Canada.\n It consists of: top colour and counts data. This data was\n obtained using the Instagram API. Instagram is a web photo \n sharing service. It can be found at: .\n The Instagram API is documented at: . ","Published":"2015-05-02","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"iGasso","Version":"1.4","Title":"Statistical Tests and Utilities for Genetic Association","Description":"A collection of statistical tests for genetic association studies.","Published":"2016-06-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"IGM.MEA","Version":"0.3.5","Title":"IGM MEA Analysis","Description":"Software tools for the characterization of neuronal networks as recorded on multi-electrode arrays.","Published":"2017-03-05","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"IgorR","Version":"0.8.1","Title":"Read Binary Files Saved by 'Igor Pro' (Including 'Neuromatic'\nData)","Description":"Provides a function to read data from the 'Igor Pro' data analysis\n program by Wavemetrics. The data formats supported are 'Igor' packed \n experiment format (pxp) and 'Igor' binary wave (ibw). See: \n http://www.wavemetrics.com/ for details. Also includes functions to load \n special pxp files produced by the 'Igor Pro' 'Neuromatic' and 'Nclamp' \n packages for recording and analysing neuronal data. See \n http://www.neuromatic.thinkrandom.com/ for details.","Published":"2017-05-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"igraph","Version":"1.0.1","Title":"Network Analysis and Visualization","Description":"Routines for simple graphs and network analysis. 
It can\n handle large graphs very well and provides functions for generating random\n and regular graphs, graph visualization, centrality methods and much more.","Published":"2015-06-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"igraphdata","Version":"1.0.1","Title":"A Collection of Network Data Sets for the 'igraph' Package","Description":"A small collection of various network data sets,\n to use with the 'igraph' package: the Enron email network, various food webs,\n interactions in the immunoglobulin protein, the karate club network,\n Koenigsberg's bridges, visuotactile brain areas of the macaque monkey,\n UK faculty friendship network, domestic US flights network, etc.","Published":"2015-07-13","License":"CC BY-SA 4.0 + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"igraphinshiny","Version":"0.1","Title":"Use 'shiny' to Demo 'igraph'","Description":"Using 'shiny' to demo the 'igraph' package makes learning graph theory easy and fun.","Published":"2016-01-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"igraphtosonia","Version":"1.0","Title":"Convert iGraph graphs to SoNIA .son files","Description":"This program facilitates exporting igraph graphs to the\n SoNIA file format.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"iGSEA","Version":"1.2","Title":"Integrative Gene Set Enrichment Analysis Approaches","Description":"To integrate multiple GSEA studies, we propose a hybrid strategy,\n iGSEA-AT, for choosing random effects (RE) versus fixed effect (FE) models,\n with an attempt to achieve the potential maximum statistical efficiency as \n well as stability in performance in various practical situations. In addition\n to iGSEA-AT, this package also provides options to perform integrative GSEA\n with testing based on a FE model (iGSEA-FE) and testing based on a RE model\n (iGSEA-RE). The approaches account for different set sizes when testing a\n database of gene sets. 
The function is easy to use, and the three approaches\n can be applied to both binary and continuous phenotypes. ","Published":"2017-05-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ihs","Version":"1.0","Title":"Inverse Hyperbolic Sine Distribution","Description":"Density, distribution function, quantile function and random generation for the inverse hyperbolic sine distribution. This package also provides a function that can fit data to the inverse hyperbolic sine distribution using maximum likelihood estimation.","Published":"2015-02-25","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"IHSEP","Version":"0.1","Title":"Inhomogeneous Self-Exciting Process","Description":"Simulate an inhomogeneous self-exciting process (IHSEP), or Hawkes process, with a given (possibly time-varying) baseline intensity and an excitation function. Calculate the likelihood of an IHSEP with given baseline intensity and excitation functions for an (increasing) sequence of event times. Calculate the point process residuals (integral transforms of the original event times). Calculate the mean intensity process.","Published":"2016-12-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"iJRF","Version":"1.1-4","Title":"Integrative Joint Random Forest","Description":"Integrative framework for the simultaneous estimation of interactions from different classes of data.","Published":"2017-01-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"iki.dataclim","Version":"1.0","Title":"Consistency, Homogeneity and Summary Statistics of\nClimatological Data","Description":"The package offers an S4 infrastructure to store climatological\n station data of various temporal aggregation scales. In-built quality\n control and homogeneity tests follow the methodology from the European\n Climate Assessment & Dataset project. 
Wrappers for climate indices\n defined by the Expert Team on Climate Change Detection and Indices\n (ETCCDI), a quick summary of important climate statistics and climate\n diagram plots provide a fast overview of climatological\n characteristics of the station. ","Published":"2014-09-19","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"iLaplace","Version":"1.1.0","Title":"Improved Laplace Approximation for Integrals of Unimodal\nFunctions","Description":"Improved Laplace approximation for integrals of unimodal functions.\n The method requires user-supplied R functions for: the integrand, its gradient\n and its Hessian matrix. The computations are run in parallel.","Published":"2016-08-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ilc","Version":"1.0","Title":"Lee-Carter Mortality Models using Iterative Fitting Algorithms","Description":"Fitting a class of Lee-Carter mortality models using iterative fitting algorithms. ","Published":"2014-11-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ILS","Version":"0.1.0","Title":"Interlaboratory Study","Description":"It performs interlaboratory studies (ILS) to detect those laboratories that provide inconsistent results when compared to others.\n It permits working simultaneously with various testing materials, from standard univariate and functional data analysis (FDA) perspectives.\n The univariate approach based on ASTM E691-08 consists of estimating the Mandel's h and k statistics to identify those laboratories\n that provide significantly different results, testing also the presence of outliers by the Cochran and Grubbs tests. Analysis of variance (ANOVA) \n techniques are provided (F and Tukey tests) to test differences in means corresponding to different laboratories per each material.\n Taking into account the functional nature of data retrieved in analytical chemistry, applied physics and engineering (spectra, thermograms, etc.),\n the ILS package provides an FDA 
approach for finding the distribution of the Mandel's k and h statistics by smoothed bootstrap resampling.","Published":"2016-05-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"IM","Version":"1.0","Title":"Orthogonal Moment Analysis","Description":"Compute moments of images and perform reconstruction from\n moments.","Published":"2013-03-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"imageData","Version":"0.1-26","Title":"Aids in Processing and Plotting Data from a Lemna-Tec\nScananalyzer","Description":"Extracts traits from imaging data produced using a Lemna-Tec Scananalyzer \n (see for more \n information). Growth rates between successive imagings are obtained and those for \n a nominated set of intervals can also be calculated. Profile or longitudinal plots \n of the traits and growth rates can be produced. These allow one to check for \n anomalous data and to explore growth patterns in the data.","Published":"2016-10-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"imager","Version":"0.40.2","Title":"Image Processing Library Based on 'CImg'","Description":"Fast image processing for images in up to 4 dimensions (two spatial\n dimensions, one time/depth dimension, one colour dimension). Provides most\n traditional image processing tools (filtering, morphology, transformations,\n etc.) as well as various functions for easily analysing image data using R. The\n package wraps CImg, a simple, modern C++ library for image\n processing.","Published":"2017-04-24","License":"LGPL","snapshot_date":"2017-06-23"} {"Package":"imaginator","Version":"0.1.1","Title":"Simulate General Insurance Policies and Losses","Description":"Simulate general insurance policies, losses and loss emergence. The package contemplates \n deterministic and stochastic policy retention and growth scenarios. Retention and growth rates are percentages relative\n to the expiring portfolio. Claims are simulated for each policy. 
This is accomplished either by assuming a frequency\n distribution per development lag or by generating random wait times until claim emergence and settlement. Loss simulation \n uses standard loss distributions for claim amounts.","Published":"2017-06-05","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"imagine","Version":"1.2.1","Title":"Imaging Engine, Tools for Application of Image Filters to Data\nMatrices","Description":"Provides fast application of image filters to data matrices,\n using R and C++ algorithms.","Published":"2017-05-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ImaginR","Version":"0.1.7","Title":"Delimit and Characterize Color Phenotype of the Pearl Oyster","Description":"The pearl oyster, Pinctada margaritifera (Linnaeus, 1758), represents the second economic resource of French Polynesia. It is one of the only bivalves expressing a large varied range of inner shell color, & by correlation, of pearl color. This phenotypic variability is partly under genetic control, but also under environmental influence. With ImaginR, it's now possible to delimit the color phenotype of the pearl oyster's inner shell and to characterize their color variations (by the HSV color code system) with pictures.","Published":"2017-05-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"IMak","Version":"1.1.2","Title":"Item Maker","Description":"This is an Automatic Item Generator for Psychological Testing. 
It is recommended for research purposes only.","Published":"2016-05-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"Imap","Version":"1.32","Title":"Interactive Mapping","Description":"Zoom in and out of maps or any supplied lines or points,\n with control for color, poly fill, and aspect.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"iMediate","Version":"0.3","Title":"Methods for Mediation Analysis","Description":"Implements likelihood based methods for mediation analysis. ","Published":"2017-06-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"iMessager","Version":"1.0","Title":"Send 'iMessages' from R","Description":"Send 'iMessages' from R running 'macOS' 10.8.x or later. You must have 'Messages.app' configured for the user account you are running R on.","Published":"2017-03-09","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"IMFData","Version":"0.2.0","Title":"R Interface for International Monetary Fund (IMF) Data API","Description":"Search, extract and formulate IMF's datasets.","Published":"2016-10-29","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"imfr","Version":"0.1.4","Title":"Download Data from the International Monetary Fund's Data API","Description":"Explore and download data from the International Monetary Fund's\n data API .","Published":"2017-03-21","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"imguR","Version":"1.0.3","Title":"An Imgur.com API Client Package","Description":"A complete API client for the image hosting service Imgur.com, including an imgur graphics device, enabling the easy upload and sharing of plots.","Published":"2016-03-29","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"IMIFA","Version":"1.3.0","Title":"Fitting, Diagnostics, and Plotting Functions for Infinite\nMixtures of Infinite Factor Analysers and Related Models","Description":"Provides flexible Bayesian estimation of 
Infinite Mixtures of Infinite Factor Analysers and related models, for nonparametrically clustering high-dimensional data, introduced by Murphy et al. (2017) . The IMIFA model conducts Bayesian nonparametric model-based clustering with factor analytic covariance structures without recourse to model selection criteria to choose the number of clusters or cluster-specific latent factors, mostly via efficient Gibbs updates. Model-specific diagnostic tools are also provided, as well as many options for plotting results and conducting posterior inference on parameters of interest.","Published":"2017-06-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"IMIS","Version":"0.1","Title":"Incremental Mixture Importance Sampling","Description":"The IMIS algorithm draws samples from the posterior\n distribution. The user has to define the following R functions\n in advance: prior(x) calculates the prior density of x,\n likelihood(x) calculates the likelihood of x, and\n sample.prior(n) draws n samples from the prior distribution.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"immer","Version":"0.8-5","Title":"Item Response Models for Multiple Ratings","Description":"\n Implements some item response models for multiple\n ratings, including the hierarchical rater model, \n conditional maximum likelihood estimation of the linear \n logistic partial credit model and a wrapper function\n to the commercial FACETS program.","Published":"2017-04-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"IMP","Version":"1.1","Title":"Interactive Model Performance Evaluation","Description":"Contains functions for evaluating & comparing the performance of binary classification models. 
Functions can be called either statically or interactively (as Shiny Apps).","Published":"2016-01-29","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"imp4p","Version":"0.4","Title":"Imputation for Proteomics","Description":"Functions to analyse missing value mechanisms and to impute data sets in the context of bottom-up MS-based proteomics.","Published":"2017-06-01","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"IMPACT","Version":"0.1.0","Title":"The Impact of Items","Description":"Implements a multivariate analysis of the impact of items to identify a bias in the questionnaire validation of Likert-type scale variables. The items require considering a null value (a category with no tendency). Offers the frequency, importance and impact of the items.","Published":"2016-06-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ImpactIV","Version":"1.0","Title":"Identifying Causal Effect for Multi-Component Intervention Using\nInstrumental Variable Method","Description":"In this package, you can find two functions proposed in\n Ding, Geng and Zhou (2011) to estimate direct and indirect\n causal effects with randomization and multiple-component\n intervention using instrumental variable method.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"implyr","Version":"0.2.0","Title":"R Interface for Apache Impala","Description":"'SQL' back-end to 'dplyr' for Apache Impala (incubating), the \n massively parallel processing query engine for Apache 'Hadoop'. Impala \n enables low-latency 'SQL' queries on data stored in the 'Hadoop' \n Distributed File System '(HDFS)', Apache 'HBase', Apache 'Kudu', and \n Amazon Simple Storage Service '(S3)'. 
See \n for more information about Impala.","Published":"2017-06-21","License":"Apache License 2.0 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"imPois","Version":"0.1.4","Title":"Imprecise Inference for Poisson Sampling Models","Description":"Tools performing an imprecise inference for estimating the parameter of a Poisson sampling model. It extends the original work done in the PhD thesis of Lee (2014). The theory of imprecise probabilities, introduced by Peter Walley in 1991, forms the basis of this inferential framework. ","Published":"2017-03-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"import","Version":"1.1.0","Title":"An Import Mechanism for R","Description":"This is an alternative mechanism for importing\n objects from packages. The syntax allows for importing multiple objects\n from a package with a single command in an expressive way. The import\n package bridges some of the gap between using library (or require) and\n direct (single-object) imports. Furthermore, the imported objects are not\n placed in the current environment. It is also possible to import\n objects from stand-alone .R files. For more information, refer to\n the package vignette.","Published":"2015-06-22","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ImportExport","Version":"1.1","Title":"Import and Export Data","Description":"Import and export data from the most common statistical formats by using \n\t R functions that guarantee the least loss of the data information, giving special\n\t attention to the date variables and the labelled ones.","Published":"2015-09-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"imprProbEst","Version":"1.0.1","Title":"Minimum distance estimation in an imprecise probability model","Description":"A minimum distance estimator is calculated for an\n imprecise probability model. 
The imprecise probability model\n consists of upper coherent previsions whose credal sets are\n given by a finite number of constraints on the expectations.\n The parameter set is finite. The estimator chooses the\n parameter such that the empirical measure lies nearest to the\n corresponding credal set with respect to the total variation\n norm.","Published":"2010-05-07","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"imputeLCMD","Version":"2.0","Title":"A collection of methods for left-censored missing data\nimputation","Description":"The package contains a collection of functions for left-censored missing data imputation. Left-censoring is a special case of the missing not at random (MNAR) mechanism that generates non-responses in proteomics experiments. The package also contains functions to artificially generate peptide/protein expression data (log-transformed) as random draws from a multivariate Gaussian distribution as well as a function to generate missing data (both randomly and non-randomly). For comparison reasons, the package also contains several wrapper functions for the imputation of non-responses that are missing at random. 
New functionality has been added: a hybrid method that allows the imputation of missing values in a more complex scenario where the missing data are both MAR and MNAR.","Published":"2015-01-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"imputeMDR","Version":"1.1.2","Title":"The Multifactor Dimensionality Reduction (MDR) Analysis for\nIncomplete Data","Description":"This package provides various approaches to handling\n missing values for the MDR analysis to identify gene-gene\n interactions using biallelic marker data in genetic association\n studies.","Published":"2012-12-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"imputeMissings","Version":"0.0.3","Title":"Impute Missing Values in a Predictive Context","Description":"Compute missing values on a training data set and impute them on a new data set. Current available options are median/mode and random forest.","Published":"2016-09-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"imputeMulti","Version":"0.6.4","Title":"Imputation Methods for Multivariate Multinomial Data","Description":"Implements imputation methods using EM and Data Augmentation for\n multinomial data following the work of Schafer 1997 .","Published":"2017-02-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"imputePSF","Version":"0.1.0","Title":"Impute Missing Data in Time Series Data with PSF Based Method","Description":"Imputes the missing values in time series data with a PSF algorithm based approach.\n The details about the PSF algorithm are available at:\n .","Published":"2016-05-02","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"imputeR","Version":"2.0","Title":"A General Imputation Framework in R","Description":"General imputation framework based on\n variable selection methods including regularisation methods,\n tree-based models and dimension reduction methods.","Published":"2017-03-09","License":"GPL-3","snapshot_date":"2017-06-23"} 
{"Package":"ImputeRobust","Version":"1.1-2","Title":"Robust Multiple Imputation with Generalized Additive Models for\nLocation Scale and Shape","Description":"Provides new imputation methods for the 'mice' package based on generalized additive models for location, scale, and shape (GAMLSS) as described in de Jong, van Buuren and Spiess .","Published":"2017-04-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"imputeTestbench","Version":"3.0.1","Title":"Test Bench for the Comparison of Imputation Methods","Description":"Provides a test bench for the comparison of missing data imputation \n methods in uni-variate time series. Imputation methods are compared using \n different error metrics. Proposed imputation methods and alternative error \n metrics can be used.","Published":"2017-06-23","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"imputeTS","Version":"2.5","Title":"Time Series Missing Value Imputation","Description":"Imputation (replacement) of missing values \n in univariate time series. \n Offers several imputation functions\n and missing data plots. 
\n Available imputation algorithms include: \n 'Mean', 'LOCF', 'Interpolation', \n 'Moving Average', 'Seasonal Decomposition', \n 'Kalman Smoothing on Structural Time Series models',\n 'Kalman Smoothing on ARIMA models'.","Published":"2017-06-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"imputeYn","Version":"1.3","Title":"Imputing the Last Largest Censored Observation(s) Under Weighted\nLeast Squares","Description":"The method brings less bias and more efficient estimates for AFT models.","Published":"2015-10-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"IMTest","Version":"1.0.0","Title":"Information Matrix Test for Generalized Partial Credit Models","Description":"Implementation of the information matrix test for generalized partial credit models.","Published":"2017-04-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"in2extRemes","Version":"1.0-3","Title":"Into the extRemes Package","Description":"A Graphical User Interface (GUI) to some of the functions in the package extRemes (version >= 2.0) is included.","Published":"2016-10-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"inarmix","Version":"0.4","Title":"Mixture models for longitudinal count data","Description":"Fits mixture models for longitudinal data. 
Appropriate when the data are counts and when the correlation structure is assumed to be AR(1).","Published":"2014-03-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"inbreedR","Version":"0.3.2","Title":"Analysing Inbreeding Based on Genetic Markers","Description":"A framework for analysing inbreeding and heterozygosity-fitness\n correlations (HFCs) based on microsatellite and SNP markers.","Published":"2016-09-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"inca","Version":"0.0.2","Title":"Integer Calibration","Description":"Specific functions are provided for rounding real weights to\n integers and performing an integer programming algorithm for calibration\n problems. They are useful for census-weights adjustments, or for performing\n linear regression with integer parameters.","Published":"2016-11-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"incadata","Version":"0.5.3","Title":"Recognize and Handle Data in Formats Used by Swedish Cancer\nCenters","Description":"\n Handle data in formats used by cancer centers in Sweden, both from INCA \n (the current register platform; see for more information) and \n from the older register platform Rockan (used in the Western and Northern part \n of the country). \n All variables are coerced to suitable classes based on their \n format. \n Dates (from various formats such as with missing month or day, with or \n without century prefix or with just a week number) are all recognised as\n dates and coerced to the ISO 8601 standard (Y-m-d).\n Boolean variables (internally stored either as 0/1 or \"True\"/\"False\"/blanks \n when exported) are coerced to logical. 
\n Variable names ending in '_Beskrivning' and '_Varde' will be character, \n and 'PERSNR' will be coerced (if possible) to a valid personal identification \n number 'pin' (by the 'sweidnumbr' package).\n The package also allows the user to interactively choose whether a variable should \n be coerced into a potential format even though not all of its values might \n conform to the recognised pattern.\n It also contains a caching mechanism to temporarily store data sets \n with their newly decided formats in order to not rerun the identification \n process each time. ","Published":"2017-02-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"incidence","Version":"1.2.0","Title":"Compute, Handle, Plot and Model Incidence of Dated Events","Description":"Provides functions and classes to compute, handle and visualise incidence from dated events for a defined time interval. Dates can be provided in various standard formats. The class 'incidence' is used to store computed incidence and can be easily manipulated, subsetted, and plotted. In addition, log-linear models can be fitted to 'incidence' objects using 'fit'. This package is part of the RECON () toolkit for outbreak analysis.","Published":"2017-05-15","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"inctools","Version":"1.0.10","Title":"Incidence Estimation Tools","Description":"Tools for estimating incidence from biomarker data in cross-\n sectional surveys, and for calibrating tests for recent infection. \n Implements and extends the method of Kassanjee et al. 
(2012)\n .","Published":"2017-04-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"IncucyteDRC","Version":"0.5.4","Title":"Dose Response Curves from Incucyte Proliferation Assays","Description":"Package to import data generated by the Incucyte Zoom from Essen Biosciences and use this to fit dose response curves using the drc package.","Published":"2016-04-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"indelmiss","Version":"1.0.7","Title":"Insertion Deletion Analysis While Accounting for Possible\nMissing Data","Description":"Genome-wide gene insertion and deletion rates can be modelled in a maximum \n likelihood framework with the additional flexibility of modelling potential missing \n data using the models included within. These models simultaneously estimate insertion \n and deletion (indel) rates of gene families and proportions of \"missing\" data for \n (multiple) taxa of interest. The likelihood framework is utilized for parameter \n estimation. A phylogenetic tree of the taxa and gene presence/absence patterns \n (with data ordered by the tips of the tree) are required. For more details, see \n Utkarsh J. Dang, Alison M. Devault, Tatum D. Mortimer, Caitlin S. Pepperell, \n Hendrik N. Poinar, G. Brian Golding (2016). Gene insertion deletion analysis \n while accounting for possible missing data. Genetics (accepted).","Published":"2016-08-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"IndependenceTests","Version":"0.2","Title":"Nonparametric tests of independence between random vectors","Description":"Functions for testing mutual independence between many\n numerical random vectors or serial independence of a\n multivariate stationary sequence. 
The proposed test works when\n some or all of the marginal distributions are singular with\n respect to Lebesgue measure.","Published":"2012-12-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"IndianTaxCalc","Version":"1.0.2","Title":"Indian Income Tax Calculator","Description":"Calculate Indian Income Tax liability for financial years of Individual resident aged below 60 years, Senior Citizen, Super Senior Citizen, Firm, Local Authority, Any Non Resident Individual / Hindu Undivided Family / Association of Persons / Body of Individuals / Artificial Judicial Person, Co-operative Society.","Published":"2017-05-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"indicspecies","Version":"1.7.6","Title":"Relationship Between Species and Groups of Sites","Description":"Functions to assess the strength and statistical significance of the relationship between species occurrence/abundance and groups of sites. Also includes functions to measure species niche breadth using resource categories.","Published":"2016-08-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"IndTestPP","Version":"1.0","Title":"Tests of Independence Between Point Processes in Time","Description":"Several parametric and non-parametric tests and measures to check independence between two or more (homogeneous or nonhomogeneous) point processes in time are provided. 
Tools for simulating point processes in one dimension with different types of dependence are also implemented.","Published":"2016-08-01","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"inegiR","Version":"1.2.0","Title":"Integrate INEGI’s (Mexican Stats Office) API with R","Description":"Provides functions to download and parse information from INEGI\n (Official Mexican statistics agency).","Published":"2016-02-19","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"ineq","Version":"0.2-13","Title":"Measuring Inequality, Concentration, and Poverty","Description":"Inequality, concentration, and poverty measures. Lorenz curves (empirical and theoretical).","Published":"2014-07-21","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"iNEXT","Version":"2.0.12","Title":"Interpolation and Extrapolation for Species Diversity","Description":"Provides simple functions to compute and plot two\n types (sample-size- and coverage-based) rarefaction and extrapolation of species\n diversity (Hill numbers) for individual-based (abundance) data or sampling-unit-\n based (incidence) data.","Published":"2016-11-12","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"iNextPD","Version":"0.3.2","Title":"Interpolation and Extrapolation for Phylogenetic Diversity","Description":"Interpolation and extrapolation for phylogenetic diversity.","Published":"2017-03-11","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"InfDim","Version":"1.0","Title":"Infinite-dimensional model (IDM) to analyse phenotypic variation\nin growth trajectories","Description":"This package contains functions to perform calculations of\n the infinite-dimensional model (IDM) and to produce 95%\n confidence intervals around the model elements through\n bootstrapping.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"inference","Version":"0.1.0","Title":"Functions to extract inferential values of a fitted model 
object","Description":"Collection of functions to extract inferential values\n (point estimates, confidence intervals, p-values, etc) of a\n fitted model object into a matrix-like object that can be used\n for table/report generation; transform point estimates via the\n delta method.","Published":"2010-10-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"InferenceSMR","Version":"1.0","Title":"Inference about the standardized mortality ratio when evaluating\nthe effect of a screening program on survival","Description":"The InferenceSMR package provides functions to make\n inference about the standardized mortality ratio (SMR) when\n evaluating the effect of a screening program. The package is\n based on methods described in Sasieni (2003) and Talbot et al.\n (2011).","Published":"2013-05-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"inferference","Version":"0.4.62","Title":"Methods for Causal Inference with Interference","Description":"Provides methods for estimating causal effects in the presence of interference. Currently it implements the IPW estimators proposed by E.J. Tchetgen Tchetgen and T.J. Vanderweele in \"On causal inference in the presence of interference\" (Statistical Methods in Medical Research, 21(1) 55-75).","Published":"2015-06-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"inferr","Version":"0.1.1","Title":"Inferential Statistics","Description":"Select set of parametric and non-parametric statistical tests. 'inferr' builds upon the solid set of\n statistical tests provided in 'stats' package by including additional data types as inputs, expanding and\n restructuring the test results. 
The tests included are t tests, variance tests, proportion tests, chi square tests, Levene's test, McNemar Test, Cochran's Q test and Runs test.","Published":"2017-05-02","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"InfiniumPurify","Version":"1.3.1","Title":"Estimate and Account for Tumor Purity in Cancer Methylation Data\nAnalysis","Description":"The proportion of cancer cells in a solid tumor sample, known as the tumor purity, has an adverse impact on a variety of data analyses if not properly accounted for. We develop 'InfiniumPurify', which is a comprehensive R package for estimating and accounting for tumor purity based on DNA methylation Infinium 450k array data. 'InfiniumPurify' provides functionalities for tumor purity estimation. In addition, it can perform differential methylation detection and tumor sample clustering with the consideration of tumor purities. ","Published":"2017-01-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"inflection","Version":"1.3","Title":"Finds the Inflection Point of a Curve","Description":"Implementation of methods Extremum Surface Estimator (ESE) and \n Extremum Distance Estimator (EDE) to identify the inflection point of a curve.","Published":"2017-05-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"influence.ME","Version":"0.9-9","Title":"Tools for Detecting Influential Data in Mixed Effects Models","Description":"Provides a collection of tools for\n detecting influential cases in generalized mixed effects\n models. It analyses models that were estimated using 'lme4'. The\n basic rationale behind identifying influential data is that\n when single units are omitted from the data, models\n based on these data should not produce substantially different\n estimates. To standardize the assessment of how influential a\n (single group of) observation(s) is, several measures of\n influence are common practice, such as Cook's Distance. 
\n In addition, we provide a measure of percentage change of the fixed point \n estimates and a simple procedure to detect changing levels of significance.","Published":"2017-06-07","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"influence.SEM","Version":"2.1","Title":"Case Influence in Structural Equation Models","Description":"A set of tools for evaluating several measures of case influence for structural equation models. ","Published":"2017-04-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"influenceR","Version":"0.1.0","Title":"Software Tools to Quantify Structural Importance of Nodes in a\nNetwork","Description":"Provides functionality to compute various node centrality measures on networks.\n Included are functions to compute betweenness centrality (by utilizing Madduri and Bader's\n SNAP library), implementations of Burt's constraint and effective\n network size (ENS) metrics, Borgatti's algorithm to identify key players, and Valente's\n bridging metric. On Unix systems, the betweenness, Key Players, and\n bridging implementations are parallelized with OpenMP, which may run\n faster on systems which have OpenMP configured.","Published":"2015-09-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"infoDecompuTE","Version":"0.6.0","Title":"Information Decomposition of Two-Phase Experiments","Description":"The main purpose of this package is to generate the structure of the analysis of variance \n (ANOVA) table of the two-phase experiments. 
The user only needs to input the design and the \n relationships of the random and fixed factors using the Wilkinson-Rogers syntax; \n this package can then quickly generate the structure of the ANOVA table with the \n coefficients of the variance components for the expected mean squares.\n Thus, designs such as the balanced incomplete block design can also be studied, and the efficiency\n factors of the fixed effects can be compared much more easily.","Published":"2017-03-06","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"Information","Version":"0.0.9","Title":"Data Exploration with Information Theory (Weight-of-Evidence and\nInformation Value)","Description":"Performs exploratory data analysis and variable screening for\n binary classification models using weight-of-evidence (WOE) and information\n value (IV). In order to make the package as efficient as possible, aggregations\n are done in data.table and creation of WOE vectors can be distributed across\n multiple cores. The package also supports exploration for uplift models (NWOE\n and NIV).","Published":"2016-04-09","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"InformationValue","Version":"1.2.3","Title":"Performance Analysis and Companion Functions for Binary\nClassification Models","Description":"Provides companion functions for analysing the performance of\n classification models. Also provides functions to optimise the probability cut-off\n score based on user-specified objectives, plot the 'ROC' curve in 'ggplot2', and\n calculate 'AUROC', 'IV', 'WOE', 'KS Statistic', etc., to aid accuracy improvement\n in binary classification models.","Published":"2016-10-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"InformativeCensoring","Version":"0.3.4","Title":"Multiple Imputation for Informative Censoring","Description":"Multiple Imputation for Informative Censoring.\n This package implements two methods: Gamma Imputation\n from Jackson et al. 
(2014) and Risk Score Imputation\n from Hsu et al. (2009).","Published":"2016-08-11","License":"GPL (>= 2) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"informR","Version":"1.0-5","Title":"Sequence Statistics for Relational Event Models","Description":"Aids in creating sequence statistics for Butts's 'relevent' software.","Published":"2015-03-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"infotheo","Version":"1.2.0","Title":"Information-Theoretic Measures","Description":"This package implements various measures of information theory based on several entropy estimators.","Published":"2014-07-26","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"InfoTrad","Version":"1.1","Title":"Calculates the Probability of Informed Trading (PIN)","Description":"Estimates the probability of informed trading (PIN) initially introduced by Easley et al. (1996). The package's contribution is that it uses the likelihood factorizations of Easley et al. (2010) (EHO factorization) and Lin and Ke (2011) (LK factorization). Moreover, the package uses different estimation algorithms: specifically, the grid-search algorithm proposed by Yan and Zhang (2012) and the hierarchical agglomerative clustering approach proposed by Gan et al. (2015) and later extended by Ersan and Alici (2016).","Published":"2017-02-11","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"infra","Version":"0.1.2","Title":"An Infrastructure Proxy Function","Description":"Takes a data frame containing latitude and longitude coordinates and downloads images from map servers to determine their file size as a proxy of infrastructure.","Published":"2015-01-31","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"infraFDTD.assist","Version":"0.5","Title":"IO Help for infraFDTD Model","Description":"Facilitates the generation of input files for infraFDTD and processes snapshot output. 
infraFDTD is a finite-difference model written by Keehoon Kim for simulating infrasound that considers topography and a 1-D atmosphere (see Kim et al., 2015).","Published":"2016-11-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"infuser","Version":"0.2.6","Title":"A Very Basic Templating Engine","Description":"Replace parameters in strings and/or text files with specified\n values.","Published":"2017-03-14","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"Infusion","Version":"1.1.0","Title":"Inference Using Simulation","Description":"Implements functions for simulation-based inference. In particular, implements functions to perform likelihood inference from data summaries whose distributions are simulated. ","Published":"2017-03-24","License":"CeCILL-2","snapshot_date":"2017-06-23"} {"Package":"infutil","Version":"1.0","Title":"Information Utility","Description":"Calculation of information utility (i.e., Lindley\n information) quantities for item response models.","Published":"2013-04-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ini","Version":"0.2","Title":"Read and Write '.ini' Files","Description":"Parse simple '.ini' configuration files into a structured list. Users\n can manipulate this resulting list with lapply() functions. This same\n structured list can be used to write back to file after modifications.","Published":"2016-04-01","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"injectoR","Version":"0.2.4","Title":"R Dependency Injection","Description":"R dependency injection framework. Dependency injection allows\n a program design to follow the dependency inversion principle. The user\n delegates to external code (the injector) the responsibility of providing its\n dependencies. 
This separates the responsibilities of use and construction.","Published":"2015-11-30","License":"LGPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"INLABMA","Version":"0.1-8","Title":"Bayesian Model Averaging with INLA","Description":"Fit Spatial Econometrics models using Bayesian model averaging \n on models fitted with INLA. The INLA package can be obtained from \n . ","Published":"2017-02-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"inline","Version":"0.3.14","Title":"Functions to Inline C, C++, Fortran Function Calls from R","Description":"Functionality to dynamically define R functions and S4 methods\n with inlined C, C++ or Fortran code supporting .C and .Call calling conventions.","Published":"2015-04-13","License":"LGPL","snapshot_date":"2017-06-23"} {"Package":"inlinedocs","Version":"2013.9.3","Title":"Convert inline comments to documentation","Description":"Generates Rd files from R source code with comments.\n The main features of the default syntax are that\n (1) docs are defined in comments near the relevant code,\n (2) function argument names are not repeated in comments, and\n (3) examples are defined in R code, not comments.\n It is also easy to define a new syntax.","Published":"2013-09-03","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"inlmisc","Version":"0.2.6","Title":"Miscellaneous Functions for the USGS INL Project Office","Description":"A collection of functions for creating high-level graphics,\n performing raster-based analysis, processing MODFLOW-based models, and\n overlaying multi-polygon objects. 
Used to support packages and scripts written\n by researchers at the United States Geological Survey (USGS)\n Idaho National Laboratory (INL) Project Office.","Published":"2017-04-01","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"iNOTE","Version":"1.0","Title":"Integrative Network Omnibus Total Effect Test","Description":"Integrated joint analysis of multiple platform genomic data across biological gene sets or pathways using powerful variance-component based testing procedures.","Published":"2017-06-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"inpdfr","Version":"0.1.5","Title":"Analyse Text Documents Using Ecological Tools","Description":"A set of functions and a graphical user interface\n\tto analyse and compare texts, using classical text mining\n\tfunctions, as well as those from theoretical ecology.","Published":"2017-02-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"InPosition","Version":"0.12.7","Title":"Inference Tests for ExPosition","Description":"Non-parametric resampling-based inference tests for ExPosition.","Published":"2013-12-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"insideRODE","Version":"2.0","Title":"insideRODE includes built-in functions with the deSolve solver and\nC/FORTRAN interfaces to nlme, together with compiled code","Description":"The insideRODE package includes built-in functions from\n deSolve, compiled functions from the compiler package, and C/FORTRAN code\n interfaces to nlme. It includes nlmLSODA, nlmODE,\n nlmVODE, nlmLSODE for general purposes; cfLSODA, cfLSODE, cfODE,\n cfVODE call C/FORTRAN compiled DLL functions. Version 2.0 adds a\n sink() function to the examples, which helps to directly combine\n C/FORTRAN source code in R files. Finally, with the new compiler\n package, we generated compiled functions: nlmODEcp, nlmVODEcp,\n nlmLSODEcp, nlmLSODAcp and cpODE, cpLSODA, cpLSODE, cpVODE. 
They\n will help to increase speed.","Published":"2012-10-29","License":"LGPL (> 2.0)","snapshot_date":"2017-06-23"} {"Package":"InSilicoVA","Version":"1.1.4","Title":"Probabilistic Verbal Autopsy Coding with 'InSilicoVA' Algorithm","Description":"Computes individual causes of death and population cause-specific mortality fractions using the 'InSilicoVA' algorithm from McCormick et al (2016) . It uses data derived from verbal autopsy (VA) interviews, in a format similar to the input of the widely used 'InterVA4' method. This package provides general model fitting and customization for 'InSilicoVA' algorithm and basic graphical visualization of the output.","Published":"2017-01-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"insol","Version":"1.1.1","Title":"Solar Radiation","Description":"Functions to compute insolation on complex terrain.","Published":"2014-03-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"InspectChangepoint","Version":"1.0.1","Title":"High-Dimensional Changepoint Estimation via Sparse Projection","Description":"Provides a data-driven projection-based method for estimating changepoints in high-dimensional time series. Multiple changepoints are estimated using a (wild) binary segmentation scheme.","Published":"2016-07-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"inspectr","Version":"1.0.0","Title":"Perform Basic Checks of Dataframes","Description":"Check one column or multiple columns of a dataframe\n using the preset basic checks or your own functions. 
Enables\n checks without knowledge of lapply() or sapply().","Published":"2017-01-30","License":"CC BY-SA 4.0","snapshot_date":"2017-06-23"} {"Package":"INSPIRE","Version":"1.5","Title":"Inferring Shared Modules from Multiple Gene Expression Datasets\nwith Partially Overlapping Gene Sets","Description":"A method to infer modules of co-expressed genes and the\n dependencies among the modules from multiple expression datasets that may\n contain different sets of genes. Please refer to: Extracting a low-dimensional\n description of multiple gene expression datasets reveals a potential driver for\n tumor-associated stroma in ovarian cancer, Safiye Celik, Benjamin A. Logsdon,\n Stephanie Battle, Charles W. Drescher, Mara Rendi, R. David Hawkins and Su-In\n Lee (2016).","Published":"2016-12-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"install.load","Version":"1.2.1","Title":"Check, Install and Load CRAN & USGS GRAN Packages","Description":"The function `install_load` checks the local R library(ies) to see\n if the required package(s) is/are installed or not. If the package(s)\n is/are not installed, then the package(s) will be installed along with\n the required dependency(ies). This function pulls source or\n binary packages from the RStudio-sponsored CRAN mirror and/or\n the USGS GRAN Repository. Lastly, the chosen package(s)\n is/are loaded. The function `load_package` simply loads the provided\n packages.","Published":"2016-07-12","License":"CC BY-SA 4.0","snapshot_date":"2017-06-23"} {"Package":"installr","Version":"0.19.0","Title":"Using R to Install Stuff (Such As: R, 'Rtools', 'RStudio',\n'Git', and More!)","Description":"R is great for installing software. Through the 'installr'\n package you can automate the updating of R (on Windows, using updateR())\n and install new software. 
Software installation is initiated through a\n GUI (just run installr()), or through functions such as: install.Rtools(),\n install.pandoc(), install.git(), and many more. The updateR() command\n performs the following: finding the latest R version, downloading it,\n running the installer, deleting the installation file, and copying and updating\n old packages to the new R installation.","Published":"2017-04-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"instaR","Version":"0.2.4","Title":"Access to Instagram API via R","Description":"Provides an interface to the Instagram API, which allows R users to download public pictures filtered by\n hashtag, popularity, user or location, and to access public users' profile data.","Published":"2016-08-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"insuranceData","Version":"1.0","Title":"A Collection of Insurance Datasets Useful in Risk Classification\nin Non-life Insurance","Description":"Insurance datasets, which are often used in claims severity and claims frequency modelling. It helps in testing new regression models for those problems, such as GLM, GLMM, HGLM, non-linear mixed models, etc. Most of the data sets are applied in the project \"Mixed models in ratemaking\" supported by grant NN 111461540 from Polish National Science Center. 
","Published":"2014-09-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"intamap","Version":"1.4-1","Title":"Procedures for Automated Interpolation","Description":"Provides classes and methods for automated\n spatial interpolation.","Published":"2016-09-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"intamapInteractive","Version":"1.1-10","Title":"Procedures for Automated Interpolation: Methods Only to be Used\nInteractively, Not Included in the intamap Package","Description":"Provides additional functionality for spatial interpolation in the intamap package.","Published":"2013-08-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"IntClust","Version":"0.0.2","Title":"Integrated Data Analysis via Clustering","Description":"The IntClust package includes several integrative data methods in which information on objects from different data sources can be combined. As a single data source is limited in its point of view, this provides more insight and the opportunity to investigate how the variables are interconnected. Clustering techniques are to be applied to the combined information. For now, only agglomerative hierarchical clustering is implemented. Further, differential gene expression and pathway analysis can be conducted on the clusters. 
Plotting functions are available to visualize and compare results of the different methods.","Published":"2016-03-31","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"IntegrateBs","Version":"0.1.0","Title":"Integration for B-Spline","Description":"Integrated B-spline function.","Published":"2016-06-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"IntegratedJM","Version":"1.5","Title":"Joint Modeling of the Gene-Expression and Bioassay Data, Taking\nCare of the Effect Due to a Fingerprint Feature","Description":"Offers modeling of the association between gene-expression and bioassay data, taking care of the effect due to a fingerprint feature, and provides several plots to better understand the analysis.","Published":"2017-03-29","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"IntegratedMRF","Version":"1.1.8","Title":"Integrated Prediction using Uni-Variate and Multivariate Random\nForests","Description":"An implementation of a framework for drug sensitivity prediction from various genetic characterizations using ensemble approaches. Random Forests or Multivariate Random Forest predictive models can be generated from each genetic characterization that are then combined using a Least Square Regression approach. It also provides options for the use of different error estimation approaches of Leave-one-out, Bootstrap, N-fold cross validation and 0.632+Bootstrap along with generation of prediction confidence interval using Jackknife-after-Bootstrap approach. ","Published":"2017-06-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"Interact","Version":"1.1","Title":"Tests for Marginal Interactions in a Two-Class Response Model","Description":"This package searches for marginal interactions in a\n binary response model. 
Interact uses permutation methods to\n estimate false discovery rates for these marginal interactions\n and has some limited visualization capabilities.","Published":"2014-06-07","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"interactionTest","Version":"1.0.1","Title":"Calculates Critical Test Statistics to Control False Discovery\nand Familywise Error Rates in Marginal Effects Plots","Description":"Implements the procedures suggested in Esarey and Sumner (2017) for controlling the false discovery rate or familywise error rate when constructing marginal effects plots for models with interaction terms.","Published":"2017-03-21","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"InteractiveIGraph","Version":"1.0.6.1","Title":"Interactive Network Analysis and Visualization","Description":"An extension of the package 'igraph'. This package makes it\n possible to work with 'igraph' objects interactively.","Published":"2013-05-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"interAdapt","Version":"0.1","Title":"interAdapt","Description":"A shiny application for designing adaptive clinical trials. For\n more details, see: http://arxiv.org/abs/1404.0734","Published":"2014-08-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Interatrix","Version":"1.1.1","Title":"Compute Chi-Square Measures with Corrections","Description":"Chi-square tests are computed with corrections.","Published":"2015-07-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"intercure","Version":"0.1.0","Title":"Cure Rate Estimators for Interval Censored Data","Description":"Implementations of semiparametric cure rate estimators for interval\n censored data in R. The algorithms are based on the promotion time and\n frailty models, all for interval censoring. 
For the frailty model,\n there is also an implementation for clustered data.","Published":"2016-01-12","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"InterfaceqPCR","Version":"1.0","Title":"GUI to Analyse qPCR Results after PMA Treatment or Not","Description":"Graphical user interface to determine the concentration in the sample, in CFU per mL or in number of copies per mL, from qPCR results obtained with or without PMA treatment. This package is simple to use: no knowledge of R commands is necessary. A graphic represents the standard curve, and a table containing the result for each sample is created.","Published":"2017-04-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"interferenceCI","Version":"1.1","Title":"Exact Confidence Intervals in the Presence of Interference","Description":"Computes large sample confidence intervals of Liu and Hudgens (2014), exact confidence intervals of Tchetgen Tchetgen and VanderWeele (2012), and exact confidence intervals of Rigdon and Hudgens (2014) for treatment effects on a binary outcome in two-stage randomized experiments with interference.","Published":"2015-01-08","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"interflex","Version":"1.0.3","Title":"Multiplicative Interaction Models Diagnostics and Visualization","Description":"Performs diagnostic tests of multiplicative interaction models and plots non-linear marginal effects of a treatment on an outcome across different values of a moderator.","Published":"2017-03-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"intergraph","Version":"2.0-2","Title":"Coercion Routines for Network Data Objects","Description":"Functions implemented in this package allow coercing (i.e.,\n\tconverting) network data between classes provided by other R packages.\n\tCurrently supported classes are those defined in packages: network 
and\n\tigraph.","Published":"2016-12-05","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"interim","Version":"0.6.0","Title":"Scheduling Interim Analyses in Clinical Trials","Description":"Allows the simulation of both the recruitment and treatment phase of a clinical trial. Based on these simulations, the timing of interim analyses can be assessed.","Published":"2017-06-08","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"internetarchive","Version":"0.1.6","Title":"An API Client for the Internet Archive","Description":"Search the Internet Archive, retrieve metadata, and download\n files.","Published":"2016-12-08","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"interp","Version":"1.0-29","Title":"Interpolation Methods","Description":"Bivariate data interpolation on regular and irregular\n grids, either linear or using splines, is the main part of this\n package. It is intended to provide FOSS replacement functions for\n the ACM licensed akima::interp and tripack::tri.mesh functions.\n Currently the piecewise linear interpolation part of akima::interp\n (and also akima::interpp) is implemented in interp::interp; this\n corresponds to the call akima::interp(..., linear=TRUE) which is the\n default setting and covers most of akima::interp use cases in\n depending packages. A re-implementation of Akima's spline\n interpolation (akima::interp(..., linear=FALSE)) is currently under\n development and will complete this package in a later\n version. Estimators for partial derivatives are already available;\n these are a prerequisite for the spline interpolation. The basic\n part is currently a GPLed triangulation algorithm (sweep hull\n algorithm by David Sinclair) providing the starting point for the\n piecewise linear interpolator. As a side effect, this algorithm is also\n used to provide replacements for the basic functions of the tripack\n package which also suffer from the ACM restrictions. 
All functions\n are designed to be backward compatible with their akima / tripack\n counterparts.","Published":"2017-06-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"interplot","Version":"0.1.5","Title":"Plot the Effects of Variables in Interaction Terms","Description":"Plots the conditional coefficients (\"marginal effects\") of\n variables included in multiplicative interaction terms.","Published":"2016-11-20","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"Interpol","Version":"1.3.1","Title":"Interpolation of amino acid sequences","Description":"A package for numerical encoding as well as for linear and\n non-linear interpolation of amino acid sequences.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Interpol.T","Version":"2.1.1","Title":"Hourly interpolation of multiple daily temperature series","Description":"Hourly interpolation of daily minimum and maximum temperature\n series. Carries out interpolation on multiple series at once. 
Requires some\n hourly series for calibration (alternatively can use default calibration\n table).","Published":"2013-12-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"InterpretMSSpectrum","Version":"1.0","Title":"Interpreting High Resolution Mass Spectra","Description":"Annotate and interpret deconvoluted mass spectra (mass*intensity pairs) from high resolution mass spectrometry devices.","Published":"2017-05-10","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"interpretR","Version":"0.2.4","Title":"Binary Classifier and Regression Model Interpretation Functions","Description":"Compute permutation-based performance measures and create partial\n dependence plots for (cross-validated) 'randomForest' and 'ada' models.","Published":"2016-03-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"InterSIM","Version":"2.1","Title":"Simulation of Inter-Related Genomic Datasets","Description":"Generates three inter-related genomic datasets: methylation, gene expression and protein expression.","Published":"2016-07-30","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"InterVA4","Version":"1.7.4","Title":"Replicate and Analyse 'InterVA4'","Description":"Provides an R version of the 'InterVA4' software () for coding cause of death from verbal autopsies. 
It also provides simple graphical representation of individual and population level statistics.","Published":"2017-05-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"interval","Version":"1.1-0.1","Title":"Weighted Logrank Tests and NPMLE for interval censored data","Description":"Functions to fit nonparametric survival curves, plot them, and perform logrank or Wilcoxon type tests.","Published":"2014-01-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"intervals","Version":"0.15.1","Title":"Tools for Working with Points and Intervals","Description":"Tools for working with and comparing sets of points and intervals.","Published":"2015-08-27","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"interventionalDBN","Version":"1.2.2","Title":"Interventional Inference for Dynamic Bayesian Networks","Description":"This package allows a dynamic Bayesian network to be inferred from microarray timecourse data with interventions (inhibitors).","Published":"2014-05-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"IntLik","Version":"1.0","Title":"Numerical Integration for Integrated Likelihood","Description":"This package calculates the integrated likelihood numerically. Given the Likelihood function and the prior function, this package integrates out the nuisance parameters by Metropolis-Hastings (MCMC) Algorithm.","Published":"2013-08-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"IntNMF","Version":"1.1","Title":"Integrative Clustering of Multiple Genomic Dataset","Description":"Carries out integrative clustering analysis using multiple types of genomic dataset. 
","Published":"2016-07-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"intpoint","Version":"1.0","Title":"Linear Programming Solver by the Interior Point Method and\nGraphically (Two Dimensions)","Description":"Solves linear programming problems by the interior point\n method, and plots the graphical solution of a linear\n programming problem of two dimensions.","Published":"2012-05-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"inTrees","Version":"1.1","Title":"Interpret Tree Ensembles","Description":"For tree ensembles such as random forests, regularized random forests and gradient boosted trees, this package provides functions for: extracting, measuring and pruning rules; selecting a compact rule set; summarizing rules into a learner; calculating frequent variable interactions; formatting rules in LaTeX code. ","Published":"2014-07-25","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"intReg","Version":"0.2-8","Title":"Interval Regression","Description":"Estimating interval regression models. Supports both common and observation-specific boundaries.","Published":"2015-01-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"intRegGOF","Version":"0.85-1","Title":"Integrated Regression Goodness of Fit","Description":"Performs goodness of fit for regression models using integrated regression.","Published":"2012-10-29","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"introgress","Version":"1.2.3","Title":"Methods for Analyzing Introgression Between Divergent Lineages","Description":"introgress provides functions for analyzing introgression\n of genotypes between divergent, hybridizing lineages, including\n estimating genomic clines from multi-locus genotype data and\n testing for deviations from neutral expectations. 
Functions are\n also provided for maximum likelihood estimation of molecular\n hybrid index and graphical analysis.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"intrval","Version":"0.1-1","Title":"Relational Operators for Intervals","Description":"Evaluating if values \n of vectors are within different open/closed intervals\n (`x %[]% c(a, b)`), or if two closed\n intervals overlap (`c(a1, b1) %[]o[]% c(a2, b2)`).\n Operators for negation and directional relations are also implemented.","Published":"2017-01-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"intsvy","Version":"2.0","Title":"International Assessment Data Manager","Description":"\n Provides tools for importing, merging, and analysing data from \n international assessment studies (TIMSS, PIRLS, PISA, ICILS, and PIAAC).","Published":"2017-06-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"intubate","Version":"1.0.0","Title":"Interface to Popular R Functions for Data Science Pipelines","Description":"\n Interface to popular R functions with formulas and data,\n such as 'lm', so they can be included painlessly in data\n science pipelines implemented by 'magrittr'\n with the operator %>%.","Published":"2016-08-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"inum","Version":"0.9-2","Title":"Interval and Enum-Type Representation of Vectors","Description":"Enum-type representation of vectors and representation\n of intervals, including a method for coercing variables in data frames.","Published":"2017-02-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"InvariantCausalPrediction","Version":"0.6-1","Title":"Invariant Causal Prediction","Description":"Confidence intervals for causal effects, using data collected in different experimental or environmental conditions. Hidden variables can be included in the model with a more experimental version. 
","Published":"2016-05-02","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"InvasionCorrection","Version":"0.1","Title":"Invasion Correction","Description":"The correction is achieved under the assumption that non-migrating cells of the assay approximately form a quadratic flow profile due to frictional effects; compare the law of Hagen-Poiseuille for flow in a tube. The script fits a conical plane to give xyz-coordinates of the cells. It outputs the number of migrated cells and the new corrected coordinates.","Published":"2017-02-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"Inventorymodel","Version":"1.0.4","Title":"Inventory Models","Description":"Determination of the optimal policy in inventory problems from a game-theoretic perspective.","Published":"2017-05-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"InventorymodelPackage","Version":"1.0.2","Title":"Inventorymodel","Description":"This package describes the cost games associated with inventory situations.","Published":"2014-06-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"investr","Version":"1.4.0","Title":"Inverse Estimation/Calibration Functions","Description":"Functions to facilitate inverse estimation (e.g., calibration) in\n linear, generalized linear, nonlinear, and (linear) mixed-effects models. 
A\n generic function is also provided for plotting fitted regression models with\n or without confidence/prediction bands that may be of use to the general\n user.","Published":"2016-04-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"invgamma","Version":"1.1","Title":"The Inverse Gamma Distribution","Description":"Light weight implementation of the standard distribution\n functions for the inverse gamma distribution, wrapping those for the gamma\n distribution in the stats package.","Published":"2017-05-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"invGauss","Version":"1.1","Title":"Threshold regression that fits the (randomized drift) inverse\nGaussian distribution to survival data","Description":"invGauss fits the (randomized drift) inverse Gaussian distribution to survival data. The model is described in Aalen OO, Borgan O, Gjessing HK. Survival and Event History Analysis. A Process Point of View. Springer, 2008. It is based on describing time to event as the barrier hitting time of a Wiener process, where drift towards the barrier has been randomized with a Gaussian distribution. The model allows covariates to influence starting values of the Wiener process and/or average drift towards a barrier, with a user-defined choice of link functions. ","Published":"2014-03-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"invLT","Version":"0.2.1","Title":"Inversion of Laplace-Transformed Functions","Description":"Provides two functions for the numerical inversion of Laplace-Transformed functions, returning the value of the standard (time) domain function at a specified value. The first algorithm is the first optimum contour algorithm described by Evans and Chung (2000)[1].\n The second algorithm uses the Bromwich contour as per the definition of the inverse Laplace Transform. The latter is unstable for numerical inversion and mainly included for comparison or interest. 
There are also some additional functions provided for utility, including plotting and some simple Laplace Transform examples, for which there are known analytical solutions. Polar-cartesian conversion functions are included in this package and are used by the inversion functions.\n [1] Evans & Chung, 2000: Laplace transform inversions using optimal contours in the complex plane; International Journal of Computer Mathematics v73 pp531-543.","Published":"2015-09-03","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"io","Version":"0.2.4","Title":"A Unified Framework for Input-Output Operations in R","Description":"One function to read files. One function to write files. One\n function to direct plots to screen or file. Automatic file format inference\n and directory structure creation.","Published":"2016-04-14","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"ioncopy","Version":"1.0","Title":"Calling Copy Number Alterations in Amplicon Sequencing Data","Description":"Method for the calculation of copy numbers and calling of copy number alterations. The algorithm uses coverage data from amplicon sequencing of a sample cohort as input. The method includes significance assessment, correction for multiple testing and does not depend on normal DNA controls.","Published":"2015-08-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ionflows","Version":"1.1","Title":"Calculate the Number of Required Flows for Semiconductor\nSequencing","Description":"Two methods for calculation of the number of required flows for semiconductor sequencing: 1. Using a simulation, the number of flows can be calculated for a concrete list of amplicons. 2. 
An exact combinatorial model is evaluated to calculate the number of flows for a random ensemble of sequences.","Published":"2014-11-10","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ionicons","Version":"0.1.1","Title":"'Ionicons' Icon Pack","Description":"Provides icons from the 'Ionicons' icon pack (). \n Functions are provided to get icons as png files or as raw matrices. This is useful \n when you want to embed raster icons in a report or a graphic. ","Published":"2017-01-27","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ionr","Version":"0.3.0","Title":"Test for Indifference of Indicator","Description":"Provides an item exclusion procedure, which is a formal method to \n test 'Indifference Of iNdicator' (ION). When a latent personality \n trait-outcome association is assumed, the association strength \n should not depend on which subset of indicators (i.e. items) has been \n chosen to reflect the trait. Personality traits are often measured \n (reflected) by a sum-score of a certain set of indicators. \n The item exclusion procedure randomly excludes items from a sum-score and \n tests whether the sum-score - outcome correlation changes. ION has been \n achieved when any item can be excluded from the sum-score without the \n sum-score - outcome correlation substantially changing. For more details, \n see Vainik, Mottus et al. (2015) \"Are Trait-Outcome Associations Caused\n by Scales or Particular Items? 
Example Analysis of Personality Facets and\n BMI\", European Journal of Personality, DOI: <10.1002/per.2009>.","Published":"2016-02-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"iopsych","Version":"0.90.1","Title":"Methods for Industrial/Organizational Psychology","Description":"Collection of functions for IO Psychologists.","Published":"2016-04-04","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"iosmooth","Version":"0.94","Title":"Functions for Smoothing with Infinite Order Flat-Top Kernels","Description":"Density, spectral density, and regression estimation using infinite\n order flat-top kernels.","Published":"2017-01-28","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"iotools","Version":"0.1-12","Title":"I/O Tools for Streaming","Description":"Basic I/O tools for streaming.","Published":"2015-07-31","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"ipdmeta","Version":"2.4","Title":"Tools for subgroup analyses with multiple trial data using\naggregate statistics","Description":"This package provides functions to estimate an IPD linear\n mixed effects model for a continuous outcome and any\n categorical covariate from study summary statistics. There are\n also functions for estimating the power of a\n treatment-covariate interaction test in an individual patient\n data meta-analysis from aggregate data.","Published":"2012-09-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ipdw","Version":"0.2-6","Title":"Spatial Interpolation by Inverse Path Distance Weighting","Description":"Functions are provided to interpolate geo-referenced point data via\n Inverse Path Distance Weighting. 
Useful for coastal marine applications where\n barriers in the landscape preclude interpolation with Euclidean distances.","Published":"2017-05-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"IPEC","Version":"0.0.9","Title":"Root Mean Square Curvature Calculation","Description":"Calculates the RMS intrinsic and parameter-effects curvatures of a nonlinear regression model. ","Published":"2017-05-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ipflasso","Version":"0.1","Title":"Integrative Lasso with Penalty Factors","Description":"The core of the package is cvr2.ipflasso(), an extension of glmnet to be used when the (large) set of available predictors is partitioned into several modalities which potentially differ with respect to their information content in terms of prediction. For example, in biomedical applications patient outcome such as survival time or response to therapy may have to be predicted based on, say, mRNA data, miRNA data, methylation data, CNV data, clinical data, etc. The clinical predictors are on average often much more important for outcome prediction than the mRNA data. The ipflasso method takes this problem into account by using different penalty parameters for predictors from different modalities. The ratio between the different penalty parameters can be chosen by cross-validation.","Published":"2015-11-24","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"ipfp","Version":"1.0.1","Title":"Fast Implementation of the Iterative Proportional Fitting\nProcedure in C","Description":"A fast (C) implementation of the iterative proportional fitting\n procedure. Based on corresponding code from the networkTomography package.","Published":"2016-02-14","License":"Apache License (== 2.0)","snapshot_date":"2017-06-23"} {"Package":"ipft","Version":"0.6","Title":"Indoor Positioning Fingerprinting Toolset","Description":"Algorithms and utility functions for indoor positioning using fingerprinting techniques. 
\n These functions are designed for manipulation of RSSI (Received Signal Strength Intensity) data \n sets, estimation of positions, comparison of the performance of different models, and graphical \n visualization of data. Machine learning algorithms and methods such as k-nearest neighbors or\n probabilistic fingerprinting are implemented in this package to perform analysis\n and estimations over RSSI data sets.","Published":"2017-05-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"iplots","Version":"1.1-7","Title":"iPlots - interactive graphics for R","Description":"Interactive plots for R","Published":"2013-12-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"IPMpack","Version":"2.1","Title":"Builds and analyses Integral Projection Models (IPMs)","Description":"IPMpack takes demographic vital rates and (optionally) environmental data to build integral projection models. A number of functional forms for growth and survival can be incorporated, as well as a range of reproductive strategies. The package also includes a suite of diagnostic routines, provides classic matrix model output (e.g., lambda, elasticities, sensitivities), and produces post-hoc metrics (e.g., passage time and life expectancy). ","Published":"2014-03-17","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"IPMRF","Version":"1.0","Title":"Intervention in Prediction Measure (IPM) for Random Forests","Description":"Computes IPM for assessing variable importance for random forests. See details at I. Epifanio (2017) . ","Published":"2017-05-05","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ipred","Version":"0.9-6","Title":"Improved Predictors","Description":"Improved predictive models by indirect classification and\n bagging for classification, regression and survival problems \n as well as resampling based estimators of prediction error. 
","Published":"2017-03-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"iprior","Version":"0.6.5","Title":"Linear Regression using I-Priors","Description":"Provides methods to perform and analyse I-prior regression models.\n Estimation is mainly done via an EM algorithm, but there is flexibility in\n using any optimiser.","Published":"2017-06-12","License":"GPL (>= 3.0)","snapshot_date":"2017-06-23"} {"Package":"ips","Version":"0.0-7","Title":"Interfaces to Phylogenetic Software in R","Description":"This package provides functions that wrap popular phylogenetic software for sequence alignment, masking of sequence alignments, and estimation of phylogenies and ancestral character states.","Published":"2014-11-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"IPSUR","Version":"1.5","Title":"Introduction to Probability and Statistics Using R","Description":"This package contains the Sweave source code used to\n generate IPSUR, an introductory probability and statistics\n textbook, alongside other supplementary materials such as the\n parsed R code for the book and data for the examples and\n exercises. The book is released under the GNU Free\n Documentation License.","Published":"2013-10-28","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"IPtoCountry","Version":"0.0.1","Title":"Convert IP Addresses to Country Names or Full Location with\nGeoplotting","Description":"Tools for identifying the origins of IP addresses. Includes functions for converting IP addresses\n to country names, location details (region, city, zip, latitude, longitude), IP codes, binary values, as well\n as a function for plotting IP locations on a world map. 
This product includes IP2Location LITE data available\n from and is available under the Creative Commons Attribution-ShareAlike 4.0 International\n license (CC-BY-SA 4.0).","Published":"2016-07-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"iptools","Version":"0.4.0","Title":"Manipulate, Validate and Resolve 'IP' Addresses","Description":"A toolkit for manipulating, validating and testing 'IP' addresses and\n ranges, along with datasets relating to 'IP' addresses. Tools are also provided\n to map 'IPv4' blocks to country codes. While it primarily has support for the 'IPv4'\n address space, more extensive 'IPv6' support is intended.","Published":"2016-04-04","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ipw","Version":"1.0-11","Title":"Estimate Inverse Probability Weights","Description":"Functions to estimate the probability to receive the observed treatment, based on\n individual characteristics. The inverse of these probabilities can be used as weights when\n\testimating causal effects from observational data via marginal structural models. Both point\n\ttreatment situations and longitudinal studies can be analysed. The same functions can be used to\n\tcorrect for informative censoring.\t","Published":"2015-08-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"IPWsurvival","Version":"0.5","Title":"Propensity Score Based Adjusted Survival Curves and\nCorresponding Log-Rank Statistic","Description":"In observational studies, the presence of confounding factors is common and the comparison of different groups of subjects requires adjustment. 
In this package, we propose simple functions to estimate adjusted survival curves and a log-rank test based on inverse probability weighting (IPW).","Published":"2017-03-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"IQCC","Version":"0.6","Title":"Improved Quality Control Charts","Description":"Builds statistical control charts with exact limits for\n univariate and multivariate cases.","Published":"2014-04-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"iqLearn","Version":"1.4","Title":"Interactive Q-Learning","Description":"Estimate an optimal dynamic treatment regime using Interactive Q-learning.","Published":"2015-08-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"iqspr","Version":"1.1","Title":"Generate Chemical Strings (SMILES) with the Inverse QSPR Model","Description":"Generate chemical structures possibly satisfying desired properties\n using the inverse QSPR model. It has three reference classes. ENgram is a class for learning the grammar \n structure of existing chemical strings using an extended N-gram model. QSPRpred contains \n a simple Bayes regression model to predict properties from structures. SmcChem is a generator class for\n chemical strings from the inverse QSPR model. This class has ENgram and QSPRpred class objects inside. \n The generator is implemented with a Sequential Monte Carlo sampler. 
","Published":"2017-04-24","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"irace","Version":"2.3","Title":"Iterated Racing for Automatic Algorithm Configuration","Description":"Iterated race is an extension of the Iterated F-race method for\n the automatic configuration of optimization algorithms, that is,\n (offline) tuning their parameters by finding the most appropriate\n settings given a set of instances of an optimization problem.","Published":"2017-04-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"iRafNet","Version":"1.1-1","Title":"Integrative Random Forest for Gene Regulatory Network Inference","Description":"Provides a flexible integrative algorithm that allows information from prior data, such as protein-protein interactions and gene knock-down, to be jointly considered for gene regulatory network inference.","Published":"2016-10-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"IRATER","Version":"0.0.1","Title":"An R Interface for the Instantaneous RATEs (IRATE) Model","Description":"An R interface to set up, run and read IRATE model runs to assess band recovery (conventional tagging) data (i.e. age-dependent or independent fishing and natural mortality rates).","Published":"2016-10-26","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"IRdisplay","Version":"0.4.4","Title":"'Jupyter' Display Machinery","Description":"\n An interface to the rich display capabilities of 'Jupyter' front-ends (e.g. 
'Jupyter Notebook') .\n Designed to be used from a running 'IRkernel' session .","Published":"2016-08-02","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"iRefR","Version":"1.13","Title":"iRefIndex Manager","Description":"\"iRefR\" allows the user to load any version of the consolidated protein interaction database \"iRefIndex\" and perform tasks such as: selecting databases, pmids and experimental methods, searching for specific proteins, separating binary interactions from complexes and polymers, generating complexes according to an algorithm that looks for possible binary-represented complexes, producing general database statistics and creating network graphs, among others.","Published":"2013-12-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"iRegression","Version":"1.2.1","Title":"Regression Methods for Interval-Valued Variables","Description":"Contains some important regression methods for interval-valued variables. For each method, fitted values, residuals and some goodness-of-fit measures are available.","Published":"2016-07-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"iRepro","Version":"1.0","Title":"Reproducibility for Interval-Censored Data","Description":"This package calculates the intraclass correlation coefficient (ICC) for assessing reproducibility of interval-censored data with two repeated measurements. ICC is estimated by maximum likelihood from a model with one fixed and one random effect (both intercepts). Help in model checking (normality of subjects' means and residuals) is provided.","Published":"2014-05-08","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"IrishDirectorates","Version":"0.1.0","Title":"Irish Companies' Boards from 2003 to 2013","Description":"This data package contains the boards' compositions of companies quoted in the Irish Stock Exchange at the end of each year from 2003 to 2013. The data have been first analysed in Friel, N., Rastelli, R., Wyse, J. 
and Raftery, A.E. (2016) .","Published":"2016-11-18","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"IRISMustangMetrics","Version":"2.0.8","Title":"Statistics and Metrics for Seismic Data","Description":"Classes and functions for metrics calculation as part of the\n 'IRIS DMC MUSTANG' project. The functionality in this package \n builds upon the base classes of the 'IRISSeismic' package.\n Metrics include basic statistics as well as higher level\n 'health' metrics that can help identify problematic seismometers.","Published":"2017-05-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"IRISSeismic","Version":"1.4.5","Title":"Classes and Methods for Seismic Data Analysis","Description":"Provides classes and methods for seismic data analysis. The\n base classes and methods are inspired by the python code found in\n the 'ObsPy' python toolbox . Additional classes and \n methods support data returned by web services provided by the 'IRIS DMC'\n .","Published":"2017-06-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"irlba","Version":"2.2.1","Title":"Fast Truncated Singular Value Decomposition and Principal\nComponents Analysis for Large Dense and Sparse Matrices","Description":"Fast and memory efficient methods for truncated singular value\n decomposition and principal components analysis of large sparse and dense matrices.","Published":"2017-05-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"irr","Version":"0.84","Title":"Various Coefficients of Interrater Reliability and Agreement","Description":"Coefficients of Interrater Reliability and Agreement for\n quantitative, ordinal and nominal data: ICC, Finn-Coefficient,\n Robinson's A, Kendall's W, Cohen's Kappa, ...","Published":"2012-07-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"irtDemo","Version":"0.1.2","Title":"Item Response Theory Demo Collection","Description":"\n Includes a collection of shiny applications to demonstrate\n or 
to explore fundamental item response theory (IRT) concepts\n such as estimation, scoring, and multidimensional IRT models.","Published":"2016-08-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"irtoys","Version":"0.2.0","Title":"A Collection of Functions Related to Item Response Theory (IRT)","Description":"A collection of functions useful in learning and practicing IRT,\n which can be combined into larger programs. Provides basic CTT analysis,\n a simple common interface to the estimation of item\n parameters in IRT models for binary responses with three different programs\n (ICL, BILOG-MG, and ltm), ability estimation (MLE, BME, EAP, WLE, plausible \n values), item and person fit statistics, scaling methods (MM, MS, Stocking-Lord,\n and the complete Haebara method), and a rich array of parametric and \n non-parametric (kernel) plots. Estimates and plots Haberman's interaction model\n when all items are dichotomously scored.","Published":"2016-10-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"IRTpp","Version":"0.2.6.1","Title":"Estimating IRT Parameters using the IRT Methodology","Description":"An implementation of the IRT paradigm for the scoring of different\n instruments measuring latent traits (a.k.a. abilities) and estimating item\n parameters for a variety of models. The package is highly optimized using\n Rcpp, with the rest carefully written in R, and it aims to expand IRT\n applications to those that require faster and more robust estimation\n procedures. 
See the IRTpp documentation and github site for more information and\n examples.","Published":"2016-12-27","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"irtProb","Version":"1.2","Title":"Utilities and Probability Distributions Related to\nMultidimensional Person Item Response Models","Description":"Multidimensional Person Item Response Theory probability distributions","Published":"2014-05-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"irtrees","Version":"0.1.0","Title":"Estimation of Tree-Based Item Response Models","Description":"Helper functions and example data sets accompanying De\n Boeck, P. and Partchev, I. (2012) IRTrees: Tree-Based Item\n Response Models of the GLMM Family, Journal of Statistical\n Software - Code Snippets, 48(1), 1-28.","Published":"2012-05-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"IRTShiny","Version":"1.2","Title":"Item Response Theory via Shiny","Description":"Interactive shiny application for running Item Response Theory\n analysis. Provides graphics for characteristic and information curves.","Published":"2017-04-07","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"isa2","Version":"0.3.5","Title":"The Iterative Signature Algorithm","Description":"The ISA is a biclustering algorithm that finds modules \n in an input matrix. A module or bicluster is a block of the\n reordered input matrix.","Published":"2017-03-02","License":"CC BY-NC-SA 4.0","snapshot_date":"2017-06-23"} {"Package":"ISBF","Version":"0.2.1","Title":"Iterative Selection of Blocks of Features - ISBF","Description":"Selection of features for sparse regression estimation (like the LASSO). Selection of blocks of features when the regression parameter is sparse and constant by blocks (like the Fused-LASSO). 
Application to cgh arrays.","Published":"2014-11-10","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"ISDA.R","Version":"1.0","Title":"interval symbolic data analysis for R","Description":"Describes a set of operations for interval-valued symbolic\n data. The operations include conversion of punctual\n variables to interval variables, construction of 3D interval\n graphics, interval linear regression and interval descriptive\n statistics such as mean, median, variance, standard deviation\n and mode.","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"isdals","Version":"2.0-4","Title":"Provides datasets for Introduction to Statistical Data Analysis\nfor the Life Sciences","Description":"Provides datasets for Introduction to Statistical Data Analysis for the Life Sciences.","Published":"2014-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"iSDM","Version":"1.0","Title":"Invasive Species Distribution Modelling","Description":"Functions for predicting and mapping potential and realized distributions of invasive species within the invaded range.","Published":"2017-03-09","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"isdparser","Version":"0.2.0","Title":"Parse 'NOAA' Integrated Surface Data Files","Description":"Tools for parsing 'NOAA' Integrated Surface Data ('ISD') files,\n described at . Data include, for example,\n wind speed and direction, temperature, cloud data, sea level pressure,\n and more. Includes data from approximately 35,000 stations worldwide,\n though best coverage is in North America/Europe/Australia. 
Data is stored\n as variable length ASCII character strings, with most fields optional.\n Included are tools for parsing entire files, or individual lines of data.","Published":"2017-01-18","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"IsingFit","Version":"0.3.1","Title":"Fitting Ising Models Using the ELasso Method","Description":"This network estimation procedure, eLasso, which is based on the Ising model, combines l1-regularized logistic regression with model selection based on the Extended Bayesian Information Criterion (EBIC). EBIC is a fit measure that identifies relevant relationships between variables. The resulting network consists of variables as nodes and relevant relationships as edges. Can deal with binary data.","Published":"2016-09-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"isingLenzMC","Version":"0.2.5","Title":"Monte Carlo for Classical Ising Model","Description":"The classical Ising model is a landmark system in statistical physics. The model explains the physics of spin glasses and magnetic materials, and cooperative phenomena in general, for example phase transitions and neural networks. This package provides utilities to simulate the one-dimensional Ising model with Metropolis and Glauber Monte Carlo with single-flip dynamics in periodic boundary conditions. Utility functions for exact solutions are provided.","Published":"2016-07-02","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"IsingSampler","Version":"0.2","Title":"Sampling Methods and Distribution Functions for the Ising Model","Description":"Sample states from the Ising model and compute the probability of states. 
Sampling can be done for any number of nodes, but due to the intractability of the Ising model the distribution can only be computed up to ~10 nodes.","Published":"2015-03-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"island","Version":"0.1.2","Title":"Stochastic Island Biogeography Theory Made Easy","Description":"Tools to develop stochastic models based on the Theory of Island\n Biogeography (TIB) of MacArthur and Wilson (1967) \n and extensions. The package implements methods to estimate colonization and\n extinction rates (including environmental variables) given presence-absence\n data, simulate community assembly, and perform model selection.","Published":"2016-04-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ISLR","Version":"1.0","Title":"Data for An Introduction to Statistical Learning with\nApplications in R","Description":"The collection of datasets used in the book \"An\n Introduction to Statistical Learning with Applications in R\".","Published":"2013-06-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ismev","Version":"1.41","Title":"An Introduction to Statistical Modeling of Extreme Values","Description":"Functions to support the computations carried out in\n `An Introduction to Statistical Modeling of Extreme Values' by\n Stuart Coles. The functions may be divided into the following \n groups: maxima/minima, order statistics, peaks over thresholds\n and point processes. 
","Published":"2016-04-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Iso","Version":"0.0-17","Title":"Functions to Perform Isotonic Regression","Description":"Linear order and unimodal order (univariate)\n\t isotonic regression; bivariate isotonic regression\n\t with linear order on both variables.","Published":"2015-06-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"IsoCI","Version":"1.1","Title":"Confidence intervals for current status data based on\ntransformations and bootstrap","Description":"Some functions for confidence intervals for current status data based on transformations and bootstrap.","Published":"2014-06-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"isocir","Version":"2.0-6","Title":"Isotonic Inference for Circular Data","Description":"A collection of functions to deal with circular data under order restrictions.","Published":"2016-12-09","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"ISOcodes","Version":"2016.12.09","Title":"Selected ISO Codes","Description":"ISO language, territory, currency, script and character codes.\n Provides ISO 639 language codes, ISO 3166 territory codes, ISO 4217\n currency codes, ISO 15924 script codes, and the ISO 8859 character codes\n as well as the UN M.49 area codes.","Published":"2016-12-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"IsoGene","Version":"1.0-24","Title":"Order-Restricted Inference for Microarray Experiments","Description":"Offers a framework for testing for a monotonic relationship between gene expression and doses in a microarray experiment. Several testing procedures, including the global likelihood-ratio test (Bartholomew, 1961), Williams (1971, 1972), Marcus (1976), M (Hu et al. 2005) and the modified M (Lin et al. 2007), are used to test for the monotonic trend in gene expression with respect to doses. 
BH (Benjamini and Hochberg 1995) and BY (Benjamini and Yekutieli 2004) FDR controlling procedures are applied to adjust the raw p-values obtained from the permutations. ","Published":"2015-07-31","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"isopam","Version":"0.9-13","Title":"Isopam (Clustering)","Description":"Isopam clustering algorithm and utilities. \n Isopam optimizes clusters and optionally cluster numbers in \n a brute force style and aims at an optimum separation \n by all or some descriptors (typically species). ","Published":"2014-12-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"isopat","Version":"1.0","Title":"Calculation of isotopic pattern for a given molecular formula","Description":"The function calculates the isotopic pattern (fine\n structures) for a given chemical formula.","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"isoph","Version":"1.1.1","Title":"Isotonic Proportional Hazards Model","Description":"Nonparametric estimation of an isotonic covariate effect for proportional hazards model.","Published":"2017-06-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"IsoplotR","Version":"0.14","Title":"Statistical Toolbox for Radiometric Geochronology","Description":"An R implementation of Ken Ludwig's popular Isoplot add-in to Microsoft Excel. Plots U-Pb data on Wetherill and Tera-Wasserburg concordia diagrams. Calculates concordia and discordia ages. Performs linear regression of measurements with correlated errors using the 'York' approach. Generates Kernel Density Estimates (KDEs) and Cumulative Age Distributions (CADs). Produces Multidimensional Scaling (MDS) configurations and Shepard plots of multi-sample detrital datasets using the Kolmogorov-Smirnov distance as a dissimilarity measure. Calculates 40Ar/39Ar ages, isochrons, and age spectra. Computes weighted means accounting for overdispersion. 
Calculates U-Th-He (single grain and central) ages, logratio plots and ternary diagrams. Processes fission track data using the external detector method and LA-ICP-MS, calculates central ages and plots fission track and other data on radial (a.k.a. 'Galbraith') plots. Constructs Pb-Pb, Re-Os, Sm-Nd, Lu-Hf and Rb-Sr isochrons.","Published":"2017-06-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ISOpureR","Version":"1.0.21","Title":"Deconvolution of Tumour Profiles","Description":"Deconvolution of mixed tumour profiles into normal and cancer for each patient, using \n\tthe ISOpure algorithm in Quon et al. Genome Medicine, 2013 5:29. Deconvolution requires \n\tmixed tumour profiles and a set of unmatched \"basis\" normal profiles.","Published":"2016-08-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"IsoriX","Version":"0.5","Title":"Isoscape Computation and Inference of Spatial Origins using\nMixed Models","Description":"\n Building isoscapes using mixed models and inferring the geographic origin of \n organisms based on their isotopic ratios. This package is essentially a \n simplified interface to several other packages. It uses 'spaMM' for fitting \n and predicting isoscapes, and assigning an organism's origin depending on its \n isotopic ratio. 'IsoriX' also relies heavily on the package 'rasterVis' for \n plotting the maps using lattice.","Published":"2017-04-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"IsoSpecR","Version":"1.0.3","Title":"The IsoSpec Algorithm","Description":"IsoSpec is a fine structure calculator used for obtaining the most\n probable masses of a chemical compound given the frequencies of the composing\n isotopes and their masses. It finds the smallest set of isotopologues with\n a given probability. 
The probability is assumed to be that of the product of\n multinomial distributions, each corresponding to one particular element and\n parametrized by the frequencies of finding these elements in nature. These\n numbers are supplied by IUPAC - the International Union of Pure and Applied\n Chemistry.","Published":"2017-01-13","License":"BSD_2_clause + file LICENCE","snapshot_date":"2017-06-23"} {"Package":"isotone","Version":"1.1-0","Title":"Active Set and Generalized PAVA for Isotone Optimization","Description":"Contains two main functions: one for\n solving general isotone regression problems using the\n pool-adjacent-violators algorithm (PAVA); another one provides\n a framework for active set methods for isotone optimization\n problems with arbitrary order restrictions. Various types of\n loss functions are prespecified.","Published":"2015-07-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"isotonic.pen","Version":"1.0","Title":"Penalized Isotonic Regression in one and two dimensions","Description":"Given a response y and a one- or two-dimensional predictor, the isotonic regression estimator is calculated with the usual orderings. ","Published":"2014-04-05","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"IsotopeR","Version":"0.5.4","Title":"Stable Isotope Mixing Model","Description":"Estimates diet contributions from isotopic sources using JAGS.\n Includes estimation of concentration dependence and measurement error.","Published":"2016-06-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ISOweek","Version":"0.6-2","Title":"Week of the year and weekday according to ISO 8601","Description":"This is a substitute for the %V and %u formats which are\n not implemented on Windows. 
In addition, the package offers\n functions to convert the standard calendar format yyyy-mm-dd\n to and from the ISO 8601 week format yyyy-Www-d.","Published":"2011-09-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ISR3","Version":"0.98","Title":"Iterative Sequential Regression","Description":"Performs multivariate normal imputation through iterative sequential \n regression. Conditional dependency structure between imputed variables can be \n specified a priori to accelerate imputation.","Published":"2016-10-14","License":"Unlimited","snapshot_date":"2017-06-23"} {"Package":"issueReporter","Version":"0.1.0","Title":"Create Reports from GitHub Issues","Description":"Generates a report from a GitHub issue thread, using R Markdown.","Published":"2017-05-15","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"isva","Version":"1.9","Title":"Independent Surrogate Variable Analysis","Description":"Independent Surrogate Variable Analysis is an algorithm\n for feature selection in the presence of potential confounding\n factors (see Teschendorff AE et al 2011, ).","Published":"2017-01-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ISwR","Version":"2.0-7","Title":"Introductory Statistics with R","Description":"Data sets and scripts for text examples and exercises in \n P. Dalgaard (2008), `Introductory Statistics with R', 2nd ed., Springer Verlag, ISBN 978-0387790534. ","Published":"2015-06-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"itan","Version":"1.0","Title":"Item Analysis for Multiple Choice Tests","Description":"Functions for analyzing multiple choice items. 
These analyses include the conversion of student responses into binary data (correct/incorrect), the computation of the number of correct responses and the grade for each subject, the calculation of item difficulty and discrimination, the computation of the frequency and point-biserial correlation for each distractor, and the graphical analysis of each item.","Published":"2017-03-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"itcSegment","Version":"0.5","Title":"Individual Tree Crowns Segmentation","Description":"Three methods for Individual Tree Crowns (ITCs) delineation on remote sensing data: two are based on LiDAR data in x,y,z format and one on imagery data in raster format.","Published":"2017-05-23","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"iteRates","Version":"3.1","Title":"Parametric rate comparison","Description":"Iterates through a phylogenetic tree to identify regions\n of rate variation using the parametric rate comparison test.","Published":"2013-05-03","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"iterators","Version":"1.0.8","Title":"Provides Iterator Construct for R","Description":"Support for iterators, which allow a programmer to traverse\n through all the elements of a vector, list, or other collection\n of data.","Published":"2015-10-13","License":"Apache License (== 2.0)","snapshot_date":"2017-06-23"} {"Package":"iterLap","Version":"1.1-2","Title":"Approximate probability densities by iterated Laplace\nApproximations","Description":"The iterLap (iterated Laplace approximation) algorithm\n approximates a general (possibly non-normalized) probability\n density on R^p, by repeated Laplace approximations to the\n difference between current approximation and true density (on\n log scale). The final approximation is a mixture of\n multivariate normal distributions and might be used for example\n as a proposal distribution for importance sampling (eg in\n Bayesian applications). 
The algorithm can be seen as a\n computational generalization of the Laplace approximation\n suitable for skew or multimodal densities.","Published":"2012-05-22","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"iterpc","Version":"0.3.0","Title":"Efficient Iterator for Permutations and Combinations","Description":"A collection of iterators for generating permutations and combinations with or\n without replacement; with distinct items or non-distinct items (multiset).\n The generated sequences are in lexicographical order (dictionary order). The\n algorithms to generate permutations and combinations are memory efficient. These\n iterative algorithms enable users to process all sequences without putting all\n results in the memory at the same time. The algorithms are written in C/C++ for\n faster performance.","Published":"2016-05-31","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"itertools","Version":"0.1-3","Title":"Iterator Tools","Description":"Various tools for creating iterators, many patterned after\n functions in the Python itertools module, and others patterned\n after functions in the 'snow' package.","Published":"2014-03-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"itertools2","Version":"0.1.1","Title":"itertools2: Functions creating iterators for efficient looping","Description":"A port of Python's excellent itertools module to R for efficient\n looping.","Published":"2014-08-08","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ITGM","Version":"0.4","Title":"Individual Tree Growth Modeling","Description":"The individual tree model is an instrument to support decision-making in\n forest management. This package provides functions for working with data for this model. 
Other support functions and extensions related to\n this model are also available.","Published":"2016-10-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"itree","Version":"0.1","Title":"Tools for classification and regression trees, with an emphasis\non interpretability","Description":"This package is based on the code of the rpart package.\n It extends rpart by adding additional splitting methods\n emphasizing interpretable/parsimonious trees. Unless indicated\n otherwise, it is safe to assume that all functions herein are\n extensions of or copied directly from similar or nearly\n identical rpart methods. As such, the authors of rpart are\n authors of this package as well. However, please direct any\n error reports or other questions about itree to the maintainer\n of this package; they are welcome and appreciated.","Published":"2013-06-27","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"itsadug","Version":"2.2","Title":"Interpreting Time Series and Autocorrelated Data Using GAMMs","Description":"GAMM (Generalized Additive Mixed Modeling; Lin & Zhang, 1999)\n as implemented in the R package 'mgcv' (Wood, S.N., 2006; 2011) is a nonlinear\n regression analysis which is particularly useful for time course data such as\n EEG, pupil dilation, gaze data (eye tracking), and articulography recordings,\n but also for behavioral data such as reaction times and response data. As time\n course measures are sensitive to autocorrelation problems, GAMMs implement\n methods to reduce autocorrelation problems. 
This package includes functions\n for the evaluation of GAMM models (e.g., model comparisons, determining regions\n of significance, inspection of autocorrelational structure in residuals)\n and the interpretation of GAMMs (e.g., visualization of complex interactions, and\n contrasts).","Published":"2016-06-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"itsmr","Version":"1.5","Title":"Time series analysis package for students","Description":"This package provides a subset of the functionality found in the Windows-based program ITSM. The intended audience is students using the textbook \"Introduction to Time Series and Forecasting\" by Peter J. Brockwell and Richard A. Davis.","Published":"2011-11-13","License":"Unlimited","snapshot_date":"2017-06-23"} {"Package":"IUPS","Version":"1.0","Title":"Incorporating Uncertainties in Propensity Scores","Description":"This package includes functions to incorporate\n uncertainties in estimated propensity scores and provide\n adjusted standard errors for making valid causal inference.","Published":"2013-06-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ivbma","Version":"1.05","Title":"Bayesian Instrumental Variable Estimation and Model\nDetermination via Conditional Bayes Factors","Description":"This package allows one to incorporate instrument and covariate uncertainty into instrumental variable regression.","Published":"2014-09-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ivfixed","Version":"1.0","Title":"Instrumental fixed effect panel data model","Description":"Fits an instrumental least squares dummy variable model.","Published":"2014-03-25","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"ivlewbel","Version":"1.1","Title":"Uses heteroscedasticity to estimate mismeasured and endogenous\nregressor models","Description":"GMM estimation of triangular systems using heteroscedasticity based instrumental variables as in Lewbel 
(2012).","Published":"2014-05-28","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"ivmodel","Version":"1.6","Title":"Statistical Inference and Sensitivity Analysis for Instrumental\nVariables Model","Description":"Contains functions for carrying out instrumental variable\n estimation of causal effects, including power analysis, sensitivity analysis,\n and diagnostics.","Published":"2017-05-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ivpack","Version":"1.2","Title":"Instrumental Variable Estimation","Description":"This package contains functions for carrying out instrumental variable estimation of causal effects and power analyses for instrumental variable studies. ","Published":"2014-10-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ivpanel","Version":"1.0","Title":"Instrumental Panel Data Models","Description":"Fit the instrumental panel data models: the fixed effects, random\n effects and between models.","Published":"2015-02-09","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ivprobit","Version":"1.0","Title":"Instrumental variables probit model","Description":"Fits an instrumental variables probit model using the\n generalized least squares estimator.","Published":"2014-09-25","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"iWeigReg","Version":"1.0","Title":"Improved methods for causal inference and missing data problems","Description":"Improved methods based on inverse probability weighting\n and outcome regression for causal inference and missing data\n problems.","Published":"2013-02-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"iWISA","Version":"1.0-2","Title":"Wavelet-Based Index of Storm Activity","Description":"A powerful system for estimating an improved wavelet-based index\n of magnetic storm activity, storm activity preindex (from individual station) and SQ variations.\n It also serves as a flexible visualization tool. 
","Published":"2016-03-14","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"jaatha","Version":"3.2.0","Title":"Simulation-Based Maximum Likelihood Parameter Estimation","Description":"An estimation method that can use computer simulations to\n approximate maximum-likelihood estimates even when the likelihood function can not\n be evaluated directly. It can be applied whenever it is feasible to conduct many\n simulations, but works best when the data is approximately Poisson distributed.\n It was originally designed for demographic inference in evolutionary\n biology. It has optional support for conducting coalescent simulation using\n the 'coala' package.","Published":"2016-05-13","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"jackknifeKME","Version":"1.2","Title":"Jackknife Estimates of Kaplan-Meier Estimators or Integrals","Description":"Computing the original and modified jackknife estimates of Kaplan-Meier estimators.","Published":"2015-10-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"jackstraw","Version":"1.1","Title":"Statistical Inference of Variables Driving Systematic Variation","Description":"Significance test for association between variables\n\tand their estimated latent variables.\n\tLatent variables may be estimated by principal component analysis (PCA),\n\tlogistic factor analysis (LFA), and other techniques.","Published":"2015-12-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"JacobiEigen","Version":"0.2-2","Title":"Classical Jacobi Eigensolution Algorithm","Description":"Implements the classical Jacobi (1846) algorithm for the\n eigenvalues and eigenvectors of a real symmetric matrix, both in \n pure R and in C++ using Rcpp. 
Mainly as a programming example \n for teaching purposes.","Published":"2015-12-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"jacpop","Version":"0.5","Title":"Jaccard Index for Population Structure Identification","Description":"Uses the Jaccard similarity index to account for population\n structure in sequencing studies. This method was specifically\n designed to detect population stratification based on rare variants, hence it\n will be especially useful in rare variant analysis.","Published":"2016-07-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"JADE","Version":"2.0-0","Title":"Blind Source Separation Methods Based on Joint Diagonalization\nand Some BSS Performance Criteria","Description":"Cardoso's JADE algorithm as well as his functions for joint diagonalization are ported to R. Also several other blind source separation (BSS) methods, like AMUSE and SOBI, and some criteria for performance evaluation of BSS algorithms, are given. ","Published":"2017-01-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"jagsUI","Version":"1.4.4","Title":"A Wrapper Around 'rjags' to Streamline 'JAGS' Analyses","Description":"A set of wrappers around 'rjags' functions to run Bayesian analyses in 'JAGS' (specifically, via 'libjags'). A single function call can control adaptive, burn-in, and sampling MCMC phases, with MCMC chains run in sequence or in parallel. Posterior distributions are automatically summarized (with the ability to exclude some monitored nodes if desired) and functions are available to generate figures based on the posteriors (e.g., predictive check plots, traceplots). Function inputs, argument syntax, and output format are nearly identical to the 'R2WinBUGS'/'R2OpenBUGS' packages to allow easy switching between MCMC samplers. 
","Published":"2016-12-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"JAGUAR","Version":"3.0.1","Title":"Joint Analysis of Genotype and Group-Specific Variability Using\na Novel Score Test Approach to Map Expression Quantitative\nTrait Loci (eQTL)","Description":"Implements a novel score test that measures 1) the overall shift in the gene expression due to genotype (additive genetic effect), and 2) group-specific changes in gene expression due to genotype (interaction effect) in a mixed-effects model framework.","Published":"2016-07-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"james.analysis","Version":"1.0.1","Title":"Analysis Tools for the 'JAMES' Framework","Description":"Analyze and visualize results of studies performed with the\n analysis tools in 'JAMES', a modern object-oriented Java\n framework for discrete optimization using local search\n metaheuristics (see http://www.jamesframework.org).","Published":"2015-06-18","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"janeaustenr","Version":"0.1.5","Title":"Jane Austen's Complete Novels","Description":"Full texts for Jane Austen's 6 completed novels, ready for text\n analysis. These novels are \"Sense and Sensibility\", \"Pride and Prejudice\",\n \"Mansfield Park\", \"Emma\", \"Northanger Abbey\", and \"Persuasion\".","Published":"2017-06-10","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"janitor","Version":"0.3.0","Title":"Simple Tools for Examining and Cleaning Dirty Data","Description":"The main janitor functions can: perfectly format data.frame column\n names; provide quick one- and two-variable tabulations (i.e., frequency\n tables and crosstabs); and isolate duplicate records. Other janitor functions\n nicely format the tabulation results. These tabulate-and-report functions\n approximate popular features of SPSS and Microsoft Excel. 
This package\n follows the principles of the \"tidyverse\" and works well with the pipe function\n %>%. janitor was built with beginning-to-intermediate R users in mind and is\n optimized for user-friendliness. Advanced R users can already do everything\n covered here, but with janitor they can do it faster and save their thinking for\n the fun stuff.","Published":"2017-05-06","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"JASPAR","Version":"0.0.1","Title":"R modules for JASPAR databases: a collection of transcription\nfactor DNA-binding preferences, modeled as matrices","Description":"R modules for JASPAR data processing and visualization","Published":"2012-11-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"JavaGD","Version":"0.6-1","Title":"Java Graphics Device","Description":"Graphics device routing all graphics commands to a Java\n program. The actual functionality of the JavaGD depends on the\n Java-side implementation. Simple AWT and Swing implementations\n are included.","Published":"2012-09-13","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"JBTools","Version":"0.7.2.9","Title":"Misc Small Tools and Helper Functions for Other Code of J.\nButtlar","Description":"Collection of several tools and helper functions used across the other packages of J. Buttlar ('ncdf.tools' and 'spectral.methods'). ","Published":"2015-06-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Jdmbs","Version":"1.0","Title":"Monte Carlo Option Pricing Algorithm for Jump Diffusion Model\nwith Correlation Companies","Description":"The Black-Scholes model [Black (1973) ] is important for calculating option premiums in the stock market, and a variety of improved models have been studied. This package provides functions to calculate the normal and new Jump Diffusion Models [Kou (2002) ] by the Monte Carlo method. 
This package can be used for Computational Finance.","Published":"2017-02-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"jetset","Version":"3.4.0","Title":"One-to-One Gene-Probeset Mapping for Affymetrix Human\nMicroarrays","Description":"On Affymetrix gene expression microarrays, a single gene may be measured by multiple probe sets. This can present a mild conundrum when attempting to evaluate a gene \"signature\" that is defined by gene names rather than by specific probe sets. This package provides a one-to-one mapping from gene to \"best\" probe set for four Affymetrix human gene expression microarrays: hgu95av2, hgu133a, hgu133plus2, and u133x3p. This package also includes the pre-calculated probe set quality scores that were used to define the mapping.","Published":"2017-04-05","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"JGEE","Version":"1.1","Title":"Joint Generalized Estimating Equation Solver","Description":"Fits two different joint generalized estimating equation models to multivariate longitudinal data.","Published":"2015-11-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"JGL","Version":"2.3","Title":"Performs the Joint Graphical Lasso for sparse inverse covariance\nestimation on multiple classes","Description":"The Joint Graphical Lasso is a generalized method for\n estimating Gaussian graphical models/ sparse inverse covariance\n matrices/ biological networks on multiple classes of data. We\n solve JGL under two penalty functions: The Fused Graphical\n Lasso (FGL), which employs a fused penalty to encourage inverse\n covariance matrices to be similar across classes, and the Group\n Graphical Lasso (GGL), which encourages similar network\n structure between classes. 
FGL is recommended over GGL for\n most applications.","Published":"2013-04-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"JGR","Version":"1.7-16","Title":"JGR - Java GUI for R","Description":"Java GUI for R - cross-platform, universal and unified Graphical User Interface for R. For full functionality on Windows and Mac OS X JGR requires a start application which depends on your OS. This can be downloaded from JGR website: http://rforge.net/JGR/","Published":"2013-12-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"jiebaR","Version":"0.9.1","Title":"Chinese Text Segmentation","Description":"Chinese text segmentation, keyword extraction and speech tagging\n for R.","Published":"2016-09-28","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"jiebaRD","Version":"0.1","Title":"Chinese Text Segmentation Data for jiebaR Package","Description":"jiebaR is a package for Chinese text segmentation, keyword extraction\n and speech tagging. This package provides the data files required by jiebaR.","Published":"2015-01-04","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"JM","Version":"1.4-5","Title":"Joint Modeling of Longitudinal and Survival Data","Description":"Shared parameter models for the joint modeling of longitudinal and time-to-event data. ","Published":"2016-09-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"JMbayes","Version":"0.8-0","Title":"Joint Modeling of Longitudinal and Time-to-Event Data under a\nBayesian Approach","Description":"Shared parameter models for the joint modeling of longitudinal and time-to-event data using MCMC. ","Published":"2016-08-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"jmcm","Version":"0.1.7.0","Title":"Joint Mean-Covariance Models using 'Armadillo' and S4","Description":"Fit joint mean-covariance models for longitudinal data. The models\n and their components are represented using S4 classes and methods. 
The core\n computational algorithms are implemented using the 'Armadillo' C++ library\n for numerical linear algebra and 'RcppArmadillo' glue.","Published":"2016-10-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"JMdesign","Version":"1.1","Title":"Joint Modeling of Longitudinal and Survival Data - Power\nCalculation","Description":"Performs power calculations for joint modeling of longitudinal and survival data with k-th order trajectories when the variance-covariance matrix, Sigma_theta, is unknown.","Published":"2014-10-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"jmetrik","Version":"1.0","Title":"Tools for Interacting with 'jMetrik'","Description":"The main purpose of this package is to make it easy for useRs to interact with 'jMetrik', an open source application for psychometric analysis. For example, it allows useRs to write data frames to file in a format that can be used by 'jMetrik'. It also allows useRs to read *.jmetrik files (e.g. output from an analysis) for follow-up analysis in R. The *.jmetrik format is a flat file that includes a multiline header and the data as comma separated values. The header includes metadata about the file and one row per variable with the following information in each row: variable name, data type, item scoring, special data codes, and variable label. ","Published":"2015-03-15","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"Jmisc","Version":"0.3.1","Title":"Julian Miscellaneous Function","Description":"Some handy functions in R.","Published":"2014-03-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"jmotif","Version":"1.0.2.900","Title":"Time Series Analysis Toolkit Based on Symbolic Aggregate\nDiscretization, i.e. 
SAX","Description":"Implements time series z-normalization, SAX, HOT-SAX, VSM, SAX-VSM, RePair, and RRA\n algorithms facilitating time series motif (i.e., recurrent pattern), discord (i.e., anomaly),\n and characteristic pattern discovery along with interpretable time series classification.","Published":"2016-06-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"jmuOutlier","Version":"1.3","Title":"Permutation Tests for Nonparametric Statistics","Description":"Performs a permutation test on the difference between two location parameters, a permutation correlation test, a permutation F-test, the Siegel-Tukey test, and a ratio mean deviance test. Also performs some graphing techniques, such as for confidence intervals, vector addition, and Fourier analysis; and includes functions related to the Laplace (double exponential) and triangular distributions. Performs power calculations for the binomial test.","Published":"2017-02-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"jmv","Version":"0.7.3.5","Title":"The 'jamovi' Analyses","Description":"'jamovi' is a rich graphical statistics program providing many\n common statistical tests such as t-tests, ANOVAs, correlation matrices,\n proportion tests, contingency tables, etc. (see for\n more information). This package makes all of the basic 'jamovi' analyses\n available to the R user.","Published":"2017-06-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"jmvcore","Version":"0.5.5","Title":"Dependencies for the 'jamovi' Framework","Description":"'jamovi' is a framework for creating rich interactive statistical\n analyses (see for more information). This package\n represents the core libraries which jamovi analyses written in R depend\n upon.","Published":"2017-05-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"jocre","Version":"0.3.3","Title":"Joint Confidence Regions","Description":"Computing and plotting joint confidence regions and intervals. 
Regions include classical ellipsoids, minimum-volume or minimum-length regions, and an empirical Bayes region. Intervals include the TOST procedure with ordinary or expanded intervals and a fixed-sequence procedure. Such regions and intervals are useful, e.g., for the assessment of multi-parameter (bio-)equivalence. Joint confidence regions for the mean and variance of a normal distribution are available as well.","Published":"2017-05-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Johnson","Version":"1.4","Title":"Johnson Transformation","Description":"RE.Johnson performs the Johnson Transformation to improve normality.","Published":"2014-04-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"JohnsonDistribution","Version":"0.24","Title":"Johnson Distribution","Description":"Johnson curve distributions. Implementation of AS100 and\n AS99.","Published":"2012-04-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"joineR","Version":"1.2.0","Title":"Joint Modelling of Repeated Measurements and Time-to-Event Data","Description":"Analysis of repeated measurements and time-to-event data via random\n effects joint models. Fits the joint models proposed by Henderson and colleagues\n (single event time) and by Williamson and\n colleagues (2008) (competing risks event times) to a\n single continuous repeated measure. The time-to-event data is modelled using a \n (cause-specific) Cox proportional hazards regression model with time-varying \n covariates. The longitudinal outcome is modelled using a linear mixed effects\n model. The association is captured by a latent Gaussian process. The model is \n estimated using an Expectation Maximization algorithm. Some plotting functions \n and the variogram are also included. 
This project is funded by the Medical \n Research Council (Grant numbers G0400615 and MR/M013227/1).","Published":"2017-05-19","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"joineRML","Version":"0.2.2","Title":"Joint Modelling of Multivariate Longitudinal Data and\nTime-to-Event Outcomes","Description":"Fits the joint model proposed by Henderson and colleagues (2000) \n , but extended to the case of multiple \n continuous longitudinal measures. The time-to-event data is modelled using a \n Cox proportional hazards regression model with time-varying covariates. The \n multiple longitudinal outcomes are modelled using a multivariate version of the \n Laird and Ware linear mixed model. The association is captured by a multivariate\n latent Gaussian process. The model is estimated using a Monte Carlo Expectation \n Maximization algorithm. This project is funded by the Medical Research Council \n (Grant number MR/M013227/1).","Published":"2017-05-01","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"joint.Cox","Version":"2.12","Title":"Penalized Likelihood Estimation and Dynamic Prediction under the\nJoint Frailty-Copula Models Between Tumour Progression and\nDeath for Meta-Analysis","Description":"Perform the Cox regression and dynamic prediction methods under\n the joint frailty-copula model between tumour progression and death for meta-analysis.\n A penalized likelihood is employed for estimating model parameters, where the baseline hazard functions are approximated by smoothing splines.\n The methods are applicable for meta-analytic data combining several studies.\n The methods can analyze data having information on both terminal event time (e.g., time-to-death) and non-terminal event time (e.g., time-to-tumour progression).\n See Emura et al. (2015) and\n Emura et al. 
(2017) for details.\n Survival data from ovarian cancer patients are also available.","Published":"2017-05-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"jointDiag","Version":"0.2","Title":"Joint Approximate Diagonalization of a set of square matrices","Description":"Different algorithms to perform approximate joint\n diagonalization of a finite set of square matrices. Depending\n on the algorithm, an orthogonal or a non-orthogonal diagonalizer is\n found. These algorithms are particularly useful in the context\n of blind source separation.","Published":"2009-12-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"JointModel","Version":"1.0","Title":"Semiparametric Joint Models for Longitudinal and Counting\nProcesses","Description":"Joint fit of a semiparametric regression model for longitudinal responses and a semiparametric transformation model for time-to-event data. ","Published":"2016-06-09","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"jointNmix","Version":"1.0","Title":"Joint N-Mixture Models for Site-Associated Species","Description":"Fits univariate and joint N-mixture models for data on two unmarked site-associated species. Includes functions to estimate latent abundances through empirical Bayes methods.","Published":"2016-11-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"jointPm","Version":"2.3.1","Title":"Risk estimation using the joint probability method","Description":"A bivariate integration method to estimate risk caused by two extreme and dependent forcing variables.","Published":"2014-01-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"JointRegBC","Version":"0.1.1","Title":"Joint Modelling of Mixed Correlated Binary and Continuous\nResponses : A Latent Variable Approach","Description":"A joint regression model for mixed correlated binary and\n continuous responses is presented. In this model, the binary\n response can depend on the continuous response. 
With this\n model, the dependence between responses can be taken into\n account by the correlation between errors in the models for\n binary and continuous responses.","Published":"2013-06-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"joinXL","Version":"1.0.1","Title":"Perform Joins or Minus Queries on 'Excel' Files","Description":"Performs joins and minus queries on 'Excel' files:\n fulljoinXL() merges all rows of 2 'Excel' files based upon a common column in the files;\n innerjoinXL() merges all rows from the base file and the join file when the join condition is met;\n leftjoinXL() merges all rows from the base file, and all rows from the join file\n if the join condition is met;\n rightjoinXL() merges all rows from the join file, and all rows from the base file if the join\n condition is met;\n minusXL() performs 2 operations, source-minus-target and target-minus-source.\n If the files are identical all output files will be empty.\n Choose two 'Excel' files via a dialog box, and then follow prompts at the console to\n choose a base or source file and columns to merge or minus on.","Published":"2016-09-19","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"jomo","Version":"2.4-1","Title":"Multilevel Joint Modelling Multiple Imputation","Description":"Similarly to Schafer's package 'pan', 'jomo' is a package for multilevel joint modelling multiple imputation.\n Novel aspects of 'jomo' are the possibility of handling binary and categorical data through latent normal variables, the option to use cluster-specific covariance matrices and to impute compatibly with the substantive model. ","Published":"2017-06-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"JOP","Version":"3.6","Title":"Joint Optimization Plot","Description":"JOP is a tool for simultaneous optimization of multiple\n responses and visualization of the results. 
The visualization\n is done by the joint optimization plot introduced by Kuhnt and\n Erdbruegge (2004).","Published":"2013-08-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"JoSAE","Version":"0.2.3","Title":"Functions for some Unit-Level Small Area Estimators and their\nVariances","Description":"Implementation of some unit level EBLUP and GREG estimators as well as the estimate of their variances to further document the publication of Breidenbach and Astrup (2011). The vignette further explains the use of the implemented functions.","Published":"2015-08-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"jose","Version":"0.1","Title":"Javascript Object Signing and Encryption","Description":"A collection of specifications to securely transfer claims such as\n authorization information between parties. A JSON Web Token (JWT) contains\n claims used by systems to apply access control rules to its resources. One\n potential use case of the JWT is authentication and authorization for a\n system that exposes resources through OAuth 2.0.","Published":"2016-05-27","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"JOUSBoost","Version":"2.0.0","Title":"Implements Under/Oversampling for Probability Estimation","Description":"Implements under/oversampling for probability estimation. To be\n used with machine learning methods such as adaBoost, random forests, etc.","Published":"2017-05-23","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"jpeg","Version":"0.1-8","Title":"Read and write JPEG images","Description":"This package provides an easy and simple way to read, write and display bitmap images stored in the JPEG format. 
It can read and write both files and in-memory raw vectors.","Published":"2014-01-23","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"JPEN","Version":"1.0","Title":"Covariance and Inverse Covariance Matrix Estimation Using Joint\nPenalty","Description":"A Joint PENalty Estimation of Covariance and Inverse Covariance Matrices.","Published":"2015-09-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"jpmesh","Version":"0.3.0","Title":"Utilities for Japanese Mesh Code","Description":"Helpful functions for using mesh code (80km to 250m) data in Japan. Visualize mesh code using 'ggplot2' and 'leaflet', etc.","Published":"2016-11-12","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"JPSurv","Version":"1.0.1","Title":"Methods for population-based cancer survival analysis","Description":"Functions, methods, and datasets for cancer survival\n analysis, including the proportional hazard relative survival\n model, the join point relative survival model.","Published":"2012-03-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"jqr","Version":"0.2.4","Title":"Client for 'jq', a JSON Processor","Description":"Client for 'jq', a JSON processor (), written\n in C. 
'jq' allows the following with JSON data: index into, parse, do calculations,\n cut up and filter, change key names and values, perform conditionals and comparisons,\n and more.","Published":"2016-07-29","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"JRF","Version":"0.1-4","Title":"Joint Random Forest (JRF) for the Simultaneous Estimation of\nMultiple Related Networks","Description":"Simultaneous estimation of multiple related networks.","Published":"2016-10-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"jrich","Version":"0.60-35","Title":"Jack-Knife Support for Evolutionary Distinctiveness Indices I\nand W","Description":"These functions calculate the taxonomic measures presented in Miranda-Esquivel (2016). \n The package introduces Jack-knife resampling in evolutionary distinctiveness prioritization analysis, \n as a way to evaluate the support of the ranking in area prioritization, and the persistence of a given area \n in a conservation analysis.\n The algorithm is described in: Miranda-Esquivel, D (2016) .","Published":"2016-03-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"jrvFinance","Version":"1.03","Title":"Basic Finance; NPV/IRR/Annuities/Bond-Pricing; Black Scholes","Description":"Implements the basic financial analysis\n functions similar to (but not identical to) what\n is available in most spreadsheet software. This\n includes finding the IRR and NPV of regularly\n spaced cash flows and annuities. Bond pricing and\n YTM calculations are included. 
In addition, Black\n Scholes option pricing and Greeks are also\n provided.","Published":"2015-10-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"js","Version":"0.2","Title":"Tools for Working with JavaScript in R","Description":"A set of utility functions for working with JavaScript in R.\n Currently includes functions to compile, validate, reformat, optimize\n and analyze JavaScript code.","Published":"2015-02-24","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"JSM","Version":"0.1.0","Title":"Semiparametric Joint Modeling of Survival and Longitudinal Data","Description":"Maximum likelihood estimation for the semiparametric joint modeling of \n survival and longitudinal data. Log-transforms and PRES procedures.","Published":"2016-08-25","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"jSonarR","Version":"1.1.1","Title":"jSonar Analytics Platform API for R","Description":"This package enables users to access MongoDB by running queries\n and returning their results in R data frames. Usually, data in MongoDB is\n only available in the form of a JSON document. jSonarR uses data\n processing and conversion capabilities in the jSonar Analytics Platform\n and the JSON Studio Gateway (http://www.jsonstudio.com), to convert it to\n a tabular format which is easy to use with existing R packages.","Published":"2014-09-26","License":"AGPL-3","snapshot_date":"2017-06-23"} {"Package":"jsonld","Version":"1.2","Title":"JSON for Linking Data","Description":"JSON-LD is a light-weight syntax for expressing linked data. It is primarily\n intended for web-based programming environments, interoperable web services and for \n storing linked data in JSON-based databases. 
This package provides bindings to the \n JavaScript library for converting, expanding and compacting JSON-LD documents.","Published":"2017-04-11","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"jsonlite","Version":"1.5","Title":"A Robust, High Performance JSON Parser and Generator for R","Description":"A fast JSON parser and generator optimized for statistical data\n and the web. Started out as a fork of 'RJSONIO', but has been completely\n rewritten in recent versions. The package offers flexible, robust, high\n performance tools for working with JSON in R and is particularly powerful\n for building pipelines and interacting with a web API. The implementation is\n based on the mapping described in the vignette (Ooms, 2014). In addition to\n converting JSON data from/to R objects, 'jsonlite' contains functions to\n stream, validate, and prettify JSON data. The unit tests included with the\n package verify that all edge cases are encoded and decoded consistently for\n use with dynamic data in systems and applications.","Published":"2017-06-01","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"jsonvalidate","Version":"1.0.0","Title":"Validate 'JSON'","Description":"Uses the node library 'is-my-json-valid' to validate 'JSON' against\n a 'JSON' schema.","Published":"2016-06-13","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"jtGWAS","Version":"1.5","Title":"Efficient Jonckheere-Terpstra Test Statistics","Description":"The core of this 'Rcpp' based package is a function to compute standardized Jonckheere-Terpstra test statistics for large numbers of dependent and independent variables, e.g., genome-wide analysis. It implements 'OpenMP', allowing the option of computing on multiple threads. 
Supporting functions are also provided to calculate p-values and summarize results.","Published":"2017-02-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"jtools","Version":"0.4.5","Title":"Analysis and Presentation of Social Scientific Data","Description":"This is a collection of tools that the author (Jacob) has written\n for the purpose of more efficiently understanding and sharing the results of\n (primarily) regression analyses. There are a number of functions focused\n specifically on the interpretation and presentation of interactions in linear\n models. Just about everything supports models from the survey package.","Published":"2017-05-24","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"jtrans","Version":"0.2.1","Title":"Johnson Transformation for Normality","Description":"Transforming univariate non-normal data to normality using Johnson \n families of distributions. The Johnson family is a comprehensive distribution \n family that accommodates many kinds of non-normal distributions. A range of \n distributions with various parameters will be fitted, and the corresponding \n p-values under a user-specified normality test will be given. The final \n transformation will be the one with the largest p-value under the given \n normality test.","Published":"2015-03-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"jug","Version":"0.1.7","Title":"A Simple Web Framework for R","Description":"jug is a web framework aimed at easily building APIs. 
It is mostly aimed at exposing R functions, \n models and visualizations to third parties by way of http requests.","Published":"2017-04-13","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"Julia","Version":"1.1","Title":"Fractal Image Data Generator","Description":"Generates image data for fractals (Julia and Mandelbrot\n sets) on the complex plane in the given region and resolution.","Published":"2014-11-25","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"JumpTest","Version":"0.0.1","Title":"Financial Jump Detection","Description":"A fast simulation of a stochastic volatility model, with jump tests, p-value pooling, and FDR adjustments.","Published":"2017-06-23","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"junr","Version":"0.1.1","Title":"Access Open Data Through the Junar API","Description":"\n The Junar API is a commercial platform to organize and publish data\n . It has been used in a number of national and local\n government Open Data initiatives in Latin America and the USA. This package\n is a wrapper to make it easier to access data made public through the Junar\n API.","Published":"2016-05-14","License":"MIT + file LICENCE","snapshot_date":"2017-06-23"} {"Package":"jvnVaR","Version":"1.0","Title":"Value at Risk","Description":"Many methods to compute, predict, and back-test VaR. 
For more detail, see the report: Value at Risk .","Published":"2015-11-18","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"JWileymisc","Version":"0.2.1","Title":"Miscellaneous Utilities and Functions","Description":"A collection of miscellaneous tools and functions,\n such as tools to generate descriptive statistics tables,\n format output, visualize relations among variables or check\n distributions.","Published":"2016-09-01","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"jwutil","Version":"1.1.1","Title":"Tools for Data Manipulation and Testing","Description":"This is a set of simple utilities for various data manipulation and testing tasks.\n The goal is to use base tools well, without bringing in many\n dependencies. Main areas of interest are semi-automated data frame manipulation, such as\n converting factors into multiple binary indicator columns. There are testing\n functions which provide 'testthat' expectations to permute arguments to\n function calls. There are functions and data to test extreme numbers, dates,\n and bad input of various kinds which should allow testing failure and corner\n cases, which can be used for fuzzing your functions. The test suite has many examples of usage.","Published":"2016-10-18","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"kableExtra","Version":"0.2.1","Title":"Construct Complex Table with 'kable' and Pipe Syntax","Description":"A collection of functions to help build complex HTML or 'LaTeX' \n tables using 'kable()' from 'knitr' and the piping syntax from 'magrittr'. \n Function 'kable()' is a lightweight table generator coming from 'knitr'. \n This package simplifies the way to manipulate the HTML or 'LaTeX' codes \n generated by 'kable()' and allows users to construct complex tables\n and customize styles using a readable syntax. 
","Published":"2017-05-25","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"kamila","Version":"0.1.1.1","Title":"Methods for Clustering Mixed-Type Data","Description":"Implements methods for clustering mixed-type data,\n specifically combinations of continuous and nominal data. Special attention\n is paid to the often-overlooked problem of equitably balancing the\n contribution of the continuous and categorical variables. This package\n implements KAMILA clustering, a novel method for clustering\n mixed-type data in the spirit of k-means clustering. It does not require\n dummy coding of variables, and is efficient enough to scale to rather large\n data sets. Also implemented is Modha-Spangler clustering, which uses a\n brute-force strategy to maximize the cluster separation simultaneously in the\n continuous and categorical variables.","Published":"2016-08-19","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"kangar00","Version":"1.0","Title":"Kernel Approaches for Nonlinear Genetic Association Regression","Description":"Methods to extract information on pathways, genes and SNPs from\n online databases. It provides functions for data preparation and evaluation\n of genetic influence on a binary outcome using the logistic kernel machine\n test (LKMT). Three different kernel functions are offered to analyze genotype\n information in this variance component test: A linear kernel, a size-adjusted\n kernel and a network based kernel.","Published":"2017-04-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"KANT","Version":"2.0","Title":"Package to Identify and Sort Overexpressed Genes","Description":"Identify and sort overexpressed genes associated with transmembrane proteins in an Affymetrix expression set or any other microarray experiment results. 
","Published":"2014-08-26","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"kantorovich","Version":"2.0.0","Title":"Kantorovich Distance Between Probability Measures","Description":"Computes the Kantorovich distance between two probability measures on a finite set.","Published":"2016-05-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"KappaGUI","Version":"1.2.1","Title":"GUI for Cohen's and Fleiss' Kappa","Description":"Offers a complete and interactive GUI to work out Cohen's and Fleiss' Kappa.","Published":"2017-06-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"kappalab","Version":"0.4-7","Title":"Non-Additive Measure and Integral Manipulation Functions","Description":"S4 tool box for capacity (or non-additive measure, fuzzy measure) and integral manipulation in a finite setting. It contains routines for handling various types of set functions such as games or capacities. It can be used to compute several non-additive integrals: the Choquet integral, the Sugeno integral, and the symmetric and asymmetric Choquet integrals. An analysis of capacities in terms of decision behavior can be performed through the computation of various indices such as the Shapley value, the interaction index, the orness degree, etc. The well-known Möbius transform, as well as other equivalent representations of set functions can also be computed. Kappalab further contains seven capacity identification routines: three least squares based approaches, a method based on linear programming, a maximum entropy like method based on variance minimization, a minimum distance approach and an unsupervised approach based on parametric entropies. 
The functions contained in Kappalab can, for instance, be used in the framework of multicriteria decision making or cooperative game theory.","Published":"2015-07-18","License":"CeCILL","snapshot_date":"2017-06-23"} {"Package":"kappaSize","Version":"1.1","Title":"Sample Size Estimation Functions for Studies of Interobserver\nAgreement","Description":"This package contains basic tools for\n sample size estimation in studies of interobserver/interrater\n agreement (reliability). This package contains sample size\n estimation functions for both the power-based and confidence\n interval-based methods, with binary or multinomial outcomes and\n two through six raters.","Published":"2013-03-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"KappaV","Version":"0.3","Title":"Calculates \"vectorial Kappa\", an index of congruence between\npatchy mosaics","Description":"This package quantifies the congruence between two patchy\n mosaics or landscapes. This \"vectorial Kappa\" approach extends the\n principle of Cohen's Kappa index by calculating areas of intersected\n patches between two mosaics rather than agreement between pixels. It\n provides an exact alternative for patchy mosaics when a Kappa index is\n needed.","Published":"2014-05-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"kaps","Version":"1.0.2","Title":"K-Adaptive Partitioning for Survival data","Description":"This package provides some routines to conduct the K-adaptive partitioning (kaps) algorithm for survival data. The function kaps implements this algorithm.","Published":"2014-11-01","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"karaoke","Version":"1.0","Title":"Remove Vocals from a Song","Description":"Attempts to remove vocals from a stereo '.wav' recording of a song. 
","Published":"2016-03-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"KarsTS","Version":"1.1","Title":"An Interface for Karstic Time Series","Description":"An R graphical user interface for karstic time series, based on the 'tcltk' package. Karstic research typically includes CO2 and Rn concentrations and microclimatic measurements. Many of these time series have a strong non-linear behavior. Gaps are often a significant problem because caves are aggressive environments for the instruments. 'KarsTS' provides linear and non-linear analysis and filling methods, as well as tools to easily manipulate time series and gap sets.","Published":"2017-06-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"KATforDCEMRI","Version":"0.740","Title":"Kinetic analysis and visualization of DCE-MRI data","Description":"Package for kinetic analysis of longitudinal voxel-wise Dynamic Contrast Enhanced MRI data. Includes tools for visualization and exploration of voxel-wise parametric maps.","Published":"2014-02-13","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"kcirt","Version":"0.6.0","Title":"k-Cube Thurstonian IRT Models","Description":"Create, simulate, fit, and solve k-Cube Thurstonian IRT models.","Published":"2014-04-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"kdecopula","Version":"0.9.0","Title":"Kernel Smoothing for Bivariate Copula Densities","Description":"Provides fast implementations of kernel smoothing techniques for\n bivariate copula densities, in particular density estimation and resampling.","Published":"2017-05-16","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"kdetrees","Version":"0.1.5","Title":"Nonparametric method for identifying discordant phylogenetic\ntrees","Description":"A non-parametric method for identifying potential\n outlying observations in a collection of phylogenetic trees based\n on the methods of Owen and Provan (2011). 
Such discordant trees\n may indicate problems with sequence annotation or tree\n reconstruction, or they may represent interesting biological\n phenomena, such as horizontal gene transfers.","Published":"2014-05-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"kdevine","Version":"0.4.1","Title":"Multivariate Kernel Density Estimation with Vine Copulas","Description":"Implements the vine copula based kernel density estimator of\n Nagler and Czado (2016) . The estimator does\n not suffer from the curse of dimensionality and is therefore well suited for\n high-dimensional applications.","Published":"2017-05-19","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"kedd","Version":"1.0.3","Title":"Kernel Estimator and Bandwidth Selection for Density and Its\nDerivatives","Description":"Smoothing techniques and computing bandwidth selectors of the nth derivative of a probability density for one-dimensional data.","Published":"2015-10-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"keep","Version":"1.0","Title":"Arrays with Better Control over Dimension Dropping","Description":"Provides arrays with flexible control over dimension dropping when subscripting.","Published":"2015-12-16","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"kehra","Version":"0.1","Title":"Collect, Assemble and Model Air Pollution, Weather and Health\nData","Description":"Collection of utility functions used in the KEHRA project (see http://www.brunel.ac.uk/ife/britishcouncil). 
It refers to the multidimensional analysis of air pollution, weather and health data.","Published":"2016-06-10","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"kelvin","Version":"2.0-0","Title":"Calculate Solutions to the Kelvin Differential Equation using\nBessel Functions","Description":"Uses Bessel functions to calculate the \n fundamental and complementary analytic solutions to the\n Kelvin differential equation.","Published":"2015-09-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Kendall","Version":"2.2","Title":"Kendall rank correlation and Mann-Kendall trend test","Description":"Computes the Kendall rank correlation and Mann-Kendall\n trend test. See documentation for use of block bootstrap when\n there is autocorrelation.","Published":"2011-05-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"KENDL","Version":"1.1","Title":"Kernel-Smoothed Nonparametric Methods for Environmental Exposure\nData Subject to Detection Limits","Description":"Calculate the kernel-smoothed nonparametric estimator for the exposure distribution in presence of detection limits. ","Published":"2017-04-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"kequate","Version":"1.6.1","Title":"The Kernel Method of Test Equating","Description":"Implements the kernel method of test equating using the CB, EG, SG, NEAT CE/PSE and NEC designs, supporting gaussian, logistic and uniform kernels and unsmoothed and pre-smoothed input data.","Published":"2017-03-11","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"kerasR","Version":"0.6.1","Title":"R Interface to the Keras Deep Learning Library","Description":"Provides a consistent interface to the 'Keras' Deep Learning Library\n directly from within R. 'Keras' provides specifications for describing dense\n neural networks, convolution neural networks (CNN) and recurrent neural networks\n (RNN) running on top of either 'TensorFlow' or 'Theano'. 
Type conversions between\n Python and R are automatically handled correctly, even when the default\n choices would otherwise lead to errors. Includes complete R documentation\n and many working examples.","Published":"2017-06-01","License":"LGPL-2","snapshot_date":"2017-06-23"} {"Package":"kerdiest","Version":"1.2","Title":"Nonparametric kernel estimation of the distribution function.\nBandwidth selection and estimation of related functions","Description":"Nonparametric kernel distribution function estimation is\n performed. Three automatic bandwidth selection methods for\n nonparametric kernel distribution function estimation are\n implemented: the plug-in of Altman and Leger, the plug-in of\n Polansky and Baker, and the modified cross-validation of\n Bowman, Hall and Prvan. The exceedance function, the mean\n return period and the return level are also computed.","Published":"2012-08-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"KERE","Version":"1.0.0","Title":"Expectile Regression in Reproducing Kernel Hilbert Space","Description":"An efficient algorithm inspired by majorization-minimization principle for solving the entire solution path of a flexible nonparametric expectile regression estimator constructed in a reproducing kernel Hilbert space.","Published":"2015-08-28","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"kergp","Version":"0.2.0","Title":"Gaussian Process Laboratory","Description":"Gaussian Process models with customised covariance kernels.","Published":"2015-12-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"kernDeepStackNet","Version":"2.0.2","Title":"Kernel Deep Stacking Networks","Description":"Contains functions for estimation and model selection of kernel\n deep stacking networks.","Published":"2017-05-31","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"kerndwd","Version":"2.0.0","Title":"Distance Weighted Discrimination (DWD) and Kernel Methods","Description":"A novel 
implementation that solves the linear distance weighted discrimination and the kernel distance weighted discrimination.","Published":"2017-05-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"kernelboot","Version":"0.1.1","Title":"Smoothed Bootstrap and Random Generation from Kernel Densities","Description":"Smoothed bootstrap and functions for random generation from\n univariate and multivariate kernel densities. It does not\n estimate kernel densities.","Published":"2017-06-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"kernelFactory","Version":"0.3.0","Title":"Kernel Factory: An Ensemble of Kernel Machines","Description":"Binary classification based on an ensemble of kernel machines (\"Ballings, M. and Van den Poel, D. (2013), Kernel Factory: An Ensemble of Kernel Machines. Expert Systems With Applications, 40(8), 2904-2913\"). Kernel factory is an ensemble method where each base classifier (random forest) is fit on the kernel matrix of a subset of the training data.","Published":"2015-09-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Kernelheaping","Version":"1.6","Title":"Kernel Density Estimation for Heaped and Rounded Data","Description":"In self-reported or anonymised data the user often encounters\n heaped data, i.e. data which are rounded (to a possibly different degree\n of coarseness). While this is mostly a minor problem in parametric density\n estimation the bias can be very large for non-parametric methods such as kernel\n density estimation. This package implements a partly Bayesian algorithm treating\n the true unknown values as additional parameters and estimates the rounding\n parameters to give a corrected kernel density estimate. It supports various\n standard bandwidth selection methods. Varying rounding probabilities (depending\n on the true value) and asymmetric rounding is estimable as well. 
Additionally,\n bivariate non-parametric density estimation for rounded data as well as data aggregated on areas is supported.","Published":"2016-04-16","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"KernelKnn","Version":"1.0.5","Title":"Kernel k Nearest Neighbors","Description":"Extends the simple k-nearest neighbors algorithm by incorporating numerous kernel functions and a variety of distance metrics. The package takes advantage of 'RcppArmadillo' to speed up the calculation of distances between observations.","Published":"2017-02-21","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"kernlab","Version":"0.9-25","Title":"Kernel-Based Machine Learning Lab","Description":"Kernel-based machine learning methods for classification,\n regression, clustering, novelty detection, quantile regression\n and dimensionality reduction. Among other methods 'kernlab'\n includes Support Vector Machines, Spectral Clustering, Kernel\n PCA, Gaussian Processes and a QP solver.","Published":"2016-10-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"kernplus","Version":"0.1.0","Title":"A Kernel Regression-Based Multidimensional Wind Turbine Power\nCurve","Description":"Provides wind energy practitioners with an effective machine learning-based\n tool that estimates a multivariate power curve and predicts the wind power output\n for a specific environmental condition.","Published":"2017-06-07","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"kernscr","Version":"1.0.3","Title":"Kernel Machine Score Test for Semi-Competing Risks","Description":"Kernel Machine Score Test for Pathway Analysis in the Presence of Semi-Competing Risks.","Published":"2016-06-29","License":"GPL-2 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"KernSmooth","Version":"2.23-15","Title":"Functions for Kernel Smoothing Supporting Wand & Jones (1995)","Description":"Functions for kernel smoothing (and density estimation)\n corresponding to 
the book: \n Wand, M.P. and Jones, M.C. (1995) \"Kernel Smoothing\".","Published":"2015-06-29","License":"Unlimited","snapshot_date":"2017-06-23"} {"Package":"KernSmoothIRT","Version":"6.1","Title":"Nonparametric Item Response Theory","Description":"This package fits nonparametric item and option characteristic curves using kernel smoothing. It allows for optimal selection of the smoothing bandwidth using cross-validation and a variety of exploratory plotting tools.","Published":"2014-06-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"keyplayer","Version":"1.0.3","Title":"Locating Key Players in Social Networks","Description":"Computes group centrality scores and identifies the most central group of players in a network.","Published":"2016-04-16","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"keypress","Version":"1.1.1","Title":"Wait for a Key Press in a Terminal","Description":"Wait for a single key press at the 'R' prompt.\n This works in terminals, but does not currently work\n in the 'Windows' 'GUI', the 'OS X' 'GUI' ('R.app'),\n in 'Emacs' 'ESS', in an 'Emacs' shell buffer or in\n 'R Studio'. In these cases 'keypress' stops with an\n error message.","Published":"2017-03-02","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"keyringr","Version":"0.4.0","Title":"Decrypt Passwords from Gnome Keyring, Windows Data Protection\nAPI and macOS Keychain","Description":"Decrypts passwords stored in the Gnome Keyring, macOS Keychain and\n strings encrypted with the Windows Data Protection API.","Published":"2017-02-17","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"KFAS","Version":"1.2.8","Title":"Kalman Filter and Smoother for Exponential Family State Space\nModels","Description":"State space modelling is an efficient and flexible method for \n statistical inference of a broad class of time series and other data. 
KFAS \n includes fast functions for Kalman filtering, smoothing, forecasting, and \n simulation of multivariate exponential family state space models, with \n observations from Gaussian, Poisson, binomial, negative binomial, and gamma \n distributions.","Published":"2017-06-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"kfigr","Version":"1.2","Title":"Integrated Code Chunk Anchoring and Referencing for R Markdown\nDocuments","Description":"A streamlined cross-referencing system for R Markdown documents\n generated with 'knitr'. R Markdown is an authoring format for generating\n dynamic content from R. 'kfigr' provides a hook for anchoring code\n chunks and a function to cross-reference document elements generated from\n said chunks, e.g. figures and tables.","Published":"2015-07-15","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"KFKSDS","Version":"1.6","Title":"Kalman Filter, Smoother and Disturbance Smoother","Description":"Naive implementation of the Kalman filter, smoother and disturbance \n smoother for state space models.","Published":"2015-01-28","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"kgschart","Version":"1.2.3","Title":"KGS Rank Graph Parser","Description":"Restores underlying numeric data from KGS rank graphs (KGS \n is an online platform of the game of go). 
\n A shiny application is also provided.","Published":"2017-05-13","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"kimisc","Version":"0.3","Title":"Kirill's Miscellaneous Functions","Description":"A collection of useful functions not found anywhere else,\n mainly for programming: Pretty intervals, generalized lagged\n differences, checking containment in an interval, creating a\n factor where the levels maintain the order of appearance,\n sampling rows from a data frame, converting seconds from\n midnight to and from H:M:S format, choosing the first non-NA\n value, transposing lists of lists, returning the name of the\n file currently sourced, smart named lists and vectors, and an\n alternative interface to assign().","Published":"2016-02-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"kin.cohort","Version":"0.7","Title":"Analysis of Kin-Cohort Studies","Description":"Analysis of kin-cohort studies. kin.cohort provides estimates of age-specific \n cumulative risk of a disease for carriers and noncarriers of a mutation. The cohorts are\n retrospectively built from relatives of probands for whom the genotype is known. Currently \n the method of moments and marginal maximum likelihood are implemented. Confidence intervals \n are calculated from bootstrap samples.\n Most of the code is a translation from previous 'MATLAB' code by N. Chatterjee.","Published":"2015-08-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"kineticF","Version":"1.0","Title":"Framework for the Analysis of Kinetic Visual Field Data","Description":"Data cleaning, processing, visualisation and analysis for manual (Goldmann) and automated (Octopus 900) kinetic visual field data. 
","Published":"2015-06-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"kinfit","Version":"1.1.14","Title":"Routines for Fitting Kinetic Models to Chemical Degradation Data","Description":"\n The FOCUS Kinetics Report first published in 2006 describes mathematical\n models and recommends statistical methods for the evaluation of \n chemical degradation data. This package implements fitting the kinetic\n models suitable for observations of the decline of a single chemical \n compound (no metabolite formation/decline or multi-compartment kinetics).\n Please note that no warranty is implied for correctness of results or\n fitness for a particular purpose. 'kinfit' is maintained, but not\n actively developed at the moment. Please check the 'mkin' package for an\n actively developed package for kinetic evaluations of degradation data.","Published":"2015-07-04","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"kinship2","Version":"1.6.4","Title":"Pedigree Functions","Description":"Routines to handle family data with a pedigree object. The initial purpose was to create correlation structures that describe \n family relationships such as kinship and identity-by-descent, which\n can be used to model family data in mixed effects models, such as in the \n coxme function. Also includes a tool for pedigree drawing which is \n focused on producing compact layouts without intervention. 
Recent additions\n include utilities to trim the pedigree object with various criteria, and \n kinship for the X chromosome.","Published":"2015-08-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"kirby21.base","Version":"1.5.1.1","Title":"Example Data from the Multi-Modal MRI Reproducibility Resource","Description":"Multi-modal magnetic resonance imaging ('MRI')\n data from the 'Kirby21' reproducibility study\n , including functional\n and structural imaging.","Published":"2017-03-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"kirby21.fmri","Version":"1.5.1","Title":"Example Functional Imaging Data from the Multi-Modal MRI\nReproducibility Resource","Description":"Functional magnetic resonance imaging ('fMRI') data from the\n 'Kirby21' reproducibility study\n .","Published":"2017-03-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"kirby21.t1","Version":"1.5.1","Title":"Example T1 Structural Data from the Multi-Modal MRI\nReproducibility Resource","Description":"Structural T1 magnetic resonance imaging ('MRI') data from the\n 'Kirby21' reproducibility study\n .","Published":"2017-03-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"kissmig","Version":"1.0-3","Title":"a Keep It Simple Species Migration Model","Description":"Simulating species migration and range dynamics under stable or changing environmental conditions based on a simple, raster-based, stochastic migration model. Providing accessibility for considering species migration in niche-based species distribution models. ","Published":"2015-01-30","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"kitagawa","Version":"2.1-0","Title":"Spectral response of water wells to harmonic strain and pressure","Description":"Provides tools to calculate the theoretical hydrodynamic response\n of an aquifer undergoing harmonic straining or pressurization. 
There are\n two classes of models here: (1) for sealed wells, based on the model of\n Kitagawa et al (2011), and (2) for open wells, based on the models of\n Cooper et al (1965), Hsieh et al (1987), Rojstaczer (1988), and Liu et al\n (1989). These models treat strain (or aquifer head) as an input to the\n physical system, and fluid-pressure (or water height) as the output. The\n applicable frequency band of these models is characteristic of seismic\n waves, atmospheric pressure fluctuations, and solid earth tides.","Published":"2013-10-04","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"kknn","Version":"1.3.1","Title":"Weighted k-Nearest Neighbors","Description":"Weighted k-Nearest Neighbors for Classification, Regression and Clustering.","Published":"2016-03-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"klaR","Version":"0.6-12","Title":"Classification and visualization","Description":"Miscellaneous functions for classification and visualization\n developed at the Fakultaet Statistik, Technische Universitaet Dortmund.","Published":"2014-08-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"klausuR","Version":"0.12-10","Title":"Multiple Choice Test Evaluation","Description":"A set of functions designed to quickly generate results of a\n multiple choice test. Generates detailed global results, lists for\n anonymous feedback and personalised result feedback (in LaTeX and/or PDF\n format), as well as item statistics like Cronbach's alpha or discriminatory\n power. klausuR also includes a plugin for the R GUI and IDE RKWard,\n providing dialogs for its basic features. To use them, install RKWard from\n http://rkward.sf.net (plugins are detected automatically). 
Due to some\n restrictions on CRAN, the full package sources are only available from the\n project homepage.","Published":"2015-02-07","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"klin","Version":"2007-02-05","Title":"Linear equations with Kronecker structure","Description":"The package implements efficient ways to evaluate and\n solve equations of the form Ax=b, where A is a Kronecker\n product of matrices. Functions to solve least squares problems\n of this type are also included.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"km.ci","Version":"0.5-2","Title":"Confidence intervals for the Kaplan-Meier estimator","Description":"Computes various confidence intervals for the Kaplan-Meier\n estimator, namely: Peto's CI, Rothman CI, CIs based on\n Greenwood's variance, the Thomas and Grunkemeier CI, and the\n simultaneous confidence bands of Nair and of Hall and Wellner.","Published":"2009-08-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"kmc","Version":"0.2-2","Title":"Kaplan-Meier Estimator with Constraints for Right Censored Data\n-- a Recursive Computational Algorithm","Description":"Linearly constrained Kaplan-Meier estimator for right censored data. 
This version does empirical likelihood ratio test with right censored data with linear type constraint and hypothesis testing for coefficients in accelerated failure time model.","Published":"2015-11-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"kmconfband","Version":"0.1","Title":"Kaplan-Meier Simultaneous Confidence Band for the Survivor\nFunction","Description":"Computes and plots an exact nonparametric band for any user-specified level of confidence from a single-sample survfit object","Published":"2013-10-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"kmcudaR","Version":"1.0.0","Title":"'Yinyang' K-Means and K-NN using NVIDIA CUDA","Description":"K-means implementation is based on \"Yinyang K-Means: A Drop-In Replacement of \n\tthe Classic K-Means with Consistent Speedup\". While it introduces some overhead and many \n\tconditional clauses which are bad for CUDA, it still shows 1.6-2x speedup against the Lloyd \n\talgorithm. K-nearest neighbors employ the same triangle inequality idea and require \n\tprecalculated centroids and cluster assignments, similar to the flattened ball tree.","Published":"2017-05-03","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"KMDA","Version":"1.0","Title":"Kernel-Based Metabolite Differential Analysis","Description":"Compute p-values of metabolite differential expression analysis using the kernel-based approach.","Published":"2015-04-01","License":"GNU General Public License","snapshot_date":"2017-06-23"} {"Package":"kmeans.ddR","Version":"0.1.0","Title":"Distributed k-Means for Big Data using 'ddR' API","Description":"Distributed k-means clustering algorithm written using 'ddR' (Distributed Data Structures) API in the 'ddR' package. 
","Published":"2015-11-05","License":"GPL (>= 2) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"kmi","Version":"0.5.2","Title":"Kaplan-Meier Multiple Imputation for the Analysis of Cumulative\nIncidence Functions in the Competing Risks Setting","Description":"The kmi package performs a Kaplan-Meier multiple imputation to recover the missing potential censoring information from competing risks events, so that standard right-censored methods can be applied to the imputed data sets to perform analyses of the cumulative incidence functions.","Published":"2017-01-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Kmisc","Version":"0.5.0","Title":"Kevin Miscellaneous","Description":"This package contains a collection of functions for common data\n extraction and reshaping operations, string manipulation, and\n functions for table and plot generation for R Markdown documents.","Published":"2013-12-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"kml","Version":"2.4.1","Title":"K-Means for Longitudinal Data","Description":"An implementation of k-means specifically designed\n to cluster longitudinal data. It provides facilities to deal with missing\n values, compute several quality criteria (Calinski and Harabasz,\n Ray and Turi, Davies and Bouldin, BIC, ...) and proposes a graphical\n interface for choosing the 'best' number of clusters.","Published":"2016-02-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"kmlcov","Version":"1.0.1","Title":"Clustering longitudinal data using the likelihood as a metric of\ndistance","Description":"'kmlcov' clusters longitudinal data using the likelihood as a\n metric of distance. 
The generalised linear model allows the user to\n introduce covariates with different level effects (2 levels).","Published":"2013-08-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"kmlShape","Version":"0.9.5","Title":"K-Means for Longitudinal Data using Shape-Respecting Distance","Description":"K-means for longitudinal data using shape-respecting distance and shape-respecting means.","Published":"2016-03-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"kmodR","Version":"0.1.0","Title":"K-Means with Simultaneous Outlier Detection","Description":"An implementation of the 'k-means--' algorithm proposed by Chawla and Gionis, 2013 in their paper, \"k-means-- : A unified approach to clustering and outlier detection. SIAM International Conference on Data Mining (SDM13)\", and using 'ordering' described by Howe, 2013 in the thesis, \"Clustering and anomaly detection in tropical cyclones\". Useful for creating (potentially) tighter clusters than standard k-means and simultaneously finding outliers inexpensively in multidimensional space.","Published":"2015-03-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"KMsurv","Version":"0.1-5","Title":"Data sets from Klein and Moeschberger (1997), Survival Analysis","Description":"Data sets and functions for Klein and Moeschberger (1997),\n \"Survival Analysis, Techniques for Censored and Truncated\n Data\", Springer.","Published":"2012-12-03","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"KnapsackSampling","Version":"0.1.0","Title":"Generate Feasible Samples of a Knapsack Problem","Description":"The sampl.mcmc() function creates samples of the feasible region of a knapsack problem with both equality and inequality constraints.","Published":"2016-10-16","License":"GPL (>= 2) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"knitcitations","Version":"1.0.7","Title":"Citations for 'Knitr' Markdown Files","Description":"Provides the ability to create dynamic 
citations\n in which the bibliographic information is pulled from the web rather\n than having to be entered into a local database such as 'bibtex' ahead of\n time. The package is primarily aimed at authoring in the R 'markdown'\n format, and can provide outputs for web-based authoring such as linked\n text for inline citations. Cite using a 'DOI', URL, or\n 'bibtex' file key. See the package URL for details.","Published":"2015-10-28","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"knitLatex","Version":"0.9.0","Title":"'Knitr' Helpers - Mostly Tables","Description":"Provides several helper functions for working with 'knitr' and 'LaTeX'.\n It includes 'xTab' for creating traditional 'LaTeX' tables, 'lTab' for generating\n 'longtable' environments, and 'sTab' for generating a 'supertabular' environment.\n Additionally, this package contains a knitr_setup() function which fixes a\n well-known bug in 'knitr', which distorts the 'results=\"asis\"' command when used\n in conjunction with user-defined commands; and a com command (<>=)\n which renders the output from 'knitr' as a 'LaTeX' command.","Published":"2015-06-27","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"knitr","Version":"1.16","Title":"A General-Purpose Package for Dynamic Report Generation in R","Description":"Provides a general-purpose tool for dynamic report generation in R\n using Literate Programming techniques.","Published":"2017-05-18","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"knitrBootstrap","Version":"1.0.0","Title":"Knitr Bootstrap Framework","Description":"A framework to create Bootstrap 3 HTML reports from knitr\n Rmarkdown.","Published":"2015-12-16","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"knncat","Version":"1.2.2","Title":"Nearest-neighbor Classification with Categorical Variables","Description":"Scale categorical variables in such a way as\n to make NN classification as accurate as possible. 
The code also\n handles continuous variables and prior probabilities, and does\n intelligent variable selection and estimation of both error rates\n and the right number of NN's.","Published":"2015-02-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"knnGarden","Version":"1.0.1","Title":"Multi-distance based k-Nearest Neighbors","Description":"Multi-distance based k-Nearest Neighbors Classification\n with K Threshold Value Check and Same K_i Problem Dealing,\n Missing Observations Filling","Published":"2012-07-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"knnIndep","Version":"2.0","Title":"Independence tests and benchmarks","Description":"This package provides the implementation of an exact formula of the\n ith nearest neighbour distance distribution and implementations of tests of\n independence based on that formula. Furthermore the package provides a\n general framework to benchmark tests of independence.","Published":"2014-09-11","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"knockoff","Version":"0.2.1","Title":"Knockoff Filter for Controlling the False Discovery Rate","Description":"The knockoff filter is a procedure for controlling the false\n discovery rate (FDR) when performing variable selection. For more information,\n see the website below and the accompanying paper.","Published":"2015-04-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"knotR","Version":"1.0-2","Title":"Knot Diagrams using Bezier Curves","Description":"Makes nice pictures of knots using Bezier curves and\n numerical optimization.","Published":"2017-05-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"KnowBR","Version":"1.2","Title":"Discriminating Well Surveyed Spatial Units from Exhaustive\nBiodiversity Databases","Description":"It uses species accumulation curves and diverse estimators to assess, at the same time, the levels of survey coverage in multiple geographic cells of a size defined by the user. 
It also enables the geographical depiction of observed species richness, survey effort and completeness values including a background with administrative areas.","Published":"2017-03-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"kntnr","Version":"0.4.0","Title":"R Client for 'kintone' API","Description":"Retrieve data from 'kintone' () via its API.\n 'kintone' is an enterprise application platform.","Published":"2016-11-17","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"kobe","Version":"1.3.2","Title":"Tools for the provision of scientific fisheries management\nadvice","Description":"The tuna Regional Fisheries Management Organisations (tRFMOs) use a common framework for providing scientific advice, i.e. the Kobe II Framework. This is based on maintaining fishing mortality below FMSY and stock biomass above BMSY. This package provides methods for summarising results from stock assessments and Management Strategy Evaluations in the Kobe format.","Published":"2014-06-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"KODAMA","Version":"1.4","Title":"Knowledge Discovery by Accuracy Maximization","Description":"An unsupervised and semi-supervised learning algorithm that performs feature extraction from noisy and high-dimensional data.","Published":"2017-01-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"kofnGA","Version":"1.2","Title":"A Genetic Algorithm for Fixed-Size Subset Selection","Description":"Function kofnGA uses a genetic algorithm to choose a subset of a \n fixed size k from the integers 1:n, such that a user-supplied objective function \n is minimized at that subset. 
The selection step is done by tournament selection \n based on ranks, and elitism may be used to retain a portion of the best solutions \n from one generation to the next.","Published":"2015-11-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"KOGMWU","Version":"1.1","Title":"Functional Summary and Meta-Analysis of Gene Expression Data","Description":"Rank-based tests for enrichment of KOG (euKaryotic Orthologous Groups) classes with up- or down-regulated genes based on a continuous measure. The meta-analysis is based on correlation of KOG delta-ranks across datasets (delta-rank is the difference between mean rank of genes belonging to a KOG class and mean rank of all other genes). With binary measure (1 or 0 to indicate significant and non-significant genes), one-tailed Fisher's exact test for over-representation of each KOG class among significant genes will be performed. ","Published":"2016-11-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"kohonen","Version":"3.0.2","Title":"Supervised and Unsupervised Self-Organising Maps","Description":"Functions to train self-organising maps (SOMs). Also interrogation of the maps and prediction using trained maps are supported. The name of the package refers to Teuvo Kohonen, the inventor of the SOM.","Published":"2017-03-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"kokudosuuchi","Version":"0.2.0","Title":"R Interface to e-Stat API","Description":"Provides an interface to Kokudo Suuchi API, the GIS data service of the Japanese government. See for more information.","Published":"2016-11-07","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"kolmim","Version":"1.0","Title":"An Improved Evaluation of Kolmogorov's Distribution","Description":"Provides an alternative, more efficient evaluation of extreme\n probabilities of Kolmogorov's goodness-of-fit measure, Dn, when compared to\n the original implementation of Wang, Marsaglia, and Tsang. 
These\n probabilities are used in Kolmogorov-Smirnov tests when comparing two\n samples.","Published":"2015-06-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"KoNLP","Version":"0.80.1","Title":"Korean NLP Package","Description":"POS Tagger and Morphological Analyzer for Korean text based research. It provides tools for corpus linguistics research such as Keystroke converter, Hangul automata, Concordance, and Mutual Information. It also provides a convenient interface for users to apply, edit and add morphological dictionary selectively. ","Published":"2016-12-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"koRpus","Version":"0.10-2","Title":"An R Package for Text Analysis","Description":"A set of tools to analyze texts. Includes, amongst others, functions for automatic language detection, hyphenation,\n several indices of lexical diversity (e.g., type token ratio, HD-D/vocd-D, MTLD) and readability (e.g., Flesch, SMOG,\n LIX, Dale-Chall). Basic import functions for language corpora are also provided, to enable frequency analyses (supports\n Celex and Leipzig Corpora Collection file formats) and measures like tf-idf. Support for additional languages can be\n added on-the-fly or by plugin packages. Note: For full functionality a local installation of TreeTagger is recommended.\n 'koRpus' also includes a plugin for the R GUI and IDE RKWard, providing graphical dialogs for its basic features. The\n respective R package 'rkward' cannot be installed directly from a repository, as it is a part of RKWard. To make full\n use of this feature, please install RKWard from (plugins are detected automatically). Due to\n some restrictions on CRAN, the full package sources are only available from the project homepage. 
To ask for help,\n report bugs, request features, or discuss the development of the package, please subscribe to the koRpus-dev mailing\n list ().","Published":"2017-04-04","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"KoulMde","Version":"3.0.0","Title":"Koul's Minimum Distance Estimation in Linear Regression and\nAutoregression Model by Coordinate Descent Algorithm","Description":"Consider linear regression model and autoregressive model of\n order q where errors in the linear regression model and innovations in the\n autoregression model are independent and symmetrically distributed. Hira L. Koul\n (1986) proposed a nonparametric minimum distance\n estimation method by minimizing L2-type distance between certain weighted\n residual empirical processes. He also proposed a simpler version of the loss\n function by using symmetry of the integrating measure in the distance. This\n package contains three functions: KoulLrMde(), KoulArMde(), and Koul2StageMde().\n The former two provide minimum distance estimators for linear regression model\n and autoregression model, respectively, where both are based on Koul's method.\n These two functions take much less time for the computation than those based\n on parametric minimum distance estimation methods. 
Koul2StageMde() provides\n estimators for regression and autoregressive coefficients of linear regression\n model with autoregressive errors through a two-stage minimum distance method.\n The new version is written in Rcpp and dramatically reduces computational time.","Published":"2017-02-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Kpart","Version":"1.2.0","Title":"Cubic Spline Fitting with Knot Selection","Description":"Cubic spline fitting along with knot selection, includes support for additional variables.","Published":"2016-07-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"kpcalg","Version":"1.0.1","Title":"Kernel PC Algorithm for Causal Structure Detection","Description":"Kernel PC (kPC) algorithm for causal structure learning and causal inference using graphical models. kPC is a version of the PC algorithm that uses kernel based independence criteria in order to be able to deal with non-linear relationships and non-Gaussian noise.","Published":"2017-01-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"kpodclustr","Version":"1.0","Title":"Method for Clustering Partially Observed Data","Description":"The kpodclustr package implements the k-POD method for clustering\n partially observed data.","Published":"2014-11-26","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"KraljicMatrix","Version":"0.1.1","Title":"A Quantified Implementation of the Kraljic Matrix","Description":"Implements a quantified approach to the Kraljic Matrix (Kraljic, 1983, )\n for strategically analyzing a firm’s purchasing portfolio. 
It combines multi-objective decision analysis to measure purchasing characteristics and\n uses this information to place products and services within the Kraljic Matrix.","Published":"2017-02-21","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"kriens","Version":"0.1","Title":"Continuation Passing Style Development","Description":"Provides basic functions for Continuation-Passing Style development.","Published":"2015-12-02","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"kriging","Version":"1.1","Title":"Ordinary Kriging","Description":"Simple and highly optimized ordinary kriging algorithm to plot geographical data","Published":"2014-12-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"KrigInv","Version":"1.3.1","Title":"Kriging-based Inversion for Deterministic and Noisy Computer\nExperiments","Description":"Criteria and algorithms for sequentially estimating level sets of a multivariate numerical function, possibly observed with noise.","Published":"2014-12-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"KRLS","Version":"0.3-7","Title":"Kernel-based Regularized Least squares (KRLS)","Description":"Package implements Kernel-based Regularized Least Squares (KRLS), a machine learning method to fit multidimensional functions y=f(x) for regression and classification problems without relying on linearity or additivity assumptions. KRLS finds the best fitting function by minimizing the squared loss of a Tikhonov regularization problem, using Gaussian kernels as radial basis functions. 
For further details see Hainmueller and Hazlett (2014).","Published":"2014-05-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"krm","Version":"2016.7-9","Title":"Kernel Based Regression Models","Description":"Implements several methods for testing the variance component parameter in regression models that contain kernel-based random effects, including a maximum of adjusted scores test. Several kernels are supported, including a profile hidden Markov model mutual information kernel for protein sequences.","Published":"2016-07-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"KRMM","Version":"1.0","Title":"Kernel Ridge Mixed Model","Description":"Solves kernel ridge regression, within the mixed model framework, for the linear, polynomial, Gaussian, Laplacian and ANOVA kernels. The model components (i.e. fixed and random effects) and variance parameters are estimated using the expectation-maximization (EM) algorithm. All the estimated components and parameters, e.g. BLUP of dual variables and BLUP of random predictor effects for the linear kernel (also known as RR-BLUP), are available. The kernel ridge mixed model (KRMM) is described in Jacquin L, Cao T-V and Ahmadi N (2016) A Unified and Comprehensible View of Parametric and Kernel Methods for Genomic Prediction with Application to Rice. Front. Genet. 7:145. .","Published":"2017-06-03","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"ks","Version":"1.10.6","Title":"Kernel Smoothing","Description":"Kernel smoothers for univariate and multivariate data, including density functions, density derivatives, cumulative distributions, modal clustering, discriminant analysis, and two-sample hypothesis testing. 
","Published":"2017-03-20","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"kSamples","Version":"1.2-6","Title":"K-Sample Rank Tests and their Combinations","Description":"Compares k samples using the Anderson-Darling test, Kruskal-Wallis type tests \n\twith different rank score criteria, Steel's multiple comparison test, and the \n Jonckheere-Terpstra (JT) test. It computes asymptotic, simulated or (limited) exact \n P-values, all valid under randomization, with or without ties, or conditionally \n under random sampling from populations, given the observed tie pattern. Except for \n Steel's test and the JT test it also combines these tests across several blocks of \n\tsamples. Also analyzed are 2 x t contingency tables and their blocked combinations \n\tusing the Kruskal-Wallis criterion. Steel's test is inverted to provide simultaneous \n\tconfidence bounds for shift parameters. A plotting function compares tail probabilities\n\tobtained under asymptotic approximation with those obtained via simulation or exact \n\tcalculations.","Published":"2017-06-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"kscons","Version":"0.7.0","Title":"A Bayesian Approach for Protein Residue Contact Prediction using\nthe Knob-Socket Model of Protein Tertiary Structure","Description":"Predicts a protein's residue contact map, based on the estimation of the corresponding knob-socket list. For more details, please refer to our paper: Q. Li, D. B. Dahl, M. Vannucci, H. Joo, J. W. 
Tsai (2016), KScons: A Bayesian Approach for Protein Residue Contact Prediction using the Knob-socket Model of Protein Tertiary Structure, Bioinformatics, 32(24): 3774-3781 .","Published":"2017-03-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"KScorrect","Version":"1.2.0","Title":"Lilliefors-Corrected Kolmogorov-Smirnov Goodness-of-Fit Tests","Description":"Implements the Lilliefors-corrected Kolmogorov-Smirnov test for use\n in goodness-of-fit tests, suitable when population parameters are unknown and\n must be estimated by sample statistics. P-values are estimated by simulation.\n Can be used with a variety of continuous distributions, including normal,\n lognormal, univariate mixtures of normals, uniform, loguniform, exponential,\n gamma, and Weibull distributions. Functions to generate random numbers and\n calculate density, distribution, and quantile functions are provided for use\n with the log uniform and mixture distributions.","Published":"2016-03-19","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"KSD","Version":"1.0.0","Title":"Goodness-of-Fit Tests using Kernelized Stein Discrepancy","Description":"An adaptation of Kernelized Stein Discrepancy, this package provides a goodness-of-fit test of whether a given i.i.d. sample is drawn from a given distribution. It works for any distribution once its score function (the derivative of the log-density) can be provided. This method is based on \"A Kernelized Stein Discrepancy for Goodness-of-fit Tests and Model Evaluation\" by Liu, Lee, and Jordan, available at .","Published":"2016-07-31","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"KSEAapp","Version":"0.99.0","Title":"Kinase-Substrate Enrichment Analysis","Description":"Infers relative kinase activity from phosphoproteomics data\n using the method described by Casado et al. 
(2013) .","Published":"2017-05-02","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"kselection","Version":"0.2.0","Title":"Selection of K in K-Means Clustering","Description":"Selection of k in k-means clustering based on the Pham et al. paper\n \"Selection of k in k-means clustering\".","Published":"2015-02-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ksrlive","Version":"1.0","Title":"Identify Kinase Substrate Relationships Using Dynamic Data","Description":"Using this package you can combine known kinase substrate relationships with experimental data and determine active kinases and their substrates.","Published":"2015-10-19","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"kst","Version":"0.2-1","Title":"Knowledge Space Theory","Description":"Knowledge Space Theory is a set-theoretical framework, which \n proposes mathematical formalisms to operationalize knowledge structures in a \n particular domain. The kst-package provides basic functionalities to \n generate, handle, and manipulate knowledge structures and knowledge spaces.","Published":"2014-01-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"KTensorGraphs","Version":"0.1","Title":"Co-Tucker3 Analysis of Two Sequences of Matrices","Description":"Provides a function called COTUCKER3() (Co-Inertia Analysis\n + Tucker3 method) which performs a Co-Tucker3 analysis of two sequences of\n matrices, as well as other functions called PCA() (Principal Component Analysis)\n and BGA() (Between-Groups Analysis), which perform analysis of one matrix,\n COIA() (Co-Inertia Analysis), which performs analysis of two matrices, PTA()\n (Partial Triadic Analysis) and TUCKER3(), which perform analysis of a sequence\n of matrices, and BGCOIA() (Between-Groups Co-Inertia Analysis), STATICO()\n (STATIS method + Co-Inertia Analysis), COSTATIS() (Co-Inertia Analysis + STATIS\n method), which also perform analysis of two sequences of 
matrices.","Published":"2016-12-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ktsolve","Version":"1.1","Title":"Configurable function for solving families of nonlinear\nequations","Description":"This function is designed for use with an arbitrary set of equations with\n an arbitrary set of unknowns.\n The user selects \"fixed\" values for enough unknowns to leave as many variables as\n there are equations, which in most cases means the system is properly\n defined and a unique solution exists. The function, the fixed values\n and initial values for the remaining unknowns are fed to a nonlinear backsolver. \n The original version of \"TK!Solver\" was the inspiration for this function.","Published":"2013-11-04","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"ktspair","Version":"1.0","Title":"k-Top Scoring Pairs for Microarray Classification","Description":"These functions compute the k best pairs of genes used to classify samples based on the relative ranks of gene expression within each profile. A score based on the sensitivity and the specificity is calculated for every possible pair. The k pairs with the highest score will be selected with the restriction that a gene can appear in at most one pair. The value of k is either set as a parameter chosen by the user or computed through cross-validation. Other functions related to the k-TSP, for example prediction, summary, and plot, are also included in the package.","Published":"2011-12-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"kulife","Version":"0.1-14","Title":"Datasets and functions from the (now non-existing) Faculty of\nLife Sciences, University of Copenhagen","Description":"Provides various functions and data sets from experiments at the Faculty of Life Sciences, University of Copenhagen. 
This package will be discontinued and archived, and the functions and datasets will be maintained and updated in the MESS package ","Published":"2013-10-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"kutils","Version":"1.0","Title":"Project Management Tools","Description":"Tools for data importation, recoding, and inspection that\n are used at the University of Kansas Center for Research Methods\n and Data Analysis. There are functions to create new project\n folders and R code templates, create uniquely named output\n directories, and quickly obtain a visual summary for each\n variable in a data frame. The main feature here is the systematic\n implementation of the \"variable key\" framework for data\n importation and recoding. We are eager to have community feedback\n about the variable key and the vignette about it.","Published":"2017-05-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"kwb.hantush","Version":"0.2.1","Title":"Calculation of Groundwater Mounding Beneath an Infiltration\nBasin","Description":"Calculation of groundwater mounding beneath an infiltration basin based on the Hantush (1967) equation (http://doi.org/10.1029/WR003i001p00227). The correct implementation is shown with a verification example based on a USGS report (page 25, http://pubs.usgs.gov/sir/2010/5102/support/sir2010-5102.pdf).","Published":"2015-06-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"kyotil","Version":"2017.6-1","Title":"Utility Functions by Youyi, Krisz and Others","Description":"A miscellaneous set of functions for printing, plotting, kernels, etc. 
Additional contributors are acknowledged on individual function help pages.","Published":"2017-05-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"kza","Version":"4.0.0","Title":"Kolmogorov-Zurbenko Adaptive Filters","Description":"Time series analysis including break detection, spectral analysis, and KZ Fourier transforms.","Published":"2016-03-24","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"kzfs","Version":"1.5.0.1","Title":"Multi-Scale Motions Separation with Kolmogorov-Zurbenko\nPeriodogram Signals","Description":"Separation of wave motions in different scales and directions based on\n Kolmogorov-Zurbenko Periodograms and Kolmogorov-Zurbenko Fourier Transform.","Published":"2017-06-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"kzft","Version":"0.17","Title":"Kolmogorov-Zurbenko Fourier Transform and Applications","Description":"A collection of functions to implement Kolmogorov-Zurbenko\n Fourier transform based periodograms and smoothing methods.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"kzs","Version":"1.4","Title":"Kolmogorov-Zurbenko Spatial Smoothing and Applications","Description":"A spatial smoothing algorithm based on convolutions of finite rectangular kernels that provides sharp resolution in the presence of high levels of noise.","Published":"2008-10-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"l0ara","Version":"0.1.3","Title":"Sparse Generalized Linear Model with L0 Approximation for\nFeature Selection","Description":"An efficient procedure for feature selection for generalized linear models with an L0 penalty, including linear, logistic, Poisson, gamma, and inverse Gaussian regression. 
Adaptive ridge algorithms are used to fit the models.","Published":"2016-08-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"l1kdeconv","Version":"1.1.0","Title":"Deconvolution for LINCS L1000 Data","Description":"LINCS L1000 is a high-throughput technology that allows gene expression measurement in a large number of assays. However, to fit the measurements of ~1000 genes in the ~500 color channels of LINCS L1000, every two landmark genes are designed to share a single channel. Thus, a deconvolution step is required to infer the expression values of each gene. Any errors in this step can be propagated adversely to the downstream analyses. We present a LINCS L1000 data peak calling R package l1kdeconv based on a new outlier detection method and an aggregate Gaussian mixture model (AGMM). Upon the removal of outliers and the borrowing of information among similar samples, l1kdeconv shows more stable and better performance than methods commonly used in LINCS L1000 data deconvolution. 
","Published":"2017-04-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"L1pack","Version":"0.38","Title":"Routines for L1 Estimation","Description":"L1 estimation for linear regression, density, distribution function,\n quantile function and random number generation for univariate and multivariate\n Laplace distribution.","Published":"2017-02-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"l2boost","Version":"1.0","Title":"l2boost - Friedman's boosting algorithm for regularized linear\nregression","Description":"Efficient implementation of Friedman's boosting algorithm\n with l2-loss function and coordinate direction (design matrix\n columns) basis functions.","Published":"2013-06-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"labdsv","Version":"1.8-0","Title":"Ordination and Multivariate Analysis for Ecology","Description":"A variety of ordination and community analyses\n useful in analysis of data sets in community ecology. \n Includes many of the common ordination methods, with \n graphical routines to facilitate their interpretation, \n as well as several novel analyses.","Published":"2016-01-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"label.switching","Version":"1.6","Title":"Relabelling MCMC Outputs of Mixture Models","Description":"The Bayesian estimation of mixture models (and more general hidden Markov models) suffers from the label switching phenomenon, making the MCMC output non-identifiable. 
This package can be used to deal with this problem using various relabelling algorithms.","Published":"2016-05-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"labeledLoop","Version":"0.1","Title":"Labeled Loop","Description":"Supports labeled loops and escaping from nested loops.","Published":"2012-04-21","License":"MIT","snapshot_date":"2017-06-23"} {"Package":"labeling","Version":"0.3","Title":"Axis Labeling","Description":"Provides a range of axis labeling algorithms.","Published":"2014-08-23","License":"MIT + file LICENSE | Unlimited","snapshot_date":"2017-06-23"} {"Package":"labelled","Version":"1.0.0","Title":"Manipulating Labelled Data","Description":"Work with labelled data imported from\n 'SPSS' or 'Stata' with 'haven' or 'foreign'.","Published":"2016-11-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"labelrank","Version":"0.1","Title":"Predicting Rankings of Labels","Description":"An implementation of distance-based ranking algorithms to predict rankings of labels. Two common algorithms are included: the naive Bayes and the nearest neighbor algorithms. ","Published":"2015-11-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"labeltodendro","Version":"1.3","Title":"Convert labels or tables to a dendrogram","Description":"The package offers a dendrogram representation of series\n of labels; this is especially needed in Markov chain Monte Carlo\n clustering. If you have a dendrogram in mind, you can\n easily put series of meaningful labels in a matrix and heights\n in a vector, then convert them to a dendrogram object.","Published":"2013-01-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"LabourMarketAreas","Version":"3.0","Title":"Identification, Tuning, Visualisation and Analysis of Labour\nMarket Areas","Description":"Produces Labour Market Areas from commuting flows available at elementary territorial units. It provides tools for automatic tuning based on spatial contiguity. 
It also allows for statistical analyses and visualisation of the new functional geography.","Published":"2017-05-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"labstatR","Version":"1.0.8","Title":"Libreria Del Laboratorio Di Statistica Con R","Description":"This package contains the set of support functions defined in \"Laboratorio\n di Statistica con R\", Iacus-Masarotto, McGraw-Hill Italia,\n 2006. Function names and docs are in Italian as well.","Published":"2015-08-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"labstats","Version":"1.0.1","Title":"Data Sets for the Book \"Experimental Design for Laboratory\nBiologists\"","Description":"Contains data sets to accompany the book: Lazic SE\n (2016). \"Experimental Design for Laboratory Biologists: Maximising Information\n and Improving Reproducibility\". Cambridge University Press.","Published":"2016-12-05","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"laeken","Version":"0.4.6","Title":"Estimation of indicators on social exclusion and poverty","Description":"Estimation of indicators on social exclusion and poverty, as well\n as Pareto tail modeling for empirical income distributions.","Published":"2014-08-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"laercio","Version":"1.0-1","Title":"Duncan test, Tukey test and Scott-Knott test","Description":"The package contains functions to compare and group means.","Published":"2010-09-17","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"LaF","Version":"0.6.3","Title":"Fast Access to Large ASCII Files","Description":"Methods for fast access to large ASCII files. Currently the\n following file formats are supported: comma separated format (CSV) and fixed\n width format. 
It is assumed that the files are too large to fit into memory,\n although the package can also be used to efficiently access files that do\n fit into memory. Methods are provided to access and process files blockwise.\n Furthermore, an opened file can be accessed as one would an ordinary\n data.frame. The LaF vignette gives an overview of the functionality\n provided.","Published":"2017-01-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"lagged","Version":"0.1-0","Title":"Classes and Methods for Lagged Objects","Description":"Provides classes and methods for lagged objects.","Published":"2017-05-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"laGP","Version":"1.4","Title":"Local Approximate Gaussian Process Regression","Description":"Performs approximate GP regression for large computer experiments and spatial datasets. The approximation is based on finding small local designs for prediction (independently) at particular inputs. OpenMP and SNOW parallelization are supported for prediction over a vast out-of-sample testing set; GPU acceleration is also supported for an important subroutine. OpenMP and GPU features may require special compilation. An interface to lower-level (full) GP inference and prediction is also provided, as are associated wrapper routines for blackbox optimization under mixed equality and inequality constraints via an augmented Lagrangian scheme, and for large scale computer model calibration.","Published":"2017-06-02","License":"LGPL","snapshot_date":"2017-06-23"} {"Package":"Lahman","Version":"5.0-0","Title":"Sean 'Lahman' Baseball Database","Description":"Provides the tables from the 'Sean Lahman Baseball Database' as\n a set of R data.frames. 
It uses the data on pitching, hitting and fielding\n performance and other tables from 1871 through 2015, as recorded in the 2016\n version of the database.","Published":"2016-08-27","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"LakeMetabolizer","Version":"1.5.0","Title":"Tools for the Analysis of Ecosystem Metabolism","Description":"A collection of tools for the calculation of free-water metabolism.","Published":"2016-06-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"lakemorpho","Version":"1.1.0","Title":"Lake Morphometry Metrics in R","Description":"Lake morphometry metrics are used by limnologists to understand,\n among other things, the ecological processes in a lake. Traditionally, these\n metrics are calculated by hand, with planimeters, and increasingly with\n commercial GIS products. All of these methods work; however, they are either\n outdated, difficult to reproduce, or require expensive licenses to use. The\n lakemorpho package provides the tools to calculate a typical suite\n of these metrics from an input elevation model and lake polygon. The metrics\n currently supported are: fetch, major axis, minor axis, maximum length, \n maximum width, mean width, maximum depth, mean depth, shoreline development, \n shoreline length, surface area, and volume.","Published":"2016-12-27","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"laketemps","Version":"0.5.1","Title":"Lake Temperatures Collected by In Situ and Satellite Methods from\n1985-2009","Description":"Lake temperature records, metadata, and climate drivers for 291 global lakes during the time period 1985-2009. Temperature observations were collected using satellite and in situ methods. Climatic drivers and geomorphometric characteristics were also compiled and are included for each lake. Data are part of the associated publication from the Global Lake Temperature Collaboration project (http://www.laketemperature.org). 
See citation('laketemps') for dataset attribution.","Published":"2015-02-28","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"LAM","Version":"0.0-17","Title":"Some Latent Variable Models","Description":"\n Contains some procedures for latent variable modelling with a \n particular focus on multilevel data.\n The LAM package contains mean and covariance structure modelling\n for multivariate normally distributed data ('mlnormal'),\n a general Metropolis-Hastings algorithm ('amh') and penalized\n maximum likelihood estimation ('pmle').","Published":"2017-05-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"lambda.r","Version":"1.1.9","Title":"Modeling Data with Functional Programming","Description":"A language extension to efficiently write functional programs in R. Syntax extensions include multi-part function definitions, pattern matching, guard statements, built-in (optional) type safety.","Published":"2016-07-10","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"lambda.tools","Version":"1.0.9","Title":"Tools for Modeling Data with Functional Programming","Description":"Provides tools that manipulate and transform data using methods\n and techniques consistent with functional programming. The idea is that\n through the use of these tools, a program can be reasoned about insomuch\n that the implementation can be proven to be equivalent to the mathematical\n model.","Published":"2016-05-11","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"Lambda4","Version":"3.0","Title":"Collection of Internal Consistency Reliability Coefficients","Description":"Currently the package includes 14 methods for calculating internal\n consistency reliability but is still growing. 
The package allows users\n access to whichever reliability estimator is deemed most appropriate for\n their situation.","Published":"2013-07-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"LambertW","Version":"0.6.4","Title":"Probabilistic Models to Analyze and Gaussianize Heavy-Tailed,\nSkewed Data","Description":"Lambert W x F distributions are a generalized framework to analyze\n skewed, heavy-tailed data. It is based on an input/output system, where the\n output random variable (RV) Y is a non-linearly transformed version of an input\n RV X ~ F with similar properties as X, but slightly skewed (heavy-tailed).\n The transformed RV Y has a Lambert W x F distribution. This package contains\n functions to model and analyze skewed, heavy-tailed data the Lambert Way:\n simulate random samples, estimate parameters, compute quantiles, and plot/\n print results nicely. Probably the most important function is 'Gaussianize',\n which works similarly to 'scale', but actually makes the data Gaussian.\n A do-it-yourself toolkit allows users to define their own Lambert W x\n 'MyFavoriteDistribution' and use it in their analysis right away.","Published":"2016-03-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"lamW","Version":"1.3.0","Title":"Lambert-W Function","Description":"Implements both real-valued branches of the Lambert-W function, also known as the product logarithm, without the need for installing the entire GSL.","Published":"2017-04-24","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"LANDD","Version":"1.1.0","Title":"Liquid Association for Network Dynamics Detection","Description":"Using Liquid Association for Network Dynamics Detection.","Published":"2016-10-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"landest","Version":"1.0","Title":"Landmark Estimation of Survival and Treatment Effect","Description":"Provides functions to estimate survival and a treatment effect using a 
landmark estimation approach.","Published":"2015-11-15","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"landpred","Version":"1.0","Title":"Landmark Prediction of a Survival Outcome","Description":"This package includes functions for landmark prediction of a survival outcome incorporating covariate and short-term event information. For more information about landmark prediction please see: Parast, Layla, Su-Chun Cheng, and Tianxi Cai. Incorporating short-term outcome information to predict long-term survival with discrete markers. Biometrical Journal 53.2 (2011): 294-307.","Published":"2014-10-18","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"landsat","Version":"1.0.8","Title":"Radiometric and topographic correction of satellite imagery","Description":"Processing of Landsat or other multispectral satellite\n imagery. Includes relative normalization, image-based\n radiometric correction, and topographic correction options.","Published":"2012-10-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"landsat8","Version":"0.1-10","Title":"Landsat 8 Imagery Rescaled to Reflectance, Radiance and/or\nTemperature","Description":"Functions for converting Landsat 8 multispectral satellite imagery\n to top of atmosphere (TOA) reflectance, radiance,\n and/or at-satellite brightness temperature using radiometric\n rescaling coefficients provided in the metadata file (MTL file).","Published":"2017-01-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"landscapeR","Version":"1.1.3","Title":"Categorical Landscape Simulation Facility","Description":"Simulates categorical maps on actual geographical realms, starting from either empty landscapes or landscapes provided by the user (e.g. land use maps). Allows users to tweak or create landscapes while retaining a high degree of control over their features, without the hassle of specifying each location attribute. 
In this it differs from other tools which generate null or neutral landscapes in a theoretical space. The basic algorithm currently implemented uses a simple agent style/cellular automata growth model, with no rules (apart from areas of exclusion) and von Neumann neighbourhood (four cells, aka Rook case). Outputs are raster datasets exportable to any common GIS format.","Published":"2016-12-13","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"Langevin","Version":"1.2","Title":"Langevin Analysis in One and Two Dimensions","Description":"Estimate drift and diffusion functions from time series and\n generate synthetic time series from given drift and diffusion coefficients.","Published":"2016-08-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"languagelayeR","Version":"1.0.0","Title":"Access the 'languagelayer' API","Description":"Improve your text analysis with languagelayer\n , a powerful language detection\n API.","Published":"2016-12-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"languageR","Version":"1.4.1","Title":"Data sets and functions with \"Analyzing Linguistic Data: A\npractical introduction to statistics\"","Description":"Data sets exemplifying statistical methods, and some\n facilitatory utility functions used in \"Analyzing Linguistic\n Data: A practical introduction to statistics using R\",\n Cambridge University Press, 2008.","Published":"2013-12-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"lans2r","Version":"1.0.5","Title":"Work with Look at NanoSIMS Data in R","Description":"R interface for working with nanometer scale secondary ion mass \n spectrometry (NanoSIMS) data exported from Look at NanoSIMS. 
","Published":"2017-05-24","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"LaplaceDeconv","Version":"1.0.4","Title":"Laplace Deconvolution with Noisy Discrete Non-Equally Spaced\nObservations on a Finite Time Interval","Description":"Solves the problem of Laplace deconvolution with noisy discrete\n non-equally spaced observations on a finite time interval based on expansions\n of the convolution kernel, the unknown function and the observed signal over\n a Laguerre functions basis. It implements the methodology proposed in the paper\n \"Laplace deconvolution on the basis of time domain data and its application to\n Dynamic Contrast Enhanced imaging\" by F. Comte, C-A. Cuenod, M. Pensky and Y.\n Rozenholc in ArXiv (http://arxiv.org/abs/1405.7107).","Published":"2016-01-27","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"LaplacesDemon","Version":"16.0.1","Title":"Complete Environment for Bayesian Inference","Description":"Provides a complete environment for Bayesian inference using a variety of different samplers (see ?LaplacesDemon for an overview). The README describes the history of the package development process.","Published":"2016-07-17","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"lar","Version":"0.1-2","Title":"History of labour relations package","Description":"This package is intended for researchers studying historical labour relations (see http://www.historyoflabourrelations.org). The package allows for easy access to Excel files in the standard defined by the Global Collaboratory on the History of Labour Relations. 
The package also allows for visualisation of labour relations according to the Collaboratory's format.","Published":"2014-04-30","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"LARF","Version":"1.4","Title":"Local Average Response Functions for Instrumental Variable\nEstimation of Treatment Effects","Description":"Provides instrumental variable estimation of treatment effects when both the endogenous treatment and its instrument are binary. Applicable to both binary and continuous outcomes.","Published":"2016-07-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"largeList","Version":"0.3.1","Title":"Serialization Interface for Large List Objects","Description":"Functions to write or append a R list to a file, as well as read, remove, modify elements from it without restoring the whole list.","Published":"2017-04-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"largeVis","Version":"0.2.1","Title":"High-Quality Visualizations of Large, High-Dimensional Datasets","Description":"Implements the largeVis algorithm (see Tang, et al. (2016) ) for visualizing very large high-dimensional datasets. Also very fast search for approximate nearest neighbors; outlier detection; and optimized implementations of the HDBSCAN*, DBSCAN and OPTICS clustering algorithms; plotting functions for visualizing the above.","Published":"2017-05-08","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"lars","Version":"1.2","Title":"Least Angle Regression, Lasso and Forward Stagewise","Description":"Efficient procedures for fitting an entire lasso sequence\n with the cost of a single least squares fit. 
Least angle\n regression and infinitesimal forward stagewise regression are\n related to the lasso, as described in the paper below.","Published":"2013-04-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"lasso2","Version":"1.2-19","Title":"L1 constrained estimation aka `lasso'","Description":"Routines and documentation for solving regression problems\n while imposing an L1 constraint on the estimates, based on\n the algorithm of Osborne et al. (1998)","Published":"2014-05-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"LassoBacktracking","Version":"0.1.2","Title":"Modelling Interactions in High-Dimensional Data with\nBacktracking","Description":"Implementation of the algorithm introduced in Shah, R. D.\n (2016) .\n Data with thousands of predictors can be handled. The algorithm\n performs sequential Lasso fits on design matrices containing\n increasing sets of candidate interactions. Previous fits are used to greatly\n speed up subsequent fits so the algorithm is very efficient.","Published":"2017-04-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"lassopv","Version":"0.1.3","Title":"Nonparametric P-Value Estimation for Predictors in Lasso","Description":"Estimate p-values for predictors x against target variable y in lasso regression, using the regularization strength when each predictor enters the active set of regularization path for the first time as the statistic. This is based on the assumption that predictors that (first) become active earlier tend to be more significant. 
Null distribution for each predictor is computed analytically under approximation, which aims at efficiency and accuracy for small p-values.","Published":"2017-02-08","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"lassoscore","Version":"0.6","Title":"High-Dimensional Inference with the Penalized Score Test","Description":"Use the lasso regression method to perform approximate inference\n in high dimensions, by penalizing the effects of nuisance parameters.","Published":"2014-10-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"lassoshooting","Version":"0.1.5-1","Title":"L1 regularized regression (Lasso) solver using the Cyclic\nCoordinate Descent algorithm aka Lasso Shooting","Description":"L1 regularized regression (Lasso) solver using the Cyclic\n Coordinate Descent algorithm aka Lasso Shooting is fast. This\n implementation can choose which coefficients to penalize. It\n supports coefficient-specific penalties and it can take X'X and\n X'y instead of X and y.","Published":"2012-05-02","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"lasvmR","Version":"0.1.2","Title":"A Simple Wrapper for the LASVM Solver","Description":"This is a simple wrapper for the LASVM Solver (see http://leon.bottou.org/projects/lasvm). LASVM is basically an online variant of the SMO solver. ","Published":"2015-08-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"latdiag","Version":"0.2-3","Title":"Draws Diagrams Useful for Checking Latent Scales","Description":"A graph\n proposed by Rosenbaum is useful\n for checking some properties of various\n sorts of latent scale; this program generates commands\n to obtain the graph using 'dot' from 'graphviz'.","Published":"2016-10-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"latentnet","Version":"2.7.1","Title":"Latent Position and Cluster Models for Statistical Networks","Description":"Fit and simulate latent position and cluster models for statistical networks. 
","Published":"2015-06-20","License":"GPL-3 + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"Laterality","Version":"0.9.3","Title":"Functions to Calculate Common Laterality Statistics in\nPrimatology","Description":"Calculates and plots Handedness index (HI), absolute HI, mean HI and z-score which are commonly used indexes for the study of hand preference (laterality) in non-human primates.","Published":"2015-04-01","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"latex2exp","Version":"0.4.0","Title":"Use LaTeX Expressions in Plots","Description":"Parses and converts LaTeX math formulas to R's plotmath\n expressions, used to enter mathematical formulas and symbols to be rendered as\n text, axis labels, etc. throughout R's plotting system.","Published":"2015-11-30","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"lattice","Version":"0.20-35","Title":"Trellis Graphics for R","Description":"A powerful and elegant high-level data visualization\n system inspired by Trellis graphics, with an emphasis on\n multivariate data. Lattice is sufficient for typical graphics needs,\n and is also flexible enough to handle most nonstandard requirements.\n See ?Lattice for an introduction.","Published":"2017-03-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"latticeDensity","Version":"1.0.7","Title":"Density estimation and nonparametric regression on irregular\nregions","Description":"This package contains functions that compute the\n lattice-based density estimator of Barry and McIntyre, which\n accounts for point processes in two-dimensional regions with \n irregular boundaries and holes. 
The package also implements\n two-dimensional non-parametric regression for similar regions.","Published":"2012-01-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"latticeExtra","Version":"0.6-28","Title":"Extra Graphical Utilities Based on Lattice","Description":"Building on the infrastructure provided by the lattice\n\t package, this package provides several new high-level\n\t functions and methods, as well as additional utilities\n\t such as panel and axis annotation functions.","Published":"2016-02-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"LatticeKrig","Version":"6.4","Title":"Multiresolution Kriging Based on Markov Random Fields","Description":"Methods for the interpolation of large spatial\n datasets. This package follows a \"fixed rank Kriging\" approach using\n a large number of basis functions and provides spatial estimates\n that are comparable to standard families of covariance functions.\n Using a large number of basis functions allows for estimates that\n can come close to interpolating the observations (a spatial model\n with a small nugget variance.) Moreover, the covariance model for\n this method can approximate the Matern covariance family but also\n allows for a multi-resolution model and supports efficient\n computation of the profile likelihood for estimating covariance\n parameters. This is accomplished through compactly supported basis\n functions and a Markov random field model for the basis\n coefficients. These features lead to sparse matrices for the\n computations and this package makes use of the R spam package for this.\n An extension of this version over previous ones ( < 5.4 ) is the\n support for different geometries besides a rectangular domain. 
The\n Markov random field approach combined with a basis function\n representation makes the implementation of different geometries\n simple where only a few specific functions need to be added with\n most of the computation and evaluation done by generic routines that\n have been tuned to be efficient. One benefit of this package's\n model/approach is the facility to do unconditional and conditional\n simulation of the field for large numbers of arbitrary points. There\n is also the flexibility for estimating non-stationary covariances\n and also the case when the observations are a linear combination\n (e.g. an integral) of the spatial process. Included are generic\n methods for prediction, standard errors for prediction, plotting of\n the estimated surface and conditional and unconditional simulation.","Published":"2017-05-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"lava","Version":"1.5","Title":"Latent Variable Models","Description":"Estimation and simulation of latent variable models.","Published":"2017-03-16","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"lava.tobit","Version":"0.5","Title":"Latent Variable Models with Censored and Binary Outcomes","Description":"Lava plugin allowing combinations of left and right censored and\n binary outcomes.","Published":"2017-03-31","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"lavaan","Version":"0.5-23.1097","Title":"Latent Variable Analysis","Description":"Fit a variety of latent variable models, including confirmatory\n factor analysis, structural equation modeling and latent growth curve models.","Published":"2017-02-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"lavaan.shiny","Version":"1.2","Title":"Latent Variable Analysis with Shiny","Description":"Interactive shiny application for working with different kinds of\n latent variable analysis, with the 'lavaan' package. 
Graphical output for models\n is provided and different estimators are supported.","Published":"2017-04-07","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"lavaan.survey","Version":"1.1.3.1","Title":"Complex Survey Structural Equation Modeling (SEM)","Description":"Fit structural equation models (SEM) including factor analysis,\n multivariate regression models with latent variables and many other latent\n variable models while correcting estimates, standard errors, and\n chi-square-derived fit measures for a complex sampling design. \n Incorporate clustering, stratification, sampling weights, and \n finite population corrections into a SEM analysis.\n Wrapper around packages lavaan and survey.","Published":"2016-12-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"lavaanPlot","Version":"0.1.0","Title":"Path Diagrams for Lavaan Models via DiagrammeR","Description":"Plots path diagrams from models in lavaan using the plotting\n functionality from the DiagrammeR package. DiagrammeR provides nice path diagrams \n via Graphviz, and these functions make it easy to generate these diagrams from a\n lavaan path model without having to write the DOT language graph specification.","Published":"2017-06-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"lawn","Version":"0.3.0","Title":"Client for 'Turfjs' for 'Geospatial' Analysis","Description":"Client for 'Turfjs' () for\n 'geospatial' analysis. The package revolves around using 'GeoJSON'\n data. Functions are included for creating 'GeoJSON' data objects,\n measuring aspects of 'GeoJSON', and combining, transforming,\n and creating random 'GeoJSON' data objects.","Published":"2016-10-25","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"lawstat","Version":"3.1","Title":"Tools for Biostatistics, Public Policy, and Law","Description":"Statistical tests widely utilized in biostatistics, public policy, and law. 
Along with the well-known tests for equality of means and variances, randomness, \n measures of relative variability etc, the package contains new robust tests of symmetry, \n omnibus and directional tests of normality, and their graphical counterparts such as \n Robust QQ plot; robust trend tests for variances etc. All implemented tests and methods \n are illustrated by simulations and real-life examples from legal statistics, economics, \n and biostatistics.","Published":"2017-01-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"lazy","Version":"1.2-15","Title":"Lazy Learning for Local Regression","Description":"By combining constant, linear, and quadratic local models,\n lazy estimates the value of an unknown multivariate function on\n the basis of a set of possibly noisy samples of the function\n itself. This implementation of lazy learning automatically\n adjusts the bandwidth on a query-by-query basis through a\n leave-one-out cross-validation.","Published":"2013-09-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"lazyData","Version":"1.1.0","Title":"A LazyData Facility","Description":"Supplies a LazyData facility for packages which have data\n\t\tsets but do not provide LazyData: true. A single function\n\t\tis included, requireData, which is a drop-in replacement for\n\t\tbase::require, but carrying the additional\n\t\tfunctionality. By default, it suppresses package\n\t\tstartup messages as well. See argument 'reallyQuitely'.","Published":"2016-12-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"lazyeval","Version":"0.2.0","Title":"Lazy (Non-Standard) Evaluation","Description":"An alternative approach to non-standard evaluation using\n formulas. 
Provides a full implementation of LISP style 'quasiquotation',\n making it easier to generate code with other code.","Published":"2016-06-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"lazyrmd","Version":"0.2.0","Title":"Render R Markdown Outputs Lazily","Description":"An R Markdown html document format that provides the ability to lazily\n load plot outputs as the user scrolls over them. This is useful for large R\n Markdown documents with many plots, as it allows for a fast initial page load and\n defers loading of individual graphics to the time that the user navigates near them.","Published":"2016-10-08","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"lazysql","Version":"0.1.3","Title":"Lazy SQL Programming","Description":"\n Helper functions to build SQL statements\n for dbGetQuery or dbSendQuery under program control.\n They are intended to increase speed of coding and\n to reduce coding errors. Arguments are carefully checked,\n in particular SQL identifiers such as names of tables or columns.\n More patterns will be added as required.","Published":"2016-03-12","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"lazyWeave","Version":"3.0.1","Title":"LaTeX Wrappers for R Users","Description":"Provides the functionality to write LaTeX code from within R\n without having to learn LaTeX. Functionality also exists to create HTML\n and Markdown code. While the functionality still exists to write\n complete documents with lazyWeave, it is generally easier to do so\n with markdown and knitr. 
lazyWeave's main strength now is the ability\n to design custom and complex tables for reporting results.","Published":"2016-01-09","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"lba","Version":"2.4.1","Title":"Latent Budget Analysis for Compositional Data","Description":"Latent budget analysis is a method for the analysis of a two-way\n contingency table with an exploratory variable and a response variable. It is\n specially designed for compositional data.","Published":"2016-12-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"lbfgs","Version":"1.2.1","Title":"Limited-memory BFGS Optimization","Description":"A wrapper built around the libLBFGS optimization library by Naoaki Okazaki. The lbfgs package implements both the Limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) and the Orthant-Wise Quasi-Newton Limited-Memory (OWL-QN) optimization algorithms. The L-BFGS algorithm solves the problem of minimizing an objective, given its gradient, by iteratively computing approximations of the inverse Hessian matrix. The OWL-QN algorithm finds the optimum of an objective plus the L1-norm of the problem's parameters. The package offers a fast and memory-efficient implementation of these optimization routines, which is particularly suited for high-dimensional problems.","Published":"2014-08-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"lbfgsb3","Version":"2015-2.13","Title":"Limited Memory BFGS Minimizer with Bounds on Parameters","Description":"Interfacing to Nocedal et al. 
L-BFGS-B.3.0 (2011) limited\n\tmemory BFGS minimizer with bounds on parameters.","Published":"2015-02-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"lbiassurv","Version":"1.1","Title":"Length-biased correction to survival curve estimation","Description":"The package offers various length-bias corrections to\n survival curve estimation.","Published":"2013-05-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"lbreg","Version":"1.0","Title":"Log-Binomial Regression with Constrained Optimization","Description":"Maximum likelihood estimation of log-binomial regression with special functionality when the MLE is on (or close to) the boundary of the parameter space.","Published":"2016-12-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"LBSPR","Version":"0.1.1","Title":"Length-Based Spawning Potential Ratio","Description":"Simulate expected equilibrium length composition, YPR, and\n SPR using the LBSPR model. Fit the LBSPR model to length data to estimate\n selectivity, relative fishing mortality, and spawning potential ratio for \n\tdata-limited fisheries.","Published":"2017-06-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"LCA","Version":"0.1","Title":"Localised Co-Dependency Analysis","Description":"Performs model fitting and significance estimation for Localised Co-Dependency between pairs of features of a numeric dataset.","Published":"2013-09-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"LCAextend","Version":"1.2","Title":"Latent Class Analysis (LCA) with familial dependence in extended\npedigrees","Description":"This package performs a Latent Class Analysis of\n phenotypic measurements in pedigrees and a model selection\n based on one of two methods: likelihood-based cross-validation\n and Bayesian Information Criterion. It also computes individual\n and triplet child-parents weights in a pedigree using an\n upward-downward algorithm. 
It takes into account the familial\n dependence defined by the pedigree structure by considering\n that a child's class depends on its parents' classes via\n triplet-transition probabilities of the classes. The package\n handles the case where measurements are available on all\n subjects and the case where measurements are available only on\n symptomatic (i.e. affected) subjects. Distributions for\n discrete (or ordinal) and continuous data are currently\n implemented. The package can deal with missing data.","Published":"2012-03-18","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"lcda","Version":"0.3","Title":"Latent Class Discriminant Analysis","Description":"Local Discrimination via Latent Class Models","Published":"2011-04-06","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"LCF","Version":"1.6-6","Title":"Linear Combination Fitting","Description":"Baseline correction, normalization and linear combination fitting (LCF) \n of X-ray absorption near edge structure (XANES) spectra.\n The package includes data loading of .xmu files exported from 'ATHENA' (Ravel and Newville, 2005) . \n Loaded spectra can be background corrected and all standards can be fitted at once.\n Two linear combination fitting functions can be used:\n (1) fit_athena(): Simply fitting combinations of standards as in ATHENA, \n (2) fit_float(): Fitting all standards with changing baseline correction and edge-step normalization parameters. 
","Published":"2017-02-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"LCFdata","Version":"2.0","Title":"Data sets for package ``LMERConvenienceFunctions''","Description":"This package contains (1) event-related brain potential data recorded from 10 participants at electrodes Fz, Cz, Pz, and Oz (0--300 ms) in the context of Antoine Tremblay's PhD thesis (Tremblay, 2009); (2) ERP amplitudes at electrode Fz restricted to the 100 to 175 millisecond time window; and (3) plotting data generated from a linear mixed-effects model.","Published":"2013-11-28","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"lclGWAS","Version":"1.0.3","Title":"Efficient Estimation of Discrete-Time Multivariate Frailty Model\nUsing Exact Likelihood Function for Grouped Survival Data","Description":"The core of this 'Rcpp' based package is several functions to estimate the baseline hazard, frailty variance, and fixed effect parameter for a discrete-time shared frailty model with random effects. The functions are designed to analyze grouped time-to-event data accounting for family structure of related individuals (i.e., trios). The core functions include two processes: (1) evaluate the multivariable integration to compute the exact proportional hazards model based likelihood and (2) estimate the desired parameters using maximum likelihood estimation. The integration is evaluated by the 'Cuhre' algorithm from the 'Cuba' library (Hahn, T., Cuba-a library for multidimensional numerical integration, Comput. Phys. Commun. 168, 2005, 78-95 ), and the source files of the 'Cuhre' function are included in this package. 
The maximization process is carried out using Brent's algorithm, with the 'C++' code file from John Burkardt and John Denker (Brent, R., Algorithms for Minimization without Derivatives, Dover, 2002, ISBN 0-486-41998-3).","Published":"2017-02-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"LCMCR","Version":"0.4.1","Title":"Bayesian Nonparametric Latent-Class Capture-Recapture","Description":"Bayesian population size estimation using non parametric latent-class models.","Published":"2016-02-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"lcmm","Version":"1.7.8","Title":"Extended Mixed Models Using Latent Classes and Latent Processes","Description":"Estimation of various extensions of the mixed models including latent class mixed models, joint latent class mixed models and mixed models for curvilinear univariate or multivariate longitudinal outcomes using a maximum likelihood estimation method.","Published":"2017-05-29","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"lcopula","Version":"1.0","Title":"Liouville Copulas","Description":"Collections of functions allowing random number generation and\n estimation of Liouville copulas.","Published":"2017-02-01","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"lctools","Version":"0.2-5","Title":"Local Correlation, Spatial Inequalities, Geographically Weighted\nRegression and Other Tools","Description":"The main purpose of lctools is to provide researchers and educators with easy-to-learn\n user friendly tools for calculating key spatial statistics and to apply simple as well as\n advanced methods of spatial analysis in real data. 
These include: Local Pearson and \n Geographically Weighted Pearson Correlation Coefficients, Spatial Inequality Measures\n (Gini, Spatial Gini, LQ, Focal LQ), Spatial Autocorrelation (Global and Local Moran's I), \n several Geographically Weighted Regression techniques and other Spatial Analysis tools \n (other geographically weighted statistics). This package also contains functions for \n measuring the significance of each statistic calculated, mainly based on Monte Carlo simulations.","Published":"2016-09-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"lda","Version":"1.4.2","Title":"Collapsed Gibbs Sampling Methods for Topic Models","Description":"Implements latent Dirichlet allocation (LDA)\n\t and related models. This includes (but is not limited\n\t to) sLDA, corrLDA, and the mixed-membership stochastic\n\t blockmodel. Inference for all of these models is\n\t implemented via a fast collapsed Gibbs sampler written\n\t in C. Utility functions for reading/writing data\n\t typically used in topic models, as well as tools for\n\t examining posterior distributions are also included.","Published":"2015-11-22","License":"LGPL","snapshot_date":"2017-06-23"} {"Package":"ldamatch","Version":"1.0.1","Title":"Selection of Statistically Similar Research Groups","Description":"Select statistically similar research groups by backward selection using various robust algorithms, including a heuristic based on linear discriminant analysis, multiple heuristics based on the test statistic, and parallelized exhaustive search.","Published":"2016-06-27","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ldatuning","Version":"0.2.0","Title":"Tuning of the Latent Dirichlet Allocation Models Parameters","Description":"For this first version only metrics to estimate the best fitting\n number of topics are implemented.","Published":"2016-10-24","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} 
{"Package":"LDAvis","Version":"0.3.2","Title":"Interactive Visualization of Topic Models","Description":"Tools to create an interactive web-based visualization of a\n topic model that has been fit to a corpus of text data using\n Latent Dirichlet Allocation (LDA). Given the estimated parameters of\n the topic model, it computes various summary statistics as input to\n an interactive visualization built with D3.js that is accessed via\n a browser. The goal is to help users interpret the topics in their\n LDA topic model.","Published":"2015-10-24","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ldbod","Version":"0.1.2","Title":"Local Density-Based Outlier Detection","Description":"Flexible procedures to compute local density-based outlier scores for ranking outliers.\n Both exact and approximate nearest neighbor search can be implemented, while also accommodating\n multiple neighborhood sizes and four different local density-based methods. It allows for\n referencing a random subsample of the input data or a user specified reference data set\n to compute outlier scores against, so both unsupervised and semi-supervised outlier\n detection can be implemented.","Published":"2017-05-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ldbounds","Version":"1.1-1","Title":"Lan-DeMets Method for Group Sequential Boundaries","Description":"Computations related to group sequential boundaries.\n Includes calculation of bounds using the Lan-DeMets\n alpha spending function approach.","Published":"2014-01-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"LDcorSV","Version":"1.3.2","Title":"Linkage Disequilibrium Corrected by the Structure and the\nRelatedness","Description":"Four measures of linkage disequilibrium are provided: the usual r^2\n measure, the r^2_S measure (r^2 corrected by the structure\n sample), the r^2_V (r^2 corrected by the relatedness of\n genotyped individuals), the r^2_VS measure (r^2 corrected by\n 
both the relatedness of genotyped individuals and the structure\n of the sample).","Published":"2017-05-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"LDheatmap","Version":"0.99-2","Title":"Graphical Display of Pairwise Linkage Disequilibria Between SNPs","Description":"Produces a graphical display, as a heat map, of measures\n of pairwise linkage disequilibria between SNPs. Users may\n optionally include the physical locations or genetic map\n distances of each SNP on the plot.","Published":"2016-08-24","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ldhmm","Version":"0.4.1","Title":"Hidden Markov Model for Financial Time-Series Based on Lambda\nDistribution","Description":"Hidden Markov Model (HMM) based on symmetric lambda distribution\n framework is implemented for the study of return time-series in the financial\n market. Major features in the S&P500 index, such as regime identification,\n volatility clustering, and anti-correlation between return and volatility,\n can be extracted from HMM cleanly. Univariate symmetric lambda distribution\n is essentially a location-scale family of exponential power distribution.\n Such distribution is suitable for describing highly leptokurtic time series\n obtained from the financial market. It provides a theoretically solid foundation\n to explore such data where the normal distribution is not adequate. 
The HMM\n implementation follows closely the book: \"Hidden Markov Models for Time Series\",\n by Zucchini, MacDonald, Langrock (2016).","Published":"2017-06-03","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"ldlasso","Version":"3.2","Title":"LD LASSO Regression for SNP Association Study","Description":"ldlasso requires data to be of class gwaa.data from the\n package GenABEL","Published":"2013-01-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"LDOD","Version":"1.0","Title":"Finding Locally D-optimal designs for some nonlinear and\ngeneralized linear models","Description":"This package provides functions for finding locally\n D-optimal designs for Logistic, Negative Binomial, Poisson,\n Michaelis-Menten, Exponential, Log-Linear, Emax, Richards,\n Weibull and Inverse Quadratic regression models and also\n functions for auto-constructing the Fisher information matrix and\n Frechet derivative based on some input variables and without\n user interference.","Published":"2013-03-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"LDPD","Version":"1.1.2","Title":"Probability of Default Calibration","Description":"Implementation of most popular approaches to PD (probability of default) calibration: Quasi Moment Matching algorithm (D. Tasche), algorithm proposed by M. van der Burgt, K. Pluto and D. 
Tasche's most prudent estimation methodology.","Published":"2015-05-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ldr","Version":"1.3.3","Title":"Methods for likelihood-based dimension reduction in regression","Description":"Functions, methods, and data sets for fitting likelihood-based dimension reduction in regression, using principal fitted components (pfc), likelihood acquired directions (lad), covariance reducing models (core).","Published":"2014-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"LDRTools","Version":"0.2","Title":"Tools for Linear Dimension Reduction","Description":"Linear dimension reduction subspaces can be uniquely defined using orthogonal projection matrices. This package provides tools to compute distances between such subspaces and to compute the average subspace. ","Published":"2015-09-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ldstatsHD","Version":"1.0.0","Title":"Linear Dependence Statistics for High-Dimensional Data","Description":"Statistical methods related \n\tto the estimation and testing of multiple correlation, partial correlation and regression coefficient matrices when data is high-dimensional. ","Published":"2016-08-18","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"LDtests","Version":"1.0","Title":"Exact tests for Linkage Disequilibrium and Hardy-Weinberg\nEquilibrium","Description":"Exact tests for Linkage Disequilibrium (LD) and Hardy-Weinberg Equilibrium (HWE). 
- 2-sided LD tests based on different measures of LD (Kulinskaya and Lewin 2008) - 1-sided Fisher's exact test for LD - 2-sided Haldane test for HWE (Wiggington 2005) - 1-sided test for inbreeding - conditional p-values proposed in Kulinskaya (2008) to overcome the problems of asymmetric distributions (for both LD and HWE)","Published":"2008-06-09","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"leaderCluster","Version":"1.2","Title":"Leader Clustering Algorithm","Description":"The leader clustering algorithm provides\n a means for clustering a set of data points. Unlike many other clustering\n algorithms, it does not require the user to specify the number of clusters,\n but instead requires the approximate radius of a cluster as its primary\n tuning parameter. The package provides a fast implementation of this\n algorithm in n-dimensions using Lp-distances (with special cases for p=1,2,\n and infinity) as well as for spatial data using the Haversine\n formula, which takes latitude/longitude pairs as inputs and clusters\n based on great circle distances.","Published":"2014-12-16","License":"LGPL-2","snapshot_date":"2017-06-23"} {"Package":"LeafAngle","Version":"1.2-1","Title":"Analysis and Visualization of Plant Leaf Angle Distributions","Description":"A leaf angle distribution is a special distribution that is defined between 0 and 90 degrees, and a number of distributions are used to characterize the leaf angle distribution in real plant canopies. This package includes methods to fit distributions to data, visualize the fit, and compare fits of nine different distributions.","Published":"2014-12-04","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"LeafArea","Version":"0.1.7","Title":"Rapid Digital Image Analysis of Leaf Area","Description":"An interface for the image processing program 'ImageJ', which\n allows a rapid digital image analysis for particle sizes. 
This package includes\n a function to write an 'ImageJ' macro which is optimized for leaf area analysis by\n default.","Published":"2017-03-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"leaflet","Version":"1.1.0","Title":"Create Interactive Web Maps with the JavaScript 'Leaflet'\nLibrary","Description":"Create and customize interactive maps using the 'Leaflet'\n JavaScript library and the 'htmlwidgets' package. These maps can be used\n directly from the R console, from 'RStudio', in Shiny apps and R Markdown\n documents.","Published":"2017-02-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"leaflet.minicharts","Version":"0.4.0","Title":"Mini Charts for Interactive Maps","Description":"Add and modify small charts on an interactive map created with \n package 'leaflet'. These charts can be used to represent multiple \n variables at the same time on a single map.","Published":"2017-06-20","License":"GPL (>= 2) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"leafletCN","Version":"0.2.1","Title":"An R Gallery for China and Other Geojson Choropleth Map in\nLeaflet","Description":"An R gallery for China and other geojson choropleth maps in leaflet. Contains the geojson data for provinces and cities in China.","Published":"2017-02-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"leafletR","Version":"0.4-0","Title":"Interactive Web-Maps Based on the Leaflet JavaScript Library","Description":"Display your spatial data on interactive web-maps using the open-source JavaScript library Leaflet. 'leafletR' provides basic web-mapping functionality to combine vector data and online map tiles from different sources. 
See for more information on Leaflet.","Published":"2016-04-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"LEANR","Version":"1.4.9","Title":"Finds \"Local Subnetworks\" Within an Interaction Network which\nShow Enrichment for Differentially Expressed Genes","Description":"Implements the method described in \"Network-based analysis of omics data: The LEAN method\" [Gwinner Boulday (2016) ]\n Given a protein interaction network and a list of p-values describing a measure of interest (as e.g. differential gene expression) this method\n computes an enrichment p-value for the protein neighborhood of each gene and compares it to a background distribution of randomly drawn p-values.\n The resulting scores are corrected for multiple testing and significant hits are returned in tabular format.","Published":"2016-11-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"LEAP","Version":"0.2","Title":"Constructing Gene Co-Expression Networks for Single-Cell\nRNA-Sequencing Data Using Pseudotime Ordering","Description":"Advances in sequencing technology now allow researchers to capture the expression profiles of individual cells. Several algorithms have been developed to attempt to account for these effects by determining a cell's so-called `pseudotime', or relative biological state of transition. By applying these algorithms to single-cell sequencing data, we can sort cells into their pseudotemporal ordering based on gene expression. 
LEAP (Lag-based Expression Association for Pseudotime-series) then applies a time-series inspired lag-based correlation analysis to reveal linearly dependent genetic associations.","Published":"2016-09-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"LEAPFrOG","Version":"1.0.7","Title":"Likelihood Estimation of Admixture in Parents From Offspring\nGenotypes","Description":"Contains LEAPFrOG Gradient Optimisation and Expectation Maximisation functions for inferring parental admixture proportions from an offspring with SNP genotypes.","Published":"2014-08-13","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"leapp","Version":"1.2","Title":"latent effect adjustment after primary projection","Description":"These functions take a gene expression value matrix, a\n primary covariate vector, and an additional known covariates\n matrix. A two-stage analysis is applied to counter the effects\n of latent variables on the rankings of hypotheses. The\n estimation and adjustment of latent effects are proposed by\n Sun, Zhang and Owen (2011). \"leapp\" is developed in the\n context of microarray experiments, but may be used as a general\n tool for high throughput data sets where dependence may be\n involved.","Published":"2014-07-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"leaps","Version":"3.0","Title":"Regression Subset Selection","Description":"Regression subset selection, including exhaustive search.","Published":"2017-01-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"LeArEst","Version":"0.1.5","Title":"Border and Area Estimation of Data Measured with Additive Error","Description":"Provides methods for estimating borders of a uniform distribution on\n the interval (one-dimensional) and on the elliptical domain (two-dimensional)\n under measurement errors. 
For the one-dimensional case, it also estimates the\n length of the underlying uniform domain and tests the hypothesized length against\n two-sided or one-sided alternatives. For the two-dimensional case, it estimates\n the area of the underlying uniform domain. It works with numerical inputs as well\n as with pictures in JPG format.","Published":"2017-06-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"LearnBayes","Version":"2.15","Title":"Functions for Learning Bayesian Inference","Description":"LearnBayes contains a collection of functions helpful in learning the basic tenets of Bayesian statistical inference. It contains functions for summarizing basic one and two parameter posterior distributions and predictive distributions. It contains MCMC algorithms for summarizing posterior distributions defined by the user. It also contains functions for regression models, hierarchical models, Bayesian tests, and illustrations of Gibbs sampling.","Published":"2014-05-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"learningCurve","Version":"1.1.1","Title":"An Implementation of Crawford's and Wright's Learning Curve\nProduction Functions","Description":"Implements common learning curve production functions. It incorporates\n Crawford's and Wright's learning curve functions to compute unit and cumulative \n block estimates for time (or cost) of units along with an aggregate learning \n curve. 
It also provides delta and error functions, along with functions to compute \n aggregated learning curves and error rates and to visualize learning curves.","Published":"2017-03-03","License":"GPL (>= 2) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"learningr","Version":"0.29","Title":"Data and functions to accompany the book \"Learning R\"","Description":"Crabs in the English channel, deer skulls, English\n monarchs, half-caste Manga characters, Jamaican cities,\n Shakespeare's The Tempest, drugged up cyclists and sexually\n transmitted diseases.","Published":"2013-11-06","License":"Unlimited","snapshot_date":"2017-06-23"} {"Package":"learNN","Version":"0.2.0","Title":"Examples of Neural Networks","Description":"Implementations of several basic neural network concepts in R, as based on posts on \\url{http://qua.st/}.","Published":"2015-09-29","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"learnr","Version":"0.9","Title":"Interactive Tutorials for R","Description":"Create interactive tutorials using R Markdown. Use a combination \n of narrative, figures, videos, exercises, and quizzes to create self-paced\n tutorials for learning about R and R packages.","Published":"2017-06-20","License":"Apache License 2.0 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"learnrbook","Version":"0.0.2","Title":"Datasets for Aphalo's \"Learn R\" Book","Description":"Datasets used in the book \"Learn R ...as you learnt your mother\n tongue\" by Pedro J. 
Aphalo (2017) .","Published":"2017-05-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"learnstats","Version":"0.1.1","Title":"An Interactive Environment for Learning Statistics","Description":"Allows students to use R as an interactive educational environment\n for statistical concepts, ranging from p-values to confidence intervals\n\tto stability in time series.","Published":"2015-06-25","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"lefse","Version":"0.1","Title":"Phylogenetic and Functional Analyses for Ecology","Description":"Utilizing phylogenetic and functional information for the analyses of ecological datasets. The analyses include methods for quantifying the phylogenetic and functional diversity of assemblages.","Published":"2014-09-11","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"LEGIT","Version":"1.1.1","Title":"Latent Environmental & Genetic InTeraction (LEGIT) Model","Description":"Constructs genotype x environment interaction (GxE) models where\n G is a weighted sum of genetic variants (genetic score) and E is a weighted\n sum of environments (environmental score) using the alternating optimization algorithm \n by Jolicoeur-Martineau et al. (2017) . This approach has greatly \n enhanced predictive power over traditional GxE models which include only a single \n genetic variant and a single environmental exposure. Although this approach was \n originally developed for GxE modelling, it is flexible and does not require the use of \n genetic and environmental variables. It can also handle more than 2 latent variables \n (rather than just G and E) and 3-way interactions or more. 
The LEGIT model produces \n highly interpretable results and is very parameter-efficient, so it can even be \n used with small sample sizes (n < 250).","Published":"2017-05-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"leiv","Version":"2.0-7","Title":"Bivariate Linear Errors-In-Variables Estimation","Description":"Estimate the slope and intercept of a bivariate\n\tlinear relationship by calculating a posterior density\n\tthat is invariant to interchange and scaling of the\n\tcoordinates.","Published":"2015-01-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"LeLogicielR","Version":"1.2","Title":"Functions and datasets to accompany the book \"Le logiciel R:\nMaitriser le langage, Effectuer des analyses statistiques\"\n(french)","Description":"This package provides functions and datasets for the\n reader of the book \"Le logiciel R: Maitriser le langage,\n Effectuer des analyses statistiques\". The documentation and help\n pages are written in French.","Published":"2012-12-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"lemon","Version":"0.3.0","Title":"Freshing Up your 'ggplot2' Plots","Description":"Functions for working with legends and axis lines of 'ggplot2',\n facets that repeat axis lines on all panels, and some 'knitr' extensions.","Published":"2017-05-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"LendingClub","Version":"1.0.3","Title":"A Lending Club API Wrapper","Description":"Functions to access Lending Club's API and assist the investor in managing \n\ttheir account. Lending Club is a peer-to-peer lending service where loans are \n\tbroken up into $25 notes that investors buy with the expectation of earning a \n\treturn on the interest. 
You can learn more about the API here:\n\t.","Published":"2017-06-15","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"lero.lero","Version":"0.1","Title":"Generate 'Lero Lero' Quotes","Description":"Generates quotes from 'Lero Lero', a database of meaningless sentences filled with corporate buzzwords, intended to be used as corporate lorem ipsum (see for more information). Unfortunately, quotes are currently Portuguese-only.","Published":"2017-03-23","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"lessR","Version":"3.6.2","Title":"Less Code, More Results","Description":"Each function accomplishes the work of several or more standard R functions. For example, two function calls, Read() and CountAll(), read the data and generate summary statistics for all variables in the data frame, plus histograms and bar charts as appropriate. Other functions provide for descriptive statistics, a comprehensive regression analysis, analysis of variance and t-test, plotting, bar chart, histogram, box plot, density curves, calibrated power curve, reading multiple data formats with the same function call, variable labels, color themes, Trellis graphics and a built-in help system. A confirmatory factor analysis of multiple indicator measurement models is available, as are pedagogical routines for data simulation such as for the Central Limit Theorem. Compatible with 'RStudio' and 'knitr' including generation of R markdown instructions for interpretative output. ","Published":"2017-05-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"lestat","Version":"1.8","Title":"A package for LEarning STATistics","Description":"This package contains some simple objects and functions to do \n statistics using linear models and a Bayesian framework. 
","Published":"2013-11-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"letsR","Version":"3.0","Title":"Tools for Data Handling and Analysis in Macroecology","Description":"R functions for handling, processing, and analyzing geographic\n data on species' distributions and environmental variables.","Published":"2017-06-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"lettercase","Version":"0.13.1","Title":"Utilities for Formatting Strings with Consistent Capitalization,\nWord Breaks and White Space","Description":"Utilities for formatting strings and character\n vectors for capitalization, word breaks and white space. Supported formats\n are: snake_case, spine-case, camelCase, PascalCase, Title Case, UPPERCASE,\n lowercase, Sentence case or combinations thereof. 'lettercase' strives to\n provide a simple, consistent, intuitive and high performing interface.","Published":"2016-03-03","License":"GPL-2 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"lexicon","Version":"0.3.1","Title":"Lexicons for Text Analysis","Description":"A collection of lexical hash tables, dictionaries, and\n word lists.","Published":"2017-04-10","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"LexisPlotR","Version":"0.3","Title":"Plot Lexis Diagrams for Demographic Purposes","Description":"Functions to plot Lexis Diagrams for Demographic purposes.","Published":"2016-04-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"lexRankr","Version":"0.4.1","Title":"Extractive Summarization of Text with the LexRank Algorithm","Description":"An R implementation of the LexRank algorithm described by G. Erkan and D. R. 
Radev (2004) .","Published":"2017-05-10","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"lfactors","Version":"1.0.1","Title":"Factors with Levels","Description":"Provides an extension to factors called 'lfactor' that is similar\n to factors but allows users to refer to 'lfactor' levels by either the level or\n the label.","Published":"2017-04-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"lfda","Version":"1.1.2","Title":"Local Fisher Discriminant Analysis","Description":"Functions for performing and visualizing Local Fisher Discriminant\n Analysis (LFDA), Kernel Fisher Discriminant Analysis (KLFDA), and Semi-supervised\n Local Fisher Discriminant Analysis (SELF).","Published":"2017-01-05","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"LFDR.MLE","Version":"1.0","Title":"Estimation of the Local False Discovery Rates by Type II Maximum\nLikelihood Estimation","Description":"Suite of R functions for the estimation of the local false discovery rate (LFDR) using Type II maximum likelihood estimation (MLE).","Published":"2015-08-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"lfe","Version":"2.5-1998","Title":"Linear Group Fixed Effects","Description":"Transforms away factors with many levels prior to doing an OLS.\n Useful for estimating linear models with multiple group fixed effects, and for\n estimating linear models which use factors with many levels as pure control\n variables. 
Includes support for instrumental variables, conditional F statistics\n for weak instruments, robust and multi-way clustered standard errors, as well as\n limited mobility bias correction.","Published":"2016-04-19","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"lfl","Version":"1.4","Title":"Linguistic Fuzzy Logic","Description":"Various algorithms related to linguistic fuzzy logic: mining for linguistic fuzzy association\n rules, composition of fuzzy relations, performing perception-based logical deduction (PbLD), \n and forecasting time-series using fuzzy rule-based ensemble (FRBE).","Published":"2017-04-25","License":"GPL (>= 3.0)","snapshot_date":"2017-06-23"} {"Package":"lfstat","Version":"0.9.4","Title":"Calculation of Low Flow Statistics for Daily Stream Flow Data","Description":"The \"Manual on Low-flow Estimation and Prediction\", published by\n the World Meteorological Organisation (WMO), gives a comprehensive summary on\n how to analyse stream flow data focusing on low-flows. This package provides\n functions to compute the described statistics and produce plots similar to the\n ones in the manual.","Published":"2016-08-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"lga","Version":"1.1-1","Title":"Tools for linear grouping analysis (LGA)","Description":"Tools for linear grouping analysis. Three user-level\n functions: gap, rlga and lga.","Published":"2012-01-13","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"lgarch","Version":"0.6-2","Title":"Simulation and Estimation of Log-GARCH Models","Description":"Simulation and estimation of univariate and multivariate log-GARCH models. The main functions of the package are: lgarchSim(), mlgarchSim(), lgarch() and mlgarch(). 
The first two functions simulate from a univariate and a multivariate log-GARCH model, respectively, whereas the latter two estimate a univariate and multivariate log-GARCH model, respectively.","Published":"2015-09-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"lgcp","Version":"1.4","Title":"Log-Gaussian Cox Process","Description":"Spatial and spatio-temporal modelling of point patterns using the\n log-Gaussian Cox process. Bayesian inference for spatial, spatiotemporal,\n multivariate and aggregated point processes using Markov chain Monte Carlo.","Published":"2017-04-12","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"LGEWIS","Version":"0.2","Title":"Tests for Genetic Association/Gene-Environment Interaction in\nLongitudinal Gene-Environment-Wide Interaction Studies","Description":"Functions for testing the genetic association/gene-environment interaction in longitudinal gene-environment-wide interaction studies. Generalized score type tests are used for set based analyses. 
Then GEE based score tests are applied to all single variants within the defined set.","Published":"2015-10-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"LGRF","Version":"1.0","Title":"Set-Based Tests for Genetic Association in Longitudinal Studies","Description":"Functions for the longitudinal genetic random field method (He et al., 2015, ) to test the association between a longitudinally measured quantitative outcome and a set of genetic variants in a gene/region.","Published":"2015-09-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"lgtdl","Version":"1.1.4","Title":"A Set of Methods for Longitudinal Data Objects","Description":"A very simple implementation of a class for \n\t longitudinal data.","Published":"2017-01-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"lhmixr","Version":"0.1.0","Title":"Fit Sex-Specific Life History Models with Missing\nClassifications","Description":"Fits sex-specific life-history models for fish and other taxa where some of the individuals have unknown sex.","Published":"2017-05-11","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"lhs","Version":"0.14","Title":"Latin Hypercube Samples","Description":"Provides a number of methods for creating and augmenting Latin Hypercube Samples.","Published":"2016-08-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"libamtrack","Version":"0.6.3","Title":"Computational Routines for Proton and Ion Radiotherapy","Description":"R interface to the open-source, ANSI C library 'libamtrack' (http://libamtrack.dkfz.org). 'libamtrack' provides computational routines for the prediction of detector response and radiobiological efficiency in heavy charged particle beams. It is designed for research in proton and ion dosimetry and radiotherapy. 'libamtrack' also includes many auxiliary physics routines for proton and ion beams. Original package and C-to-R conversion routines developed by Felix A. 
Klein.","Published":"2015-12-12","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"libcoin","Version":"0.9-2","Title":"Linear Test Statistics for Permutation Inference","Description":"Basic infrastructure for linear test statistics and permutation\n inference in the framework of Strasser and Weber (1999) . \n This package must not be used by end-users. CRAN package 'coin' implements all \n user interfaces and is ready to be used by anyone.","Published":"2017-04-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"LiblineaR","Version":"2.10-8","Title":"Linear Predictive Models Based on the 'LIBLINEAR' C/C++ Library","Description":"A wrapper around the 'LIBLINEAR' C/C++ library for machine\n learning (available at\n ). 'LIBLINEAR' is\n a simple library for solving large-scale regularized linear\n classification and regression. It currently supports\n L2-regularized classification (such as logistic regression,\n L2-loss linear SVM and L1-loss linear SVM) as well as\n L1-regularized classification (such as L2-loss linear SVM and\n logistic regression) and L2-regularized support vector\n regression (with L1- or L2-loss). The main features of\n LiblineaR include multi-class classification (one-vs-the rest,\n and Crammer & Singer method), cross validation for model\n selection, probability estimates (logistic regression only) or\n weights for unbalanced data. 
The estimation of the models is\n particularly fast as compared to other libraries.","Published":"2017-02-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"LiblineaR.ACF","Version":"1.94-2","Title":"Linear Classification with Online Adaptation of Coordinate\nFrequencies","Description":"Solving the linear SVM problem with coordinate descent\n is very efficient and is implemented in one of the most often used packages,\n 'LIBLINEAR' (available at http://www.csie.ntu.edu.tw/~cjlin/liblinear).\n It has been shown that the uniform selection of coordinates can be\n accelerated by using an online adaptation of coordinate frequencies (ACF).\n This package implements ACF and is based on 'LIBLINEAR' as well as\n the 'LiblineaR' package ().\n It currently supports L2-regularized L1-loss as well as L2-loss linear SVM.\n Similar to 'LIBLINEAR', multi-class classification (one-vs-the rest, and\n Crammer & Singer method) and cross validation for model selection are\n supported. The training of the models based on ACF is much faster than\n standard 'LIBLINEAR' on many problems.","Published":"2016-01-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Libra","Version":"1.5","Title":"Linearized Bregman Algorithms for Generalized Linear Models","Description":"Efficient procedures for fitting the regularization path\n for linear, binomial, multinomial, Ising and Potts models with lasso,\n group lasso or column lasso (only for multinomial) penalty.\n The package uses the Linearized Bregman Algorithm to solve the\n regularization path through iterations. 
A Bregman Inverse Scale Space Differential\n Inclusion solver is also provided for the linear model with lasso penalty.","Published":"2016-02-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"libsoc","Version":"0.5","Title":"Read, Create and Write 'PharmML' Standard Output (so) XML Files","Description":"Handle 'PharmML' (Pharmacometrics Markup Language) standard output (SO) XML files.\n SO files can be created, read, manipulated and written through a\n data binding from the XML structure to a tree structure of R objects.","Published":"2017-02-11","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"libstableR","Version":"1.0","Title":"Fast and Accurate Evaluation, Random Number Generation and\nParameter Estimation of Skew Stable Distributions","Description":"Tools for fast and accurate evaluation of skew stable distributions (CDF, PDF and quantile functions), random number generation and parameter estimation.","Published":"2017-06-01","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"LICORS","Version":"0.2.0","Title":"Light Cone Reconstruction of States - Predictive State\nEstimation From Spatio-Temporal Data","Description":"Estimates predictive states from spatio-temporal data and\n consequently can provide provably optimal forecasts.\n Currently this implementation\n supports an N-dimensional spatial grid observed over equally spaced time\n intervals. E.g. a video is a 2D spatial system observed over time. This\n package implements mixed LICORS, has plotting tools (for (1+1)D and (2+1)D\n systems), and methods for optimal forecasting. 
Due to memory limitations\n it is recommended to only analyze (1+1)D systems.","Published":"2013-11-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"LICurvature","Version":"0.1.1","Title":"Sensitivity Analysis for Case Weight in Normal Linear Regression","Description":"This package presents a general method for assessing the\n local influence of minor perturbations of case weights for\n linear regression models.","Published":"2013-06-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"lidR","Version":"1.2.1","Title":"Airborne LiDAR Data Manipulation and Visualization for Forestry\nApplications","Description":"Airborne LiDAR (Light Detection and Ranging) interface for data\n manipulation and visualization. Read/write 'las' and 'laz' files, computation\n of metrics in an area-based approach, point filtering, artificial point reduction,\n classification from geographic data, normalization, individual tree segmentation\n and other manipulations.","Published":"2017-06-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"lifecontingencies","Version":"1.2.3","Title":"Financial and Actuarial Mathematics for Life Contingencies","Description":"Classes and methods that allow the user to manage life tables and\n actuarial tables (also multiple decrement tables). Moreover, functions to easily\n perform demographic, financial and actuarial mathematics on life contingencies\n insurances calculations are contained therein.","Published":"2017-02-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"lifecourse","Version":"2.0","Title":"Quantification of Lifecourse Fluidity","Description":"Provides built-in datasets and three functions.\n These functions are mobility_index, nonStanTest and linkedLives. The mobility_index\n function facilitates the calculation of lifecourse fluidity, whilst the nonStanTest and the \n linkedLives functions allow the user to determine the probability that the observed sequence data \n was due to chance. 
The linkedLives function acknowledges the fact that some individuals may\n have identical sequences.\n The datasets available provide sequence data on marital status (maritalData) \n and mobility (mydata) for a selected group of individuals from the British Household Panel Study\n (BHPS). In addition, personal and house IDs for 100 individuals are provided in a \n third dataset (myHouseID) from the BHPS. ","Published":"2016-09-13","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"LifeHist","Version":"1.0-1","Title":"Life History Models of Individuals","Description":"Likelihood-based estimation of individual growth and sexual maturity models for organisms, usually fish and invertebrates. It includes methods for data organization, plotting standard exploratory and analytical plots, and predictions.","Published":"2015-09-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"lifelogr","Version":"0.1.0","Title":"Life Logging","Description":"Provides a framework for combining self-data (exercise, sleep, etc.) from multiple sources (fitbit, Apple Health), creating visualizations, and experimenting on oneself.","Published":"2017-05-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"LifeTables","Version":"1.0","Title":"Two-Parameter HMD Model Life Table System","Description":"Functions supplied in this package will implement\n discriminant analysis to select an appropriate life table\n family, select an appropriate alpha level based on a desired\n life expectancy at birth, produce a model mortality pattern\n based on family and level as well as plot the results.","Published":"2015-08-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"lift","Version":"0.0.2","Title":"Compute the Top Decile Lift and Plot the Lift Curve","Description":"Compute the top decile lift and plot the lift curve. 
Cumulative lift curves are also supported.","Published":"2015-10-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"liftLRD","Version":"1.0-5","Title":"Wavelet Lifting Estimators of the Hurst Exponent for Regularly\nand Irregularly Sampled Time Series","Description":"Implementations of Hurst exponent estimators based on the relationship between wavelet lifting scales and wavelet energy. ","Published":"2016-09-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"liftr","Version":"0.5","Title":"Containerize R Markdown Documents","Description":"Persistent reproducible reporting by containerization of R Markdown documents.","Published":"2017-04-12","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"LightningR","Version":"1.0.2","Title":"Tools for Communication with Lightning-Viz Server","Description":"The purpose of this package is to make the lightning-viz server accessible from R. The\n server itself can be found at http://lightning-viz.org/ and is required to work with this package. The package\n by itself cannot and will not create any visualizations.","Published":"2015-12-14","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"lightsout","Version":"0.3","Title":"Implementation of the 'Lights Out' Puzzle Game","Description":"Lights Out is a puzzle game consisting of a grid of lights\n that are either on or off. Pressing any light will toggle it and its\n adjacent lights. The goal of the game is to switch all the lights off. This\n package provides an interface to play the game on different board sizes,\n both through the command line or with a visual application. Puzzles can\n also be solved using the automatic solver included. 
View a demo\n online at http://daattali.com/shiny/lightsout/.","Published":"2016-07-26","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"LIHNPSD","Version":"0.2.1","Title":"Poisson Subordinated Distribution","Description":"A Poisson Subordinated Distribution to capture major\n leptokurtic features in log-return time series of financial\n data.","Published":"2012-04-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"likelihood","Version":"1.7","Title":"Methods for Maximum Likelihood Estimation","Description":"Tools for maximum likelihood estimation of parameters \n of scientific models.","Published":"2015-02-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"likelihoodAsy","Version":"0.45","Title":"Functions for Likelihood Asymptotics","Description":"Functions for computing the r and r* statistics for inference on an arbitrary scalar function of model parameters, plus some code for the (modified) profile likelihood.","Published":"2016-11-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"likeLTD","Version":"6.2.1","Title":"Tools to Evaluate DNA Profile Evidence","Description":"Tools to determine DNA profile Weight of Evidence. \n For further information see the 'likeLTD' guide provided, \n Balding, D.J. (2013) ,\n\t or Steele, C.D. et al. (2016) .","Published":"2017-06-11","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"likert","Version":"1.3.5","Title":"Analysis and Visualization Likert Items","Description":"An approach to analyzing Likert response items, with an emphasis on visualizations. \n The stacked bar plot is the preferred method for presenting Likert results. Tabular results\n are also implemented along with density plots to assist researchers in determining whether \n Likert responses can be used quantitatively instead of qualitatively. 
See the likert(), \n summary.likert(), and plot.likert() functions to get started.","Published":"2016-12-31","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"LIM","Version":"1.4.6","Title":"Linear Inverse Model examples and solution methods","Description":"Functions that read and solve linear inverse problems (food web problems, linear programming problems).\n These problems find solutions to linear or quadratic functions:\n min or max (f(x)), where f(x) = ||Ax-b||^2 or f(x) = sum(ai*xi)\n subject to equality constraints Ex=f and inequality constraints Gx>=h. Uses package limSolve.","Published":"2014-12-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"limitplot","Version":"1.2","Title":"Jitter/CI Plot with Ordered Points Below the Limit of Detection","Description":"Values below a specified limit of detection are stacked in\n rows in order to reduce overplotting and create a clear\n graphical representation of your data.","Published":"2011-07-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"limSolve","Version":"1.5.5.2","Title":"Solving Linear Inverse Models","Description":"Functions that (1) find the minimum/maximum of a linear or quadratic function:\n min or max (f(x)), where f(x) = ||Ax-b||^2 or f(x) = sum(a_i*x_i)\n subject to equality constraints Ex=f and/or inequality constraints Gx>=h,\n (2) sample an underdetermined- or overdetermined system Ex=f subject to Gx>=h, and if applicable Ax~=b, \n (3) solve a linear system Ax=B for the unknown x. It includes banded and tridiagonal linear systems. \n The package calls Fortran functions from 'LINPACK'.","Published":"2017-01-17","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"linbin","Version":"0.1.2","Title":"Binning and Plotting of Linearly Referenced Data","Description":"Short for 'linear binning', the linbin package provides functions\n for manipulating, binning, and plotting linearly referenced data. 
Although\n developed for data collected on river networks, it can be used with any interval\n or point data referenced to a 1-dimensional coordinate system. Flexible bin\n generation and batch processing make it easy to compute and visualize variables\n at multiple scales, useful for identifying patterns within and between variables\n and investigating the influence of scale of observation on data interpretation.","Published":"2017-03-14","License":"AGPL-3","snapshot_date":"2017-06-23"} {"Package":"LinCal","Version":"1.0","Title":"Static Univariate Frequentist and Bayesian Linear Calibration","Description":"Estimates and confidence/credible intervals for an unknown\n regressor x0 given an observed y0.","Published":"2014-11-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"LindenmayeR","Version":"0.1.6","Title":"Functions to Explore L-Systems (Lindenmayer Systems)","Description":"L-systems or Lindenmayer systems are parallel rewriting systems which can\n be used to simulate biological forms and certain kinds of fractals.\n Briefly, in an L-system a series of symbols in a string are replaced\n iteratively according to rules to give a more complex string. Eventually,\n the symbols are translated into turtle graphics for plotting. 
Wikipedia has\n a very good introduction: en.wikipedia.org/wiki/L-system. This package\n provides basic functions for exploring L-systems.","Published":"2015-07-16","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"lindia","Version":"0.9","Title":"Automated Linear Regression Diagnostic","Description":"Provides a set of streamlined functions that allow\n easy generation of linear regression diagnostic plots necessary \n for checking linear model assumptions.\n This package is meant for easy scheming of linear \n regression diagnostics, while preserving the merits of \n \"The Grammar of Graphics\" as implemented in 'ggplot2'.\n See the 'ggplot2' website for more information regarding the\n specific capabilities of graphics.","Published":"2017-05-10","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"LindleyR","Version":"1.1.0","Title":"The Lindley Distribution and Its Modifications","Description":"Computes the probability density, the cumulative distribution, the quantile and\n the hazard rate functions and generates random deviates from the discrete and continuous\n Lindley distribution as well as for 19 of its modifications. 
It also generates censored\n random deviates from any probability distribution available in R.","Published":"2016-06-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"linear.tools","Version":"1.3.0","Title":"Manipulate Formulas and Evaluate Marginal Effects","Description":"Provides tools to manipulate formulas, such as getting x, y or contrasts from the model/formula, and functions to evaluate and check the marginal effects of a linear model.","Published":"2016-07-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"LinearizedSVR","Version":"1.3","Title":"Linearized Support Vector Regression","Description":"Train and predict using fast prototype-based Linearized\n Support-Vector Regression methods.","Published":"2014-08-01","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"LinearRegressionMDE","Version":"1.0","Title":"Minimum Distance Estimation in Linear Regression Model","Description":"Consider linear regression model Y = Xb + error where the distribution function of errors is unknown, but errors are independent and symmetrically distributed. The package contains a function named LRMDE which takes Y and X as input and returns minimum distance estimator of parameter b in the model. 
","Published":"2015-09-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"linERR","Version":"1.0","Title":"Linear Excess Relative Risk Model","Description":"Fits a linear excess relative risk model by maximum likelihood, possibly including several variables and allowing for lagged exposures.","Published":"2016-02-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"lineup","Version":"0.37-6","Title":"Lining Up Two Sets of Measurements","Description":"Tools for detecting and correcting sample mix-ups between two sets\n of measurements, such as between gene expression data on two tissues.","Published":"2015-07-29","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"lingtypology","Version":"1.0.5","Title":"Linguistic Typology and Mapping","Description":"Provides R with the Glottolog database and some additional tools for linguistic mapping. The Glottolog database contains the catalogue of languages of the world. This package helps researchers to make linguistic maps, following the philosophy of the Cross-Linguistic Linked Data project, which facilitates uniform access to the data across publications. A tutorial for this package is available on GitHub pages and in the package vignette. Maps created by this package can be used both for research and for teaching linguistics.","Published":"2017-06-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"link2GI","Version":"0.1-0","Title":"Linking GIS, Remote Sensing and Other Command Line Tools","Description":"Functions to simplify the linking of open source GIS and remote sensing related command line interfaces.","Published":"2017-01-22","License":"GPL (>= 3) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"LinkageMapView","Version":"2.1.0","Title":"Plot Linkage Group Maps with Quantitative Trait Loci","Description":"Produces high resolution, publication ready linkage maps\n and quantitative trait loci maps. 
Input can be output from 'R/qtl',\n simple text or comma delimited files. Output is currently\n a portable document file.","Published":"2017-05-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"linkcomm","Version":"1.0-11","Title":"Tools for Generating, Visualizing, and Analysing Link\nCommunities in Networks","Description":"Link communities reveal the nested and overlapping structure in networks, and uncover the key nodes that form connections to multiple communities. linkcomm provides a set of tools for generating, visualizing, and analysing link communities in networks of arbitrary size and type. The linkcomm package also includes tools for generating, visualizing, and analysing Overlapping Cluster Generator (OCG) communities.","Published":"2014-08-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"LinkedMatrix","Version":"1.2.0","Title":"Column-Linked and Row-Linked Matrices","Description":"Matrices implemented as collections of matrix-like nodes, linked by\n columns or rows.","Published":"2016-09-01","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"linkim","Version":"0.1","Title":"Linkage information based genotype imputation method","Description":"A linkage information based method for imputing missing diploid genotypes","Published":"2014-01-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"linkR","Version":"1.1.1","Title":"3D Lever and Linkage Mechanism Modeling","Description":"Creates kinematic and static force models of 3D levers and linkage mechanisms, with particular application to the fields of engineering and biomechanics.","Published":"2016-10-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"linLIR","Version":"1.1","Title":"linear Likelihood-based Imprecise Regression","Description":"This package implements the methodology of\n Likelihood-based Imprecise Regression (LIR) for the case of\n linear regression with interval 
data.","Published":"2012-11-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"linprog","Version":"0.9-2","Title":"Linear Programming / Optimization","Description":"This package can be used to solve Linear Programming /\n Linear Optimization problems by using the simplex algorithm.","Published":"2012-10-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"LinRegInteractive","Version":"0.3-1","Title":"Interactive Interpretation of Linear Regression Models","Description":"Interactive visualization of effects, response functions \n and marginal effects for different kinds of regression models. In this version \n linear regression models, generalized linear models, generalized additive\n models and linear mixed-effects models are supported. \n Major features are the interactive approach and the handling of the effects of categorical covariates: \n if two or more factors are used as covariates every combination of the levels of each \n factor is treated separately. The automatic calculation of \n marginal effects and a number of possibilities to customize the graphical output \n are useful features as well.","Published":"2015-04-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"LINselect","Version":"1.1","Title":"Selection of Linear Estimators","Description":"Estimate the mean of a Gaussian vector by choosing among a large collection of estimators. In particular it solves the problem of variable selection by choosing the best predictor among predictors emanating from different methods such as lasso, elastic-net, adaptive lasso, pls, randomForest. 
Moreover, it can be applied for choosing the tuning parameter in a Gauss-lasso procedure.","Published":"2017-04-20","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"lintools","Version":"0.1.1.4","Title":"Manipulation of Linear Systems of (in)Equalities","Description":"Variable elimination (Gaussian elimination, Fourier-Motzkin elimination), \n Moore-Penrose pseudoinverse, reduction to reduced row echelon form, value substitution, \n projecting a vector on the convex polytope described by a system of (in)equations, \n simplify systems by removing spurious columns and rows and collapse implied equalities, \n test if a matrix is totally unimodular, compute variable ranges implied by linear\n (in)equalities.","Published":"2017-06-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"lintr","Version":"1.0.0","Title":"Static R Code Analysis","Description":"Checks adherence to a given style, syntax errors and possible\n semantic issues. Supports on the fly checking of R code edited with Emacs,\n Vim and Sublime Text.","Published":"2016-04-16","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"liqueueR","Version":"0.0.1","Title":"Implements Queue, PriorityQueue and Stack Classes","Description":"Provides three classes: Queue, PriorityQueue and Stack. Queue is just a\n \"plain vanilla\" FIFO queue; PriorityQueue orders items according to priority. Stack implements LIFO.","Published":"2016-08-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"liquidSVM","Version":"1.0.1","Title":"A Fast and Versatile SVM Package","Description":"Support vector machines (SVMs) and related kernel-based learning\n algorithms are a well-known class of machine learning algorithms, for non-\n parametric classification and regression. 
liquidSVM is an implementation of\n SVMs whose key features are: fully integrated hyper-parameter selection, extreme\n speed on both small and large data sets, inclusion of a variety of different\n classification and regression scenarios, and full flexibility for experts.","Published":"2017-03-02","License":"AGPL-3","snapshot_date":"2017-06-23"} {"Package":"lira","Version":"1.2.0","Title":"LInear Regression in Astronomy","Description":"Performs Bayesian linear regression in astronomy. The method accounts for heteroscedastic errors in both the independent and the dependent variables, intrinsic scatters (in both variables), time evolution of slopes, normalization and scatters, Malmquist and Eddington bias, and break of linearity. The posterior distribution of the regression parameters is sampled with a Gibbs method exploiting the JAGS library.","Published":"2016-03-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"liso","Version":"0.2","Title":"Fitting lasso penalised additive isotone models","Description":"Fits lasso (total variation) penalised additive isotone models","Published":"2011-11-17","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"lisp","Version":"0.1","Title":"List-processing à la SRFI-1","Description":"Though SRFI-1 scopes both list-processing and higher-order\n programming, we'll save some list-orthogonal functions for the\n `functional' package; this is freely a mixture of\n implementation and API.","Published":"2012-01-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"lisrelToR","Version":"0.1.4","Title":"Import output from LISREL into R","Description":"This is an unofficial package aimed at automating the\n import of LISREL output in R. 
Neither this package nor its maintainer\n is in any way affiliated with the creators of LISREL or\n SSI, Inc.","Published":"2013-05-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"list","Version":"8.3","Title":"Statistical Methods for the Item Count Technique and List\nExperiment","Description":"Allows researchers to conduct multivariate\n statistical analyses of survey data with list experiments. This\n survey methodology is also known as the item count technique or\n the unmatched count technique and is an alternative to the commonly\n used randomized response method. The package implements the methods\n developed by Imai (2011) , \n Blair and Imai (2012) , \n Blair, Imai, and Lyall (2013) , \n Imai, Park, and Greene (2014) ,\n Aronow, Coppock, Crawford, and Green (2015) , \n and Chou, Imai, and Rosenfeld (2016) \n . \n This includes a Bayesian MCMC implementation of regression for the \n standard and multiple sensitive item list experiment designs and a \n random effects setup, a Bayesian MCMC hierarchical regression model \n with up to three hierarchical groups, the combined list experiment \n and endorsement experiment regression model, a joint model of the \n list experiment that enables the analysis of the list experiment as \n a predictor in outcome regression models, and a method for combining \n list experiments with direct questions. In addition, the package\n implements the statistical test that is designed to detect\n certain failures of list experiments, and a placebo test\n for the list experiment using data from direct questions.","Published":"2016-12-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"listdtr","Version":"1.0","Title":"List-Based Rules for Dynamic Treatment Regimes","Description":"Construction of list-based rules, i.e. 
a list of if-then clauses, to estimate the optimal dynamic treatment regime.","Published":"2016-11-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"listenv","Version":"0.6.0","Title":"Environments Behaving (Almost) as Lists","Description":"List environments are environments that have list-like properties. For instance, the elements of a list environment are ordered and can be accessed and iterated over using index subsetting, e.g. 'x <- listenv(a=1, b=2); for (i in seq_along(x)) x[[i]] <- x[[i]]^2; y <- as.list(x)'.","Published":"2015-12-28","License":"LGPL (>= 2.1)","snapshot_date":"2017-06-23"} {"Package":"LIStest","Version":"2.1","Title":"Tests of independence based on the Longest Increasing\nSubsequence","Description":"Tests for independence between X and Y computed from a paired sample (x1,y1),...(xn,yn) of (X,Y), using one of the following statistics (a) the Longest Increasing Subsequence (Ln), (b) JLn, a Jackknife version of Ln or (c) JLMn, a Jackknife version of the longest monotonic subsequence. This family of tests can be applied under the assumption of continuity of X and Y.","Published":"2014-03-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"listless","Version":"0.0-2","Title":"Convert Lists to Tidy Data Frames","Description":"A lightweight utility for converting lists to tidy data frames.","Published":"2016-08-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"listviewer","Version":"1.4.0","Title":"'htmlwidget' for Interactive Views of R Lists","Description":"R lists, especially nested lists, can be very difficult to\n visualize or represent. Sometimes 'str()' is not enough, so this suite of\n htmlwidgets is designed to help see, understand, and maybe even modify your R\n lists. 
The function 'reactjson()' requires a non-CRAN package\n 'reactR' that can be installed from .","Published":"2016-11-03","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"listWithDefaults","Version":"1.2.0","Title":"List with Defaults","Description":"Provides a function that, as an alternative to base::list, allows\n default values to be inherited from another list.","Published":"2017-06-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"liteq","Version":"1.0.0","Title":"Lightweight Portable Message Queue Using 'SQLite'","Description":"Temporary and permanent message queues for R. Built on top of\n 'SQLite' databases. 'SQLite' provides locking, and makes it possible\n to detect crashed consumers. Crashed jobs can be automatically marked\n as \"failed\", or put in the queue again, potentially a limited number of times.","Published":"2017-01-27","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"littler","Version":"0.3.2","Title":"R at the Command-Line via 'r'","Description":"A scripting and command-line front-end\n is provided by 'r' (aka 'littler') as a lightweight binary wrapper around\n the GNU R language and environment for statistical computing and graphics.\n While R can be used in batch mode, the r binary adds full support for\n both 'shebang'-style scripting (i.e. using a hash-mark-exclamation-path\n expression as the first line in scripts) as well as command-line use in\n standard Unix pipelines. 
In other words, r provides the R language without\n the environment.","Published":"2017-02-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"liureg","Version":"1.0","Title":"Liu Regression with Liu Biasing Parameters and Statistics","Description":"Estimation and testing of linear Liu regression coefficients with\n different Liu-related measures such as MSE, R-squared, etc.","Published":"2017-03-08","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"livechatR","Version":"0.1.0","Title":"R Wrapper for LiveChat REST API","Description":"Provides a wrapper around LiveChat's API. The R functions allow\n one to extract chat sessions, raw text of chats between agents and customers, and\n events.","Published":"2016-02-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ljr","Version":"1.4-0","Title":"Logistic Joinpoint Regression","Description":"Fits and tests logistic joinpoint models.","Published":"2016-05-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"llama","Version":"0.9.1","Title":"Leveraging Learning to Automatically Manage Algorithms","Description":"Provides functionality to train and evaluate algorithm selection models for portfolios.","Published":"2015-12-05","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"lle","Version":"1.1","Title":"Locally linear embedding","Description":"LLE is a non-linear algorithm for mapping high-dimensional\n data into a lower dimensional (intrinsic) space. 
This package\n provides the main functions to perform the LLE algorithm\n including some enhancements like subset selection, calculation\n of the intrinsic dimension, etc.","Published":"2012-03-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"lllcrc","Version":"1.2","Title":"Local Log-linear Models for Capture-Recapture","Description":"Applies local log-linear capture-recapture models (LLLMs) for\n closed populations, as described in the doctoral thesis of Zachary Kurtz.\n The method is relevant when there are 3-5 capture occasions, with auxiliary\n covariates available for all capture occasions. As part of estimating the\n number of missing population units, the method estimates the \"rate of\n missingness\" as it varies over the covariate space. In addition,\n user-friendly functions are provided to recreate (approximately) the method\n of Zwane and van der Heijden (2004), which applied the VGAM package\n in a way that is closely related to LLLMs.","Published":"2014-10-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"LLSR","Version":"0.0.2.0","Title":"Data Analysis of Liquid-Liquid Systems","Description":"Analyses experimental data from liquid-liquid phase diagrams and\n provides a simple way to obtain its parameters and a simplified report. Designed\n initially to analyse Aqueous Two-Phase Systems, the package will include (every\n other update) new functions in order to comprise useful tools in liquid-liquid\n analysis.","Published":"2016-02-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"lm.beta","Version":"1.5-1","Title":"Add Standardized Regression Coefficients to lm-Objects","Description":"Adds standardized regression coefficients to objects created by lm. 
Also extends the S3 methods print, summary and coef with additional boolean argument standardized.","Published":"2014-12-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"lm.br","Version":"2.9.3","Title":"Linear Model with Breakpoint","Description":"Exact significance tests for a changepoint in linear or multiple linear regression. \n Confidence regions with exact coverage probabilities for the changepoint. The method is from\n Knowles, Siegmund and Zhang (1991) .","Published":"2017-04-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"lme4","Version":"1.1-13","Title":"Linear Mixed-Effects Models using 'Eigen' and S4","Description":"Fit linear and generalized linear mixed-effects models.\n The models and their components are represented using S4 classes and\n methods. The core computational algorithms are implemented using the\n 'Eigen' C++ library for numerical linear algebra and 'RcppEigen' \"glue\".","Published":"2017-04-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"lmec","Version":"1.0","Title":"Linear Mixed-Effects Models with Censored Responses","Description":"This package includes a function to fit a linear\n mixed-effects model in the formulation described in Laird and\n Ware (1982) but allowing for censored normal responses. 
In this\n version, the within-group errors are assumed independent and\n identically distributed.","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"lmem.gwaser","Version":"0.1.0","Title":"Linear Mixed Effects Models for Genome-Wide Association Studies","Description":"Performs Genome-Wide Association analysis for diverse populations and\n for multi-environment and multi-trait analysis using linear mixed models.","Published":"2016-05-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"lmem.qtler","Version":"0.1.1","Title":"Linear Mixed Effects Models for QTL Mapping for Multienvironment\nand Multitrait Analysis","Description":"Performs QTL mapping analysis for balanced and for\n multi-environment and multi-trait analysis using mixed models.\n Balanced population, single trait, single environment QTL mapping is\n performed through marker-regression (Haley and Knott (1992) ,\n Martinez and Curnow (1992) ),\n while multi-environment and multi-trait QTL\n mapping is performed through linear mixed models.\n These functions could use any of the following populations: double haploid,\n F2, recombinant inbred lines, back-cross, and 4-way crosses.\n Performs a Single Marker Analysis, a Single Interval Mapping,\n or a Composite Interval Mapping analysis, and then constructs a final model\n with all of the relevant QTL.","Published":"2016-07-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"lmeNB","Version":"1.3","Title":"Compute the Personalized Activity Index Based on a Negative\nBinomial Model","Description":"The functions in this package implement the safety monitoring procedures proposed in the paper titled \"Detection of unusual increases in MRI lesion counts in individual multiple sclerosis patients\" by Zhao, Y., Li, D.K.B., Petkau, A.J., Riddehough, A., Traboulsee, A., published in Journal of the American Statistical Association in 2013. 
The procedure first models longitudinally collected count variables with a negative binomial mixed-effect regression model. To account for the correlation among repeated measures from the same patient, the model has a subject-specific random intercept, which can be modelled with a gamma or log-normal distribution. One can also choose the semi-parametric option which does not assume any distribution for the random effect. These mixed-effect models could be useful beyond the application of the safety monitoring. The maximum likelihood methods are used to estimate the unknown fixed effect parameters of the model. Based on the fitted model, the personalized activity index is computed for each patient. Lastly, this package is a companion to the R package lmeNBBayes, which contains the functions to compute the Personalized Activity Index in a Bayesian framework.","Published":"2015-02-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"lmeNBBayes","Version":"1.3.1","Title":"Compute the Personalized Activity Index Based on a Flexible\nBayesian Negative Binomial Model","Description":"The functions in this package implement the safety monitoring procedures proposed in the paper titled \"A flexible mixed effect negative binomial regression model for detecting unusual increases in MRI lesion counts in individual multiple sclerosis patients\" by Kondo, Y., Zhao, Y. and Petkau, A.J. The procedure first models longitudinally collected count variables with a negative binomial mixed-effect regression model. To account for the correlation among repeated measures from the same patient, the model has a subject-specific random intercept, which is modelled with an infinite mixture of Beta distributions, a very flexible distribution that theoretically allows any form. The package also has the option of a single beta distribution for random effects. These mixed-effect models could be useful beyond the application of the safety monitoring. 
The inference is based on MCMC samples, and this package contains a Gibbs sampler to sample from the posterior distribution of the negative binomial mixed-effect regression model. Based on the fitted model, the personalized activity index is computed for each patient. Lastly, this package is a companion to the R package lmeNB, which contains the functions to compute the Personalized Activity Index in the frequentist framework.","Published":"2015-02-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"lmenssp","Version":"1.2","Title":"Linear Mixed Effects Models with Non-Stationary Stochastic\nProcesses","Description":"Contains functions to estimate model parameters and filter, smooth and forecast random effects coefficients for mixed models with stationary and non-stationary stochastic processes under multivariate normal and t response distributions, diagnostic checks, bootstrap standard error calculation, etc.","Published":"2016-07-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"LMERConvenienceFunctions","Version":"2.10","Title":"Model Selection and Post-hoc Analysis for (G)LMER Models","Description":"The main function of the package is to perform backward selection of fixed effects, forward fitting of the random effects, and post-hoc analysis using parallel capabilities. Other functionality includes the computation of ANOVAs with upper- or lower-bound p-values and R-squared values for each model term, model criticism plots, data trimming on model residuals, and data visualization. The data to run examples is contained in package LCF_data.","Published":"2015-01-31","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"lmeresampler","Version":"0.1.0","Title":"Bootstrap Methods for Nested Linear Mixed-Effects Models","Description":"Bootstrap routines for nested linear mixed effects models fit using\n either 'lme4' or 'nlme'. 
The provided 'bootstrap()' function implements the\n parametric, semi-parametric (i.e., CGR), residual, cases, and random effect\n block (REB) bootstrap procedures.","Published":"2016-07-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"lmerTest","Version":"2.0-33","Title":"Tests in Linear Mixed Effects Models","Description":"Different kinds of tests for linear mixed effects models as implemented \n in 'lme4' package are provided. The tests comprise types I - III F tests \n for fixed effects, LR tests for random effects. \n The package also provides the calculation of population means for fixed factors \n with confidence intervals and corresponding plots. Finally the backward \n elimination of non-significant effects is implemented.","Published":"2016-12-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"lmeSplines","Version":"1.1-10","Title":"Add smoothing spline modelling capability to nlme","Description":"Add smoothing spline modelling capability to nlme. 
Fit\n smoothing spline terms in Gaussian linear and nonlinear\n mixed-effects models.","Published":"2013-08-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"LMest","Version":"2.4","Title":"Latent Markov Models with and without Covariates","Description":"Fit certain versions of the Latent Markov model for longitudinal categorical data.","Published":"2017-06-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"lmeVarComp","Version":"1.0","Title":"Testing for a subset of variance components in linear mixed\nmodels","Description":"Test zero variance components in linear mixed models and \n test additivity in nonparametric regression \n using the restricted likelihood ratio test and the generalized F-test.","Published":"2014-08-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"lmf","Version":"1.2","Title":"Functions for estimation and inference of selection in\nage-structured populations","Description":"This R package provides methods for estimation and statistical\n inference on directional and fluctuating selection in age-structured\n populations.","Published":"2013-10-31","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"lmfor","Version":"1.2","Title":"Functions for Forest Biometrics","Description":"Functions for different purposes related to Forest biometrics, including illustrative graphics, numerical computation, modeling height-diameter relationships, prediction of tree volumes, and modelling of diameter distributions. Datasets on tree height-diameter relationship, light response of moss photosynthesis, and productivity of stump-lifting machines are included. 
","Published":"2017-01-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"lmm","Version":"1.0","Title":"Linear Mixed Models","Description":"Some improved procedures for linear mixed models.","Published":"2015-02-10","License":"Unlimited","snapshot_date":"2017-06-23"} {"Package":"lmmlasso","Version":"0.1-2","Title":"Linear mixed-effects models with Lasso","Description":"This package fits (gaussian) linear mixed-effects models\n for high-dimensional data (n<<p) using a Lasso-type approach for the fixed-effects parameter.","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"lmodel2","Version":"1.7-2","Title":"Model II Regression","Description":"Computes model II simple linear regression using ordinary\n least squares (OLS), major axis (MA), standard major axis (SMA), and\n ranged major axis (RMA).","Published":"2014-02-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"lmom","Version":"2.5","Title":"L-moments","Description":"Functions related to L-moments: computation of L-moments\n and trimmed L-moments of distributions and data samples; parameter\n estimation; L-moment ratio diagram; plot vs. quantiles of an\n extreme-value distribution.","Published":"2015-02-02","License":"Common Public License Version 1.0","snapshot_date":"2017-06-23"} {"Package":"lmomco","Version":"2.2.7","Title":"L-Moments, Censored L-Moments, Trimmed L-Moments, L-Comoments,\nand Many Distributions","Description":"Extensive functions for L-moments (LMs) and probability-weighted moments\n (PWMs), parameter estimation for distributions, LM computation for distributions, and\n L-moment ratio diagrams. Maximum likelihood and maximum product of spacings estimation\n are also available. LMs for right-tail and left-tail censoring by known or unknown\n threshold and by indicator variable are available. Asymmetric (asy) trimmed LMs\n (TL-moments, TLMs) are supported. LMs of residual (resid) and reversed (rev) resid life\n are implemented along with 13 quantile function operators for reliability and survival\n analyses. 
Exact analytical bootstrap estimates of order statistics, LMs, and variances-\n covariances of LMs are provided. The Harri-Coble Tau34-squared Normality Test is available.\n Distribution support with \"L\" (LMs), \"TL\" (TLMs) and added (+) support for right-tail\n censoring (RC) encompasses: Asy Exponential (Exp) Power [L], Asy Triangular [L],\n Cauchy [TL], Eta-Mu [L], Exp. [L], Gamma [L], Generalized (Gen) Exp Poisson [L],\n Gen Extreme Value [L], Gen Lambda [L,TL], Gen Logistic [L], Gen Normal [L],\n Gen Pareto [L+RC, TL], Govindarajulu [L], Gumbel [L], Kappa [L], Kappa-Mu [L],\n Kumaraswamy [L], Laplace [L], Linear Mean Resid. Quantile Function [L], Normal [L],\n 3-p log-Normal [L], Pearson Type III [L], Rayleigh [L], Rev-Gumbel [L+RC], Rice/Rician [L],\n Slash [TL], 3-p Student t [L], Truncated Exponential [L], Wakeby [L], and Weibull [L].\n Multivariate sample L-comoments (LCMs) are implemented to measure asymmetric associations.","Published":"2017-03-08","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"Lmoments","Version":"1.2-3","Title":"L-Moments and Quantile Mixtures","Description":"Contains functions to estimate\n L-moments and trimmed L-moments from the data. Also\n contains functions to estimate the parameters of the normal\n polynomial quantile mixture and the Cauchy polynomial quantile\n mixture from L-moments and trimmed L-moments.","Published":"2016-02-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"lmomRFA","Version":"3.0-1","Title":"Regional frequency analysis using L-moments","Description":"Functions for regional frequency analysis using the methods\n of J. R. M. Hosking and J. R. 
Wallis (1997), \"Regional frequency analysis:\n an approach based on L-moments\".","Published":"2015-02-02","License":"Common Public License Version 1.0","snapshot_date":"2017-06-23"} {"Package":"lmPerm","Version":"2.1.0","Title":"Permutation Tests for Linear Models","Description":"Linear model functions using permutation tests.","Published":"2016-08-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"lmridge","Version":"1.0","Title":"Linear Ridge Regression with Ridge Penalty and Ridge Statistics","Description":"Linear ridge regression coefficients' estimation and testing with\n different ridge-related measures such as MSE, R-squared, etc.","Published":"2016-11-22","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"lmSupport","Version":"2.9.8","Title":"Support for Linear Models","Description":"Provides tools and a consistent interface to support analyses using General, Generalized, and Multi-level Linear Models.","Published":"2017-01-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"lmtest","Version":"0.9-35","Title":"Testing Linear Regression Models","Description":"A collection of tests, data sets, and examples\n for diagnostic checking in linear regression models. Furthermore,\n some generic tools for inference in parametric models are provided.","Published":"2017-02-11","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"lmvar","Version":"1.2.1","Title":"Linear Regression with Non-Constant Variances","Description":"Runs a linear regression in which both the expected value and the variance can vary per observation. The expected value mu follows the standard linear model mu = X_mu * beta_mu. The standard deviation sigma follows the model log(sigma) = X_sigma * beta_sigma. 
The package comes with two vignettes: 'Intro' gives an introduction, 'Math' gives mathematical details.","Published":"2017-06-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"LncFinder","Version":"1.0.0","Title":"Long Non-Coding RNA Identification Based on Features of\nSequence, EIIP and Secondary Structure","Description":"Functions for predicting whether sequences are mRNAs or long non-coding RNAs.\n Default models are trained on human, mouse and wheat datasets by employing \n SVM. Features are based on intrinsic composition of sequence, EIIP value \n (electron-ion interaction pseudopotential) and secondary structure. The model\n can also be built on users' own data.","Published":"2017-02-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"LncMod","Version":"1.1","Title":"Predicting Modulator and Functional/Survival Analysis","Description":"Predict modulators regulating the ability of effectors to regulate their targets and produce\n modulator-effector-target triplets followed by GO term functional enrichment and survival analysis. This\n\tis mainly applied to long non-coding RNAs (lncRNAs) as candidate modulators regulating the ability of \n\ttranscription factors (TFs) to regulate their corresponding targets.","Published":"2015-06-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"LncPath","Version":"1.0","Title":"Identifying the Pathways Regulated by LncRNA Sets of Interest","Description":"Identifies pathways synergistically regulated by lncRNA (long non-coding RNA) sets of interest based on a lncRNA-mRNA (messenger RNA) interaction network. 1) The lncRNA-mRNA interaction network was built from the protein-protein interactions and the lncRNA-mRNA co-expression relationships in 28 RNA-Seq data sets. 2) The lncRNAs of interest can be mapped into the network as seed nodes and a random walk strategy will be performed to evaluate the rate at which each coding gene is influenced by the seed lncRNAs. 
3) Pathways regulated by the lncRNA set will be evaluated by a weighted Kolmogorov-Smirnov statistic as an ES Score. 4) The p value and false discovery rate value will also be calculated through a permutation analysis. 5) The running score of each pathway can be plotted and the heat map of each pathway can also be plotted if an expression profile is provided. 6) The rank and scores of the gene list of each pathway can be printed.","Published":"2016-05-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"LncPriCNet","Version":"1.0","Title":"Prioritizing Candidate LncRNAs Based on a Composite Multi-Level\nNetwork","Description":"Prioritizes disease lncRNAs based on a random walk on a composite multi-level network.","Published":"2016-06-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"LNIRT","Version":"0.2.0","Title":"LogNormal Response Time Item Response Theory Models","Description":"Allows the simultaneous analysis of responses and response times in an Item Response Theory (IRT) modelling framework. Supports covariates for item and person (random) parameters. Parameter estimation is done with a MCMC algorithm. LNIRT replaces the package CIRT, which was written by Rinke Klein Entink. For reference, see the paper by Fox, Klein Entink and Van der Linden (2007), \"Modeling of Responses and Response Times with the Package cirt\", Journal of Statistical Software.","Published":"2017-03-20","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"loa","Version":"0.2.38","Title":"Lattice Options and Add-Ins","Description":"Various plots and functions that make use of the lattice/trellis plotting framework. \n The plots (which include 'loaPlot', 'GoogleMap' and 'trianglePlot') use panelPal(), a function that \n extends 'lattice' and 'hexbin' package methods to automate plot subscript and panel-to-panel \n and panel-to-key synchronization/management. 
","Published":"2016-03-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"LocalControl","Version":"1.0.1","Title":"Nonparametric Methods for Generating High Quality Comparative\nEffectiveness Evidence","Description":"Implements novel nonparametric approaches to address\n biases and confounding when comparing treatments or exposures in\n observational studies of outcomes. While designed and appropriate for use\n in studies involving medicine and the life sciences, the package can be\n used in other situations involving outcomes with multiple confounders.\n The package implements a family of methods for nonparametric bias\n correction when comparing treatments in cross-sectional, case-control,\n and survival analysis settings, including competing risks with censoring.\n The approach extends to bias-corrected personalized predictions of\n treatment outcome differences, and analysis of heterogeneity of treatment\n effect-sizes across patient subgroups.","Published":"2017-06-03","License":"Apache License 2.0 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"localdepth","Version":"0.5-7","Title":"Local Depth","Description":"Simplicial, Mahalanobis and Ellipsoid Local and Global Depth","Published":"2013-11-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"localgauss","Version":"0.40","Title":"Estimating Local Gaussian Parameters","Description":"Computational routines for estimating local Gaussian parameters. Local Gaussian parameters are useful for characterizing and testing for non-linear dependence within bivariate data. See e.g. 
Tjostheim and Hufthammer, Local Gaussian correlation: A new measure of dependence, Journal of Econometrics, 2013, Volume 172 (1), pages 33-48.","Published":"2016-11-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"localsolver","Version":"2.3","Title":"R API to LocalSolver","Description":"The package converts R data into input and data for LocalSolver,\n executes optimization and exposes optimization results as R data.\n LocalSolver (http://www.localsolver.com/) is an optimization engine\n developed by Innovation24 (http://www.innovation24.fr/). It is designed to\n solve large-scale mixed-variable non-convex optimization problems. The\n localsolver package is developed and maintained by WLOG Solutions\n (http://www.wlogsolutions.com/en/) in collaboration with Decision Support\n and Analysis Division at Warsaw School of Economics\n (http://www.sgh.waw.pl/en/).","Published":"2014-06-18","License":"LGPL-2.1","snapshot_date":"2017-06-23"} {"Package":"locfdr","Version":"1.1-8","Title":"Computes Local False Discovery Rates","Description":"Computation of local false discovery rates.","Published":"2015-07-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"LocFDRPois","Version":"1.0.0","Title":"Functions for Performing Local FDR Estimation when Null and\nAlternative are Poisson","Description":"The main idea of the Local FDR algorithm is to estimate both the proportion of null observations and the ratio of null and alternative densities. In the case that there are many null observations, this can be done reliably, through maximum likelihood or generalized linear models. 
This package implements this in the case that the null and alternative densities are Poisson.","Published":"2015-05-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"locfit","Version":"1.5-9.1","Title":"Local Regression, Likelihood and Density Estimation","Description":"Local regression, likelihood and density estimation.","Published":"2013-04-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"locits","Version":"1.7.3","Title":"Test of Stationarity and Localized Autocovariance","Description":"Provides test of second-order stationarity for time\n\tseries (for dyadic and arbitrary-n length data). Provides\n\tlocalized autocovariance, with confidence intervals,\n\tfor locally stationary (nonstationary) time series.","Published":"2016-11-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Lock5Data","Version":"2.6","Title":"Datasets for \"Statistics: UnLocking the Power of Data\"","Description":"Datasets for \"Statistics:Unlocking the Power of Data\" by\n Lock, Lock, Lock, Lock and Lock","Published":"2012-12-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Lock5withR","Version":"1.2.2","Title":"Datasets for 'Statistics: Unlocking the Power of Data'","Description":"Data sets and other utilities for \n 'Statistics: Unlocking the Power of Data'\n by Lock, Lock, Lock, Lock and Lock \n (ISBN : 978-0-470-60187-7, http://lock5stat.com/).","Published":"2015-12-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"locpol","Version":"0.6-0","Title":"Kernel local polynomial regression","Description":"Computes local polynomial estimators.","Published":"2012-10-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"loder","Version":"0.1.2","Title":"Dependency-Free Access to PNG Image Files","Description":"Read and write access to PNG image files using the LodePNG\n library. 
The package has no external dependencies.","Published":"2017-05-30","License":"BSD_3_clause + file LICENCE","snapshot_date":"2017-06-23"} {"Package":"lodGWAS","Version":"1.0-7","Title":"Genome-Wide Association Analysis of a Biomarker Accounting for\nLimit of Detection","Description":"Genome-wide association (GWAS) analyses\n of a biomarker that account for the limit of detection.","Published":"2015-11-30","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"loe","Version":"1.1","Title":"Local Ordinal Embedding","Description":"Local Ordinal Embedding (LOE) is one of the graph embedding methods for unweighted graphs.","Published":"2016-02-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"log4r","Version":"0.2","Title":"A simple logging system for R, based on log4j","Description":"log4r provides an object-oriented logging system that uses an API\n roughly equivalent to log4j and its related variants.","Published":"2014-09-29","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"logbin","Version":"2.0.3","Title":"Relative Risk Regression Using the Log-Binomial Model","Description":"Methods for fitting log-link GLMs and GAMs to binomial data,\n including EM-type algorithms with more stable convergence properties than standard methods.","Published":"2017-05-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"LogConcDEAD","Version":"1.5-9","Title":"Log-concave Density Estimation in Arbitrary Dimensions","Description":"Computes a log-concave (maximum likelihood) estimator for\n i.i.d. data in any number of dimensions.","Published":"2014-07-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"logconcens","Version":"0.16-4","Title":"Maximum likelihood estimation of a log-concave density based on\ncensored data","Description":"Based on right or interval censored data, compute the maximum likelihood estimator of a (sub)probability density under the assumption that it is log-concave. 
For further information see Duembgen, Rufibach, and Schuhmacher (2011, preprint).","Published":"2013-12-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"logcondens","Version":"2.1.5","Title":"Estimate a Log-Concave Probability Density from Iid Observations","Description":"Given independent and identically distributed observations X(1), ..., X(n), compute the maximum likelihood estimator (MLE) of a density as well as a smoothed version of it under the assumption that the density is log-concave, see Rufibach (2007) and Duembgen and Rufibach (2009). The main function of the package is 'logConDens' that allows computation of the log-concave MLE and its smoothed version. In addition, we provide functions to (1) compute the value of the density and distribution function estimates (MLE and smoothed) at a given point, (2) compute the characterizing functions of the estimator, (3) sample from the estimated distribution, (4) compute a two-sample permutation test based on log-concave densities, (5) compute the ROC curve based on log-concave estimates within cases and controls, including confidence intervals for given values of false positive fractions, and (6) compute a confidence interval for the value of the true density at a fixed point. Finally, three datasets that have been used to illustrate log-concave density estimation are made available.","Published":"2016-07-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"logcondens.mode","Version":"1.0.1","Title":"Compute MLE of Log-Concave Density on R with Fixed Mode, and\nPerform Inference for the Mode","Description":"Computes maximum likelihood estimate of a log-concave density with fixed and known location of the mode. Performs inference about the mode via a likelihood ratio test. 
Extension of the logcondens package.","Published":"2013-09-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"logcondiscr","Version":"1.0.6","Title":"Estimate a Log-Concave Probability Mass Function from Discrete\ni.i.d. Observations","Description":"Given independent and identically distributed observations X(1), ..., X(n), allows to compute the maximum likelihood estimator (MLE) of a probability mass function (pmf) under the assumption that it is log-concave, see Weyermann (2007) and Balabdaoui, Jankowski, Rufibach, and Pavlides (2012). The main functions of the package are 'logConDiscrMLE' that allows computation of the log-concave MLE, 'logConDiscrCI' that computes pointwise confidence bands for the MLE, and 'kInflatedLogConDiscr' that computes a mixture of a log-concave PMF and a point mass at k.","Published":"2015-07-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"logconPH","Version":"1.5","Title":"CoxPH Model with Log Concave Baseline Distribution","Description":"Computes a Cox PH model with a log-concave baseline distribution. If no covariates are provided, estimates the log-concave NPMLE. Built specifically for interval censored data, where data is an n by 2 matrix with [i,1] as the left side of the interval for subject i and [i,2] as the right side. Uncensored data can be entered by setting [i,1] = [i,2]. Alternatively, if all the data is uncensored, you may enter data as a length n vector.","Published":"2014-12-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"logging","Version":"0.7-103","Title":"R logging package","Description":"logging is a pure R package that implements the ubiquitous\n log4j package.","Published":"2013-04-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"LogicForest","Version":"2.1.0","Title":"Logic Forest","Description":"Two classification ensemble methods based on logic regression models. 
LogForest uses a bagging approach to construct an ensemble of logic regression models. LBoost uses a combination of boosting and cross-validation to construct an ensemble of logic regression models. Both methods are used for classification of binary responses based on binary predictors and for identification of important variables and variable interactions predictive of a binary outcome.","Published":"2014-09-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"LOGICOIL","Version":"0.99.0","Title":"LOGICOIL: multi-state prediction of coiled-coil oligomeric\nstate","Description":"This package contains the functions necessary to run the LOGICOIL algorithm. LOGICOIL can be used to differentiate between antiparallel dimers, parallel dimers, trimers and higher-order coiled-coil sequence. By covering >90 percent of the known coiled-coil structures, LOGICOIL is a net improvement compared with other existing methods, which achieve a predictive coverage of around 31 percent of this population. As such, LOGICOIL is particularly useful for researchers looking to characterize novel coiled-coil sequences or studying coiled-coil containing protein assemblies. It may also be used to assist in the structural characterization of synthetic coiled-coil sequences. ","Published":"2014-04-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"LogicOpt","Version":"1.0.0","Title":"Truth Table Logic Optimizer","Description":"Access to powerful logic minimization algorithms and data structures that operate on a sum-of-products truth table. The core algorithms are built on Espresso Version 2.3 developed at UC Berkeley for digital logic synthesis purposes. Enhancements have been made to integrate within the R framework and support additional logic optimization use cases such as those needed by Qualitative Comparative Analysis (QCA) and Genetic Programming. 
There are no expressed or implied warranties.","Published":"2016-05-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"LogicReg","Version":"1.5.9","Title":"Logic Regression","Description":"Routines for fitting Logic Regression models.","Published":"2016-08-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"logistf","Version":"1.22","Title":"Firth's Bias-Reduced Logistic Regression","Description":"Fit a logistic regression model using Firth's bias reduction method, equivalent to penalization of the log-likelihood by the Jeffreys \n\tprior. Confidence intervals for regression coefficients can be computed by penalized profile likelihood. Firth's method was proposed as an ideal\n\tsolution to the problem of separation in logistic regression. If needed, the bias reduction can be turned off such that ordinary\n\tmaximum likelihood logistic regression is obtained.","Published":"2016-12-19","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"logistic4p","Version":"1.5","Title":"Logistic Regression with Misclassification in Dependent\nVariables","Description":"Error in a binary dependent variable, also known as misclassification, has not drawn much attention in psychology. Ignoring misclassification in logistic regression can result in misleading parameter estimates and statistical inference. This package conducts logistic regression analysis with misclassification in outcome variables. 
","Published":"2017-05-31","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"LogisticDx","Version":"0.2","Title":"Diagnostic Tests for Models with a Binomial Response","Description":"Diagnostic tests and plots for GLMs (generalized linear models)\n with binomial/binary outcomes, particularly logistic regression.","Published":"2015-07-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"logisticPCA","Version":"0.2","Title":"Binary Dimensionality Reduction","Description":"Dimensionality reduction techniques for binary data including\n logistic PCA.","Published":"2016-03-14","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"LOGIT","Version":"1.3","Title":"Functions, Data and Code for Binary and Binomial Data","Description":"Functions, data and code for Hilbe, J.M. 2015. Practical Guide to Logistic Regression, by Chapman and Hall/CRC.","Published":"2016-02-06","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"logitchoice","Version":"0.9.4","Title":"Fitting l2-regularized logit choice models via generalized\ngradient descent","Description":"Fits linear discrete logit choice models with l2 regularization. To handle reasonably sized datasets, we employ an accelerated version of generalized gradient descent.","Published":"2014-09-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"LogitNet","Version":"0.1-1","Title":"Infer network based on binary arrays using regularized logistic\nregression","Description":"LogitNet is developed for inferring networks of binary\n variables under the high-dimension-low-sample-size setting.","Published":"2009-11-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"logitnorm","Version":"0.8.34","Title":"Functions for the Logitnormal Distribution","Description":"Density, distribution, quantile and random generation function for the logitnormal distribution. Estimation of the mode and the first two moments. 
Estimation of distribution parameters.","Published":"2017-03-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"loglognorm","Version":"1.0.1","Title":"Double log normal distribution functions","Description":"r,d,p,q functions for the double log normal distribution","Published":"2013-06-28","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"logmult","Version":"0.6.4","Title":"Log-Multiplicative Models, Including Association Models","Description":"Functions to fit log-multiplicative models using gnm, with\n support for convenient printing, plots, and jackknife/bootstrap\n standard errors. For complex survey data, models can be fitted from\n design objects from the 'survey' package. Currently supported models\n include UNIDIFF (Erikson & Goldthorpe), a.k.a. log-multiplicative\n layer effect model (Xie), and several association models: Goodman's\n row-column association models of the RC(M) and RC(M)-L families\n with one or several dimensions; two skew-symmetric association\n models proposed by Yamaguchi and by van der Heijden & Mooijaart.","Published":"2017-03-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"logOfGamma","Version":"0.0.1","Title":"Natural Logarithms of the Gamma Function for Large Values","Description":"Uses approximations to compute the natural logarithm of the Gamma\n function for large values.","Published":"2017-06-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"LogrankA","Version":"1.0","Title":"Logrank Test for Aggregated Survival Data","Description":"LogrankA provides a logrank test across unlimited groups with the\n possibility to input aggregated survival data.","Published":"2013-07-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"logspline","Version":"2.1.9","Title":"Logspline Density Estimation Routines","Description":"Routines for the logspline density estimation. 
oldlogspline()\n uses the same algorithm as the logspline 1.0.x package - the Kooperberg\n and Stone (1992) algorithm (with an improved interface).\n The recommended routine logspline() uses an algorithm from Stone et al (1997). \n ","Published":"2016-02-03","License":"Apache License 2.0","snapshot_date":"2017-06-23"} {"Package":"lokern","Version":"1.1-8","Title":"Kernel Regression Smoothing with Local or Global Plug-in\nBandwidth","Description":"Kernel regression smoothing with adaptive local or global plug-in\n\t bandwidth selection.","Published":"2016-10-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"lomb","Version":"1.0","Title":"Lomb-Scargle Periodogram","Description":"Computes the Lomb-Scargle Periodogram for unevenly sampled time series. Includes a randomization procedure to obtain reliable p-values.","Published":"2013-10-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"longCatEDA","Version":"0.31","Title":"Package for Plotting Categorical Longitudinal and Time-Series\nData","Description":"Methods for plotting categorical longitudinal and time-series data by mapping individuals to the vertical space (each horizontal line represents a participant), time (or repeated measures) to the horizontal space, categorical (or discrete) states as facets using color or shade, and events to points using plotting characters. 
Sorting individuals in the vertical space and (or) stratifying them by groups can reveal patterns in the changes over time.","Published":"2017-04-18","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"longclust","Version":"1.2","Title":"Model-Based Clustering and Classification for Longitudinal Data","Description":"Clustering or classification of longitudinal data based on a mixture of multivariate t or Gaussian distributions with a Cholesky-decomposed covariance structure.","Published":"2015-03-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"longitudinal","Version":"1.1.12","Title":"Analysis of Multiple Time Course Data","Description":"Contains general data structures and\n functions for longitudinal data with multiple variables, \n repeated measurements, and irregularly spaced time points.\n Also implements a shrinkage estimator of dynamical correlation\n and dynamical covariance.","Published":"2015-07-08","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"longitudinalData","Version":"2.4.1","Title":"Longitudinal Data","Description":"Tools for longitudinal data and joint longitudinal data (used by packages kml and kml3d).","Published":"2016-02-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"longmemo","Version":"1.0-0","Title":"Statistics for Long-Memory Processes (Jan Beran) -- Data and\nFunctions","Description":"Datasets and Functionality from the textbook Jan Beran\n (1994). Statistics for Long-Memory Processes; Chapman & Hall.","Published":"2011-06-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"longpower","Version":"1.0-16","Title":"Sample Size Calculations for Longitudinal Data","Description":"The longpower package contains functions for computing\n power and sample size for linear models of longitudinal data\n based on the formula due to Liu and Liang (1997) and Diggle et\n al (2002). 
Either formula is expressed in terms of marginal\n model or Generalized Estimating Equations (GEE) parameters.\n This package contains functions which translate pilot mixed\n effect model parameters (e.g. random intercept and/or slope)\n into marginal model parameters so that the formulas of Diggle\n et al or Liu and Liang can be applied to produce sample\n size calculations for two sample longitudinal designs assuming\n known variance.","Published":"2016-07-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"longurl","Version":"0.3.0","Title":"Expand Short URLs","Description":"Tools to expand vectors of short URLs into long URLs. No API services are used,\n which may mean that this operates more slowly than API services do (since they usually\n cache the results of expansions every user of the service performs). ","Published":"2016-12-18","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"loo","Version":"1.1.0","Title":"Efficient Leave-One-Out Cross-Validation and WAIC for Bayesian\nModels","Description":"Efficient approximate leave-one-out cross-validation (LOO)\n using Pareto smoothed importance sampling (PSIS), a new procedure for\n regularizing importance weights. As a byproduct of the calculations, we also\n obtain approximate standard errors for estimated predictive errors and for\n the comparison of predictive errors between models. We also compute the\n widely applicable information criterion (WAIC).","Published":"2017-03-27","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"lookupTable","Version":"0.1","Title":"Look-Up Tables using S4","Description":"Fits look-up tables by filling entries with the mean or median values of observations that\n fall in partitions of the feature space. 
Partitions can be determined by the user of the\n package using the input argument feature.boundaries, and dimensions of the feature space\n can be any combination of continuous and categorical features provided by the data set.\n A Predict function directly fetches the corresponding entry value, and a default value is\n defined as the mean or median of all available observations.\n The table and other components are represented using the S4 class lookupTable.","Published":"2015-08-28","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"loop","Version":"1.1","Title":"loop decomposition of weighted directed graphs for life cycle\nanalysis, providing flexible network plotting methods, and\nanalyzing food chain properties in ecology","Description":"The program can perform loop analysis and plot network\n structure (especially for food webs), including minimum spanning\n tree, loop decomposition of weighted directed graphs, and other\n network properties which may be related to food chain\n properties in ecology.","Published":"2012-10-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"LoopAnalyst","Version":"1.2-4","Title":"A Collection of Tools to Conduct Levins' Loop Analysis","Description":"Loop analysis makes qualitative predictions of variable change in a system of causally interdependent variables, where \"qualitative\" means sign only (i.e. increases, decreases, no change, and ambiguous). This implementation includes output support for graphs in .dot file format for use with visualization software such as graphviz (graphviz.org). 
'LoopAnalyst' provides tools for the construction and output of community matrices, computation and output of community effect matrices, tables of correlations, adjoint, absolute feedback, weighted feedback and weighted prediction matrices, change in life expectancy matrices, and feedback, path and loop enumeration tools.","Published":"2015-07-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"loopr","Version":"1.0.1","Title":"Uses an Archive to Amend Previous Stages of a Pipe using Current\nOutput","Description":"Remedies a common problem in piping: not having access to\n intermediate outputs of the pipe. Within a \"loop\", a piping intermediate\n is stored in a stack archive, data is processed, and then both the\n stored intermediate and the current output are reintegrated using an\n \"ending\" function. Two special ending functions are provided: amend and\n insert. However, any ending function can be specified, including merge\n functions, join functions, setNames(), etc. 
This framework allows the\n following work-flow: focus on a particular aspect or section of a\n data set, conduct specific operations, and then reintegrate changes into\n the whole.","Published":"2015-05-28","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"lordif","Version":"0.3-3","Title":"Logistic Ordinal Regression Differential Item Functioning using\nIRT","Description":"Analysis of Differential Item Functioning (DIF) for\n dichotomous and polytomous items using an iterative hybrid of\n ordinal logistic regression and item response theory (IRT).","Published":"2016-03-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"lorec","Version":"0.6.1","Title":"LOw Rank and sparsE Covariance matrix estimation","Description":"Estimates covariance matrices that contain low rank and sparse components.","Published":"2014-02-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"LOST","Version":"1.3","Title":"Missing Morphometric Data Simulation and Estimation","Description":"Functions for simulating missing morphometric\n\tdata randomly, with taxonomic bias and with anatomical bias. LOST also \n\tincludes functions for estimating linear and geometric morphometric data. ","Published":"2017-04-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"LotkasLaw","Version":"0.0.1.0","Title":"Runs Lotka's Law which is One of the Special Applications of\nZipf's Law","Description":"Running Lotka's Law following Pao (1985) (DOI: 10.1016/0306-4573(85)90055-X). The Law is based around the proof that the number of authors making n contributions is about 1/n^{a} of those making one contribution.","Published":"2015-08-17","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"lowmemtkmeans","Version":"0.1.2","Title":"Low Memory Use Trimmed K-Means","Description":"Performs the trimmed k-means clustering algorithm with lower memory use. 
It also provides a number of utility functions such as BIC calculations.","Published":"2017-01-08","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"LowRankQP","Version":"1.0.2","Title":"Low Rank Quadratic Programming","Description":"This package contains routines and documentation for\n solving quadratic programming problems where the hessian is\n represented as the product of two matrices.","Published":"2014-07-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"lpbrim","Version":"1.0.0","Title":"LP-BRIM Bipartite Modularity","Description":"Optimization of bipartite modularity using LP-BRIM (Label propagation\n followed by Bipartite Recursively Induced Modularity).","Published":"2015-07-18","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"lpc","Version":"1.0.2","Title":"Lassoed principal components for testing significance of\nfeatures","Description":"Implements the LPC method of Witten&Tibshirani(Annals of Applied Statistics 2008) for identification of significant genes in a microarray experiment.","Published":"2013-12-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"LPCM","Version":"0.45-0","Title":"Local Principal Curve Methods","Description":"Fitting multivariate data patterns with local principal curves; including simple tools for data compression (projection), bandwidth selection, and measuring goodness-of-fit.","Published":"2015-09-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"lpdensity","Version":"0.2.1","Title":"Local Polynomial Density Estimation and Inference","Description":"Without imposing stringent distributional assumptions or shape restrictions, nonparametric density estimation has been popular in economics and other social sciences for counterfactual analysis, program evaluation, and policy recommendations. 
This package implements a novel density estimator based on local polynomial regression, documented in Cattaneo, Jansson and Ma (2017a): lpdensity() constructs a local polynomial based density (and derivatives) estimator; lpbwdensity() performs data-driven bandwidth selection; and lpdensity.plot() produces density plots with robust confidence intervals.","Published":"2017-06-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"lpint","Version":"2.0","Title":"Local polynomial estimators of intensity function or its\nderivatives","Description":"Estimates the intensity function, or its derivative of a given order, of a multiplicative counting process using the local polynomial method.","Published":"2014-04-02","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"lplyr","Version":"0.1.6","Title":"'dplyr' Verbs for Lists and Other Verbs for Data Frames","Description":"Provides 'dplyr' verbs for lists and other useful \n verbs for manipulation of data frames. In particular, it includes a \n mutate_which() function that mutates columns for a specific subset of \n rows defined by a condition, and fuse() which is a more flexible version \n of the 'tidyr' unite() function. ","Published":"2017-01-25","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"LPM","Version":"2.6","Title":"Linear Parametric Models Applied to Hydrological Series","Description":"Applies univariate long-memory models and \n multivariate short-memory models to hydrological datasets.","Published":"2015-11-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"lpme","Version":"1.1.0","Title":"Local Polynomial Estimators in Measurement Error Models","Description":"Provides local polynomial estimators for nonparametric mean regression and nonparametric modal regression in the presence/absence of measurement error. 
Bandwidth selection is also provided for each estimator.","Published":"2017-03-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"LPmerge","Version":"1.6","Title":"Merging linkage maps by linear programming","Description":"LPmerge creates a consensus genetic map by merging linkage maps from different populations. The software uses linear programming (LP) to efficiently minimize the mean absolute error between the consensus map and the linkage maps. This minimization is performed subject to linear inequality constraints that ensure the ordering of the markers in the linkage maps is preserved. When marker order is inconsistent between linkage maps, a minimum set of ordinal constraints is deleted to resolve the conflicts.","Published":"2014-08-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"lpmodeler","Version":"0.2-1","Title":"Modeler for linear programs (LP) and mixed integer linear\nprograms (MILP)","Description":"lpmodeler is a set of user-friendly functions to simplify the modelling of linear programs (LP) and mixed integer programs (MIP). 
It provides a unified interface compatible with optimization packages: Rsymphony.","Published":"2014-02-21","License":"GPL (>= 2) | BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"LPR","Version":"1.0","Title":"Lasso and Partial Ridge","Description":"Contains a function called \"LPR\" to estimate coefficients using Lasso and Partial Ridge method and to calculate confidence intervals through bootstrap.","Published":"2016-01-11","License":"GNU General Public License version 2","snapshot_date":"2017-06-23"} {"Package":"lpridge","Version":"1.0-7","Title":"Local Polynomial (Ridge) Regression","Description":"Local Polynomial Regression with Ridging.","Published":"2013-04-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"LPS","Version":"1.0.10","Title":"Linear Predictor Score, for Binary Inference from Multiple\nContinuous Variables","Description":"An implementation of the Linear Predictor Score approach, as initiated by Radmacher et al. (J Comput Biol 2001) and enhanced by Wright et al. (PNAS 2003) for gene expression signatures. Several tools for unsupervised clustering of gene expression data are also provided.","Published":"2015-02-16","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"lpSolve","Version":"5.6.13","Title":"Interface to 'Lp_solve' v. 5.5 to Solve Linear/Integer Programs","Description":"Lp_solve is freely available (under LGPL 2) software for\n solving linear, integer and mixed integer programs. In this\n implementation we supply a \"wrapper\" function in C and some R\n functions that solve general linear/integer problems,\n assignment problems, and transportation problems. 
This version\n calls lp_solve version 5.5.","Published":"2015-09-19","License":"LGPL-2","snapshot_date":"2017-06-23"} {"Package":"lpSolveAPI","Version":"5.5.2.0-17","Title":"R Interface to 'lp_solve' Version 5.5.2.0","Description":"The lpSolveAPI package provides an R interface to 'lp_solve',\n a Mixed Integer Linear Programming (MILP) solver with support for pure\n linear, (mixed) integer/binary, semi-continuous and special ordered sets\n (SOS) models.","Published":"2016-01-13","License":"LGPL-2","snapshot_date":"2017-06-23"} {"Package":"LPStimeSeries","Version":"1.0-5","Title":"Learned Pattern Similarity and Representation for Time Series","Description":"Learned Pattern Similarity (LPS) for time series. \n\t\t\tImplements a novel approach to model the dependency structure \n\t\t\tin time series that generalizes the concept of autoregression to local \n\t\t\tauto-patterns. Generates a pattern-based representation of time series\n\t\t\talong with a similarity measure called Learned Pattern Similarity (LPS).\n\t\t\tIntroduces a generalized autoregressive kernel. This package is based on the \n\t\t\t'randomForest' package by Andy Liaw. ","Published":"2015-03-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"LPTime","Version":"1.0-2","Title":"LP Nonparametric Approach to Non-Gaussian Non-Linear Time Series\nModelling","Description":"Specially designed rank transform based Legendre Polynomial-like (LP) orthonormal transformations are implemented for non-linear signal processing.","Published":"2015-03-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"lqa","Version":"1.0-3","Title":"Penalized Likelihood Inference for GLMs","Description":"This package provides some basic infrastructure and tools\n to fit Generalized Linear Models (GLMs) via penalized\n likelihood inference. 
Estimating procedures already implemented\n are the LQA algorithm (that is where its name comes from),\n P-IRLS, RidgeBoost, GBlockBoost and ForwardBoost.","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"lqmm","Version":"1.5.3","Title":"Linear Quantile Mixed Models","Description":"This is a collection of functions to fit quantile regression models for independent and hierarchical data.","Published":"2016-02-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"lqr","Version":"1.5","Title":"Robust Linear Quantile Regression","Description":"It fits a robust linear quantile regression model using a new\n family of zero-quantile distributions for the error term. This family of\n distributions includes skewed versions of the Normal, Student's t, Laplace, Slash\n and Contaminated Normal distributions. It also performs logistic quantile regression for bounded responses\n as shown in Bottai et al. (2009). It provides estimates and full inference.\n It also provides envelope plots for assessing the fit and confidence bands\n when several quantiles are provided simultaneously.","Published":"2016-05-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"LRcontrast","Version":"1.0","Title":"Dose Response Signal Detection under Model Uncertainty","Description":"Provides functions for calculating test statistics, simulating quantiles \n\tand simulating p-values of likelihood ratio contrast tests in regression models \n\twith a lack of identifiability.","Published":"2015-11-07","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"lrequire","Version":"0.1.3","Title":"Sources an R \"Module\" with Caching & Encapsulation, Returning\nExported Vars","Description":"In the fashion of 'node.js', requires a file,\n sourcing into the current environment only the variables explicitly specified\n in the module.exports or exports list variable. 
If the file was already sourced,\n the result of the earlier sourcing is returned to the caller.","Published":"2016-02-22","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"lrgs","Version":"0.5.1","Title":"Linear Regression by Gibbs Sampling","Description":"Implements a Gibbs sampler to do linear regression with multiple covariates, multiple responses, Gaussian measurement errors on covariates and responses, Gaussian intrinsic scatter, and a covariate prior distribution which is given by either a Gaussian mixture of specified size or a Dirichlet process with a Gaussian base distribution.","Published":"2016-07-30","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"lrmest","Version":"3.0","Title":"Different Types of Estimators to Deal with Multicollinearity","Description":"When multicollinearity exists among predictor variables of the linear model, least squares estimators do not provide a good solution for estimating parameters. To deal with multicollinearity, several estimators have been proposed in the literature. 
Some of these estimators are Ordinary Least Square Estimator (OLSE), Ordinary Generalized Ordinary Least Square Estimator (OGOLSE), Ordinary Ridge Regression Estimator (ORRE), Ordinary Generalized Ridge Regression Estimator (OGRRE), Restricted Least Square Estimator (RLSE), Ordinary Generalized Restricted Least Square Estimator (OGRLSE), Ordinary Mixed Regression Estimator (OMRE), Ordinary Generalized Mixed Regression Estimator (OGMRE), Liu Estimator (LE), Ordinary Generalized Liu Estimator (OGLE), Restricted Liu Estimator (RLE), Ordinary Generalized Restricted Liu Estimator (OGRLE), Stochastic Restricted Liu Estimator (SRLE), Ordinary Generalized Stochastic Restricted Liu Estimator (OGSRLE), Type (1),(2),(3) Liu Estimator (Type-1,2,3 LTE), Ordinary Generalized Type (1),(2),(3) Liu Estimator (Type-1,2,3 OGLTE), Type (1),(2),(3) Adjusted Liu Estimator (Type-1,2,3 ALTE), Ordinary Generalized Type (1),(2),(3) Adjusted Liu Estimator (Type-1,2,3 OGALTE), Almost Unbiased Ridge Estimator (AURE), Ordinary Generalized Almost Unbiased Ridge Estimator (OGAURE), Almost Unbiased Liu Estimator (AULE), Ordinary Generalized Almost Unbiased Liu Estimator (OGAULE), Stochastic Restricted Ridge Estimator (SRRE), Ordinary Generalized Stochastic Restricted Ridge Estimator (OGSRRE), Restricted Ridge Regression Estimator (RRRE) and Ordinary Generalized Restricted Ridge Regression Estimator (OGRRRE). To select the best estimator in a practical situation the Mean Square Error (MSE) is used. Using this package scalar MSE value of all the above estimators and Prediction Sum of Square (PRESS) values of some of the estimators can be obtained, and the variation of the MSE and PRESS values for the relevant estimators can be shown graphically. 
","Published":"2016-05-14","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"LRTH","Version":"1.3","Title":"A Likelihood Ratio Test Accounting for Genetic Heterogeneity","Description":"R code of a likelihood ratio test for genome-wide association under genetic\n heterogeneity.","Published":"2016-02-29","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"LS2W","Version":"1.3-3","Title":"Locally stationary two-dimensional wavelet process estimation\nscheme","Description":"Estimates two-dimensional local wavelet spectra.","Published":"2013-07-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"LS2Wstat","Version":"2.0-3","Title":"A Multiscale Test of Spatial Stationarity for LS2W processes","Description":"Wavelet-based methods for testing stationarity and quadtree segmenting of images.","Published":"2014-06-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"lsa","Version":"0.73.1","Title":"Latent Semantic Analysis","Description":"The basic idea of latent semantic analysis (LSA) is \n that texts have a higher-order (= latent semantic) structure which, \n however, is obscured by word usage (e.g. through the use of synonyms \n or polysemy). By using conceptual indices that are derived statistically \n via a truncated singular value decomposition (a two-mode factor analysis) \n over a given document-term matrix, this variability problem can be overcome. ","Published":"2015-05-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"LSAfun","Version":"0.5.1","Title":"Applied Latent Semantic Analysis (LSA) Functions","Description":"Provides functions that allow for convenient working\n with Latent Semantic Analysis. 
For actually building an LSA space, use the\n package 'lsa' or other specialized software.","Published":"2016-11-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"LSAmitR","Version":"1.0-0","Title":"Datasets and Exercise Material for 'Large-Scale Assessment mit R'","Description":"This R package provides supplementary material in the form of data, functions,\n and R help pages for the edited volume Breit, S. and Schreiner, \n C. (Eds.). (2016). \"Large-Scale Assessment mit R: Methodische Grundlagen \n der österreichischen Bildungsstandardüberprüfung.\" Wien: facultas. \n (ISBN: 978-3-7089-1343-8, ).","Published":"2016-11-16","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"lsasim","Version":"1.0.1","Title":"Functions to Facilitate the Simulation of Large Scale Assessment\nData","Description":"Provides functions to simulate data from large-scale educational \n assessments, including background questionnaire data and cognitive item \n responses that adhere to a multiple-matrix sampled design.","Published":"2017-05-10","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"lsbclust","Version":"1.0.4","Title":"Least-Squares Bilinear Clustering for Three-Way Data","Description":"Functions for performing least-squares bilinear clustering of\n three-way data. The method uses the bilinear decomposition (or biadditive\n model) to model two-way matrix slices while clustering over the third way.\n Up to four different types of clusters are included, one for each term of the\n bilinear decomposition. In this way, matrices are clustered simultaneously on\n (a subset of) their overall means, row margins, column margins and row-column\n interactions. The orthogonality of the bilinear model results in separability of\n the joint clustering problem into four separate ones. Three of these subproblems\n are specific k-means problems, while a special algorithm is implemented for the\n interactions. 
Plotting methods are provided, including biplots for the low-rank\n approximations of the interactions.","Published":"2016-01-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"LSC","Version":"0.1.5","Title":"Local Statistical Complexity - Automatic Pattern Discovery in\nSpatio-Temporal Data","Description":"Estimators and visualization for local statistical complexity of\n (N+1)D fields. In particular for 0, 1 and 2 dimensional space this package\n provides useful visualization.","Published":"2014-01-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"LSD","Version":"3.0","Title":"Lots of Superior Depictions","Description":"Create lots of colorful plots in a plethora of variations (try the LSD demotour() )","Published":"2015-01-09","License":"Unlimited","snapshot_date":"2017-06-23"} {"Package":"LSDinterface","Version":"0.3.0","Title":"Reading LSD Results (.res) Files","Description":"Interfaces R with LSD. Reads object-oriented data in results files (.res) produced by LSD and creates appropriate multi-dimensional arrays in R. Supports multiple core parallelization of multi-file data reading for increased performance. Also provides functions to extract basic information and statistics from data files. LSD (Laboratory for Simulation Development) is free software developed by Marco Valente (documentation and downloads available at ).","Published":"2017-02-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"lsdv","Version":"1.1","Title":"Least square dummy variable regression","Description":"Fit a least square dummy variable regression","Published":"2014-03-24","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"lsei","Version":"1.1-1","Title":"Solving Least Squares Problems under Equality/Inequality\nConstraints","Description":"It contains functions that solve least squares linear\n\t regression problems under linear equality/inequality\n\t constraints. 
It is developed based on the 'Fortran' program of\n\t Lawson and Hanson (1974, 1995), which is public domain and\n\t available at http://www.netlib.org/lawson-hanson.","Published":"2015-10-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"lsgl","Version":"1.3.6","Title":"Linear Multiple Output Sparse Group Lasso","Description":"Linear multiple output regression using the sparse group lasso. The\n algorithm finds the sparse group lasso penalized maximum\n likelihood estimator. This results in feature and parameter\n selection, and parameter estimation. Use of parallel computing\n for cross validation and subsampling is supported through the\n 'foreach' and 'doParallel' packages. The development version is on\n GitHub; please report package issues on GitHub.","Published":"2017-04-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"lshorth","Version":"0.1-6","Title":"The Length of the Shorth","Description":"Calculates the (localised) length of the shorth and\n supplies corresponding diagnostic plots.","Published":"2013-06-08","License":"LGPL","snapshot_date":"2017-06-23"} {"Package":"lsl","Version":"0.5.5","Title":"Latent Structure Learning","Description":"Conduct structural equation modeling via penalized likelihood.","Published":"2016-07-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"lsmeans","Version":"2.26-3","Title":"Least-Squares Means","Description":"Obtain least-squares means for many linear, generalized linear, \n and mixed models. Compute contrasts or linear functions of least-squares\n means, and comparisons of slopes. Plots and compact letter displays.","Published":"2017-05-09","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"LSMonteCarlo","Version":"1.0","Title":"American options pricing with Least Squares Monte Carlo method","Description":"The package compiles functions for calculating prices of American put options with the Least Squares Monte Carlo method. 
The option types are plain vanilla American put, Asian American put, and Quanto American put. The pricing algorithms include variance reduction techniques such as Antithetic Variates and Control Variates. Additional functions are given to derive \"price surfaces\" at different volatilities and strikes, create 3-D plots, quickly generate Geometric Brownian motion, and calculate prices of European options with Black & Scholes analytical solution.","Published":"2013-09-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"LSPFP","Version":"1.0.0","Title":"Lysate and Secretome Peptide Feature Plotter","Description":"Creates plots of peptides from shotgun proteomics analysis of secretome and lysate samples. These plots contain associated protein features and scores for potential secretion and truncation.","Published":"2016-05-19","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"lspline","Version":"1.0-0","Title":"Linear Splines with Convenient Parametrisations","Description":"Linear splines with convenient parametrisations such that \n (1) coefficients are slopes of consecutive segments or (2) coefficients are \n slope changes at consecutive knots. Knots can be set manually or at break points\n of equal-frequency or equal-width intervals covering the range of 'x'.\n The implementation follows Greene (2003), chapter 7.2.5.","Published":"2017-04-10","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"lspls","Version":"0.2-1","Title":"LS-PLS Models","Description":"Implements the LS-PLS (least squares - partial least\n squares) method described in for instance Jørgensen, K.,\n Segtnan, V. H., Thyholt, K., Næs, T. (2004) A Comparison of\n Methods for Analysing Regression Models with Both Spectral and\n Designed Variables. 
Journal of Chemometrics, 18(10), 451--464.","Published":"2011-10-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"lsr","Version":"0.5","Title":"Companion to \"Learning Statistics with R\"","Description":"A collection of tools intended to make introductory statistics easier to teach, including wrappers for common hypothesis tests and basic data manipulation. It accompanies Navarro, D. J. (2015). Learning Statistics with R: A Tutorial for Psychology Students and Other Beginners, Version 0.5. [Lecture notes] School of Psychology, University of Adelaide, Adelaide, Australia. ISBN: 978-1-326-18972-3. URL: http://health.adelaide.edu.au/psychology/ccs/teaching/lsr/.","Published":"2015-03-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"lss","Version":"0.52","Title":"Fits the accelerated failure time model to right-censored data based\non the least-squares principle","Description":"Due to the lack of a proper inference procedure and software,\n the ordinary linear regression model is seldom used in practice\n for the analysis of right censored data. This package provides an\n S-Plus/R program that implements a recently developed inference\n procedure (Jin, Lin and Ying, 2006) for the\n accelerated failure time model based on the least-squares\n principle.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"LSTS","Version":"1.0","Title":"Locally Stationary Time Series","Description":"Provides a set of functions for stationary analysis and locally stationary time series analysis.","Published":"2015-10-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ltbayes","Version":"0.4","Title":"Simulation-Based Bayesian Inference for Latent Traits of Item\nResponse Models","Description":"Functions for simulating realizations from the posterior distribution of\n\ta latent trait of an item response model. Distributions are conditional on one or\n\ta subset of response patterns (e.g., sum scores). 
Functions for computing likelihoods,\n\tFisher and observed information, posterior modes, and profile likelihood confidence\n\tintervals are also included. These functions are designed to be easily amenable to \n\tuser-specified models.","Published":"2016-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ltm","Version":"1.0-0","Title":"Latent Trait Models under IRT","Description":"Analysis of multivariate dichotomous and polytomous data using latent trait models under the Item Response Theory approach. It includes the Rasch, the Two-Parameter Logistic, the Birnbaum's Three-Parameter, the Graded Response, and the Generalized Partial Credit Models.","Published":"2013-12-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ltmle","Version":"0.9-9-3","Title":"Longitudinal Targeted Maximum Likelihood Estimation","Description":"Targeted Maximum Likelihood Estimation (TMLE) of\n treatment/censoring specific mean outcome or marginal structural model for\n point-treatment and longitudinal data.","Published":"2017-06-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"LTPDvar","Version":"1.2","Title":"LTPD and AOQL Plans for Acceptance Sampling Inspection by\nVariables","Description":"Calculation of rectifying LTPD and AOQL plans for sampling inspection by variables which minimize mean inspection cost per lot of process average quality. 
","Published":"2015-06-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"LTR","Version":"1.0.0","Title":"Perform LTR analysis on microarray data","Description":"A set of functions to execute the linear-transformation of\n replicate (LTR) algorithm for preprocessing of microarray data.","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"LTRCtrees","Version":"0.5.0","Title":"Survival Trees to Fit Left-Truncated and Right-Censored and\nInterval-Censored Survival Data","Description":"Recursive partition algorithms designed for fitting survival trees with left-truncated and right-censored (LTRC) data, as well as interval-censored data.\n The LTRC trees can also be used to fit survival trees with time-varying covariates.","Published":"2017-02-28","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ltsa","Version":"1.4.6","Title":"Linear Time Series Analysis","Description":"Methods for linear time series modelling.\n Methods are given for loglikelihood computation, forecasting\n and simulation.","Published":"2015-12-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ltsbase","Version":"1.0.1","Title":"Ridge and Liu Estimates based on LTS (Least Trimmed Squares)\nMethod","Description":"A tool to compute Ridge and Liu estimators based on the LTS method in multiple linear regression analysis.","Published":"2013-08-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ltsk","Version":"1.0.4","Title":"Local Time Space Kriging","Description":"Implements local spatial and local spatiotemporal Kriging based on local spatial and local spatiotemporal variograms, respectively.","Published":"2015-09-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ltxsparklines","Version":"1.1.2","Title":"Lightweight Sparklines for a LaTeX Document","Description":"Sparklines are small plots (about one line of text high),\n made popular by Edward Tufte. 
This package is the interface from R\n to the LaTeX package sparklines by Andreas Loeffer and Dan Luecking\n (). It can work with Sweave or\n knitr or other engines that produce TeX. The package can be used to\n plot vectors, matrices, data frames, time series (in ts or zoo format).","Published":"2017-01-06","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"lubridate","Version":"1.6.0","Title":"Make Dealing with Dates a Little Easier","Description":"Functions to work with date-times and time-spans: fast and user\n friendly parsing of date-time data, extraction and updating of components of\n a date-time (years, months, days, hours, minutes, and seconds), algebraic\n manipulation on date-time and time-span objects. The 'lubridate' package has\n a consistent and memorable syntax that makes working with dates easy and\n fun.","Published":"2016-09-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"luca","Version":"1.0-5","Title":"Likelihood inference from case-control data Under Covariate\nAssumptions (LUCA)","Description":"Likelihood inference in case-control studies of a rare\n disease under independence or simple dependence of genetic and\n non-genetic covariates","Published":"2009-05-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"lucid","Version":"1.4","Title":"Printing Floating Point Numbers in a Human-Friendly Format","Description":"Print vectors (and data frames) of floating point numbers\n using a non-scientific format optimized for human readers. 
Vectors\n of numbers are rounded using significant digits, aligned at the\n decimal point, and all zeros trailing the decimal point are dropped.","Published":"2016-10-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"lucr","Version":"0.2.0","Title":"Currency Formatting and Conversion","Description":"Reformat currency-based data as numeric values (or numeric values\n as currency-based data) and convert between currencies.","Published":"2016-10-17","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ludic","Version":"0.1.5","Title":"Linkage Using Diagnosis Codes","Description":"Probabilistic record linkage without direct identifiers using only diagnosis codes.","Published":"2016-12-29","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"lulcc","Version":"1.0.2","Title":"Land Use Change Modelling in R","Description":"Classes and methods for spatially explicit land use change\n modelling in R.","Published":"2017-03-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"lumberjack","Version":"0.1.0","Title":"Track Changes in Data the Tidy Way","Description":"A function composition ('pipe') operator and extensible \n framework that allows for easy logging of changes in data.","Published":"2017-06-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"lumendb","Version":"0.2.2","Title":"Lumen Database API Client","Description":"A simple client for the Lumen Database \n (formerly Chilling Effects) API.","Published":"2017-02-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Luminescence","Version":"0.7.4","Title":"Comprehensive Luminescence Dating Data Analysis","Description":"A collection of various R functions for the purpose of Luminescence\n dating data analysis. 
This includes, amongst others, data import, export,\n application of age models, curve deconvolution, sequence analysis and\n plotting of equivalent dose distributions.","Published":"2017-03-31","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"LumReader","Version":"0.1.0","Title":"TL/OSL Reader Simulator","Description":"A series of functions to estimate the detection windows of a luminescence reader based on the filters and the photomultiplier (PMT) selected. These functions also allow the user to simulate a luminescence experiment based on the thermoluminescence (TL) or the optically stimulated luminescence (OSL) properties of a material.","Published":"2017-01-27","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"lunar","Version":"0.1-04","Title":"Lunar Phase & Distance, Seasons and Other Environmental Factors","Description":"Provides functions to calculate lunar and other environmental\n covariates.","Published":"2014-09-08","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"luzlogr","Version":"0.2.0","Title":"Lightweight Logging for R Scripts","Description":"Provides flexible but lightweight logging facilities for R scripts.\n Supports priority levels for logs and messages, flagging messages, capturing\n script output, switching logs, and logging to files or connections.","Published":"2016-02-25","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"lvm4net","Version":"0.2","Title":"Latent Variable Models for Networks","Description":"Latent variable models for network data using fast inferential\n procedures.","Published":"2015-04-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"LVMMCOR","Version":"0.01.1","Title":"A Latent Variable Model for Mixed Continuous and Ordinal\nResponses","Description":"A model for mixed ordinal and continuous responses is\n presented where the heteroscedasticity of the variance of the\n continuous response is also modeled. 
In this model the ordinal\n response can depend on the continuous response. The aim\n is to use an approach similar to that of Heckman (1978) for the\n joint modelling of the ordinal and continuous responses. With\n this model, the dependence between responses can be taken into\n account by the correlation between errors in the models for\n continuous and ordinal responses.","Published":"2013-06-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"lvnet","Version":"0.3.1","Title":"Latent Variable Network Modeling","Description":"Estimate, fit and compare Structural Equation Models (SEM) and network models (Gaussian Graphical Models; GGM) using OpenMx. Allows for two possible generalizations to include GGMs in SEM: GGMs can be used between latent variables (latent network modeling; LNM) or between residuals (residual network modeling; RNM).","Published":"2017-03-28","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"lvplot","Version":"0.2.0","Title":"Letter Value 'Boxplots'","Description":"Implements the letter value 'boxplot' which extends the standard\n 'boxplot' to deal with both larger and smaller numbers of data points\n by dynamically selecting the appropriate number of letter values to display.","Published":"2016-05-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"LW1949","Version":"1.1.0","Title":"An Automated Approach to Evaluating Dose-Effect Experiments\nFollowing Litchfield and Wilcoxon (1949)","Description":"The manual approach of Litchfield and Wilcoxon (1949)\n \n for evaluating dose-effect experiments\n is automated so that the computer can do the work. 
","Published":"2017-03-20","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"lxb","Version":"1.5","Title":"Fast LXB File Reader","Description":"Functions to quickly read LXB parameter data.","Published":"2016-03-02","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"lymphclon","Version":"1.3.0","Title":"Accurate Estimation of Clonal Coincidences and Abundances from\nBiological Replicates","Description":"We provide a clonality score estimator that takes full advantage of the multi-biological-replicate structure of modern sequencing experiments; it specifically takes into account the reality that, typically, the clonal coverage is well below 0.1%.","Published":"2014-11-11","License":"LGPL-2","snapshot_date":"2017-06-23"} {"Package":"LZeroSpikeInference","Version":"1.0.1","Title":"Exact Spike Train Inference via L0 Optimization","Description":"An implementation of algorithms described in Jewell and Witten (2017) . ","Published":"2017-05-18","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"m2b","Version":"1.0","Title":"Movement to Behaviour Inference using Random Forest","Description":"Prediction of behaviour from movement \n\tcharacteristics using observation and random forest for the analysis of movement\n\tdata in ecology.\n\tFrom movement information (speed, bearing...) the model predicts the\n\tobserved behaviour (movement, foraging...) using random forest. The\n\tmodel can then extrapolate behavioural information to movement data\n\twithout direct observation of behaviours.\n\tThe specificity of this method relies on the derivation of multiple predictor variables from the\n\tmovement data over a range of temporal windows. This procedure makes it possible to capture\n\tas much information as possible on the changes and variations of movement and\n\tensures the use of the random forest algorithm to its best capacity. 
The method\n\tis very generic, applicable to any set of data providing movement data together with\n\tobservation of behaviour.","Published":"2017-05-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"m2r","Version":"1.0.0","Title":"Macaulay2 in R","Description":"Persistent interface to Macaulay2 ()\n\tand front-end tools facilitating its use in the R ecosystem.","Published":"2017-06-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"M3","Version":"0.3","Title":"Reading M3 files","Description":"This package contains functions to read in and manipulate\n air quality model output from Models3-formatted files. This\n format is used by the Community Multiscale Air Quality (CMAQ)\n model.","Published":"2012-05-16","License":"Unlimited","snapshot_date":"2017-06-23"} {"Package":"M4comp","Version":"0.0.1","Title":"Data from the M4 Time Series Forecasting Competition","Description":"The 10,000 time series from the M4 time series forecasting\n competition, including print and plot functions.","Published":"2016-01-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"m4fe","Version":"0.1","Title":"Models for Financial Economics","Description":"Provides binomial tree models for European, American and Asian\n Options as well as Interest Rates. Monte Carlo Simulation and Methods for\n Solving Differential Equations are also included.","Published":"2014-09-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"maboost","Version":"1.0-0","Title":"Binary and Multiclass Boosting Algorithms","Description":"Performs binary and multiclass boosting in maximum-margin, sparse, smooth and normal settings\n as described in \"A Boosting Framework on Grounds of Online Learning\" by T. Naghibi and B. Pfister (2014). 
\n For further information regarding the algorithms, please refer to http://arxiv.org/abs/1409.7202","Published":"2014-11-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MAc","Version":"1.1","Title":"Meta-Analysis with Correlations","Description":"This is an integrated meta-analysis package for conducting\n a correlational research synthesis. One of the unique features\n of this package is its integration of user-friendly\n functions to facilitate statistical analyses at each stage in a\n meta-analysis with correlations. It uses recommended procedures\n as described in The Handbook of Research Synthesis and\n Meta-Analysis (Cooper, Hedges, & Valentine, 2009).","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"macc","Version":"1.0.0","Title":"Mediation Analysis of Causality under Confounding","Description":"Performs causal mediation analysis under confounding or correlated errors. This package includes a single-level mediation model, a two-level mediation model, and a three-level mediation model for data with hierarchical structures. Under the two/three-level mediation model, the correlation parameter is identifiable and is estimated based on a hierarchical-likelihood, a marginal-likelihood or a two-stage method. See reference for details (Zhao, Y., & Luo, X. (2014). Estimating Mediation Effects under Correlated Errors with an Application to fMRI. arXiv preprint arXiv:1410.7217. 
).","Published":"2016-11-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"machina","Version":"0.1.6","Title":"Machina Time Series Generation and Backtesting","Description":"Connects to and allows the creation\n of time series, and running backtests on selected strategy if requested.","Published":"2016-10-07","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"machQA","Version":"0.1.4","Title":"QA Machina Indicators","Description":"Performs Quality Analysis on Machina algebraic indicators 'sma' (simple moving average), 'wavg' (weighted average),'xavg' (exponential moving average), 'hma' (Hull moving average), 'adma' (adaptive moving average), 'tsi' (true strength index), 'rsi' (relative strength index), 'gauss' (Gaussian elimination), 'momo' (momentum), 't3' (triple exponential moving average), 'macd' (moving average convergence divergence). Machina is a strategy creation and backtesting engine for quants and financial professionals (see for more information).","Published":"2016-08-10","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"macleish","Version":"0.3.0","Title":"Retrieve Data from MacLeish Field Station","Description":"Download data from the Ada and Archibald MacLeish Field \n Station in Whately, MA. The Ada \n and Archibald MacLeish Field Station is a 260-acre patchwork of \n forest and farmland located in West Whately, MA that provides opportunities \n for faculty and students to pursue environmental research, outdoor \n education, and low-impact recreation \n (see for more information). \n This package contains \n weather data over several years, and spatial data on various man-made and \n natural structures.","Published":"2016-06-09","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"MAclinical","Version":"1.0-5","Title":"Class prediction based on microarray data and clinical\nparameters","Description":"'Maclinical' implements class prediction using both\n microarray data and clinical parameters. 
It addresses the\n question of the additional predictive value of microarray data.\n Class prediction is performed using a two-step method combining\n (pre-validated) PLS dimension reduction and random forests.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MAd","Version":"0.8-2","Title":"Meta-Analysis with Mean Differences","Description":"A collection of functions for conducting a meta-analysis with mean differences data. It uses recommended procedures as described in The Handbook of Research Synthesis and Meta-Analysis (Cooper, Hedges, & Valentine, 2009).","Published":"2014-12-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"mada","Version":"0.5.7","Title":"Meta-Analysis of Diagnostic Accuracy","Description":"Provides functions for diagnostic meta-analysis. Besides basic analysis and visualization, the bivariate model of Reitsma et al. (2005), which is equivalent to the HSROC model of Rutter & Gatsonis (2001), can be fitted. A newer approach to diagnostic meta-analysis by Holling et al. (2012) is also available. Standard methods like summary, plot and so on are provided.","Published":"2015-02-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"maddison","Version":"0.1","Title":"Maddison Project Database","Description":"Contains the Maddison Project database, which provides estimates of\n GDP per capita for all countries in the world between AD 1 and 2010. See\n http://www.ggdc.net/maddison for more information.","Published":"2015-12-09","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"madness","Version":"0.2.2","Title":"Automatic Differentiation of Multivariate Operations","Description":"An object that supports automatic differentiation\n of matrix- and multidimensional-valued functions with \n respect to multidimensional independent variables. 
\n Automatic differentiation is via 'forward accumulation'.","Published":"2017-04-26","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"MADPop","Version":"1.1","Title":"MHC Allele-Based Differencing Between Populations","Description":"Tools for the analysis of population differences using the Major Histocompatibility Complex (MHC) genotypes of samples having a variable number of alleles (1-4) recorded for each individual. A hierarchical Dirichlet-Multinomial model on the genotype counts is used to pool small samples from multiple populations for pairwise tests of equality. Bayesian inference is implemented via the 'rstan' package. Bootstrapped and posterior p-values are provided for chi-squared and likelihood ratio tests of equal genotype probabilities.","Published":"2017-01-18","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"madr","Version":"1.0.0","Title":"Model Averaged Double Robust Estimation","Description":"Estimates average treatment effects using model average double robust (MA-DR) estimation. The MA-DR estimator is defined as a weighted average of double robust estimators, where each double robust estimator corresponds to a specific choice of the outcome model and the propensity score model. The MA-DR estimator extends the desirable double robustness property by achieving consistency under the much weaker assumption that either the true propensity score model or the true outcome model be within a specified, possibly large, class of models.","Published":"2016-09-05","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"madrat","Version":"1.8.0","Title":"May All Data be Reproducible and Transparent (MADRaT) *","Description":"Provides a framework which should improve reproducibility and transparency in data processing. It provides functionality such as automatic meta data creation and management, rudimentary quality management, data caching, work-flow management and data aggregation. \n * The title is a wish, not a promise. 
By no means do we expect this package to deliver everything that is needed to achieve full reproducibility and transparency, but we believe that it supports efforts in this direction. ","Published":"2017-05-29","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"mads","Version":"0.1.5","Title":"Multi-Analysis Distance Sampling","Description":"Performs distance sampling analyses on a number of species\n accounting for unidentified sightings, model uncertainty and covariate\n uncertainty.","Published":"2017-01-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"madsim","Version":"1.2.1","Title":"A Flexible Microarray Data Simulation Model","Description":"This function allows the user to generate a synthetic two-condition \n microarray dataset whose behavior is similar to that currently \n observed with common platforms. The user provides a subset of parameters.\n Available default parameter settings can be modified.","Published":"2016-12-07","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"Maeswrap","Version":"1.7","Title":"Wrapper Functions for MAESTRA/MAESPA","Description":"A bundle of functions for modifying MAESTRA/MAESPA input files, reading output files, and visualizing the stand in 3D. Handy for running sensitivity analyses, scenario analyses, etc.","Published":"2015-06-30","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"mafs","Version":"0.0.2","Title":"Multiple Automatic Forecast Selection","Description":"Fits several forecast models available from the forecast package\n and selects the best one according to an error metric. 
Its main function\n is select_forecast().","Published":"2017-01-25","License":"GPL-2 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"magclass","Version":"4.39","Title":"Data Class and Tools for Handling Spatial-Temporal Data","Description":"Data class for increased interoperability working with spatial-\n temporal data together with corresponding functions and methods (conversions,\n basic calculations and basic data manipulation). The class distinguishes\n between spatial, temporal and other dimensions to facilitate the development\n and interoperability of tools built for it. Additional features are name-based\n addressing of data and internal consistency checks (e.g. checking for the right\n data order in calculations).","Published":"2017-05-26","License":"LGPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"magic","Version":"1.5-6","Title":"create and investigate magic squares","Description":"A collection of efficient, vectorized algorithms for the\n creation and investigation of magic squares and hypercubes, including\n a variety of functions for the manipulation and analysis of\n arbitrarily dimensioned arrays. The package includes methods for\n creating normal magic squares of any order greater than 2. The\n ultimate intention is for the package to be a computerized embodiment of\n all magic square knowledge, including direct numerical verification\n of properties of magic squares (such as recent results on the\n determinant of odd-ordered semimagic squares). Some antimagic\n functionality is included. The package also\n serves as a rebuttal to the often-heard comment \"I thought R\n was just for statistics\".","Published":"2013-11-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"magicaxis","Version":"2.0.1","Title":"Pretty Scientific Plotting with Minor-Tick and Log Minor-Tick\nSupport","Description":"Functions to make useful (and pretty) plots for scientific plotting. 
Additional plotting features are added for base plotting, with particular emphasis on making attractive log axis plots.","Published":"2017-05-11","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"magicfor","Version":"0.1.0","Title":"Magic Functions to Obtain Results from for Loops","Description":"Magic functions to obtain results from for loops.","Published":"2016-12-18","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"magick","Version":"0.4","Title":"Advanced Image-Processing in R","Description":"Bindings to 'ImageMagick': the most comprehensive open-source image\n processing library available. Supports many common formats (png, jpeg, tiff,\n pdf, etc) and manipulations (rotate, scale, crop, trim, flip, blur, etc).\n All operations are vectorized via the Magick++ STL meaning they operate either\n on a single frame or a series of frames for working with layers, collages,\n or animation. In RStudio images are automatically previewed when printed to\n the console, resulting in an interactive editing environment.","Published":"2017-03-26","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"MAGNAMWAR","Version":"1.0.0","Title":"A Pipeline for Meta-Genome Wide Association","Description":"Correlates variation within the meta-genome to target species\n phenotype variations using association studies.","Published":"2017-04-26","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"MagneticMap","Version":"1.0","Title":"Magnetic Laplacian Matrix and Magnetic Eigenmap Visualization","Description":"Constructs the normalized magnetic Laplacian matrix of a square matrix, and returns the eigenvectors and a visualization of the magnetic eigenmap.","Published":"2016-08-03","License":"GNU General Public License version 2","snapshot_date":"2017-06-23"} {"Package":"magree","Version":"1.0","Title":"Implements the O'Connell-Dobson-Schouten Estimators of Agreement\nfor Multiple 
Observers","Description":"Implements an interface to the legacy Fortran code from O'Connell and Dobson (1984) . Implements Fortran 77 code for the methods developed by Schouten (1982) . Includes estimates of average agreement for each observer and average agreement for each subject.","Published":"2016-12-09","License":"GPL-3 | GPL-2","snapshot_date":"2017-06-23"} {"Package":"magrittr","Version":"1.5","Title":"A Forward-Pipe Operator for R","Description":"Provides a mechanism for chaining commands with a\n new forward-pipe operator, %>%. This operator will forward a\n value, or the result of an expression, into the next function\n call/expression. There is flexible support for the type\n of right-hand side expressions. For more information, see\n package vignette.\n To quote Rene Magritte, \"Ceci n'est pas un pipe.\"","Published":"2014-11-22","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"maGUI","Version":"2.2","Title":"A Graphical User Interface for Microarray Data Analysis and\nAnnotation","Description":"Provides a graphical user interface for analysing DNA microarray data. It performs functional enrichment on genes of interest, identifies gene symbols and also builds co-expression networks. 
","Published":"2017-03-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"mail","Version":"1.0","Title":"Sending Email Notifications from R","Description":"Easy-to-use package for sending email notifications with status information from R.","Published":"2011-11-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mailR","Version":"0.4.1","Title":"A Utility to Send Emails from R","Description":"Interface to Apache Commons Email to send emails\n from R.","Published":"2015-01-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"MAINT.Data","Version":"1.1.2","Title":"Model and Analyse Interval Data","Description":"Implements methodologies for modelling interval data by Normal\n and Skew-Normal distributions, considering appropriate parameterizations of\n the variance-covariance matrix that take into account the intrinsic nature of\n interval data and lead to four different possible configuration structures.\n The Skew-Normal parameters can be estimated by maximum likelihood, while Normal\n parameters may be estimated by maximum likelihood or robust trimmed maximum\n likelihood methods.","Published":"2017-06-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"makedummies","Version":"1.0","Title":"Create Dummy Variables from Categorical Data","Description":"Create dummy variables from categorical data.\n This package can convert categorical data (factor and ordered) into\n dummy variables and handle multiple columns simultaneously.\n An option lets the user select whether a dummy variable for the base group\n is included (for principal component analysis/factor analysis) or\n excluded (for regression analysis).","Published":"2017-01-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MakefileR","Version":"1.0","Title":"Create 'Makefiles' Using R","Description":"A user-friendly interface for the construction of\n 'Makefiles'.","Published":"2016-01-08","License":"GPL-3","snapshot_date":"2017-06-23"} 
{"Package":"makeFlow","Version":"1.0.2","Title":"Visualizing Sequential Classifications","Description":"A user-friendly tool for visualizing categorical or group movement.","Published":"2016-08-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"makeProject","Version":"1.0","Title":"Creates an empty package framework for the LCFD format","Description":"This package creates an empty framework of files and\n directories for the \"Load, Clean, Func, Do\" structure described\n by Josh Reich.","Published":"2012-05-18","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"malani","Version":"1.0","Title":"Machine Learning Assisted Network Inference","Description":"Find dark genes. These genes are often disregarded due to no detected mutation or differential expression, but are important in coordinating the functionality in cancer networks.","Published":"2016-09-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"MALDIquant","Version":"1.16.2","Title":"Quantitative Analysis of Mass Spectrometry Data","Description":"A complete analysis pipeline for matrix-assisted laser\n desorption/ionization-time-of-flight (MALDI-TOF) and other\n two-dimensional mass spectrometry data. 
In addition to commonly\n used plotting and processing methods it includes distinctive\n features, namely baseline subtraction methods such as\n morphological filters (TopHat) or the statistics-sensitive\n non-linear iterative peak-clipping algorithm (SNIP), peak\n alignment using warping functions, handling of replicated\n measurements as well as allowing spectra with different\n resolutions.","Published":"2017-04-04","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"MALDIquantForeign","Version":"0.10","Title":"Import/Export Routines for MALDIquant","Description":"Functions for reading (tab, csv, Bruker fid, Ciphergen\n XML, mzXML, mzML, imzML, Analyze 7.5, CDF, mMass MSD) and\n writing (tab, csv, mMass MSD, mzML, imzML) different file\n formats of mass spectrometry data into/from MALDIquant objects.","Published":"2015-11-01","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"mallet","Version":"1.0","Title":"A wrapper around the Java machine learning tool MALLET","Description":"This package allows you to train topic models in mallet and load results directly into R.","Published":"2013-08-09","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"MAMS","Version":"1.01","Title":"Designing Multi-Arm Multi-Stage Studies","Description":"Designing multi-arm multi-stage studies with normal endpoints and known variance.","Published":"2017-02-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MAMSE","Version":"0.2-1","Title":"Calculation of Minimum Averaged Mean Squared Error (MAMSE)\nWeights","Description":"Calculates the nonparametric adaptive MAMSE\n weights for univariate, right-censored or multivariate data.\n The MAMSE weights can be used in a weighted likelihood or to\n define a mixture of empirical distribution functions. 
The package\n includes functions for the MAMSE weighted Kaplan-Meier estimate\n and for MAMSE weighted ROC curves.","Published":"2017-01-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"managelocalrepo","Version":"0.1.5","Title":"Manage a CRAN-Style Local Repository","Description":"This will allow easier management of a CRAN-style repository on\n local networks (i.e. not on CRAN). This might be necessary where hosted\n packages contain intellectual property owned by a corporation.","Published":"2015-04-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MANCIE","Version":"1.4","Title":"Matrix Analysis and Normalization by Concordant Information\nEnhancement","Description":"High-dimensional data integration is a critical but difficult problem in genomics research because of potential biases from high-throughput experiments. We present MANCIE, a computational method for integrating two genomic data sets with homogenous dimensions from different sources based on a PCA procedure as an approximation to a Bayesian approach.","Published":"2016-03-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"mangoTraining","Version":"1.0-7","Title":"Mango Solutions Training Datasets","Description":"Datasets designed to be used in conjunction with Mango Solutions\n training materials and the book SAMS Teach Yourself R in 24 Hours (ISBN: 978-0-672-33848-9).","Published":"2017-01-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Mangrove","Version":"1.21","Title":"Risk Prediction on Trees","Description":"Methods for performing genetic risk\n prediction from genotype data. 
You can use it to perform risk\n prediction for individuals, or for families with missing data.","Published":"2017-02-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"manhattanly","Version":"0.2.0","Title":"Interactive Q-Q and Manhattan Plots Using 'plotly.js'","Description":"Create interactive Q-Q, manhattan and volcano plots that are usable from the R console,\n in the 'RStudio' viewer pane, in 'R Markdown' documents, and in 'Shiny' apps.\n Hover the mouse pointer over a point to show details or drag a rectangle to\n zoom. A manhattan plot is a popular graphical method for visualizing results\n from high-dimensional data analysis such as an (epi)genome-wide association study\n (GWAS or EWAS), in which p-values, Z-scores, or test statistics are plotted on a scatter\n plot against their genomic position. Manhattan plots are used for visualizing\n potential regions of interest in the genome that are associated with a phenotype.\n Interactive manhattan plots allow the inspection of specific values (e.g. rs number or\n gene name) by hovering the mouse over a cell, as well as zooming into a region of the\n genome (e.g. a chromosome) by dragging a rectangle around the relevant area.\n This work is based on the 'qqman' package by Stephen Turner and the 'plotly.js'\n engine. It produces similar manhattan and Q-Q plots as the 'manhattan' and 'qq'\n functions in the 'qqman' package, with the advantage of including extra annotation \n information and interactive web-based visualizations directly from R. 
\n Once uploaded to a 'plotly' account, 'plotly' graphs (and the data behind them) \n can be viewed and modified in a web browser.","Published":"2016-11-16","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"manifestoR","Version":"1.2.4","Title":"Access and Process Data and Documents of the Manifesto Project","Description":"Provides access to coded election programmes from the Manifesto\n Corpus and to the Manifesto Project's Main Dataset and routines to analyse this\n data. The Manifesto Project collects and\n analyses election programmes across time and space to measure the political\n preferences of parties. The Manifesto Corpus contains the collected and\n annotated election programmes in the Corpus format of the package 'tm' to enable\n easy use of text processing and text mining functionality. Specific functions\n for scaling of coded political texts are included.","Published":"2017-05-10","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"ManifoldOptim","Version":"0.1.3","Title":"An R Interface to the 'ROPTLIB' Library for Riemannian Manifold\nOptimization","Description":"An R interface to the 'ROPTLIB' optimization library (see for more information). Optimize real-valued functions over manifolds such as Stiefel, Grassmann, and Symmetric Positive Definite matrices.","Published":"2017-03-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"manipulate","Version":"1.0.1","Title":"Interactive Plots for RStudio","Description":"Interactive plotting functions for use within RStudio.\n The manipulate function accepts a plotting expression and a set of\n controls (e.g. slider, picker, checkbox, or button) which are used\n to dynamically change values within the expression. 
When a value is\n changed using its corresponding control the expression is\n automatically re-executed and the plot is redrawn.","Published":"2014-12-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"manipulateWidget","Version":"0.7.0","Title":"Add Even More Interactivity to Interactive Charts","Description":"Like package 'manipulate' does for static graphics, this package\n helps to easily add controls like sliders, pickers, checkboxes, etc. that \n can be used to modify the input data or the parameters of an interactive \n chart created with package 'htmlwidgets'.","Published":"2017-06-17","License":"GPL (>= 2) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ManlyMix","Version":"0.1.7","Title":"Manly Mixture Modeling and Model-Based Clustering","Description":"The utility of this package includes finite mixture modeling and model-based clustering through Manly mixture models by Zhu and Melnykov (2016) . It also provides capabilities for forward and backward model selection procedures. 
","Published":"2016-12-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MANOVA.RM","Version":"0.0.5","Title":"Analysis of Multivariate Data and Repeated Measures Designs","Description":"Implemented are various tests for semi-parametric repeated measures\n and general MANOVA designs that assume neither multivariate normality nor\n covariance homogeneity, i.e., the procedures are applicable for a wide range\n of general multivariate factorial designs.","Published":"2017-04-13","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"ManyTests","Version":"1.2","Title":"Multiple Testing Procedures of Cox (2011) and Wong and Cox\n(2007)","Description":"Performs the multiple testing procedures of Cox (2011) and Wong and Cox (2007) .","Published":"2016-10-31","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Map2NCBI","Version":"1.1","Title":"Mapping Markers to the Nearest Genomic Feature","Description":"Allows the user to generate a list of features (gene, pseudo, RNA, \n CDS, and/or UTR) directly from the NCBI database for any species with a current \n build available. Option to save downloaded and formatted files is available, \n and the user can prioritize the feature list based on type and assembly builds \n present in the current build used. The user can then use the list of features \n generated or provide a list to map a set of markers (designed for SNP markers \n with a single base pair position available) to the closest feature based on \n the map build. This function does require map positions of the markers to be \n provided and the positions should be based on the build being queried through \n NCBI.","Published":"2015-11-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MAPA","Version":"2.0.2","Title":"Multiple Aggregation Prediction Algorithm","Description":"Functions and wrappers for using the Multiple Aggregation Prediction Algorithm (MAPA) for time series forecasting. 
MAPA models and forecasts time series at multiple temporal aggregation levels, thus strengthening and attenuating the various time series components for better holistic estimation of its structure.","Published":"2017-06-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mapdata","Version":"2.2-6","Title":"Extra Map Databases","Description":"Supplement to maps package, providing the larger and/or\n\thigher-resolution databases.","Published":"2016-01-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"mapfit","Version":"0.9.7","Title":"A Tool for PH/MAP Parameter Estimation","Description":"Estimation methods for phase-type\n distribution (PH) and Markovian arrival process (MAP) from\n empirical data (point and grouped data) and density function.","Published":"2015-03-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MapGAM","Version":"1.0","Title":"Mapping Smoothed Effect Estimates from Individual-Level Data","Description":"Contains functions for mapping odds ratios, hazard ratios, or other effect estimates using individual-level data such as case-control study data, using generalized additive models (GAMs) or Cox models for smoothing with a two-dimensional predictor (e.g., geolocation or exposure to chemical mixtures) while adjusting linearly for confounding variables, using methods described by Kelsall and Diggle (1998), Webster et al. (2006), and Bai et al. (submitted). Includes convenient functions for mapping point estimates and confidence intervals, efficient control sampling, and permutation tests for the null hypothesis that the two-dimensional predictor is not associated with the outcome variable (adjusting for confounders). ","Published":"2016-09-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"MAPLES","Version":"1.0","Title":"Smoothed age profile estimation","Description":"MAPLES is a general method for the estimation of age\n profiles that uses standard micro-level demographic survey\n data. 
The aim is to estimate smoothed age profiles and relative\n risks for time-fixed and time-varying covariates.","Published":"2011-04-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mapmisc","Version":"1.5.0","Title":"Utilities for Producing Maps","Description":"A minimal, light-weight set of tools for producing nice looking maps in R, with support for map projections.","Published":"2016-05-21","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"mapplots","Version":"1.5","Title":"Data Visualisation on Maps","Description":"Create simple maps; add sub-plots like pie plots to a map or any other plot; format, plot and export gridded data. The package was developed for displaying fisheries data but most functions can be used for more generic data visualisation. ","Published":"2014-10-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mapproj","Version":"1.2-5","Title":"Map Projections","Description":"Converts latitude/longitude into projected coordinates.","Published":"2017-06-08","License":"Lucent Public License","snapshot_date":"2017-06-23"} {"Package":"mapr","Version":"0.3.4","Title":"Visualize Species Occurrence Data","Description":"Utilities for visualizing species occurrence data. Includes\n functions to visualize occurrence data from 'spocc', 'rgbif',\n and other packages. Mapping options included for base R plots, 'ggplot2',\n 'ggmap', 'leaflet' and 'GitHub' 'gists'.","Published":"2017-04-11","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"maps","Version":"3.2.0","Title":"Draw Geographical Maps","Description":"Display of maps. 
Projection code and larger maps are in\n separate packages ('mapproj' and 'mapdata').","Published":"2017-06-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"mapStats","Version":"2.4","Title":"Geographic Display of Survey Data Statistics","Description":"Automated calculation and visualization of survey data statistics on a color-coded map.","Published":"2015-04-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"maptools","Version":"0.9-2","Title":"Tools for Reading and Handling Spatial Objects","Description":"Set of tools for manipulating and reading geographic data, in particular 'ESRI Shapefiles'; C code used from 'shapelib'. It includes binary access to 'GSHHG' shoreline files. The package also provides interface wrappers for exchanging spatial objects with packages such as 'PBSmapping', 'spatstat', 'maps', 'RArcInfo', 'Stata tmap', 'WinBUGS', 'Mondrian', and others.","Published":"2017-03-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"maptpx","Version":"1.9-2","Title":"MAP Estimation of Topic Models","Description":"Posterior maximization for topic models (LDA) in text analysis,\n\tas described in Taddy (2012) `on estimation and selection for topic models'. Previous versions of this code were included as part of the textir package. 
If you want to take advantage of openmp parallelization, uncomment the relevant flags in src/MAKEVARS before compiling.","Published":"2015-04-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"maptree","Version":"1.4-7","Title":"Mapping, pruning, and graphing tree models","Description":"Functions with example data for graphing, pruning, and\n mapping models from hierarchical clustering, and classification\n and regression trees.","Published":"2012-11-26","License":"Unlimited","snapshot_date":"2017-06-23"} {"Package":"mapview","Version":"2.0.1","Title":"Interactive Viewing of Spatial Objects in R","Description":"Methods to view spatial objects interactively.","Published":"2017-05-07","License":"GPL (>= 3) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"mAr","Version":"1.1-2","Title":"Multivariate AutoRegressive analysis","Description":"R functions for multivariate autoregressive analysis","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MAR1","Version":"1.0","Title":"Multivariate Autoregressive Modeling for Analysis of Community\nTime-Series Data","Description":"The MAR1 package provides basic tools for preparing\n ecological community time-series data for MAR modeling,\n building MAR-1 models via model selection and bootstrapping,\n and visualizing and exporting model results. It is intended to\n make MAR analysis (sensu Ives et al. (2003) Analysis of\n community stability and ecological interactions from\n time-series data) a more accessible tool for anyone studying\n community dynamics. The user need not necessarily be familiar\n with time-series modeling or command-based statistics programs\n such as R.","Published":"2013-03-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"mar1s","Version":"2.1","Title":"Multiplicative AR(1) with Seasonal Processes","Description":"Multiplicative AR(1) with Seasonal is a stochastic\n process model built on top of AR(1). 
The package provides the\n following procedures for MAR(1)S processes: fit, compose, decompose,\n advanced simulate and predict.","Published":"2013-10-27","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"march","Version":"1.4","Title":"Markov Chains","Description":"Computation of various Markovian models for categorical data\n including homogeneous Markov chains of any order, MTD models, Hidden Markov\n models, and Double Chain Markov Models.","Published":"2016-05-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"marcher","Version":"0.0-2","Title":"Migration and Range Change Estimation in R","Description":"A set of tools for likelihood-based estimation, model selection and testing of two- and three-range shift and migration models for animal movement data as described in Gurarie et al. (2017) . Provided movement data (X, Y and Time), including irregularly sampled data, functions estimate the time, duration and location of one or two range shifts, as well as the ranging area and auto-correlation structure of the movement. Tests assess, for example, whether the shift was \"significant\", and whether a two-shift migration was a true return migration.","Published":"2017-04-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"marelac","Version":"2.1.6","Title":"Tools for Aquatic Sciences","Description":"Datasets, constants, conversion factors, and utilities for 'MArine', 'Riverine',\n 'Estuarine', 'LAcustrine' and 'Coastal' science. \n The package contains among others: (1) chemical and\n physical constants and datasets, e.g. atomic weights, gas\n constants, the earth's bathymetry; (2) conversion factors\n (e.g. gram to mol to liter, barometric units,\n temperature, salinity); (3) physical functions, e.g. 
to\n estimate concentrations of conservative substances, gas\n transfer and diffusion coefficients, the Coriolis force\n and gravity; (4) thermophysical properties of the\n seawater, as from the UNESCO polynomial or from the more\n recent derivation based on a Gibbs function.","Published":"2016-09-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MareyMap","Version":"1.3.3","Title":"Estimation of Meiotic Recombination Rates Using Marey Maps","Description":"Local recombination rates are graphically estimated across a genome using Marey maps.","Published":"2016-11-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"marg","Version":"1.2-2","Title":"Approximate marginal inference for regression-scale models","Description":"Likelihood inference based on higher order approximations \n for linear nonnormal regression models.","Published":"2014-04-03","License":"GPL (>= 2) | file LICENCE","snapshot_date":"2017-06-23"} {"Package":"margins","Version":"0.3.0","Title":"Marginal Effects for Model Objects","Description":"An R port of Stata's 'margins' command, which can be used to\n calculate marginal (or partial) effects from model objects.","Published":"2017-03-22","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"marima","Version":"2.2","Title":"Multivariate ARIMA and ARIMA-X Analysis","Description":"Multivariate ARIMA and ARIMA-X estimation using Spliid's \n algorithm (marima()) and simulation (marima.sim()).","Published":"2017-01-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"marinespeed","Version":"0.1.0","Title":"Benchmark Data Sets and Functions for Marine Species\nDistribution Modelling","Description":"A collection of marine species benchmark data sets and functions\n for species distribution modelling (ecological niche modelling).","Published":"2017-02-17","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"markdown","Version":"0.8","Title":"'Markdown' Rendering for 
R","Description":"Provides R bindings to the 'Sundown' 'Markdown' rendering library\n (https://github.com/vmg/sundown). 'Markdown' is a plain-text formatting\n syntax that can be converted to 'XHTML' or other formats. See\n http://en.wikipedia.org/wiki/Markdown for more information about 'Markdown'.","Published":"2017-04-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"marked","Version":"1.1.13","Title":"Mark-Recapture Analysis for Survival and Abundance Estimation","Description":"Functions for fitting various models to capture-recapture data\n including fixed and mixed-effects Cormack-Jolly-Seber(CJS) for survival\n estimation and POPAN structured Jolly-Seber models for abundance\n estimation. Includes a CJS models that concurrently estimates and corrects\n for tag loss. Hidden Markov model (HMM) implementations of CJS and\n multistate models with and without state uncertainty.","Published":"2016-12-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"maRketSim","Version":"0.9.2","Title":"Market simulator for R","Description":"maRketSim is a market simulator for R. It was initially designed \n\taround the bond market, with plans to expand to stocks. maRketSim is\n\tbuilt around the idea of portfolios of fundamental objects. 
Therefore\n\tit is slow in its current incarnation, but allows you the flexibility of\n\tseeing exactly what is in your final results, since the objects are retained.","Published":"2013-07-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"markmyassignment","Version":"0.6.1","Title":"Automatic Marking of R Assignments","Description":"Automatic marking of R assignments for students and teachers based\n on 'testthat' test suites.","Published":"2016-08-16","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"markophylo","Version":"1.0.4","Title":"Markov Chain Models for Phylogenetic Trees","Description":"Allows for fitting of maximum likelihood models using Markov chains\n on phylogenetic trees for analysis of discrete character data. Examples of such\n discrete character data include restriction sites, gene family presence/absence,\n intron presence/absence, and gene family size data. Hypothesis-driven user-\n specified substitution rate matrices can be estimated. Allows for biologically\n realistic models combining constrained substitution rate matrices, site rate\n variation, site partitioning, branch-specific rates, allowing for non-stationary\n prior root probabilities, correcting for sampling bias, etc.","Published":"2015-12-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"markovchain","Version":"0.6.9.3","Title":"Easy Handling Discrete Time Markov Chains","Description":"Functions and S4 methods to create and manage discrete time Markov\n chains more easily. 
In addition, functions to perform statistical (fitting\n and drawing random variates) and probabilistic (analysis of their structural\n properties) analysis are provided.","Published":"2017-05-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MarkowitzR","Version":"0.9900.0","Title":"Statistical Significance of the Markowitz Portfolio","Description":"A collection of tools for analyzing significance of\n Markowitz portfolios.","Published":"2016-09-17","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"marl","Version":"1.0","Title":"Multivariate Analysis Based on Relative Likelihoods","Description":"Functions provided allow data simulation; construction of weighted relative likelihood functions; clustering and principal component analysis based on weighted relative likelihood functions.","Published":"2015-04-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"marmap","Version":"0.9.6","Title":"Import, Plot and Analyze Bathymetric and Topographic Data","Description":"Import xyz data from the NOAA (National Oceanic and Atmospheric Administration, ), GEBCO (General Bathymetric Chart of the Oceans, ) and other sources, plot xyz data to prepare publication-ready figures, analyze xyz data to extract transects, get depth / altitude based on geographical coordinates, or calculate z-constrained least-cost paths.","Published":"2017-01-10","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"marqLevAlg","Version":"1.1","Title":"An algorithm for least-squares curve fitting","Description":"This algorithm provides a numerical solution to the\n problem of minimizing a function. This is more efficient than\n the Gauss-Newton-like algorithm when starting from points very\n far from the final minimum. 
A new convergence test is\n implemented (RDM) in addition to the usual stopping criterion: the\n stopping rule is that the gradients are small enough in the\n parameters metric (GH-1G).","Published":"2013-03-18","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"MARSS","Version":"3.9","Title":"Multivariate Autoregressive State-Space Modeling","Description":"The MARSS package provides maximum-likelihood parameter estimation for constrained and unconstrained linear multivariate autoregressive state-space (MARSS) models fit to multivariate time-series data. Fitting is primarily via an Expectation-Maximization (EM) algorithm, although fitting via the BFGS algorithm (using the optim function) is also provided. MARSS models are a class of dynamic linear model (DLM) and vector autoregressive (VAR) model. Functions are provided for parametric and innovations bootstrapping, Kalman filtering and smoothing, bootstrap model selection criteria (AICb), confidence intervals via the Hessian approximation and via bootstrapping, and calculation of auxiliary residuals for detecting outliers and shocks. The user guide shows examples of using MARSS for parameter estimation for a variety of applications, model selection, dynamic factor analysis, outlier and shock detection, and addition of covariates. Type RShowDoc(\"UserGuide\", package=\"MARSS\") at the R command line to open the MARSS user guide. Online workshops (lectures and computer labs) are at http://faculty.washington.edu/eeholmes/workshops.shtml. See the NEWS file for update information.","Published":"2014-03-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MARX","Version":"0.1","Title":"Simulation, Estimation and Selection of MARX Models","Description":"Simulate, estimate (by t-MLE) and select mixed causal-noncausal autoregressive models with possibly exogenous regressors, using methods proposed in Lanne and Saikkonen (2011) and Hecq et al. 
(2016) .","Published":"2017-06-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"maSAE","Version":"0.1-5","Title":"Mandallaz' Model-Assisted Small Area Estimators","Description":"An S4 implementation of the unbiased extension of the model-\n assisted synthetic-regression estimator proposed by \n Mandallaz (2013) , \n Mandallaz et al. (2013) and \n Mandallaz (2014) . \n It yields smaller variances than the standard bias correction, \n the generalised regression estimator.","Published":"2016-05-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mason","Version":"0.2.5","Title":"Build Data Structures for Common Statistical Analysis","Description":"Use a consistent syntax to create data structures of common\n statistical techniques that can be continued in a pipe chain.\n Design the analysis, add settings and variables, construct the results, and\n polish the final structure. Rinse and repeat for any number of statistical\n techniques.","Published":"2016-07-18","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"MASS","Version":"7.3-47","Title":"Support Functions and Datasets for Venables and Ripley's MASS","Description":"Functions and datasets to support Venables and Ripley,\n \"Modern Applied Statistics with S\" (4th edition, 2002).","Published":"2017-04-21","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"MASSTIMATE","Version":"1.3","Title":"Body Mass Estimation Equations for Vertebrates","Description":"Estimation equations are from a variety of sources but are, in general, based on regressions between skeletal measurements (e.g., femoral circumference) and body mass in living taxa.","Published":"2016-01-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MasterBayes","Version":"2.54","Title":"ML and MCMC Methods for Pedigree Reconstruction and Analysis","Description":"The primary aim of MasterBayes is to use MCMC techniques to integrate over uncertainty in pedigree 
configurations estimated from molecular markers and phenotypic data. Emphasis is put on the marginal distribution of parameters that relate the phenotypic data to the pedigree. All simulation is done in compiled C++ for efficiency.","Published":"2016-10-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MAT","Version":"2.2","Title":"Multidimensional Adaptive Testing","Description":"Simulate Multidimensional Adaptive Testing","Published":"2014-11-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MATA","Version":"0.3","Title":"Model-Averaged Tail Area Wald (MATA-Wald) Confidence Interval","Description":"Calculates Model-Averaged Tail Area Wald (MATA-Wald) confidence\n intervals, which are constructed using single-model estimators and model\n weights.","Published":"2015-04-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Matching","Version":"4.9-2","Title":"Multivariate and Propensity Score Matching with Balance\nOptimization","Description":"Provides functions for multivariate and propensity score matching \n and for finding optimal balance based on a genetic search algorithm. \n A variety of univariate and multivariate metrics to\n determine if balance has been obtained are also provided.","Published":"2015-12-25","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"MatchingFrontier","Version":"1.0.0","Title":"Computation of the Balance - Sample Size Frontier in Matching\nMethods for Causal Inference","Description":"Returns the subset of the data with the minimum imbalance for \n\t every possible subset size (N - 1, N - 2, ...), down to the data set with the \n\t minimum possible imbalance. Also includes tools for the estimation\n\t of causal effects for each subset size, functions for visualization\n\t and data export, and functions for calculating\n\t model dependence as proposed by Athey and Imbens. 
","Published":"2015-03-31","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"matchingMarkets","Version":"0.3-3","Title":"Analysis of Stable Matchings","Description":"Implements structural estimators to correct for\n the sample selection bias from observed outcomes in matching\n markets. This includes one-sided matching of agents into\n groups as well as two-sided matching of students to schools.\n The package also contains algorithms to find stable matchings \n in the three most common matching problems: the stable roommates \n problem, the college admissions problem, and the house \n allocation problem.","Published":"2017-03-26","License":"GPL (>= 2) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"matchingR","Version":"1.2.1","Title":"Matching Algorithms in R and C++","Description":"Computes matching algorithms quickly using Rcpp.\n Implements the Gale-Shapley Algorithm to compute the stable\n matching for two-sided markets, such as the stable marriage\n problem and the college-admissions problem. Implements Irving's\n Algorithm for the stable roommate problem. Implements the top\n trading cycle algorithm for the indivisible goods trading problem.","Published":"2015-11-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MatchIt","Version":"3.0.1","Title":"Nonparametric Preprocessing for Parametric Causal Inference","Description":"Selects matched samples of the original treated and\n control groups with similar covariate distributions -- can be\n used to match exactly on covariates, to match on propensity\n scores, or perform a variety of other matching procedures. 
The\n package also implements a series of recommendations offered in\n Ho, Imai, King, and Stuart (2007) .","Published":"2017-04-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MatchItSE","Version":"1.0","Title":"Calculates SE for Matched Samples from 'MatchIt'","Description":"\n Contains various methods for Standard Error estimation for 'MatchIt' objects.","Published":"2016-09-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MatchLinReg","Version":"0.7.0","Title":"Combining Matching and Linear Regression for Causal Inference","Description":"Core functions as well as diagnostic and calibration tools for combining matching and linear regression for causal inference in observational studies.","Published":"2015-07-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"matchMulti","Version":"1.1.5","Title":"Optimal Multilevel Matching using a Network Algorithm","Description":"Performs multilevel matches for data with cluster-level treatments and individual-level outcomes using a network optimization algorithm. Functions for checking balance at the cluster and individual levels are also provided, as are methods for permutation-inference-based outcome analysis.","Published":"2016-08-26","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"matconv","Version":"0.3.2","Title":"A Code Converter from the Matlab/Octave Language to R","Description":"Transferring over a code base from Matlab to R is often a repetitive\n and inefficient use of time. This package provides a translator for Matlab /\n Octave code into R code. It does some syntax changes, but most of the heavy\n lifting is in the function changes since the languages are so similar.\n Options for different data structures and the functions that can be changed\n are given. 
The Matlab code should be mostly in adherence to the standard\n style guide but some effort has been made to accommodate different numbers of\n spaces and other small syntax issues. This will not make the code more R\n friendly and may not even run afterwards. However, the rudimentary syntax,\n base function and data structure conversion is done quickly so that the\n maintainer can focus on changes to the design structure.","Published":"2017-01-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mateable","Version":"0.3.1","Title":"Tools to Assess Mating Potential in Space and Time","Description":"Tools to simulate, manage, visualize, and analyze\n spatially and temporally explicit datasets of mating potential.\n Implements methods to calculate synchrony, proximity, and compatibility.","Published":"2016-04-09","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"mathgraph","Version":"0.9-11","Title":"Directed and undirected graphs","Description":"Simple tools for constructing and manipulating objects of\n class mathgraph from the book \"S Poetry\", available at\n http://www.burns-stat.com/pages/spoetry.html","Published":"2013-12-11","License":"file LICENSE","snapshot_date":"2017-06-23"} {"Package":"matie","Version":"1.2","Title":"Measuring Association and Testing Independence Efficiently","Description":"Uses a ratio of weighted distributions to estimate association between variables in a data set.","Published":"2013-11-04","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"matlab","Version":"1.0.2","Title":"MATLAB emulation package","Description":"Emulate MATLAB code using R","Published":"2014-06-24","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"matlabr","Version":"1.1.3","Title":"An Interface for MATLAB using System Calls","Description":"Lets users call MATLAB using the \"system\" command.\n Allows users to submit lines of code or MATLAB m files.\n This is in comparison to 'R.matlab', which creates a 
MATLAB server.","Published":"2016-05-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"matlib","Version":"0.8.1","Title":"Matrix Functions for Teaching and Learning Linear Algebra and\nMultivariate Statistics","Description":"A collection of matrix functions for teaching and learning matrix\n linear algebra as used in multivariate statistical methods. These functions are\n mainly for tutorial purposes in learning matrix algebra ideas using R. In some\n cases, functions are provided for concepts available elsewhere in R, but where\n the function call or name is not obvious. In other cases, functions are provided\n to show or demonstrate an algorithm. In addition, a collection of functions are\n provided for drawing vector diagrams in 2D and 3D.","Published":"2016-09-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"matpow","Version":"0.1.1","Title":"matrix powers","Description":"A general framework for computing powers of matrices. A\n key feature is the capability for users to write callback functions,\n called after each iteration, thus enabling customization for specific\n applications. Diverse types of matrix classes/matrix multiplication\n are accommodated. If the multiplication type computes in parallel,\n then the package computation is also parallel.","Published":"2014-08-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"matR","Version":"0.9","Title":"Metagenomics Analysis Tools for R","Description":"An analysis platform for metagenomics combining \n specialized tools and workflows, easy handling of the BIOM \n format, and transparent access to MG-RAST resources. 
matR integrates \n easily with other R packages and non-R software.","Published":"2014-10-23","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"Matrix","Version":"1.2-10","Title":"Sparse and Dense Matrix Classes and Methods","Description":"Classes and methods for dense and sparse matrices and\n operations on them using 'LAPACK' and 'SuiteSparse'.","Published":"2017-04-28","License":"GPL (>= 2) | file LICENCE","snapshot_date":"2017-06-23"} {"Package":"Matrix.utils","Version":"0.9.5","Title":"Data.frame-Like Operations on Sparse and Dense Matrix Objects","Description":"Implements data manipulation methods such as cast, aggregate, and merge/join for Matrix and matrix-like objects.","Published":"2016-12-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"matrixcalc","Version":"1.0-3","Title":"Collection of functions for matrix calculations","Description":"A collection of functions to support matrix calculations\n for probability, econometric and numerical analysis. 
There are\n additional functions that are comparable to APL functions which\n are useful for actuarial models such as pension mathematics.\n This package is used for teaching and research purposes at the\n Department of Finance and Risk Engineering, New York\n University, Polytechnic Institute, Brooklyn, NY 11201.","Published":"2012-09-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MatrixCorrelation","Version":"0.9.1","Title":"Matrix Correlation Coefficients","Description":"Computation and visualization of matrix correlation coefficients.\n The main method is the Similarity of Matrices Index, while various related\n measures like r1, r2, r3, r4, Yanai's GCD, RV, RV2 and adjusted RV are included\n for comparison.","Published":"2017-05-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MatrixEQTL","Version":"2.1.1","Title":"Matrix eQTL: Ultra fast eQTL analysis via large matrix\noperations","Description":"Matrix eQTL is designed for fast eQTL analysis on large datasets.\n\tMatrix eQTL can test for association between genotype and gene expression using linear regression \n\twith either additive or ANOVA genotype effects.\n\tThe models can include covariates to account for factors \n\tsuch as population stratification, gender, and clinical variables. 
\n\tIt also supports models with heteroscedastic and/or correlated errors,\n\tfalse discovery rate estimation and separate treatment of local (cis) and distant (trans) eQTLs.","Published":"2015-02-03","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"matrixLaplacian","Version":"1.0","Title":"Normalized Laplacian Matrix and Laplacian Map","Description":"Constructs the normalized Laplacian matrix of a square matrix, returns the eigenvectors (singular vectors) and visualization of normalized Laplacian map.","Published":"2016-07-14","License":"GNU General Public License version 2","snapshot_date":"2017-06-23"} {"Package":"MatrixLDA","Version":"0.1","Title":"Penalized Matrix-Normal Linear Discriminant Analysis","Description":"Fits the penalized matrix-normal model to be used for linear discriminant analysis with matrix-valued predictors.","Published":"2016-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MatrixModels","Version":"0.4-1","Title":"Modelling with Sparse And Dense Matrices","Description":"Modelling with sparse and dense 'Matrix' matrices, using\n modular prediction and response module classes.","Published":"2015-08-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"matrixpls","Version":"1.0.5","Title":"Matrix-Based Partial Least Squares Estimation","Description":"Partial Least Squares Path Modeling\n algorithm and related algorithms. The algorithm implementations aim for\n computational efficiency using matrix algebra and covariance data. The\n package is designed toward Monte Carlo simulations and includes functions\n to perform simple Monte Carlo simulations.","Published":"2017-05-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"matrixStats","Version":"0.52.2","Title":"Functions that Apply to Rows and Columns of Matrices (and to\nVectors)","Description":"High-performing functions operating on rows and columns of matrices, e.g. col / rowMedians(), col / rowRanks(), and col / rowSds(). 
Functions optimized per data type and for subsetted calculations such that both memory usage and processing time are minimized. There are also optimized vector-based methods, e.g. binMeans(), madDiff() and weightedMedian().","Published":"2017-04-14","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"MATTOOLS","Version":"1.1","Title":"Modern Calibration Functions for the Modern Analog Technique\n(MAT)","Description":"This package includes functions for receiver operating\n characteristic (ROC) analyses as well as Monte Carlo\n simulation. It includes specific graphical functions for\n interpreting the output of these techniques.","Published":"2012-08-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MAVE","Version":"1.2.9","Title":"Methods for Dimension Reduction","Description":"Functions for dimension reduction, using MAVE (Minimum Average Variance Estimation), OPG (Outer Product of Gradient) and KSIR (sliced inverse regression of kernel version). Methods for selecting the best dimension are also included.","Published":"2017-06-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MAVIS","Version":"1.1.2","Title":"Meta Analysis via Shiny","Description":"Interactive shiny application for running a meta-analysis,\n provides support for both random effects and fixed effects models with the 'metafor' package.\n Additional support is included for calculating effect sizes plus\n support for single case designs, graphical output, and detecting publication bias.","Published":"2016-06-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"MAVTgsa","Version":"1.3","Title":"Three methods to identify differentially expressed gene sets,\nordinary least square test, Multivariate Analysis Of Variance\ntest with n contrasts and Random forest","Description":"This package provides a gene set analysis function for the one-sided test (OLS) and the two-sided test (multivariate analysis of variance).\n If the experimental conditions are equal 
to 2, the p-value for Hotelling's t^2 test is calculated.\n If the experimental conditions are greater than 2, the p-value for Wilks' Lambda is determined and a post-hoc test is also reported.\n Three multiple comparison procedures, Dunnett, Tukey, and sequential pairwise comparison, are implemented.\n The program computes the p-values and FDR (false discovery rate) q-values for all gene sets.\n The p-values for individual genes in a significant gene set are also listed.\n MAVTgsa generates two visualization outputs: a p-value plot of gene sets (GSA plot) and a GST-plot of the empirical distribution function of the ranked test statistics of a given gene set.\n A Random Forests-based procedure is used to identify gene sets that can accurately predict samples from different experimental conditions or are associated with the continuous phenotypes.","Published":"2014-07-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MaXact","Version":"0.2.1","Title":"Exact max-type Cochran-Armitage trend test (CATT)","Description":"Perform exact MAX3 or MAX2 test for one-locus genetic\n association analysis and trend test for dominant, recessive and\n additive models. It can also calculate approximate p-values\n with the normal approximation method.","Published":"2013-12-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"maxent","Version":"1.3.3.1","Title":"Low-memory Multinomial Logistic Regression with Support for Text\nClassification","Description":"maxent is an R package with tools for low-memory\n multinomial logistic regression, also known as maximum entropy.\n The focus of this maximum entropy classifier is to minimize\n memory consumption on very large datasets, particularly sparse\n document-term matrices represented by the tm package. The\n classifier is based on an efficient C++ implementation written\n by Dr. 
Yoshimasa Tsuruoka.","Published":"2013-11-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"maxLik","Version":"1.3-4","Title":"Maximum Likelihood Estimation and Related Tools","Description":"Functions for Maximum Likelihood (ML) estimation and non-linear\n optimization, and related tools. It includes a unified way to call\n different optimizers, and classes and methods to handle the results from\n the ML viewpoint. It also includes a number of convenience tools for testing\n and developing your own models.","Published":"2015-11-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"maxlike","Version":"0.1-7","Title":"Model Species Distributions by Estimating the Probability of\nOccurrence Using Presence-Only Data","Description":"Provides a likelihood-based approach to modeling species distributions using presence-only data. In contrast to the popular software program MAXENT, this approach yields estimates of the probability of occurrence, which is a natural descriptor of a species' distribution.","Published":"2017-01-22","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"maxmatching","Version":"0.1.0","Title":"Maximum Matching for General Weighted Graph","Description":"Computes the maximum matching for unweighted graph and maximum\n matching for (un)weighted bipartite graph efficiently.","Published":"2017-01-15","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"maxnet","Version":"0.1.2","Title":"Fitting 'Maxent' Species Distribution Models with 'glmnet'","Description":"Procedures to fit species distributions models from occurrence records and environmental variables, using 'glmnet' for model fitting. Model structure is the same as for the 'Maxent' Java package, version 3.4.0, with the same feature types and regularization options. 
See the 'Maxent' website for more details.","Published":"2017-02-11","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"MaxPro","Version":"3.1-2","Title":"Maximum Projection Designs","Description":"Generate a maximum projection (MaxPro) design, a MaxPro Latin hypercube design or improve an initial design based on the MaxPro criterion. Details of the MaxPro criterion can be found in: Joseph, V. R., Gul, E., and Ba, S. (2015) \"Maximum Projection Designs for Computer Experiments\", Biometrika.","Published":"2015-01-27","License":"LGPL-2.1","snapshot_date":"2017-06-23"} {"Package":"MaxSkew","Version":"1.1","Title":"Orthogonal Data Projections with Maximal Skewness","Description":"It finds Orthogonal Data Projections with Maximal Skewness. The first data projection in the output is the most skewed among all linear data projections. The second data projection in the output is the most skewed among all data projections orthogonal to the first one, and so on. ","Published":"2017-05-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"maxstat","Version":"0.7-25","Title":"Maximally Selected Rank Statistics","Description":"Maximally selected rank statistics with\n several p-value approximations.","Published":"2017-03-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MazamaSpatialUtils","Version":"0.4.9","Title":"Spatial Data Download and Utility Functions","Description":"A suite of conversion scripts to create internally standardized\n spatial polygons dataframes. Utility scripts use these datasets to return\n values such as country, state, timezone, watershed, etc. associated with a\n set of longitude/latitude pairs. 
(They also make cool maps.)","Published":"2017-03-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"mazeGen","Version":"0.1.2","Title":"Elithorn Maze Generator","Description":"A maze generator that creates the Elithorn Maze (HTML file) and the functions to calculate the associated maze parameters (i.e. Difficulty and Ability). ","Published":"2017-03-18","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"MBA","Version":"0.0-9","Title":"Multilevel B-Spline Approximation","Description":"Functions to interpolate irregularly and regularly spaced data using Multilevel B-spline Approximation (MBA). Functions call portions of the SINTEF Multilevel B-spline Library written by Øyvind Hjelle which implements methods developed by Lee, Wolberg and Shin (1997; ).","Published":"2017-03-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mbbefd","Version":"0.8.8","Title":"Maxwell Boltzmann Bose Einstein Fermi Dirac Distribution and\nDestruction Rate Modelling","Description":"Distributions that are typically used for exposure rating in\n general insurance, in particular to price reinsurance contracts.\n The vignettes show code snippets to fit the distribution to\n empirical data.","Published":"2017-02-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MBC","Version":"0.10-2","Title":"Multivariate Bias Correction of Climate Model Outputs","Description":"Calibrate and apply multivariate bias correction algorithms\n for climate model simulations of multiple climate variables. 
Three methods\n described by Cannon (2016) and \n Cannon (2017) are implemented:\n 1) MBC Pearson correlation (MBCp), 2) MBC rank correlation (MBCr),\n and 3) MBC N-dimensional PDF transform (MBCn).","Published":"2017-03-28","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MBCluster.Seq","Version":"1.0","Title":"Model-Based Clustering for RNA-seq Data","Description":"Cluster genes based on Poisson or Negative-Binomial model\n for RNA-Seq or other digital gene expression (DGE) data.","Published":"2012-10-29","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"mbclusterwise","Version":"1.0","Title":"Clusterwise Multiblock Analyses","Description":"Perform clusterwise multiblock analyses (clusterwise multiblock Partial Least Squares, clusterwise multiblock Redundancy Analysis or a regularized method between the two latter ones) associated with an F-fold cross-validation procedure to select the optimal number of clusters and dimensions.","Published":"2016-11-22","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"MBESS","Version":"4.3.0","Title":"The MBESS R Package","Description":"Implements methods that are useful in designing research studies and analyzing data, with \n\tparticular emphasis on methods that are developed for or used within the behavioral, \n\teducational, and social sciences (broadly defined). That being said, many of the methods \n\timplemented within MBESS are applicable to a wide variety of disciplines. MBESS has a \n\tsuite of functions for a variety of related topics, such as effect sizes, confidence intervals \n\tfor effect sizes (including standardized effect sizes and noncentral effect sizes), sample size\n\tplanning (from the accuracy in parameter estimation [AIPE], power analytic, equivalence, and \n\tminimum-risk point estimation perspectives), mediation analysis, various properties of \n\tdistributions, and a variety of utility functions. 
MBESS (pronounced 'em-bes') was originally \n\tan acronym for 'Methods for the Behavioral, Educational, and Social Sciences,' but at this \n\tpoint MBESS contains methods applicable and used in a wide variety of fields and is an \n\torphan acronym, in the sense that what was an acronym is now literally its name. MBESS has \n\tgreatly benefited from others, see for a detailed \n\tlist of those that have contributed and other details.","Published":"2017-06-06","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"mbest","Version":"0.5","Title":"Moment-Based Estimation for Hierarchical Models","Description":"Implements methods from the paper\n \"Fast Moment-Based Estimation for Hierarchical Models,\" by Perry (2016).","Published":"2016-03-08","License":"Apache License (== 2.0) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"mbgraphic","Version":"1.0.0","Title":"Measure Based Graphic Selection","Description":"Measure based exploratory data analysis. Some of the functions call interactive apps programmed with the package shiny to provide flexible selection options.","Published":"2017-05-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MBHdesign","Version":"1.0.63","Title":"Spatial Designs for Ecological and Environmental Surveys","Description":"Provides spatially balanced designs from a set of (contiguous) potential sampling locations in a study region. 
Accommodates, without detrimental effects on spatial balance, sites that the researcher wishes to include in the survey for reasons other than the current randomisation (legacy sites).","Published":"2017-05-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MBI","Version":"1.0","Title":"(M)ultiple-site (B)iodiversity (I)ndices Calculator","Description":"Over 20 multiple-site diversity indices can be calculated.\n Later versions will include phylogenetic diversity.","Published":"2012-10-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"mblm","Version":"0.12","Title":"Median-Based Linear Models","Description":"This package provides linear models based on Theil-Sen\n single median and Siegel repeated medians. They are very robust\n (29 or 50 percent breakdown point, respectively), and if no\n outliers are present, the estimators are very similar to OLS.","Published":"2013-12-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MBmca","Version":"0.0.3-5","Title":"Nucleic Acid Melting Curve Analysis on Microbead Surfaces with R","Description":"The MBmca package provides data sets and lightweight utilities for\n nucleic acid melting curve analysis and presentation on microbead surfaces\n but also for reactions in solution (e.g., qPCR).","Published":"2015-01-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mbmdr","Version":"2.6","Title":"Model Based Multifactor Dimensionality Reduction","Description":"Model Based Multifactor Dimension Reduction proposed by\n Calle et al. 
(2008) as a dimension reduction method for\n exploring gene-gene interactions.","Published":"2012-07-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mboost","Version":"2.8-0","Title":"Model-Based Boosting","Description":"Functional gradient descent algorithm\n (boosting) for optimizing general risk functions utilizing\n component-wise (penalised) least squares estimates or regression\n trees as base-learners for fitting generalized linear, additive\n and interaction models to potentially high-dimensional data.","Published":"2017-05-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"mbrglm","Version":"0.0.1","Title":"Median Bias Reduction in Binomial-Response GLMs","Description":"Fit generalized linear models with binomial responses using a median modified score approach (Kenne Pagui et al., 2016, ) to median bias reduction. This method respects equivariance under reparameterizations for each parameter component and also solves the infinite estimates problem (data separation).","Published":"2017-02-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MBSGS","Version":"1.0.0","Title":"Multivariate Bayesian Sparse Group Selection with Spike and Slab","Description":"An implementation of a Bayesian sparse group model using spike and slab priors in a regression context. It is designed for regression with a multivariate response variable, but also provides an implementation for univariate response.","Published":"2016-08-27","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"MBTAr","Version":"1.0.1","Title":"Access Data from the Massachusetts Bay Transit Authority (MBTA)\nWeb API","Description":"Access to the MBTA API for R. Creates an easy-to-use bundle of\n functions to work with all the built-in calls to the MBTA API. 
Allows users\n to download realtime tracking data in dataframe format that is manipulable\n in standard R analytics functions.","Published":"2015-09-18","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"mBvs","Version":"1.0","Title":"Multivariate Bayesian Variable Selection Method Exploiting\nDependence among Outcomes","Description":"Bayesian variable selection methods for data with continuous multivariate responses and multiple covariates.","Published":"2015-12-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mc2d","Version":"0.1-18","Title":"Tools for Two-Dimensional Monte-Carlo Simulations","Description":"A complete framework to build and study Two-Dimensional Monte-Carlo simulations, aka Second-Order Monte-Carlo simulations. Also includes various distributions (pert, triangular, Bernoulli, empirical discrete and continuous).","Published":"2017-03-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MC2toPath","Version":"0.0.16","Title":"Translates information from netcdf files with MC2 output into\ninter-PVT transitions","Description":"Post processes MC2 output, especially for use by Path or ST-Sim. MC2 (short for \"MC1 version 2\") is a dynamic global vegetation model (en.wikipedia.org/wiki/DGVM). Path (essa.com/tools/path) and ST-Sim (www.apexrms.com) are state-and-transition model (STM) engines. MC2 has a user website at sites.google.com/site/mc1dgvmusers. Since 2001, MC1 has been used to simulate changes in natural vegetation due to climate change at scales from regional to global. In 2012, MC1 was reimplemented in C++ to make it faster and to reduce storage requirements. This newer version is referred to as MC2, an abbreviation of \"MC1 version 2\". Beginning in 2011, output from MC1 and MC2 has been used to inform regional state-and-transition model simulations by the U.S. Forest Service and the Washington State Department of Natural Resources. 
Projects to date have involved study areas in central Oregon, the Olympic Peninsula, the Blue Mountains ecoregion, southwestern Oregon, and southeastern Oregon. In the first of this series of projects, the netCDF output files from MC2 were manually post-processed, mostly in Excel, to produce input .csv files for the STM engines. Beginning with the second project, R scripts were used to automate the post-processing work. These R scripts have been collected into the MC2toPath R-package.","Published":"2014-05-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MCAvariants","Version":"2.0","Title":"Multiple Correspondence Analysis Variants","Description":"Provides two variants of multiple correspondence analysis (ca):\n multiple ca and ordered multiple ca via orthogonal polynomials of Emerson.","Published":"2016-11-22","License":"GPL (> 2)","snapshot_date":"2017-06-23"} {"Package":"mcbiopi","Version":"1.1.2","Title":"Matrix Computation Based Identification Of Prime Implicants","Description":"Computes the prime implicants or a minimal disjunctive normal form for a\n logic expression presented by a truth table or a logic tree. Has been particularly \n developed for logic expressions resulting from a logic regression analysis, i.e.\n logic expressions typically consisting of up to 16 literals, where the prime implicants \n are typically composed of a maximum of 4 or 5 literals.","Published":"2012-01-04","License":"LGPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mcc","Version":"1.0","Title":"Moment Corrected Correlation","Description":"A number of biomedical problems involve performing many hypothesis tests, with an attendant need to apply stringent thresholds. Often the data take the form of a series of predictor vectors, each of which must be compared with a single response vector, perhaps with nuisance covariates. 
Parametric tests of association are often used, but can result in inaccurate type I error at the extreme thresholds, even for large sample sizes. Furthermore, standard two-sided testing can reduce power compared to the doubled p-value, due to asymmetry in the null distribution. Exact (permutation) testing approaches are attractive, but can be computationally intensive and cumbersome. MCC is an approximation to exact association testing of two vectors that is accurate and fast enough for standard use in high-throughput settings, and can easily provide standard two-sided or doubled p-values. ","Published":"2014-07-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mcclust","Version":"1.0","Title":"Process an MCMC Sample of Clusterings","Description":"Implements methods for processing a sample of (hard)\n clusterings, e.g. the MCMC output of a Bayesian clustering\n model. Among them are methods that find a single best\n clustering to represent the sample, which are based on the\n posterior similarity matrix or a relabelling algorithm.","Published":"2012-07-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mccr","Version":"0.4.4","Title":"The Matthews Correlation Coefficient","Description":"The Matthews correlation coefficient (MCC) score is calculated (Matthews BW (1975) ).","Published":"2017-06-15","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"MCDA","Version":"0.0.16","Title":"Functions to Support the Multicriteria Decision Aiding Process","Description":"Functions which can be useful to support the analyst in the Multicriteria Decision Aiding (MCDA) process involving multiple, conflicting criteria. ","Published":"2017-01-14","License":"EUPL (== 1.1)","snapshot_date":"2017-06-23"} {"Package":"MCDM","Version":"1.2","Title":"Multi-Criteria Decision Making Methods for Crisp Data","Description":"Implementation of several MCDM methods for crisp data for decision\n making problems. 
The methods that are implemented in this package are RIM,\n TOPSIS (with two normalization procedures), VIKOR, Multi-MOORA and WASPAS.\n In addition, MetaRanking function calculates a new ranking from the sum \n of the rankings calculated, as well as an aggregated ranking.","Published":"2016-09-22","License":"LGPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"mcemGLM","Version":"1.1","Title":"Maximum Likelihood Estimation for Generalized Linear Mixed\nModels","Description":"Maximum likelihood estimation for generalized linear mixed models via Monte Carlo EM.","Published":"2015-11-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mcga","Version":"3.0.1","Title":"Machine Coded Genetic Algorithms for Real-Valued Optimization\nProblems","Description":"Machine coded genetic algorithm (MCGA) is a fast tool for\n real-valued optimization problems. It uses the byte\n representation of variables rather than real-values. It\n performs the classical crossover operations (uniform) on these\n byte representations. Mutation operator is also similar to\n classical mutation operator, which is to say, it changes a\n randomly selected byte value of a chromosome by +1 or -1 with\n probability 1/2. In MCGAs there is no need for\n encoding-decoding process and the classical operators are\n directly applicable on real-values. It is fast and can handle a\n wide range of a search space with high precision. Using a\n 256-unary alphabet is the main disadvantage of this algorithm\n but a moderate size population is convenient for many problems.\n Package also includes multi_mcga function for multi objective\n optimization problems. 
This function sorts the chromosomes\n using their ranks calculated from the non-dominated sorting\n algorithm.","Published":"2016-05-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mcgfa","Version":"1.0.0","Title":"Mixtures of Contaminated Gaussian Factor Analyzers","Description":"Performs clustering and classification using the Mixtures of Contaminated Gaussian Factor Analyzers model. Allows for automatic detection of outliers and noise.","Published":"2016-10-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mcgibbsit","Version":"1.1.0","Title":"Warnes and Raftery's MCGibbsit MCMC diagnostic","Description":"\n 'mcgibbsit' provides an implementation of Warnes & Raftery's\n MCGibbsit run-length diagnostic for a set of (not-necessarily\n independent) MCMC samplers. It combines the estimate error-bounding\n approach of the Raftery and Lewis MCMC run length diagnostic with\n the between- versus within-chain approach of the Gelman and\n Rubin MCMC convergence diagnostic.","Published":"2013-10-24","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"mcglm","Version":"0.3.0","Title":"Multivariate Covariance Generalized Linear Models","Description":"Fitting multivariate covariance generalized linear\n models (McGLMs) to data. McGLMs are a general framework for non-normal\n multivariate data analysis, designed to handle multivariate response\n variables, along with a wide range of temporal and spatial correlation\n structures defined in terms of a covariance link function combined\n with a matrix linear predictor involving known matrices.\n The models take non-normality into account in the conventional way\n by means of a variance function, and the mean structure is modelled\n by means of a link function and a linear predictor.\n The models are fitted using an efficient Newton scoring algorithm\n based on quasi-likelihood and Pearson estimating functions, using\n only second-moment assumptions. 
This provides a unified approach to\n a wide variety of different types of response variables and covariance\n structures, including multivariate extensions of repeated measures,\n time series, longitudinal, spatial and spatio-temporal structures.\n The package offers a user-friendly interface for fitting McGLMs\n similar to the glm() R function.","Published":"2016-06-09","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"mcGlobaloptim","Version":"0.1","Title":"Global optimization using Monte Carlo and Quasi Monte Carlo\nsimulation","Description":"The package performs global optimization combining Monte Carlo and Quasi Monte Carlo simulation with a local search.\n The local searches can be easily sped up by using a network of local workstations. ","Published":"2013-10-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mcheatmaps","Version":"1.0.0","Title":"Multiple matrices heatmap visualization","Description":"mcheatmaps serves to visualize multiple different symmetric matrices and matrix clusters in a single figure using a dendrogram, two half matrices and various color labels.","Published":"2014-04-28","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"MChtest","Version":"1.0-2","Title":"Monte Carlo hypothesis tests with Sequential Stopping","Description":"The package performs Monte Carlo hypothesis tests. It\n allows a couple of different sequential stopping boundaries (a\n truncated sequential probability ratio test boundary and a\n boundary proposed by Besag and Clifford, 1991). Gives valid\n p-values and confidence intervals on p-values.","Published":"2012-10-29","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"MCI","Version":"1.3.0","Title":"Multiplicative Competitive Interaction (MCI) Model","Description":"Market area models are used to analyze and predict store choices and market areas concerning retail and service locations. 
This package implements two market area models (Huff Model, Multiplicative Competitive Interaction Model) into R, while the emphases lie on 1.) fitting these models based on empirical data via OLS regression and nonlinear techniques and 2.) data preparation and processing (esp. interaction matrices and data preparation for the MCI Model).","Published":"2016-09-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mcIRT","Version":"0.41","Title":"IRT models for multiple choice items (mcIRT)","Description":"This package provides functions to estimate two popular IRT-models: The Nominal Response Model (Bock 1972) and the quite recently developed Nested Logit Model (Suh & Bolt 2010). These are two models to examine multiple-choice items and other multicategorical response formats.","Published":"2014-08-30","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"MCL","Version":"1.0","Title":"Markov Cluster Algorithm","Description":"Contains the Markov cluster algorithm (MCL) for identifying clusters in networks and graphs. The algorithm simulates random walks on an (n x n) matrix as the adjacency matrix of a graph. It alternates an expansion step and an inflation step until an equilibrium state is reached.","Published":"2015-03-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mclcar","Version":"0.1-8","Title":"Estimating Conditional Auto-Regressive (CAR) Models using Monte\nCarlo Likelihood Methods","Description":"The likelihood of direct CAR models and Binomial and Poisson GLM with latent CAR variables are approximated by the Monte Carlo likelihood. 
The Maximum Monte Carlo likelihood estimator is found either by an iterative procedure of directly maximising the Monte Carlo approximation or by a response surface design method.","Published":"2016-11-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mcll","Version":"1.2","Title":"Monte Carlo Local Likelihood Estimation","Description":"Maximum likelihood estimation using a Monte Carlo local likelihood (MCLL) method.","Published":"2014-03-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mclogit","Version":"0.4.4","Title":"Mixed Conditional Logit Models","Description":"Specification and estimation of conditional logit models of binary responses and multinomial counts are provided,\n with or without alternative-specific random effects (random intercepts only, no random slopes yet).\n The current implementation of the estimator for random effects variances uses a Laplace approximation (or PQL) \n approach and thus should be used only if group sizes are large.","Published":"2016-12-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"mclust","Version":"5.3","Title":"Gaussian Mixture Modelling for Model-Based Clustering,\nClassification, and Density Estimation","Description":"Gaussian finite mixture models fitted via EM algorithm for model-based clustering, classification, and density estimation, including Bayesian regularization, dimension reduction for visualisation, and resampling-based inference.","Published":"2017-05-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mcmc","Version":"0.9-5","Title":"Markov Chain Monte Carlo","Description":"Simulates continuous distributions of random vectors using\n Markov chain Monte Carlo (MCMC). Users specify the distribution by an\n R function that evaluates the log unnormalized density. 
Algorithms\n are random walk Metropolis algorithm (function metrop), simulated\n tempering (function temper), and morphometric random walk Metropolis\n (Johnson and Geyer, 2012, ,\n function morph.metrop),\n which achieves geometric ergodicity by change of variable.","Published":"2017-04-16","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"MCMC.OTU","Version":"1.0.10","Title":"Bayesian Analysis of Multivariate Counts Data in DNA\nMetabarcoding and Ecology","Description":"Poisson-lognormal generalized linear mixed model analysis of multivariate counts data using MCMC, aiming to infer the changes in relative proportions of individual variables. The package was originally designed for sequence-based analysis of microbial communities (\"metabarcoding\", variables = operational taxonomic units, OTUs), but can be used for other types of multivariate counts, such as in ecological applications (variables = species). The results are summarized and plotted using 'ggplot2' functions. Includes functions to remove sample and variable outliers and reformat counts into normalized log-transformed values for correlation and principal component/coordinate analysis. Walkthrough and examples: http://www.bio.utexas.edu/research/matz_lab/matzlab/Methods_files/walkthroughExample_mcmcOTU_R.txt. ","Published":"2016-02-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"MCMC.qpcr","Version":"1.2.3","Title":"Bayesian Analysis of qRT-PCR Data","Description":"Quantitative RT-PCR data are analyzed using generalized linear mixed models based on lognormal-Poisson error distribution, fitted using MCMC. Control genes are not required but can be incorporated as Bayesian priors or, when template abundances correlate with conditions, as trackers of global effects (common to all genes). The package also implements a lognormal model for higher-abundance data and a \"classic\" model involving multi-gene normalization on a by-sample basis. 
Several plotting functions are included to extract and visualize results. The detailed tutorial is available here: .","Published":"2016-11-09","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"MCMC4Extremes","Version":"1.1","Title":"Posterior Distribution of Extreme Value Models in R","Description":"Provides functions to perform posterior estimation for some distributions, with emphasis on extreme value distributions. It contains some extreme datasets, and functions that generate posterior samples for the GPD and GEV distributions. The package calculates some important extreme measures like the return level for each t periods of time, and provides plots such as the predictive distribution and return level plots. ","Published":"2016-07-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MCMCglmm","Version":"2.24","Title":"MCMC Generalised Linear Mixed Models","Description":"MCMC Generalised Linear Mixed Models. ","Published":"2016-11-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MCMCpack","Version":"1.4-0","Title":"Markov Chain Monte Carlo (MCMC) Package","Description":"Contains functions to perform Bayesian\n inference using posterior simulation for a number of\n statistical models. Most simulation is done in compiled C++\n written in the Scythe Statistical Library Version 1.0.3. All\n models return coda mcmc objects that can then be summarized\n using the coda package. 
Some useful\n utility functions such as density functions,\n\tpseudo-random number generators for statistical\n distributions, a general purpose Metropolis sampling algorithm,\n and tools for visualization are provided.","Published":"2017-06-05","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"mcmcplots","Version":"0.4.2","Title":"Create Plots from MCMC Output","Description":"Functions for convenient plotting and viewing of MCMC output.","Published":"2015-03-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MCMCprecision","Version":"0.3.6","Title":"Precision of Discrete Parameters in Transdimensional MCMC","Description":"Estimates the precision of transdimensional Markov chain Monte Carlo (MCMC) output, which is often used for Bayesian analysis of models with different dimensionality (e.g., model selection). Transdimensional MCMC (e.g., reversible jump MCMC) relies on sampling a discrete model-indicator variable to estimate the posterior model probabilities. If only a few switches occur between the models, precision may be low and assessment based on the assumption of independent samples misleading. Based on the observed transition matrix of the indicator variable, the method of Heck, Overstall, Gronau, & Wagenmakers (2017) draws posterior samples of the stationary distribution to (a) assess the uncertainty in the estimated posterior model probabilities and (b) estimate the effective sample size of the MCMC output.","Published":"2017-04-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mcmcse","Version":"1.2-1","Title":"Monte Carlo Standard Errors for MCMC","Description":"Provides tools for computing Monte Carlo standard\n errors (MCSE) in Markov chain Monte Carlo (MCMC) settings. MCSE\n computation for expectation and quantile estimators is\n supported as well as multivariate estimations. 
The package also provides \n\tfunctions for computing effective sample size and for plotting\n\tMonte Carlo estimates versus sample size.","Published":"2016-03-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MCMCvis","Version":"0.7.1","Title":"Tools to Visualize, Manipulate, and Summarize MCMC Output","Description":"Performs key functions for MCMC analysis using minimal code - visualizes, manipulates, and summarizes MCMC output. Functions support simple and straightforward subsetting of model parameters within the calls, and produce presentable and 'publication-ready' output. MCMC output may be derived from Bayesian model output fit with JAGS, Stan, or other MCMC samplers.","Published":"2017-02-08","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"mco","Version":"1.0-15.1","Title":"Multiple Criteria Optimization Algorithms and Related Functions","Description":"Functions for multiple criteria optimization using genetic\n algorithms and related test problems","Published":"2014-11-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Mcomp","Version":"2.6","Title":"Data from the M-Competitions","Description":"\n The 1001 time series from the M-competition (Makridakis et al. 1982) and the 3003 time series from the IJF-M3 competition (Makridakis and Hibon, 2000) .","Published":"2017-02-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MConjoint","Version":"0.1","Title":"Conjoint Analysis through Averaging of Multiple Analyses","Description":"The package aids in creating a Conjoint Analysis design\n with extra cards. Unlike traditional \"holdout\" cards these\n cards are used to create a set of \"good\" (balanced and low\n correlation) designs. 
Each of these designs is analyzed and the\n average calculated.","Published":"2013-06-19","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"mcPAFit","Version":"0.1.3","Title":"Estimating Preferential Attachment from a Single Network\nSnapshot by Markov Chain Monte Carlo","Description":"A Markov chain Monte Carlo method is provided to estimate the preferential attachment function from a single network snapshot. Conventional methods require the complete information about the appearance order of all nodes and edges in the network. This package incorporates the appearance order into the state space and estimates it together with the preferential attachment function. Auxiliary variables are introduced to facilitate fast Gibbs sampling.","Published":"2016-05-25","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"MCPAN","Version":"1.1-20","Title":"Multiple Comparisons Using Normal Approximation","Description":"Multiple contrast tests and simultaneous confidence\n intervals based on normal approximation. With implementations for\n binomial proportions in a 2xk setting (risk difference and odds ratio),\n poly-3-adjusted tumour rates, biodiversity indices (multinomial data) \n and expected values under lognormal assumption. Approximative power \n calculation for multiple contrast tests of binomial and Gaussian data.","Published":"2016-01-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"mcparallelDo","Version":"1.1.0","Title":"A Simplified Interface for Running Commands on Parallel\nProcesses","Description":"Provides a function that wraps \n mcparallel() and mccollect() from 'parallel' with temporary variables and a \n task handler. 
Wrapped in this way the results of an mcparallel() call \n can be returned to the R session when the fork is complete \n without explicitly issuing a specific mccollect() to retrieve the value.\n Outside of top-level tasks, multiple mcparallel() jobs can be retrieved with \n a single call to mcparallelDoCheck().","Published":"2016-07-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MCPerm","Version":"1.1.4","Title":"A Monte Carlo permutation method for multiple test correlation","Description":"A Monte Carlo permutation method for multiple test\n correlation.","Published":"2013-06-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MCPMod","Version":"1.0-9","Title":"Design and Analysis of Dose-Finding Studies","Description":"Implements a methodology for the design and analysis of dose-response studies that\n combines aspects of multiple comparison procedures and modeling approaches\n\t (Bretz, Pinheiro and Branson, 2005, Biometrics 61, 738-748, ).\n The package provides tools for the analysis of dose finding trials as well as a variety\n of tools necessary to plan a trial to be conducted with the MCP-Mod methodology.\n Please note: The 'MCPMod' package will not be further developed, all future development of \n the MCP-Mod methodology will be done in the 'DoseFinding' R-package. ","Published":"2016-11-24","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"mcprofile","Version":"0.2-3","Title":"Testing Generalized Linear Hypotheses for Generalized Linear\nModel Parameters by Profile Deviance","Description":"Calculation of signed root deviance profiles for linear combinations of parameters in a generalized linear model. 
Multiple tests and simultaneous confidence intervals are provided.","Published":"2016-10-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mcr","Version":"1.2.1","Title":"Method Comparison Regression","Description":"This package provides regression methods to quantify the relation between two measurement methods. In particular it addresses regression problems with errors in both variables and without repeated measurements. The package provides implementations of Deming regression, weighted Deming regression, and Passing-Bablok regression following the CLSI EP09-A3 recommendations for analytical method comparison and bias estimation using patient samples. ","Published":"2014-02-12","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"MCS","Version":"0.1.1","Title":"Model Confidence Set Procedure","Description":"Perform the model confidence set procedure of Hansen et al (2011).","Published":"2015-03-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"mcsm","Version":"1.0","Title":"Functions for Monte Carlo Methods with R","Description":"mcsm contains a collection of functions that allows the\n reenactment of the R programs used in the book EnteR Monte\n Carlo Methods without further programming. 
The programs are also available and\n can be modified by the user to conduct\n one's own simulations.","Published":"2009-04-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"McSpatial","Version":"2.0","Title":"Nonparametric spatial data analysis","Description":"Locally weighted regression, semiparametric and\n conditionally parametric regression, Fourier and cubic spline\n functions, GMM and linearized spatial logit and probit,\n k-density functions and counterfactuals, nonparametric quantile\n regression and conditional density functions, Machado-Mata\n decomposition for quantile regressions, spatial AR model,\n repeat sales models, conditionally parametric logit and probit","Published":"2013-05-26","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"mctest","Version":"1.1","Title":"Multicollinearity Diagnostic Measures","Description":"Package computes popular and widely used multicollinearity diagnostic measures. Package also indicates which regressors may be the reason of collinearity among regressors.","Published":"2016-12-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MCTM","Version":"1.0","Title":"Markov Chains Transition Matrices","Description":"Transition matrices (probabilities or counts) estimation for discrete Markov Chains of order n (1 <= n <= 5). ","Published":"2015-05-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"md","Version":"1.0.4","Title":"Selecting Bandwidth for Kernel Density Estimator with Minimum\nDistance Method","Description":"Selects bandwidth for the kernel density estimator with minimum distance method as proposed by Devroye and Lugosi (1996). The minimum distance method directly selects the optimal kernel density estimator from countably infinite kernel density estimators and indirectly selects the optimal bandwidth. 
This package selects the optimal bandwidth from finite kernel density estimators.","Published":"2016-02-21","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"md.log","Version":"0.1.1","Title":"Produces Markdown Log File with a Built-in Function Call","Description":"Produces a clean and neat Markdown log file\n and also provides an argument to include the function call inside the Markdown log.","Published":"2017-04-26","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"mda","Version":"0.4-9","Title":"Mixture and Flexible Discriminant Analysis","Description":"Mixture and flexible discriminant analysis, multivariate\n adaptive regression splines (MARS), BRUTO, ...","Published":"2016-08-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"mdatools","Version":"0.8.2","Title":"Multivariate Data Analysis for Chemometrics","Description":"Package implements projection based methods for preprocessing,\n exploring and analysis of multivariate data used in chemometrics.","Published":"2017-01-30","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"mded","Version":"0.1-2","Title":"Measuring the Difference Between Two Empirical Distributions","Description":"Provides a function for measuring the difference between two independent or non-independent empirical distributions and returning a significance level of the difference.","Published":"2015-04-27","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"mdftracks","Version":"0.2.0","Title":"Read and Write 'MTrackJ Data Files'","Description":"'MTrackJ' is an 'ImageJ' plugin for motion tracking and analysis (see \n ). This package reads \n and writes 'MTrackJ Data Files' ('.mdf', see \n ). It supports\n 2D data and reads/writes cluster, point, and channel information. 
If desired, \n generates track identifiers that are unique over the clusters.\n See the project page for more information and examples.","Published":"2017-02-06","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"mdhglm","Version":"1.6","Title":"Multivariate Double Hierarchical Generalized Linear Models","Description":"Allows various models for multivariate response variables where each response is assumed to follow double hierarchical generalized linear models. In double hierarchical generalized linear models, the mean, dispersion parameters for variance of random effects, and residual variance can be further modeled as random-effect models.","Published":"2016-09-19","License":"Unlimited","snapshot_date":"2017-06-23"} {"Package":"MDimNormn","Version":"0.8.0","Title":"Multi-Dimensional MA Normalization for Plate Effect","Description":"Normalize data to minimize the difference between sample plates \n (batch effects). For given data in a matrix and grouping variable (or\n\tplate), the function 'normn_MA' normalizes the data on MA coordinates. \n\tMore details are in the citation. The primary method is 'Multi-MA'. Other \n\tfitting functions on MA coordinates can also be employed e.g. loess. ","Published":"2015-08-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"MDM","Version":"1.3","Title":"Multinomial Diversity Model","Description":"The multinomial diversity model is a toolbox for relating diversity to complex predictors. 
It is based on (1) Shannon diversity; (2) the multinomial logit model, and (3) the link between Shannon diversity and the log-likelihood of the MLM.","Published":"2013-07-10","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"mdmb","Version":"0.2-0","Title":"Model Based Treatment of Missing Data","Description":"\n Contains model-based treatment of missing data for regression models \n with missing values in covariates or the dependent variable \n using maximum likelihood or Bayesian estimation.\n Multiple imputation can be also conducted.","Published":"2017-02-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MDMR","Version":"0.5.0","Title":"Multivariate Distance Matrix Regression","Description":"Allows a user to conduct multivariate distance matrix regression using analytic p-values and compute measures of effect size.","Published":"2017-03-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mdpeer","Version":"1.0.1","Title":"Graph-Constrained Regression with Enhanced Regularization\nParameters Selection","Description":"Provides graph-constrained regression methods in which\n regularization parameters are selected automatically via estimation of\n equivalent Linear Mixed Model formulation. 'riPEER' (ridgified Partially\n Empirical Eigenvectors for Regression) method employs a penalty term being\n a linear combination of graph-originated and ridge-originated penalty terms,\n whose two regularization parameters are ML estimators from corresponding\n Linear Mixed Model solution; a graph-originated penalty term allows imposing\n similarity between coefficients based on graph information given whereas\n additional ridge-originated penalty term facilitates parameters estimation:\n it reduces computational issues arising from singularity in a graph-originated\n penalty matrix and yields plausible results in situations when graph information\n is not informative. 
'riPEERc' (ridgified Partially Empirical Eigenvectors\n for Regression with constant) method utilizes addition of a diagonal matrix\n multiplied by a predefined (small) scalar to handle the non-invertibility of\n a graph Laplacian matrix. 'vrPEER' (variable reducted PEER) method performs\n variable-reduction procedure to handle the non-invertibility of a graph\n Laplacian matrix.","Published":"2017-05-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MDplot","Version":"1.0.0","Title":"Visualising Molecular Dynamics Analyses","Description":"Provides automatization for plot generation succeeding common molecular dynamics analyses.\n This includes straightforward plots, such as RMSD (Root-Mean-Square-Deviation) and\n RMSF (Root-Mean-Square-Fluctuation) but also more sophisticated ones such as\n dihedral angle maps, hydrogen bonds, cluster bar plots and\n DSSP (Definition of Secondary Structure of Proteins) analysis. Currently able to load\n GROMOS, GROMACS and AMBER formats, respectively.","Published":"2017-02-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"MDPtoolbox","Version":"4.0.3","Title":"Markov Decision Processes Toolbox","Description":"The Markov Decision Processes (MDP) toolbox proposes functions related to the resolution of discrete-time Markov Decision Processes: finite horizon, value iteration, policy iteration, linear programming algorithms with some variants and also proposes some functions related to Reinforcement Learning.","Published":"2017-03-03","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"MDR","Version":"1.2","Title":"Detect gene-gene interactions using multifactor dimensionality\nreduction","Description":"Performs multifactor dimensionality reduction (MDR) to\n detect potential gene-gene interactions in case-control\n studies.","Published":"2012-03-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"mdscore","Version":"0.1-3","Title":"Improved Score Tests for 
Generalized Linear Models","Description":"A set of functions to obtain modified score test for generalized linear models.","Published":"2017-02-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mdsdt","Version":"1.2","Title":"Functions for Analysis of Data with General Recognition Theory","Description":"Tools associated with General\n Recognition Theory (Townsend & Ashby, 1986), including Gaussian model fitting of 2x2 and more\n general designs, associated plotting and model comparison tools,\n and tests of marginal response invariance and report independence.","Published":"2016-03-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MDSGUI","Version":"0.1.6","Title":"A GUI for interactive MDS in R","Description":"A graphical user interface (GUI) for performing Multidimensional Scaling applications and interactively analysing the results all within the GUI environment. The MDS-GUI provides means of performing Classical Scaling, Least Squares Scaling, Metric SMACOF, Non-Metric SMACOF, Kruskal's Analysis and Sammon Mapping with animated optimisation.","Published":"2014-10-20","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"mdsOpt","Version":"0.1-3","Title":"Searching for Optimal MDS Procedure for Metric Data","Description":"Searching for Optimal MDS procedure for metric data.","Published":"2017-05-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mdsr","Version":"0.1.3","Title":"Complement to 'Modern Data Science with R'","Description":"A complement to *Modern Data\n Science with R* (ISBN: 978-1498724487, publisher URL: \n ). \n This package contains all of the data and code necessary to\n complete exercises and reproduce examples from the text. 
It also \n facilitates connections to the SQL database server used in the book.","Published":"2016-08-29","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"meanr","Version":"0.1-0","Title":"Basic Sentiment Analysis Scorer","Description":"A popular technique in text analysis today is sentiment analysis, \n or trying to determine the overall emotional attitude of a piece of text\n (positive or negative). We provide a new, basic implementation of a common\n method for computing sentiment, whereby words are scored as positive or\n negative according to a \"dictionary\", and then an average of those scores\n for the document is produced. The package uses the 'Hu' and 'Liu' sentiment\n dictionary for assigning sentiment.","Published":"2017-06-07","License":"BSD 2-clause License + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"MeanShift","Version":"1.1-1","Title":"Clustering via the Mean Shift Algorithm","Description":"Clustering of vector data and functional data using the mean shift algorithm (multi-core processing is supported) or its blurring version.","Published":"2016-04-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"meanShiftR","Version":"0.50","Title":"A Computationally Efficient Mean Shift Implementation","Description":"Performs mean shift classification using linear and \n k-d tree based nearest neighbor implementations for the Gaussian\n kernel. ","Published":"2016-08-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"measurements","Version":"1.1.0","Title":"Tools for Units of Measurement","Description":"Collection of tools to make working with physical measurements\n\t\teasier. 
Convert between metric and imperial units, or calculate a dimension's\n\t\tunknown value from other dimensions' measurements.","Published":"2016-10-29","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"measuRing","Version":"0.4","Title":"Detection and Control of Tree-Ring Widths on Scanned Image\nSections","Description":"Identification of ring borders on scanned image sections from dendrochronological samples. Processing of image reflectances to produce gray matrices and time series of smoothed gray values. Luminance data is plotted on segmented images for users to perform both visual identification of ring borders and control of automatic detection. Routines to visually include/exclude ring borders on the R graphical device, or automatically detect ring borders using a linear detection algorithm. This algorithm detects ring borders according to positive/negative extreme values in the smoothed time-series of gray values. ","Published":"2017-05-11","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"meboot","Version":"1.4-7","Title":"Maximum Entropy Bootstrap for Time Series","Description":"Maximum entropy density based dependent data bootstrap. \n An algorithm is provided to create a population of time series (ensemble) \n without assuming stationarity. 
The reference paper (Vinod, H.D., 2004) explains\n how the algorithm satisfies the ergodic theorem and the central limit theorem.","Published":"2016-11-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MEclustnet","Version":"1.1","Title":"Fits the Mixture of Experts Latent Position Cluster Model to\nNetwork Data","Description":"Fits the mixture of experts latent position cluster model to network data to cluster nodes into subgroups, while incorporating covariate information, in a mixture of experts model setting.","Published":"2017-01-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MedDietCalc","Version":"0.1.0","Title":"Multi Calculator to Compute Scores of Adherence to Mediterranean\nDiet","Description":"Multi Calculator of different scores to measure adherence to Mediterranean Diet, to compute them in nutriepidemiological data. Additionally, a sample dataset of this kind of data is provided, and some other minor tools useful in epidemiological studies.","Published":"2017-04-07","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"mederrRank","Version":"0.0.8","Title":"Bayesian Methods for Identifying the Most Harmful Medication\nErrors","Description":"Two distinct but related statistical approaches to the problem of identifying the combinations of medication error characteristics that are more likely to result in harm are implemented in this package: 1) a Bayesian hierarchical model with optimal Bayesian ranking on the log odds of harm, and 2) an empirical Bayes model that estimates the ratio of the observed count of harm to the count that would be expected if error characteristics and harm were independent. 
In addition, for the Bayesian hierarchical model, the package provides functions to assess the sensitivity of results to different specifications of the random effects distributions.","Published":"2015-07-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"medfate","Version":"0.2.2","Title":"Mediterranean Forest Simulation","Description":"Functions to simulate forest dynamics using cohort-based description of vegetation.","Published":"2016-11-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"medflex","Version":"0.6-1","Title":"Flexible Mediation Analysis Using Natural Effect Models","Description":"Run flexible mediation analyses using natural effect models as described in \n Lange, Vansteelandt and Bekaert (2012) ,\n Vansteelandt, Bekaert and Lange (2012) and\n Loeys, Moerkerke, De Smet, Buysse, Steen and Vansteelandt (2013) .","Published":"2017-02-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MediaK","Version":"1.0","Title":"Calculate MeDiA_K Distance","Description":"Calculates MeDiA_K (means Mean Distance Association by K-nearest neighbor) in order to detect nonlinear associations. ","Published":"2015-12-24","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"Mediana","Version":"1.0.5","Title":"Clinical Trial Simulations","Description":"Provides a general framework for clinical trial simulations based\n on the Clinical Scenario Evaluation (CSE) approach. The package supports a\n broad class of data models (including clinical trials with continuous, binary,\n survival-type and count-type endpoints as well as multivariate outcomes that are\n based on combinations of different endpoints), analysis strategies and commonly\n used evaluation criteria.","Published":"2017-05-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"mediation","Version":"4.4.5","Title":"Causal Mediation Analysis","Description":"We implement parametric and nonparametric mediation analysis. 
This package performs the methods and suggestions in Imai, Keele and Yamamoto (2010), Imai, Keele and Tingley (2010), Imai, Tingley and Yamamoto (2013), Imai and Yamamoto (2013) and Yamamoto (2013). In addition to the estimation of causal mediation effects, the software also allows researchers to conduct sensitivity analysis for certain parametric models.","Published":"2015-07-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"medicalrisk","Version":"1.2","Title":"Medical Risk and Comorbidity Tools for ICD-9-CM Data","Description":"Generates risk estimates and comorbidity flags from ICD-9-CM\n codes available in administrative medical datasets. The package supports\n the Charlson Comorbidity Index, the Elixhauser Comorbidity\n classification, the Revised Cardiac Risk Index, and the Risk Stratification\n Index. Methods are table-based, fast, and use the 'plyr' package, so\n parallelization is possible for large jobs. Also includes a sample of\n real ICD-9 data for 100 patients from a publicly available dataset.","Published":"2016-01-24","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"medicare","Version":"0.2.1","Title":"Tools for Obtaining and Cleaning Medicare Public Use Files","Description":"Publicly available data from Medicare frequently requires extensive\n initial effort to extract desired variables and merge them; this package\n formalizes the techniques I've found work best. More information on the \n Medicare program, as well as guidance for the publicly available data this package \n targets, can be found on CMS's website covering publicly available data. 
See .","Published":"2017-04-02","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"MedOr","Version":"0.1","Title":"Median Ordering Statistical R package","Description":"This package contains the functions used to perform some\n confidence statistics based in population median.","Published":"2012-12-24","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"medSTC","Version":"1.0.0","Title":"A max-margin supervised Sparse Topical Coding Model","Description":"This is a C++ implementation of Sparse Topical Coding\n (STC), a model of discrete data which is fully described in Zhu\n et al. (2011) (http://www.cs.cmu.edu/~junzhu/stc/stc.pdf). It\n can be used for multi-class classification and describing\n documents with underlying sparse topics.","Published":"2013-01-24","License":"file LICENSE","snapshot_date":"2017-06-23"} {"Package":"MEET","Version":"5.1.1","Title":"MEET: Motif Elements Estimation Toolkit","Description":"MEET (Motif Elements Estimation Toolkit) is a R-package\n that integrates a set of computational algorithms for the\n detection of Transcription Factor Binding Sites (TFBS).","Published":"2013-02-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mefa","Version":"3.2-7","Title":"Multivariate Data Handling in Ecology and Biogeography","Description":"A framework package aimed to provide standardized computational environment for specialist work via object classes to represent the data coded by samples, taxa and segments (i.e. subpopulations, repeated measures). It supports easy processing of the data along with cross tabulation and relational data tables for samples and taxa. An object of class `mefa' is a project specific compendium of the data and can be easily used in further analyses. Methods are provided for extraction, aggregation, conversion, plotting, summary and reporting of `mefa' objects. Reports can be generated in plain text or LaTeX format. 
Vignette contains worked examples.","Published":"2016-01-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"mefa4","Version":"0.3-4","Title":"Multivariate Data Handling with S4 Classes and Sparse Matrices","Description":"An S4 update of the 'mefa' package\n using sparse matrices for enhanced efficiency.\n Sparse array-like objects are supported via\n lists of sparse matrices.","Published":"2016-10-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MEGENA","Version":"1.3.6","Title":"Multiscale Clustering of Geometrical Network","Description":"Co-Expression Network Analysis by adopting network embedding technique.","Published":"2017-01-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"meifly","Version":"0.3","Title":"Interactive model exploration using GGobi","Description":"Exploratory model analysis. Fit and graphically\n explore ensembles of linear models.","Published":"2014-04-23","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"Meiosis","Version":"1.0.2","Title":"Simulation of Meiosis in Plant Breeding Research","Description":"Tools for simulation of meiosis in plant breeding research.","Published":"2017-05-10","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"meltt","Version":"0.3.0","Title":"Matching Event Data by Location, Time and Type","Description":"Framework for merging and disambiguating event data based on spatiotemporal co-occurrence and secondary event characteristics. 
It can account for intrinsic \"fuzziness\" in the coding of events, varying event taxonomies and different geo-precision codes.","Published":"2017-05-18","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"melviewr","Version":"0.0.1","Title":"View and Classify MELODIC Output for ICA+FIX","Description":"Provides a graphical interface that allows the user to easily view \n and classify output from 'MELODIC', a part of the 'FSL' neuroimaging analysis\n software suite that performs independent component analysis (ICA; see \n for more information). The \n user categorizes a component as signal or noise based on its spatial and \n temporal characteristics and can then save a text file of these \n classifications in the format required by 'ICA+FIX', an automatic noise \n removal tool ().","Published":"2016-12-05","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"mem","Version":"2.8","Title":"The Moving Epidemic Method R Package","Description":"Tools to model influenza epidemics and to monitor influenza surveillance.","Published":"2017-06-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"memapp","Version":"2.2","Title":"The Moving Epidemic Method Web Application","Description":"Web application created in the Shiny framework for the 'mem' R package.","Published":"2017-06-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"memgene","Version":"1.0","Title":"Spatial pattern detection in genetic distance data using Moran's\nEigenvector Maps","Description":"Memgene can detect relatively weak spatial genetic patterns by using Moran's Eigenvector Maps (MEM) to extract only the spatial component of genetic variation. 
Memgene has applications in landscape genetics where the movement and dispersal of organisms are studied using neutral genetic variation.","Published":"2014-06-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"memisc","Version":"0.99.8","Title":"Tools for Management of Survey Data and the Presentation of\nAnalysis Results","Description":"One of the aims of this package is to make life easier for\n R users who deal with survey data sets. It provides an\n infrastructure for the management of survey data including\n value labels, definable missing values, recoding of variables,\n production of code books, and import of (subsets of) 'SPSS' and\n 'Stata' files. Further, it provides functionality to produce\n tables and data frames of arbitrary descriptive statistics and\n (almost) publication-ready tables of regression model\n estimates, which can be exported to 'LaTeX' and HTML.","Published":"2016-12-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"memo","Version":"1.0","Title":"In-Memory Caching for Repeated Computations","Description":"A simple in-memory, LRU cache that can be wrapped\n around any function to memoize it. 
The cache can be keyed on a hash of the\n input data (using 'digest') or on pointer equivalence.","Published":"2016-08-25","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"memoise","Version":"1.1.0","Title":"Memoisation of Functions","Description":"Cache the results of a function so that when you call it\n again with the same arguments it returns the pre-computed value.","Published":"2017-04-21","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"MEMSS","Version":"0.9-2","Title":"Data sets from Mixed-effects Models in S","Description":"Data sets and sample analyses from Pinheiro and Bates,\n \"Mixed-effects Models in S and S-PLUS\" (Springer, 2000).","Published":"2014-03-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"memuse","Version":"3.0-1","Title":"Memory Estimation Utilities","Description":"How much ram do you need to store a 100,000 by 100,000 matrix?\n How much ram is your current R session using? How much ram do you even have?\n Learn the scintillating answer to these and many more such questions with\n the 'memuse' package.","Published":"2016-09-20","License":"BSD 2-clause License + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"MendelianRandomization","Version":"0.2.0","Title":"Mendelian Randomization Package","Description":"Encodes several methods for performing Mendelian randomization analyses with summarized data. Summarized data on genetic associations with the exposure and with the outcome can be obtained from large consortia. 
These data can be used for obtaining causal estimates using instrumental variable methods.","Published":"2016-09-30","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"MenuCollection","Version":"1.2","Title":"Collection of Configurable GTK+ Menus","Description":"Set of configurable menus built with GTK+ to graphically interface new functions.","Published":"2015-01-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"merDeriv","Version":"0.1-1","Title":"Case-Wise and Cluster-Wise Derivatives for Mixed Effects Models","Description":"Compute analytic case-wise and cluster-wise derivative for \n mixed effects models with respect to fixed effects parameter, random effect (co)variances, \n and residual variance.","Published":"2017-02-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MergeGUI","Version":"0.2-1","Title":"A GUI for Merging Datasets in R","Description":"A GUI for merging datasets in R using gWidgets.","Published":"2014-01-27","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"merror","Version":"2.0.2","Title":"Accuracy and Precision of Measurements","Description":"N>=3 methods are used to measure each of n items. \n The data are used to estimate simultaneously systematic error (bias)\n and random error (imprecision). Observed measurements for each method\n or device are assumed to be linear functions of the unknown true values\n and the errors are assumed normally distributed. Maximum likelihood \n estimation is used for the imprecision standard deviation estimates. \n Pairwise calibration curves and plots can be easily generated.","Published":"2015-11-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"merTools","Version":"0.3.0","Title":"Tools for Analyzing Mixed Effect Regression Models","Description":"Provides methods for extracting results from mixed-effect model\n objects fit with the 'lme4' package. 
Allows construction of prediction intervals\n efficiently from large scale linear and generalized linear mixed-effects models.","Published":"2016-12-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"meshsimp","Version":"0.1.1","Title":"Simplification of Surface Triangular Meshes with Associated\nDistributed Data","Description":"Iterative simplification strategy for surface triangular meshes (2.5D meshes) with associated data. Each iteration corresponds to an edge collapse where the selection of the edge to contract is driven by a cost functional that depends both on the geometry of the mesh than on the distribution of the data locations over the mesh. The library can handle both zero and higher genus surfaces. The package has been designed to be fully compatible with the R package 'fdaPDE', which implements regression models with partial differential regularizations, making use of the Finite Element Method. In the future, the functionalities provided by the current package may be directly integrated into 'fdaPDE'.\t","Published":"2017-06-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"MESS","Version":"0.4-15","Title":"Miscellaneous Esoteric Statistical Scripts","Description":"A mixed collection of useful and semi-useful diverse\n statistical functions, some of which may even be referenced in\n The R Primer book.","Published":"2017-04-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"meta","Version":"4.8-2","Title":"General Package for Meta-Analysis","Description":"User-friendly general package providing standard methods for meta-analysis and supporting Schwarzer, Carpenter, and Rücker , \"Meta-Analysis with R\" (2015):\n - fixed effect and random effects meta-analysis;\n - several plots (forest, funnel, Galbraith / radial, L'Abbe, Baujat, bubble);\n - statistical tests and trim-and-fill method to evaluate bias in meta-analysis;\n - import data from 'RevMan 5';\n - prediction interval, Hartung-Knapp and Paule-Mandel 
method for random effects model;\n - cumulative meta-analysis and leave-one-out meta-analysis;\n - meta-regression (if R package 'metafor' is installed);\n - generalised linear mixed models (if R packages 'metafor', 'lme4', 'numDeriv', and 'BiasedUrn' are installed).","Published":"2017-05-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"meta4diag","Version":"2.0.5","Title":"Meta-Analysis for Diagnostic Test Studies","Description":"Bayesian inference analysis for bivariate meta-analysis of diagnostic test studies using integrated nested Laplace approximation with INLA. A purpose built graphic user interface is available. The installation of R package INLA is compulsory for successful usage. The INLA package can be obtained from . We recommend the testing version, which can be downloaded by running: install.packages(\"INLA\", repos=\"http://www.math.ntnu.no/inla/R/testing\").","Published":"2016-07-06","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"MetaAnalyser","Version":"0.2.1","Title":"An Interactive Visualisation of Meta-Analysis as a Physical\nWeighing Machine","Description":"An interactive application to visualise meta-analysis data as a\n physical weighing machine. 
The interface is based on the Shiny web application\n framework, though can be run locally and with the user's own data.","Published":"2016-10-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MetABEL","Version":"0.2-0","Title":"Meta-analysis of genome-wide SNP association results","Description":"A package for meta-analysis of genome-wide association\n scans between quantitative or binary traits and SNPs","Published":"2014-02-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MetabolAnalyze","Version":"1.3","Title":"Probabilistic latent variable models for metabolomic data","Description":"Fits probabilistic principal components analysis,\n probabilistic principal components and covariates analysis and\n mixtures of probabilistic principal components models to\n metabolomic spectral data.","Published":"2012-08-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MetaboList","Version":"1.2","Title":"Annotation of Metabolites from Liquid Chromatography-Mass\nSpectrometry Data","Description":"Automatic metabolite annotation from Liquid Chromatography-Mass Spectrometry (LC-MS and LC-MS/MS) data from .mzXML files, providing an inclusion list of metabolites/fragments (Only the ion mass). The function returns the identification and quantification of the peaks presented in the sample, as well as the non-identified metabolites/fragments.","Published":"2017-04-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"metabolomics","Version":"0.1.4","Title":"Analysis of Metabolomics Data","Description":"A collection of functions to aid in the statistical analysis of metabolomic data","Published":"2014-12-29","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"MetaboQC","Version":"1.0","Title":"Normalize Metabolomic Data using QC Signal","Description":"Takes QC signal for each day and normalize metabolomic\n data that has been acquired in a certain period of time. 
At least\n three QC per day are required.","Published":"2016-09-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"metacoder","Version":"0.1.3","Title":"Tools for Parsing, Manipulating, and Graphing Hierarchical Data","Description":"A set of tools for parsing, manipulating, and graphing data classified by a hierarchy (e.g. a taxonomy). ","Published":"2017-05-23","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"metacom","Version":"1.4.6","Title":"Analysis of the 'Elements of Metacommunity Structure'","Description":"Functions to analyze coherence, boundary clumping, and turnover\n following the pattern-based metacommunity analysis of Leibold and Mikkelson\n 2002 . The package also includes functions \n to visualize ecological networks, and to calculate modularity as a replacement \n to boundary clumping.","Published":"2017-03-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MetaComp","Version":"1.0.1","Title":"EDGE Taxonomy Assignments Visualization","Description":"Implements routines for metagenome sample taxonomy assignments collection, \n aggregation, and visualization. Accepts the EDGE-formatted output from GOTTCHA/GOTTCHA2, \n BWA, Kraken, and MetaPhlAn. Produces SVG and PDF heatmap-like plots comparing taxa \n abundances across projects. ","Published":"2017-03-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"metacor","Version":"1.0-2","Title":"Meta-analysis of correlation coefficients","Description":"Implement the DerSimonian-Laird (DSL) and Olkin-Pratt (OP)\n meta-analytical approaches with correlation coefficients as\n effect sizes.","Published":"2011-03-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MetaCycle","Version":"1.1.0","Title":"Evaluate Periodicity in Large Scale Data","Description":"Provides two functions-meta2d and meta3d for detecting \n rhythmic signals from time-series datasets. 
For analyzing\n time-series datasets without individual information, 'meta2d' is \n suggested, which incorporates multiple methods from ARSER, \n JTK_CYCLE and Lomb-Scargle in the detection of rhythms of interest. For \n analyzing time-series datasets with individual information, 'meta3d' is \n suggested, which makes use of any one of these three methods to analyze \n\ttime-series data individual by individual and gives integrated values \n based on the analysis results of each individual.","Published":"2015-12-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MetaDE","Version":"1.0.5","Title":"MetaDE: Microarray meta-analysis for differentially expressed\ngene detection","Description":"The MetaDE package implements 12 major meta-analysis methods\n for differential expression analysis.","Published":"2012-08-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"metafolio","Version":"0.1.0","Title":"Metapopulation simulations for conserving salmon through\nportfolio optimization","Description":"The metafolio R package is a tool to simulate salmon\n metapopulations and apply financial portfolio optimization concepts. The\n package accompanies the paper 'Portfolio conservation of metapopulations\n under climate change'. See citation(\"metafolio\").","Published":"2014-07-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"metafor","Version":"2.0-0","Title":"Meta-Analysis Package for R","Description":"A comprehensive collection of functions for conducting meta-analyses in R. The package includes functions to calculate various effect sizes or outcome measures, fit fixed-, random-, and mixed-effects models to such data, carry out moderator and meta-regression analyses, and create various types of meta-analytical plots (e.g., forest, funnel, radial, L'Abbe, Baujat, GOSH plots). 
For meta-analyses of binomial and person-time data, the package also provides functions that implement specialized methods, including the Mantel-Haenszel method, Peto's method, and a variety of suitable generalized linear (mixed-effects) models (i.e., mixed-effects logistic and Poisson regression models). Finally, the package provides functionality for fitting meta-analytic multivariate/multilevel models that account for non-independent sampling errors and/or true effects (e.g., due to the inclusion of multiple treatment studies, multiple endpoints, or other forms of clustering). Network meta-analyses and meta-analyses accounting for known correlation structures (e.g., due to phylogenetic relatedness) can also be conducted.","Published":"2017-06-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"metafuse","Version":"2.0-1","Title":"Fused Lasso Approach in Regression Coefficient Clustering","Description":"Fused lasso method to cluster and estimate regression coefficients\n of the same covariate across different data sets when a large number of\n independent data sets are combined. Package supports Gaussian, binomial,\n Poisson and Cox PH models.","Published":"2016-10-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"metagear","Version":"0.4","Title":"Comprehensive Research Synthesis Tools for Systematic Reviews\nand Meta-Analysis","Description":"Functionalities for facilitating systematic reviews, data\n extractions, and meta-analyses. 
It includes a GUI (graphical user interface)\n to help screen the abstracts and titles of bibliographic data; tools to assign\n screening effort across multiple collaborators/reviewers and to assess inter-\n reviewer reliability; tools to help automate the download and retrieval of\n journal PDF articles from online databases; figure and image extractions \n from PDFs; web scraping of citations; automated and manual data extraction \n from scatter-plot and bar-plot images; PRISMA (Preferred Reporting Items for\n Systematic Reviews and Meta-Analyses) flow diagrams; simple imputation tools\n to fill gaps in incomplete or missing study parameters; generation of random\n effects sizes for Hedges' d, log response ratio, odds ratio, and correlation\n coefficients for Monte Carlo experiments; covariance equations for modelling\n dependencies among multiple effect sizes (e.g., effect sizes with a common \n control); and finally summaries that replicate analyses and outputs from \n widely used but no longer updated meta-analysis software. Funding for this \n package was supported by National Science Foundation (NSF) grants \n DBI-1262545 and DEB-1451031.","Published":"2017-03-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"metagen","Version":"1.0","Title":"Inference in Meta Analysis and Meta Regression","Description":"Provides methods for making inference in the random effects meta\n regression model such as point estimates and confidence intervals for the\n heterogeneity parameter and the regression coefficients vector. Inference\n methods are based on different approaches to statistical inference.\n Methods from three different schools are included: methods based on the\n method of moments approach, methods based on likelihood, and methods based\n on generalised inference. The package also includes tools to run extensive\n simulation studies in parallel on high performance clusters in a modular\n way. 
This allows extensive testing of custom inferential methods with all\n implemented state-of-the-art methods in a standardised way. Tools for\n evaluating the performance of both point and interval estimates are\n provided. Also, a large collection of different pre-defined plotting\n functions is implemented in a ready-to-use fashion.","Published":"2014-05-27","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"metaheur","Version":"0.2.0","Title":"Metaheuristic Optimization Framework for Preprocessing\nCombinations","Description":"Automation of preprocessing often requires computationally costly\n preprocessing combinations. This package helps to find near-best combinations\n faster. Metaheuristics supported are taboo search, simulated annealing, reheating\n and late acceptance. Start conditions include random and grid starts. End conditions\n include all iteration rounds completed, objective threshold reached and convergence.\n Metaheuristics, start and end conditions can be hybridized and hyperparameters optimized.\n Parallel computations are supported. The package is intended to be used with package\n 'preprocomb' and takes its 'GridClass' object as input.","Published":"2016-06-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MetaheuristicFPA","Version":"1.0","Title":"An Implementation of Flower Pollination Algorithm in R","Description":"A nature-inspired metaheuristics algorithm based on the pollination\n process of flowers. This R package makes it easy to implement the standard\n flower pollination algorithm for every user. The algorithm was first developed\n by Xin-She Yang in 2012 ().","Published":"2016-08-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MetaIntegrator","Version":"1.0.3","Title":"Meta-Analysis of Gene Expression Data","Description":"A pipeline for the meta-analysis of gene expression data. 
We have\n\tassembled several analysis and plot functions to\n perform integrated multi-cohort analysis of gene expression data (meta-\n analysis). Methodology described in:\n\t.","Published":"2016-09-29","License":"LGPL","snapshot_date":"2017-06-23"} {"Package":"MetaLandSim","Version":"0.5.5","Title":"Landscape and Range Expansion Simulation","Description":"Tools to generate random landscape graphs, evaluate species\n occurrence in dynamic landscapes, simulate future landscape occupation and\n evaluate range expansion when new empty patches are available (e.g. as a\n result of climate change).","Published":"2017-01-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"metaLik","Version":"0.42.0","Title":"Likelihood Inference in Meta-Analysis and Meta-Regression Models","Description":"First- and higher-order likelihood inference in\n meta-analysis and meta-regression models.","Published":"2015-06-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"metaMA","Version":"3.1.2","Title":"Meta-analysis for MicroArrays","Description":"Combines either p-values or modified effect sizes from different\n studies to find differentially expressed genes","Published":"2015-01-28","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"metamisc","Version":"0.1.5","Title":"Diagnostic and Prognostic Meta-Analysis","Description":"Meta-analysis of diagnostic and prognostic modeling studies. Summarize estimates of diagnostic test accuracy and prediction model performance. Validate, update and combine published prediction models. 
","Published":"2017-06-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"metaMix","Version":"0.2","Title":"Bayesian Mixture Analysis for Metagenomic Community Profiling","Description":"Resolves complex metagenomic mixtures by analysing\n deep sequencing data, using a mixture model based approach.\n The use of parallel Monte Carlo Markov chains for the exploration\n of the species space enables the identification of the set\n of species more likely to contribute to the mixture.","Published":"2015-03-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"metansue","Version":"1.2","Title":"Meta-Analysis of Studies with Non Statistically-Significant\nUnreported Effects","Description":"A novel meta-analytic method that allows an unbiased inclusion of studies with Non Statistically-Significant Unreported Effects (NSUEs). Briefly, the method first calculates the interval where the unreported effects (e.g. t-values) should be according to the threshold of statistical significance used in each study. Afterwards, maximizing likelihood techniques are used to impute the expected effect size of each study with NSUEs, accounting for between-study heterogeneity and potential covariates. Multiple imputations of the NSUEs are then randomly created based on the expected value, variance and statistical significance bounds. Finally, a restricted-maximum likelihood random-effects meta-analysis is separately conducted for each set of imputations, and estimations from these meta-analyses are pooled. 
Please read the reference in 'meta.nsue' for details of the procedure.","Published":"2016-09-01","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"metap","Version":"0.8","Title":"Meta-Analysis of Significance Values","Description":"The canonical way to perform meta-analysis involves using effect sizes.\n When they are not available this package provides a number of methods for\n meta-analysis of significance values including the methods of Edgington, Fisher,\n Stouffer, Tippett, and Wilkinson; a number of data-sets to replicate published results;\n and a routine for graphical display.","Published":"2017-01-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MetaPath","Version":"1.0","Title":"Perform the Meta-Analysis for Pathway Enrichment Analysis (MAPE)","Description":"Perform the Meta-analysis for Pathway Enrichment (MAPE) \n\t\tmethods introduced by Shen and Tseng (2010). It includes functions to\n automatically perform MAPE_G (integrating multiple studies at\n gene level), MAPE_P (integrating multiple studies at pathway\n level) and MAPE_I (a hybrid method integrating MAPE_G and\n MAPE_P methods). In the simulation and real data analyses in\n the paper, MAPE_G and MAPE_P have complementary advantages and\n detection power depending on the data structure. In general,\n the integrative form of MAPE_I is recommended. In the\n case that MAPE_G (or MAPE_P) detects almost no pathways, the\n integrative MAPE_I does not improve performance and MAPE_P (or\n MAPE_G) should be used. Reference: Shen, Kui, and George C\n Tseng. Meta-analysis for pathway enrichment analysis when\n combining multiple microarray studies. Bioinformatics (Oxford,\n England) 26, no. 
10 (April 2010): 1316-1323.\n doi:10.1093/bioinformatics/btq148.\n http://www.ncbi.nlm.nih.gov/pubmed/20410053.","Published":"2015-10-03","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"MetaPCA","Version":"0.1.4","Title":"MetaPCA: Meta-analysis in the Dimension Reduction of Genomic\ndata","Description":"MetaPCA implements simultaneous dimension reduction using\n PCA when multiple studies are combined. We propose two basic\n ideas to find a common PC subspace by eigenvalue maximization\n approach and angle minimization approach, and we extend the\n concept to incorporate Robust PCA and Sparse PCA in the\n meta-analysis realm.","Published":"2011-06-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"metaplot","Version":"0.1.2","Title":"Formalized Plots for Self-Describing Data","Description":"Creates fully-annotated plots with minimum guidance.\n Since the data is self-describing, less effort is needed for\n creating the plot. Generally expects data of class folded\n (see fold package). If attributes GUIDE and LABEL are present, \n they will be used to create formal axis labels. Several aesthetics \n are supported, such as reference lines, unity lines, smooths, and \n log transformations.","Published":"2017-04-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"metaplotr","Version":"0.0.3","Title":"Creates CrossHairs Plots for Meta-Analyses","Description":"Creates crosshairs plots to summarize and analyse\n meta-analysis results. In due time this package will contain code\n that will create other kind of meta-analysis graphs.","Published":"2016-08-04","License":"GPL-2 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"metaplus","Version":"0.7-9","Title":"Robust Meta-Analysis and Meta-Regression","Description":"Performs meta-analysis and meta-regression using standard and robust methods with confidence intervals based on the profile likelihood. 
Robust methods are based on alternative distributions for the random effect, either the t-distribution (Lee and Thompson, 2008 or Baker and Jackson, 2008 ) or mixtures of normals (Beath, 2014 ).","Published":"2016-11-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MetaQC","Version":"0.1.13","Title":"MetaQC: Objective Quality Control and Inclusion/Exclusion\nCriteria for Genomic Meta-Analysis","Description":"MetaQC implements our proposed quantitative quality\n control measures: (1) internal homogeneity of co-expression\n structure among studies (internal quality control; IQC); (2)\n external consistency of co-expression structure correlating\n with pathway database (external quality control; EQC); (3)\n accuracy of differentially expressed gene detection (accuracy\n quality control; AQCg) or pathway identification (AQCp); (4)\n consistency of differential expression ranking in genes\n (consistency quality control; CQCg) or pathways (CQCp). (See\n the reference for detailed explanation.) For each quality\n control index, the p-values from statistical hypothesis testing\n are minus log transformed and PCA biplots are applied to\n assist visualization and decision-making. Results generate systematic\n suggestions to exclude problematic studies in microarray\n meta-analysis and potentially can be extended to GWAS or other\n types of genomic meta-analysis. The identified problematic\n studies can be scrutinized to identify technical and biological\n causes (e.g. sample size, platform, tissue collection,\n preprocessing, etc.) of their bad quality or irreproducibility\n for final inclusion/exclusion decision.","Published":"2012-12-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"metaRNASeq","Version":"1.0.2","Title":"Meta-analysis of RNA-seq data","Description":"Implementation of two p-value combination techniques (inverse normal and Fisher methods). 
A vignette is provided to explain how to perform a meta-analysis from two independent RNA-seq experiments.","Published":"2015-01-26","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"metaSEM","Version":"0.9.14","Title":"Meta-Analysis using Structural Equation Modeling","Description":"A collection of functions for conducting meta-analysis using a\n structural equation modeling (SEM) approach via the 'OpenMx' package.\n It also implements the two-stage SEM approach to conduct meta-analytic\n structural equation modeling on correlation and covariance matrices. ","Published":"2017-05-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"metasens","Version":"0.3-1","Title":"Advanced Statistical Methods to Model and Adjust for Bias in\nMeta-Analysis","Description":"The following methods are implemented to evaluate how sensitive the results of a meta-analysis are to potential bias in meta-analysis and to support Schwarzer et al. (2015) , Chapter 5 \"Small-Study Effects in Meta-Analysis\":\n - Copas selection model described in Copas & Shi (2001) ;\n - limit meta-analysis by Rücker et al. (2011) ;\n - upper bound for outcome reporting bias by Copas & Jackson (2004) .","Published":"2016-10-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MetaSKAT","Version":"0.60","Title":"Meta Analysis for SNP-Set (Sequence) Kernel Association Test","Description":"Functions for Meta-analysis Burden test, SKAT and SKAT-O. 
These methods use summary-level score statistics to carry out gene-based meta-analysis for rare variants.","Published":"2015-08-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"metatest","Version":"1.0-4","Title":"Fit and test metaregression models","Description":"This package fits meta regression models and generates a\n number of statistics: in addition to t- and z-tests, the likelihood\n ratio, Bartlett corrected likelihood ratio and permutation\n tests are performed on the model coefficients.","Published":"2013-01-25","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"Metatron","Version":"0.1-1","Title":"Meta-analysis for Classification Data and Correction to\nImperfect Reference","Description":"This package performs meta-analysis for primary studies with classification outcomes in order to systematically evaluate the accuracies of classifiers, namely, the diagnostic tests. It provides functions to fit the bivariate model of Reitsma et al. (2005). Moreover, if the reference employed in the classification process isn't a gold standard, its deficit can be detected and its influence on the underestimation of the diagnostic test's accuracy can be corrected, as described in Botella et al. (2013).","Published":"2014-10-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"metavcov","Version":"1.1","Title":"Variance-Covariance Matrix for Multivariate Meta-Analysis","Description":"Compute variance-covariance matrix for multivariate meta-analysis. Effect sizes include correlation (r), mean difference (MD), standardized mean difference (SMD), log odds ratio (logOR), log risk ratio (logRR), and risk difference (RD).","Published":"2017-03-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"metaviz","Version":"0.1.0","Title":"Rainforest Plots for Meta-Analysis","Description":"Creates rainforest plots (proposed by Schild & Voracek, 2015 ), a variant and \n enhancement of the classic forest plot for meta-analysis. 
In the near future, the 'metaviz' \n package will be extended by further, established as well as novel, plotting options for \n visualizing meta-analytic data.","Published":"2017-02-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"meteo","Version":"0.1-5","Title":"Spatio-Temporal Analysis and Mapping of Meteorological\nObservations","Description":"Spatio-temporal geostatistical mapping of meteorological data. Global spatio-temporal models calculated using publicly available data are stored in the package.","Published":"2015-09-24","License":"GPL (>= 2.0) | file LICENCE","snapshot_date":"2017-06-23"} {"Package":"meteoForecast","Version":"0.51","Title":"Numerical Weather Predictions","Description":"Access to several Numerical Weather Prediction services both in raster format and as a time series for a location. Currently it works with GFS, MeteoGalicia, NAM, and RAP.","Published":"2017-04-01","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"meteogRam","Version":"1.0","Title":"Tools for plotting meteograms","Description":"meteogRam is a collection of programs for plotting\n meteograms for meteorological data, such as atmospheric cross\n sections and temperature plots.","Published":"2013-03-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"meteoland","Version":"0.5.9","Title":"Landscape Meteorology Tools","Description":"Functions to estimate weather variables at any position of a landscape.","Published":"2017-03-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"meteR","Version":"1.2","Title":"Fitting and Plotting Tools for the Maximum Entropy Theory of\nEcology (METE)","Description":"Fit and plot macroecological patterns predicted by the Maximum\n Entropy Theory of Ecology (METE).","Published":"2016-06-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MetFns","Version":"2.2.0","Title":"Analysis of Visual Meteor Data","Description":"Functions for selection of visual meteor data, calculations of Zenithal 
Hourly Rate (ZHR) and population index, graphics of population index, ZHR and magnitude distribution.","Published":"2017-02-02","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"Meth27QC","Version":"1.1","Title":"Meth27QC: sample quality analysis, and sample control analysis","Description":"Meth27QC is a tool for analyzing Illumina Infinium\n HumanMethylation27 BeadChip Data and generating QC reports.\n This package allows users to quickly assess data quality of the\n assay. Users can evaluate the data quality in the way that\n Illumina GenomeStudio/BeadStudio recommended based on the\n control probes. The package reads files exported from\n GenomeStudio/BeadStudio software, generating intensity and\n standard deviation plots grouped by the types of the control\n probes. Meth27 carries 40 control probes for staining,\n hybridization, target removal, extension, bisulfite conversion,\n specificity, negative and non-polymorphic controls. Details of\n those control probes can be found in the Infinium Assay for\n Methylation Protocol Guide from Illumina. We also used the other\n non-control probes to plot intensity of detected genes, signal\n average for green and red. Outliers can be identified.","Published":"2011-02-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MethComp","Version":"1.22.2","Title":"Functions for Analysis of Agreement in Method Comparison Studies","Description":"Methods (standard and advanced) for analysis of agreement\n between measurement methods.","Published":"2015-03-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MethodCompare","Version":"0.1.0","Title":"Bias and Precision Plots to Compare Two Measurements with\nPossibly Heteroscedastic Measurement Errors","Description":"Implementation of the methodology from the paper titled\n \"Effective plots to assess bias and precision in method comparison studies\"\n published in Statistical Methods in Medical Research, P. 
Taffe (2016) .","Published":"2016-10-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"Methplot","Version":"1.0","Title":"Visualize the methylation patterns","Description":"It plots the output from Methpup (https://github.com/XinYang6699/Methpup)","Published":"2014-04-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MethyBayes","Version":"0.1.0","Title":"Full Bayesian Partition Model for Identifying Differentially\nMethylated Loci","Description":"A full Bayesian partition model is implemented to identify\n differentially methylated loci from single-nucleotide resolution sequencing\n data.","Published":"2016-07-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MethylCapSig","Version":"1.0.1","Title":"Detection of Differentially Methylated Regions using\nMethylCap-Seq Data","Description":"Provides a univariate and several high dimensional multivariate test statistics for detecting differentially methylated regions based on MethylCap-seq data. ","Published":"2015-08-12","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"MetNorm","Version":"0.1","Title":"Statistical Methods for Normalizing Metabolomics Data","Description":"Metabolomics data are inevitably subject to a component of unwanted variation, due to factors such as batch effects, matrix effects, and confounding biological variation. This package contains a collection of R functions which can be used to remove unwanted variation and obtain normalized metabolomics data. ","Published":"2015-02-08","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"MetProc","Version":"1.0.1","Title":"Separate Metabolites into Likely Measurement Artifacts and True\nMetabolites","Description":"Split an untargeted metabolomics data set into a set of likely true \n metabolites and a set of likely measurement artifacts. This process involves \n comparing missing rates of pooled plasma samples and biological samples. 
The \n functions assume a fixed injection order of samples where biological samples are \n randomized and processed between intermittent pooled plasma samples. By comparing \n patterns of missing data across injection order, metabolites that appear in blocks\n and are likely artifacts can be separated from metabolites that seem to have \n random dispersion of missing data. The two main metrics used are: 1. the number of \n consecutive blocks of samples with present data and 2. the correlation of missing rates \n between biological samples and flanking pooled plasma samples.","Published":"2016-05-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Metrics","Version":"0.1.2","Title":"Evaluation Metrics for Machine Learning","Description":"Metrics is a set of evaluation metrics that is commonly\n used in supervised machine learning.","Published":"2017-04-21","License":"BSD","snapshot_date":"2017-06-23"} {"Package":"metricsgraphics","Version":"0.9.0","Title":"Create Interactive Charts with the JavaScript 'MetricsGraphics'\nLibrary","Description":"Provides an 'htmlwidgets' interface to the\n 'MetricsGraphics.js' ('D3'-based) charting library which is geared towards\n displaying time-series data. Chart types include line charts, scatterplots,\n histograms and rudimentary bar charts. Support for laying out multiple charts\n into a grid layout is also provided. All charts are interactive and many\n have an option for line, label and region annotations.","Published":"2015-12-21","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"metricTester","Version":"1.3.3","Title":"Test Metric and Null Model Statistical Performance","Description":"Explore the behavior and statistical performance of 13 pre-defined\n\tphylogenetic metrics and 11 null models, and of user-defined metrics\n\tand null models, as detailed in Miller et al. 
(2017) .","Published":"2017-05-09","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"metRology","Version":"0.9-23-2","Title":"Support for Metrological Applications","Description":"Provides classes and calculation and plotting functions \n for metrology applications, including measurement uncertainty estimation\n and inter-laboratory metrology comparison studies. ","Published":"2016-11-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mets","Version":"1.2.2","Title":"Analysis of Multivariate Event Times","Description":"Implementation of various statistical models for multivariate\n event history data. Including multivariate cumulative incidence models, and\n bivariate random effects probit models (Liability models). Also contains\n two-stage binomial modelling that can do pairwise odds-ratio dependence\n modelling based marginal logistic regression models. This is an alternative\n to the alternating logistic regression approach (ALR).","Published":"2017-03-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"metScanR","Version":"1.0.0","Title":"Find, Map, and Gather Environmental Data and Metadata","Description":"A tool for locating, mapping, and gathering environmental data and metadata, worldwide. Users can search for and filter metadata from ~ 107,000 environmental monitoring stations among 219 countries/territories and 18 networks/platforms via elevation, location, active dates, elements measured (e.g., temperature, precipitation), country, network, and/or known identifier. 
Future updates to the package will allow the user to obtain datasets from stations within the database.","Published":"2017-05-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"MetSizeR","Version":"1.1","Title":"GUI Tool for Estimating Sample Sizes for Metabolomic Experiments","Description":"An easy to use Graphical User Interface for estimating sample sizes required for metabolomic experiments even when experimental pilot data is not available.","Published":"2014-12-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MetStaT","Version":"1.0","Title":"Statistical metabolomics tools","Description":"A diverse collection of metabolomics related statistical tools.","Published":"2013-11-18","License":"Apache License (== 2.0)","snapshot_date":"2017-06-23"} {"Package":"mev","Version":"1.10","Title":"Multivariate Extreme Value Distributions","Description":"Exact simulation from max-stable processes and multivariate extreme value distributions for various parametric models. 
Threshold selection methods.","Published":"2017-02-01","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"mewAvg","Version":"0.3.0","Title":"A Fixed Memory Moving Expanding Window Average","Description":"Computes the average of a sequence of random vectors\n in a moving expanding window using a fixed amount of storage.","Published":"2014-07-22","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"mexhaz","Version":"1.3","Title":"Mixed Effect Excess Hazard Models","Description":"Fit flexible (excess) hazard regression models with the possibility of including non-proportional effects of covariables and of adding a random effect at the cluster level (corresponding to a shared frailty).","Published":"2017-03-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MExPosition","Version":"2.0.3","Title":"Multi-table ExPosition","Description":"MExPosition is for descriptive (i.e., fixed-effects)\n multi-table multivariate analysis using the singular value\n decomposition.","Published":"2013-06-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MF","Version":"4.3.2","Title":"Mitigated Fraction","Description":"Calculate MF (mitigated fraction) with clustering and bootstrap options. See http://goo.gl/pcXYVr for definition of MF.","Published":"2014-01-13","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"MFAg","Version":"1.4","Title":"Multiple Factor Analysis (MFA)","Description":"Performs the Multiple Factor Analysis method for quantitative, categorical, frequency and mixed data; in addition to generating many graphics, it also has other useful functions.","Published":"2016-05-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mfe","Version":"0.1.0","Title":"Meta-Feature Extractor","Description":"Extracts meta-features from datasets to support the design of \n recommendation systems based on Meta-Learning. 
The meta-features, also \n called characterization measures, are able to characterize the complexity of \n datasets and to provide estimates of algorithm performance. The package \n contains not only the standard characterization measures, but also more \n recent characterization measures. By making available a large set of \n meta-feature extraction functions, this package allows a comprehensive data \n characterization, a deep data exploration and a large number of \n Meta-Learning based data analyses. These concepts are described in the book: \n Brazdil, P., Giraud-Carrier, C., Soares, C., Vilalta, R. (2009) \n .","Published":"2017-01-31","License":"GPL | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"MFHD","Version":"0.0.1","Title":"Multivariate Functional Halfspace Depth","Description":"Multivariate functional halfspace depth and median for two-dimensional functional data.","Published":"2013-10-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mFilter","Version":"0.1-3","Title":"Miscellaneous time series filters","Description":"The package implements several time series filters useful\n for smoothing and extracting trend and cyclical components of a\n time series. The routines are commonly used in economics and\n finance; however, they should also be of interest to other areas.\n Currently, Christiano-Fitzgerald, Baxter-King,\n Hodrick-Prescott, Butterworth, and trigonometric regression\n filters are included in the package.","Published":"2007-11-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mfp","Version":"1.5.2","Title":"Multivariable Fractional Polynomials","Description":"Fractional polynomials are used to represent curvature in regression models. 
A key reference is Royston and Altman, 1994.","Published":"2015-09-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MFPCA","Version":"1.1","Title":"Multivariate Functional Principal Component Analysis for Data\nObserved on Different Dimensional Domains","Description":"Calculate a multivariate functional principal component analysis\n for data observed on different dimensional domains. The estimation algorithm\n relies on univariate basis expansions for each element of the multivariate\n functional data. Multivariate and univariate functional data objects are\n represented by S4 classes for this type of data implemented in the package\n 'funData'.","Published":"2017-06-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MfUSampler","Version":"1.0.4","Title":"Multivariate-from-Univariate (MfU) MCMC Sampler","Description":"Convenience functions for multivariate MCMC using univariate samplers including:\n slice sampler with stepout and shrinkage (Neal (2003) ),\n adaptive rejection sampler (Gilks and Wild (1992) ),\n adaptive rejection Metropolis (Gilks et al (1995) ), and\n univariate Metropolis with Gaussian proposal.","Published":"2017-06-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mfx","Version":"1.1","Title":"Marginal Effects, Odds Ratios and Incidence Rate Ratios for GLMs","Description":"Estimates probit, logit, Poisson, negative binomial, and beta regression models, returning their marginal effects, odds ratios, or incidence rate ratios as an output.","Published":"2014-01-19","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"mgarchBEKK","Version":"0.0.2","Title":"Simulating, Estimating and Diagnosing MGARCH (BEKK and mGJR)\nProcesses","Description":"Procedures to simulate, estimate and diagnose MGARCH\n processes of BEKK and multivariate GJR (bivariate asymmetric GARCH\n model) specification.","Published":"2016-04-10","License":"GPL-3","snapshot_date":"2017-06-23"} 
{"Package":"mgcv","Version":"1.8-17","Title":"Mixed GAM Computation Vehicle with GCV/AIC/REML Smoothness\nEstimation","Description":"GAMs, GAMMs and other generalized ridge regression with \n multiple smoothing parameter estimation by GCV, REML or UBRE/AIC. \n Includes a gam() function, a wide variety of smoothers, JAGS \n support and distributions beyond the exponential family.","Published":"2017-02-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MGGM","Version":"1.0","Title":"Structural Pursuit Over Multiple Undirected Graphs","Description":"Implements algorithms to recover multiple networks by pursuing both sparseness and cluster structure.","Published":"2016-03-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MGL","Version":"1.1","Title":"Module Graphical Lasso","Description":"An aggressive dimensionality reduction and network estimation\n technique for a high-dimensional Gaussian graphical model (GGM). Please\n refer to: Efficient Dimensionality Reduction for High-Dimensional Network\n Estimation, Safiye Celik, Benjamin A. Logsdon, Su-In Lee, Proceedings of\n The 31st International Conference on Machine Learning, 2014, p. 1953--1961.","Published":"2014-11-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MGLM","Version":"0.0.8","Title":"Multivariate Response Generalized Linear Models","Description":"Provides functions that (1) fit multivariate discrete distributions, (2) generate random numbers from multivariate discrete distributions, and (3) run regression and penalized regression on the multivariate categorical response data. Implemented models include: multinomial logit model, Dirichlet multinomial model, generalized Dirichlet multinomial model, and negative multinomial model. Making the best of the minorization-maximization (MM) algorithm and Newton-Raphson method, we derive and implement stable and efficient algorithms to find the maximum likelihood estimates. 
On a multi-core machine, multi-threading is supported.","Published":"2017-03-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mglmn","Version":"0.0.2","Title":"Model Averaging for Multivariate GLM with Null Models","Description":"Tools for univariate and multivariate generalized linear models with model averaging and null model technique.","Published":"2015-05-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"mglR","Version":"0.1.0","Title":"Master Gene List","Description":"Tools to download and organize large-scale, publicly available genomic studies on a candidate gene scale. Includes functions to integrate these data sources and compare features across candidate genes.","Published":"2017-01-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"mgm","Version":"1.2-1","Title":"Estimating Time-Varying k-Order Mixed Graphical Models","Description":"Estimation of k-Order time-varying Mixed Graphical Models and mixed VAR(p) models via elastic-net regularized neighborhood regression.","Published":"2017-06-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mgpd","Version":"1.99","Title":"mgpd: Functions for multivariate generalized Pareto distribution\n(MGPD of Type II)","Description":"Extends distribution and density functions to parametric\n multivariate generalized Pareto distributions (MGPD of Type\n II), and provides fitting functions which calculate maximum\n likelihood estimates for bivariate and trivariate models. 
(Help\n is in progress)","Published":"2012-03-19","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"mgraph","Version":"1.03","Title":"Graphing map attributes and non-map variables in R","Description":"Each function in the package performs three main functions:\n i) it reads spatial data and produces basic graphs including\n pie charts, bar charts, box plots, histograms, scatter plots, and\n lines; ii) it reads non-spatial data such as \"csv\", \"txt\", \"dat\"\n data and produces basic graphs; and iii) it plots map(s) of the\n input attribute(s) of spatial data by setting the \"type\" parameter\n to \"map\".","Published":"2013-04-21","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"MGRASTer","Version":"0.9","Title":"API Client for the MG-RAST Server of the US DOE KBase","Description":"Convenience Functions for R Language Access to the v.1 API of the MG-RAST Metagenome Annotation Server, part of the US Department of Energy (DOE) Systems Biology Knowledge Base (KBase).","Published":"2014-08-02","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"MGSDA","Version":"1.4","Title":"Multi-Group Sparse Discriminant Analysis","Description":"Implements the Multi-Group Sparse Discriminant Analysis proposal of I. Gaynanova, J. Booth and M. Wells (2015), Simultaneous sparse estimation of canonical vectors in the p>>N setting, JASA, to appear [DOI:10.1080/01621459.2015.1034318].","Published":"2016-06-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mGSZ","Version":"1.0","Title":"Gene set analysis based on GSZ-scoring function and asymptotic\np-value","Description":"Performs gene set analysis based on the GSZ scoring function and asymptotic p-values. It differs from GSZ in that it implements asymptotic p-values instead of empirical p-values. Asymptotic p-values are calculated by fitting a suitable distribution model to the null distribution. 
Unlike empirical p-values, the resolution of asymptotic p-values is independent of the number of permutations, and hence considerably fewer permutations are required. In addition, this package allows gene set analysis with seven other popular gene set analysis methods.","Published":"2014-02-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MHadaptive","Version":"1.1-8","Title":"General Markov Chain Monte Carlo for Bayesian Inference using\nadaptive Metropolis-Hastings sampling","Description":"Performs general Metropolis-Hastings Markov Chain Monte\n Carlo sampling of a user defined function which returns the\n un-normalized value (likelihood times prior) of a Bayesian\n model. The proposal variance-covariance structure is updated\n adaptively for efficient mixing when the structure of the\n target distribution is unknown. The package also provides some\n functions for Bayesian inference including Bayesian Credible\n Intervals (BCI) and Deviance Information Criterion (DIC)\n calculation.","Published":"2012-03-24","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"mhde","Version":"1.0-1","Title":"Minimum Hellinger Distance Test for Normality","Description":"Implementation of a goodness-of-fit test for normality using the Minimum Hellinger Distance.","Published":"2015-10-23","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"mHG","Version":"1.0","Title":"Minimum-Hypergeometric Test","Description":"Runs a minimum-hypergeometric (mHG) test as described in: Eden, E. (2007). Discovering Motifs in Ranked Lists of DNA Sequences. Haifa. ","Published":"2015-07-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"mhsmm","Version":"0.4.16","Title":"Inference for Hidden Markov and Semi-Markov Models","Description":"Parameter estimation and prediction for hidden Markov and semi-Markov models for data with multiple observation sequences. Suitable for equidistant time series data, with multivariate and/or missing data. 
Allows user-defined emission distributions.","Published":"2017-01-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mht","Version":"3.1.2","Title":"Multiple Hypothesis Testing for Variable Selection in\nHigh-Dimensional Linear Models","Description":"Multiple hypothesis testing for variable selection in high-dimensional linear models. This package performs variable selection with multiple hypothesis testing, either for ordered variable selection or non-ordered variable selection. In both cases, a sequential procedure is performed. It starts by testing the null hypothesis \"no variable is relevant\"; if this hypothesis is rejected, it then tests \"only the first variable is relevant\", and so on until the null hypothesis is accepted. ","Published":"2015-03-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"mhtboot","Version":"1.3.3","Title":"Multiple Hypothesis Test Based on Distribution of p Values","Description":"A framework for multiple hypothesis testing based on the distribution\n of p values. It is well known that p values come from different\n distributions under the null and the alternatives; in this package we provide\n functions to detect that change. We provide a method for using the change\n in the distribution of p values as a way to detect the true signals in the\n data.","Published":"2016-10-30","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"MHTdiscrete","Version":"0.1.3","Title":"Multiple Hypotheses Testing for Discrete Data","Description":"A comprehensive tool for almost all existing multiple testing\n methods for discrete data. The package also provides some novel multiple testing\n procedures controlling FWER/FDR for discrete data. Given discrete p-values\n and their domains, the [method].p.adjust function returns adjusted p-values,\n which can be used to compare with the nominal significance level alpha and make\n decisions. 
For users' convenience, the functions also provide the output option \n for printing decision rules.","Published":"2017-02-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MHTmult","Version":"0.1.0","Title":"Multiple Hypotheses Testing for Multiple Families/Groups\nStructure","Description":"A comprehensive tool for almost all existing multiple testing\n methods for multiple families. The package summarizes the existing multiple-family multiple testing procedures (MTPs), such as the double FDR, the group Benjamini-Hochberg (GBH) procedure and the average FDR controlling procedure. The package also provides some novel multiple testing procedures based on the selective inference idea.","Published":"2017-05-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MHTrajectoryR","Version":"1.0.1","Title":"Bayesian Model Selection in Logistic Regression for the\nDetection of Adverse Drug Reactions","Description":"Spontaneous adverse event reports have a high potential for detecting adverse drug reactions. However, due to their dimension, the analysis of such databases requires statistical methods. We propose to use a logistic regression whose sparsity is viewed as a model selection challenge. Since the model space is huge, a Metropolis-Hastings algorithm carries out the model selection by maximizing the BIC criterion.","Published":"2016-04-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mhurdle","Version":"1.1-7","Title":"Multiple Hurdle Tobit Models","Description":"Estimation of models with zero left-censored variables. 
\n Null values may be caused by a selection process, insufficient \n resources or infrequency of purchase.","Published":"2016-09-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mi","Version":"1.0","Title":"Missing Data Imputation and Model Checking","Description":"The mi package provides functions for data manipulation, imputing missing values in an approximate Bayesian framework, diagnostics of the models used to generate the imputations, confidence-building mechanisms to validate some of the assumptions of the imputation algorithm, and functions to analyze multiply imputed data sets with the appropriate degree of sampling uncertainty.","Published":"2015-04-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MIAmaxent","Version":"0.4.0","Title":"Maxent Distribution Model Selection","Description":"Tools for training, selecting, and evaluating maximum entropy\n (Maxent) distribution models. This package provides tools for user-\n controlled transformation of explanatory variables, selection of variables\n by nested model comparison, and flexible model evaluation and projection.\n It is based on the strict maximum likelihood interpretation of maximum\n entropy modelling.","Published":"2017-02-14","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"mice","Version":"2.30","Title":"Multivariate Imputation by Chained Equations","Description":"Multiple imputation using Fully Conditional Specification (FCS)\n implemented by the MICE algorithm as described in Van Buuren and \n Groothuis-Oudshoorn (2011) . Each variable has \n its own imputation model. Built-in imputation models are provided for \n continuous data (predictive mean matching, normal), binary data (logistic \n regression), unordered categorical data (polytomous logistic regression) \n and ordered categorical data (proportional odds). MICE can also impute \n continuous two-level data (normal model, pan, second-level variables). 
\n Passive imputation can be used to maintain consistency between variables. \n Various diagnostic plots are available to inspect the quality of the \n imputations.","Published":"2017-02-18","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"miceadds","Version":"2.5-9","Title":"Some Additional Multiple Imputation Functions, Especially for\n'mice'","Description":"\n Contains some auxiliary functions for multiple \n imputation which complements existing functionality \n in R.\n In addition to some utility functions, main features\n include plausible value imputation, multilevel \n imputation functions, imputation using partial least \n squares (PLS) for high dimensional predictors, nested \n multiple imputation.","Published":"2017-06-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"micEcon","Version":"0.6-14","Title":"Microeconomic Analysis and Modelling","Description":"Various tools for microeconomic analysis and microeconomic modelling,\n e.g. estimating quadratic, Cobb-Douglas and Translog functions,\n calculating partial derivatives and elasticities of these functions,\n and calculating Hessian matrices, checking curvature\n and preparing restrictions for imposing monotonicity of Translog functions.","Published":"2017-03-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"micEconAids","Version":"0.6-18","Title":"Demand Analysis with the Almost Ideal Demand System (AIDS)","Description":"Functions and tools\n for analysing consumer demand\n with the Almost Ideal Demand System (AIDS)\n suggested by Deaton and Muellbauer (1980).","Published":"2017-03-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"micEconCES","Version":"0.9-8","Title":"Analysis with the Constant Elasticity of Substitution (CES)\nfunction","Description":"Tools for economic analysis and economic modelling\n with a Constant Elasticity of Substitution (CES) function","Published":"2014-04-23","License":"GPL (>= 
2)","snapshot_date":"2017-06-23"} {"Package":"micEconIndex","Version":"0.1-6","Title":"Price and Quantity Indices","Description":"Tools for calculating Laspeyres, Paasche, and Fisher\n price and quantity indices.","Published":"2017-04-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"micEconSNQP","Version":"0.6-6","Title":"Symmetric Normalized Quadratic Profit Function","Description":"Production analysis with the Symmetric Normalized Quadratic (SNQ) profit function","Published":"2014-04-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"micemd","Version":"1.0.0","Title":"Multiple Imputation by Chained Equations with Multilevel Data","Description":"Addons for the 'mice' package to perform multiple imputation using chained equations with two-level data. Includes imputation methods specifically handling sporadically and systematically missing values. Imputation of continuous, binary or count variables are available. Following the recommendations of Audigier, V. et al (2017), the choice of the imputation method for each variable can be facilitated by a default choice tuned according to the structure of the incomplete dataset. Allows parallel calculation for 'mice'.","Published":"2017-05-12","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"micompr","Version":"1.0.1","Title":"Multivariate Independent Comparison of Observations","Description":"A procedure for comparing multivariate samples associated with\n different groups. It uses principal component analysis to convert\n multivariate observations into a set of linearly uncorrelated statistical\n measures, which are then compared using a number of statistical methods. The\n procedure is independent of the distributional properties of samples and\n automatically selects features that best explain their differences, avoiding\n manual selection of specific points or summary statistics. 
It is appropriate\n for comparing samples of time series, images, spectrometric measures or\n similar multivariate observations.","Published":"2016-08-04","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"miCoPTCM","Version":"1.0","Title":"Promotion Time Cure Model with Mis-Measured Covariates","Description":"Fits semiparametric promotion time cure models, either taking into \n\t\t\t account the measurement error in the covariates (using a corrected score \n\t\t\t approach or the SIMEX algorithm) or ignoring it, using a backfitting approach to maximize the likelihood.","Published":"2016-01-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"microbats","Version":"0.1-1","Title":"An Implementation of Bat Algorithm in R","Description":"A nature-inspired metaheuristic algorithm based on the echolocation behavior of microbats that uses frequency tuning to optimize problems in both continuous and discrete dimensions. This R package makes it easy to implement the standard bat algorithm on any user-supplied function. 
The algorithm was first developed by Xin-She Yang in 2010.","Published":"2016-02-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"microbenchmark","Version":"1.4-2.1","Title":"Accurate Timing Functions","Description":"Provides infrastructure to accurately measure and compare\n the execution time of R expressions.","Published":"2015-11-25","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"microclass","Version":"1.1","Title":"Methods for Taxonomic Classification of Prokaryotes","Description":"Functions for assigning 16S sequence data to a\n taxonomic level in the tree-of-life for prokaryotes.","Published":"2017-01-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"microcontax","Version":"1.0","Title":"The ConTax Data Package","Description":"The consensus taxonomy for prokaryotes is a set of data-sets for\n best possible taxonomic classification based on 16S rRNA sequence data.","Published":"2016-07-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MicroDatosEs","Version":"0.8.2","Title":"Utilities for Official Spanish Microdata","Description":"Provides utilities for reading and processing microdata from Spanish official statistics with R.","Published":"2016-08-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"micromap","Version":"1.9.2","Title":"Linked Micromap Plots","Description":"This group of functions simplifies the creation of linked micromap\n plots.","Published":"2015-02-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"micromapST","Version":"1.1.1","Title":"Linked Micromap Plots for General U. S. and Other Geographic\nAreas","Description":"Provides the users with the ability to quickly create Linked \n Micromap plots for a collection of geographic areas. \n Linked Micromaps are visualizations of georeferenced data that link statistical\n graphics to an organized series of small maps or graphic images. 
\n The Help description contains examples of how to use the micromapST function.\n Contained in this package are border group datasets to support creating micromaps for the \n 50 U.S. states and District of Columbia (51 areas), the U. S. 20 Seer Registries, \n the 105 counties in the state of Kansas, the 62 counties of New York,\n the 24 counties of Maryland, the 29 counties of Utah, the 32 administrative areas \n in China, the 218 administrative areas in the UK and Ireland (for testing only), \n the 25 districts in the city of Seoul, South Korea, and the 52 countries on the African \n continent.\n A border group dataset contains the boundaries related to the data level areas, \n a second layer of boundaries, a top or third layer boundary, a parameter list of \n run options, and a cross indexing table between area names, abbreviations, \n numeric identification and alias matching strings for the specific geographic\n area. By specifying a border group, the package creates micromaps for any\n geographic region. The user can create and provide their own border group dataset \n for any area beyond the areas contained within the package.\n Copyrighted 2013, 2014, 2015 and 2016 by Carr, Pearson and Pickle.","Published":"2016-12-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"micropan","Version":"1.1.2","Title":"Microbial Pan-Genome Analysis","Description":"A collection of functions for computations and visualizations of\n microbial pan-genomes.","Published":"2017-01-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"microplot","Version":"1.0-16","Title":"R Graphics as Microplots (Sparklines) in 'LaTeX', 'HTML',\n'Excel'","Description":"Prepare lists of R graphics files to be used as\n\t microplots (sparklines) in tables in either 'LaTeX',\n\t 'HTML', or 'Excel' files. 
For 'LaTeX', use the\n\t 'Hmisc::latex' function or 'xtable::xtable' function with\n\t 'Sweave', 'knitr', 'rmarkdown', or 'Emacs' 'org-mode' to\n\t construct 'latex' tabular environments which include the\n\t graphs. For 'HTML' files, use either 'Emacs' 'org-mode' or the\n\t 'htmlTable::htmlTable' function to construct an 'HTML' file\n\t containing tables which include the graphs. For 'Excel'\n\t use on 'Windows', the file 'examples/irisExcel.xls' includes 'VBA'\n\t code which brings the individual panels into individual\n\t cells in the spreadsheet. Examples in the 'examples'\n\t subdirectory and demos are shown with 'lattice' graphics,\n\t 'base' graphics, and 'ggplot2' graphics. Examples for 'LaTeX'\n\t include 'Sweave' (both 'LaTeX'-style and 'Noweb'-style), 'knitr',\n\t 'emacs' 'org-mode', and 'rmarkdown' input files and their 'pdf'\n\t output files. Examples for 'HTML' include 'org-mode' and 'Rmd'\n\t input files and their webarchive 'HTML' output files. In\n\t addition, the 'as.orgtable' function can display a\n\t 'data.frame' in an 'org-mode' document.","Published":"2017-01-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"microPop","Version":"1.2","Title":"Modelling Microbial Populations","Description":"Modelling interacting microbial populations - example applications include human gut microbiota, rumen microbiota and phytoplankton. 
Solves a system of ordinary differential equations to simulate microbial growth and resource uptake over time.","Published":"2017-04-21","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"microseq","Version":"1.2","Title":"Basic Biological Sequence Analysis","Description":"Basic functions for microbial sequence data analysis.","Published":"2017-01-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MicroStrategyR","Version":"1.0-1","Title":"MicroStrategyR Package","Description":"Deploys your R Analytic to MicroStrategy","Published":"2013-04-27","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"MicSim","Version":"1.0.12","Title":"Performing Continuous-Time Microsimulation","Description":"This entry-level toolkit allows performing continuous-time microsimulation for a wide range of demographic applications. Individual life-courses are specified by a continuous-time multi-state model. ","Published":"2016-06-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"midasr","Version":"0.6","Title":"Mixed Data Sampling Regression","Description":"Methods and tools for mixed frequency time series data analysis.\n Allows estimation, model selection and forecasting for MIDAS regressions.","Published":"2016-08-08","License":"GPL-2 | MIT + file LICENCE","snapshot_date":"2017-06-23"} {"Package":"midastouch","Version":"1.3","Title":"Multiple Imputation by Distance Aided Donor Selection","Description":"Contains the function mice.impute.midastouch(). Technically this function is to be run from within the 'mice' package (van Buuren et al. 2011), type ??mice. It substitutes the method 'pmm' within mice by 'midastouch'. The authors have shown that 'midastouch' is superior to default 'pmm'. 
Many ideas are based on Siddique / Belin 2008's MIDAS.","Published":"2016-02-07","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"MIDN","Version":"1.0","Title":"Nearly Exact Sample Size Calculation for Exact Powerful\nNonrandomized Tests for Differences Between Binomial\nProportions","Description":"Implementation of the mid-n algorithms presented in \n Wellek S (2015) Statistica Neerlandica 69, 358-373 for exact \n sample size calculation for superiority trials with binary outcome.","Published":"2016-10-28","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"midrangeMCP","Version":"1.3","Title":"Multiple Comparisons Procedures Based on Studentized Midrange\nand Range Distributions","Description":"Apply tests of multiple comparisons based on studentized midrange\n and range distributions. The tests are: Tukey Midrange test, Student-Newman-\n Keuls Midrange test, Scott-Knott Midrange test and Scott-Knott Range test.","Published":"2016-07-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MigClim","Version":"1.6","Title":"Implementing dispersal into species distribution models","Description":"Functions for implementing species dispersal into projections\n of species distribution models (e.g. under climate change scenarios).","Published":"2013-12-23","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"migest","Version":"1.7.3","Title":"Methods for the Indirect Estimation of Bilateral Migration","Description":"Indirect methods for estimating bilateral migration flows in the presence of partial or missing data. 
Methods might also be relevant to other (non-migration) categorical data situations where, for example, marginal totals are known and only auxiliary bilateral data are available.","Published":"2016-10-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"migration.indices","Version":"0.3.0","Title":"Migration indices","Description":"This package provides various indices, such as the Crude Migration Rate,\n different Gini indices and the Coefficient of Variation among others, to\n show the (in)equality of migration.","Published":"2013-10-07","License":"AGPL-3","snapshot_date":"2017-06-23"} {"Package":"migui","Version":"1.1","Title":"Graphical User Interface to the 'mi' Package","Description":"This GUI for the mi package walks the user through the steps of multiple imputation and the analysis of completed data.","Published":"2015-04-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MIICD","Version":"2.4","Title":"Multiple Imputation for Interval Censored Data","Description":"Implements multiple imputation for proportional hazards regression\n with interval censored data or proportional sub-distribution hazards\n regression for interval censored competing risks data. The main functions\n allow estimation of the survival function, cumulative incidence function, Cox\n and Fine & Gray regression coefficients and associated variance-covariance\n matrix. 
'MIICD' functions call 'Surv', 'survfit' and 'coxph' from the\n 'survival' package, 'crprep' from the 'mstate' package, and 'mvrnorm' from\n the 'MASS' package.","Published":"2017-05-27","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"MIIVsem","Version":"0.5.2","Title":"Model Implied Instrumental Variable (MIIV) Estimation of\nStructural Equation Models","Description":"Functions for estimating structural equation models using \n instrumental variables.","Published":"2017-05-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MILC","Version":"1.0","Title":"MIcrosimulation Lung Cancer (MILC) model","Description":"The MILC package is designed to predict individual trajectories using the continuous time microsimulation model MILC, which describes the natural history of lung cancer.","Published":"2014-04-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"milr","Version":"0.3.0","Title":"Multiple-Instance Logistic Regression with LASSO Penalty","Description":"The multiple instance data set consists of many independent\n subjects (called bags) and each subject is composed of several components\n (called instances). The outcomes of such data sets are binary or categorical responses,\n and we can only observe the subject-level outcomes. For example, in manufacturing\n processes, a subject is labeled as \"defective\" if at least one of its own\n components is defective, and otherwise, is labeled as \"non-defective\". The\n 'milr' package focuses on the predictive model for the multiple instance\n data set with binary outcomes and performs the maximum likelihood estimation\n with the Expectation-Maximization algorithm under the framework of logistic\n regression. 
Moreover, the LASSO penalty is attached to the likelihood function\n for simultaneous parameter estimation and variable selection.","Published":"2017-06-08","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"mime","Version":"0.5","Title":"Map Filenames to MIME Types","Description":"Guesses the MIME type from a filename extension using the data\n derived from /etc/mime.types in UNIX-type systems.","Published":"2016-07-07","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"MImix","Version":"1.0","Title":"Mixture summary method for multiple imputation","Description":"Tools to combine results for multiply-imputed data using\n mixture approximations","Published":"2012-01-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MindOnStats","Version":"0.11","Title":"Data sets included in Utts and Heckard's Mind on Statistics","Description":"66 data sets that were imported using read.table() where appropriate but more commonly after converting to a csv file for importing via read.csv().","Published":"2014-12-01","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"mindr","Version":"1.0.4","Title":"Convert Files Between Markdown or Rmarkdown Files and Mindmaps","Description":"Convert Markdown ('.md') or Rmarkdown ('.Rmd') files into FreeMind mindmap ('.mm') files, and vice versa. FreeMind mindmap ('.mm') files can be opened by or imported to common mindmap software such as 'FreeMind' and 'XMind'.","Published":"2017-06-19","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"minerva","Version":"1.4.5","Title":"Maximal Information-Based Nonparametric Exploration R Package\nfor Variable Analysis","Description":"R wrapper for 'cmine' implementation of Maximal\n Information-based Nonparametric Exploration statistics (MIC and\n MINE family). 
Detailed information of the ANSI C implementation 'cmine'\n\tcan be found at 'http://mpba.fbk.eu/cmine'.","Published":"2016-03-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"Miney","Version":"0.1","Title":"Implementation of the Well-Known Game to Clear Bombs from a\nGiven Field (Matrix)","Description":"This package implements the core idea of games known as\n 'Minesweeper' on Microsoft Windows or 'KMines' for KDE on\n Unix-like operating systems.","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"miniCRAN","Version":"0.2.7","Title":"Create a Mini Version of CRAN Containing Only Selected Packages","Description":"Makes it possible to create an internally consistent\n repository consisting of selected packages from CRAN-like repositories.\n The user specifies a set of desired packages, and miniCRAN recursively\n reads the dependency tree for these packages, then downloads only this\n subset. The user can then install packages from this repository directly,\n rather than from CRAN. This is useful in production settings, e.g. 
a server\n behind a firewall, or remote locations with slow broadband access.","Published":"2016-08-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"miniGUI","Version":"0.8.0","Title":"tktcl quick and simple function GUI","Description":"A quick and simple tktcl miniGUI to call functions.","Published":"2012-09-04","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"minimalRSD","Version":"1.0.0","Title":"Minimally Changed CCD and BBD","Description":"Generate central composite designs (CCD) with full as well \n as fractional factorial points (half replicate) and Box Behnken \n designs (BBD) with minimally changed run sequence.","Published":"2017-01-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"minimap","Version":"0.1.0","Title":"Create Tile Grid Maps","Description":"Create tile grid maps, which are like choropleth maps except each\n region is represented with equal visual space.","Published":"2016-02-18","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"minimax","Version":"1.0","Title":"Minimax distribution family","Description":"The minimax family of distributions is a two-parameter\n family like the beta family, but computationally a lot more\n tractable.","Published":"2011-07-23","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"minimaxdesign","Version":"0.1.2","Title":"Minimax and Minimax Projection Designs","Description":"Provides two main functions: mMcPSO() and\n miniMaxPro(), which generate minimax designs and minimax projection designs using\n a hybrid clustering - particle swarm optimization (PSO) algorithm. These designs can be used\n in a variety of settings, e.g., as space-filling designs for computer experiments or\n sensor allocation designs. 
A detailed description of the two designs and the employed\n algorithms can be found in Mak and Joseph (2017) .","Published":"2017-05-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"minimist","Version":"0.1","Title":"Parse Argument Options","Description":"A binding to the minimist JavaScript library. This module implements\n the guts of optimist's argument parser without all the fanciful decoration.","Published":"2015-02-14","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"miniUI","Version":"0.1.1","Title":"Shiny UI Widgets for Small Screens","Description":"Provides UI widget and layout functions for writing Shiny apps\n that work well on small screens.","Published":"2016-01-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"minpack.lm","Version":"1.2-1","Title":"R Interface to the Levenberg-Marquardt Nonlinear Least-Squares\nAlgorithm Found in MINPACK, Plus Support for Bounds","Description":"The nls.lm function provides an R interface to lmder and lmdif from the MINPACK library, for solving nonlinear least-squares problems by a modification of the Levenberg-Marquardt algorithm, with support for lower and upper parameter bounds. The implementation can be used via nls-like calls using the nlsLM function. 
","Published":"2016-11-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"minPtest","Version":"1.7","Title":"Gene region-level testing procedure for SNP data, using the min\nP test resampling approach","Description":"Package minPtest is designed for estimating a gene region-level summary for SNP data from case-control studies using a permutation-based resampling method, called min P test, allowing execution on a compute cluster or multicore computer.","Published":"2013-12-19","License":"GPL (>= 2.14)","snapshot_date":"2017-06-23"} {"Package":"minqa","Version":"1.2.4","Title":"Derivative-free optimization algorithms by quadratic\napproximation","Description":"Derivative-free optimization by quadratic approximation\n based on an interface to Fortran implementations by M. J. D.\n Powell.","Published":"2014-10-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"minque","Version":"1.1","Title":"An R Package for Linear Mixed Model Analyses","Description":"This package offers three important components: (1) constructing a user-defined linear mixed model; (2) employing one of the linear mixed model approaches, minimum norm quadratic unbiased estimation (MINQUE) (Rao, 1971), for variance component estimation and random effect prediction; and (3) employing a jackknife resampling technique to conduct various statistical tests. In addition, this package provides functions for model or data evaluation. It offers fast computation for analyses of large data sets with various irregular data structures.","Published":"2014-09-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MInt","Version":"1.0.1","Title":"Learn Direct Interaction Networks","Description":"Learns direct microbe-microbe interaction networks using a Poisson\n multivariate-normal hierarchical model with an L1 penalized precision\n matrix. 
Optimization is carried out using an iterative conditional modes\n algorithm.","Published":"2015-05-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"minval","Version":"0.8","Title":"MINimal VALidation for Stoichiometric Reactions","Description":"For a given set of stoichiometric reactions, this package\n evaluates the mass and charge balance, extracts all reactants, products, orphan\n metabolites, metabolite names and compartments. Some options\n to characterize and write models in TSV and SBML formats are also included.","Published":"2017-06-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"minxent","Version":"0.01","Title":"Entropy Optimization Distributions","Description":"This package implements entropy optimization distributions\n under specified constraints. It also offers an R interface to\n the MinxEnt and MaxEnt distributions.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mipfp","Version":"3.1","Title":"Multidimensional Iterative Proportional Fitting and Alternative\nModels","Description":"An implementation of the iterative proportional fitting (IPFP), \n maximum likelihood, minimum chi-square and weighted least squares procedures\n for updating an N-dimensional array with respect to given target marginal \n distributions (which, in turn, can be multidimensional). The package also\n provides an application of the IPFP to simulate multivariate Bernoulli\n distributions.","Published":"2016-12-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MIPHENO","Version":"1.2","Title":"Mutant Identification through Probabilistic High throughput\nEnabled NOrmalization","Description":"This package contains functions to carry out processing and\n analysis of high throughput data and detection of putative\n hits/mutants. 
Contents include a function for post-hoc quality\n control for removal of outlier sample sets, a median-based\n normalization method for use in datasets where there are no\n explicit controls and where most of the responses are of the\n wildtype/no response class (see accompanying paper). The\n package also includes a way to prioritize individuals of\n interest using an empirical cumulative distribution function.\n Methods for generating synthetic data as well as data from the\n Chloroplast 2010 project are included.","Published":"2012-01-27","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"miRada","Version":"1.13.8-8","Title":"MicroRNA Microarray Data Analysis","Description":"This package collects algorithms/functions developed for\n microRNA profiling data analyses. Analytical platforms include\n traditional hybridization microarray, CGH, beads-based\n microarray, and qRT-PCR array.","Published":"2013-08-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MiRAnorm","Version":"1.0.0","Title":"Adaptive Normalization for miRNA Data","Description":"An adaptive normalization algorithm that selects housekeeping genes\n based on the sample level variability in the data. This is suitable for any data\n obtained from RT-qPCR assays. A manuscript describing the method is submitted \n to Genome Biology under ``MiRA-norm: An Adaptive Method for the Normalization of MicroRNA Array Data``, Yuda Zhu et al.","Published":"2016-11-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"miRNAss","Version":"1.0","Title":"Genome-Wide Discovery of Pre-miRNAs with few Labeled Examples","Description":"Machine learning method specifically designed for\n pre-miRNA prediction. It takes advantage of unlabeled sequences to improve\n the prediction rates even when there are just a few positive examples, when\n the negative examples are unreliable or are not good representatives of\n their class. 
Furthermore, the method can automatically search for negative\n examples if the user is unable to provide them. MiRNAss can find a good\n boundary to divide the pre-miRNAs from other groups of sequences; it\n automatically optimizes the threshold that defines the class boundaries,\n and thus, it is robust to high class imbalance. Each step of the method is\n scalable and can handle large volumes of data.","Published":"2017-05-06","License":"Apache License 2.0","snapshot_date":"2017-06-23"} {"Package":"MiRSEA","Version":"1.1","Title":"'MicroRNA' Set Enrichment Analysis","Description":"The tools for 'MicroRNA Set Enrichment Analysis' can identify risk pathways (or prior gene sets) regulated by a microRNA set in the context of microRNA expression data. (1) This package constructs a correlation profile of microRNA and pathways by the hypergeometric statistic test. The gene sets of pathways are derived from three public databases (Kyoto Encyclopedia of Genes and Genomes ('KEGG'); 'Reactome'; 'Biocarta') and the target gene sets of microRNA are provided by four databases ('TarBaseV6.0', 'mir2Disease', 'miRecords', 'miRTarBase'). (2) This package can quantify the change of correlation between microRNA for each pathway (or prior gene set) based on microRNA expression data with cases and controls. (3) This package uses the weighted Kolmogorov-Smirnov statistic to calculate an enrichment score (ES) of a microRNA set that co-regulates a pathway, which reflects the degree to which a given pathway is associated with the specific phenotype. (4) This package can provide the visualization of the results.","Published":"2015-07-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mirt","Version":"1.24","Title":"Multidimensional Item Response Theory","Description":"Analysis of dichotomous and polytomous response data using\n unidimensional and multidimensional latent trait models under the Item\n Response Theory paradigm. 
Exploratory and confirmatory models can be\n estimated with quadrature (EM) or stochastic (MHRM) methods. Confirmatory\n bi-factor and two-tier analyses are available for modeling item testlets.\n Multiple group analysis and mixed effects designs also are available for\n detecting differential item and test functioning as well as modelling\n item and person covariates. Finally, latent class models such as the DINA,\n DINO, multidimensional latent class, and several other discrete latent\n variable models are supported.","Published":"2017-05-04","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"mirtCAT","Version":"1.5","Title":"Computerized Adaptive Testing with Multidimensional Item\nResponse Theory","Description":"Provides tools to generate an HTML interface for creating adaptive\n and non-adaptive educational and psychological tests using the shiny\n package. Suitable for applying unidimensional and multidimensional\n computerized adaptive tests (CAT) using item response theory methodology and for\n creating simple questionnaire forms to collect response data directly in R.\n Additionally, optimal test designs (e.g., \"shadow testing\") are supported\n for tests which contain a large number of item selection constraints.\n Finally, the package contains tools useful for performing Monte Carlo simulations \n for studying the behavior of computerized adaptive test banks.","Published":"2017-05-04","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"miRtest","Version":"1.8","Title":"combined miRNA- and mRNA-testing","Description":"combined miRNA- and mRNA-testing","Published":"2014-11-27","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"misc3d","Version":"0.8-4","Title":"Miscellaneous 3D Plots","Description":"A collection of miscellaneous 3d plots, including\n isosurfaces.","Published":"2013-01-25","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"miscF","Version":"0.1-3","Title":"Miscellaneous 
Functions","Description":"Various functions for random number generation, density \n estimation, classification, curve fitting, and spatial \n data analysis.","Published":"2016-07-07","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"miscFuncs","Version":"1.2-10","Title":"Miscellaneous Useful Functions Including LaTeX Tables, Kalman\nFiltering and Development Tools","Description":"Implementing various things including functions for LaTeX tables,\n the Kalman filter, web scraping, development tools, relative risk and odds\n ratio.","Published":"2016-11-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"misclassGLM","Version":"0.2.0","Title":"Computation of Generalized Linear Models with Misclassified\nCovariates Using Side Information","Description":"Estimates models that extend the standard GLM to take\n misclassification into account. The models require side information from a secondary data set\n on the misclassification process, i.e. some sort of misclassification\n probabilities conditional on some common covariates.\n A detailed description of the algorithm can be found in\n Dlugosz, Mammen and Wilke (2015) \url{http://www.zew.de/PU70410}.","Published":"2016-09-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"miscor","Version":"0.1-1","Title":"Miscellaneous Functions for the Correlation Coefficient","Description":"Statistical test for the product-moment correlation coefficient based on H0: rho = rho0 \n including sample size computation, statistical test for comparing the product-moment \n correlation coefficient in independent and dependent samples, sequential triangular\n test for the product-moment correlation coefficient, partial and semipartial correlation,\n simulation of bivariate normal and non-normal distribution with a specified correlation.","Published":"2017-04-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"miscset","Version":"1.1.0","Title":"Miscellaneous Tools Set","Description":"A collection 
of miscellaneous methods to simplify various tasks,\n including plotting, data.frame and matrix transformations, environment\n functions, regular expression methods, and string and logical operations, as\n well as numerical and statistical tools. Most of the methods are simple but\n useful wrappers of common base R functions, which extend S3 generics or\n provide default values for important parameters.","Published":"2017-02-24","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"miscTools","Version":"0.6-22","Title":"Miscellaneous Tools and Utilities","Description":"Miscellaneous small tools and utilities.\n Many of them facilitate the work with matrices,\n e.g. inserting rows or columns, creating symmetric matrices,\n or checking for semidefiniteness.\n Other tools facilitate the work with regression models,\n e.g. extracting the standard errors,\n obtaining the number of (estimated) parameters,\n or calculating R-squared values.","Published":"2016-11-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mise","Version":"0.1.0","Title":"Clears the Workspace (Mise en Place)","Description":"Clears the workspace. Useful for the beginnings of R scripts, to\n avoid potential problems with accidentally using information from variables\n or functions from previous script evaluations, too many figure windows open\n at the same time, packages that you don't need any more, or a cluttered\n console. Uses code from various StackOverflow users. See help(mise) for\n pointers to the relevant StackOverflow pages.","Published":"2016-06-30","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"MiSPU","Version":"1.0","Title":"Microbiome Based Sum of Powered Score (MiSPU) Tests","Description":"There is an increasing interest in investigating how the compositions of microbial communities are associated with human health and disease. 
In this package, we present a novel global testing method called aMiSPU, which is highly adaptive and thus highly powered across various scenarios, alleviating the issue with the choice of a phylogenetic distance. Our simulations and real data analysis demonstrated that the aMiSPU test was often more powerful than several competing methods while correctly controlling type I error rates.","Published":"2016-03-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"misreport","Version":"0.1.1","Title":"Statistical Analysis of Misreporting on Sensitive Survey\nQuestions","Description":"Enables investigation of the predictors of misreporting on sensitive survey questions through a multivariate list experiment regression method. The method permits researchers to model whether a survey respondent's answer to the sensitive item in a list experiment is different from his or her answer to an analogous direct question.","Published":"2017-02-27","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"missDeaths","Version":"2.5","Title":"Simulating and Analyzing Time to Event Data in the Presence of\nPopulation Mortality","Description":"Implements two methods: a nonparametric risk adjustment and a\n data imputation method that use general population mortality tables to allow a\n correct analysis of time to disease recurrence. Also includes a powerful set of\n object oriented survival data simulation functions.","Published":"2017-05-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"missForest","Version":"1.4","Title":"Nonparametric Missing Value Imputation using Random Forest","Description":"The function 'missForest' in this package is used to\n impute missing values particularly in the case of mixed-type\n data. It uses a random forest trained on the observed values of\n a data matrix to predict the missing values. It can be used to\n impute continuous and/or categorical data including complex\n interactions and non-linear relations. 
It yields an out-of-bag\n (OOB) imputation error estimate without the need of a test set\n or elaborate cross-validation. It can be run in parallel to \n save computation time.","Published":"2013-12-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MissingDataGUI","Version":"0.2-5","Title":"A GUI for Missing Data Exploration","Description":"Provides numeric and graphical\n summaries for the missing values from both categorical\n and quantitative variables. A variety of imputation\n methods are applied, including the univariate imputations\n like fixed or random values, multivariate imputations\n like the nearest neighbors and multiple imputations,\n and imputations conditioned on a categorical variable.","Published":"2016-04-25","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"missMDA","Version":"1.11","Title":"Handling Missing Values with Multivariate Data Analysis","Description":"Imputation of incomplete continuous or categorical datasets; Missing values are imputed with a principal component analysis (PCA), a multiple correspondence analysis (MCA) model or a multiple factor analysis (MFA) model; Perform multiple imputation with and in PCA or MCA.","Published":"2017-03-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MissMech","Version":"1.0.2","Title":"Testing Homoscedasticity, Multivariate Normality, and Missing\nCompletely at Random","Description":"To test whether the missing data mechanism, in a set of incompletely observed data, is one of missing completely at random (MCAR). \n For detailed description see Jamshidian, M. Jalal, S., and Jansen, C. (2014). \"MissMech: An R Package for Testing Homoscedasticity, Multivariate Normality, and Missing Completely at Random (MCAR),\" Journal of Statistical Software, 56(6), 1-31. 
URL http://www.jstatsoft.org/v56/i06/.","Published":"2015-04-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MiST","Version":"1.0","Title":"Mixed effects Score Test for continuous outcomes","Description":"Test for association between a set of SNPS/genes and\n continuous or binary outcomes by including variant\n characteristic information and using (weighted) score\n statistics.","Published":"2013-12-14","License":"LGPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"mistat","Version":"1.0-4","Title":"Data Sets, Functions and Examples from the Book: \"Modern\nIndustrial Statistics\" by Kenett, Zacks and Amberti","Description":"Provide all the data sets and statistical analysis applications used in \"Modern Industrial Statistics: with applications in R, MINITAB and JMP\" by R.S. Kenett and S. Zacks with contributions by D. Amberti, John Wiley and Sons, 2013, which is a second revised and expanded revision of \"Modern Industrial Statistics: Design and Control of Quality and Reliability\", R. Kenett and S. Zacks, Duxbury/Wadsworth Publishing, 1998.","Published":"2016-08-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mistral","Version":"2.1.0","Title":"Methods in Structural Reliability Analysis","Description":"Various reliability analysis methods for rare event inference: \n 1) computing failure probability (probability that the output of a numerical model exceeds a threshold),\n 2) computing quantiles of low or high-order,\n 3) Wilks formula to compute quantile(s) from a sample or the size of the required i.i.d. sample.","Published":"2016-04-03","License":"CeCILL","snapshot_date":"2017-06-23"} {"Package":"MitISEM","Version":"1.1","Title":"Mixture of Student t Distributions using Importance Sampling and\nExpectation Maximization","Description":"Flexible multivariate function approximation using adapted\n Mixture of Student t Distributions. 
The mixture of t distributions\n is obtained using an Importance Sampling weighted Expectation\n Maximization algorithm.","Published":"2017-05-09","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"mitml","Version":"0.3-5","Title":"Tools for Multiple Imputation in Multilevel Modeling","Description":"Provides tools for multiple imputation of missing data in multilevel\n modeling. Includes a user-friendly interface to the packages 'pan' and 'jomo',\n and several functions for visualization, data management and the analysis \n of multiply imputed data sets.","Published":"2017-03-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mitools","Version":"2.3","Title":"Tools for multiple imputation of missing data","Description":"Tools to perform analyses and combine results from\n multiple-imputation datasets.","Published":"2014-09-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MittagLeffleR","Version":"0.1.0","Title":"The Mittag-Leffler Distribution","Description":"Provides probability density, distribution function, \n quantile function and random variate generation for the Mittag-Leffler \n distributions, and the Mittag-Leffler function. Based on the algorithm\n by Garrappa, R. (2015).","Published":"2017-06-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mix","Version":"1.0-10","Title":"Estimation/Multiple Imputation for Mixed Categorical and\nContinuous Data","Description":"Estimation/multiple imputation programs for mixed categorical\n and continuous data.","Published":"2017-06-12","License":"Unlimited","snapshot_date":"2017-06-23"} {"Package":"mixAK","Version":"5.0","Title":"Multivariate Normal Mixture Models and Mixtures of Generalized\nLinear Mixed Models Including Model Based Clustering","Description":"Contains a mixture of statistical methods including the MCMC methods to analyze normal mixtures. 
Additionally, model based clustering methods are implemented to perform classification based on (multivariate) longitudinal (or otherwise correlated) data. The basis for such clustering is a mixture of multivariate generalized linear mixed models.","Published":"2017-03-16","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"MixAll","Version":"1.2.0","Title":"Clustering using Mixture Models","Description":"Algorithms and methods for estimating parametric mixture models for\n mixed data and with missing data.","Published":"2016-08-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mixcat","Version":"1.0-3","Title":"Mixed effects cumulative link and logistic regression models","Description":"Mixed effects cumulative and baseline logit link models\n for the analysis of ordinal or nominal responses, with\n non-parametric distribution for the random effects","Published":"2012-11-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mixdist","Version":"0.5-4","Title":"Finite Mixture Distribution Models","Description":"This package contains functions for fitting finite mixture\n distribution models to grouped data and conditional data by the\n method of maximum likelihood using a combination of a\n Newton-type algorithm and the EM algorithm.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MixedDataImpute","Version":"0.1","Title":"Missing Data Imputation for Continuous and Categorical Data\nusing Nonparametric Bayesian Joint Models","Description":"Missing data imputation for continuous and categorical data, using nonparametric Bayesian joint models (specifically the hierarchically coupled mixture model with local dependence described in Murray and Reiter (2015); see 'citation(\"MixedDataImpute\")' or http://arxiv.org/abs/1410.0438). See '?hcmm_impute' for example usage. 
","Published":"2016-02-07","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"mixedMem","Version":"1.1.0","Title":"Tools for Discrete Multivariate Mixed Membership Models","Description":"Fits mixed membership models with discrete multivariate data (with\n or without repeated measures) following the general framework of Erosheva et al\n (2004). This package uses a Variational EM approach by approximating the\n posterior distribution of latent memberships and selecting hyperparameters\n through a pseudo-MLE procedure. Currently supported data types are\n Bernoulli, multinomial and rank (Plackett-Luce). The extended GoM model with fixed stayers from Erosheva et al (2007) is now also supported. See Airoldi et al (2014) for other examples of mixed membership models.","Published":"2015-08-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MixedPoisson","Version":"2.0","Title":"Mixed Poisson Models","Description":"The estimation of the parameters in mixed Poisson models. ","Published":"2016-12-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"mixedsde","Version":"2.0","Title":"Estimation Methods for Stochastic Differential Mixed Effects\nModels","Description":"Inference on Ornstein-Uhlenbeck or\n Cox-Ingersoll-Ross stochastic differential models, with one or two random effects in the drift function.","Published":"2016-07-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MixedTS","Version":"1.0.4","Title":"Mixed Tempered Stable Distribution","Description":"We provide detailed functions for the univariate Mixed Tempered Stable distribution. 
","Published":"2015-10-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mixEMM","Version":"1.0","Title":"A Mixed-Effects Model for Analyzing Cluster-Level Non-Ignorable\nMissing Data","Description":"Contains functions for estimating a mixed-effects model for\n clustered data (or batch-processed data) with cluster-level (or batch-\n level) missing values in the outcome, i.e., the outcomes of some \n clusters are either all observed or missing altogether. The model is \n developed for analyzing incomplete data from labeling-based quantitative \n proteomics experiments but is not limited to this type of data. \n We used an expectation conditional maximization (ECM) algorithm for model \n estimation. The cluster-level missingness may depend on the average \n value of the outcome in the cluster (missing not at random).","Published":"2017-06-08","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"mixer","Version":"1.8","Title":"Random graph clustering","Description":"Routines for the analysis (unsupervised clustering) of\n networks using MIXtures of Erdos-Renyi random graphs.","Published":"2015-01-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mixexp","Version":"1.2.5","Title":"Design and Analysis of Mixture Experiments","Description":"Functions for creating designs for mixture experiments, making ternary contour plots, and making mixture effect plots.","Published":"2016-08-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MIXFIM","Version":"1.0","Title":"Evaluation of the FIM in NLMEMs using MCMC","Description":"Evaluation and optimization of the Fisher Information Matrix in NonLinear Mixed Effect Models using Markov chain Monte Carlo for continuous and discrete data.","Published":"2015-08-31","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"MixGHD","Version":"2.1","Title":"Model Based Clustering, Classification and Discriminant Analysis\nUsing the Mixture of Generalized Hyperbolic 
Distributions","Description":"Carries out model-based clustering, classification and discriminant analysis using five different models. The models are all based on the generalized hyperbolic distribution. The first model 'MGHD' is the classical mixture of generalized hyperbolic distributions. The 'MGHFA' is the mixture of generalized hyperbolic factor analyzers for high dimensional data sets. The 'MSGHD' is the mixture of multiple scaled generalized hyperbolic distributions. The 'cMSGHD' is a 'MSGHD' with convex contour plots. The 'MCGHD', the mixture of coalesced generalized hyperbolic distributions, is a new, more flexible model.","Published":"2017-01-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mixKernel","Version":"0.1","Title":"Omics Data Integration Using Kernel Methods","Description":"Kernel-based methods are powerful methods for integrating \n heterogeneous types of data. mixKernel aims at providing methods to combine\n kernels for unsupervised exploratory analysis. Different solutions are \n provided to compute a meta-kernel, in a consensus way or in a way that \n best preserves the original topology of the data. mixKernel also integrates\n kernel PCA to visualize similarities between samples in a non-linear space\n and from the multiple source point of view. Functions to assess and display\n important variables are also provided in the package. ","Published":"2017-05-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mixlink","Version":"0.1.4","Title":"Mixture Link Regression","Description":"The Mixture Link model is a proposed extension to generalized linear models, where the outcome distribution is a finite mixture of J > 1 densities. This package supports Mixture Link computations for Poisson and Binomial outcomes. 
This includes the distribution functions, numerical maximum likelihood estimation, Bayesian analysis, and quantile residuals to assess model fit.","Published":"2016-12-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mixlm","Version":"1.2.1","Title":"Mixed Model ANOVA and Statistics for Education","Description":"The main functions perform mixed models analysis by least squares\n or REML by adding the function r() to formulas of lm() and glm(). A collection of\n text-book statistics for higher education is also included, e.g. modifications\n of the functions lm(), glm() and associated summaries from the package 'stats'.","Published":"2017-05-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MixMAP","Version":"1.3.4","Title":"Implements the MixMAP Algorithm","Description":"A collection of functions to implement the MixMAP algorithm, which performs gene\n level tests of association using data from a previous GWAS or data from a\n meta-analysis of several GWAS. Conceptually, genes are detected as\n significant if the collection of p-values within a gene are determined to\n be collectively smaller than would be observed by chance.","Published":"2015-08-10","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"mixOmics","Version":"6.1.3","Title":"Omics Data Integration Project","Description":"Multivariate methods are well suited to large omics data sets where the number of variables (e.g. genes, proteins, metabolites) is much larger than the number of samples (patients, cells, mice). They have the appealing properties of reducing the dimension of the data by using instrumental variables (components), which are defined as combinations of all variables. Those components are then used to produce useful graphical outputs that enable better understanding of the relationships and correlation structures between the different data sets that are integrated. 
mixOmics offers a wide range of multivariate methods for the exploration and integration of biological datasets with a particular focus on variable selection. The package proposes several sparse multivariate models we have developed to identify the key variables that are highly correlated, and/or explain the biological outcome of interest. The data that can be analysed with mixOmics may come from high throughput sequencing technologies, such as omics data (transcriptomics, metabolomics, proteomics, metagenomics etc.) but also beyond the realm of omics (e.g. spectral imaging). The methods implemented in mixOmics can also handle missing values without having to delete entire rows with missing data. A non-exhaustive list of methods includes variants of generalised Canonical Correlation Analysis, sparse Partial Least Squares and sparse Discriminant Analysis. Recently we implemented integrative methods to combine multiple data sets: N-integration with variants of Generalised Canonical Correlation Analysis and P-integration with variants of multi-group Partial Least Squares.","Published":"2017-05-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mixor","Version":"1.0.3","Title":"Mixed-Effects Ordinal Regression Analysis","Description":"Provides the function 'mixord' for fitting mixed-effects ordinal and binary response models and associated methods for printing, summarizing, extracting estimated coefficients and the variance-covariance matrix, and estimating contrasts for the fitted models.","Published":"2015-08-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mixpack","Version":"0.3.6","Title":"Tools to Work with Mixture Components","Description":"A collection of tools implemented to facilitate the analysis of the components of finite mixture distributions. The package has some functions to generate random samples coming from a finite mixture. 
The package provides a C++ implementation for the construction of a hierarchy over the components of a given finite mixture.","Published":"2017-01-27","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"mixPHM","Version":"0.7-2","Title":"Mixtures of Proportional Hazard Models","Description":"Fits multiple variable mixtures of various parametric proportional hazard models using the EM-Algorithm. Proportionality restrictions can be imposed on the latent groups and/or on the variables. Several survival distributions can be specified. Missing values and censored values are allowed. Independence is assumed over the single variables.","Published":"2015-07-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"mixR","Version":"0.1.0","Title":"Finite Mixture Modeling for Raw and Binned Data","Description":"Performs maximum likelihood estimation for finite mixture models for families including Normal, Weibull, Gamma and Lognormal by using the EM algorithm, together with the Newton-Raphson algorithm or the bisection method when necessary. It also conducts mixture model selection by using information criteria or the bootstrap likelihood ratio test. The data used for mixture model fitting can be raw data or binned data. 
The model fitting process is accelerated by using R package 'Rcpp'.","Published":"2017-02-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mixRasch","Version":"1.1","Title":"Mixture Rasch Models with JMLE","Description":"Estimates Rasch models and mixture Rasch models, including the dichotomous Rasch model, the rating scale model, and the partial credit model.","Published":"2014-03-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mixreg","Version":"0.0-5","Title":"Functions to fit mixtures of regressions","Description":"Fits mixtures of (possibly multivariate) regressions\n\t(which has been described as doing ANCOVA when you don't\n\tknow the levels).","Published":"2014-02-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MixRF","Version":"1.0","Title":"A Random-Forest-Based Approach for Imputing Clustered Incomplete\nData","Description":"It offers random-forest-based functions to impute clustered\n incomplete data. The package is tailored for but not limited to imputing\n multitissue expression data, in which a gene's expression is measured on the\n collected tissues of an individual but missing on the uncollected tissues.","Published":"2016-04-06","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"mixsep","Version":"0.2.1-2","Title":"Forensic Genetics DNA Mixture Separation","Description":"Separates DNA mixtures using a statistical model within a\n greedy algorithm with a useful tcl/tk GUI.","Published":"2013-05-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MixSIAR","Version":"3.1.7","Title":"Bayesian Mixing Models in R","Description":"Creates and runs Bayesian mixing models to analyze\n biotracer data (i.e. stable isotopes, fatty acids), which estimate the\n proportions of source (prey) contributions to a mixture (consumer). 
'MixSIAR'\n is not one model, but a framework that allows a user to create a mixing model\n based on their data structure and research questions, via options for fixed/\n random effects, source data types, priors, and error terms. 'MixSIAR' incorporates\n several years of advances since 'MixSIR' and 'SIAR', and includes both GUI\n (graphical user interface) and script versions.","Published":"2016-08-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"MixSim","Version":"1.1-3","Title":"Simulating Data to Study Performance of Clustering Algorithms","Description":"The utility of this package is in simulating mixtures of Gaussian\n distributions with different levels of overlap between mixture\n components. Pairwise overlap, defined as a sum of two\n misclassification probabilities, measures the degree of\n interaction between components and can be readily employed to\n control the clustering complexity of datasets simulated from\n mixtures. These datasets can then be used for systematic\n performance investigation of clustering and finite mixture\n modeling algorithms. 
Among the other capabilities of 'MixSim'\n are computing the exact overlap for Gaussian mixtures,\n simulating Gaussian and non-Gaussian data, simulating outliers\n and noise variables, calculating various measures of agreement\n between two partitionings, and constructing parallel\n distribution plots for the graphical display of finite mixture\n models.","Published":"2017-04-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mixsmsn","Version":"1.1-2","Title":"Fitting Finite Mixture of Scale Mixture of Skew-Normal\nDistributions","Description":"Functions to fit finite mixture of scale mixture of\n skew-normal (FM-SMSN) distributions.","Published":"2016-08-23","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"mixtNB","Version":"1.0","Title":"DE Analysis of RNA-Seq Data by Mixtures of NB","Description":"Performs differential expression analysis of RNA-Seq data when replicates under two conditions are available. First, mixtures of Negative Binomial distributions are fitted on the data in order to estimate the dispersions, then the Wald test is computed. ","Published":"2015-05-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"mixtools","Version":"1.1.0","Title":"Tools for Analyzing Finite Mixture Models","Description":"Analyzes finite mixture models for various parametric and semiparametric settings. This includes mixtures of parametric distributions (normal, multivariate normal, multinomial, gamma), various Reliability Mixture Models (RMMs), mixtures-of-regressions settings (linear regression, logistic regression, Poisson regression, linear regression with changepoints, predictor-dependent mixing proportions, random effects regressions, hierarchical mixtures-of-experts), and tools for selecting the number of components (bootstrapping the likelihood ratio test statistic and model selection criteria). 
Bayesian estimation of mixtures-of-linear-regressions models is available as well as a novel data depth method for obtaining credible bands. This package is based upon work supported by the National Science Foundation under Grant No. SES-0518772.","Published":"2017-03-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mixtox","Version":"1.3.2","Title":"Curve Fitting and Mixture Toxicity Assessment","Description":"Curve fitting for monotonic (sigmoidal) & non-monotonic (J-shaped) \n concentration-response data. Prediction of mixture toxicity based on reference \n models such as 'concentration addition', 'independent action', and 'generalized \n concentration addition'.","Published":"2017-02-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"mixture","Version":"1.4","Title":"Mixture Models for Clustering and Classification","Description":"An implementation of all 14 Gaussian parsimonious\n clustering models (GPCMs) for model-based clustering and\n model-based classification.","Published":"2016-12-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MixtureInf","Version":"1.1","Title":"Inference for Finite Mixture Models","Description":"Functions for computing the penalized maximum likelihood estimate (PMLE) or maximum likelihood estimate (MLE), testing the order of a finite mixture model using the EM-test, drawing a histogram of observations and the fitted density or probability mass function of the mixture model.","Published":"2016-04-07","License":"AGPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"mizer","Version":"0.2","Title":"Multi-species sIZE spectrum modelling in R","Description":"A set of classes and methods to set up and run multispecies, trait\n based and community size spectrum ecological models, focussed on the marine\n environment.","Published":"2014-04-16","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"mkde","Version":"0.1","Title":"2D and 3D movement-based kernel density estimates 
(MKDEs)","Description":"Provides functions to compute and visualize movement-based kernel density estimates (MKDEs) for animal utilization distributions in 2 or 3 spatial dimensions.","Published":"2014-08-24","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"mkin","Version":"0.9.45","Title":"Kinetic Evaluation of Chemical Degradation Data","Description":"Calculation routines based on the FOCUS Kinetics Report (2006,\n 2014). Includes a function for conveniently defining differential equation\n models, model solution based on eigenvalues if possible or using numerical\n solvers and a choice of the optimisation methods made available by the 'FME'\n package. If a C compiler (on windows: 'Rtools') is installed, differential\n equation models are solved using compiled C functions. Please note that no\n warranty is implied for correctness of results or fitness for a particular\n purpose.","Published":"2016-12-08","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"MKLE","Version":"0.05","Title":"Maximum kernel likelihood estimation","Description":"Package for fast computation of the maximum kernel\n likelihood estimator (mkle)","Published":"2012-10-29","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"MKmisc","Version":"0.993","Title":"Miscellaneous Functions from M. Kohl","Description":"Contains several functions for statistical data analysis; e.g. for sample size and power calculations, computation of confidence intervals, and generation of similarity matrices.","Published":"2016-09-23","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"mkssd","Version":"1.1","Title":"Efficient multi-level k-circulant supersaturated designs","Description":"mkssd is a package that generates efficient balanced\n non-aliased multi-level k-circulant supersaturated designs by\n interchanging the elements of the generator vector. 
The package\n tries to generate a supersaturated design with chi-square\n efficiency greater than a user-specified efficiency level (mef). The\n package also displays the progress of generation of an\n efficient multi-level k-circulant design through a progress\n bar. A progress of 100% means that one full round of\n interchange is completed. More than one full round (typically\n 4-5 rounds) of interchange may be required for larger designs.","Published":"2011-08-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mlbench","Version":"2.1-1","Title":"Machine Learning Benchmark Problems","Description":"A collection of artificial and real-world machine learning\n benchmark problems, including, e.g., several data sets from the\n UCI repository.","Published":"2012-07-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MLCIRTwithin","Version":"2.1","Title":"Latent Class Item Response Theory (LC-IRT) Models under\nWithin-Item Multidimensionality","Description":"Framework for the Item Response Theory analysis of dichotomous and ordinal polytomous outcomes under the assumption of within-item multidimensionality and discreteness of the latent traits. The fitting algorithms allow for missing responses and for different item parametrizations and are based on the Expectation-Maximization paradigm. Individual covariates affecting the class weights may be included in the new version together with the possibility of constraints on all model parameters.","Published":"2016-08-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MLCM","Version":"0.4.1","Title":"Maximum Likelihood Conjoint Measurement","Description":"Conjoint measurement is a psychophysical procedure in which stimulus pairs are presented that vary along 2 or more dimensions and the observer is required to compare the stimuli along one of them. 
This package contains functions to estimate the contribution of the n scales to the judgment by a maximum likelihood method under several hypotheses of how the perceptual dimensions interact.","Published":"2014-02-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mlDNA","Version":"1.1","Title":"Machine Learning-based Differential Network Analysis of\nTranscriptome Data","Description":"Functions necessary to perform the machine learning-based\n differential network analysis of transcriptome data.","Published":"2013-11-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mldr","Version":"0.3.22","Title":"Exploratory Data Analysis and Manipulation of Multi-Label Data\nSets","Description":"Exploratory data analysis and manipulation functions for multi-\n label data sets along with an interactive Shiny application to ease their use.","Published":"2016-01-16","License":"LGPL (>= 3) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"mldr.datasets","Version":"0.3.15","Title":"R Ultimate Multilabel Dataset Repository","Description":"Large collection of multilabel datasets along with the functions\n needed to export them to several formats, to make partitions, and to obtain\n bibliographic information.","Published":"2016-01-16","License":"LGPL (>= 3) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"MLDS","Version":"0.4.5","Title":"Maximum Likelihood Difference Scaling","Description":"Difference scaling is a method for scaling perceived \n supra-threshold differences. 
The package contains functions that\n allow the user to design and run a difference scaling experiment, \n to fit the resulting data by maximum likelihood and test the\n internal validity of the estimated scale.","Published":"2015-08-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mle.tools","Version":"1.0.0","Title":"Expected/Observed Fisher Information and Bias-Corrected Maximum\nLikelihood Estimate(s)","Description":"Calculates the expected/observed Fisher information and the bias-corrected maximum likelihood estimate(s) via Cox-Snell Methodology.","Published":"2017-02-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mlearning","Version":"1.0-0","Title":"Machine learning algorithms with unified interface and confusion\nmatrices","Description":"This package provides a unified interface to various\n machine learning algorithms. Confusion matrices are provided\n too.","Published":"2013-01-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MLEcens","Version":"0.1-4","Title":"Computation of the MLE for bivariate (interval) censored data","Description":"This package contains functions to compute the\n nonparametric maximum likelihood estimator (MLE) for the\n bivariate distribution of (X,Y), when realizations of (X,Y)\n cannot be observed directly. To be more precise, we consider\n the situation where we observe a set of rectangles that are\n known to contain the unobservable realizations of (X,Y). We\n compute the MLE based on such a set of rectangles. The methods\n can also be used for univariate censored data (see data set\n 'cosmesis'), and for censored data with competing risks (see\n data set 'menopause'). 
We also provide functions to visualize\n the observed data and the MLE.","Published":"2013-04-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mlegp","Version":"3.1.4","Title":"Maximum Likelihood Estimates of Gaussian Processes","Description":"Maximum likelihood Gaussian process modeling for\n univariate and multi-dimensional outputs with diagnostic plots.\n Contact the maintainer for a package version that implements\n sensitivity analysis functionality.","Published":"2013-12-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mleur","Version":"1.0-6","Title":"Maximum likelihood unit root test","Description":"Provides functions for unit root testing using MLE method","Published":"2013-12-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mlgt","Version":"0.16","Title":"Multi-Locus Geno-Typing","Description":"Processing and analysis of high throughput (Roche 454)\n sequences generated from multiple loci and multiple biological\n samples. Sequences are assigned to their locus and sample of\n origin, aligned and trimmed. 
Where possible, genotypes are\n called and variants mapped to known alleles.","Published":"2012-03-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mlica2","Version":"2.1","Title":"Independent Component Analysis using Maximum Likelihood","Description":"An R code implementation of the maximum likelihood (fixed\n point) algorithm of Hyvaerinen, Karhunen, and Oja for\n independent component analysis.","Published":"2012-11-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MLID","Version":"1.0.1","Title":"Multilevel Index of Dissimilarity","Description":"Tools and functions to fit a multilevel index of dissimilarity.","Published":"2017-03-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"mljar","Version":"0.1.1","Title":"R API for MLJAR","Description":"Provides an R API wrapper for 'mljar.com', a web service allowing for on-line training for machine learning models (see for more information).","Published":"2017-06-23","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"mlma","Version":"4.0-1","Title":"Multilevel Mediation Analysis","Description":"Performs multilevel mediation analysis with generalized additive multilevel models. ","Published":"2016-07-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mlmc","Version":"1.0.0","Title":"Multi-Level Monte Carlo","Description":"An implementation of Multi-level Monte Carlo for R. This package\n builds on the original 'Matlab' and C++ implementations by Mike Giles to provide\n a full MLMC driver and example level samplers. 
Multi-core parallel sampling\n of levels is provided built-in.","Published":"2016-04-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MLmetrics","Version":"1.1.1","Title":"Machine Learning Evaluation Metrics","Description":"A collection of evaluation metrics, including loss, score and\n utility functions, that measure regression, classification and ranking performance.","Published":"2016-05-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"mlmmm","Version":"0.3-1.2","Title":"ML estimation under multivariate linear mixed models with\nmissing values","Description":"Computational strategies for multivariate linear\n mixed-effects models with missing values, Schafer and Yucel\n (2002), Journal of Computational and Graphical Statistics, 11,\n 421-442.","Published":"2010-07-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"mlmRev","Version":"1.0-6","Title":"Examples from Multilevel Modelling Software Review","Description":"Data and examples from a multilevel modelling software review\n as well as other well-known data sets from the multilevel modelling\n literature.","Published":"2014-03-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mlogit","Version":"0.2-4","Title":"multinomial logit model","Description":"Estimation of the multinomial logit model","Published":"2013-12-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mlogitBMA","Version":"0.1-6","Title":"Bayesian Model Averaging for Multinomial Logit Models","Description":"Provides a modified function bic.glm of the BMA package that can be applied to multinomial logit (MNL) data. The data is converted to binary logit using the Begg & Gray approximation. The package also contains functions for maximum likelihood estimation of MNL. 
","Published":"2013-12-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mlPhaser","Version":"0.01","Title":"Multi-Locus Haplotype Phasing","Description":"Phase haplotypes from genotypes based on a list of known\n haplotypes. Suited to highly diverse loci such as HLA.","Published":"2012-09-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MLPUGS","Version":"0.2.0","Title":"Multi-Label Prediction Using Gibbs Sampling (and Classifier\nChains)","Description":"An implementation of classifier chains (CC's) for multi-label\n prediction. Users can employ an external package (e.g. 'randomForest',\n 'C50'), or supply their own. The package can train a single set of CC's or\n train an ensemble of CC's -- in parallel if running in a multi-core\n environment. New observations are classified using a Gibbs sampler since\n each unobserved label is conditioned on the others. The package includes\n methods for evaluating the predictions for accuracy and aggregating across\n iterations and models to produce binary or probabilistic classifications.","Published":"2016-07-06","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"mlr","Version":"2.11","Title":"Machine Learning in R","Description":"Interface to a large number of classification and regression\n techniques, including machine-readable parameter descriptions. There is\n also an experimental extension for survival analysis, clustering and\n general, example-specific cost-sensitive learning. Generic resampling,\n including cross-validation, bootstrapping and subsampling. Hyperparameter\n tuning with modern optimization techniques, for single- and multi-objective\n problems. Filter and wrapper methods for feature selection. Extension of\n basic learners with additional operations common in machine learning, also\n allowing for easy nested resampling. 
Most operations can be parallelized.","Published":"2017-03-15","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"mlrMBO","Version":"1.1.0","Title":"A Toolbox for Model-Based Optimization of Expensive Black-Box\nFunctions","Description":"Flexible and comprehensive R toolbox for model-based optimization\n ('MBO'), also known as Bayesian optimization. It is designed for both single-\n and multi-objective optimization with mixed continuous, categorical and\n conditional parameters. The machine learning toolbox 'mlr' provides dozens\n of regression learners to model the performance of the target algorithm with\n respect to the parameter settings. It provides many different infill criteria\n to guide the search process. Additional features include multipoint batch\n proposal, parallel execution as well as visualization and sophisticated\n logging mechanisms, which is especially useful for teaching and understanding\n of algorithm behavior. 'mlrMBO' is implemented in a modular fashion, such that\n single components can be easily replaced or adapted by the user for specific use\n cases.","Published":"2017-05-12","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"MLRMPA","Version":"1.0","Title":"A package for Multilinear Regression Model Population Analysis","Description":"This package provides Multilinear Regression Model Population \n Analysis to build a pool of models between quantitative activity and chemical \n\t\t\t descriptors. It also provides some useful model validation functions. Contains all \n\t\t\t molecular descriptors of 101 organic compounds and an activity dataset.\t\t ","Published":"2013-09-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"mlsjunkgen","Version":"0.1.1","Title":"Use the MLS Junk Generator Algorithm to Generate a Stream of\nPseudo-Random Numbers","Description":"Generate a stream of pseudo-random numbers generated using the MLS \n Junk Generator algorithm. 
Functions exist to generate single pseudo-random \n numbers as well as a vector, data frame, or matrix of pseudo-random numbers.","Published":"2015-09-07","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"mlt","Version":"0.2-0","Title":"Most Likely Transformations","Description":"Likelihood-based estimation of conditional transformation\n models via the most likely transformation approach described in\n Hothorn et al. (2016) .","Published":"2017-06-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"mlt.docreg","Version":"0.2-0","Title":"Most Likely Transformations: Documentation and Regression Tests","Description":"Additional documentation, a package vignette and \n regression tests for package mlt.","Published":"2017-06-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"mltools","Version":"0.2.0","Title":"Machine Learning Tools","Description":"A collection of machine learning helper functions, particularly assisting in the Exploratory Data Analysis phase.\n Makes heavy use of the 'data.table' package for optimal speed and memory efficiency. Highlights include a versatile bin_data() \n function, sparsify() for converting a data.table to sparse matrix format with one-hot encoding, fast evaluation metrics, and \n empirical_cdf() for calculating empirical Multivariate Cumulative Distribution Functions.","Published":"2017-01-03","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"mlVAR","Version":"0.3.3","Title":"Multi-Level Vector Autoregression","Description":"Estimates the multi-level vector autoregression model on time-series data.\n Three network structures are obtained: temporal networks, contemporaneous\n networks and between-subjects networks.","Published":"2017-03-28","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"mlxR","Version":"3.2.0","Title":"Simulation of Longitudinal Data","Description":"Simulation and visualization of complex\n models for longitudinal data. 
The models are encoded using the model coding\n language 'Mlxtran', automatically converted into C++ codes, compiled on the\n fly and linked to R using the 'Rcpp' package. That allows one to implement\n very easily complex ODE-based models and complex statistical models,\n including mixed effects models, for continuous, count, categorical, and\n time-to-event data.","Published":"2017-04-29","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"MM","Version":"1.6-2","Title":"The multiplicative multinomial distribution","Description":"Various utilities for the Multiplicative\n Multinomial distribution","Published":"2013-01-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MM2S","Version":"1.0.5","Title":"Single-Sample Classifier of Medulloblastoma Subtypes for\nMedulloblastoma Patient Samples, Mouse Models, and Cell Lines","Description":"A single-sample classifier that generates Medulloblastoma (MB) subtype predictions for single samples of human MB patients and model systems, including cell lines and mouse models. The MM2S algorithm uses a systems-based methodology that facilitates application of the algorithm on samples irrespective of their platform or source of origin. MM2S demonstrates > 96% accuracy for patients of well-characterized normal cerebellum, Wingless (WNT), or Sonic hedgehog (SHH) subtypes, and the less-characterized Group4 (86%) and Group3 (78.2%). MM2S also enables classification of MB cell lines and mouse models into their human counterparts. This package contains functions for implementing the classifier on human data and mouse data, as well as graphical rendering of the results as PCA plots and heatmaps. ","Published":"2016-02-25","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"MM2Sdata","Version":"1.0.1","Title":"Gene Expression Datasets for the 'MM2S' Package","Description":"Gene Expression datasets for the 'MM2S' package. 
Contains normalized expression data for Human Medulloblastoma ('GSE37418') as well as Mouse Medulloblastoma models ('GSE36594'). ","Published":"2015-06-17","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"mma","Version":"5.0-0","Title":"Multiple Mediation Analysis","Description":"Used for general multiple mediation analysis. \n\tThe analysis method is described in Yu et al. (2014) \"General Multiple Mediation Analysis With an Application to Explore Racial Disparity in Breast Cancer Survival\", published in the Journal of Biometrics & Biostatistics, 5(2):189; and Yu et al. (2017) \"Exploring racial disparity in obesity: a mediation analysis considering geo-coded environmental factors\", published in Spatial and Spatio-temporal Epidemiology, 21, 13-23. ","Published":"2017-03-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mmand","Version":"1.5.0","Title":"Mathematical Morphology in Any Number of Dimensions","Description":"Provides tools for performing mathematical morphology operations,\n such as erosion and dilation, on data of arbitrary dimensionality. Can also\n be used for finding connected components, resampling, filtering, smoothing\n and other image processing-style operations.","Published":"2017-05-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"mmap","Version":"0.6-12","Title":"Map Pages of Memory","Description":"R interface to POSIX mmap and Windows' MapViewOfFile","Published":"2013-08-28","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"mmc","Version":"0.0.3","Title":"Multivariate Measurement Error Correction","Description":"Provides routines for multivariate measurement error correction. Includes procedures for linear, logistic and Cox regression models. 
Bootstrapped standard errors and confidence intervals can be obtained for corrected estimates.","Published":"2015-08-12","License":"GNU General Public License (>= 3)","snapshot_date":"2017-06-23"} {"Package":"mmcm","Version":"1.2-6","Title":"Modified Maximum Contrast Method","Description":"An implementation of modified maximum contrast methods\n and the maximum contrast method: Functions 'mmcm.mvt' and\n 'mcm.mvt' give P-value by using randomized quasi-Monte Carlo\n method with 'pmvt' function of package 'mvtnorm', and\n 'mmcm.resamp' gives P-value by using a permutation method.","Published":"2016-01-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"MMDai","Version":"1.2.0","Title":"Multivariate Multinomial Distribution Approximation and\nImputation for Incomplete Data","Description":"Fit incomplete categorical data with infinite mixture of multinomial distribution\n (Dunson and Xing (2009) ). Perform efficient missing data imputation and\n other statistical inference based on joint distribution estimation.","Published":"2017-03-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mmds","Version":"1.1","Title":"Mixture Model Distance Sampling (mmds)","Description":"This library implements mixture model distance sampling\n methods. 
See Miller and Thomas (in prep.).","Published":"2012-04-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"mme","Version":"0.1-5","Title":"Multinomial Mixed Effects Models","Description":"mme fits Gaussian Multinomial mixed-effects models for small area estimation: Model 1, with one\n random effect in each category of the response variable; Model 2, introducing an\n independent time effect; Model 3, introducing a correlated time effect.\n mme calculates analytical and parametric bootstrap MSE estimators.","Published":"2014-08-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mmeln","Version":"1.2","Title":"Estimation of Multinormal Mixture Distribution","Description":"Fits a multivariate mixture of normal distributions using a\n covariance structure.","Published":"2015-09-25","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"mmeta","Version":"2.3","Title":"Multivariate Meta-Analysis","Description":"A novel multivariate meta-analysis.","Published":"2017-03-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mmm","Version":"1.4","Title":"an R package for analyzing multivariate longitudinal data with\nmultivariate marginal models","Description":"Fits multivariate marginal models for multivariate longitudinal data for both continuous and discrete responses","Published":"2014-01-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mmm2","Version":"1.2","Title":"Multivariate marginal models with shared regression parameters","Description":"Fits multivariate marginal models with shared regression parameters for discrete and continuous responses","Published":"2013-12-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MMMS","Version":"0.1","Title":"Multi-Marker Molecular Signature for Treatment-specific Subgroup\nIdentification","Description":"The package implements a multi-marker molecular signature (MMMS) approach\n for treatment-specific subgroup 
identification.","Published":"2014-03-28","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"mmod","Version":"1.3.3","Title":"Modern Measures of Population Differentiation","Description":"Provides functions for measuring\n population divergence from genotypic data.","Published":"2017-04-06","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"mMPA","Version":"0.1.0","Title":"Implementation of Marker-Assisted Mini-Pooling with Algorithm","Description":"To determine the number of quantitative assays needed for a sample \n of data using pooled testing methods, which include mini-pooling (MP), MP \n with algorithm (MPA), and marker-assisted MPA (mMPA). To estimate the number \n of assays needed, the package also provides a tool to conduct Monte Carlo (MC) \n to simulate different orders in which the sample would be collected to form pools. \n Using MC avoids the dependence of the estimated number of assays on any specific \n ordering of the samples to form pools.","Published":"2017-03-22","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"mmpf","Version":"0.0.3","Title":"Monte-Carlo Methods for Prediction Functions","Description":"Marginalizes prediction functions using Monte-Carlo integration and computes permutation importance.","Published":"2017-03-05","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"mmpp","Version":"0.4","Title":"Various Similarity and Distance Metrics for Marked Point\nProcesses","Description":"Compute similarities and distances between marked point processes.","Published":"2015-08-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"mmppr","Version":"0.1","Title":"Markov Modulated Poisson Process for Unsupervised Event\nDetection in Time Series of Counts","Description":"Time-series of count data occur in many different contexts. 
A\n Markov-modulated Poisson process provides a framework for detecting\n anomalous events using an unsupervised learning approach.","Published":"2016-01-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MMS","Version":"3.00","Title":"Fixed effects Selection in Linear Mixed Models","Description":"Fixed effects Selection in Linear Mixed Models","Published":"2014-04-07","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"mmtfa","Version":"0.1","Title":"Model-Based Clustering and Classification with Mixtures of\nModified t Factor Analyzers","Description":"Fits a family of mixtures of multivariate t-distributions under a continuous t-distributed latent variable structure for the purpose of clustering or classification. The alternating expectation-conditional maximization algorithm is used for parameter estimation.","Published":"2015-06-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MMWRweek","Version":"0.1.1","Title":"Convert Dates to MMWR Day, Week, and Year","Description":"The first day of any MMWR week is Sunday.\n MMWR week numbering is sequential beginning with 1\n and incrementing with each week to a maximum of 52\n or 53. MMWR week #1 of an MMWR year is the first week\n of the year that has at least four days in the calendar\n year. This package provides functionality to convert\n Dates to MMWR day, week, and year and the reverse.","Published":"2015-11-25","License":"GPL (>= 2) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"mnis","Version":"0.2.6","Title":"Easy Downloading Capabilities for the Members' Name Information\nService","Description":"An API package for the Members' Name Information Service operated by the UK parliament. 
Documentation for the API itself can be found here: .","Published":"2017-06-03","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"mnlogit","Version":"1.2.5","Title":"Multinomial Logit Model","Description":"Time- and memory-efficient estimation of multinomial logit models using the maximum likelihood method. Numerical optimization is performed by the Newton-Raphson method using an optimized, parallel C++ library to achieve fast computation of Hessian matrices. Motivated by large-scale multiclass classification problems in econometrics and machine learning.","Published":"2016-11-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MNM","Version":"1.0-2","Title":"Multivariate Nonparametric Methods. An Approach Based on Spatial\nSigns and Ranks","Description":"Multivariate tests, estimates and methods based on the identity score, spatial sign score and spatial rank score are provided. The methods include one- and c-sample problems, shape estimation and testing, linear regression and principal components. ","Published":"2016-06-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mnormpow","Version":"0.1.1","Title":"Multivariate Normal Distributions with Power Integrand","Description":"Computes the integral of f(x)*x_i^k on a product of intervals,\n where f is the density of a Gaussian law.\n This is a small alteration of the mnormt code from A. Genz and A. Azzalini.","Published":"2014-10-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mnormt","Version":"1.5-5","Title":"The Multivariate Normal and t Distributions","Description":"Functions are provided for computing the density and the\n distribution function of multivariate normal and \"t\" random variables,\n and for generating random vectors sampled from these distributions. 
\n Probabilities are computed via non-Monte Carlo methods; different routines \n are used in the case d=1, d=2, d>2, if d denotes the number of dimensions.","Published":"2016-10-15","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"MNP","Version":"3.0-1","Title":"R Package for Fitting the Multinomial Probit Model","Description":"Fits the Bayesian multinomial probit model via Markov chain\n Monte Carlo. The multinomial probit model is often used to analyze \n the discrete choices made by individuals recorded in survey data. \n Examples where the multinomial probit model may be useful include the \n analysis of product choice by consumers in market research and the \n analysis of candidate or party choice by voters in electoral studies. \n The MNP package can also fit the model with different choice sets for \n each individual, and complete or partial individual choice orderings \n of the available alternatives from the choice set. The estimation is\n based on the efficient marginal data augmentation algorithm that is \n developed by Imai and van Dyk (2005). ``A Bayesian Analysis of the \n Multinomial Probit Model Using the Data Augmentation,'' Journal of \n Econometrics, Vol. 124, No. 2 (February), pp. 311-334. \n Detailed examples are given in \n Imai and van Dyk (2005). ``MNP: R Package for Fitting the Multinomial \n Probit Model.'' Journal of Statistical Software, Vol. 14, No. 3 (May), \n pp. 1-32. .","Published":"2017-05-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MNS","Version":"1.0","Title":"Mixed Neighbourhood Selection","Description":"An implementation of the mixed neighbourhood selection (MNS) algorithm. The MNS algorithm can be used to estimate multiple related precision matrices. In particular, the motivation behind this work was driven by the need to understand functional connectivity networks across multiple subjects. 
This package also contains an implementation of a novel algorithm through which to simulate multiple related precision matrices which exhibit properties frequently reported in neuroimaging analysis. ","Published":"2015-12-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Mobilize","Version":"2.16-4","Title":"Mobilize plots and functions","Description":"Some canned plots and functions designed for the mobilize project.\n Designed to be called remotely.","Published":"2014-09-12","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"MOCCA","Version":"1.2","Title":"Multi-objective optimization for collecting cluster alternatives","Description":"This package provides methods to analyze cluster\n alternatives based on multi-objective optimization of cluster\n validation indices.","Published":"2012-12-24","License":"Artistic License 2.0","snapshot_date":"2017-06-23"} {"Package":"mockery","Version":"0.3.0","Title":"Mocking Library for R","Description":"\n The two main functionalities of this package are creating mock\n objects (functions) and selectively intercepting calls to a given\n function that originate in some other function. It can be used\n with any testing framework available for R. Mock objects can\n be injected with either this package's own stub() function or a\n similar with_mock() facility present in the testthat package. ","Published":"2016-12-16","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"mockr","Version":"0.1","Title":"Mocking in R","Description":"Provides a means to mock a package function, i.e., temporarily substitute it for testing. 
Designed as a drop-in replacement for 'testthat::with_mock()', which may break in R 3.4.0 and later.","Published":"2017-04-29","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"mod09nrt","Version":"0.14","Title":"Extraction of Bands from MODIS Surface Reflectance Product MOD09\nNRT","Description":"Package for processing downloaded MODIS Surface reflectance\n Product HDF files. Specifically, MOD09 surface reflectance product files, and\n the associated MOD03 geolocation files (for MODIS-TERRA). The package will be\n most effective if the user installs MRTSwath (MODIS Reprojection Tool for swath\n products; , and\n adds the directory with the MRTSwath executable to the default R PATH by editing\n ~/.Rprofile.","Published":"2016-07-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Modalclust","Version":"0.6","Title":"Hierarchical Modal Clustering","Description":"Performs Modal Clustering (MAC) including Hierarchical Modal Clustering (HMAC) along with their parallel implementation (PHMAC) over several processors. These model-based non-parametric clustering techniques can extract clusters in very high dimensions with arbitrary density shapes. By default clustering is performed over several resolutions and the results are summarised as a hierarchical tree. Associated plot functions are also provided. There is a package vignette that provides many examples. 
This version adheres to the CRAN policy of not spawning more than two child processes by default.","Published":"2014-05-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"modeest","Version":"2.1","Title":"Mode Estimation","Description":"This package provides estimators of the mode of univariate\n unimodal data or univariate unimodal distributions.","Published":"2012-10-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"modehunt","Version":"1.0.7","Title":"Multiscale Analysis for Density Functions","Description":"Given independent and identically distributed observations X(1), ..., X(n) from a density f,\n provides five methods to perform a multiscale analysis about f as well as the necessary critical\n values. The first method, introduced in Duembgen and Walther (2008), provides simultaneous confidence statements\n for the existence and location of local increases (or decreases) of f, based on all intervals I(all) spanned by\n any two observations X(j), X(k). The second method approximates the latter approach by using only a subset of\n I(all) and is therefore computationally much more efficient, but asymptotically equivalent. Omitting the additive\n correction term Gamma in either method offers another two approaches which are more powerful on small scales and\n less powerful on large scales, but are no longer asymptotically minimax optimal. Finally, the block procedure is a\n compromise between adding Gamma or not, having intermediate power properties. 
The latter is again asymptotically\n equivalent to the first and was introduced in Rufibach and Walther (2010).","Published":"2015-07-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"modelfree","Version":"1.1-1","Title":"Model-free estimation of a psychometric function","Description":"Local linear estimation of psychometric functions.\n Provides functions for nonparametric estimation of a\n psychometric function and for estimation of a derived threshold\n and slope, and their standard deviations and confidence\n intervals.","Published":"2012-08-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ModelGood","Version":"1.0.9","Title":"Validation of risk prediction models","Description":"Bootstrap cross-validation for ROC, AUC and Brier score to assess\n and compare predictions of binary status responses.","Published":"2014-11-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ModelMap","Version":"3.3.5","Title":"Modeling and Map Production using Random Forest and Stochastic\nGradient Boosting","Description":"Creates sophisticated models of training data and validates the models with an independent test set, cross validation, or in the case of Random Forest Models, with Out Of Bag (OOB) predictions on the training data. Creates graphs and tables of the model validation results. Applies these models to GIS .img files of predictors to create detailed prediction surfaces. 
Handles large predictor files for map making by reading in the .img files in chunks and writing the predictions for each chunk to the output .txt file before reading the next chunk of data.","Published":"2016-07-03","License":"Unlimited","snapshot_date":"2017-06-23"} {"Package":"ModelMetrics","Version":"1.1.0","Title":"Rapid Calculation of Model Metrics","Description":"Collection of metrics for evaluating models written in C++ using 'Rcpp'.","Published":"2016-08-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"modelObj","Version":"3.0","Title":"A Model Object Framework for Regression Analysis","Description":"A utility library to facilitate the generalization of statistical methods built on a regression framework. Package developers can use 'modelObj' methods to initiate a regression analysis without concern for the details of the regression model and the method to be used to obtain parameter estimates. The specifics of the regression step are left to the user to define when calling the function. The user of a function developed within the 'modelObj' framework creates as input a 'modelObj' that contains the model and the R methods to be used to obtain parameter estimates and to obtain predictions. In this way, a user can easily go from linear to non-linear models within the same package. ","Published":"2017-05-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"modelr","Version":"0.1.0","Title":"Modelling Functions that Work with the Pipe","Description":"Functions for modelling that help you seamlessly integrate\n modelling into a pipeline of data manipulation and visualisation.","Published":"2016-08-31","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"modeltools","Version":"0.2-21","Title":"Tools and Classes for Statistical Models","Description":"A collection of tools to deal with statistical models. \n The functionality is experimental and the user interface is likely to\n change in the future. 
The documentation is rather terse, but packages `coin'\n and `party' have some working examples. However, if you find the\n implemented ideas interesting, we would be very interested in a discussion\n of this proposal. Contributions are more than welcome!","Published":"2013-09-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"modes","Version":"0.7.0","Title":"Find the Modes and Assess the Modality of Complex and Mixture\nDistributions, Especially with Big Datasets","Description":"Designed with a dual purpose of\n accurately estimating the mode (or modes) as well as characterizing\n the modality of data. The specific application area includes complex\n or mixture distributions particularly in a big data environment.\n The heterogeneous nature of (big) data may require deep introspective\n statistical and machine learning techniques, but these statistical tools\n often fail when applied without first understanding the data. In small\n datasets, this often isn't a big issue, but when dealing with large scale\n data analysis or big data, thoroughly inspecting each dimension\n typically yields an O(n^n-1) problem. As such, dealing with big data\n requires an alternative toolkit. This package not only identifies the\n mode or modes for various data types, it also provides a programmatic\n way of understanding the modality (i.e. unimodal, bimodal, etc.) of\n a dataset (whether it's big data or not). See\n for examples and discussion.","Published":"2016-03-07","License":"CC BY-NC-SA 4.0","snapshot_date":"2017-06-23"} {"Package":"modEvA","Version":"1.3.2","Title":"Model Evaluation and Analysis","Description":"Analyses species distribution models and evaluates their performance. 
It includes functions for performing variation partitioning, calculating several measures of model discrimination and calibration, optimizing prediction thresholds based on a number of criteria, performing multivariate environmental similarity surface (MESS) analysis, and displaying various analytical plots.","Published":"2016-07-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"modeval","Version":"0.1.3","Title":"Evaluation of Classification Model Options","Description":"Designed to assist novice to intermediate analysts in choosing\n an optimal classification model, particularly for working with relatively\n small data sets. It provides cross-validated results comparing several \n different models at once using a consistent set of performance metrics,\n so users can home in on the most promising approach rather than fitting\n single models one at a time. The package predefines the 12 most common \n classification models, although users are free to select from the 200+ \n other options available in the caret package.","Published":"2017-04-11","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"MODIS","Version":"1.0.0","Title":"Acquisition and Processing of MODIS Products","Description":"Download and processing functionalities for the Moderate Resolution\n Imaging Spectroradiometer (MODIS). The package provides automated access to the\n global online data archives (LPDAAC and LAADS) and processing capabilities such\n as file conversion, mosaicking, subsetting and time series filtering.","Published":"2017-01-10","License":"GPL (>= 3) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"modiscloud","Version":"0.14","Title":"R tools for processing Level 2 Cloud Mask products from MODIS","Description":"Package for processing downloaded MODIS Cloud Product HDF\n files and derived files. 
Specifically, MOD35_L2 cloud product\n files, and the associated MOD03 geolocation files (for\n MODIS-TERRA); and MYD35_L2 cloud product files, and the\n associated MYD03 geolocation files (for MODIS-AQUA). The\n package will be most effective if the user installs MRTSwath\n (MODIS Reprojection Tool for swath products;\n https://lpdaac.usgs.gov/tools/modis_reprojection_tool_swath),\n and adds the directory with the MRTSwath executable to the\n default R PATH by editing ~/.Rprofile.","Published":"2013-02-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MODISSnow","Version":"0.1.0.0","Title":"Provides a Function to Download MODIS Snow Cover","Description":"Package for downloading Moderate-resolution Imaging Spectroradiometer (MODIS) snow cover data. Global daily snow cover at 500 m resolution derived from MODIS is made available by the National Snow and Ice Data Center .","Published":"2016-12-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"MODISTools","Version":"0.95.1","Title":"MODIS Subsetting Tools","Description":"Provides several functions for downloading, storing and processing \n\t\tsubsets of MODIS Land Processes data as a batch process.","Published":"2017-02-16","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"MODIStsp","Version":"1.3.2","Title":"A Tool for Automating Download and Preprocessing of MODIS Land\nProducts Data","Description":"Allows automating the creation of time series of rasters derived\n from MODIS Satellite Land Products data. It performs several typical\n preprocessing steps such as download, mosaicking, reprojection and resizing\n of data acquired over a specified time period. All processing parameters\n can be set using a user-friendly GUI. 
Users can select which layers of\n the original MODIS HDF files they want to process, which additional\n Quality Indicators should be extracted from aggregated MODIS Quality\n Assurance layers and, in the case of Surface Reflectance products,\n which Spectral Indexes should be computed from the original reflectance\n bands. For each output layer, outputs are saved as single-band raster\n files corresponding to each available acquisition date. Virtual files\n allowing access to the entire time series as a single file are also created.\n Command-line execution exploiting a previously saved processing options\n file is also possible, allowing time series\n related to a MODIS product to be updated automatically whenever a new image is available.","Published":"2017-04-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"modmarg","Version":"0.5.0","Title":"Calculating Marginal Effects and Levels with Errors","Description":"Calculate predicted levels and marginal effects from 'glm' objects,\n using the delta method to calculate standard errors. This is an R-based\n version of the 'margins' command from Stata.","Published":"2017-06-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"modMax","Version":"1.1","Title":"Community Structure Detection via Modularity Maximization","Description":"The algorithms implemented here are used to detect the community structure of a network. \n These algorithms follow different approaches, but are all based on the concept of modularity maximization.","Published":"2015-07-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"modQR","Version":"0.1.1","Title":"Multiple-Output Directional Quantile Regression","Description":"Contains basic tools for performing \n multiple-output quantile regression and computing \n regression quantile contours by means of directional \n regression quantiles. In the location case, one can thus \n obtain halfspace depth contours in two to six dimensions. 
","Published":"2016-03-02","License":"LGPL-2","snapshot_date":"2017-06-23"} {"Package":"modTempEff","Version":"1.5.2","Title":"Modelling temperature effects using time series data","Description":"Fits a Constrained Segmented Distributed Lag regression model \n\tto epidemiological time series of mortality, temperature, and other confounders.","Published":"2014-09-18","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"moduleColor","Version":"1.08-3","Title":"Basic Module Functions","Description":"Methods for color labeling, calculation of eigengenes, merging of closely related modules.","Published":"2014-11-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"modules","Version":"0.5.0","Title":"Self Contained Units of Source Code","Description":"Provides modules as an organizational unit for source code. Modules\n enforce more rigorous dependency definitions and have\n a local search path. They can be used as a sub unit within packages\n or in scripts.","Published":"2016-11-02","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"modygliani","Version":"1.0","Title":"MOlecular DYnamics GLobal ANalysis","Description":"RMSD and Internal Energy analysis of NAMD and YASARA Molecular Dynamics output files. Allows comparison of different dynamics across different complexes. 
Input files must be tab-separated ASCII files.","Published":"2016-07-02","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"MOEADr","Version":"0.2.1","Title":"Component-Wise MOEA/D Implementation","Description":"Modular implementation of Multiobjective Evolutionary Algorithms \n based on Decomposition (MOEA/D) [Zhang and Li (2007), \n ] for quick assembling and \n testing of new algorithmic components, as well as easy \n replication of published MOEA/D proposals.","Published":"2017-03-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"moezipfR","Version":"1.0.2","Title":"Marshall-Olkin Extended Zipf","Description":"Statistical utilities for the analysis of data by means of the Marshall-Olkin Extended Zipf distribution are presented. The distribution is a two-parameter extension of the widely used Zipf model. By plotting the probabilities in log-log scale, this two-parameter extension allows a concave as well as a convex behavior of the function at the beginning of the distribution, maintaining the linearity, associated with the Zipf model, in the tail.","Published":"2017-03-29","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"mogavs","Version":"1.0.1","Title":"Multiobjective Genetic Algorithm for Variable Selection in\nRegression","Description":"Functions for exploring the best subsets in regression with a genetic algorithm. 
The package is much faster than methods relying on complete enumeration, and is suitable for datasets with a large number of variables.","Published":"2015-11-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MOJOV","Version":"1.0.1","Title":"Mojo Variants: Rare Variants analysis","Description":"A package for analyzing associations between rare variants and\n quantitative traits using CMC (the combined multivariate and\n collapsing method).","Published":"2013-02-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mokken","Version":"2.8.5","Title":"Perform Mokken Scale Analysis in R","Description":"Contains functions for performing Mokken\n scale analysis on test and questionnaire data. It includes an automated\n item selection algorithm, and various checks of model assumptions.","Published":"2017-03-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"moko","Version":"1.0.0","Title":"Multi-Objective Kriging Optimization","Description":"Multi-Objective optimization based on the Kriging metamodel.\n Important functions: mkm, VMPF, MEGO and HEGO.","Published":"2016-09-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"molaR","Version":"4.2","Title":"Dental Surface Complexity Measurement Tools","Description":"Surface topography calculations of Dirichlet's normal energy,\n relief index, surface slope, and orientation patch count for teeth using scans of\n enamel caps.\n Importantly, for the relief index and orientation patch count calculations to\n work, the scanned tooth files must be oriented with the occlusal plane parallel\n to the x and y axes, and perpendicular to the z axis. The files should also be\n simplified, and smoothed in some other software prior to uploading into R.","Published":"2016-08-31","License":"ACM","snapshot_date":"2017-06-23"} {"Package":"mombf","Version":"1.9.5","Title":"Moment and Inverse Moment Bayes Factors","Description":"Model selection and parameter estimation based on non-local and Zellner priors. 
Bayes factors, marginal densities and variable selection in regression setups. Routines to sample, evaluate prior densities, distribution functions and quantiles are included.","Published":"2017-05-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"momentchi2","Version":"0.1.5","Title":"Moment-Matching Methods for Weighted Sums of Chi-Squared Random\nVariables","Description":"A collection of moment-matching methods for computing the cumulative distribution function of a positively-weighted sum of chi-squared random variables. Methods include the Satterthwaite-Welch method, Hall-Buckley-Eagleson method, Wood's F method, and the Lindsay-Pilla-Basak method.","Published":"2016-09-26","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"moments","Version":"0.14","Title":"Moments, cumulants, skewness, kurtosis and related tests","Description":"Functions to calculate: moments, Pearson's kurtosis,\n Geary's kurtosis and skewness; tests related to them\n (Anscombe-Glynn, D'Agostino, Bonett-Seier).","Published":"2015-01-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"momentuHMM","Version":"1.0.0","Title":"Maximum Likelihood Analysis of Animal Movement Behavior Using\nMultivariate Hidden Markov Models","Description":"Extended tools for analyzing telemetry data using (multivariate) hidden Markov models. These include processing of tracking data, fitting HMMs to location and auxiliary biotelemetry or environmental data, multiple imputation for incorporating location measurement error and missing data, visualization of data and fitted model, decoding of the state process...","Published":"2017-06-16","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"Momocs","Version":"1.1.6","Title":"Morphometrics using R","Description":"A complete toolkit for morphometrics, from data\n extraction to multivariate analyses. 
Most common 2D\n morphometrics approaches are included: outlines, open\n outlines, configurations of landmarks, traditional\n morphometrics, and facilities for data preparation,\n manipulation and visualization with a consistent grammar\n throughout. Momocs allows reproducible, complex\n morphometric analyses, paves the way for a pure\n open-source workflow in R, and other morphometrics\n approaches should be easy to plug in, or develop from, on\n top of this canvas.","Published":"2017-04-17","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"momr","Version":"1.1","Title":"Mining Metaomics Data (MetaOMineR)","Description":"The 'MetaOMineR' suite is a set of R packages that offers many functions and modules needed for the analyses \n of quantitative metagenomics data. 'momr' is the core package and contains routines for biomarker identification and exploration.\n Developed since the beginning of the field, 'momr' has evolved and is structured around the different modules \n such as preprocessing, analysis, visualisation, etc. See package help for more information.","Published":"2015-07-27","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"mondate","Version":"0.10.01.02","Title":"Keep track of dates in terms of months","Description":"Keep track of dates in terms of months.\n Model dates as at close of business.\n Perform date arithmetic in units of \"months\" and \"years\" (multiples of months).\n Allow \"infinite\" dates to model \"ultimate\" time spans.","Published":"2013-07-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Mondrian","Version":"1.0-0","Title":"A Simple Graphical Representation of the Relative Occurrence and\nCo-Occurrence of Events","Description":"The sole function of this package represents in a single graph the relative occurrence and co-occurrence of events measured in a sample. 
\n As examples, the package was applied to describe the occurrence and co-occurrence of different species of bacterial or viral symbionts infecting arthropods at the individual level. The graphic allows one to determine the prevalence of each symbiont and the patterns of multiple infections (i.e. whether or not different symbionts share the same individual hosts). \n We named the package after the famous painter as the graphical output recalls Mondrian’s paintings.","Published":"2016-03-04","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"MonetDB.R","Version":"1.0.1","Title":"Connect MonetDB to R","Description":"Allows pulling data from MonetDB into R. Includes a DBI implementation and a dplyr backend.","Published":"2016-03-21","License":"MPL (== 2.0)","snapshot_date":"2017-06-23"} {"Package":"MonetDBLite","Version":"0.3.1","Title":"In-Process Version of MonetDB for R","Description":"An in-process version of MonetDB, a relational database focused on analytical tasks. Similar to SQLite, the database runs entirely inside the R shell, with the main difference that queries complete much faster thanks to MonetDB's columnar architecture.","Published":"2016-06-17","License":"MPL (== 2.0)","snapshot_date":"2017-06-23"} {"Package":"mongolite","Version":"1.2","Title":"Fast and Simple 'MongoDB' Client for R","Description":"High-performance 'MongoDB' client based on 'libmongoc' and 'jsonlite'.\n Includes support for aggregation, indexing, map-reduce, streaming, encryption,\n enterprise authentication. The online user manual provides an overview of the \n available methods in the package: .","Published":"2017-04-11","License":"Apache License 2.0","snapshot_date":"2017-06-23"} {"Package":"monitoR","Version":"1.0.5","Title":"Acoustic Template Detection in R","Description":"Acoustic template detection and monitoring database interface. Create, modify, save, and use templates for detection of animal vocalizations. View, verify, and extract results. 
Upload a MySQL schema to an existing instance, manage survey metadata, write and read templates and detections locally or to the database. ","Published":"2017-02-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"monkeylearn","Version":"0.1.3","Title":"Accesses the Monkeylearn API for Text Classifiers and Extractors","Description":"Allows using some services of Monkeylearn, which is\n a cloud-based machine learning platform for text analysis (classification and extraction).","Published":"2017-05-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"monmlp","Version":"1.1.4","Title":"Monotone Multi-Layer Perceptron Neural Network","Description":"Train and make predictions from a multi-layer perceptron neural\n network with optional partial monotonicity constraints.","Published":"2017-03-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"monogeneaGM","Version":"1.1","Title":"Geometric Morphometric Analysis of Monogenean Anchors","Description":"Geometric morphometric and evolutionary biology analyses of anchor shape from four-anchored monogeneans. ","Published":"2016-02-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"monographaR","Version":"1.2.0","Title":"Taxonomic Monographs Tools","Description":"Contains functions intended to facilitate the production of plant taxonomic monographs. The package includes functions to convert tables into taxonomic descriptions and lists of collectors and examined specimens, and can generate a monograph skeleton. Additionally, wrapper functions to batch the production of phenology charts and distributional and diversity maps are also available. 
","Published":"2016-07-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MonoInc","Version":"1.1","Title":"Monotonic Increasing","Description":"Various imputation methods are utilized in this package, where one can flag and impute non-monotonic data that is outside of a prespecified range.","Published":"2016-05-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"monomvn","Version":"1.9-7","Title":"Estimation for Multivariate Normal and Student-t Data with\nMonotone Missingness","Description":"Estimation of multivariate normal and student-t data of \n arbitrary dimension where the pattern of missing data is monotone.\n Through the use of parsimonious/shrinkage regressions \n (plsr, pcr, lasso, ridge, etc.), where standard regressions fail, \n the package can handle a nearly arbitrary amount of missing data. \n The current version supports maximum likelihood inference and \n\t a full Bayesian approach employing scale-mixtures for Gibbs sampling.\n\t Monotone data augmentation extends this \n\t Bayesian approach to arbitrary missingness patterns. \n\t A fully functional standalone interface to the Bayesian lasso \n\t (from Park & Casella), Normal-Gamma (from Griffin & Brown),\n Horseshoe (from Carvalho, Polson, & Scott), and ridge regression \n with model selection via Reversible Jump, and student-t errors \n (from Geweke) is also provided.","Published":"2017-01-08","License":"LGPL","snapshot_date":"2017-06-23"} {"Package":"MonoPhy","Version":"1.2","Title":"Allows to Explore Monophyly (or Lack of it) of Taxonomic Groups\nin a Phylogeny","Description":"Requires a rooted, resolved phylogeny as input and creates a table of genera, their monophyly status, which taxa cause problems in monophyly, etc. 
Different information can be extracted from the output and a plot function allows visualization of the results in a number of ways.","Published":"2016-07-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"MonoPoly","Version":"0.3-8","Title":"Functions to Fit Monotone Polynomials","Description":"Functions for fitting monotone polynomials to data.","Published":"2016-04-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"monreg","Version":"0.1.3","Title":"Nonparametric Monotone Regression","Description":"Estimates monotone regression and variance functions in a\n nonparametric model.","Published":"2015-03-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MonteCarlo","Version":"1.0.2","Title":"Automatic Parallelized Monte Carlo Simulations","Description":"Simplifies Monte Carlo simulation studies by automatically \n setting up loops to run over parameter grids and parallelising\n the Monte Carlo repetitions. It also generates LaTeX tables.","Published":"2017-04-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"moonBook","Version":"0.1.3","Title":"Functions and Datasets for the Book by Keon-Woong Moon","Description":"Several analysis-related functions for the book entitled \"R\n statistics and graph for medical articles\" (written in Korean), version 1,\n by Keon-Woong Moon, together with Korean demographic data and several plot\n functions.","Published":"2015-02-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"moonsun","Version":"0.1.3","Title":"Basic astronomical calculations with R","Description":"A collection of basic astronomical routines for R based on\n \"Practical astronomy with your calculator\" by Peter\n Duffett-Smith.","Published":"2013-12-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"mopa","Version":"1.0.0","Title":"Species Distribution MOdeling with Pseudo-Absences","Description":"Tools for transferable species distribution modeling and pseudo-absence \n data generation 
allowing the straightforward design of relatively complex experiments \n with multiple factors affecting the uncertainty (variability) of SDM outputs \n (pseudo-absence sample, climate projection, modeling algorithm, etc.), and the \n quantification of the contribution of different factors to the final variability \n following the method described in Deque et al. (2010) . \n Multiple methods for pseudo-absence data generation can be applied, including the novel \n Three-step method as described in Iturbide et al. (2015) .\n Additionally, a function for niche overlap calculation is provided, considering the metrics \n described in Warren et al. (2008) <10.1111/j.1558-5646.2008.00482.x> and in\n Pianka (1973) <10.1146/annurev.es.04.110173.000413>.","Published":"2017-06-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"mopsocd","Version":"0.5.1","Title":"MOPSOCD: Multi-objective Particle Swarm Optimization with\nCrowding Distance","Description":"A multi-objective optimization solver based on particle\n swarm optimization with crowding distance.","Published":"2013-06-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MOQA","Version":"2.0.0","Title":"Basic Quality Data Assurance for Epidemiological Research","Description":"With the provision of several tools and templates the MOSAIC project (DFG-Grant Number HO 1937/2-1) supports the implementation of a central data management in epidemiological research projects. The 'MOQA' package enables epidemiologists with little or no experience in R to generate basic data quality reports for a wide range of application scenarios. See for more information. Please read and cite the corresponding open access publication (using the former package name) in METHODS OF INFORMATION IN MEDICINE by M. Bialke, H. Rau, T. Schwaneberg, R. Walk, T. Bahls and W. Hoffmann (2017) . 
","Published":"2017-06-22","License":"AGPL-3","snapshot_date":"2017-06-23"} {"Package":"MOrder","Version":"0.1","Title":"Check Time Homogeneity and Markov Chain Order","Description":"MOrder provides functions to check the time homogeneity and order\n of a Markov chain using the chi-squared test, AIC and BIC values.","Published":"2014-12-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"morgenstemning","Version":"1.0","Title":"Color schemes compatible with red-green color perception\ndifficulties","Description":"This package is a port to R of the MATLAB colourmap functions\n accompanying the paper M. Geissbuehler and T. Lasser, \"How to display data\n by color schemes compatible with red-green color perception deficiencies,\"\n Opt. Express 21, 9862-9874 (2013).","Published":"2014-02-28","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"Morpho","Version":"2.5.1","Title":"Calculations and Visualisations Related to Geometric\nMorphometrics","Description":"A toolset for Geometric Morphometrics and mesh processing. This\n includes (among other things) mesh deformations based on reference points,\n permutation tests, detection of outliers, processing of sliding\n semi-landmarks and semi-automated surface landmark placement.","Published":"2017-04-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"morse","Version":"2.2.0","Title":"MOdelling Tools for Reproduction and Survival Data in\nEcotoxicology","Description":"Tools for ecotoxicologists and regulators dedicated to the\n mathematical and statistical modelling of bioassay data. 
They use advanced and\n innovative methods for quantitative environmental risk assessment.","Published":"2016-06-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MorseGen","Version":"1.2","Title":"Simple raw data generator based on user-specified summary\nstatistics","Description":"MorseGen is a program for generating raw data based on\n user-specified summary (descriptive) statistics. Samples based\n on the supplied statistics are drawn from a normal distribution\n (or, in some cases, an exponential distribution) and scaled to\n match the desired descriptive statistics. Intended uses include\n creating raw data that fits desired characteristics or\n replicating the results of a published study.","Published":"2012-06-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MortalitySmooth","Version":"2.3.4","Title":"Smoothing and Forecasting Poisson Counts with P-Splines","Description":"Smoothing one- and two-dimensional Poisson counts with\n P-splines specifically tailored to mortality data. \n Extra-Poisson variation can be accounted for, as can forecasting.\n Includes a collection of mortality data and a specific function for\n\t selecting those data by country, sex, age and years. ","Published":"2015-02-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mosaic","Version":"0.14.4","Title":"Project MOSAIC Statistics and Mathematics Teaching Utilities","Description":"Data sets and utilities from Project MOSAIC (http://mosaic-web.org) used\n to teach mathematics, statistics, computation and modeling. 
Funded by the\n NSF, Project MOSAIC is a community of educators working to tie together\n aspects of quantitative work that students in science, technology,\n engineering and mathematics will need in their professional lives, but\n which are usually taught in isolation, if at all.","Published":"2016-07-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mosaicData","Version":"0.14.0","Title":"Project MOSAIC Data Sets","Description":"Data sets from Project MOSAIC (http://mosaic-web.org) used\n to teach mathematics, statistics, computation and modeling. Funded by the\n NSF, Project MOSAIC is a community of educators working to tie together\n aspects of quantitative work that students in science, technology,\n engineering and mathematics will need in their professional lives, but\n which are usually taught in isolation, if at all.","Published":"2016-06-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MoTBFs","Version":"1.0","Title":"Learning Hybrid Bayesian Networks using Mixtures of Truncated\nBasis Functions","Description":"Learning, manipulation and evaluation of mixtures of truncated basis functions \n (MoTBFs), which include mixtures of polynomials (MOPs) and mixtures of truncated \n exponentials (MTEs). MoTBFs are a flexible framework for modelling hybrid Bayesian\n networks. The package provides functionality for learning univariate, multivariate and\n conditional densities, with the possibility of incorporating prior knowledge. Structural\n learning of hybrid Bayesian networks is also provided. A set of useful tools is provided,\n including plotting, printing and likelihood evaluation. 
This package makes use of S3 \n objects, with two new classes called 'motbf' and 'jointmotbf'.","Published":"2015-09-28","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"MotilityLab","Version":"0.2-5","Title":"Quantitative Analysis of Motion","Description":"Statistics to quantify tracks of moving things (x-y-z-t data),\n such as cells, bacteria or animals. Available measures include mean square\n displacement, confinement ratio, autocorrelation, straightness, turning angle,\n and fractal dimension.","Published":"2016-11-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"moult","Version":"2.0.0","Title":"Models for Analysing Moult in Birds","Description":"Functions to estimate start and duration of moult from moult \n data, based on models developed in Underhill \n and Zucchini (1988, 1990). ","Published":"2016-05-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"mountainplot","Version":"1.1","Title":"Mountain Plots, Folded Empirical Cumulative Distribution Plots","Description":"Lattice functions for drawing folded empirical cumulative\n distribution plots, or mountain plots. A mountain plot is similar\n to an empirical CDF plot, except that the curve increases from\n 0 to 0.5, then decreases from 0.5 to 1 using an inverted scale at\n the right side. See: Monti (1995), Folded empirical distribution\n function curves-mountain plots. The American Statistician, 49, 342-345.","Published":"2015-07-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"mousetrack","Version":"1.0.0","Title":"Mouse-Tracking Measures from Trajectory Data","Description":"Extract from two-dimensional x-y coordinates of an arm-reaching trajectory, several dependent measures such as area under the curve, latency to start the movement, x-flips, etc.; which characterize the action-dynamics of the response. Mainly developed to analyze data coming from mouse-tracking experiments. 
","Published":"2015-01-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mousetrap","Version":"3.1.0","Title":"Process and Analyze Mouse-Tracking Data","Description":"Mouse-tracking, the analysis of mouse movements in computerized\n experiments, is a method that is becoming increasingly popular in the\n cognitive sciences. The mousetrap package offers functions for importing,\n preprocessing, analyzing, aggregating, and visualizing mouse-tracking data.","Published":"2017-05-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"move","Version":"2.1.0","Title":"Visualizing and Analyzing Animal Track Data","Description":"Contains functions to access movement data stored in 'movebank.org'\n as well as tools to visualize and statistically analyze animal movement data,\n among others, functions to calculate dynamic Brownian Bridge Movement Models.\n Move helps address movement ecology questions.","Published":"2016-08-23","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"moveHMM","Version":"1.4","Title":"Animal Movement Modelling using Hidden Markov Models","Description":"Provides tools for animal movement modelling using hidden Markov\n models. These include processing of tracking data, fitting hidden Markov models\n to movement data, visualization of data and fitted model, decoding of the state\n process...","Published":"2017-04-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"moveVis","Version":"0.9.1","Title":"Movement Data Visualization","Description":"Tools to visualize movement data of any kind, e.g. 
by creating path animations from GPS point data.","Published":"2017-05-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"moveWindSpeed","Version":"0.2.1","Title":"Estimate Wind Speeds from Bird Trajectories","Description":"Estimates wind speeds from trajectories of individually tracked birds using a maximum likelihood approach.","Published":"2017-02-02","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"movMF","Version":"0.2-1","Title":"Mixtures of von Mises-Fisher Distributions","Description":"Fit and simulate mixtures of von Mises-Fisher distributions.","Published":"2017-02-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"mp","Version":"0.4.1","Title":"Multidimensional Projection Techniques","Description":"Multidimensional projection techniques are used to create two\n dimensional representations of multidimensional data sets.","Published":"2016-08-15","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"mpa","Version":"0.7.3","Title":"CoWords Method","Description":"CoWords Method","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MPAgenomics","Version":"1.1.2","Title":"Multi-Patient Analysis of Genomic Markers","Description":"Preprocessing and analysis of genomic data. MPAgenomics\n provides wrappers from commonly used packages to streamline their repeated\n manipulation, offering an easy-to-use pipeline. The segmentation of\n successive multiple profiles is performed with an automatic choice of\n parameters involved in the wrapped packages. Considering multiple profiles\n at the same time, MPAgenomics wraps efficient penalized regression methods\n to select relevant markers associated with a given outcome.","Published":"2014-11-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mpath","Version":"0.2-4","Title":"Regularized Linear Models","Description":"Algorithms for fitting model-based penalized coefficient paths. 
Currently the models include penalized Poisson, negative binomial, zero-inflated Poisson and zero-inflated negative binomial regression models. The penalties include the least absolute shrinkage and selection operator (LASSO), smoothly clipped absolute deviation (SCAD) and minimax concave penalty (MCP), each possibly combined with an L_2 penalty.","Published":"2016-08-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"mpbart","Version":"0.2","Title":"Multinomial Probit Bayesian Additive Regression Trees","Description":"Fits Multinomial Probit Bayesian Additive Regression Trees.","Published":"2016-02-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MPCI","Version":"1.0.7","Title":"Multivariate Process Capability Indices (MPCI)","Description":"Computes the following Multivariate Process Capability Indices: Shahriari et al. (1995) Multivariate Capability Vector, Taam et al. (1993) Multivariate Capability Index (MCpm), the Pan and Lee (2010) proposal (NMCpm), and the following based on Principal Component Analysis (PCA): Wang and Chen (1998), Xekalaki and Perakis (2002) and Wang (2005). Two datasets are included. ","Published":"2015-10-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"mpcv","Version":"1.1","Title":"Multivariate Process Capability Vector","Description":"Multivariate process capability analysis using the multivariate process capability vector. 
Allows analysis of a multivariate process whose quality characteristics may be normally or non-normally distributed, and dependent or independent.","Published":"2014-10-09","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"MPDiR","Version":"0.1-16","Title":"Data sets and scripts for Modeling Psychophysical Data in R","Description":"Data sets and scripts for Modeling Psychophysical Data in R (Springer).","Published":"2014-12-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"mpe","Version":"1.0","Title":"Multiple Primary Endpoints","Description":"Functions for calculating sample size and power for clinical trials\n with multiple (co-)primary endpoints.","Published":"2017-02-02","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"mph","Version":"0.9","Title":"Multiscale persistent homology","Description":"A fast multiscale approach to computing approximate persistent homology.","Published":"2014-01-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MPINet","Version":"1.0","Title":"Network-Based Metabolite Pathway\nIdentification","Description":"(1) Our system provides a network-based strategy for metabolite pathway identification. (2) MPINet supports the identification of pathways using a Hypergeometric test based on metabolite sets. (3) MPINet supports pathways from multiple databases.","Published":"2013-07-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MPLikelihoodWB","Version":"1.0","Title":"Modified Profile Likelihood Estimation for Weibull Shape and\nRegression Parameters","Description":"Computing modified profile likelihood estimates for Weibull Shape and Regression Parameters. 
Modified likelihood estimates are provided.","Published":"2016-01-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"mplot","Version":"0.7.9","Title":"Graphical Model Stability and Variable Selection Procedures","Description":"Model stability and variable inclusion plots [Mueller and Welsh\n (2010, ); Murray, Heritier and Mueller\n (2013, )] as well as the adaptive fence [Jiang et al.\n (2008, ); Jiang et al. \n (2009, )] for linear and generalised linear models.","Published":"2016-08-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MplusAutomation","Version":"0.6-4","Title":"Automating Mplus Model Estimation and Interpretation","Description":"Leverages the R language to automate latent variable model estimation\n\tand interpretation using Mplus, a powerful latent variable modeling program \n\tdeveloped by Muthen and Muthen (www.statmodel.com). Specifically, this package\n provides routines for creating related groups of models, running batches of\n models, and extracting and tabulating model parameters and fit statistics.","Published":"2016-06-09","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"mpm","Version":"1.0-22","Title":"Multivariate Projection Methods","Description":"Exploratory graphical analysis of multivariate data,\n specifically gene expression data with different projection\n methods: principal component analysis, correspondence analysis,\n spectral map analysis.","Published":"2011-11-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mpMap","Version":"1.14","Title":"Multi-parent RIL genetic analysis","Description":"Tools for constructing linkage maps, reconstructing\n haplotypes, estimating linkage disequilibrium and QTL mapping\n in multi-parent RIL designs (e.g. 
MAGIC)","Published":"2012-06-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"mpmcorrelogram","Version":"0.1-3","Title":"Multivariate Partial Mantel Correlogram","Description":"Functions to compute and plot multivariate (partial)\n Mantel correlograms.","Published":"2013-03-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mpmi","Version":"0.42","Title":"Mixed-Pair Mutual Information Estimators","Description":"Uses a kernel smoothing approach to calculate Mutual Information\n for comparisons between all types of variables including continuous vs\n continuous, continuous vs discrete and discrete vs discrete. Uses a\n nonparametric bias correction giving Bias Corrected Mutual Information\n (BCMI). Implemented efficiently in Fortran 95 with OpenMP and suited to\n large genomic datasets. ","Published":"2016-11-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"mpoly","Version":"1.0.5","Title":"Symbolic Computation and More with Multivariate Polynomials","Description":"Symbolic computing with multivariate polynomials in R.","Published":"2017-06-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Mposterior","Version":"0.1.2","Title":"Mposterior: R package for Robust and Scalable Bayes via a Median\nof Subset Posterior Measures","Description":"Mposterior package provides a general framework for estimating a\n median of subset posterior measures (M-posterior). Each subset posterior measure\n is represented as a matrix of posterior samples; rows represent the atoms and \n columns index the dimensions. All subset measures are represented as list of \n matrices; length of the list equals the number of subsets (or machines). The \n distance between subset posterior measures and M-posterior is measured using the RBF kernel.\n M-posterior is represented as a weighted combination of empirical measures based on \n subsets measures. 
These weights are estimated using the Weiszfeld algorithm implemented\n in this package.","Published":"2014-06-02","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"mppa","Version":"1.0","Title":"Statistics for analysing multiple simultaneous point processes\non the real line","Description":"A procedure to test for dependence between point processes on the real line, e.g. causal dependence, correlation, inhibition or anti-correlation. The package also provides a number of utilities for plotting simultaneous point processes, and combining p-values.","Published":"2014-08-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mpr","Version":"1.0.4","Title":"Multi-Parameter Regression (MPR)","Description":"Package for fitting Multi-Parameter Regression (MPR) models to right-censored survival data. These are flexible parametric regression models which extend standard models, for example, proportional hazards.","Published":"2016-10-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"MPSEM","Version":"0.3-1","Title":"Modeling Phylogenetic Signals using Eigenvector Maps","Description":"Computational tools to represent phylogenetic signals using adapted eigenvector maps.","Published":"2015-12-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mpt","Version":"0.5-4","Title":"Multinomial Processing Tree Models","Description":"Fitting and testing multinomial processing tree (MPT) models, a\n class of statistical models for categorical data. 
The parameters are the\n link probabilities of a tree-like graph and represent the latent cognitive\n processing steps executed to arrive at observable response categories\n (Batchelder & Riefer, 1999 ; Erdfelder et al., 2009\n ; Riefer & Batchelder, 1988\n ).","Published":"2016-09-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MPTinR","Version":"1.10.3","Title":"Analyze Multinomial Processing Tree Models","Description":"Provides a user-friendly way for the analysis of multinomial processing tree (MPT) models (e.g., Riefer, D. M., and Batchelder, W. H. [1988]. Multinomial modeling and the measurement of cognitive processes. Psychological Review, 95, 318-339) for single and multiple datasets. The main functions perform model fitting and model selection. Model selection can be done using AIC, BIC, or the Fisher Information Approximation (FIA) a measure based on the Minimum Description Length (MDL) framework. The model and restrictions can be specified in external files or within an R script in an intuitive syntax or using the context-free language for MPTs. The 'classical' .EQN file format for model files is also supported. Besides MPTs, this package can fit a wide variety of other cognitive models such as SDT models (see fit.model). It also supports multicore fitting and FIA calculation (using the snowfall package), can generate or bootstrap data for simulations, and plot predicted versus observed data.","Published":"2015-07-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mptools","Version":"1.0.1","Title":"RAMAS Metapop Tools","Description":"'RAMAS Metapop' is a \n popular software package for performing spatially-explicit population \n viability analysis. It is primarily GUI-driven, but can benefit from \n integration into an R workflow, wherein model results can be subjected to \n further analysis. 
'RAMAS Metapop' stores metapopulation model parameter \n settings and population dynamics simulation results in plain text files \n (.mp files). This package facilitates access, summary and visualisation of \n 'RAMAS Metapop 5' outputs in order to better integrate 'RAMAS' analyses into \n an R workflow.","Published":"2016-02-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"MPV","Version":"1.38","Title":"Data Sets from Montgomery, Peck and Vining's Book","Description":"Most of this package consists of data sets from the \n textbook Introduction\n to Linear Regression Analysis (3rd ed), by Montgomery et al.\n Some additional data sets and functions useful in an\n undergraduate regression course are included.","Published":"2015-04-12","License":"Unlimited","snapshot_date":"2017-06-23"} {"Package":"mQTL","Version":"1.0","Title":"Metabolomic Quantitative Trait Locus Mapping","Description":"mQTL provides a complete QTL analysis pipeline for metabolomic data. \n Distinctive features include normalisation using PQN approach, peak alignment \n using RSPA approach, dimensionality reduction using SRV approach and finally \n QTL mapping using R/qtl package.","Published":"2013-10-21","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"mratios","Version":"1.3.17","Title":"Inferences for ratios of coefficients in the general linear\nmodel","Description":"With this package, it is possible to perform\n (simultaneous) inferences for ratios of linear combinations of\n coefficients in the general linear model. In particular, tests\n and confidence interval estimations for ratios of treatment\n means in the normal one-way layout and confidence interval\n estimations like in (multiple) slope ratio and parallel line\n assays can be carried out. Moreover, it is possible to\n calculate the sample sizes required in comparisons with a\n control based on relative margins. 
For the simple two-sample\n problem, functions for a t-test for ratio-formatted hypotheses\n and the corresponding Fieller-type confidence interval are\n provided assuming homogeneous or heterogeneous group variances.","Published":"2012-11-02","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"mrbsizeR","Version":"1.0.1","Title":"Scale Space Multiresolution Analysis of Random Signals","Description":"A method for the multiresolution analysis of spatial fields and images to capture scale-dependent features. mrbsizeR is based on scale space smoothing and uses differences of smooths at neighbouring scales for finding features on different scales. To infer which of the captured features are credible, Bayesian analysis is used.\n The scale space multiresolution analysis has three steps: (1) Bayesian signal reconstruction. (2) Using differences of smooths, scale-dependent features of the reconstructed signal can be found. (3) Posterior credibility analysis of the differences of smooths created. The method has first been proposed by Holmstrom, Pasanen, Furrer, Sain (2011) .","Published":"2017-04-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MRCE","Version":"2.1","Title":"Multivariate Regression with Covariance Estimation","Description":"Compute and select tuning parameters for the MRCE estimator proposed by Rothman, Levina, and Zhu (2010) . This estimator fits the multiple output linear regression model with a sparse estimator of the error precision matrix and a sparse estimator of the regression coefficient matrix.","Published":"2017-01-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"mRchmadness","Version":"1.0.0","Title":"Numerical Tools for Filling Out an NCAA Basketball Tournament\nBracket","Description":"Scrape season results, estimate win probabilities, and find a\n competitive bracket for your office pool. 
Additional utilities include:\n scraping population picks; simulating tournament results; and testing your\n bracket in simulation.","Published":"2017-04-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MRCV","Version":"0.3-3","Title":"Methods for Analyzing Multiple Response Categorical Variables\n(MRCVs)","Description":"The MRCV package provides functions for analyzing the association between\n one single response categorical variable (SRCV) and one multiple response\n categorical variable (MRCV), or between two or three MRCVs. A modified Pearson\n chi-square statistic can be used to test for marginal independence for the one or\n two MRCV case, or a more general loglinear modeling approach can be used to examine\n various other structures of association for the two or three MRCV case. Bootstrap-\n and asymptotic-based standardized residuals and model-predicted odds ratios are\n available, in addition to other descriptive information.","Published":"2014-09-04","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"mrds","Version":"2.1.17","Title":"Mark-Recapture Distance Sampling","Description":"Animal abundance estimation via conventional, multiple covariate\n and mark-recapture distance sampling (CDS/MCDS/MRDS). Detection function\n fitting is performed via maximum likelihood. Also included are diagnostics\n and plotting for fitted detection functions. 
Abundance estimation is via a\n Horvitz-Thompson-like estimator.","Published":"2016-10-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mreg","Version":"1.1","Title":"Fits regression models when the outcome is partially missing","Description":"Implements the methods described in Bond S, Farewell V, 2006, Exact Likelihood Estimation for a Negative Binomial Regression Model with Missing Outcomes, Biometrics.","Published":"2013-11-13","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"mregions","Version":"0.1.4","Title":"Marine Regions Data from 'Marineregions.org'","Description":"Tools to get marine regions data from . Includes tools to get region metadata, as well as\n data in 'GeoJSON' format and Shape files. Use cases include using \n data downstream to visualize 'geospatial' data by marine region, mapping \n variation among different regions, and more.","Published":"2016-12-08","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"mrfDepth","Version":"1.0.4","Title":"Depth Measures in Multivariate, Regression and Functional\nSettings","Description":"Tools to compute depth measures and implementations of related \n tasks such as outlier detection, data exploration and \n classification of multivariate, regression and functional data.","Published":"2017-06-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mrgsolve","Version":"0.8.6","Title":"Simulate from ODE-Based Population PK/PD and Systems\nPharmacology Models","Description":"Facilitates simulation from hierarchical, ordinary\n differential equation (ODE) based models typically employed in drug development.\n A model specification file is created consisting of R and C++ code that\n is parsed, compiled, and dynamically loaded into the R session. Input data are\n passed in and simulated data are returned as R objects. A dosing event engine\n allows interventions (bolus and infusion) to be managed separately from the \n model code. 
Differential equations are solved with the 'DLSODA' routine \n in 'ODEPACK' (). ","Published":"2017-03-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MRH","Version":"2.2","Title":"Multi-Resolution Estimation of the Hazard Rate","Description":"Used on survival data to jointly estimate the hazard rate and the effects of covariates on failure times. Can accommodate covariates under the proportional and non-proportional hazards setting, and is ideal for analysis of survival data with long-term follow-up.","Published":"2016-02-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"mri","Version":"0.1.1","Title":"Modified Rand Index (1 and 2.1 and 2.2) and Modified Adjusted\nRand Index (1 and 2.1 and 2.2)","Description":"Provides three Modified Rand Indices and three Modified Adjusted Rand Indices for comparing two partitions, which are usually obtained on two different sets of units, where one set is a subset of the other. Splitting and merging of clusters have different effects on the values of the indices.","Published":"2015-07-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mritc","Version":"0.5-0","Title":"MRI Tissue Classification","Description":"Various methods for MRI tissue classification.","Published":"2015-01-05","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"mRm","Version":"1.1.6","Title":"An R Package for Conditional Maximum Likelihood Estimation in\nMixed Rasch Models","Description":"Conditional maximum likelihood estimation via the EM algorithm and information-criterion-based model selection in binary mixed Rasch models.","Published":"2016-12-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"mrMLM","Version":"2.1","Title":"Multi-Locus Random-SNP-Effect Mixed Linear Model for Genome-Wide\nAssociation Studies and Linkage Analyses","Description":"Conduct multi-locus GWAS and multi-locus QTL mapping under the framework of random-SNP-effect mixed linear model (mrMLM). 
First, each position (or marker) on the genome is scanned by the mrMLM algorithm. Bonferroni correction is replaced by a less stringent selection criterion for the significance test. Then, all the markers (or QTL) that are potentially associated with the trait are included in a multi-locus model; their effects are estimated by empirical Bayes, and true QTNs or QTLs are identified by a likelihood ratio test.","Published":"2017-02-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MRMR","Version":"0.1.4","Title":"Multivariate Regression Models for Reserving","Description":"Non-life runoff reserves may be analyzed using linear models. This generalizes the special cases of multiplicative chain ladder and\n the additive model. In addition, the package provides visual and statistical diagnostics to assess the quality of modeled link ratios.","Published":"2016-07-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mRMRe","Version":"2.0.5","Title":"R package for parallelized mRMR ensemble feature selection","Description":"This package contains a set of functions to compute mutual information matrices from continuous, categorical and survival variables. 
It also contains functions to perform feature selection with mRMR and a new ensemble mRMR technique.","Published":"2015-03-21","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"mro","Version":"0.1.1","Title":"Multiple Correlation","Description":"Computes the multiple correlation coefficient from a given data matrix and tests its significance.","Published":"2017-04-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MRQoL","Version":"1.0","Title":"Minimal Clinically Important Difference and Response Shift\nEffect for Health-Related Quality of Life","Description":"This package can be used to directly calculate the Minimal Clinically Important Difference by applying the Anchor-based method, and the Response Shift effect by applying the Then-Test method.","Published":"2015-07-31","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"MRS","Version":"1.2.1","Title":"Multi-Resolution Scanning for Cross-Sample Differences","Description":"An implementation of the MRS algorithm for comparison across distributions. \n The model is based on a nonparametric process taking the form of a Markov model \n that transitions between a \"null\" and an \"alternative\" state \n on a multi-resolution partition tree of the sample space. \n MRS effectively detects and characterizes a variety of underlying differences. 
\n These differences can be visualized using several plotting functions.","Published":"2016-07-21","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"MRSP","Version":"0.4.3","Title":"Multinomial Response Models with Structured Penalties","Description":"Fits regularized multinomial response models using penalized loglikelihood methods with structured penalty terms.","Published":"2014-12-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MRsurv","Version":"0.2","Title":"A multiplicative-regression model for relative survival","Description":"This package contains functions, data and examples to compute a multiplicative-regression model for relative survival.","Published":"2013-10-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MRwarping","Version":"1.0","Title":"Multiresolution time warping for functional data","Description":"The Bayesian procedure starts with one warplet in the model and uses the posterior distributions as priors for a more extended model with one more warplet. 
The model is built by adding one warplet at a time and allows for amplitude variations.","Published":"2013-10-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ms.sev","Version":"1.0.4","Title":"Package for Calculation of ARMSS, Local MSSS and Global MSSS","Description":"Calculates ARMSS (age related multiple sclerosis severity), and both local and global MSSS (multiple sclerosis severity score).","Published":"2016-12-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"msaenet","Version":"2.6","Title":"Multi-Step Adaptive Estimation Methods for Sparse Regressions","Description":"Multi-step adaptive elastic-net (MSAENet) algorithm for\n feature selection in high-dimensional regressions proposed in\n Xiao and Xu (2015) ,\n with additional support for multi-step adaptive MCP-net\n (MSAMNet) and multi-step adaptive SCAD-net (MSASNet) methods.","Published":"2017-04-24","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"msaFACE","Version":"0.1.0","Title":"Moving Subset Analysis FACE","Description":"The new methodology \"moving subset analysis\" provides functions to investigate the effect of environmental conditions on the CO2 fertilization effect within long-term free air carbon enrichment (FACE) experiments. 
In general, the functionality is applicable to deriving the influence of a third variable (forcing experiment-support variable) on the relation between a dependent and an independent variable.","Published":"2016-11-08","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"msap","Version":"1.1.8","Title":"Statistical analysis for Methylation-sensitive Amplification\nPolymorphism data","Description":"Statistical Analyses of Methylation-sensitive Amplification Polymorphism (MSAP) assays.","Published":"2014-05-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"msaR","Version":"0.2.0","Title":"Multiple Sequence Alignment for R Shiny","Description":"Visualises multiple sequence alignments dynamically within the\n Shiny web application framework.","Published":"2017-02-24","License":"BSL-1.0","snapshot_date":"2017-06-23"} {"Package":"msarc","Version":"1.4.5","Title":"Draw Diagrams (mis)Representing the Results of Mass Spec\nExperiments","Description":"The output of an affinity-purification mass spectrometry\n experiment is typically a list of proteins that were observed\n in the experiment, identified by UniProt identifiers\n (http://www.uniprot.org/). This package takes as input a list of\n UniProt identifiers, and the associated Mascot scores from the\n experiment (which indicate the likelihood that the protein has\n been correctly identified), clusters them by gene ontology\n category (http://geneontology.org/), then draws diagrams\n showing the results in hierarchical clusters by category, with lines\n for individual proteins representing the associated Mascot score.\n The results are in general not publication-ready, but will rather\n require editing via a graphics editor that can interpret SVG (scalable\n vector graphics) format. 
As an alternative representation, the\n package will also generate tag clouds based on the Mascot scores.","Published":"2015-01-27","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"msBP","Version":"1.3","Title":"Multiscale Bernstein Polynomials for Densities","Description":"Performs Bayesian nonparametric multiscale density estimation and multiscale testing of group differences with multiscale Bernstein polynomials (msBP) mixtures as in Canale and Dunson (2016).","Published":"2017-06-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MSBVAR","Version":"0.9-3","Title":"Markov-Switching, Bayesian, Vector Autoregression Models","Description":"Provides methods for estimating frequentist and\n Bayesian Vector Autoregression (VAR) models and Markov-switching\n Bayesian VAR (MSBVAR). Functions for reduced\n form and structural VAR models are also available. Includes\n methods for generating posterior inferences for these models,\n forecasts, impulse responses (using likelihood-based error bands),\n and forecast error decompositions. Also includes utility functions\n for plotting forecasts and impulse responses, and generating draws\n from Wishart and singular multivariate normal densities. Current\n version includes functionality to build and evaluate models with\n Markov switching.","Published":"2016-11-15","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"MSCMT","Version":"1.2.0","Title":"Multivariate Synthetic Control Method Using Time Series","Description":"Multivariate Synthetic Control Method Using Time Series. \n Two generalizations of the synthetic control method (which already has an \n implementation in package 'Synth') are implemented: first, 'MSCMT' allows \n for using multiple outcome variables; second, time series can be supplied as \n economic predictors. 
\n Much effort has been put into making the implementation as stable as possible \n (including edge cases) without losing computational efficiency.","Published":"2017-01-30","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"MScombine","Version":"1.1","Title":"Combine Data from Positive and Negative Ionization Mode Finding\nCommon Entities","Description":"Find common entities detected in both positive and negative\n ionization mode, delete these entities in the less sensitive mode and combine both\n matrices.","Published":"2015-12-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"mscstexta4r","Version":"0.1.2","Title":"R Client for the Microsoft Cognitive Services Text Analytics\nREST API","Description":"R Client for the Microsoft Cognitive Services Text Analytics\n REST API, including Sentiment Analysis, Topic Detection, Language Detection,\n and Key Phrase Extraction. An account MUST be registered at the Microsoft\n Cognitive Services website \n in order to obtain a (free) API key. Without an API key, this package will\n not work properly.","Published":"2016-06-23","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"mscsweblm4r","Version":"0.1.2","Title":"R Client for the Microsoft Cognitive Services Web Language Model\nREST API","Description":"R Client for the Microsoft Cognitive Services Web Language Model\n REST API, including Break Into Words, Calculate Conditional\n Probability, Calculate Joint Probability, Generate Next Words, and List\n Available Models. A valid account MUST be registered at the Microsoft\n Cognitive Services website \n in order to obtain a (free) API key. 
Without an API key, this package will\n not work properly.","Published":"2016-06-15","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"msda","Version":"1.0.2","Title":"Multi-Class Sparse Discriminant Analysis","Description":"Efficient procedures for computing a new Multi-Class Sparse Discriminant Analysis method that estimates all discriminant directions simultaneously.","Published":"2015-02-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"mseapca","Version":"1.0","Title":"Metabolite set enrichment analysis for factor loading in\nprincipal component analysis","Description":"This package provides functions for metabolite set\n enrichment analysis (MSEA) and principal component analysis\n (PCA), and for converting metabolite set lists from your own csv\n files or KEGG's tar.gz files to XML documents. This package is\n suitable for computation of MSEA for factor loading in PCA.","Published":"2012-04-15","License":"BSD","snapshot_date":"2017-06-23"} {"Package":"MSeasy","Version":"5.3.3","Title":"Preprocessing of Gas Chromatography-Mass Spectrometry (GC-MS)\ndata","Description":"Package for the detection of molecules in complex mixtures\n of compounds. 
It creates an initial_DATA matrix from several\n GC-MS analyses by collecting and assembling the information\n from chromatograms and mass spectra (MS.DataCreation). It can\n read several formats (ASCII, CDF, mzML, mzXML or mzData). It\n tests for the best unsupervised clustering method to group\n similar mass spectra into molecules (MS.test.clust). It runs the\n optimal unsupervised clustering method on the initial_DATA\n matrix, identifies the optimal number of clusters, produces\n different files for facilitating the quality control and\n identification of putative molecules, and returns\n fingerprinting or profiling matrices (MS.clust). It converts\n output files from MS.clust for NIST mass spectral library\n search and ARISTO webtool search.","Published":"2013-03-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MSeasyTkGUI","Version":"5.3.3","Title":"MSeasy Tcl/Tk Graphical User Interface","Description":"A Tcl/Tk GUI for some basic functions in the MSeasy\n package.","Published":"2013-03-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MSG","Version":"0.3","Title":"Data and Functions for the Book Modern Statistical Graphics","Description":"A companion to the Chinese book ``Modern Statistical Graphics''.","Published":"2016-02-13","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"MSGARCH","Version":"0.17.7","Title":"Markov-Switching GARCH Models","Description":"The MSGARCH package offers methods to fit (by Maximum Likelihood or Bayesian), simulate, and forecast various Markov-Switching GARCH processes.","Published":"2017-01-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"msgl","Version":"2.3.6","Title":"High Dimensional Multiclass Classification Using Sparse Group\nLasso","Description":"Multinomial logistic regression with sparse group lasso\n penalty. Simultaneous feature selection and parameter\n estimation for classification. Suitable for high dimensional\n multiclass classification with many classes. 
The algorithm\n computes the sparse group lasso penalized maximum likelihood\n estimate. Use of parallel computing for cross validation and\n subsampling is supported through the 'foreach' and 'doParallel'\n packages. The development version is on GitHub; please report\n package issues on GitHub.","Published":"2017-04-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MSGLasso","Version":"2.1","Title":"Multivariate Sparse Group Lasso for the Multivariate Multiple\nLinear Regression with an Arbitrary Group Structure","Description":"For fitting multivariate response and multiple predictor linear regressions with an arbitrary group structure assigned on the regression coefficient matrix, using the multivariate sparse group lasso and the mixed coordinate descent algorithm.","Published":"2016-11-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"msgpackR","Version":"1.1","Title":"A library to serialize or unserialize data in MessagePack format","Description":"A library that serializes and unserializes data in the MessagePack format.","Published":"2013-11-22","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"msgps","Version":"1.3","Title":"Degrees of freedom of elastic net, adaptive lasso and\ngeneralized elastic net","Description":"This package computes the degrees of freedom of the lasso,\n elastic net, generalized elastic net and adaptive lasso based\n on the generalized path seeking algorithm. 
The optimal model\n can be selected by model selection criteria including Mallows'\n Cp, bias-corrected AIC (AICc), generalized cross validation\n (GCV) and BIC.","Published":"2012-05-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"msgtools","Version":"0.2.7","Title":"Tools for Developing Diagnostic Messages","Description":"A number of utilities for developing and maintaining error, warning,\n and other messages in R packages, including checking for consistency across\n messages, spell-checking messages, and building message translations into\n various languages for purposes of localization.","Published":"2017-03-02","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"msir","Version":"1.3.1","Title":"Model-Based Sliced Inverse Regression","Description":"An R package for dimension reduction based on finite Gaussian mixture modeling of inverse regression.","Published":"2016-04-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MSIseq","Version":"1.0.0","Title":"Assess Tumor Microsatellite Instability with a Decision Tree\nClassifier from Exome Somatic Mutations","Description":"A decision tree classifier for detecting microsatellite instability (MSI) in somatic mutation data from whole exome sequencing. MSI is detected based on different mutation rates in all sites as well as in simple sequence repeats. This mechanism can also be applied to sequence data of targeted gene panels with shorter sequence length.","Published":"2015-06-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"msltrend","Version":"1.0","Title":"Improved Techniques to Estimate Trend, Velocity and Acceleration\nfrom Sea Level Records","Description":"Analysis of annual average ocean water level time series\n from long (minimum length 80 years) individual records, providing improved\n estimates of trend (mean sea level) and associated real-time velocities and\n accelerations. 
Improved trend estimates are based on Singular Spectrum Analysis\n methods. Various gap-filling options are included to accommodate incomplete time\n series records. The package also contains a forecasting module to consider the\n implications of a user-defined quantum of sea level rise between the end of the\n available historical record and the year 2100. A wide range of screen and pdf\n plotting options are available in the package.","Published":"2016-01-12","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"msm","Version":"1.6.4","Title":"Multi-State Markov and Hidden Markov Models in Continuous Time","Description":"Functions for fitting continuous-time Markov and hidden\n Markov multi-state models to longitudinal data. Designed for\n processes observed at arbitrary times in continuous time (panel data)\n but some other observation schemes are supported. Both Markov\n transition rates and the hidden Markov output process can be modelled\n in terms of covariates, which may be constant or piecewise-constant\n in time.","Published":"2016-10-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"msma","Version":"0.7","Title":"Multiblock Sparse Multivariable Analysis","Description":"There are several functions to implement methods for the analysis of multiblock multivariable data. If the input is only a matrix, then principal components analysis (PCA) is implemented. If the input is a list of matrices, then multiblock PCA is implemented. If the input is two matrices for exploratory and objective variables, then partial least squares (PLS) analysis is implemented. If the input is two lists of matrices for exploratory and objective variables, then multiblock PLS analysis is implemented. Moreover, if an extra outcome variable is specified, then a supervised version of the methods above is implemented. For each method, sparse modeling is also incorporated. 
Functions to select the number of components and the regularization parameters are also provided. ","Published":"2016-01-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"msme","Version":"0.5.1","Title":"Functions and Datasets for \"Methods of Statistical Model\nEstimation\"","Description":"This package provides functions and datasets from the book \"Methods of Statistical Model Estimation\".","Published":"2014-07-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"msmtools","Version":"1.3","Title":"Building Augmented Data to Run Multi-State Models with 'msm'\nPackage","Description":"A fast and general method for restructuring classical longitudinal data into\n augmented ones. The reason for this is to facilitate the modeling of longitudinal data under\n a multi-state framework using the 'msm' package.","Published":"2017-06-08","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"msos","Version":"1.1.0","Title":"Data Sets and Functions Used in Multivariate Statistics: Old\nSchool by John Marden","Description":"Contains necessary Multivariate Analysis methods and data sets used\n in John Marden's book Multivariate Statistics: Old School (2015).\n It also serves as a companion package for the \n STAT 571: Multivariate Analysis course at the University of Illinois\n at Urbana-Champaign (UIUC). ","Published":"2017-04-27","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"MSQC","Version":"1.0.2","Title":"Multivariate Statistical Quality Control","Description":"This is a toolkit for multivariate process monitoring. It computes several multivariate control charts, e.g. Hotelling, Chi-squared, MEWMA, MCUSUM and Generalized Variance. Ten didactic datasets are included. It includes some techniques for assessing multivariate normality, e.g. Mardia's, Royston's and Henze-Zirkler's tests. 
Please see the NEWS file for the latest changes in the package.","Published":"2016-06-01","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"msr","Version":"0.4.4","Title":"Morse-Smale Approximation, Regression and Visualization","Description":"Discrete Morse-Smale complex approximation based on a kNN graph. The Morse-Smale complex provides a decomposition of the domain. This package provides methods to compute a hierarchical sequence of Morse-Smale complexes and tools that exploit this domain decomposition for regression and visualization of scalar functions.","Published":"2015-11-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"mssqlR","Version":"1.0.0","Title":"MSSQL Querying using R","Description":"Can be used to query data from Microsoft SQL Server (MSSQL, see for more information). Based on the concepts of Entity Framework, the package allows querying data from MSSQL Database.","Published":"2017-06-20","License":"GPL (>= 3) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"msSurv","Version":"1.2-2","Title":"Nonparametric Estimation for Multistate Models","Description":"Nonparametric estimation for right censored, left truncated time to event data in multistate models.","Published":"2015-04-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MST","Version":"2.0","Title":"Multivariate Survival Trees","Description":"Constructs trees for multivariate survival data using marginal and frailty models.\n Grows, prunes, and selects the best-sized tree.","Published":"2017-04-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"mstate","Version":"0.2.10","Title":"Data Preparation, Estimation and Prediction in Multi-State\nModels","Description":"Contains functions for data preparation, descriptives, hazard estimation and prediction with Aalen-Johansen or simulation in competing risks and multi-state models.","Published":"2016-12-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} 
{"Package":"mstherm","Version":"0.4.7","Title":"Analyze MS/MS Protein Melting Data","Description":"Software to aid in modeling and analyzing mass-spectrometry-based\n proteome melting data. Quantitative data is imported and normalized and\n thermal behavior is modeled at the protein level. Methods exist for\n normalization, modeling, visualization, and export of results. For a\n general introduction to MS-based thermal profiling, see Savitski et al.\n (2014) .","Published":"2017-04-27","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"mstR","Version":"1.0","Title":"Procedures to Generate Patterns under Multistage Testing","Description":"Generation of response patterns under dichotomous and polytomous computerized multistage testing (MST) framework. It holds various IRT- and score-based methods to select the next module and estimate ability levels. ","Published":"2017-03-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MSwM","Version":"1.2","Title":"Fitting Markov Switching Models","Description":"Univariate Autoregressive Markov Switching Models for Linear and Generalized Models","Published":"2014-02-24","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"MTA","Version":"0.1.0","Title":"Multiscalar Territorial Analysis","Description":"Build multiscalar territorial analysis based on various contexts.","Published":"2017-03-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"mtconnectR","Version":"1.1.0","Title":"Read Data from Delimited 'MTConnect' Data Files and Perform some\nAnalysis","Description":"Read data in the 'MTConnect' standard.\n You can use the package to read data from historical 'MTConnect logs' along\n with the 'devices.xml' describing\n the device. The data is organised into a 'MTConnectDevice' S4 data structure\n and some convenience methods are also provided for basic read/view operations.\n The package also includes some functions for analysis of 'MTConnect' data. 
This includes\n functions to simulate data (primarily position data, feed rate and velocities) \n based on the G code and visualisation functions to compare the actual and simulated data.","Published":"2017-03-27","License":"AGPL-3","snapshot_date":"2017-06-23"} {"Package":"MTDrh","Version":"0.1.0","Title":"Mass Transportation Distance Rank Histogram","Description":"The Mass Transportation Distance rank histogram was developed to assess the reliability of scenarios with equal or different probabilities of occurrence.","Published":"2016-12-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"mtk","Version":"1.0","Title":"Mexico ToolKit library (MTK)","Description":"MTK (Mexico ToolKit) is a generic platform for the sensitivity and uncertainty analysis of complex models. It provides functions and facilities for experimental design, model simulation, sensitivity and uncertainty analysis, methods integration and data reporting, etc.","Published":"2014-07-24","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"MTS","Version":"0.33","Title":"All-Purpose Toolkit for Analyzing Multivariate Time Series (MTS)\nand Estimating Multivariate Volatility Models","Description":"Multivariate Time Series (MTS) is a general package for analyzing multivariate linear time series and estimating multivariate volatility models. It also handles factor models, constrained factor models, asymptotic principal component analysis commonly used in finance and econometrics, and principal volatility component analysis. (a) For the multivariate linear time series analysis, the package performs model specification, estimation, model checking, and prediction for many widely used models, including vector AR models, vector MA models, vector ARMA models, seasonal vector ARMA models, VAR models with exogenous variables, multivariate regression models with time series errors, augmented VAR models, and Error-correction VAR models for co-integrated time series. 
For model specification, the package performs structural specification to overcome the difficulties of identifiability of VARMA models. The methods used for structural specification include Kronecker indices and Scalar Component Models. (b) For multivariate volatility modeling, the MTS package handles several commonly used models, including multivariate exponentially weighted moving-average volatility, Cholesky decomposition volatility models, dynamic conditional correlation (DCC) models, copula-based volatility models, and low-dimensional BEKK models. The package also considers multiple tests for conditional heteroscedasticity, including rank-based statistics. (c) Finally, the MTS package also performs forecasting using diffusion index, transfer function analysis, Bayesian estimation of VAR models, and multivariate time series analysis with missing values. Users can also use the package to simulate VARMA models, to compute impulse response functions of a fitted VARMA model, and to calculate theoretical cross-covariance matrices of a given VARMA model. ","Published":"2015-02-12","License":"Artistic License 2.0","snapshot_date":"2017-06-23"} {"Package":"mtsdi","Version":"0.3.3","Title":"Multivariate time series data imputation","Description":"This is an EM algorithm based method for imputation of\n missing values in multivariate normal time series. The\n imputation algorithm accounts for both spatial and temporal\n correlation structures. Temporal patterns can be modelled using\n an ARIMA(p,d,q), optionally with seasonal components, a\n non-parametric cubic spline or generalised additive models with\n exogenous covariates. 
This algorithm is specially tailored for\n climate data with missing measurements from several monitors\n along a given region.","Published":"2012-11-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MTurkR","Version":"0.8.0","Title":"R Client for the MTurk Requester API","Description":"Provides programmatic access to the Amazon Mechanical Turk (MTurk) Requester API.","Published":"2017-01-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MTurkRGUI","Version":"0.1.5","Title":"A Graphical User Interface for MTurkR","Description":"A graphical user interface (GUI) for the MTurkR package.","Published":"2015-12-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MUCflights","Version":"0.0-3","Title":"Munich Franz-Josef-Strauss Airport Pattern Analysis","Description":"Functions for downloading flight data from\n http://www.munich-airport.de and for analyzing flight patterns.","Published":"2011-02-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"muckrock","Version":"0.1.0","Title":"Data on Freedom of Information Act Requests","Description":"A data package containing public domain information on requests made by the\n 'MuckRock' (https://www.muckrock.com/) project under the United States\n Freedom of Information Act.","Published":"2016-06-06","License":"Unlimited","snapshot_date":"2017-06-23"} {"Package":"mudata","Version":"0.1","Title":"Interchange Tools for Multi-Parameter Spatiotemporal Data","Description":"Formatting and structuring multi-parameter spatiotemporal data\n is often a time-consuming task. 
This package offers functions and data structures \n designed to easily organize and visualize these data for applications in geology, \n paleolimnology, dendrochronology, and paleoclimate.","Published":"2017-02-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"mudfold","Version":"1.0","Title":"A Nonparametric Model for Unfolding Scale Analysis","Description":"A nonparametric item response theory model fruitful for the analysis of proximity data.","Published":"2017-03-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MuFiCokriging","Version":"1.2","Title":"Multi-Fidelity Cokriging models","Description":"This package builds multi-fidelity cokriging models from\n responses with different levels of fidelity. Important\n functions: MuFicokm, predict.MuFicokm, summary.MuFicokm.","Published":"2012-12-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"muhaz","Version":"1.2.6","Title":"Hazard Function Estimation in Survival Analysis","Description":"A package for producing a smooth estimate of the hazard\n function for censored data.","Published":"2014-08-09","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"muir","Version":"0.1.0","Title":"Exploring Data with Tree Data Structures","Description":"A simple tool allowing users to easily and dynamically explore or document a data set using a tree structure.","Published":"2015-05-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MullerPlot","Version":"0.1.2","Title":"Generates Muller Plot from Population/Abundance/Frequency\nDynamics Data","Description":"Generates a Muller plot from parental/genealogy/phylogeny information and population/abundance/frequency dynamics data.\n Muller plots are plots which combine information about succession of different OTUs (genotypes, phenotypes, species, ...) and information about dynamics of their abundances (populations or frequencies) over time. They are powerful and fascinating tools to visualize evolutionary dynamics. 
They may also be employed in the study of diversity and its dynamics, i.e. how diversity emerges and how it changes over time. They are called Muller plots in honor of Hermann Joseph Muller, who used them to explain his idea of Muller's ratchet (Muller, 1932, American Naturalist).\n A big difference between Muller plots and normal box plots of abundances is that a Muller plot depicts not only the relative abundances but also the succession of OTUs based on their genealogy/phylogeny/parental relation. In a Muller plot, the horizontal axis is time/generations and the vertical axis represents relative abundances of OTUs at the corresponding times/generations. Different OTUs are usually shown with polygons with different colors and each OTU originates somewhere in the middle of its parent area in order to illustrate their succession in the evolutionary process.\n To generate a Muller plot one needs the genealogy/phylogeny/parental relation of OTUs and their abundances over time.\n The MullerPlot package has the tools to generate Muller plots which clearly depict the origin of successors of OTUs.","Published":"2016-10-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"MultAlloc","Version":"1.2","Title":"Optimal Allocation in Stratified Sampling","Description":"Integer Programming Formulations Applied to Univariate and Multivariate Allocation Problems.","Published":"2015-12-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"multcomp","Version":"1.4-6","Title":"Simultaneous Inference in General Parametric Models","Description":"Simultaneous tests and confidence intervals\n for general linear hypotheses in parametric models, including \n linear, generalized linear, linear mixed effects, and survival models.\n The package includes demos reproducing analyses presented\n in the book \"Multiple Comparisons Using R\" (Bretz, Hothorn, \n Westfall, 2010, CRC Press).","Published":"2016-07-14","License":"GPL-2","snapshot_date":"2017-06-23"} 
{"Package":"multcompView","Version":"0.1-7","Title":"Visualizations of Paired Comparisons","Description":"Convert a logical vector or a vector of\n p-values or a correlation, difference, or distance\n matrix into a display identifying the pairs for\n which the differences were not significantly\n different. Designed for use in conjunction with\n the output of functions like TukeyHSD, dist{stats},\n simint, simtest, csimint, csimtest{multcomp},\n friedmanmc, kruskalmc{pgirmess}.","Published":"2015-07-31","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"multdyn","Version":"1.5.1","Title":"Multiregression Dynamic Models","Description":"The Multiregression Dynamic Models (MDM) are a multivariate\n graphical model for a multidimensional time series that allows the estimation of\n time-varying effective connectivity.","Published":"2017-03-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"MultEq","Version":"2.3","Title":"Multiple Equivalence Tests and Simultaneous Confidence Intervals","Description":"Equivalence tests and related confidence intervals for the\n comparison of two treatments, simultaneously for one or many\n normally distributed, primary response variables (endpoints).\n The step-up procedure of Quan et al. (2001) is both applied for\n differences and extended to ratios of means. A related\n single-step procedure is also available.","Published":"2011-10-20","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"multfisher","Version":"1.0","Title":"Optimal Exact Tests for Multiple Binary Endpoints","Description":"Calculates exact hypothesis tests to compare a treatment and a reference group with respect to multiple binary endpoints.\n The tested null hypothesis is an identical multidimensional distribution of successes and failures in both groups. The alternative\n hypothesis is a larger success proportion in the treatment group in at least one endpoint. 
The tests are based on the multivariate\n permutation distribution of subjects between the two groups. For this permutation distribution, rejection regions are calculated \n that satisfy one of different possible optimization criteria. In particular, regions with maximal exhaustion of the nominal\n significance level, maximal power under a specified alternative or maximal number of elements can be found. Optimization is achieved\n by a branch-and-bound algorithm. By application of the closed testing principle, the global hypothesis tests are extended to multiple\n testing procedures.","Published":"2017-04-27","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"multgee","Version":"1.5.3","Title":"GEE Solver for Correlated Nominal or Ordinal Multinomial\nResponses","Description":"GEE solver for correlated nominal or ordinal multinomial responses using a local odds ratios parameterization.","Published":"2016-02-03","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"MultiABEL","Version":"1.1-6","Title":"Multi-Trait Genome-Wide Association Analysis","Description":"Multivariate genome-wide association analyses. 
The analysis can be\n performed on individual-level data or multiple single-trait genome-wide summary\n statistics.","Published":"2017-06-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"multiAssetOptions","Version":"0.1-1","Title":"Finite Difference Method for Multi-Asset Option Valuation","Description":"Efficient finite difference method for valuing European and American multi-asset options.","Published":"2015-01-31","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"multiband","Version":"0.1.0","Title":"Period Estimation for Multiple Bands","Description":"Algorithms for performing joint parameter estimation in\n astronomical survey data acquired in multiple bands.","Published":"2014-12-18","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"MultiBD","Version":"0.2.0","Title":"Multivariate Birth-Death Processes","Description":"Computationally efficient functions to provide direct likelihood-based\n inference for partially-observed multivariate birth-death processes. Such processes\n range from a simple Yule model to the complex susceptible-infectious-removed model\n in disease dynamics. Efficient likelihood evaluation facilitates maximum likelihood\n estimation and Bayesian inference.","Published":"2016-12-05","License":"Apache License 2.0","snapshot_date":"2017-06-23"} {"Package":"multibiplotGUI","Version":"1.0","Title":"Multibiplot Analysis in R","Description":"A GUI with which users can construct and interact\n with Multibiplot Analysis and provides inferential results by using Bootstrap Methods.","Published":"2015-06-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"multiCA","Version":"1.0","Title":"Multinomial Cochran-Armitage Trend Test","Description":"Implements a generalization of the Cochran-Armitage trend test to\n multinomial data. 
In addition to an overall test, multiple testing adjusted\n p-values for trend in individual outcomes and power calculations are\n available.","Published":"2016-10-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"multicmp","Version":"1.0","Title":"Flexible Modeling of Multivariate Count Data via the\nMultivariate Conway-Maxwell-Poisson Distribution","Description":"A toolkit containing statistical analysis models motivated by multivariate forms of the Conway-Maxwell-Poisson (COM-Poisson) distribution for flexible modeling of multivariate count data, especially in the presence of data dispersion. Currently the package only supports bivariate data, via the bivariate COM-Poisson distribution described in Sellers et al. (2016) . Future development will extend the package to higher-dimensional data.","Published":"2017-05-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"MultiCNVDetect","Version":"0.1-1","Title":"Multiple Copy Number Variation Detection","Description":"This package provides a tool for the analysis of multiple CNVs.","Published":"2014-05-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"multicon","Version":"1.6","Title":"Multivariate Constructs","Description":"Includes functions designed to examine relationships among multivariate constructs (e.g., personality). This includes functions for profile (within-person) analysis, dealing with large numbers of analyses, lens model analyses, and structural summary methods for data with circumplex structure. The package also includes functions for graphically comparing and displaying group means. ","Published":"2015-02-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"multicool","Version":"0.1-10","Title":"Permutations of Multisets in Cool-Lex Order","Description":"A set of tools to permute multisets without loops or hash tables and to generate integer partitions. The permutation functions are based on C code from Aaron Williams. 
Cool-lex order is similar to colexicographical order. The algorithm is described in Williams, A. (2009) Loopless Generation of Multiset Permutations by Prefix Shifts. Symposium on Discrete Algorithms, New York, United States. The permutation code is distributed without restrictions. The code for stable and efficient computation of multinomial coefficients comes from Dave Barber. The code can be downloaded from and is distributed without conditions. The package also generates the integer partitions of a positive, non-zero integer n. The C++ code for this is based on Python code from Jerome Kelleher, which can be found here . The C++ code and Python code are distributed without conditions.","Published":"2016-11-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"multiDimBio","Version":"1.1.1","Title":"Multivariate Analysis and Visualization for Biological Data","Description":"Code to support a systems biology research program from inception through publication. The methods focus on dimension reduction approaches to detect patterns in complex, multivariate experimental data and place an emphasis on informative visualizations. The goal for this project is to create a package that will evolve over time, thereby remaining relevant and reflective of current methods and techniques. 
As a result, we encourage suggested additions to the package, both methodological and graphical.","Published":"2016-12-16","License":"GPL (>= 3.0)","snapshot_date":"2017-06-23"} {"Package":"multifwf","Version":"0.2.2","Title":"Read Fixed Width Format Files Containing Lines of Different Type","Description":"Read a table of fixed width formatted data of different types into\n a data.frame for each type.","Published":"2015-12-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MultiGHQuad","Version":"1.2.0","Title":"Multidimensional Gauss-Hermite Quadrature","Description":"Uses a transformed, rotated and optionally adapted n-dimensional\n grid of quadrature points to calculate the numerical integral of n multivariate\n normal distributed parameters.","Published":"2016-08-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"multigraph","Version":"0.60","Title":"Plot and Manipulate Multigraphs","Description":"Functions to plot and manipulate multigraphs, weighted multigraphs, and bipartite graphs with different layout options.","Published":"2017-05-04","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"multigroup","Version":"0.4.4","Title":"Multigroup Data Analysis","Description":"Several functions are presented in order to study data in a group structure,\n where the same set of variables are measured on different groups of\n individuals.","Published":"2015-03-11","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"MultiLCIRT","Version":"2.11","Title":"Multidimensional Latent Class Item Response Theory Models","Description":"Framework for the Item Response Theory analysis of dichotomous and ordinal polytomous outcomes under the assumption of multidimensionality and discreteness of the latent traits. The fitting algorithms allow for missing responses and for different item parameterizations and are based on the Expectation-Maximization paradigm. 
Individual covariates affecting the class weights may be included in the new version (since 2.1).","Published":"2017-06-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"multilevel","Version":"2.6","Title":"Multilevel Functions","Description":"The functions in this package are designed to be used in the analysis of multilevel data by applied psychologists. The package includes functions for estimating common within-group agreement and reliability indices. The package also contains basic data manipulation functions that facilitate the analysis of multilevel and longitudinal data.","Published":"2016-08-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"multilevelPSA","Version":"1.2.4","Title":"Multilevel Propensity Score Analysis","Description":"Functions to estimate and visualize propensity\n score analysis for multilevel, or clustered, data.","Published":"2015-12-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"multimark","Version":"2.0.0","Title":"Capture-Mark-Recapture Analysis using Multiple Non-Invasive\nMarks","Description":"Traditional and spatial capture-mark-recapture analysis with\n multiple non-invasive marks. The models implemented in 'multimark' combine\n encounter history data arising from two different non-invasive ``marks'',\n such as images of left-sided and right-sided pelage patterns of bilaterally\n asymmetrical species, to estimate abundance and related demographic\n parameters while accounting for imperfect detection. 
Bayesian models are\n specified using simple formulae and fitted using Markov chain Monte Carlo.\n Addressing deficiencies in currently available software, 'multimark' also\n provides a user-friendly interface for performing Bayesian multimodel\n inference using non-spatial or spatial capture-recapture data consisting of a single\n conventional mark or multiple non-invasive marks.","Published":"2016-09-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MultiMeta","Version":"0.1","Title":"Meta-analysis of Multivariate Genome Wide Association Studies","Description":"Allows running a meta-analysis of multivariate Genome Wide\n Association Studies (GWAS) and easily visualizing results through custom\n plotting functions. The multivariate setting implies that results for each\n single nucleotide polymorphism (SNP) include several effect sizes (also\n known as \"beta coefficients\", one for each trait), as well as related\n variance values, but also covariance between the betas. The main goal of\n the package is to provide combined beta coefficients across different\n cohorts, together with the combined variance/covariance matrix. The method\n is inverse-variance based, thus each beta is weighted by the inverse of its\n variance-covariance matrix, before taking the average across all betas. The\n default options of the main function \\code{multi_meta} will work with files\n obtained from the GEMMA multivariate option for GWAS (Zhou & Stephens, 2014).\n It will work with any other output, as long as columns are formatted to\n have the corresponding names. 
The package also provides several plotting\n functions for QQ-plots, Manhattan Plots and custom summary plots.","Published":"2015-01-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"multinbmod","Version":"1.0","Title":"Regression analysis of overdispersed correlated count data","Description":"This is a likelihood approach for the regression analysis of overdispersed correlated count data with cluster varying covariates. The approach fits a multivariate negative binomial model by maximum likelihood and provides robust estimates of the regression coefficients.","Published":"2014-01-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"multinet","Version":"1.0","Title":"Analysis and Mining of Multilayer Social Networks","Description":"Functions for the creation/generation and analysis of multilayer social networks.","Published":"2017-01-17","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"MultinomialCI","Version":"1.0","Title":"Simultaneous confidence intervals for multinomial proportions\naccording to the method by Sison and Glaz","Description":"An implementation of a method for building simultaneous\n confidence intervals for the probabilities of a multinomial\n distribution given a set of observations, proposed by Sison and\n Glaz in their paper Sison, C.P and J. Glaz. Simultaneous\n confidence intervals and sample size determination for\n multinomial proportions. Journal of the American Statistical\n Association, 90:366-369 (1995). The method is an R translation\n of the SAS code implemented by May and Johnson in their paper:\n May, W.L. and W.D. Johnson. Constructing two-sided simultaneous\n confidence intervals for multinomial proportions for small\n counts in a large number of cells. Journal of Statistical\n Software 5(6) (2000). 
Paper and code available at\n http://www.jstatsoft.org/v05/i06","Published":"2012-12-07","License":"LGPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"multinomRob","Version":"1.8-6.1","Title":"Robust Estimation of Overdispersed Multinomial Regression Models","Description":"MNL and overdispersed multinomial regression using robust\n (LQD and tanh) estimation","Published":"2013-02-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MultiOrd","Version":"2.2","Title":"Generation of Multivariate Ordinal Variates","Description":"A method for multivariate ordinal data generation given marginal distributions and correlation matrix based on the methodology proposed by Demirtas (2006).","Published":"2016-05-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"multipanelfigure","Version":"0.9.0","Title":"Infrastructure to Assemble Multi-Panel Figures (from Grobs)","Description":"Tools to create a layout for figures made of multiple panels, and\n to fill the panels with base, lattice and ggplot2 plots, grobs, and PNG,\n JPEG, SVG and TIFF images.","Published":"2017-04-19","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"MultiPhen","Version":"2.0.2","Title":"A Package to Test for Pleiotropic Effects","Description":"Performs genetic association tests between SNPs\n (one-at-a-time) and multiple phenotypes (separately or in joint\n model).","Published":"2017-05-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"multiPIM","Version":"1.4-3","Title":"Variable Importance Analysis with Population Intervention Models","Description":"Performs variable importance analysis using a causal inference approach. This is done by fitting Population Intervention Models. The default is to use a Targeted Maximum Likelihood Estimator (TMLE). The other available estimators are Inverse Probability of Censoring Weighted (IPCW), Double-Robust IPCW (DR-IPCW), and Graphical Computation (G-COMP) estimators. 
Inference can be obtained from the influence curve (plug-in) or by bootstrapping.","Published":"2015-02-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"multipleNCC","Version":"1.2-1","Title":"Weighted Cox-Regression for Nested Case-Control Data","Description":"Fit Cox proportional hazard models with a weighted \n partial likelihood. It handles one or multiple endpoints, additional matching \n and makes it possible to reuse controls for other endpoints.","Published":"2016-04-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"multiplex","Version":"2.6","Title":"Algebraic Tools for the Analysis of Multiple Social Networks","Description":"Algebraic procedures for the analysis of multiple social networks are delivered with \n\t this package. Among other things, it makes it possible to create and manipulate multivariate \n\t network data with different formats, and there are effective ways available to treat multiple \n\t networks with routines that combine algebraic systems like the partially ordered semigroup or \n\t the semiring structure together with the relational bundles occurring in different types of \n\t multivariate network data sets. It also provides an algebraic approach for two-mode networks \n\t through Galois derivations between families of the pairs of subsets in the two domains.","Published":"2017-05-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"multiplyr","Version":"0.1.1","Title":"Data Manipulation with Parallelism and Shared Memory Matrices","Description":"Provides a new form of data frame backed by shared memory matrices\n and a way to manipulate them. 
Upon creation these data frames are shared across\n multiple local nodes to allow for simple parallel processing.","Published":"2016-05-31","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"multipol","Version":"1.0-6","Title":"multivariate polynomials","Description":"Various utilities to manipulate multivariate polynomials","Published":"2013-01-21","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"multirich","Version":"2.1.1","Title":"Calculate Multivariate Richness via UTC and sUTC","Description":"Functions to calculate Unique Trait Combinations (UTC) and scaled\n Unique Trait Combinations (sUTC) as measures of multivariate richness. The\n package can also calculate beta-diversity for trait richness and can\n partition this into nestedness-related and turnover components. The code\n will also calculate several measures of overlap.","Published":"2015-09-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MultiRNG","Version":"1.0","Title":"Multivariate Pseudo-Random Number Generation","Description":"Pseudo-random number generation for 11 multivariate distributions: Normal, t, Uniform, Bernoulli, Hypergeometric, Beta (Dirichlet), Multinomial, Dirichlet-Multinomial, Laplace, Wishart, and Inverted Wishart.","Published":"2017-06-04","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"MultiRR","Version":"1.1","Title":"Bias, Precision, and Power for Multi-Level Random Regressions","Description":"Calculates bias, precision, and power for multi-level random regressions. Random regressions are types of hierarchical models in which data are structured in groups and (regression) coefficients can vary by groups. Tools to estimate model performance are designed mostly for scenarios where (regression) coefficients vary at just one level. 
'MultiRR' provides simulation and analytical tools (based on 'lme4') to study model performance for random regressions that vary at more than one level (multi-level random regressions), allowing researchers to determine optimal sampling designs.","Published":"2015-10-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"multisensi","Version":"2.0","Title":"Multivariate Sensitivity Analysis","Description":"Functions to perform sensitivity analysis on a model with multivariate output.","Published":"2016-04-27","License":"CeCILL-2","snapshot_date":"2017-06-23"} {"Package":"MultisiteMediation","Version":"0.0.1","Title":"Causal Mediation Analysis in Multisite Trials","Description":"We implement multisite causal mediation analysis using the methods proposed by Qin and Hong (in press). It enables causal mediation analysis in multisite trials, in which individuals are assigned to a treatment or a control group at each site. It allows for estimation and hypothesis testing for not only the population average but also the between-site variance of direct and indirect effects. This strategy conveniently relaxes the assumption of no treatment-by-mediator interaction while greatly simplifying the outcome model specification without invoking strong distributional assumptions.","Published":"2017-02-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MultiSkew","Version":"1.1","Title":"Measures, Tests and Removes Multivariate Skewness","Description":"Computes the third multivariate cumulant of either the raw, centered or standardized data. Computes the main measures of multivariate skewness, together with their bootstrap distributions. Finally, computes the least skewed linear projections of the data.","Published":"2017-06-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"multisom","Version":"1.3","Title":"Clustering a Data Set using Multi-SOM Algorithm","Description":"Implements two versions of the algorithm namely: stochastic and batch. 
The package also determines the best number of clusters and offers the user the best clustering scheme from different results.","Published":"2017-05-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"multispatialCCM","Version":"1.0","Title":"Multispatial Convergent Cross Mapping","Description":"The multispatial convergent cross mapping algorithm can be used as a test for causal associations between pairs of processes represented by time series. This is a combination of convergent cross mapping (CCM), described in Sugihara et al., 2012, Science, 338, 496-500, and dew-drop regression, described in Hsieh et al., 2008, American Naturalist, 171, 71–80. The algorithm allows CCM to be implemented on data that are not from a single long time series. Instead, data can come from many short time series, which are stitched together using bootstrapping.","Published":"2014-10-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MultiSV","Version":"0.0-67","Title":"MultiSV: an R package for identification of structural\nvariations in multiple populations based on whole genome\nresequencing","Description":"MultiSV is an R package for identification of structural\n variations in multiple populations based on whole genome resequencing. It\n fits a linear mixed model and identifies structural variations in multiple\n populations using whole genome sequencing data. It could also be\n manipulated to use on RNA-seq data for differential gene expression\n (implementation in future releases). Main steps for analysis include\n generating read depth in bins using ComputeBinCounts, conversion of bins to\n MultiSV format using Bin2MultiSV. 
Finally, identification of structural\n variations using CallMultiSV.","Published":"2014-08-27","License":"LGPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"multitaper","Version":"1.0-13","Title":"Spectral Analysis Tools using Multitaper Method","Description":"Implements multitaper spectral analysis using discrete prolate spheroidal sequences (Slepians) and sine tapers. It includes an adaptive weighted multitaper spectral estimate, a coherence estimate, Thomson's Harmonic F-test, and complex demodulation. The Slepians sequences are generated efficiently using a tridiagonal matrix solution, and jackknifed confidence intervals are available for most estimates. ","Published":"2017-03-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MultivariateRandomForest","Version":"1.1.5","Title":"Models Multivariate Cases Using Random Forests","Description":"Models and predicts multiple output features in single random forest considering the \n linear relation among the output features, see details in Rahman et al (2017).","Published":"2017-05-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MultiVarSel","Version":"1.0","Title":"Variable Selection in the Multivariate Linear Model","Description":"It provides a novel variable selection approach in the \n multivariate framework of the general linear model taking into account the dependence \n that may exist between the columns of the observations matrix. For further details we\n refer the reader to the paper Perrot-Dockes et al. 
(2017), .","Published":"2017-04-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"multivator","Version":"1.1-9","Title":"A Multivariate Emulator","Description":"A multivariate generalization of the emulator package.","Published":"2017-05-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"multiwave","Version":"1.2","Title":"Estimation of Multivariate Long-Memory Models Parameters","Description":"Computation of an estimation of the long-memory parameters and\n the long-run covariance matrix using a multivariate model\n\t (Lobato (1999) ; Shimotsu (2007) ). Two semi-parametric methods are\n\t implemented: a Fourier based approach (Shimotsu (2007) ) and a wavelet based\n\t approach (Achard and Gannaz (2016) ).","Published":"2016-11-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"multiway","Version":"1.0-3","Title":"Component Models for Multi-Way Data","Description":"Fits multi-way component models via alternating least squares algorithms with optional constraints: orthogonal, non-negative, unimodal, monotonic, periodic, smooth, or structure. Fit models include Individual Differences Scaling, Parallel Factor Analysis (1 and 2), Simultaneous Component Analysis, and Tucker Factor Analysis.","Published":"2017-05-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"multiwayvcov","Version":"1.2.3","Title":"Multi-Way Standard Error Clustering","Description":"Exports two functions implementing\n multi-way clustering using the method suggested by Cameron, Gelbach, &\n Miller (2011) and cluster (or block)\n bootstrapping for estimating variance-covariance matrices. Normal one and\n two-way clustering matches the results of other common statistical\n packages. 
Missing values are handled transparently and rudimentary\n parallelization support is provided.","Published":"2016-05-05","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"MultNonParam","Version":"1.2.5","Title":"Multivariate Nonparametric Methods","Description":"A collection of multivariate nonparametric methods, selected in\n part to support an MS level course in nonparametric statistical methods. Methods\n include adjustments for multiple comparisons, implementation of multivariate\n Mann-Whitney-Wilcoxon testing, inversion of these tests to produce a confidence\n region, some permutation tests for linear models, and some algorithms for\n calculating exact probabilities associated with one- and two- stage testing\n involving Mann-Whitney-Wilcoxon statistics.","Published":"2016-10-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"multxpert","Version":"0.1","Title":"Common Multiple Testing Procedures and Gatekeeping Procedures","Description":"Implementation of commonly used p-value-based and\n parametric multiple testing procedures (computation of adjusted\n p-values and simultaneous confidence intervals) and parallel\n gatekeeping procedures based on the methodology presented in\n the book \"Multiple Testing Problems in Pharmaceutical\n Statistics\" (edited by Alex Dmitrienko, Ajit C. 
Tamhane and\n Frank Bretz) published by Chapman and Hall/CRC Press 2009.","Published":"2011-01-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"muma","Version":"1.4","Title":"Metabolomics Univariate and Multivariate Analysis","Description":"Preprocessing of high-throughput data (normalization and\n scalings); Principal Component Analysis with help tool for\n choosing best-separating principal components and automatic\n testing for outliers; automatic univariate analysis for\n parametric and non-parametric data, with generation of specific\n reports (volcano and box plots); partial least square\n discriminant analysis (PLS-DA); orthogonal partial least square\n discriminant analysis (OPLS-DA); Statistical Total Correlation\n Spectroscopy (STOCSY); Ratio Analysis Nuclear Magnetic\n Resonance (NMR) Spectroscopy (RANSY).","Published":"2012-12-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MuMIn","Version":"1.15.6","Title":"Multi-Model Inference","Description":"Model selection and model averaging based on information criteria\n (AICc and alike).","Published":"2016-01-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"munfold","Version":"0.3.5","Title":"Metric Unfolding","Description":"Multidimensional unfolding using Schoenemann's algorithm for metric\n and Procrustes rotation of unfolding results.","Published":"2016-02-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"munsell","Version":"0.4.3","Title":"Utilities for Using Munsell Colours","Description":"Provides easy access to, and manipulation of, the Munsell \n colours. Provides a mapping between Munsell's \n original notation (e.g. \"5R 5/10\") and hexadecimal strings suitable \n for use directly in R graphics. 
Also provides utilities \n to explore slices through the Munsell colour tree, to transform \n Munsell colours and display colour palettes.","Published":"2016-02-13","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"munsellinterpol","Version":"1.0.2","Title":"Interpolate Munsell Renotation Data from Hue/Chroma to CIE/sRGB","Description":"Methods for interpolating data in the Munsell color system following the ASTM D-1535 standard. Hues and chromas with decimal values can be interpolated and converted to/from the Munsell color system and CIE xyY, CIE XYZ, sRGB, CIE Lab or CIE Luv. Chromas can be odd and hue steps can be real numbers. Based on the work by Paul Centore, \"The Munsell and Kubelka-Munk Toolbox\".","Published":"2015-07-23","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"muRL","Version":"0.1-11","Title":"Mailmerge using R, LaTeX, and the Web","Description":"Provides mailmerge methods for reading spreadsheets of addresses and other relevant information to create standardized but customizable letters. Provides a method for mapping US ZIP codes, including those of letter recipients. 
Provides a method for parsing and processing html code from online job postings of the American Political Science Association.","Published":"2017-06-13","License":"GPL-2 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"murphydiagram","Version":"0.11","Title":"Murphy Diagrams for Forecast Comparisons","Description":"Data and code for the paper by Ehm, Gneiting, Jordan and Krueger ('Of Quantiles and Expectiles: Consistent Scoring Functions, Choquet Representations, and Forecast Rankings', 2015).","Published":"2016-02-01","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"MUS","Version":"0.1.4","Title":"Monetary Unit Sampling and Estimation Methods, Widely Used in\nAuditing","Description":"Sampling and evaluation methods to apply Monetary Unit Sampling (or in older literature Dollar Unit Sampling) during an audit of financial statements.","Published":"2017-05-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"musica","Version":"0.1.3","Title":"Multiscale Climate Model Assessment","Description":"Provides functions allowing for (1) easy aggregation of multivariate time series into custom time scales, (2) comparison of statistical summaries between different data sets at multiple time scales (e.g. observed and bias-corrected data), (3) comparison of relations between variables and/or different data sets at multiple time scales (e.g. correlation of precipitation and temperature in control and scenario simulation) and (4) transformation of time series at custom time scales.","Published":"2016-09-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"musicNMR","Version":"0.0.2","Title":"Conversion of Nuclear Magnetic Resonance spectrum in audio file","Description":"This package is a collection of functions for converting and modifying mono dimensional nuclear magnetic resonance spectra. 
","Published":"2014-11-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"muStat","Version":"1.7.0","Title":"Prentice Rank Sum Test and McNemar Test","Description":"Performs the Wilcoxon rank sum test, Kruskal rank sum test,\n Friedman rank sum test and McNemar test.","Published":"2012-08-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mut","Version":"1.1","Title":"Pairwise Likelihood Ratios","Description":"The main function LR2 calculates the likelihood ratio for non-inbred relationships, accounting for mutation, silent alleles and theta correction. Egeland, Pinto and Amorim (2017) . ","Published":"2017-05-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mutoss","Version":"0.1-10","Title":"Unified Multiple Testing Procedures","Description":"The Mutoss package and accompanying mutossGUI package are\n designed to ease the application and comparison of multiple\n hypothesis testing procedures.","Published":"2015-04-02","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"mutossGUI","Version":"0.1-10","Title":"A Graphical User Interface for the MuToss Project","Description":"The mutossGUI package provides a graphical user interface for the MuToss Project.","Published":"2015-08-10","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"mutSignatures","Version":"1.2","Title":"Decipher Mutational Signatures from Somatic Mutational Catalogs","Description":"Cancer cells accumulate DNA mutations as a result of DNA damage and DNA repair processes. This computational framework is aimed at deciphering DNA mutational signatures operating in cancer. The input is a numeric matrix of DNA mutation counts detected in a panel of cancer samples. The framework performs Non-negative Matrix Factorization to extract the most likely signatures explaining the observed set of DNA mutations. The framework relies on parallelization and is optimized for use on multi-core systems. 
This framework is an R-based implementation of the original MATLAB WTSI framework by Alexandrov LB et al (2013) .","Published":"2017-01-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MuViCP","Version":"1.3.2","Title":"MultiClass Visualizable Classification using Combination of\nProjections","Description":"An ensemble classifier for multiclass classification. This is a novel classifier that natively works as an ensemble. It projects data on a large number of matrices, and uses very simple classifiers on each of these projections. The results are then combined, ideally via Dempster-Shafer Calculus.","Published":"2016-02-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"MVA","Version":"1.0-6","Title":"An Introduction to Applied Multivariate Analysis with R","Description":"Functions, data sets, analyses and examples from the book \n `An Introduction to Applied Multivariate Analysis with R' \n (Brian S. Everitt and Torsten Hothorn, Springer, 2011). ","Published":"2015-07-28","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"mvabund","Version":"3.12.3","Title":"Statistical Methods for Analysing Multivariate Abundance Data","Description":"A set of tools for displaying, modeling and analysing\n multivariate abundance data in community ecology. 
See\n 'mvabund-package.Rd' for details of overall package organization.\n The package is implemented with the GNU Scientific Library\n (http://www.gnu.org/software/gsl/) and Rcpp\n (http://dirk.eddelbuettel.com/code/rcpp.html) R / C++ classes.","Published":"2017-04-27","License":"LGPL (>= 2.1)","snapshot_date":"2017-06-23"} {"Package":"MVar.pt","Version":"1.9.8","Title":"Analise multivariada (brazilian portuguese)","Description":"Package for multivariate analysis, with functions that perform simple (CA) and multiple (MCA) correspondence analysis, principal component analysis (PCA), canonical correlation analysis (CCA), factor analysis (FA), multidimensional scaling (MDS), hierarchical and non-hierarchical cluster analysis, linear regression, and multiple factor analysis (MFA) for quantitative, qualitative, frequency (MFACT) and mixed data. It also provides other functions useful for multivariate analysis.","Published":"2017-04-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MVB","Version":"1.1","Title":"Multivariate Bernoulli log-linear model","Description":"Fit log-linear model for multivariate Bernoulli\n distribution with mixed effect models and LASSO","Published":"2013-12-15","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"MvBinary","Version":"1.1","Title":"Modelling Multivariate Binary Data with Blocks of Specific\nOne-Factor Distribution","Description":"Modelling Multivariate Binary Data with Blocks of Specific One-Factor Distribution. Variables are grouped into independent blocks. Each variable is described by two continuous parameters (its marginal probability and its dependency strength with the other block variables), and one binary parameter (positive or negative dependency). Model selection consists in the estimation of the repartition of the variables into blocks. 
It is carried out by the maximization of the BIC criterion by a deterministic (faster) algorithm or by a stochastic (more time consuming but optimal) algorithm. Tool functions facilitate the model interpretation.","Published":"2016-12-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mvbutils","Version":"2.7.4.1","Title":"Workspace organization, code and documentation editing, package\nprep and editing, etc","Description":"Hierarchical workspace tree, code editing and backup, easy\n package prep, editing of packages while loaded, per-object\n lazy-loading, easy documentation, macro functions, and\n miscellaneous utilities. Needed by debug package.","Published":"2013-04-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mvc","Version":"1.3","Title":"Multi-View Clustering","Description":"An implementation of Multi-View Clustering (Bickel and Scheffer, 2004). Documents are generated by drawing word values from a categorical distribution for each word, given the cluster. This means words are not counted (multinomial, as in the paper), but words take on different values from a finite set of values (categorical). Thus, it implements Mixture of Categoricals EM (as opposed to Mixture of Multinomials developed in the paper), and Spherical k-Means. The latter represents documents as vectors in the categorical space.","Published":"2014-02-24","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"mvcluster","Version":"1.0","Title":"Multi-View Clustering","Description":"Implementation of multi-view bi-clustering algorithms. When a sample is characterized by two or more sets of input features, it creates multiple data matrices for the same set of examples, each corresponding to a view. For instance, individuals who are diagnosed with a disorder can be described by their clinical symptoms (one view) and their genomic markers (another view). 
Rows of a data matrix correspond to examples and columns correspond to features. A multi-view bi-clustering algorithm groups examples (rows) consistently across the views and simultaneously identifies the subset of features (columns) in each view that are associated with the row groups. This mvcluster package includes three such methods. (1) MVSVDL1: multi-view bi-clustering based on singular value decomposition where the left singular vectors are used to identify row clusters and the right singular vectors are used to identify features (columns) for each row cluster. Each singular vector is regularized by the L1 vector norm. (2) MVLRRL0: multi-view bi-clustering based on sparse low rank representation (i.e., matrix approximation) where the decomposed components are regularized by the so-called L0 vector norm (which is not really a vector norm). (3) MVLRRL1: multi-view bi-clustering based on sparse low rank representation (i.e., matrix approximation) where the decomposed components are regularized by the L1 vector norm. ","Published":"2016-04-03","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"mvctm","Version":"1.1","Title":"Multivariate Variance Components Tests for Multilevel Data","Description":"Permutation tests for variance components for 2-level, 3-level and 4-level data with univariate or multivariate responses.","Published":"2017-03-09","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"mvcwt","Version":"1.3","Title":"Wavelet analysis of multiple time series","Description":"This package computes the continuous wavelet transform of\n irregularly sampled time series.","Published":"2014-07-17","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"mvdalab","Version":"1.2","Title":"Multivariate Data Analysis Laboratory","Description":"An open-source implementation of latent variable methods and multivariate modeling tools. 
The focus is on exploratory analyses using dimensionality reduction methods including low dimensional embedding, classical multivariate statistical tools, and tools for enhanced interpretation of machine learning methods (i.e. intelligible models to provide important information for end-users). Target domains include extension to dedicated applications e.g. for manufacturing process modeling, spectroscopic analyses, and data mining.","Published":"2017-03-01","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"mvglmmRank","Version":"1.1-2","Title":"Multivariate Generalized Linear Mixed Models for Ranking Sports\nTeams","Description":"Maximum likelihood estimates are obtained via an EM algorithm with either a first-order or a fully exponential Laplace approximation. ","Published":"2015-11-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"mvinfluence","Version":"0.8","Title":"Influence Measures and Diagnostic Plots for Multivariate Linear\nModels","Description":"Computes regression deletion diagnostics for multivariate linear models and provides some associated\n\tdiagnostic plots. The diagnostic measures include hat-values (leverages), generalized Cook's distance, and\n\tgeneralized squared 'studentized' residuals. Several types of plots to detect influential observations are\n\tprovided.","Published":"2016-06-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"MVisAGe","Version":"0.1.0","Title":"Compute and Visualize Bivariate Associations","Description":"Pearson correlation coefficients are commonly used to quantify the strength\n\tof bivariate associations of genomic variables. For example, correlations of gene-level \n\tDNA copy number and gene expression measurements may be used to assess the impact of \n\tDNA copy number changes on gene expression in tumor tissue. 
MVisAGe enables users to \n\tquickly compute and visualize the correlations in order to assess the effect of regional \n\tgenomic events such as changes in DNA copy number or DNA methylation level.","Published":"2017-05-28","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"MVLM","Version":"0.1.4","Title":"Multivariate Linear Model with Analytic p-Values","Description":"Allows a user to conduct multivariate multiple regression using analytic p-values rather than classic approximate F-tests.","Published":"2017-03-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mvLSW","Version":"1.1","Title":"Multivariate, Locally Stationary Wavelet Process Estimation","Description":"Tools for analysing multivariate time series with wavelets. This includes: simulation of a multivariate locally stationary wavelet (mvLSW) process from a multivariate evolutionary wavelet spectrum (mvEWS); estimation of the mvEWS, local coherence and local partial coherence; and, estimation of the asymptotic variance for mvEWS elements. 
See Park, Eckley and Ombao (2014) for details.","Published":"2017-02-24","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"mvmesh","Version":"1.4","Title":"Multivariate Meshes and Histograms in Arbitrary Dimensions","Description":"Define, manipulate and plot meshes on simplices, spheres, balls, rectangles and tubes.\n Directional and other multivariate histograms are provided.","Published":"2016-10-21","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"mvmeta","Version":"0.4.7","Title":"Multivariate and Univariate Meta-Analysis and Meta-Regression","Description":"Collection of functions to perform fixed and random-effects multivariate and univariate meta-analysis and meta-regression.","Published":"2015-05-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mvMORPH","Version":"1.0.9","Title":"Multivariate Comparative Tools for Fitting Evolutionary Models\nto Morphometric Data","Description":"Fits multivariate (Brownian Motion, Early Burst, ACDC, Ornstein-Uhlenbeck and Shifts) models of continuous traits evolution on trees and time series.","Published":"2017-06-08","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"MVN","Version":"4.0.2","Title":"Multivariate Normality Tests","Description":"Performs multivariate normality tests and graphical approaches and implements multivariate outlier detection and univariate normality of marginal distributions through plots and tests. ","Published":"2016-12-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mvna","Version":"2.0","Title":"Nelson-Aalen Estimator of the Cumulative Hazard in Multistate\nModels","Description":"Computes the Nelson-Aalen estimator of the cumulative transition hazard for arbitrary Markov multistate models . 
","Published":"2017-01-14","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"mvnfast","Version":"0.2.0","Title":"Fast Multivariate Normal and Student's t Methods","Description":"Provides computationally efficient tools related to\n the multivariate normal and Student's t distributions. The main functionalities are:\n simulating multivariate random vectors, evaluating multivariate\n normal or Student's t densities and Mahalanobis distances. These tools are very efficient\n thanks to the use of C++ code and of the OpenMP API.","Published":"2017-02-18","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"mvngGrAd","Version":"0.1.5","Title":"Moving Grid Adjustment in Plant Breeding Field Trials","Description":"Package for moving grid adjustment \n\t in plant breeding field trials.","Published":"2016-12-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mvnmle","Version":"0.1-11","Title":"ML estimation for multivariate normal data with missing values","Description":"Finds the maximum likelihood estimate of the mean vector\n and variance-covariance matrix for multivariate normal data\n with missing values.","Published":"2012-12-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mvnormtest","Version":"0.1-9","Title":"Normality test for multivariate variables","Description":"Generalization of the Shapiro-Wilk test for multivariate\n variables.","Published":"2012-04-12","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"mvnpermute","Version":"1.0.0","Title":"Generate New Multivariate Normal Samples from Permutations","Description":"Given a vector of multivariate normal data, a matrix of\n covariates and the data covariance matrix, generate new multivariate normal\n samples that have the same covariance matrix based on permutations of\n the transformed data residuals.","Published":"2015-01-27","License":"GPL (>= 3.0)","snapshot_date":"2017-06-23"} 
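The 'mvnfast' and 'mvnpermute' entries above both involve drawing multivariate normal vectors with a given covariance matrix. As a point of reference, the standard Cholesky construction these packages build on can be sketched in a few lines of base R (an illustrative sketch, not the API of either package; `rmvn_chol` is a made-up name):

```r
# Simulate n draws from N(mu, Sigma) via the Cholesky factor.
# chol() returns upper-triangular R with t(R) %*% R == Sigma, so if the
# rows of Z are iid N(0, I), the rows of Z %*% R have covariance Sigma.
rmvn_chol <- function(n, mu, Sigma) {
  R <- chol(Sigma)
  Z <- matrix(rnorm(n * length(mu)), nrow = n)
  sweep(Z %*% R, 2, mu, "+")   # shift each row by the mean vector
}

set.seed(1)
X <- rmvn_chol(10000, mu = c(0, 2), Sigma = matrix(c(1, 0.5, 0.5, 1), 2))
colMeans(X)   # close to c(0, 2)
cov(X)        # close to Sigma
```

Dedicated packages such as 'mvnfast' exist because they move this inner loop to C++ (optionally with OpenMP), which matters when n or the dimension is large.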
{"Package":"mvnTest","Version":"1.1-0","Title":"Goodness of Fit Tests for Multivariate Normality","Description":"Routines for assessing multivariate normality. Implements three Wald's type chi-squared tests; non-parametric Anderson-Darling and Cramer-von Mises tests; Doornik-Hansen test, Royston test and Henze-Zirkler test.","Published":"2016-02-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mvoutlier","Version":"2.0.8","Title":"Multivariate Outlier Detection Based on Robust Methods","Description":"Various Methods for Multivariate Outlier Detection.","Published":"2017-01-16","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"mvPot","Version":"0.1.2","Title":"Multivariate Peaks-over-Threshold Modelling for Spatial Extreme\nEvents","Description":"Tools for high-dimensional peaks-over-threshold inference and simulation\n of spatial extremal processes.","Published":"2017-02-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"mvProbit","Version":"0.1-8","Title":"Multivariate Probit Models","Description":"Tools for estimating multivariate probit models,\n calculating conditional and unconditional expectations,\n and calculating marginal effects on conditional and unconditional\n expectations.","Published":"2015-11-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mvprpb","Version":"1.0.4","Title":"Orthant Probability of the Multivariate Normal Distribution","Description":"Computes orthant probabilities of the multivariate normal distribution.","Published":"2014-10-06","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"mvQuad","Version":"1.0-6","Title":"Methods for Multivariate Quadrature","Description":"Provides methods to construct multivariate grids, which can be used\n for multivariate quadrature. These grids can be based on different quadrature\n rules like Newton-Cotes formulas (trapezoidal-, Simpson's- rule, ...) or Gauss\n quadrature (Gauss-Hermite, Gauss-Legendre, ...). 
For the construction of the\n multidimensional grid the product-rule or the combination-technique can be\n applied.","Published":"2016-07-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"MVR","Version":"1.32.0","Title":"Mean-Variance Regularization","Description":"This is a non-parametric method for joint adaptive mean-variance regularization and variance stabilization of high-dimensional data. It is suited for handling difficult problems posed by high-dimensional multivariate datasets (p >> n paradigm). Among those are that the variance is often a function of the mean, variable-specific estimators of variances are not reliable, and test statistics have low power due to a lack of degrees of freedom. Key features include:\n (i) Normalization and/or variance stabilization of the data,\n (ii) Computation of mean-variance-regularized t-statistics (F-statistics to follow),\n (iii) Generation of diverse diagnostic plots,\n (iv) Computationally efficient implementation using C/C++ interfacing and an option for parallel computing to enjoy a faster and easier experience in the R environment.","Published":"2017-05-29","License":"GPL (>= 3) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"mvrtn","Version":"1.0","Title":"Mean and Variance of Truncated Normal Distribution","Description":"Mean, variance, and random variates for left/right truncated normal distributions.","Published":"2014-08-18","License":"LGPL (>= 2.0, < 3) | Mozilla Public License","snapshot_date":"2017-06-23"} {"Package":"mvsf","Version":"1.0","Title":"Shapiro-Francia Multivariate Normality Test","Description":"Generalization of the Shapiro-Francia test for\n multivariate variables.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mvShapiroTest","Version":"1.0","Title":"Generalized Shapiro-Wilk test for multivariate normality","Description":"This package implements the generalization of the Shapiro-Wilk test\n for multivariate normality 
proposed by Villasenor-Alva and\n Gonzalez-Estrada (2009).","Published":"2013-11-04","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"mvSLOUCH","Version":"1.3.3","Title":"Multivariate Stochastic Linear Ornstein-Uhlenbeck Models for\nPhylogenetic Comparative Hypotheses","Description":"Fits multivariate Ornstein-Uhlenbeck types of models to continuous trait data from species related by a common evolutionary history. ","Published":"2017-06-18","License":"GPL (>= 2) | file LICENCE","snapshot_date":"2017-06-23"} {"Package":"mvst","Version":"1.0.1","Title":"Bayesian Inference for the Multivariate Skew-t Model","Description":"Estimates the multivariate skew-t and nested models, as described in the articles Liseo, B., Parisi, A. (2013). Bayesian inference for the multivariate skew-normal model: a population Monte Carlo approach. Comput. Statist. Data Anal. and in Parisi, A., Liseo, B. Objective Bayesian analysis for the multivariate skew-t model (to appear).","Published":"2016-07-25","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"MVT","Version":"0.3","Title":"Estimation and Testing for the Multivariate t-Distribution","Description":"Routines to perform estimation and inference under the multivariate t-distribution.\n Currently, the following methodologies are implemented: multivariate mean and covariance\n estimation, hypothesis testing about the mean, equicorrelation and homogeneity of variances,\n the Wilson-Hilferty transformation, QQ-plots with envelopes and random variate\n generation. Some auxiliary functions are also provided.","Published":"2015-10-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mvtboost","Version":"0.5.0","Title":"Tree Boosting for Multivariate Outcomes","Description":"Fits a multivariate model of decision trees for multiple, continuous outcome variables. A model for each outcome variable is fit separately, selecting predictors that explain covariance in the outcomes. 
Built on top of 'gbm', which fits an ensemble of decision trees to univariate outcomes.","Published":"2016-12-05","License":"GPL (>= 2) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"mvtmeta","Version":"1.0","Title":"Multivariate meta-analysis","Description":"This package contains functions to run fixed effects or\n random effects multivariate meta-analysis.","Published":"2012-07-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mvtnorm","Version":"1.0-6","Title":"Multivariate Normal and t Distributions","Description":"Computes multivariate normal and t probabilities, quantiles,\n random deviates and densities.","Published":"2017-03-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"mvtsplot","Version":"1.0-1","Title":"Multivariate Time Series Plot","Description":"A function for plotting multivariate time series data","Published":"2012-03-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"mwa","Version":"0.4.1","Title":"Causal Inference in Spatiotemporal Event Data","Description":"Matched Wake Analysis (mwa) grants insights into causal relationships in spatiotemporal event data. ","Published":"2015-02-24","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"mwaved","Version":"1.1.4","Title":"Multichannel Wavelet Deconvolution with Additive Long Memory\nNoise","Description":"Computes the Wavelet deconvolution estimate of a common signal\n present in multiple channels that have possible different levels of blur\n and long memory additive error.","Published":"2016-04-10","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"MWLasso","Version":"1.3.1","Title":"Penalized Moving-Window Lasso Method for Genome-Wide Association\nStudies","Description":"The Moving-Window Lasso (MWLasso) method for genome-wide association studies. A window scans the design matrix. For predictors in the same window, their coefficients estimates are smoothed. 
","Published":"2016-08-09","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"MWRidge","Version":"1.0.0","Title":"Two Stage Moving-Window Ridge Method for Prediction and\nEstimation","Description":"A two stage moving-window Ridge method for coefficients estimation and model prediction. In the first stage, moving-window penalty and L1 penalty are applied. In the second stage, ridge regression is applied.","Published":"2016-12-17","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"mxkssd","Version":"1.1","Title":"Efficient mixed-level k-circulant supersaturated designs","Description":"mxkssd is a package that generates efficient balanced\n mixed-level k-circulant supersaturated designs by interchanging\n the elements of the generator vector. The package tries to\n generate a supersaturated design that has EfNOD efficiency greater\n than the user-specified efficiency level (mef). The package also\n displays the progress of generation of an efficient mixed-level\n k-circulant design through a progress bar. The progress of 100\n per cent means that one full round of interchange is completed.\n More than one full round (typically 4-5 rounds) of interchange\n may be required for larger designs.","Published":"2011-08-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"MXM","Version":"0.9.9","Title":"Discovering Multiple, Statistically-Equivalent Signatures","Description":"Feature selection methods for identifying minimal, statistically-equivalent and equally-predictive feature subsets. Bayesian network algorithms and related functions are also included. The package name 'MXM' stands for \"Mens eX Machina\", meaning \"Mind from the Machine\" in Latin. 
","Published":"2017-03-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"mycobacrvR","Version":"1.0","Title":"Integrative immunoinformatics for Mycobacterial diseases in R\nplatform","Description":"The mycobacrvR package contains utilities to provide detailed information for B cell and T cell epitopes for predicted adhesins from various servers such as ABCpred, Bcepred, Bimas, Propred, NetMHC and IEDB. Please refer the URL below to download data files (data_mycobacrvR.zip) used in functions of this package.","Published":"2013-12-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"mycor","Version":"0.1","Title":"Automatic Correlation and Regression Test in a Data Frame","Description":"Perform correlation and linear regression test\n among the numeric columns in a data frame automatically\n and make plots using pairs or lattice::parallelplot.","Published":"2014-10-02","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"myepisodes","Version":"1.1.1","Title":"MyEpisodes RSS/API functions","Description":"Useful functions for accessing MyEpisodes feeds and\n episode information as well as other tv episode related actions\n through www.myepisodes.com","Published":"2012-07-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Myrrix","Version":"1.1","Title":"Interface to Myrrix. Myrrix is a complete, real-time, scalable\nclustering and recommender system, evolved from Apache Mahout","Description":"Recommendation engine based on Myrrix. Myrrix is a complete,\n real-time, scalable clustering and recommender system, evolved from Apache\n Mahout. It uses Alternating Least Squares to build a recommendation engine.","Published":"2013-12-12","License":"Apache License (== 2.0)","snapshot_date":"2017-06-23"} {"Package":"Myrrixjars","Version":"1.0-1","Title":"External jars required for package Myrrix","Description":"External jars required for package Myrrix. 
Myrrix is a\n recommendation engine","Published":"2013-07-26","License":"Apache License (== 2.0)","snapshot_date":"2017-06-23"} {"Package":"myTAI","Version":"0.5.0","Title":"Evolutionary Transcriptomics Analyses","Description":"Investigate the evolution of biological processes by capturing evolutionary signatures in transcriptomes. The aim of this tool is to provide a transcriptome analysis environment for answering questions regarding the evolution of biological processes.","Published":"2017-03-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"mztwinreg","Version":"1.0-1","Title":"Regression Models for Monozygotic Twin Data","Description":"Linear and logistic regression models for quantitative genetic analysis of data from monozygotic twins.","Published":"2015-01-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"nabor","Version":"0.4.7","Title":"Wraps 'libnabo', a Fast K Nearest Neighbour Library for Low\nDimensions","Description":"An R wrapper for 'libnabo', an exact or approximate k nearest\n neighbour library which is optimised for low dimensional spaces (e.g. 3D).\n 'libnabo' has speed and space advantages over the 'ANN' library wrapped by\n package 'RANN'. 'nabor' includes a knn function that is designed as a \n drop-in replacement for 'RANN' function nn2. 
In addition, objects which \n include the k-d tree search structure can be returned to speed up repeated \n queries of the same set of target points.","Published":"2017-05-19","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"NADA","Version":"1.6-1","Title":"Nondetects and Data Analysis for Environmental Data","Description":"Contains methods described by Dennis Helsel in \n his book \"Nondetects And Data Analysis: Statistics \n for Censored Environmental Data\".","Published":"2017-03-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"nadiv","Version":"2.14.3.1","Title":"(Non)Additive Genetic Relatedness Matrices","Description":"Constructs (non)additive genetic relationship matrices, and their inverses, from a pedigree to be used in linear mixed effect models (A.K.A. the 'animal model'). Also includes other functions to facilitate the use of animal models. Some functions have been created to be used in conjunction with the R package 'asreml' for the 'ASReml' software, which can be obtained upon purchase from 'VSN' international (http://www.vsni.co.uk/software/asreml).","Published":"2016-09-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"NAEPprimer","Version":"1.0.1","Title":"The NAEP Primer","Description":"Contains a sample of the 2005 Grade 8 Mathematics data from the National Assessment of Educational Progress (NAEP). 
This data set is called the NAEP Primer.","Published":"2016-04-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"naivebayes","Version":"0.9.1","Title":"High Performance Implementation of the Naive Bayes Algorithm","Description":"High performance implementation of the Naive Bayes algorithm.","Published":"2017-01-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"NAM","Version":"1.5.1","Title":"Nested Association Mapping","Description":"Designed for association studies in nested association mapping (NAM) panels, also handling experimental and random panels. It includes tools for genome-wide associations of multiple populations, marker quality control, population genetics analysis, genome-wide prediction, solving mixed models and finding variance components through likelihood and Bayesian methods.","Published":"2017-04-11","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"namedCapture","Version":"2017.06.01","Title":"Named Capture Regular Expressions","Description":"User-friendly wrappers for \n named capture regular expressions.","Published":"2017-06-19","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"namespace","Version":"0.9.1","Title":"Provide namespace management functions not (yet) present in base\nR","Description":"This package provides user-level functions to manage\n namespaces not (yet) available in base R: 'registerNamespace',\n 'unregisterNamespace', 'makeNamespace', and\n 'getRegisteredNamespace'\n\n ('makeNamespaces' is extracted from the R 'base' package source code:\n src/library/base/R/namespace.R)","Published":"2012-09-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"nandb","Version":"0.2.0","Title":"Number and Brightness Image Analysis","Description":"Functions for calculation of molecular number and brightness from \n images, as detailed in Digman et al. 2008 . 
\n Includes the implementation of the novel \"automatic detrending\" technique.","Published":"2017-05-29","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"nanop","Version":"2.0-6","Title":"Tools for Nanoparticle Simulation and Calculation of PDF and\nTotal Scattering Structure Function","Description":"This software package implements functions to simulate spherical, ellipsoid and cubic polyatomic nanoparticles with arbitrary crystal structures and to calculate the associated pair-distribution function and X-ray/neutron total-scattering signals. It also provides a target function that can be used for simultaneous fitting of small- and wide-angle total scattering data in real and reciprocal spaces. The target function can be generated either as a sum of weighted residuals for individual datasets or as a vector of residuals suitable for optimization using multi-criteria algorithms (e.g. Pareto methods).","Published":"2015-09-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"NanoStringNorm","Version":"1.1.21","Title":"Normalize NanoString miRNA and mRNA Data","Description":"A set of tools for normalizing, diagnostics and visualization of NanoString nCounter data.","Published":"2015-11-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"nanotime","Version":"0.2.0","Title":"Nanosecond-Resolution Time for R","Description":"Full 64-bit resolution date and time support with resolution up\n to nanosecond granularity is provided, with easy transition to and from the\n standard 'POSIXct' type.","Published":"2017-06-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"NAPPA","Version":"2.0.1","Title":"Performs the Processing and Normalisation of Nanostring miRNA\nand mRNA Data","Description":"Enables the processing and normalisation of the standard mRNA data output from the Nanostring nCounter software.","Published":"2015-03-03","License":"GPL-2","snapshot_date":"2017-06-23"} 
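The 'naivebayes' entry above advertises a high-performance implementation of the Naive Bayes algorithm; the algorithm itself is compact enough to sketch directly. Below is a minimal Gaussian naive Bayes classifier in base R — an illustrative sketch of the technique only, not that package's API (the function names `nb_train` and `nb_predict` are made up here):

```r
# Gaussian naive Bayes: per class, store the prior and per-feature
# mean/sd; classify by the class with the largest posterior log-score.
nb_train <- function(X, y) {
  lapply(split(as.data.frame(X), y), function(d) {
    list(prior = nrow(d) / nrow(X),
         mu    = colMeans(d),
         sd    = apply(d, 2, sd))
  })
}

nb_predict <- function(model, x) {
  # log prior + sum of per-feature Gaussian log-likelihoods
  scores <- sapply(model, function(m)
    log(m$prior) + sum(dnorm(x, m$mu, m$sd, log = TRUE)))
  names(which.max(scores))
}

model <- nb_train(iris[, 1:4], iris$Species)
nb_predict(model, c(5.1, 3.5, 1.4, 0.2))  # a setosa-like flower -> "setosa"
```

The "naive" conditional-independence assumption is what makes training a single pass over the data, which is also why optimized implementations of this algorithm scale so well.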
{"Package":"naptime","Version":"1.3.0","Title":"A Flexible and Robust Sys.sleep() Replacement","Description":"Provides a near drop-in replacement for base::Sys.sleep() that allows more types of input\n to produce delays in the execution of code and can silence/prevent typical sources of error.","Published":"2017-02-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"narray","Version":"0.2.2","Title":"Subset- And Name-Aware Array Utility Functions","Description":"Stacking arrays according to dimension names, subset-aware\n splitting and mapping of functions, intersecting along arbitrary\n dimensions, converting to and from data.frames, and many other helper\n functions.","Published":"2017-03-12","License":"Apache License (== 2.0) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"nasadata","Version":"0.9.0","Title":"Interface to Various NASA API's","Description":"Provides functions to access NASA's Earth Imagery and Assets API\n and the Earth Observatory Natural Event Tracker (EONET) webservice.","Published":"2016-05-07","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"nasaweather","Version":"0.1","Title":"Collection of datasets from the ASA 2006 data expo","Description":"This package contains tidied data from the ASA 2006 data expo,\n as well as a number of useful other related data sets.","Published":"2014-07-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"nat","Version":"1.8.9","Title":"NeuroAnatomy Toolbox for Analysis of 3D Image Data","Description":"NeuroAnatomy Toolbox (nat) enables analysis and visualisation of 3D\n biological image data, especially traced neurons. Reads and writes 3D images\n in NRRD and 'Amira' AmiraMesh formats and reads surfaces in 'Amira' hxsurf\n format. Traced neurons can be imported from and written to SWC and 'Amira'\n LineSet and SkeletonGraph formats. 
These data can then be visualised in 3D\n via 'rgl', manipulated including applying calculated registrations, e.g.\n using the 'CMTK' registration suite, and analysed. There is also a simple\n representation for neurons that have been subjected to 3D skeletonisation\n but not formally traced; this allows morphological comparison between\n neurons including searches and clustering (via the 'nat.nblast' extension\n package).","Published":"2017-05-08","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"nat.nblast","Version":"1.6.2","Title":"NeuroAnatomy Toolbox ('nat') Extension for Assessing Neuron\nSimilarity and Clustering","Description":"Extends package 'nat' (NeuroAnatomy Toolbox) by providing a\n collection of NBLAST-related functions.","Published":"2016-04-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"nat.templatebrains","Version":"0.8.2","Title":"NeuroAnatomy Toolbox ('nat') Extension for Handling Template\nBrains","Description":"Extends package 'nat' (NeuroAnatomy Toolbox) by providing objects\n and functions for handling template brains.","Published":"2017-04-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"nat.utils","Version":"0.5.1","Title":"File System Utility Functions for 'NeuroAnatomy Toolbox'","Description":"Utility functions that may be of general interest but are \n specifically required by the 'NeuroAnatomy Toolbox' ('nat'). Includes\n functions to provide a basic make style system to update files based on\n timestamp information, file locking and 'touch' utility. Convenience \n functions for working with file paths include 'abs2rel', 'split_path' \n and 'common_path'. 
Finally there are utility functions for working with \n 'zip' and 'gzip' files including integrity tests.","Published":"2015-07-04","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"natserv","Version":"0.1.4","Title":"'NatureServe' Interface","Description":"Interface to 'NatureServe' ().\n Includes methods to get data, image metadata, search taxonomic names,\n and make maps.","Published":"2017-01-03","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"naturalsort","Version":"0.1.3","Title":"Natural Ordering","Description":"Provides functions related to human natural ordering.\n It handles adjacent digits in a character sequence as a number so that\n a natural sort function arranges a character vector by those numbers, not digit\n characters. This is typically seen when operating systems list file names. For\n example, the sequence a-1.png, a-2.png, a-10.png looks naturally ordered because 1\n < 2 < 10 and the natural sort algorithm arranges it so, whereas general sort algorithms\n arrange it as a-1.png, a-10.png, a-2.png owing to their third and fourth\n characters.","Published":"2016-08-30","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"nauf","Version":"1.1.0","Title":"Regression with NA Values in Unordered Factors","Description":"Fits regressions where unordered factors can be set to NA in \n subsets of the data where they are not applicable or otherwise not\n contrastive by using sum contrasts and setting NA values to zero.","Published":"2017-06-14","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"NB","Version":"0.9","Title":"Maximum Likelihood method in estimating effective population\nsize from genetic data","Description":"Estimate the effective population size of a closed population using genetic data collected from two or more data points. 
","Published":"2014-10-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"NB.MClust","Version":"1.1.1","Title":"Negative Binomial Model-Based Clustering","Description":"Model-based clustering of high-dimensional non-negative\n data that follow a Generalized Negative Binomial distribution. All functions \n in this package apply to either continuous or integer data. Correlation\n between variables is allowed, while samples are assumed to be independent.","Published":"2017-06-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"nbc4va","Version":"1.0","Title":"Bayes Classifier for Verbal Autopsy Data","Description":"An implementation of the Naive Bayes Classifier (NBC) algorithm used for Verbal Autopsy (VA) built on code from Miasnikof et al (2015) .","Published":"2016-07-11","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"NbClust","Version":"3.0","Title":"Determining the Best Number of Clusters in a Data Set","Description":"It provides 30 indices for determining the optimal number of clusters in a data set and offers the user the best clustering scheme from the different results. ","Published":"2015-04-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"nbconvertR","Version":"1.0.2","Title":"Vignette Engine Wrapping IPython Notebooks","Description":"\n Calls the 'Jupyter'/'IPython' script 'nbconvert' to create vignettes from notebooks.\n Those notebooks ('.ipynb' files) are files containing rich text, code, and its output.\n Code cells can be edited and evaluated interactively.\n See for more information.","Published":"2015-07-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"NBDdirichlet","Version":"1.3","Title":"NBD-Dirichlet Model of Consumer Buying Behavior for Marketing\nResearch","Description":"The Dirichlet (aka NBD-Dirichlet) model describes the\n purchase incidence and brand choice of consumer products. 
We\n estimate the model and summarize various theoretical quantities\n of interest to marketing researchers. Also provides functions\n for making tables that compare observed and theoretical\n statistics.","Published":"2016-03-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"nbpMatching","Version":"1.5.1","Title":"Functions for Optimal Non-Bipartite Matching","Description":"Perform non-bipartite matching and matched randomization. A\n \"bipartite\" matching utilizes two separate groups, e.g. smokers being\n matched to nonsmokers or cases being matched to controls. A \"non-bipartite\"\n matching creates mates from one big group, e.g. 100 hospitals being\n randomized for a two-arm cluster randomized trial or 5000 children who\n have been exposed to various levels of secondhand smoke and are being\n paired to form a greater exposure vs. lesser exposure comparison. At the\n core of a non-bipartite matching is an N x N distance matrix for N potential\n mates. The distance between two units expresses a measure of similarity or\n quality as mates (the lower the better). The 'gendistance()' and\n 'distancematrix()' functions assist in creating this. The 'nonbimatch()'\n function creates the matching that minimizes the total sum of distances\n between mates; hence, it is referred to as an \"optimal\" matching. The\n 'assign.grp()' function aids in performing a matched randomization. 
Note\n bipartite matching can be performed using the prevent option in\n 'gendistance()'.","Published":"2016-04-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"NBPSeq","Version":"0.3.0","Title":"Negative Binomial Models for RNA-Sequencing Data","Description":"Negative Binomial (NB) models for two-group comparisons and\n regression inferences from RNA-Sequencing Data.","Published":"2014-04-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"NCA","Version":"2.0","Title":"Necessary Condition Analysis","Description":"Performs a Necessary Condition Analysis (NCA). (Dul, J. 2016. Necessary Condition Analysis (NCA). ''Logic and Methodology of 'Necessary but not Sufficient' causality.\" Organizational Research Methods 19(1), 10-52)\n NCA identifies necessary (but not sufficient) conditions in datasets. Instead of drawing a regression line ''through the middle of the data'' in an xy-plot, NCA draws the ceiling line. The ceiling line y = f(x) separates the area with observations from the area without observations.\n (Nearly) all observations are below the ceiling line: y <= f(x). The empty zone is in the upper left hand corner of the xy-plot (with the convention that the x-axis is ''horizontal'' and the y-axis is ''vertical'' and that values increase ''upwards'' and ''to the right''). The ceiling line is a (piecewise) linear non-decreasing line: a linear step function or a straight line. It indicates which level of x (e.g. an effort or input) is necessary but not sufficient for a (desired) level of y (e.g. good performance or output).","Published":"2016-05-18","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"nCal","Version":"2016.7-31","Title":"Nonlinear Calibration","Description":"Performs nonlinear calibration and curve fitting for data from Luminex, RT-PCR, ELISA, RPPA etc. 
Its precursor is Ruminex.","Published":"2016-07-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ncappc","Version":"0.2.1.1","Title":"NCA Calculation and Population PK Model Diagnosis","Description":"A flexible tool is presented here that can perform\n (i) traditional non-compartmental analysis (NCA) and\n (ii) simulation-based posterior predictive checks for a population\n pharmacokinetic (PK) model using NCA metrics.","Published":"2016-02-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ncar","Version":"0.3.4","Title":"Noncompartmental Analysis for Pharmacokinetic Data for Report","Description":"Conduct a noncompartmental analysis as closely as possible to the most widely used commercial software for pharmacokinetic analysis, i.e. 'Phoenix(R) WinNonlin(R)' .\n Some features are\n 1) CDISC SDTM terms\n 2) Automatic slope selection with the same criterion of WinNonlin(R)\n 3) Supporting both 'linear-up linear-down' and 'linear-up log-down' method\n 4) Interval(partial) AUCs with 'linear' or 'log' interpolation method\n 5) Produce pdf, rtf, text report files.\n * Reference: Gabrielsson J, Weiner D. Pharmacokinetic and Pharmacodynamic Data Analysis - Concepts and Applications. 5th ed. 2016. (ISBN:9198299107).","Published":"2017-03-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ncbit","Version":"2013.03.29","Title":"retrieve and build NCBI taxonomic data","Description":"Makes NCBI taxonomic data locally available and\n searchable as an R object.","Published":"2013-04-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ncdf.tools","Version":"0.7.1.295","Title":"Easier 'NetCDF' File Handling","Description":"Set of tools to simplify the handling of 'NetCDF' files with the 'RNetCDF' package. \n Most functions are wrappers of basic functions from the 'RNetCDF' package to easily run combinations of these \n functions for frequently encountered tasks. 
","Published":"2015-05-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ncdf4","Version":"1.16","Title":"Interface to Unidata netCDF (Version 4 or Earlier) Format Data\nFiles","Description":"Provides a high-level R interface to data files written using Unidata's netCDF library (version 4 or earlier), which are binary data files that are portable across platforms and include metadata information in addition to the data sets. Using this package, netCDF files (either version 4 or \"classic\" version 3) can be opened and data sets read in easily. It is also easy to create new netCDF dimensions, variables, and files, in either version 3 or 4 format, and manipulate existing netCDF files. This package replaces the former ncdf package, which only worked with netcdf version 3 files. For various reasons the names of the functions have had to be changed from the names in the ncdf package. The old ncdf package is still available at the URL given below, if you need to have backward compatibility. It should be possible to have both the ncdf and ncdf4 packages installed simultaneously without a problem. However, the ncdf package does not provide an interface for netcdf version 4 files.","Published":"2017-04-01","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"ncdf4.helpers","Version":"0.3-3","Title":"Helper functions for use with the ncdf4 package","Description":"This package contains a collection of helper functions for dealing\n with NetCDF files opened using ncdf4.","Published":"2014-02-21","License":"LGPL-2.1","snapshot_date":"2017-06-23"} {"Package":"ncdump","Version":"0.0.3","Title":"Extract Metadata from 'NetCDF' Files as Data Frames","Description":"Tools for handling 'NetCDF' metadata in data frames. The metadata is provided\n as relations in tabular form, to avoid having to scan printed header output or to navigate \n nested lists of raw metadata. 
","Published":"2017-05-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"nCDunnett","Version":"1.1.0","Title":"Noncentral Dunnett's Test Distribution","Description":"Computes the noncentral Dunnett's test distribution (pdf, cdf and quantile) and generates random numbers. ","Published":"2015-11-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ncf","Version":"1.1-7","Title":"Spatial Nonparametric Covariance Functions","Description":"R functions for analyzing spatial (cross-)covariance: the\n nonparametric (cross-)covariance, the spline correlogram, the\n nonparametric phase coherence function, and related.","Published":"2016-04-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ncg","Version":"0.1.1","Title":"Computes the noncentral gamma function","Description":"Computes the noncentral gamma function: pdf, cdf, quantile\n function and inverse for the noncentrality parameter.","Published":"2012-07-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"NCmisc","Version":"1.1.5","Title":"Miscellaneous Functions for Creating Adaptive Functions and\nScripts","Description":"A set of handy functions. 
Includes a versatile one line progress bar, one \n line function timer with detailed output, time delay function, text histogram, object \n preview, CRAN package search, simpler package installer, Linux command install check, \n a flexible Mode function, top function, simulation of correlated data, and more.","Published":"2017-01-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ncvreg","Version":"3.9-1","Title":"Regularization Paths for SCAD and MCP Penalized Regression\nModels","Description":"Efficient algorithms for fitting regularization paths for linear or\n logistic regression models penalized by MCP or SCAD, with optional additional\n L2 penalty.","Published":"2017-04-27","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ndjson","Version":"0.2.0","Title":"Wicked-Fast Streaming 'JSON' ('ndjson') Reader","Description":"Streaming 'JSON' ('ndjson') has one 'JSON' record per-line and many modern\n 'ndjson' files contain large numbers of records. These constructs may not be\n columnar in nature, but it's often useful to read in these files and \"flatten\"\n the structure out to work in an R data.frame-like context. 
Functions are provided that\n make it possible to read in plain 'ndjson' files or compressed ('gz') 'ndjson'\n files and either validate the format of the records or create \"flat\" data.table\n ('tbl_dt') structures from them.","Published":"2016-08-27","License":"AGPL","snapshot_date":"2017-06-23"} {"Package":"ndl","Version":"0.2.17","Title":"Naive Discriminative Learning","Description":"Naive discriminative learning implements learning and\n classification models based on the Rescorla-Wagner equations and their\n equilibrium equations.","Published":"2015-11-18","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ndtv","Version":"0.10.0","Title":"Network Dynamic Temporal Visualizations","Description":"Renders dynamic network data from 'networkDynamic' objects as movies, interactive animations, or other representations of changing relational structures and attributes.","Published":"2016-05-07","License":"GPL-3 + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"NEArender","Version":"1.4","Title":"Network Enrichment Analysis","Description":"Performs network enrichment analysis against functional gene sets.\n Benchmarks networks. Renders raw gene profile matrices of dimensionality 'Ngenes\n x Nsamples' into the space of gene set (typically pathway) enrichment scores of\n dimensionality 'Npathways x Nsamples'.","Published":"2016-12-05","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"nearfar","Version":"1.1","Title":"Near-Far Matching","Description":"Near-far matching is a study design technique for\n preprocessing observational data to mimic a pair-randomized trial.\n Individuals are matched to be near on measured confounders and far\n on levels of an instrumental variable.","Published":"2017-06-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"neariso","Version":"1.0","Title":"Near-Isotonic Regression","Description":"This package implements a path algorithm for Near-Isotonic\n Regression. 
For more details see the help files.","Published":"2011-01-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"neat","Version":"1.1","Title":"Efficient Network Enrichment Analysis Test","Description":"Includes functions and examples to compute NEAT, the Network Enrichment Analysis Test described in Signorelli et al. (2016, ).","Published":"2017-05-25","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"NeatMap","Version":"0.3.6.2","Title":"Non-clustered heatmap alternatives","Description":"NeatMap is a package to create heatmap like plots in 2 and\n 3 dimensions, without the need for cluster analysis. Like the\n heatmap, the plots created by NeatMap display both a\n dimensionally reduced representation of the data as well as the\n data itself. They are intended to be used in conjunction with\n dimensional reduction techniques such as PCA.","Published":"2014-07-29","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"needs","Version":"0.0.3","Title":"Attaches and Installs Packages","Description":"A simple function for easier package loading and auto-installation.","Published":"2016-03-28","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"needy","Version":"0.2","Title":"needy","Description":"needy is a small utility library designed to make testing function\n inputs less difficult. R is a dynamically typed language, but larger\n projects need input checking for scalability. 
needy offers a single\n function, require_a( ), which lets you specify the traits an input object\n should have, such as class, size, numerical properties or number of\n parameters, while reducing boilerplate code and aiding debugging.","Published":"2013-07-31","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"NEff","Version":"1.1","Title":"Calculating Effective Sizes Based on Known Demographic\nParameters of a Population","Description":"Effective population sizes (often abbreviated as \"Neff\") are essential in biodiversity monitoring and conservation. For the first time, calculating effective sizes with data obtained within less than a generation but considering demographic parameters is possible. This individual based model uses demographic parameters of a population to calculate annual effective sizes and effective population sizes (per generation). A defined number of alleles and loci will be used to simulate the genotypes of the individuals. Stepwise mutation rates can be included. Variations in life history parameters (sex ratio, sex-specific survival, recruitment rate, reproductive skew) are possible. These results will help managers to define existing populations as viable or not. 
","Published":"2015-05-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"NegBinBetaBinreg","Version":"1.0","Title":"Negative Binomial and Beta Binomial Bayesian Regression Models","Description":"Negative Binomial regression with modeling of the mean and shape or the mean and variance, and Beta Binomial regression with modeling of the mean and dispersion.","Published":"2016-11-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"negenes","Version":"1.0-5","Title":"Estimating the Number of Essential Genes in a Genome","Description":"Estimating the number of essential genes in a genome on the basis of data from a random transposon mutagenesis experiment, through the use of a Gibbs sampler.","Published":"2016-05-22","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"neighbr","Version":"1.0","Title":"Classification, Regression, Clustering with K Nearest Neighbors","Description":"Classification, regression, and clustering with k nearest neighbors\n algorithm. Implements several distance and similarity measures, covering\n continuous and logical features. Outputs ranked neighbors. Most features of\n this package are directly based on the PMML specification for KNN.","Published":"2017-02-23","License":"GPL (>= 2.1)","snapshot_date":"2017-06-23"} {"Package":"neldermead","Version":"1.0-10","Title":"R port of the Scilab neldermead module","Description":"Provides several direct search optimization algorithms based on the\n simplex method. The provided algorithms are direct search algorithms, i.e.\n algorithms which do not use the derivative of the cost function. They are\n based on the update of a simplex. 
The following algorithms are available: the\n fixed shape simplex method of Spendley, Hext and Himsworth (unconstrained\n optimization with a fixed shape simplex), the variable shape simplex method of\n Nelder and Mead (unconstrained optimization with a variable shape\n simplex), and Box's complex method (constrained optimization with a variable\n shape simplex).","Published":"2015-01-11","License":"CeCILL-2","snapshot_date":"2017-06-23"} {"Package":"neotoma","Version":"1.7.0","Title":"Access to the Neotoma Paleoecological Database Through R","Description":"Access paleoecological datasets from the Neotoma Paleoecological\n Database using the published API (). The functions\n in this package access various pre-built API functions and attempt to return\n the results from Neotoma in a usable format for researchers and the public.","Published":"2017-06-21","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"nephro","Version":"1.2","Title":"Utilities for Nephrology","Description":"Set of functions to estimate renal function and other phenotypes of interest in nephrology based on different biochemical traits. MDRD, CKD-EPI, and Virga equations are compared in Pattaro (2013) , where the respective references are given. In addition, the software includes Stevens (2008) and Cockcroft (1976) formulas.","Published":"2017-05-10","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"NEpiC","Version":"1.0.1","Title":"Network Assisted Algorithm for Epigenetic Studies Using Mean and\nVariance Combined Signals","Description":"Package for a Network assisted algorithm for Epigenetic studies using mean and variance Combined signals: NEpiC. 
NEpiC combines signals from both mean and variance differences in methylation level between case and control groups, searching for differentially methylated sub-networks (modules) using the protein-protein interaction network.","Published":"2016-03-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"NestedCategBayesImpute","Version":"1.0.0","Title":"Modeling and Generating Synthetic Versions of Nested Categorical\nData in the Presence of Impossible Combinations","Description":"This tool set provides functions to fit the nested Dirichlet process mixture of products of multinomial distributions (NDPMPM) model for nested categorical household data in the presence of impossible combinations. It has direct applications in generating synthetic nested household data.","Published":"2016-11-17","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"NestedCohort","Version":"1.1-3","Title":"Survival Analysis for Cohorts with Missing Covariate Information","Description":"Estimate hazard ratios, survival curves and attributable\n risks for cohorts with missing covariates, using Cox models or\n Kaplan-Meier estimators for strata. This handles studies nested\n within cohorts, such as case-cohort studies with stratified\n sampling. See\n http://www.r-project.org/doc/Rnews/Rnews_2008-1.pdf","Published":"2013-02-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"nestedRanksTest","Version":"0.2","Title":"Mann-Whitney-Wilcoxon Test for Nested Ranks","Description":"Calculate a Mann-Whitney-Wilcoxon test for a difference between treatment levels using nested ranks. This test can be used when observations are structured into several groups and each group has received both treatment levels. The p-value is determined via bootstrapping. 
The nested ranks test is intended to be one possible mixed-model extension of the Mann-Whitney-Wilcoxon test, for which treatment is a fixed effect and group membership is a random effect.","Published":"2015-06-06","License":"LGPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"net.security","Version":"0.1.0","Title":"Security Standards Data Sets","Description":"Provides functions for security standards data management. It comes with data frames of 1000 observations for each security standard and updates are possible from official sources to build updated data sets.","Published":"2017-04-01","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"netassoc","Version":"0.6.3","Title":"Inference of Species Associations from Co-Occurrence Data","Description":"Infers species associations from community matrices. Uses local and (optional) regional-scale co-occurrence data by comparing observed partial correlation coefficients between species to those estimated from regional species distributions. Extends Gaussian graphical models to a null modeling framework. Provides interface to a variety of inverse covariance matrix estimation methods. ","Published":"2017-06-05","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"netClass","Version":"1.2.1","Title":"netClass: An R Package for Network-Based Biomarker Discovery","Description":"netClass is an R package for network-based feature (gene)\n selection for biomarker discovery by integrating biological\n information. 
This package adapts the following 5 algorithms\n for classifying and predicting gene expression data using prior\n knowledge: 1) average gene expression of pathway (aep); 2)\n pathway activities classification (PAC); 3) Hub network\n Classification (hubc); 4) filter via top ranked genes (FrSVM);\n 5) network smoothed t-statistic (stSVM).","Published":"2013-12-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"NetCluster","Version":"0.2","Title":"Clustering for networks","Description":"Facilitates network clustering and evaluation of cluster\n configurations.","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"netcoh","Version":"0.2","Title":"Statistical Modeling with Network Cohesion","Description":"Model fitting procedures for regression with network cohesion effects, when a network connecting sample individuals is available in a regression problem. In the future, other commonly used statistical models will be added, such as gaussian graphical model.","Published":"2016-05-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"netCoin","Version":"0.2.5","Title":"Interactive Networks with R","Description":"Create interactive networked coincidences. It joins the data analysis power of R to study coincidences and the visualization libraries of JavaScript in one package.","Published":"2017-03-31","License":"CC BY-NC-SA 4.0","snapshot_date":"2017-06-23"} {"Package":"NetComp","Version":"1.6","Title":"Network Generation and Comparison","Description":"This package contains functions to carry out high\n throughput data analysis and to conduct data set comparisons.\n Similarity matrices from high throughput phenotypic data\n containing uninformative (e.g. wild type) or missing data can\n be calculated to report similarity of response. 
A suite of\n graph comparisons using an adjacency or correlation matrix\n format are included to facilitate quick network analysis.","Published":"2012-08-31","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"NetData","Version":"0.3","Title":"Network Data for McFarland's SNA R labs","Description":"This package contains all data needed for Dan McFarland's\n SNA R labs.","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"netdiffuseR","Version":"1.17.0","Title":"Analysis of Diffusion and Contagion Processes on Networks","Description":"Empirical statistical analysis, visualization and simulation\n of diffusion and contagion processes on networks. The package implements\n algorithms for calculating network diffusion statistics such as transmission\n rate, hazard rates, exposure models, network threshold levels, infectiousness\n (contagion), and susceptibility. The package is inspired by work published in\n Valente, et al., (2015) ; Valente (1995)\n , Myers (2000) , Iyengar and others\n (2011) , Burt (1987) ; among\n others.","Published":"2016-11-11","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"netgen","Version":"1.3","Title":"Network Generator for Combinatorial Graph Problems","Description":"Methods for the generation of a wide range of network geographies,\n e.g., grid networks or clustered networks. 
Useful for the generation of\n benchmarking instances for the investigation of, e.g., Vehicle-Routing-Problems\n or Travelling Salesperson Problems.","Published":"2016-01-22","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"netgsa","Version":"3.0","Title":"Network-Based Gene Set Analysis","Description":"Carry out Network-based Gene Set Analysis by incorporating external information about interactions among genes, as well as novel interactions learned from data.","Published":"2016-06-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"NetIndices","Version":"1.4.4","Title":"Estimating network indices, including trophic structure of\nfoodwebs in R","Description":"Given a network (e.g. a food web), estimates several network indices. These include: Ascendency network indices, Direct and indirect dependencies, Effective measures, Environ network indices, General network indices, Pathway analysis, Network uncertainty indices and constraint efficiencies and the trophic level and omnivory indices of food webs.","Published":"2014-12-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"netmeta","Version":"0.9-5","Title":"Network Meta-Analysis using Frequentist Methods","Description":"A comprehensive set of functions providing frequentist methods for network meta-analysis and supporting Schwarzer et al. (2015) , Chapter 8 \"Network Meta-Analysis\":\n - frequentist network meta-analysis following Rücker (2012) ;\n - net heat plot and design-based decomposition of Cochran's Q according to Krahn et al. (2013) ;\n - measures characterizing the flow of evidence between two treatments by König et al. 
(2013) ;\n - ranking of treatments (frequentist analogue of SUCRA) according to Rücker & Schwarzer (2015) ;\n - partial order of treatment rankings ('poset') and Hasse diagram for 'poset' (Carlsen & Bruggemann, 2014) ;\n - split direct and indirect evidence to check consistency (Dias et al., 2010) ;\n - league table with network meta-analysis results;\n - automated drawing of network graphs described in Rücker & Schwarzer (2016) .","Published":"2017-05-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"NetOrigin","Version":"1.0-2","Title":"Origin Estimation for Propagation Processes on Complex Networks","Description":"Performs network-based source estimation. Different approaches are available: effective distance median, recursive backtracking, and centrality-based source estimation. Additionally, we provide public transportation network data as well as methods for data preparation, source estimation performance analysis and visualization.","Published":"2016-07-07","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"NetPreProc","Version":"1.1","Title":"Network Pre-Processing and Normalization","Description":"Package for the pre-processing and normalization of graphs.","Published":"2015-12-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"NetRep","Version":"1.0.4","Title":"Permutation Testing Network Module Preservation Across Datasets","Description":"Functions for assessing the replication/preservation of a network \n module's topology across datasets through permutation testing.","Published":"2016-11-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"nets","Version":"0.8","Title":"Network Estimation for Time Series","Description":"Sparse VAR estimation based on LASSO.","Published":"2016-03-04","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"NetSim","Version":"0.9","Title":"A Social Networks Simulation Tool in R","Description":"NetSim allows the user to combine and simulate a variety of micro-models to 
research their impact on the macro-features of social networks. ","Published":"2013-12-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"NetSwan","Version":"0.1","Title":"Network Strengths and Weaknesses Analysis","Description":"A set of functions for studying network robustness, resilience and vulnerability. ","Published":"2015-11-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"nettools","Version":"1.0.1","Title":"A Network Comparison Framework","Description":"A collection of network inference methods for co-expression networks, quantitative network distances and a novel framework for network stability analysis.","Published":"2014-09-12","License":"CC BY-NC-SA 4.0","snapshot_date":"2017-06-23"} {"Package":"NetWeaver","Version":"0.0.2","Title":"Graphic Presentation of Complex Genomic and Network Data\nAnalysis","Description":"Implements various simple function utilities and flexible pipelines to generate circular images for visualizing complex genomic and network data analysis features.","Published":"2017-01-11","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"network","Version":"1.13.0","Title":"Classes for Relational Data","Description":"Tools to create and modify network objects. The network class can represent a range of relational data types, and supports arbitrary vertex/edge/graph attributes.","Published":"2015-09-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"NetworkChange","Version":"0.2","Title":"Bayesian Package for Network Changepoint Analysis","Description":"Network changepoint analysis for undirected network data. The package implements a hidden Markov multilinear tensor regression model (Park and Sohn, 2017, ). 
Functions for break number detection using the approximate marginal likelihood and WAIC are also provided.","Published":"2017-05-30","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"NetworkComparisonTest","Version":"2.0.1","Title":"Statistical Comparison of Two Networks Based on Three Invariance\nMeasures","Description":"This permutation based hypothesis test, suited for Gaussian and\n binary data, assesses the difference between two networks based on several\n invariance measures (network structure invariance, global strength invariance,\n edge invariance). Network structures are estimated with l1-regularized partial\n correlations (Gaussian data) or with l1-regularized logistic regression (eLasso,\n binary data). Suited for comparison of independent and dependent samples\n (currently, only for one group measured twice).","Published":"2016-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"networkD3","Version":"0.4","Title":"D3 JavaScript Network Graphs from R","Description":"Creates 'D3' 'JavaScript' network, tree, dendrogram, and Sankey\n graphs from 'R'.","Published":"2017-03-18","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"networkDynamic","Version":"0.9.0","Title":"Dynamic Extensions for Network Objects","Description":"Simple interface routines to facilitate the handling of network objects with complex intertemporal data. 
This is a part of the \"statnet\" suite of packages for network analysis.","Published":"2016-01-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"networkDynamicData","Version":"0.2.1","Title":"Dynamic (Longitudinal) Network Datasets","Description":"A collection of dynamic network data sets from various sources and multiple authors represented as 'networkDynamic'-formatted objects.","Published":"2016-01-12","License":"GPL-3 + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"NetworkInference","Version":"1.1.0","Title":"Inferring Latent Diffusion Networks","Description":"This is an R implementation of the netinf algorithm (Gomez Rodriguez, Leskovec, and Krause, 2010). Given a set of events that spread between a set of nodes the algorithm infers the most likely stable diffusion network that is underlying the diffusion process.","Published":"2017-06-06","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"networkreporting","Version":"0.1.1","Title":"Tools for using Network Reporting Estimators","Description":"Functions useful\n for producing estimates from data that were collected using network\n reporting techniques like network scale-up, indirect sampling,\n network reporting, and sibling history.","Published":"2016-12-05","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"NetworkRiskMeasures","Version":"0.1.2","Title":"Risk Measures for (Financial) Networks","Description":"Implements some risk measures for (financial) networks, such as DebtRank, Impact Susceptibility, Impact Diffusion and Impact Fluidity. ","Published":"2017-03-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"networksis","Version":"2.1-3","Title":"Simulate Bipartite Graphs with Fixed Marginals Through\nSequential Importance Sampling","Description":"Tools to simulate bipartite networks/graphs with the\n degrees of the nodes fixed and specified. 
'networksis' is part\n of the 'statnet' suite of packages for network analysis.","Published":"2015-04-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"networkTomography","Version":"0.3","Title":"Tools for network tomography","Description":"networkTomography implements the methods developed and evaluated in\n Blocker and Airoldi (2011) and Airoldi and Blocker (2012). These include the\n authors' own dynamic multilevel model with calibration based upon a Gaussian\n state-space model in addition to implementations of the methods of Tebaldi &\n West (1998; Poisson-Gamma model with MCMC sampling), Zhang et al. (2002;\n tomogravity), Cao et al. (2000; Gaussian model with mean-variance relation),\n and Vardi (1996; method of moments). Data from the 1router network of Cao et\n al. (2000), the Abilene network of Fang et al. (2007), and the CMU network\n of Blocker and Airoldi (2011) are included for testing and reproducibility.","Published":"2014-01-10","License":"LGPL-2","snapshot_date":"2017-06-23"} {"Package":"networktools","Version":"1.0.0","Title":"Assorted Tools for Identifying Important Nodes in Networks\n(Impact, Expected Influence)","Description":"Includes assorted tools for network analysis. Specifically, includes functions for \n calculating impact statistics, which aim to identify how each node impacts \n the overall network structure (global strength impact, network structure impact, edge impact), \n and for calculating and visualizing expected influence. 
","Published":"2017-04-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"neural","Version":"1.4.2.2","Title":"Neural Networks","Description":"RBF and MLP neural networks with graphical user interface","Published":"2014-09-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"neuralnet","Version":"1.33","Title":"Training of Neural Networks","Description":"Training of neural networks using backpropagation,\n resilient backpropagation with (Riedmiller, 1994) or without\n weight backtracking (Riedmiller and Braun, 1993) or the\n modified globally convergent version by Anastasiadis et al.\n (2005). The package allows flexible settings through\n custom-choice of error and activation function. Furthermore,\n the calculation of generalized weights (Intrator O & Intrator\n N, 1993) is implemented.","Published":"2016-08-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"NeuralNetTools","Version":"1.5.0","Title":"Visualization and Analysis Tools for Neural Networks","Description":"Visualization and analysis tools to aid in the interpretation of\n neural network models. 
Functions are available for plotting,\n quantifying variable importance, conducting a sensitivity analysis, and\n obtaining a simple list of model weights.","Published":"2016-11-25","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"neurobase","Version":"1.13.2","Title":"'Neuroconductor' Base Package with Helper Functions for 'nifti'\nObjects","Description":"Base package for 'Neuroconductor', which includes many helper functions \n that interact with objects of class 'nifti', implemented by\n package 'oro.nifti', for reading/writing and also other manipulation functions.","Published":"2017-03-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"neuroblastoma","Version":"1.0","Title":"Neuroblastoma copy number profiles","Description":"Annotated neuroblastoma copy number profiles,\n\t a benchmark data set for change-point detection algorithms.","Published":"2013-07-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"neurohcp","Version":"0.6","Title":"Human Connectome Project Interface","Description":"Downloads and reads data from Human 'Connectome' Project \n using Amazon Web Services ('AWS') \n 'S3' buckets.","Published":"2017-05-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"neuroim","Version":"0.0.6","Title":"Data Structures and Handling for Neuroimaging Data","Description":"A collection of data structures that represent\n volumetric brain imaging data. The focus is on basic data handling for 3D\n and 4D neuroimaging data. In addition, there are functions to read and write\n NIFTI files and limited support for reading AFNI files.","Published":"2016-01-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"neuropsychology","Version":"0.5.0","Title":"Toolbox for Psychologists, Neuropsychologists and\nNeuroscientists","Description":"Contains statistical functions (for patient assessment, data preprocessing and reporting, ...) 
and datasets useful in psychology, neuropsychology and neuroscience.","Published":"2017-03-22","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"neuRosim","Version":"0.2-12","Title":"Functions to Generate fMRI Data Including Activated Data, Noise\nData and Resting State Data","Description":"The package allows users to generate fMRI time series or 4D data. Some high-level functions are created for fast data generation with only a few arguments and a diversity of functions to define activation and noise. For more advanced users it is possible to use the low-level functions and manipulate the arguments.","Published":"2015-01-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Newdistns","Version":"2.1","Title":"Computes Pdf, Cdf, Quantile and Random Numbers, Measures of\nInference for 19 General Families of Distributions","Description":"Computes the probability density function, cumulative distribution function, quantile function, random numbers and measures of inference for the following general families of distributions (each family defined in terms of an arbitrary cdf G): Marshall Olkin G distributions, exponentiated G distributions, beta G distributions, gamma G distributions, Kumaraswamy G distributions, generalized beta G distributions, beta extended G distributions, gamma G distributions, gamma uniform G distributions, beta exponential G distributions, Weibull G distributions, log gamma G I distributions, log gamma G II distributions, exponentiated generalized G distributions, exponentiated Kumaraswamy G distributions, geometric exponential Poisson G distributions, truncated-exponential skew-symmetric G distributions, modified beta G distributions, and exponentiated exponential Poisson G distributions.","Published":"2016-03-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"nFactors","Version":"2.3.3","Title":"Parallel Analysis and Non Graphical Solutions to the Cattell\nScree 
Test","Description":"Indices, heuristics and strategies to help determine the number of factors/components to retain:\n 1. Acceleration factor (af with or without Parallel Analysis);\n 2. Optimal Coordinates (noc with or without Parallel Analysis);\n 3. Parallel analysis (components, factors and bootstrap);\n 4. lambda > mean(lambda) (Kaiser, CFA and related);\n 5. Cattell-Nelson-Gorsuch (CNG);\n 6. Zoski and Jurs multiple regression (b, t and p);\n 7. Zoski and Jurs standard error of the regression coefficient (sescree);\n 8. Nelson R2;\n 9. Bartlett chi-2;\n 10. Anderson chi-2;\n 11. Lawley chi-2 and\n 12. Bentler-Yuan chi-2.","Published":"2011-12-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"nFCA","Version":"0.3","Title":"Numerical Formal Concept Analysis for Systematic Clustering","Description":"Numerical Formal Concept Analysis (nFCA) is a modern unsupervised learning tool for analyzing general numerical data. Given input data, this R package nFCA outputs two nFCA graphs: an H-graph and an I-graph that reveal systematic, hierarchical clustering and inherent structure of the data.","Published":"2015-02-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"NFP","Version":"0.99.2","Title":"Network Fingerprint Framework in R","Description":"An implementation of the network fingerprint framework introduced \n in the paper \"Network fingerprint: a knowledge-based characterization of biomedical \n networks\" (Cui, 2015) . This method works by making \n systematic comparisons to a set of well-studied \"basic networks\", measuring \n both the functional and topological similarity. A biological network could be\n characterized as a spectrum-like vector consisting of similarities to basic \n networks. 
It shows great potential in biological network study.","Published":"2016-11-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ngram","Version":"3.0.3","Title":"Fast n-Gram 'Tokenization'","Description":"An n-gram is a sequence of n \"words\" taken, in order, from a\n body of text. This is a collection of utilities for creating,\n displaying, summarizing, and \"babbling\" n-grams. The\n 'tokenization' and \"babbling\" are handled by very efficient C\n code, which can even be built as its own standalone library.\n The babbler is a simple Markov chain. The package also offers\n a vignette with complete example 'workflows' and information about\n the utilities offered in the package.","Published":"2017-03-24","License":"BSD 2-clause License + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ngramrr","Version":"0.2.0","Title":"A Simple General Purpose N-Gram Tokenizer","Description":"A simple n-gram (contiguous sequences of n items from a\n given sequence of text) tokenizer to be used with the 'tm' package with no\n 'rJava'/'RWeka' dependency.","Published":"2016-03-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ngspatial","Version":"1.2","Title":"Fitting the Centered Autologistic and Sparse Spatial Generalized\nLinear Mixed Models for Areal Data","Description":"Provides tools for analyzing spatial data, especially non-\n Gaussian areal data. The current version supports the sparse restricted\n spatial regression model of Hughes and Haran (2013) ,\n\tthe centered autologistic model of Caragea and Kaiser (2009) ,\n\tand the Bayesian spatial filtering model of Hughes (2017) .","Published":"2017-06-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"NHANES","Version":"2.1.0","Title":"Data from the US National Health and Nutrition Examination Study","Description":"Body Shape and related measurements from the US National Health\n and Nutrition Examination Survey (NHANES, 1999-2004). 
See\n http://www.cdc.gov/nchs/nhanes.htm for details.","Published":"2015-07-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"nhanesA","Version":"0.6.4.3.3","Title":"NHANES Data Retrieval","Description":"Utility to retrieve data from the National Health and Nutrition\n Examination Survey (NHANES).","Published":"2016-12-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"NHEMOtree","Version":"1.0","Title":"Non-hierarchical evolutionary multi-objective tree learner to\nperform cost-sensitive classification","Description":"NHEMOtree performs cost-sensitive classification by\n solving the two-objective optimization problem of minimizing\n misclassification rate and minimizing total costs for\n classification. The three methods comprised in NHEMOtree are\n based on EMOAs with either tree representation or bitstring\n representation with an enclosed classification tree algorithm.","Published":"2013-05-09","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"NHLData","Version":"1.0.0","Title":"Scores for Every Season Since the Founding of the NHL in 1917","Description":"Each dataset contains scores for every game during a specific season of the NHL.","Published":"2017-03-08","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"nhlscrapr","Version":"1.8.1","Title":"Compiling the NHL Real Time Scoring System Database for easy use\nin R","Description":"Compiling the NHL Real Time Scoring System Database for easy use in R.","Published":"2017-03-08","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"NHMM","Version":"3.7","Title":"Bayesian Non-Homogenous Markov and Mixture Models (Multiple Time\nSeries)","Description":"Bayesian HMM and NHMM modeling for multiple time series. 
The\n emission distribution can be mixtures of Gammas, Poissons, Normals and zero\n inflation is possible.","Published":"2016-11-28","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"NHMSAR","Version":"1.5","Title":"Non-Homogeneous Markov Switching Autoregressive Models","Description":"Calibration, simulation, validation of (non-)homogeneous Markov switching autoregressive models with Gaussian or von Mises innovations. Penalization methods are implemented for Markov Switching Vector Autoregressive Models of order 1 only. Most functions of the package handle missing values.","Published":"2017-05-25","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"NHPoisson","Version":"3.1","Title":"Modelling and Validation of Non Homogeneous Poisson Processes","Description":"Tools for modelling, ML estimation, validation analysis and simulation of non homogeneous Poisson processes in time. ","Published":"2015-03-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"nhstplot","Version":"1.0.1","Title":"Plot Null Hypothesis Significance Tests","Description":"Illustrate graphically the most common Null Hypothesis Significance Testing procedures. More specifically, this package provides functions to plot Chi-Squared, F, t (one- and two-tailed) and z (one- and two-tailed) tests, by plotting the probability density under the null hypothesis as a function of the different test statistic values. Although highly flexible (color theme, fonts, etc.), only the minimal number of arguments (observed test statistic, degrees of freedom) are necessary for a clear and useful graph to be plotted, with the observed test statistic and the p value, as well as their corresponding value labels. The axes are automatically scaled to present the relevant part and the overall shape of the probability density function. 
This package is especially intended for educational purposes, as it provides helpful support for explaining the Null Hypothesis Significance Testing process, its use and/or shortcomings.","Published":"2016-11-19","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"nice","Version":"0.4-1","Title":"Get or Set UNIX Niceness","Description":"Get or set UNIX priority (niceness) of running R process.","Published":"2016-11-28","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"nicheROVER","Version":"1.0","Title":"(Niche) (R)egion and Niche (Over)lap Metrics for\nMultidimensional Ecological Niches","Description":"This package uses a probabilistic method to calculate niche\n regions and pairwise niche overlap using multidimensional niche indicator\n data (e.g., stable isotopes, environmental variables, etc.). The niche\n region is defined as the joint probability density function of the\n multidimensional niche indicators at a user-defined probability alpha\n (e.g., 95%). Uncertainty is accounted for in a Bayesian framework, and the\n method can be extended to three or more indicator dimensions. It provides\n directional estimates of niche overlap, accounts for species-specific\n distributions in multivariate niche space, and produces unique and\n consistent bivariate projections of the multivariate niche region. A\n forthcoming article by Swanson et al. (Ecology, 2014) provides a detailed\n description of the methodology. 
See the package vignette for a worked\n example using fish stable isotope data.","Published":"2014-07-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"NightDay","Version":"1.0.1","Title":"Night and Day Boundary Plot Function","Description":"Computes and plots the boundary between night and day.","Published":"2012-10-29","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"nima","Version":"0.3.0","Title":"Nima Hejazi's Miscellaneous R Code","Description":"Miscellaneous R functions developed over the course of statistical\n\t research. These include utilities that supplement the existing\n\t idiosyncrasies of R; extend plotting functionality and aesthetics; \n\t provide alternative presentations of matrix decompositions; extend \n\t types of random variables supported for simulation; extend access\n\t to command line tools and system information, making work on remote\n\t systems easier.","Published":"2016-03-25","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"nimble","Version":"0.6-5","Title":"Flexible BUGS-Compatible System for Hierarchical Statistical\nModeling and Algorithm Development","Description":"Flexible application of algorithms to models specified in the BUGS\n language. Algorithms can be written in the NIMBLE language and made available to\n any model.","Published":"2017-06-06","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"Nippon","Version":"0.6.3-1","Title":"Japanese Utility Functions and Data","Description":"Japan-specific data is sometimes too unhandy for R users to manage. The utility functions and data in this package disencumber us from such an unnecessary burden. 
","Published":"2016-04-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"NIPTeR","Version":"1.0.2","Title":"Fast and Accurate Trisomy Prediction in Non-Invasive Prenatal\nTesting","Description":"Fast and Accurate Trisomy Prediction in Non-Invasive Prenatal\n Testing.","Published":"2016-03-09","License":"GNU Lesser General Public License","snapshot_date":"2017-06-23"} {"Package":"NISTnls","Version":"0.9-13","Title":"Nonlinear least squares examples from NIST","Description":"Datasets for testing nonlinear regression routines.","Published":"2012-09-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"NISTunits","Version":"1.0.1","Title":"Fundamental Physical Constants and Unit Conversions from NIST","Description":"Fundamental physical constants (Quantity, Value, Uncertainty, Unit) for \n SI (International System of Units) and non-SI units, plus unit conversions\n Based on the data from NIST (National Institute of Standards and Technology, USA)","Published":"2016-08-11","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"nivm","Version":"0.3","Title":"Noninferiority Tests with Variable Margins","Description":"Noninferiority tests for difference in failure rates at a prespecified control rate or prespecified time.","Published":"2015-09-26","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"NlcOptim","Version":"0.5","Title":"Solve Nonlinear Optimization with Nonlinear Constraints","Description":"Optimization for nonlinear objective and constraint functions. Linear or nonlinear equality and inequality constraints are allowed. 
It accepts the input parameters as a constrained matrix.","Published":"2017-04-27","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"nleqslv","Version":"3.3","Title":"Solve Systems of Nonlinear Equations","Description":"Solve a system of nonlinear equations using a Broyden or a Newton method\n with a choice of global strategies such as line search and trust region.\n There are options for using a numerical or user supplied Jacobian,\n for specifying a banded numerical Jacobian and for allowing\n a singular or ill-conditioned Jacobian.","Published":"2017-05-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"nlme","Version":"3.1-131","Title":"Linear and Nonlinear Mixed Effects Models","Description":"Fit and compare Gaussian linear and nonlinear mixed-effects models.","Published":"2017-02-06","License":"GPL (>= 2) | file LICENCE","snapshot_date":"2017-06-23"} {"Package":"nlmeODE","Version":"1.1","Title":"Non-linear mixed-effects modelling in nlme using differential\nequations","Description":"This package combines the odesolve and nlme packages for\n mixed-effects modelling using differential equations.","Published":"2012-10-27","License":"LGPL (>= 2.1)","snapshot_date":"2017-06-23"} {"Package":"nlmeU","Version":"0.70-3","Title":"Datasets and utility functions enhancing functionality of nlme\npackage","Description":"nlmeU: Datasets and utility functions enhancing functionality of nlme package. Datasets, functions and scripts are described in book titled 'Linear Mixed-Effects Models:\n A Step-by-Step Approach' by Galecki and Burzykowski (2013). Package is under development.","Published":"2013-08-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"nlmrt","Version":"2016.3.2","Title":"Functions for Nonlinear Least Squares Solutions","Description":"Replacement for nls() tools for working with nonlinear least squares problems.\n The calling structure is similar to, but much simpler than, that of the nls()\n function. 
Moreover, where nls() specifically does NOT deal with small or zero\n residual problems, nlmrt is quite happy to solve them. It also attempts to be\n more robust in finding solutions, thereby avoiding 'singular gradient' messages\n that arise in the Gauss-Newton method within nls(). The Marquardt-Nash approach\n in nlmrt generally works more reliably to get a solution, though this may be \n one of a set of possibilities, and may also be statistically unsatisfactory.\n Added print and summary as of August 28, 2012.","Published":"2016-03-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"nlnet","Version":"1.0","Title":"Nonlinear Network Reconstruction and Clustering Based on DCOL\n(Distance Based on Conditional Ordered List)","Description":"It includes three methods: K-profiles clustering, non-linear network reconstruction, and non-linear hierarchical clustering.","Published":"2015-10-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"nloptr","Version":"1.0.4","Title":"R interface to NLopt","Description":"\n nloptr is an R interface to NLopt. NLopt is a free/open-source library for\n nonlinear optimization, providing a common interface for a number of\n different free optimization routines available online as well as original\n implementations of various other algorithms.\n See http://ab-initio.mit.edu/wiki/index.php/NLopt_Introduction for more\n information on the available algorithms. 
During installation on Unix the\n NLopt code is downloaded and compiled from the NLopt website.","Published":"2014-08-04","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"NLP","Version":"0.1-10","Title":"Natural Language Processing Infrastructure","Description":"Basic classes and methods for Natural Language Processing.","Published":"2017-02-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"NLPutils","Version":"0.0-4","Title":"Natural Language Processing Utilities","Description":"Utilities for Natural Language Processing.","Published":"2016-12-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"nlreg","Version":"1.2-2","Title":"Higher Order Inference for Nonlinear Heteroscedastic Models","Description":"Likelihood inference based on higher order approximations \n for nonlinear models with possibly non constant variance","Published":"2014-04-03","License":"GPL (>= 2) | file LICENCE","snapshot_date":"2017-06-23"} {"Package":"NLRoot","Version":"1.0","Title":"Searching for the Root of an Equation","Description":"This is a package which can help you search for the root\n of an equation.","Published":"2012-07-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"nlrr","Version":"0.1","Title":"Non-Linear Relative Risk Estimation and Plotting","Description":"Estimate the non-linear odds ratio and plot it against a continuous exposure.","Published":"2015-11-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"nls2","Version":"0.2","Title":"Non-linear regression with brute force","Description":"Adds brute force and multiple starting values to nls.","Published":"2013-03-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"nlsem","Version":"0.8","Title":"Fitting Structural Equation Mixture Models","Description":"Estimation of structural equation models with nonlinear effects\n and underlying nonnormal distributions.","Published":"2017-05-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} 
{"Package":"nlshelper","Version":"0.2","Title":"Convenient Functions for Non-Linear Regression","Description":"A few utilities for summarizing, testing, and plotting non-linear\n regression models fit with nls(), nlsList() or nlme().","Published":"2017-04-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"nlshrink","Version":"1.0.1","Title":"Non-Linear Shrinkage Estimation of Population Eigenvalues and\nCovariance Matrices","Description":"Non-linear shrinkage estimation of population eigenvalues and covariance\n matrices, based on publications by Ledoit and Wolf (2004, 2015, 2016).","Published":"2016-04-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"nlsMicrobio","Version":"0.0-1","Title":"Nonlinear regression in predictive microbiology","Description":"Data sets and nonlinear regression models dedicated to predictive microbiology","Published":"2014-06-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"nlsmsn","Version":"0.0-4","Title":"Fitting nonlinear models with scale mixture of skew-normal\ndistributions","Description":"Fit univariate non-linear scale mixture of\n skew-normal(NL-SMSN) regression.","Published":"2013-02-07","License":"GPL (>= 3.0)","snapshot_date":"2017-06-23"} {"Package":"nlsr","Version":"2017.6.18","Title":"Functions for Nonlinear Least Squares Solutions","Description":"Provides tools for working with nonlinear least squares problems.\n It is intended to eventually supersede the nls() function in the R distribution.\n For example, nls() specifically does NOT deal with small or zero\n residual problems. 
Its Gauss-Newton method frequently stops with 'singular \n\tgradient' messages.","Published":"2017-06-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"nlstimedist","Version":"1.1.1","Title":"Non-Linear Model Fitting of Time Distribution of Biological\nPhenomena","Description":"Fit biologically meaningful distribution functions to\n time-sequence data (phenology), estimate parameters to draw the cumulative\n distribution function and probability density function and calculate standard\n statistical moments and percentiles.","Published":"2017-06-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"nlstools","Version":"1.0-2","Title":"Tools for Nonlinear Regression Analysis","Description":"Several tools for assessing the quality of fit of a\n gaussian nonlinear model are provided.","Published":"2015-07-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"NlsyLinks","Version":"2.0.6","Title":"Utilities and Kinship Information for Research with the NLSY","Description":"Utilities and kinship information for behavior genetics and\n developmental research using the National Longitudinal Survey of Youth\n (NLSY; ).","Published":"2016-04-19","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"nlt","Version":"2.1-3","Title":"A nondecimated lifting transform for signal denoising","Description":"Uses a modified lifting algorithm on which it builds the\n nondecimated lifting transform. It has applications in wavelet\n shrinkage.","Published":"2012-11-11","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"nlts","Version":"0.2-0","Title":"(non)linear time series analysis","Description":"R functions for (non)linear time series analysis. 
A core\n topic is order estimation through cross-validation.","Published":"2013-06-03","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"nLTT","Version":"1.3.1","Title":"Calculate the NLTT Statistic","Description":"Provides functions to calculate the normalised Lineage-Through-\n Time (nLTT) statistic, given two phylogenetic trees. The nLTT statistic measures\n the difference between two Lineage-Through-Time curves, where each curve is\n normalised both in time and in number of lineages.","Published":"2016-10-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"nlWaldTest","Version":"1.1.3","Title":"Wald Test of Nonlinear Restrictions and Nonlinear CI","Description":"Wald Test for nonlinear restrictions on model parameters and confidence\n intervals for nonlinear functions of parameters using delta-method. Applicable\n after ANY model, provided parameters estimates and their covariance matrix are\n available.","Published":"2016-03-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"nmaINLA","Version":"0.1.1","Title":"Network Meta-Analysis using Integrated Nested Laplace\nApproximations","Description":"Performs network meta-analysis using integrated nested Laplace approximations ('INLA'). Includes methods to assess the heterogeneity and inconsistency in the network. Contains more than ten different network meta-analysis data. 'INLA' package can be obtained from . We recommend the testing version.","Published":"2017-06-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"NMF","Version":"0.20.6","Title":"Algorithms and Framework for Nonnegative Matrix Factorization\n(NMF)","Description":"Provides a framework to perform Non-negative Matrix\n Factorization (NMF). The package implements a set of already published algorithms\n and seeding methods, and provides a framework to test, develop and plug\n new/custom algorithms. 
Most of the built-in algorithms have been optimized\n in C++, and the main interface function provides an easy way of performing\n parallel computations on multicore machines.","Published":"2015-05-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"nmfgpu4R","Version":"0.2.5.2","Title":"Non-Negative Matrix Factorization (NMF) using CUDA","Description":"Wrapper package for the nmfgpu library, which implements several\n Non-negative Matrix Factorization (NMF) algorithms for CUDA platforms.\n By using the acceleration of GPGPU computing, the NMF can be used for\n real-world problems inside the R environment. All CUDA devices starting with\n Kepler architecture are supported by the library.","Published":"2016-10-17","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"NMFN","Version":"2.0","Title":"Non-negative Matrix Factorization","Description":"Non-negative Matrix Factorization","Published":"2012-10-29","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"NMI","Version":"2.0","Title":"Normalized Mutual Information of Community Structure in Network","Description":"Calculates the normalized mutual information (NMI) of two community structures in network analysis.","Published":"2016-08-20","License":"GNU General Public License version 2","snapshot_date":"2017-06-23"} {"Package":"NMOF","Version":"0.40-0","Title":"Numerical Methods and Optimization in Finance","Description":"Functions, examples and data from the book\n \"Numerical Methods and Optimization in Finance\" by M.\n 'Gilli', D. 'Maringer' and E. Schumann (2011), ISBN\n 978-0123756626. 
The package provides implementations of\n several optimisation heuristics, such as Differential\n Evolution, Genetic Algorithms and Threshold Accepting.\n There are also functions for the valuation of financial\n instruments, such as bonds and options, and functions that\n help with stochastic simulations.","Published":"2016-10-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"nmw","Version":"0.1.1","Title":"Understanding Nonlinear Mixed Effects Modeling for Population\nPharmacokinetics","Description":"This shows how NONMEM(R) software works. NONMEM's classical estimation methods like 'First Order(FO) approximation', 'First Order Conditional Estimation(FOCE)', and 'Laplacian approximation' are explained.","Published":"2017-03-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"nneo","Version":"0.1.0","Title":"'NEON' 'API' Client","Description":"'NEON' 'API' () client.\n Includes methods for interacting with all 'API' routes.","Published":"2017-06-15","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"nnet","Version":"7.3-12","Title":"Feed-Forward Neural Networks and Multinomial Log-Linear Models","Description":"Software for feed-forward neural networks with a single\n hidden layer, and for multinomial log-linear models.","Published":"2016-02-02","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"nnetpredint","Version":"1.2","Title":"Prediction Intervals of Multi-Layer Neural Networks","Description":"Computing prediction intervals of neural network models (e.g. backpropagation) at a certain confidence level. 
It can take the output from models trained by other packages like 'nnet', 'neuralnet', 'RSNNS', etc.","Published":"2015-12-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"nnlasso","Version":"0.3","Title":"Non-Negative Lasso and Elastic Net Penalized Generalized Linear\nModels","Description":"Estimates of coefficients of lasso penalized linear regression and generalized linear models subject to non-negativity constraints on the parameters using multiplicative iterative algorithm. Entire regularization path for a sequence of lambda values can be obtained. Functions are available for creating plots of regularization path, cross validation and estimating coefficients at a given lambda value. There is also provision for obtaining standard error of coefficient estimates.","Published":"2016-03-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"NNLM","Version":"0.4.1","Title":"Fast and Versatile Non-Negative Matrix Factorization","Description":"This is a package for Non-Negative Linear Models (NNLM). It implements\n fast sequential coordinate descent algorithms for non-negative linear regression\n and non-negative matrix factorization (NMF). It supports mean square error and Kullback-Leibler divergence loss.\n Many other features are also implemented, including missing value imputation, domain knowledge integration,\n designable W and H matrices and multiple forms of regularizations.","Published":"2016-01-03","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"nnls","Version":"1.4","Title":"The Lawson-Hanson algorithm for non-negative least squares\n(NNLS)","Description":"An R interface to the Lawson-Hanson implementation of an\n algorithm for non-negative least squares (NNLS). 
Also allows\n the combination of non-negative and non-positive constraints.","Published":"2012-03-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"NNS","Version":"0.3.3","Title":"Nonlinear Nonparametric Statistics","Description":"Nonlinear nonparametric statistics using partial moments. Partial moments are the elements of variance and asymptotically approximate the area of f(x). These robust statistics provide the basis for nonlinear analysis while retaining linear equivalences. NNS offers: Numerical integration, Numerical differentiation, Clustering, Correlation, Dependence, Causal analysis, ANOVA, Regression, Classification, Seasonality, Autoregressive modelling, Normalization and Stochastic dominance. All routines based on: Viole, F. and Nawrocki, D. (2013), Nonlinear Nonparametric Statistics: Using Partial Moments (ISBN: 1490523995).","Published":"2017-06-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"NNTbiomarker","Version":"0.29.11","Title":"Calculate Design Parameters for Biomarker Validation Studies","Description":"Helps a clinical trial team discuss\n the clinical goals of a well-defined biomarker with a diagnostic,\n staging, prognostic, or predictive purpose. From this discussion will\n come a statistical plan for a (non-randomized) validation trial.\n Both prospective and retrospective trials are supported. In a specific\n focused discussion, investigators should determine the range of\n \"discomfort\" for the NNT, number needed to treat. The meaning of\n the discomfort range, [NNTlower, NNTupper], is that within this range\n most physicians would feel discomfort either in treating or withholding\n treatment. A pair of NNT values bracketing that range, NNTpos and NNTneg,\n become the targets of the study's design. 
If the trial can demonstrate\n that a positive biomarker test yields an NNT less than NNTlower,\n and that a negative biomarker test yields an NNT greater than NNTupper,\n then the biomarker may be useful for patients. A highlight of the package\n is visualization of a \"contra-Bayes\" theorem, which produces criteria for\n retrospective case-control studies.","Published":"2015-08-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"nodeHarvest","Version":"0.7-3","Title":"Node Harvest for Regression and Classification","Description":"Node harvest is a simple interpretable tree-like estimator for high-dimensional regression and classification. A few nodes are selected from an initially large ensemble of nodes, each associated with a positive weight. New observations can fall into one or several nodes and predictions are the weighted average response across all these groups. The package offers visualization of the estimator. Predictions can return the nodes a new observation fell into, along with the mean response of training observations in each node, offering a simple explanation of the prediction.","Published":"2015-06-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"nodiv","Version":"1.1.6","Title":"Compares the Distribution of Sister Clades Through a Phylogeny","Description":"An implementation of the nodiv algorithm, see Borregaard, M.K., Rahbek, C., Fjeldsaa, J., Parra, J.L., Whittaker, R.J. & Graham, C.H. 2014. Node-based analysis of species distributions. Methods in Ecology and Evolution 5(11): 1225-1235. . Package for phylogenetic analysis of species distributions. The main function goes through each node in the phylogeny, compares the distributions of the two descendant nodes, and compares the result to a null model. This highlights nodes where major distributional divergence has occurred. 
The distributional divergence for these nodes is mapped using the SOS statistic.","Published":"2017-02-27","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"noia","Version":"0.97.1","Title":"Implementation of the Natural and Orthogonal InterAction (NOIA)\nmodel","Description":"The NOIA model, as described extensively in Alvarez-Castro & Carlborg (2007), is a framework facilitating the estimation of genetic effects and genotype-to-phenotype maps. This package provides the basic tools to perform linear and multilinear regressions from real populations (provided the phenotype and the genotype of every individuals), estimating the genetic effects from different reference points, the genotypic values, and the decomposition of genetic variances in a multi-locus, 2 alleles system. This package is presented in Le Rouzic & Alvarez-Castro (2008). ","Published":"2015-01-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"noise","Version":"1.0","Title":"Estimation of Intrinsic and Extrinsic Noise from Single-Cell\nData","Description":"Functions to calculate estimates of intrinsic and extrinsic noise from the two-reporter single-cell experiment, as in Elowitz, M. B., A. J. Levine, E. D. Siggia, and P. S. Swain (2002) Stochastic gene expression in a single cell. Science, 297, 1183-1186. Functions implement multiple estimators developed for unbiasedness or min Mean Squared Error (MSE) in Fu, A. Q. and Pachter, L. (2016). Estimating intrinsic and extrinsic noise from single-cell gene expression measurements. 
.","Published":"2016-07-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"NoiseFiltersR","Version":"0.1.0","Title":"Label Noise Filters for Data Preprocessing in Classification","Description":"An extensive implementation of state-of-the-art and classical\n algorithms to preprocess label noise in classification problems.","Published":"2016-06-24","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"nomclust","Version":"1.1.1106","Title":"Hierarchical Nominal Clustering Package","Description":"Package for hierarchical clustering of objects characterized by\n nominal variables.","Published":"2017-06-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"NominalLogisticBiplot","Version":"0.2","Title":"Biplot representations of categorical data","Description":"Analysis of a matrix of polytomous items using Nominal Logistic Biplots (NLB)\n according to Hernandez-Sanchez and Vicente-Villardon (2013). \n The NLB procedure extends the binary logistic biplot to nominal (polytomous) data. \n The individuals are represented as points on a plane and the variables are represented \n as convex prediction regions rather than vectors as in a classical or binary biplot. \n Using the methods from Computational Geometry, the set of prediction regions is converted to a set of points \n in such a way that the prediction for each individual is established by its closest \n \"category point\". Then interpretation is based on distances rather than on projections. 
\n In this package we implement the geometry of such a representation and construct computational algorithms \n for the estimation of parameters and the calculation of prediction regions.","Published":"2014-05-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"nomogramEx","Version":"2.0","Title":"Extract Equations from a Nomogram","Description":"\n A nomogram cannot be easily applied,\n because it is difficult to calculate the points or even the survival probability.\n The package provides the function nomogramEx()\n to extract the polynomial equations used to calculate the points for each variable,\n and the survival probability corresponding to the total points.","Published":"2016-12-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"noncensus","Version":"0.1","Title":"U.S. Census Regional and Demographic Data","Description":"A collection of various regional information determined by the\n U.S. Census Bureau along with demographic data.","Published":"2014-05-07","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"NonCompart","Version":"0.3.0","Title":"Noncompartmental Analysis for Pharmacokinetic Data","Description":"Conduct a noncompartmental analysis as closely as possible to the most widely used commercial software for pharmacokinetic analysis, i.e. 'Phoenix(R) WinNonlin(R)' .\n Some features are\n 1) CDISC SDTM terms\n 2) Automatic slope selection with the same criterion of WinNonlin(R)\n 3) Supporting both 'linear-up linear-down' and 'linear-up log-down' methods\n 4) Interval(partial) AUCs with 'linear' or 'log' interpolation method\n * Reference: Gabrielsson J, Weiner D. Pharmacokinetic and Pharmacodynamic Data Analysis - Concepts and Applications. 5th ed. 2016. 
(ISBN:9198299107).","Published":"2017-04-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"noncompliance","Version":"0.2.2","Title":"Causal Inference in the Presence of Treatment Noncompliance\nUnder the Binary Instrumental Variable Model","Description":"A finite-population significance test of the 'sharp' causal null hypothesis that\n treatment exposure X has no effect on final outcome Y, within the principal stratum of Compliers.\n A generalized likelihood ratio test statistic is used, and the resulting p-value is exact.\n Currently, it is assumed that there are only Compliers and Never Takers in the population.","Published":"2016-02-15","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"nonlinearTseries","Version":"0.2.3","Title":"Nonlinear Time Series Analysis","Description":"Functions for nonlinear time series analysis. This package permits \n the computation of the most-used nonlinear statistics/algorithms\n including generalized correlation dimension, information dimension,\n largest Lyapunov exponent, sample entropy and Recurrence \n Quantification Analysis (RQA), among others. Basic routines\n for surrogate data testing are also included. Part of this work \n was based on the book \"Nonlinear time series analysis\" by \n Holger Kantz and Thomas Schreiber (ISBN: 9780521529020).","Published":"2015-07-25","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"nonmem2R","Version":"0.1.7","Title":"Loading NONMEM Output Files and Simulate with Parameter\nUncertainty","Description":"Loading NONMEM (NONlinear Mixed-Effect Modeling, ) output files and simulate with parameter uncertainty.","Published":"2017-06-20","License":"Unlimited","snapshot_date":"2017-06-23"} {"Package":"nonmemica","Version":"0.7.1","Title":"Create and Evaluate NONMEM Models in a Project Context","Description":"Systematically creates and modifies NONMEM(R) control streams. 
Harvests\n NONMEM output, builds run logs, creates derivative data, generates diagnostics.\n NONMEM (ICON Development Solutions ) is software for \n nonlinear mixed effects modeling. See 'package?nonmemica'. ","Published":"2017-04-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"nonnest2","Version":"0.4-1","Title":"Tests of Non-Nested Models","Description":"Testing non-nested models via theory supplied by Vuong (1989) .\n Includes tests of model distinguishability and of model fit that can be applied\n to both nested and non-nested models. Also includes functionality to obtain\n confidence intervals associated with AIC and BIC. This material is based on work\n supported by the National Science Foundation under Grant Number SES-1061334.","Published":"2016-09-13","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"nonpar","Version":"1.0.1","Title":"A Collection of Nonparametric Hypothesis Tests","Description":"Contains the following 5 nonparametric hypothesis tests:\n The Sign Test,\n The 2 Sample Median Test,\n Miller's Jackknife Procedure,\n Cochran's Q Test, &\n The Stuart-Maxwell Test.","Published":"2017-04-24","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"nonparaeff","Version":"0.5-8","Title":"Nonparametric Methods for Measuring Efficiency and Productivity","Description":"This package contains functions for measuring efficiency\n and productivity of decision making units (DMUs) under the\n framework of Data Envelopment Analysis (DEA) and its\n variations.","Published":"2013-02-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"NonpModelCheck","Version":"3.0","Title":"Model Checking and Variable Selection in Nonparametric\nRegression","Description":"Provides tests of significance for covariates (or groups of covariates) in a fully nonparametric regression model and a variable (or group) selection procedure based on False Discovery Rate. 
In addition, it provides a function for local polynomial regression for any number of dimensions, using a bandwidth specified by the user or automatically chosen by cross validation or an adaptive procedure.","Published":"2017-04-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"nonrandom","Version":"1.42","Title":"Stratification and matching by the propensity score","Description":"This package offers a comprehensive data analysis if\n stratification and matching by the propensity score is done.\n Several functions are implemented, starting from the selection\n of the propensity score model up to estimating propensity score\n based treatment or exposure effects. All functions can be\n applied separately as well as combined.","Published":"2014-04-05","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"nontarget","Version":"1.9","Title":"Detecting Isotope, Adduct and Homologue Relations in LC-MS Data","Description":"Screening a HRMS data set for peaks related by (1) isotope patterns, (2) different adducts of the same molecule and/or (3) homologue series. The resulting isotopic pattern and adduct groups can then be combined to so-called components, with homologue series information attached. Also allows plotting and filtering HRMS data for mass defects, frequent m/z distances and components vs. non-components.","Published":"2016-09-27","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"nontargetData","Version":"1.1","Title":"Quantized simulation data of isotope pattern centroids","Description":"Data sets for isotope pattern grouping of LC-HRMS peaks with package nontarget. 
Based on a vast set of unique PubChem molecular formulas, quantized (a) m/z, (b) m/z differences, (c) intensity ratios and (d) marker centroids of simulated centroid pairs are listed for different instrument resolutions.","Published":"2014-07-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"nopaco","Version":"1.0.3","Title":"Non-Parametric Concordance Coefficient","Description":"A non-parametric test for multi-observer concordance and\n differences between concordances in (un)balanced data.","Published":"2017-04-13","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"nopp","Version":"1.0.8","Title":"Nash Optimal Party Positions","Description":"Estimation of party/candidate ideological positions\n that correspond to a Nash equilibrium along a \n one-dimensional space.","Published":"2016-07-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"nor1mix","Version":"1.2-2","Title":"Normal (1-d) Mixture Models (S3 Classes and Methods)","Description":"Onedimensional Normal Mixture Models Classes, for, e.g.,\n density estimation or clustering algorithms research and teaching;\n providing the widely used Marron-Wand densities. Now fitting to data\n by ML (Maximum Likelihood) or EM estimation.","Published":"2016-08-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"nordklimdata1","Version":"1.2","Title":"Dataset for Climate Analysis with Data from the Nordic Region","Description":"The Nordklim dataset 1.0 is a unique and useful achievement for climate \n analysis. 
It includes observations of twelve different climate elements from \n more than 100 stations in the Nordic region, over a time span of more than 100 years.\n The project contractors were NORDKLIM/NORDMET on behalf of the National \n meteorological services in Denmark (DMI), Finland (FMI), Iceland (VI), \n Norway (DNMI) and Sweden (SMHI).","Published":"2015-07-19","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"norm","Version":"1.0-9.5","Title":"Analysis of multivariate normal datasets with missing values","Description":"Analysis of multivariate normal datasets with missing\n values","Published":"2013-02-28","License":"file LICENSE","snapshot_date":"2017-06-23"} {"Package":"norm2","Version":"2.0.1","Title":"Analysis of Incomplete Multivariate Data under a Normal Model","Description":"Functions for parameter estimation, Bayesian posterior simulation\n and multiple imputation from incomplete multivariate data under a\n normal model.","Published":"2016-05-11","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"NORMA","Version":"0.1","Title":"Builds General Noise SVRs","Description":"Builds general noise SVR models using Naive Online R Minimization Algorithm, NORMA, an optimization method based on classical stochastic gradient descent suitable for computing SVR models in an online setting.","Published":"2017-01-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"NormalGamma","Version":"1.1","Title":"Normal-gamma convolution model","Description":"The functions proposed in this package compute the density of the sum of a Gaussian and a gamma random variable, estimate the parameters and correct the noise effect in a gamma-signal and Gaussian-noise model. This package has been used to implement the background correction method for Illumina microarray data presented in Plancade S., Rozenholc Y. and Lund E. 
\"Generalization of the normal-exponential model : exploration of a more accurate parameterization for the signal distribution on Illumina BeadArrays\", BMC Bioinfo 2012, 13(329).","Published":"2013-10-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"NormalLaplace","Version":"0.2-0","Title":"The Normal Laplace Distribution","Description":"This package provides functions for the normal Laplace\n distribution. It is currently under development and provides\n only limited functionality. Density, distribution and quantile\n functions, random number generation, and moments are provided.","Published":"2015-01-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"normalp","Version":"0.7.0","Title":"Routines for Exponential Power Distribution","Description":"Collection of utilities referred to Exponential Power distribution, \n also known as General Error Distribution (see Mineo, A.M. and Ruggieri, M. (2005), A software Tool for the Exponential Power Distribution: The normalp package. In Journal of Statistical Software, Vol. 12, Issue 4)","Published":"2014-12-09","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"normalr","Version":"0.0.3","Title":"Normalisation of Multiple Variables in Large-Scale Datasets","Description":"The robustness of many of the statistical techniques, such as factor analysis, applied in \n the social sciences rests upon the assumption of item-level normality. However, when dealing \n with real data, these assumptions are often not met. The Box-Cox transformation (Box & Cox, 1964)\n provides an optimal transformation for non-normal variables. Yet, for \n large datasets of continuous variables, its application in current software programs is cumbersome\n with analysts having to take several steps to normalise each variable. We present an R package \n 'normalr' that enables researchers to make convenient optimal transformations of multiple variables\n in datasets. 
This R package enables users to quickly and accurately: (1) anchor all of their \n variables at 1.00, (2) select the desired precision with which the optimal lambda is estimated, \n (3) apply each unique exponent to its variable, (4) rescale resultant values to within their \n original X1 and X(n) ranges, and (5) provide original and transformed estimates of skewness, \n kurtosis, and other inferential assessments of normality.","Published":"2017-01-17","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"NormPsy","Version":"1.0.5","Title":"Normalisation of Psychometric Tests","Description":"Functions for normalizing psychometric test scores. The normalization aims at correcting the metrological properties of the psychometric tests such as the ceiling and floor effects and the curvilinearity (unequal interval scaling). Functions to compute and plot predictions in the natural scale of the psychometric test from the estimates of a linear mixed model estimated on the normalized scores are also provided.","Published":"2017-03-23","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"NORMT3","Version":"1.0-3","Title":"Evaluates complex erf, erfc, Faddeeva, and density of sum of\nGaussian and Student's t","Description":"Evaluates the probability density function of the sum of\n the Gaussian and Student's t density on 3 degrees of freedom.\n Evaluates the p.d.f. of the sphered Student's t density\n function. Also evaluates the erf, and erfc functions on\n complex-valued arguments. 
Thanks to Krishna Myneni, the function\n also calculates the Faddeeva function!","Published":"2012-10-31","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"normtest","Version":"1.1","Title":"Tests for Normality","Description":"Tests for the composite hypothesis of normality","Published":"2014-03-12","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"normwhn.test","Version":"1.0","Title":"Normality and White Noise Testing","Description":"Includes Omnibus Univariate and Multivariate Normality\n Tests (See Doornik and Hansen (1994)). One variation allows for\n the possibility of weak dependence rather than independence in\n the variable(s). Also included is a univariate white noise\n test where the null hypothesis is \"white noise\" rather than\n strict \"white noise\".","Published":"2012-10-29","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"NORRRM","Version":"1.0.0","Title":"Geochemical Toolkit for R","Description":"CIPW Norm (acronym from the surnames of the authors: Cross, Iddings, Pirsson and Washington)\n is the most commonly used calculation algorithm to estimate the standard mineral assemblages\n for igneous rocks from their geochemical composition. NORRRM (acronym from noRm, R language and\n Renee) is a highly consistent program to calculate the CIPW Norm.","Published":"2015-03-11","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"NORTARA","Version":"1.0.0","Title":"Generation of Multivariate Data with Arbitrary Marginals","Description":"An implementation of a specific method for generating\n n-dimensional random vectors with given marginal distributions and\n correlation matrix. The method uses the NORTA (NORmal To Anything)\n approach which generates a standard normal random vector and then\n transforms it into a random vector with specified marginal distributions\n and the RA (Retrospective Approximation) algorithm which is a generic\n stochastic root-finding algorithm. 
The marginals can be continuous or\n discrete. See the package vignette for more details.","Published":"2014-12-20","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"nortest","Version":"1.0-4","Title":"Tests for Normality","Description":"Five omnibus tests for testing the composite hypothesis of\n normality.","Published":"2015-07-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"nortestARMA","Version":"1.0.2","Title":"Neyman Smooth Tests of Normality for the Errors of ARMA Models","Description":"Tests the goodness-of-fit to the Normal distribution for the errors of an ARMA model.","Published":"2017-04-14","License":"GPL (> 2)","snapshot_date":"2017-06-23"} {"Package":"nos","Version":"1.0.0","Title":"Compute Node Overlap and Segregation in Ecological Networks","Description":"Calculate NOS (node overlap and segregation) and \n the associated metrics described in Strona and \n Veech (2015) and Strona et al. \n (2017; In Press, DOI to be provided in subsequent package version). \n The functions provided in the package enable assessment of \n structural patterns ranging from complete node segregation to perfect \n nestedness in a variety of network types. In addition, they provide a \n measure of network modularity. ","Published":"2017-05-31","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"nose","Version":"1.0","Title":"nose Package for R","Description":"The nose package consists of a collection of three\n functions for classifying sparseness in typical 2 x 2 data sets\n in which at least one cell has a zero count. These functions\n are based on the three widely applied summary measures for 2 x\n 2 categorical data, viz., Risk Difference (RD), Relative Risk\n (RR) and Odds Ratio (OR). This package helps to identify a suitable\n continuity correction for zero cells when a multi-centre\n analysis or a meta-analysis is carried out. 
Further, it can be\n considered as a tool for sensitivity analysis for adding a\n continuity correction and to identify the presence of Simpson's\n paradox.","Published":"2012-12-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"NostalgiR","Version":"1.0.2","Title":"Advanced Text-Based Plots","Description":"Provides functions to produce advanced ascii graphics, directly to the terminal window. This package utilizes the txtplot() function from the 'txtplot' package, to produce text-based histograms, empirical cumulative distribution function plots, scatterplots with fitted and regression lines, quantile plots, density plots, image plots, and contour plots.","Published":"2015-09-24","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"not","Version":"1.0","Title":"Narrowest-Over-Threshold Change-Point Detection","Description":"Provides efficient implementation of the Narrowest-Over-Threshold methodology for detecting an unknown number of change-points occurring at unknown locations in one-dimensional data following 'deterministic signal + noise' model. Currently implemented scenarios are: piecewise-constant signal, piecewise-constant signal with a heavy-tailed noise, piecewise-linear signal, piecewise-quadratic signal, piecewise-constant signal and with piecewise-constant variance of the noise.","Published":"2016-08-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"notifyme","Version":"0.3.0","Title":"Send Alerts to your Cellphone and Phillips Hue Lights","Description":"Functions to flash your hue lights, or text yourself, from R. 
Designed to be used with long running scripts.","Published":"2016-11-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"notifyR","Version":"1.02","Title":"Send push notifications to your smartphone via pushover.net\n(ACCOUNT REQUIRED!)","Description":"This Package provides a connection to the pushover.net API\n to send push notification to your smartphone directly from R.\n (ACCOUNT REQUIRED!)","Published":"2012-08-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"novelist","Version":"1.0","Title":"NOVEL Integration of the Sample and Thresholded (NOVELIST)\nCorrelation and Covariance Estimators","Description":"Estimate Large correlation and covariance matrices and their inverses using \n integration of the sample and thresholded correlation and covariance estimators.","Published":"2015-05-19","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"noweb","Version":"1.0-4","Title":"Noweb system for R","Description":"The noweb system for source code, implemented in R.","Published":"2013-04-03","License":"LGPL-2","snapshot_date":"2017-06-23"} {"Package":"Nozzle.R1","Version":"1.1-1","Title":"Nozzle Reports","Description":"The Nozzle package provides an API to generate HTML\n reports with dynamic user interface elements based on\n JavaScript and CSS (Cascading Style Sheets). Nozzle was\n designed to facilitate summarization and rapid browsing of\n complex results in data analysis pipelines where multiple\n analyses are performed frequently on big data sets. The package\n can be applied to any project where user-friendly reports need\n to be created.","Published":"2013-05-15","License":"LGPL-2","snapshot_date":"2017-06-23"} {"Package":"np","Version":"0.60-3","Title":"Nonparametric Kernel Smoothing Methods for Mixed Data Types","Description":"Nonparametric (and semiparametric) kernel methods that seamlessly handle a mix of continuous, unordered, and ordered factor data types. 
We would like to gratefully acknowledge support from the Natural Sciences and Engineering Research Council of Canada (NSERC:www.nserc.ca), the Social Sciences and Humanities Research Council of Canada (SSHRC:www.sshrc.ca), and the Shared Hierarchical Academic Research Computing Network (SHARCNET:www.sharcnet.ca).","Published":"2017-04-29","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"nparACT","Version":"0.7","Title":"Non-Parametric Measures of Actigraphy Data","Description":"Computes interdaily stability (IS), intradaily variability (IV) & the relative amplitude (RA) from actigraphy data as described in Blume et al. (2016) and van Someren et al. (1999) . Additionally, it also computes L5 (i.e. the 5 hours with lowest average actigraphy amplitude) and M10 (the 10 hours with highest average amplitude) as well as the respective start times. The flex versions will also compute the L-value for a user-defined number of minutes. IS describes the strength of coupling of a rhythm to supposedly stable zeitgebers. It varies between 0 (Gaussian Noise) and 1 for perfect IS. IV describes the fragmentation of a rhythm, i.e. the frequency and extent of transitions between rest and activity. It is near 0 for a perfect sine wave, about 2 for Gaussian noise and may be even higher when a definite ultradian period of about 2 hrs is present. RA is the relative amplitude of a rhythm. Note that to obtain reliable results, actigraphy data should cover a reasonable number of days.","Published":"2016-12-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"nparcomp","Version":"2.6","Title":"Multiple Comparisons and Simultaneous Confidence Intervals","Description":"With this package, it is possible to compute nonparametric simultaneous confidence intervals for relative contrast effects in the unbalanced one way layout. Moreover, it computes simultaneous p-values. 
The simultaneous confidence intervals can be computed using the multivariate normal distribution, the multivariate t-distribution with a Satterthwaite approximation of the degrees of freedom, or using multivariate range preserving transformations with Logit or Probit as transformation function. Two-sample comparisons can be performed with the same methods described above. There is no assumption on the underlying distribution function, only that the data have to be at least ordinal numbers.","Published":"2015-03-12","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"nparLD","Version":"2.1","Title":"Nonparametric Analysis of Longitudinal Data in Factorial\nExperiments","Description":"The package \"nparLD\" is designed to perform nonparametric\n analysis of longitudinal data in factorial experiments.\n Longitudinal data are those which are collected from the same\n subjects over time, and they frequently arise in biological\n sciences. Nonparametric methods do not require distributional\n assumptions, and are applicable to a variety of data types\n (continuous, discrete, purely ordinal, and dichotomous). Such\n methods are also robust with respect to outliers and for small\n sample sizes.","Published":"2012-09-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"nparsurv","Version":"0.1.0","Title":"Nonparametric Tests for Main Effects, Simple Effects and\nInteraction Effect in a Factorial Design with Censored Data","Description":"Nonparametric tests for main effects, simple effects and the interaction effect with censored data and two factorial influencing variables.","Published":"2017-01-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"NPBayesImpute","Version":"0.6","Title":"Non-Parametric Bayesian Multiple Imputation for Categorical Data","Description":"These routines create multiple imputations of missing at random categorical data, with or without structural zeros. 
Imputations are based on Dirichlet process mixtures of multinomial distributions, which is a non-parametric Bayesian modeling approach that allows for flexible joint modeling.","Published":"2016-02-09","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"npbr","Version":"1.5","Title":"Nonparametric Boundary Regression","Description":"A variety of functions for the best known and most innovative approaches to nonparametric boundary estimation. The selected methods are concerned with empirical, smoothed, unrestricted as well as constrained fits under both separate and multiple shape constraints. They cover robust approaches to outliers as well as data envelopment techniques based on piecewise polynomials, splines, local linear fitting, extreme values and kernel smoothing. The package also seamlessly allows for Monte Carlo comparisons among these different estimation methods. Its use is illustrated via a number of empirical applications and simulated examples.","Published":"2017-06-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"NPC","Version":"1.1.0","Title":"Nonparametric Combination of Hypothesis Tests","Description":"An implementation of nonparametric combination of hypothesis tests.\n This package performs nonparametric combination (Pesarin and Salmaso 2010),\n a permutation-based procedure for jointly testing multiple hypotheses. The\n tests are conducted under the global \"sharp\" null hypothesis of no effects,\n and the component tests are combined on the metric of their p-values. A\n key feature of nonparametric combination is that it accounts for the\n dependence among tests under the null hypothesis. 
In addition to the\n \"NPC\" function, which performs nonparametric combination itself, the\n package also contains a number of helper functions, many of which calculate\n a test statistic given an input of data.","Published":"2016-03-28","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"NPCD","Version":"1.0-10","Title":"Nonparametric Methods for Cognitive Diagnosis","Description":"An array of nonparametric and parametric estimation methods for cognitive diagnostic models, including nonparametric classification of examinee attribute profiles, joint maximum likelihood estimation (JMLE) of examinee attribute profiles and item parameters, and nonparametric refinement of the Q-matrix, as well as conditional maximum likelihood estimation (CMLE) of examinee attribute profiles given item parameters and CMLE of item parameters given examinee attribute profiles. Currently the nonparametric methods in the package support both conjunctive and disjunctive models, and the parametric methods in the package support the DINA model, the DINO model, the NIDA model, the G-NIDA model, and the R-RUM model. ","Published":"2016-10-18","License":"LGPL (>= 2.1)","snapshot_date":"2017-06-23"} {"Package":"NPCirc","Version":"2.0.1","Title":"Nonparametric Circular Methods","Description":"This package implements nonparametric smoothing methods for circular data.","Published":"2014-10-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"npcopTest","Version":"1.02","Title":"Non Parametric Test for Detecting Changes in the Copula","Description":"A non parametric test for change points detection in the dependence between the components of multivariate data, with or without (multiple) changes in the marginal distributions. 
","Published":"2017-01-09","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"npcp","Version":"0.1-6","Title":"Some Nonparametric Tests for Change-Point Detection in Possibly\nMultivariate Observations","Description":"Provides nonparametric tests for assessing whether possibly serially dependent univariate or multivariate observations have the same c.d.f. or not. In addition to tests focusing directly on the c.d.f., the package contains tests designed to be particularly sensitive to changes in the underlying copula, Spearman's rho or certain quantities that can be estimated using one-sample U-statistics of order two such as the variance, Gini's mean difference or Kendall's tau. The latest addition is a nonparametric test for detecting changes in the distribution of independent block maxima.","Published":"2015-07-24","License":"GPL (>= 3) | file LICENCE","snapshot_date":"2017-06-23"} {"Package":"npde","Version":"2.0","Title":"Normalised prediction distribution errors for nonlinear\nmixed-effect models","Description":"Routines to compute normalised prediction distribution\n errors, a metric designed to evaluate non-linear mixed effect\n models such as those used in pharmacokinetics and\n pharmacodynamics","Published":"2012-10-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"NPflow","Version":"0.12.0","Title":"Bayesian Nonparametrics for Automatic Gating of Flow-Cytometry\nData","Description":"Dirichlet process mixture of multivariate normal, skew normal or skew t-distributions\n modeling oriented towards flow-cytometry data pre-processing applications.","Published":"2017-04-04","License":"LGPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"NPHMC","Version":"2.2","Title":"Sample Size Calculation for the Proportional Hazards Mixture\nCure Model","Description":"An R-package for calculating sample size of a survival trial with or without cure fractions","Published":"2013-09-24","License":"GPL-2","snapshot_date":"2017-06-23"} 
{"Package":"npIntFactRep","Version":"1.5","Title":"Nonparametric Interaction Tests for Factorial Designs with\nRepeated Measures","Description":"Returns nonparametric aligned rank tests for the interaction in two-way factorial designs, on R data sets with repeated measures in 'wide' format. Five ANOVA tables are reported: a PARAMETRIC one on the original data, one for a CHECK upon the interaction alignments, and three aligned rank tests, namely one on the aligned REGULAR, one on the FRIEDMAN, and one on the KOCH ranks. In these rank tests, only the resulting values for the interaction are relevant.","Published":"2015-12-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"nplplot","Version":"4.5","Title":"Plotting linkage and association results","Description":"This package provides routines for plotting\n linkage and association results along a chromosome,\n with marker names displayed along the top border.\n There are also routines for generating BED and BedGraph\n custom tracks for viewing in the UCSC genome browser.\n The data reformatting program Mega2 uses this\n package to plot output from a variety of\n programs. ","Published":"2014-05-01","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"nplr","Version":"0.1-7","Title":"N-Parameter Logistic Regression","Description":"Performing drug response analyses and IC50 estimations using n-Parameter logistic regression. Can also be applied to proliferation analyses.","Published":"2016-12-28","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"NPMLEcmprsk","Version":"2.1","Title":"Type-Specific Failure Rate and Hazard Rate on Competing Risks\nData","Description":"Given a failure type, the function computes covariate-specific probability of failure over time and covariate-specific conditional hazard rate based on possibly right-censored competing risk data. 
Specifically, it computes the non-parametric maximum-likelihood estimates of these quantities and their asymptotic variances in a semi-parametric mixture model for competing-risks data, as described in Chang et al. (2007a).","Published":"2015-04-29","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"npmlreg","Version":"0.46-1","Title":"Nonparametric maximum likelihood estimation for random effect\nmodels","Description":"Nonparametric maximum likelihood estimation or Gaussian\n quadrature for overdispersed generalized linear models and\n variance component models","Published":"2014-10-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"NPMOD","Version":"0.1.0","Title":"Non Parametric Module","Description":"Non-Parametric Module (NPMOD) is a graphical interface dedicated to the testing of non-parametric data for educational purposes.","Published":"2017-05-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"NPMPM","Version":"1.0","Title":"tertiary probabilistic model in predictive microbiology for use\nin food manufacture","Description":"The main method npmpm calculates bacterial concentrations\n during food manufacture after a contamination. Variability and\n uncertainty are included by use of probability distributions\n and Monte Carlo Simulation. The model aims at predicting\n possible bacterial concentrations at one certain point in time\n s, e.g. at the end of a process chain. The process steps of\n this process chain are run through in linear order.\n Experimental data that match current process step conditions\n are gathered, and one deterministic primary model is fitted to\n every series of measured values. From every fitted curve one\n concentration of bacteria at time s is computed, yielding a set\n of concentrations. This sample of possible contamination sizes\n is assumed to follow a certain probability distribution. 
After\n calculation of distribution parameters, one value is randomly\n drawn from this probability distribution. This value may be\n modified, and then serves as contamination for the next process\n step.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"npmr","Version":"1.1","Title":"Nuclear Penalized Multinomial Regression","Description":"Fit multinomial logistic regression with a penalty on the nuclear\n norm of the estimated regression coefficient matrix, using proximal\n gradient descent.","Published":"2016-08-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"npmv","Version":"2.4.0","Title":"Nonparametric Comparison of Multivariate Samples","Description":"Performs analysis of one-way multivariate data, for small samples, using nonparametric techniques. Using approximations for the ANOVA Type, Wilks' Lambda, Lawley Hotelling, and Bartlett Nanda Pillai test statistics, the package compares the multivariate distributions for a single explanatory variable. The comparison is also performed using a permutation test for each of the four test statistics. The package also performs an all-subsets algorithm regarding variables and regarding factor levels. 
","Published":"2017-01-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"NPMVCP","Version":"1.1","Title":"Nonparametric Multivariate Change Point Model","Description":"Nonparametric Multivariate Change Point Model","Published":"2013-06-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"nppbib","Version":"1.0-0","Title":"Nonparametric Partially-Balanced Incomplete Block Design\nAnalysis","Description":"Implements a nonparametric statistical test for rank or\n score data from partially-balanced incomplete block-design\n experiments.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"npregfast","Version":"1.4.0","Title":"Nonparametric Estimation of Regression Models with\nFactor-by-Curve Interactions","Description":"A method for obtaining nonparametric estimates of regression models\n with or without factor-by-curve interactions using local polynomial kernel\n smoothers or splines. Additionally, a parametric model (allometric model) can be\n estimated.","Published":"2016-11-18","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"nprobust","Version":"0.0.1","Title":"Robust Data-Driven Statistical Inference for Local Polynomial\nRegression and Kernel Density Estimation","Description":"Tools for data-driven analytical statistical inference for Local Polynomial Regression estimators and Kernel Density Estimation.","Published":"2016-11-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"nproc","Version":"2.0.6","Title":"Neyman-Pearson Receiver Operating Characteristics","Description":"Given a sample of class 0 and class 1 and a classification method, the package generates the corresponding Neyman-Pearson classifier with a pre-specified type-I error control and Neyman-Pearson Receiver Operating Characteristics.","Published":"2017-03-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"npROCRegression","Version":"1.0-5","Title":"Kernel-Based Nonparametric ROC 
Regression Modelling","Description":"Implements several nonparametric regression approaches for the inclusion of covariate information on the receiver operating characteristic (ROC) framework.","Published":"2017-04-23","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"NPS","Version":"1.1","Title":"Convenience Functions and Tests for Working With the Net\nPromoter Score (NPS)","Description":"Small functions to make working with survey data in the context of\n a Net Promoter programme easier. Specifically, data transformation methods,\n some methods for examining the statistical properties of the NPS, such as\n its variance and standard errors, and some simple inferential testing\n procedures. Net Promoter and NPS are registered trademarks of Bain &\n Company, Satmetrix Systems and Fred Reichheld.","Published":"2014-12-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"npsf","Version":"0.2.0","Title":"Nonparametric and Stochastic Efficiency and Productivity\nAnalysis","Description":"Provides a variety of tools for nonparametric and parametric efficiency measurement.","Published":"2017-01-19","License":"GPL-2 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"NPsimex","Version":"0.2-1","Title":"Nonparametric Smoothing for contaminated data using\nSimulation-Extrapolation","Description":"This package contains a collection of functions to perform nonparametric deconvolution using simulation extrapolation (SIMEX). We propose an estimator that adopts the SIMEX idea but bypasses the simulation step in the original SIMEX algorithm. There is no bandwidth parameter and the estimate is determined by appropriately selecting \"design points\". See details in: Wang, X.F., Sun, J. and Fan, Z. (2011). 
Deconvolution density estimation with heteroscedastic errors using SIMEX.","Published":"2011-11-04","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"npsm","Version":"0.5","Title":"Package for Nonparametric Statistical Methods using R","Description":"Functions and datasets used in the book Nonparametric Statistical Methods Using R.","Published":"2014-11-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"npsp","Version":"0.5-3","Title":"Nonparametric Spatial Statistics","Description":"Multidimensional nonparametric spatio-temporal (geo)statistics.\n S3 classes and methods for multidimensional: linear binning,\n local polynomial kernel regression, density and variogram estimation.\n Nonparametric methods for trend and variogram inference.","Published":"2016-09-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"npst","Version":"2.0","Title":"Generalization of Hewitt's Seasonality Test","Description":"Package 'npst' generalizes Hewitt's (1971) test for seasonality and\n Rogerson's (1996) extension based on Monte-Carlo simulation.","Published":"2014-02-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"npsurv","Version":"0.3-4","Title":"Non-Parametric Survival Analysis","Description":"Contains functions for non-parametric survival analysis of\n\t exact and interval-censored observations. 
","Published":"2015-11-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"nricens","Version":"1.3","Title":"NRI for Risk Prediction Models with Time to Event and Binary\nResponse Data","Description":"Calculating the net reclassification improvement (NRI) for risk prediction models with time to event and binary data.","Published":"2016-12-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"NSA","Version":"0.0.32","Title":"Post-normalization of total copy numbers","Description":"Post-normalization of total copy-number estimates.","Published":"2012-12-21","License":"LGPL (>= 2.1)","snapshot_date":"2017-06-23"} {"Package":"nscancor","Version":"0.6","Title":"Non-Negative and Sparse CCA","Description":"This package implements two algorithms for canonical correlation\n analysis (CCA) that are based on iterated regression\n steps. By choosing the appropriate regression algorithm for each data\n modality, it is possible to enforce sparsity, non-negativity or other kinds\n of constraints on the projection vectors. Multiple canonical variables are\n computed sequentially using a generalized deflation scheme, where the\n additional correlation not explained by previous variables is maximized.\n 'nscancor' is used to analyze paired data from two domains, and has the same\n interface as the 'cancor' function from the 'stats' package (plus some extra\n parameters). 'mcancor' is appropriate for analyzing data from three or more\n domains.","Published":"2014-07-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"NScluster","Version":"1.1.1","Title":"Simulation and Estimation of the Neyman-Scott Type Spatial\nCluster Models","Description":"Simulation and estimation for Neyman-Scott spatial cluster point process models and their extensions. 
For estimating parameters by the simplex method, parallel computation using the OpenMP application programming interface is available.","Published":"2016-12-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"nscprepr","Version":"0.1.1","Title":"Prepares and Writes Files to Submit to the National Student\nClearinghouse","Description":"Prepares and writes files to submit to the National Student Clearinghouse's \n StudentTracker service .","Published":"2017-06-14","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"nse","Version":"1-00.17","Title":"Numerical Standard Errors Computation in R","Description":"Collection of functions designed to calculate numerical standard error (NSE) of univariate time series \n as described in Ardia et al. (2016) and Ardia and Bluteau (2017) .","Published":"2017-02-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"nsga2R","Version":"1.0","Title":"Elitist Non-dominated Sorting Genetic Algorithm based on R","Description":"This package provides functions for box-constrained\n multiobjective optimization using the elitist non-dominated\n sorting genetic algorithm - NSGA-II. Fast non-dominated\n sorting, crowding distance, tournament selection, simulated\n binary crossover, and polynomial mutation are called in the\n main program, nsga2R, to complete the search.","Published":"2013-06-16","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"nsgp","Version":"1.0.5","Title":"Non-Stationary Gaussian Process Regression","Description":"A Gaussian process regression using a Gaussian kernel for both\n one-sample and two-sample cases. 
Includes non-stationary Gaussian kernel\n (exponential decay function) and several likelihood ratio tests for\n differential testing along target points.","Published":"2014-10-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"NSM3","Version":"1.9","Title":"Functions and Datasets to Accompany Hollander, Wolfe, and\nChicken - Nonparametric Statistical Methods, Third Edition","Description":"Designed to replace the tables which were in the back of the first two editions of Hollander and Wolfe - Nonparametric Statistical Methods. Exact procedures are performed when computationally possible. Monte Carlo and Asymptotic procedures are performed otherwise. For those procedures included in the base packages, our code simply provides a wrapper to standardize the output with the other procedures in the package.","Published":"2016-10-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"nspmix","Version":"1.3-0","Title":"Nonparametric and Semiparametric Mixture Estimation","Description":"Contains functions for maximum likelihood estimation of\n\t nonparametric and semiparametric mixture models. ","Published":"2017-04-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"nsprcomp","Version":"0.5","Title":"Non-Negative and Sparse PCA","Description":"This package implements two methods for performing a constrained\n principal component analysis (PCA), where non-negativity and/or sparsity\n constraints are enforced on the principal axes (PAs). The function\n 'nsprcomp' computes one principal component (PC) after the other. Each PA\n is optimized such that the corresponding PC has maximum additional variance\n not explained by the previous components. In contrast, the function\n 'nscumcomp' jointly computes all PCs such that the cumulative variance is\n maximal. 
Both functions have the same interface as the 'prcomp' function\n from the 'stats' package (plus some extra parameters), and both return the\n result of the analysis as an object of class 'nsprcomp', which inherits\n from 'prcomp'.","Published":"2014-07-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"nsRFA","Version":"0.7-12","Title":"Non-supervised Regional Frequency Analysis","Description":"A collection of statistical tools for objective (non-supervised) applications \n of the Regional Frequency Analysis methods in hydrology. \n The package refers to the index-value method and, more precisely, helps the\n hydrologist to: (1) regionalize the index-value; (2) form homogeneous regions \n with similar growth curves; (3) fit distribution functions to the \n empirical regional growth curves.","Published":"2014-08-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"nsROC","Version":"1.0","Title":"Non-Standard ROC Curve Analysis","Description":"Tools for estimating Receiver Operating Characteristic (ROC) curves,\n building confidence bands, comparing several curves both for dependent and \n independent data, estimating the cumulative-dynamic ROC curve in presence of\n censored data, and performing meta-analysis studies, among others.","Published":"2017-06-23","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"NSUM","Version":"1.0","Title":"Network Scale Up Method","Description":"A Bayesian framework for population group size estimation using the Network Scale Up Method (NSUM). Size estimates are based on a random degree model and include options to adjust for barrier and transmission effects.","Published":"2015-03-03","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"nucim","Version":"1.0.0","Title":"Nucleome Imaging Toolbox","Description":"Tools for 4D nucleome imaging. 
Quantitative analysis of the 3D nuclear landscape recorded with super-resolved fluorescence microscopy.","Published":"2016-12-05","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"NUCOMBog","Version":"1.0.4","Title":"NUtrient Cycling and COMpetition Model Undisturbed Open Bog\nEcosystems in a Temperate to Sub-Boreal Climate","Description":"Modelling the vegetation, carbon, nitrogen and water dynamics of undisturbed open bog ecosystems in a temperate to sub-boreal climate. The executable of the model can downloaded from .","Published":"2017-05-02","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"nullabor","Version":"0.3.1","Title":"Tools for Graphical Inference","Description":"Tools for visual inference. Generate null data sets\n and null plots using permutation and simulation. Calculate distance metrics\n for a lineup, and examine the distributions of metrics.","Published":"2014-12-17","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"numbers","Version":"0.6-6","Title":"Number-Theoretic Functions","Description":"\n Provides number-theoretic functions for factorization, prime numbers,\n twin primes, primitive roots, modular inverses, extended GCD, Farey \n series and continuous fractions. Included are some divisor functions,\n or Euler's Phi function and Egyptian fractions.","Published":"2017-01-10","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"numDeriv","Version":"2016.8-1","Title":"Accurate Numerical Derivatives","Description":"Methods for calculating (usually) accurate\n\tnumerical first and second order derivatives. Accurate calculations \n\tare done using 'Richardson''s' extrapolation or, when applicable, a\n\tcomplex step derivative is available. A simple difference \n\tmethod is also provided. Simple difference is (usually) less accurate\n\tbut is much quicker than 'Richardson''s' extrapolation and provides a \n\tuseful cross-check. \n\tMethods are provided for real scalar and vector valued functions. 
","Published":"2016-08-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"numGen","Version":"0.1.0","Title":"Number Series Generator","Description":"A number series generator that creates number series items based on cognitive models.","Published":"2017-05-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"numKM","Version":"0.1.0","Title":"Create a Kaplan-Meier Plot with Numbers at Risk","Description":"To add the table of numbers at risk below the Kaplan-Meier plot.","Published":"2017-05-19","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"numOSL","Version":"2.3","Title":"Numeric Routines for Optically Stimulated Luminescence Dating","Description":"Package for optimizing regular numeric problems in optically stimulated luminescence \n dating, such as: equivalent dose calculation, dose rate determination, growth curve fitting,\n decay curve decomposition, statistical age model optimization, and statistical plot visualization.","Published":"2017-05-18","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"nutshell","Version":"2.0","Title":"Data for \"R in a Nutshell\"","Description":"This package contains data sets used as examples in the\n book \"R in a Nutshell\" from O'Reilly Media. 
For more\n information on this book, see\n http://shop.oreilly.com/product/0636920022008.do","Published":"2012-12-12","License":"CC BY-NC-ND 3.0 US","snapshot_date":"2017-06-23"} {"Package":"nutshell.audioscrobbler","Version":"1.0","Title":"Audioscrobbler data for \"R in a Nutshell\"","Description":"This package contains the Audio Scrobbler data set used as\n an example in the book \"R in a Nutshell\" from O'Reilly Media.\n For more information about this book, see\n http://shop.oreilly.com/product/0636920022008.do","Published":"2012-12-12","License":"CC BY-NC-SA 3.0","snapshot_date":"2017-06-23"} {"Package":"nutshell.bbdb","Version":"1.0","Title":"Baseball Database for \"R in a Nutshell\"","Description":"This package contains the baseball databank data set used\n as an example in the book \"R in a Nutshell\" from O'Reilly\n Media. For more information about this book, see\n http://shop.oreilly.com/product/0636920022008.do","Published":"2012-12-12","License":"CC BY-NC-ND 3.0 US","snapshot_date":"2017-06-23"} {"Package":"nws","Version":"1.7.0.1","Title":"R functions for NetWorkSpaces and Sleigh","Description":"Provides coordination and parallel execution facilities,\n as well as limited cross-language data exchange, using the\n netWorkSpaces server developed by REvolution Computing","Published":"2010-04-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"nycflights13","Version":"0.2.2","Title":"Flights that Departed NYC in 2013","Description":"Airline on-time data for all flights departing NYC in 2013.\n Also includes useful 'metadata' on airlines, airports, weather, and planes.","Published":"2017-01-27","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"nzelect","Version":"0.3.3","Title":"New Zealand Election Data","Description":"Convenient access to New Zealand election\n results by voting place. 
Voting places have been matched to Regional Council,\n Territorial Authority, and Area Unit, to facilitate matching with additional data.\n Opinion polls since 2002 and some convenience analytical functions are also supplied.","Published":"2017-03-28","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"nzpullover","Version":"0.0.2","Title":"Driving Offences in New Zealand Between 2009 and 2016","Description":"Datasets of driving offences and fines in New Zealand between 2009 and 2016.\n Originally published by the New Zealand Police at\n .","Published":"2017-01-29","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"oaColors","Version":"0.0.4","Title":"OpenAnalytics Colors Package","Description":"Provides carefully chosen color palettes as used a.o. at OpenAnalytics .","Published":"2015-11-30","License":"GPL-3 + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"oai","Version":"0.2.2","Title":"General Purpose 'Oai-PMH' Services Client","Description":"A general purpose client to work with any 'OAI-PMH'\n (Open Archives Initiative Protocol for 'Metadata' Harvesting) service.\n The 'OAI-PMH' protocol is described at\n .\n Functions are provided to work with the 'OAI-PMH' verbs: 'GetRecord',\n 'Identify', 'ListIdentifiers', 'ListMetadataFormats', 'ListRecords', and\n 'ListSets'.","Published":"2016-11-24","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"OAIHarvester","Version":"0.2-2","Title":"Harvest Metadata Using OAI-PMH v2.0","Description":"\n Harvest metadata using the Open Archives Initiative Protocol for Metadata\n Harvesting (OAI-PMH) version 2.0.","Published":"2017-01-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"oapackage","Version":"2.0.23","Title":"Orthogonal Array Package","Description":"Interface to D-optimal design generation code of the Orthogonal Array package. Can generate D-optimal designs with specified number of runs and factors. 
The optimality of the designs is defined in terms of a user-specified optimization function based on the D-efficiency and Ds-efficiency.","Published":"2015-07-13","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"oaPlots","Version":"0.0.25","Title":"OpenAnalytics Plots Package","Description":"Offers a suite of functions for enhancing R plots.","Published":"2015-11-30","License":"GPL-3 + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"Oarray","Version":"1.4-5","Title":"Arrays with arbitrary offsets","Description":"Generalise the starting point of the array index","Published":"2013-01-21","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"oasis","Version":"2.1","Title":"Multiple Sclerosis Lesion Segmentation using Magnetic Resonance\nImaging (MRI)","Description":"Trains and makes predictions from the OASIS method, described in\n detail in the paper \"OASIS is Automated Statistical Inference for Segmentation,\n with applications to multiple sclerosis lesion segmentation in MRI\" \n . \n OASIS is a method for multiple sclerosis (MS)\n lesion segmentation on structural magnetic resonance image (MRI) studies. OASIS\n creates probability maps of lesion presence using the FLAIR, T2, T1, and PD\n structural MRI volumes. This package allows for training of the OASIS model\n and prediction of OASIS probability maps from a trained model with user supplied\n studies that have gold standard lesion segmentation masks. 
The package will\n also create OASIS probability maps for MRI studies using the OASIS model from\n the OASIS paper if no gold standard lesion segmentation masks are available.","Published":"2016-09-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"OasisR","Version":"2.0.1","Title":"Outright Tool for the Analysis of Spatial Inequalities and\nSegregation","Description":"A set of indexes and tests for the analysis of social segregation.","Published":"2016-08-25","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"oaxaca","Version":"0.1.3","Title":"Blinder-Oaxaca Decomposition","Description":"An implementation of the Blinder-Oaxaca decomposition for linear regression models.","Published":"2016-01-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"obAnalytics","Version":"0.1.1","Title":"Limit Order Book Analytics","Description":"Data processing, visualisation and analysis of Limit Order Book\n event data.","Published":"2016-11-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"objectProperties","Version":"0.6.5","Title":"A factory of self-describing properties","Description":"Supports the definition of sets of properties on objects. Observers can listen to changes on individual properties or the set as a whole. The properties are meant to be fully self-describing. In support of this, there is a framework for defining enumerated types, as well as other bounded types, as S4 classes.","Published":"2011-12-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"objectSignals","Version":"0.10.2","Title":"objectSignals","Description":"A mutable Signal object can report changes to its state;\n clients can register functions so that they are called whenever\n the signal is emitted. 
The signal can be emitted, disconnected,\n blocked, unblocked, and buffered.","Published":"2011-10-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"obliclus","Version":"0.9","Title":"Cluster-based factor rotation","Description":"This package conducts factor rotation techniques which\n intend to identify a simple and well-clustered structure in a\n factor loading matrix.","Published":"2012-07-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"obliqueRF","Version":"0.3","Title":"Oblique Random Forests from Recursive Linear Model Splits","Description":"Random forest with oblique decision trees for binary\n classification tasks. Discriminative node models in the tree\n are based on: ridge regression, partial least squares\n regression, logistic regression, linear support vector\n machines, or random coefficients.","Published":"2012-08-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"obs.agree","Version":"1.0","Title":"An R package to assess agreement between observers","Description":"The package includes two functions for measuring agreement: Raw Agreement Indices (RAI) for categorical data and the Information-Based Measure of Disagreement (IBMD) for continuous data. It can be used for multiple raters and multiple readings cases.","Published":"2013-12-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"observer","Version":"0.1.2","Title":"Observe and Check your Data","Description":"Checks that a given dataset passes user-specified \n rules. The main functions are observe_if() and inspect(). ","Published":"2017-01-29","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"OBsMD","Version":"0.2-0.00","Title":"Objective Bayesian Model Discrimination in Follow-Up Designs","Description":"Implements the objective Bayesian methodology proposed in Consonni and Deldossi in order to choose the optimal experiment that best discriminates between competing models. G.Consonni, L. 
Deldossi (2014) Objective Bayesian Model Discrimination in Follow-up Experimental Designs, Test. .","Published":"2017-04-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"obsSens","Version":"1.3","Title":"Sensitivity analysis for Observational studies","Description":"Observational studies are limited in that there could be\n an unmeasured variable related to both the response variable\n and the primary predictor. If this unmeasured variable were\n included in the analysis it would change the relationship\n (possibly changing the conclusions). Sensitivity analysis is a\n way to see how much of a relationship needs to exist with the\n unmeasured variable before the conclusions change. This\n package provides tools for doing a sensitivity analysis for\n regression (linear, logistic, and cox) style models.","Published":"2013-01-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"oc","Version":"0.96","Title":"OC Roll Call Analysis Software","Description":"Estimates Optimal Classification scores from roll call\n votes supplied through a 'rollcall' object from package 'pscl'.","Published":"2016-07-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"OCA","Version":"0.1","Title":"Optimal Capital Allocations","Description":"Computes optimal capital allocations based on some standard principles such as Haircut, Overbeck type II and the Covariance Allocation Principle. It also provides some shortcuts for obtaining the Value at Risk and the Expected Shortfall, using both the normal and the t-student distribution, see Urbina and Guillén (2014) and Urbina (2013).","Published":"2017-02-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"occ","Version":"1.0","Title":"Estimates PET neuroreceptor occupancies","Description":"This package provides a generic function for estimating\n positron emission tomography (PET) neuroreceptor occupancies\n from the total volumes of distribution of a set of regions of\n interest. 
Fitting methods include the simple 'reference\n region' and 'ordinary least squares' (sometimes known as\n occupancy plot) methods, as well as the more efficient\n 'restricted maximum likelihood estimation'.","Published":"2014-12-01","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"oce","Version":"0.9-21","Title":"Analysis of Oceanographic Data","Description":"Supports the analysis of Oceanographic data, including 'ADCP'\n measurements, measurements made with 'argo' floats, 'CTD' measurements,\n sectional data, sea-level time series, coastline and topographic data, etc.\n Provides specialized functions for calculating seawater properties such as\n potential temperature in either the 'UNESCO' or 'TEOS-10' equation of state.\n Produces graphical displays that conform to the conventions of the Oceanographic\n literature.","Published":"2017-03-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"oceanmap","Version":"0.0.8","Title":"A Plotting Toolbox for 2D Oceanographic Data","Description":"Plotting toolbox for 2D oceanographic data (satellite data, sst, chla, ocean fronts & bathymetry). Recognized classes and formats include ncdf4, Raster, '.nc' and '.gz' files.","Published":"2017-06-21","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"OceanView","Version":"1.0.4","Title":"Visualisation of Oceanographic Data and Model Output","Description":"Functions for transforming and viewing 2-D and 3-D (oceanographic) data and model output.","Published":"2016-01-18","License":"GPL (>= 3.0)","snapshot_date":"2017-06-23"} {"Package":"ocedata","Version":"0.1.3","Title":"Oceanographic Datasets for Oce","Description":"Several important Oceanographic datasets are provided. 
These are particularly useful to the Oce package, but can be helpful in a general context, as well.","Published":"2015-05-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ockc","Version":"1.0","Title":"Order Constrained Solutions in k-Means Clustering","Description":"Extends 'flexclust' with an R implementation of order constrained\n solutions in k-means clustering (Steinley and Hubert, 2008, ).","Published":"2016-11-22","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"ocomposition","Version":"1.1","Title":"Regression for Rank-Indexed Compositional Data","Description":"Regression model where the response variable is a rank-indexed compositional vector (non-negative values that sum up to one and are ordered from the largest to the smallest). Parameters are estimated in the Bayesian framework using MCMC methods. ","Published":"2015-08-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"OData","Version":"0.6","Title":"R Helper for OData Web Services","Description":"Helper methods for accessing data from web services based on the OData Protocol.\n It provides several helper methods to access the service metadata, the data from datasets and to download some file resources (it only supports CSV for now).\n For more information about OData go to .","Published":"2016-12-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ODB","Version":"1.1.1","Title":"Open Document Databases (.odb) management","Description":"This package provides functions to create, connect, update\n and query HSQL databases embedded in Open Document Databases\n (.odb) files, as OpenOffice and LibreOffice do.","Published":"2012-07-23","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"odbc","Version":"1.1.0","Title":"Connect to ODBC Compatible Databases (using the DBI Interface)","Description":"A DBI-compatible interface to ODBC databases.","Published":"2017-06-23","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} 
{"Package":"odds.converter","Version":"1.4","Title":"Betting Odds Conversion","Description":"Conversion between the most common odds types for sports betting.\n Hong Kong odds, US odds, Decimal odds, Indonesian odds, Malaysian odds, and raw\n Probability are covered in this package.","Published":"2017-01-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"oddsratio","Version":"1.0.0","Title":"Odds Ratio Calculation for GAM(M)s & GLM(M)s","Description":"Simplified odds ratio calculation of GAM(M)s & GLM(M)s. \n Provides structured output (data frame) of all predictors and their corresponding odds ratios and confidence intervals\n for further analyses. It helps to avoid false references of predictors and increments by\n specifying these parameters in a list instead of using 'exp(coef(model))' \n (standard approach of odds ratio calculation for GLMs) which just returns a plain numeric output. \n For GAM(M)s, odds ratio calculation is highly simplified with this package since it takes care of\n the multiple 'predict()' calls of the chosen predictor while holding other predictors constant.\n Also, this package allows odds ratio calculation of percentage steps across the whole\n predictor distribution range for GAM(M)s. In both cases, confidence intervals are returned additionally.\n Calculated odds ratios of GAM(M)s can be inserted into the smooth function plot. 
","Published":"2017-06-12","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"odeintr","Version":"1.7.1","Title":"C++ ODE Solvers Compiled on-Demand","Description":"Wraps the Boost odeint library for integration of differential\n equations.","Published":"2017-03-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"odfWeave","Version":"0.8.4","Title":"Sweave processing of Open Document Format (ODF) files","Description":"Sweave processing of Open Document Format (ODF) files","Published":"2014-01-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"odfWeave.survey","Version":"1.0","Title":"Support for odfWeave on the survey package","Description":"Provides methods for odfTable() for objects in the survey\n package.","Published":"2009-05-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ODMconverter","Version":"2.3","Title":"Tools to Convert ODM Files","Description":"Transformation of 'ODM' (see ) files into R format, Microsoft Office format, 'CDA' format and vice versa. 'ODM' format is commonly used in clinical trials. Semantic annotations (such as 'UMLS', 'SNOMED' or 'LOINC' codes) are transformed accordingly.","Published":"2016-12-31","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"oec","Version":"2.2","Title":"Use the Observatory of Economic Complexity's API in R","Description":"Use The Observatory of Economic Complexity's API in R to download international trade data in csv format and create D3Plus visualizations.","Published":"2016-11-18","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"OECD","Version":"0.2.2","Title":"Search and Extract Data from the OECD","Description":"Search and extract data from the OECD.","Published":"2016-01-17","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"oem","Version":"2.0.5","Title":"Orthogonalizing EM","Description":"Solves penalized least squares problems for big tall data using the orthogonalizing EM algorithm of Xiong et al. 
(2016) . The main fitting function is oem() and the functions cv.oem() and xval.oem() are for cross validation, the latter being an accelerated cross validation function for linear models. The big.oem() function allows for out of memory fitting.","Published":"2017-03-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"officer","Version":"0.1.4","Title":"Manipulation of Microsoft Word and PowerPoint Documents","Description":"Access and manipulate 'Microsoft Word' and 'Microsoft PowerPoint' documents from R. \n The package focuses on tabular and graphical reporting from R; it also provides two functions \n that let users get document content into data objects. A set of functions \n lets users add and remove images, tables and paragraphs of text in new or existing documents. \n When working with 'PowerPoint' presentations, slides can be added or removed; shapes inside \n slides can also be added or removed. When working with 'Word' documents, a cursor can be \n used to help insert or delete content at a specific location in the document. The package \n does not require any installation of a Microsoft product to be able to write Microsoft files.","Published":"2017-06-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"oglmx","Version":"2.0.0.3","Title":"Estimation of Ordered Generalized Linear Models","Description":"Ordered models such as ordered probit and ordered logit presume that the error variance is constant across observations. In the case that this assumption does not hold estimates of marginal effects are typically biased (Weiss (1997)). This package allows for generalization of ordered probit and ordered logit models by allowing the user to specify a model for the variance. Furthermore, the package includes functions to calculate the marginal effects. 
Wrapper functions to estimate the standard limited dependent variable models are also included.","Published":"2017-03-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Ohmage","Version":"2.11-4","Title":"R Client for Ohmage 2 server","Description":"R Client for Ohmage 2 server. Implements basic R\n functions to retrieve and process data.","Published":"2014-09-10","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"OIdata","Version":"1.0","Title":"Data sets and supplements (OpenIntro)","Description":"A collection of data sets from several sources that may be\n useful for teaching, practice, or other purposes. Functions\n have also been included to assist in the retrieval of table\n data from websites or in visualizing sample data.","Published":"2012-05-31","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"oii","Version":"1.0.1","Title":"Crosstab and Statistical Tests for OII MSc Stats Course","Description":"Provides simple crosstab output with optional statistics (e.g.,\n Goodman-Kruskal Gamma, Somers' d, and Kendall's tau-b) as well as two-way\n and one-way tables. 
The package is used within the statistics component of\n the Masters of Science (MSc) in Social Science of the Internet at the Oxford\n Internet Institute (OII), University of Oxford, but the functions should be\n useful for general data analysis and especially for analysis of categorical and\n ordinal data.","Published":"2016-11-08","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"OIsurv","Version":"0.2","Title":"Survival analysis supplement to OpenIntro guide","Description":"Supplemental functions and data for the OpenIntro guide to the survival package in R.","Published":"2013-10-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"OjaNP","Version":"0.9-9","Title":"Multivariate Methods Based on the Oja Median and Related\nConcepts","Description":"Functions to calculate the Oja median, Oja signs and ranks and methods based upon them.","Published":"2016-05-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"okcupiddata","Version":"0.1.0","Title":"OkCupid Profile Data for Introductory Statistics and Data\nScience Courses","Description":"Cleaned profile data from \"OkCupid Profile Data for Introductory\n Statistics and Data Science Courses\" (Journal of Statistics Education 2015\n ).","Published":"2016-08-19","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"okmesonet","Version":"0.1.5","Title":"Retrieve Oklahoma Mesonet climatological data","Description":"okmesonet retrieves and summarizes Oklahoma (USA) Mesonet\n climatological data provided by the Oklahoma Climatological Survey. \n Measurements are recorded every five minutes at approximately 120 stations\n throughout Oklahoma and are available in near real-time.","Published":"2014-12-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"olctools","Version":"0.3.0","Title":"Open Location Code Handling in R","Description":"'Open Location Codes' \n are a Google-created standard for identifying geographic locations. 
'olctools' provides\n utilities for validating, encoding and decoding entries that follow this\n standard.","Published":"2016-05-09","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"OligoSpecificitySystem","Version":"1.3","Title":"Oligo Specificity System","Description":"Calculate the theoretical specificity of a system of\n multiple primers used for PCR, qPCR primers or degenerate\n primer design","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"OLScurve","Version":"0.2.0","Title":"OLS growth curve trajectories","Description":"Provides tools for more easily organizing and\n plotting individual ordinary least square (OLS) growth curve\n trajectories.","Published":"2014-02-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"olsrr","Version":"0.2.0","Title":"Tools for Teaching and Learning OLS Regression","Description":"Tools for teaching and learning ordinary least squares regression. Includes \n comprehensive regression output, heteroskedasticity tests, collinearity diagnostics, \n residual diagnostics, measures of influence, model fit assessment and variable selection procedures.","Published":"2017-06-05","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"omd","Version":"1.0","Title":"filter the molecular descriptors for QSAR","Description":"This package includes two useful functions, which can be\n used to filter the molecular descriptor matrix for QSAR.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"OmicKriging","Version":"1.4.0","Title":"Poly-Omic Prediction of Complex TRaits","Description":"It provides functions to generate a correlation matrix\n from a genetic dataset and to use this matrix to predict the phenotype of an\n individual by using the phenotypes of the remaining individuals through\n kriging. Kriging is a geostatistical method for optimal prediction or best\n unbiased linear prediction. 
It consists of predicting the value of a\n variable at an unobserved location as a weighted sum of the variable at\n observed locations. Intuitively, it works as a reverse linear regression:\n instead of computing correlation (univariate regression coefficients are\n simply scaled correlation) between a dependent variable Y and independent\n variables X, it uses known correlation between X and Y to predict Y.","Published":"2016-03-08","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"omics","Version":"0.1-5","Title":"'--omics' Data Analysis Toolbox","Description":"A collection of functions to analyse '--omics' datasets such as DNA\n methylation and gene expression profiles.","Published":"2016-11-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"OmicsPLS","Version":"1.0.1","Title":"Perform Two-Way Orthogonal Partial Least Squares","Description":"Performs the O2PLS data integration method for two datasets yielding joint and data-specific parts for each dataset.\n The algorithm automatically switches to a memory-efficient approach to fit O2PLS to high dimensional data.\n It provides a rigorous and a faster alternative cross-validation method to select the number of components,\n as well as functions to report proportions of explained variation and to construct plots of your results.\n See Trygg and Wold (2003) and el Bouhaddani et al (2016) .","Published":"2017-05-09","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ompr","Version":"0.6.0","Title":"Model and Solve Mixed Integer Linear Programs","Description":"Model mixed integer linear programs in an algebraic way directly in R.\n The model is solver-independent and thus offers the possibility\n to solve a model with different solvers. It currently only supports\n linear constraints and objective functions. 
See the 'ompr'\n website for more information, \n documentation and examples.","Published":"2017-04-17","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ompr.roi","Version":"0.6.0","Title":"A Solver for 'ompr' that Uses the R Optimization Infrastructure\n('ROI')","Description":"A solver for 'ompr' based on the R Optimization Infrastructure ('ROI').\n The package makes all solvers in 'ROI' available to solve 'ompr' models. Please see the\n 'ompr' website and package docs for more information\n and examples on how to use it.","Published":"2017-04-19","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"oncomodel","Version":"1.0","Title":"Maximum likelihood tree models for oncogenesis","Description":"Computing probabilistic tree models for oncogenesis based\n on genetic data using maximum likelihood.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Oncotree","Version":"0.3.3","Title":"Estimating oncogenetic trees","Description":"Contains functions to construct and evaluate directed tree structures that\n model the process of occurrence of genetic alterations during carcinogenesis.","Published":"2013-12-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"OneArmPhaseTwoStudy","Version":"0.1.6","Title":"Planning, Monitoring and Evaluating Oncological Phase 2 Studies","Description":"The purpose of this package is to plan, monitor and evaluate\n oncological phase II studies. In general these kinds of studies are single-arm\n trials with a planned interim analysis and a binary endpoint. To meet the resulting\n requirements, the package provides functions to calculate and evaluate 'Simon's\n two-stage designs' and 'so-called' 'subset designs'. 
If you are unfamiliar with\n this package a good starting point is to take a closer look at the functions\n getSolutions() and getSolutionsSub1(). The web-based tool (https://imbi.shinyapps.io/phaseII-app/)\n extends the functionality of our R package by properly dealing with over- and underrunning.\n The R function binom.test of the 'stats' R package and the package 'binom' might be \n helpful to assess the performance of the corresponding one-stage design as a reference.","Published":"2016-05-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"onehot","Version":"0.1.1","Title":"Fast Onehot Encoding for Data.frames","Description":"Quickly create numeric matrices for machine learning algorithms\n that require them. It converts factor columns into onehot vectors.","Published":"2017-05-02","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"onemap","Version":"2.0-4","Title":"Software for constructing genetic maps in experimental crosses:\nfull-sib, RILs, F2 and backcrosses","Description":"Analysis of molecular marker data from model (backcrosses,\n F2 and recombinant inbred lines) and non-model systems (i.e.\n outcrossing species). For the latter, it allows statistical\n analysis by simultaneously estimating linkage and linkage\n phases (genetic map construction). All analyses are based on\n multipoint approaches using hidden Markov models.","Published":"2013-09-09","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"OneR","Version":"2.2","Title":"One Rule Machine Learning Classification Algorithm with\nEnhancements","Description":"Implements the One Rule (OneR) Machine Learning classification algorithm (Holte, R.C. (1993) ) with enhancements for sophisticated handling of numeric data and missing values together with extensive diagnostic functions. 
It is useful as a baseline for machine learning models and the rules are often helpful heuristics.","Published":"2017-05-05","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ONETr","Version":"1.0.3","Title":"Efficient Authenticated Interaction with the O*NET API","Description":"Provides a series of functions designed to enable users to easily search and interact with occupational data from the O*NET API . The package produces parsed and listed XML data for custom interactions, or pre-packaged functions for easy extraction of specific data (e.g., Knowledge, Skills, Abilities, Work Styles, etc.).","Published":"2015-08-25","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"OneTwoSamples","Version":"1.0-3","Title":"Deal with one and two (normal) samples","Description":"In this package, we introduce an R function\n one_two_sample() which can deal with one and two (normal)\n samples. For one normal sample x, the function reports\n descriptive statistics, plot, interval estimation and test of\n hypothesis of x. For two normal samples x and y, the function\n reports descriptive statistics, plot, interval estimation and\n test of hypothesis of x and y, respectively. It also reports\n interval estimation and test of hypothesis of mu1-mu2 (the\n difference of the means of x and y) and sigma1^2 / sigma2^2\n (the ratio of the variances of x and y), tests whether x and y\n are from the same population, and finds the correlation coefficient\n of x and y if x and y have the same length.","Published":"2013-05-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"onewaytests","Version":"1.4","Title":"One-Way Tests in Independent Groups Designs","Description":"Performs one-way tests in independent groups designs, pairwise comparisons, and graphical approaches, and assesses variance homogeneity and normality of each group via tests and plots. 
","Published":"2017-04-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"onion","Version":"1.2-4","Title":"octonions and quaternions","Description":"\n A collection of routines to manipulate and visualize quaternions and\n octonions.","Published":"2011-12-27","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"onlinePCA","Version":"1.3.1","Title":"Online Principal Component Analysis","Description":"Online PCA for multivariate and functional data using perturbation methods, low-rank incremental methods, and stochastic optimization methods. ","Published":"2016-09-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"onls","Version":"0.1-1","Title":"Orthogonal Nonlinear Least-Squares Regression","Description":"Orthogonal Nonlinear Least-Squares Regression using Levenberg-Marquardt Minimization.","Published":"2015-09-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ontologyIndex","Version":"2.4","Title":"Functions for Reading Ontologies into R","Description":"Functions for reading ontologies into R as lists and manipulating sets of ontological terms - 'ontologyX: A suite of R packages for working with ontological data', Greene et al 2017 .","Published":"2017-02-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ontologyPlot","Version":"1.4","Title":"Functions for Visualising Sets of Ontological Terms","Description":"Functions for visualising sets of ontological terms using the 'graphviz' layout system.","Published":"2016-10-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ontologySimilarity","Version":"2.2","Title":"Functions for Calculating Ontological Similarities","Description":"Functions for calculating semantic similarities between ontological terms or sets of ontological terms based on term information content and assessing statistical significance of similarity in the context of a collection of sets of ontological terms.","Published":"2016-10-28","License":"GPL (>= 
2)","snapshot_date":"2017-06-23"} {"Package":"OOBCurve","Version":"0.1","Title":"Out of Bag Learning Curve","Description":"Provides a function to calculate the out-of-bag learning curve for random forests for any measure that is available in the 'mlr' package. Supported random forest packages are 'randomForest' and 'ranger' and trained models of these packages with the train function of 'mlr'.","Published":"2017-02-24","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"OOmisc","Version":"1.2","Title":"Ozgur-Ozlem Miscellaneous","Description":"Includes miscellaneous functions.","Published":"2013-02-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"OOR","Version":"0.1.1","Title":"Optimistic Optimization in R","Description":"Implementation of optimistic optimization methods for global optimization of deterministic or stochastic functions. The algorithms feature guarantees of the convergence to a global optimum. They require minimal assumptions on the (only local) smoothness, where the smoothness parameter does not need to be known. They are expected to be useful for the most difficult functions when we have no information on smoothness and the gradients are unknown or do not exist. Due to the weak assumptions, however, they can be mostly effective only in small dimensions, for example, for hyperparameter tuning.","Published":"2017-02-03","License":"LGPL","snapshot_date":"2017-06-23"} {"Package":"OpasnetUtils","Version":"1.2.0","Title":"Opasnet Modelling Environment Utility Functions","Description":"Contains tools for open assessment and modelling in Opasnet,\n a wiki-based web site and workspace for societal decision making\n (see for more information).\n The core principle of the workspace is maximal openness and modularity.\n Variables are defined on public wiki pages using wiki inputs/tables,\n databases and R code. This package provides the functionality to download and use these\n variables. 
It also contains health impact assessment tools such as\n spatial methods for exposure modelling.","Published":"2015-06-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"OPDOE","Version":"1.0-9","Title":"OPtimal Design Of Experiments","Description":"Experimental Design","Published":"2014-01-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"opefimor","Version":"1.2","Title":"Option Pricing and Estimation of Financial Models in R","Description":"Companion package to the book Option Pricing and\n Estimation of Financial Models in R, Wiley, Chichester. ISBN:\n 978-0-470-74584-7.","Published":"2015-08-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"openadds","Version":"0.2.0","Title":"Client to Access 'Openaddresses' Data","Description":"'Openaddresses' () client. Search,\n fetch data, and combine 'datasets'. Outputs are easy to visualize\n with base plots, 'ggplot2', or 'leaflet'.","Published":"2017-01-03","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"openair","Version":"2.1-0","Title":"Tools for the Analysis of Air Pollution Data","Description":"Tools to analyse, interpret and understand air\n pollution data. Data are typically hourly time series\n and both monitoring data and dispersion model output\n can be analysed. 
Many functions can also be applied to\n other data, including meteorological and traffic data.","Published":"2017-04-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"opencage","Version":"0.1.2","Title":"Interface to the OpenCage API","Description":"Tool for accessing the OpenCage API, which provides forward\n geocoding (from placename to longitude and latitude) and reverse geocoding (from\n longitude and latitude to placename).","Published":"2017-05-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"OpenCL","Version":"0.1-3","Title":"Interface allowing R to use OpenCL","Description":"This package provides an interface to OpenCL, allowing R\n to leverage computing power of GPUs and other HPC accelerator\n devices.","Published":"2012-05-26","License":"BSD","snapshot_date":"2017-06-23"} {"Package":"opencpu","Version":"2.0.2","Title":"Producing and Reproducing Results","Description":"A system for embedded scientific computing and reproducible research with R.\n The OpenCPU server exposes a simple but powerful HTTP api for RPC and data interchange\n with R. This provides a reliable and scalable foundation for statistical services or \n building R web applications. The OpenCPU server runs either as a single-user development\n server within the interactive R session, or as a multi-user Linux stack based on Apache2. \n The entire system is fully open source and permissively licensed. The OpenCPU website\n has detailed documentation and example apps.","Published":"2017-06-17","License":"Apache License 2.0","snapshot_date":"2017-06-23"} {"Package":"openEBGM","Version":"0.1.0","Title":"EBGM Scores for Mining Large Contingency Tables","Description":"An implementation of DuMouchel's (1999) \n Bayesian data mining method for the\n market basket problem. 
Calculates Empirical Bayes Geometric Mean (EBGM) and\n quantile scores from the posterior distribution using the Gamma-Poisson\n Shrinker (GPS) model to find unusually large cell counts in large, sparse\n contingency tables. Can be used to find unusually high reporting rates of\n adverse events associated with products. In general, can be used to mine any\n database where the co-occurrence of two variables or items is of interest.\n Also calculates relative and proportional reporting ratios. Builds on the work\n of the 'PhViD' package, from which much of the code is derived. Some of the\n added features include stratification to adjust for confounding variables and\n data squashing to improve computational efficiency.","Published":"2017-05-05","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"OpenImageR","Version":"1.0.6","Title":"An Image Processing Toolkit","Description":"Incorporates functions for image preprocessing, filtering and image recognition. The package takes advantage of 'RcppArmadillo' to speed up computationally intensive functions. The histogram of oriented gradients descriptor is a modification of the 'findHOGFeatures' function of the 'SimpleCV' computer vision platform and the average_hash(), dhash() and phash() functions are based on the 'ImageHash' python library.","Published":"2017-05-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"openintro","Version":"1.4","Title":"OpenIntro data sets and supplemental functions","Description":"This package is a supplement to OpenIntro Statistics,\n which is a free textbook available at openintro.org (at cost\n paperbacks are also available for under $10 on Amazon). The\n package contains data sets used in the textbook along with\n custom plotting functions for reproducing book figures. 
Note\n that many functions and examples include color transparency.\n Some plotting elements may not show up properly (or at all) in\n some Windows versions.","Published":"2012-09-01","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"OpenML","Version":"1.4","Title":"Exploring Machine Learning Better, Together","Description":"'OpenML.org' is an online machine learning platform where \n researchers can easily download and upload data sets, share machine learning \n tasks and experiments and organize them online to work and collaborate more \n effectively.\n We provide an R interface to the OpenML REST API in order to download and \n upload data sets, tasks, flows and runs, see \n for more information.","Published":"2017-06-22","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"OpenMPController","Version":"0.1-2","Title":"Control number of OpenMP threads dynamically","Description":"The OpenMPController package provides a function\n 'omp_set_num_threads()' to set the number of OpenMP threads to\n be used. This may be useful, for example, when linking against\n a vendor optimised BLAS/LAPACK library (e.g. 
the AMD Core Math\n Library), since the defaults used by those libraries may not be\n highly performant.","Published":"2013-05-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"OpenMx","Version":"2.7.12","Title":"Extended Structural Equation Modelling","Description":"Facilitates treatment of statistical model specifications\n as things that can be generated and manipulated programmatically.\n Structural equation models may be specified with reticular action model matrices or paths,\n linear structural relations matrices or paths, or\n directly in matrix algebra.\n Fit functions include full information maximum likelihood,\n maximum likelihood, and weighted least squares.\n Example models include confirmatory factor, multiple group, mixture\n distribution, categorical threshold, modern test theory, differential\n equations, state space, and many others.","Published":"2017-06-17","License":"Apache License (== 2.0)","snapshot_date":"2017-06-23"} {"Package":"openNLP","Version":"0.2-6","Title":"Apache OpenNLP Tools Interface","Description":"An interface to the Apache OpenNLP tools (version 1.5.3).\n The Apache OpenNLP library is a machine learning based toolkit for the\n processing of natural language text written in Java.\n It supports the most common NLP tasks, such as tokenization, sentence\n segmentation, part-of-speech tagging, named entity extraction, chunking,\n parsing, and coreference resolution.\n See for more information.","Published":"2016-02-18","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"openNLPdata","Version":"1.5.3-2","Title":"Apache OpenNLP Jars and Basic English Language Models","Description":"Apache OpenNLP jars and basic English language models.","Published":"2015-06-24","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"OpenRepGrid","Version":"0.1.10","Title":"Tools to Analyse Repertory Grid Data","Description":"A set of functions to analyze repertory grid data.","Published":"2017-02-24","License":"GPL (>= 
2)","snapshot_date":"2017-06-23"} {"Package":"openssl","Version":"0.9.6","Title":"Toolkit for Encryption, Signatures and Certificates Based on\nOpenSSL","Description":"Bindings to OpenSSL libssl and libcrypto, plus custom SSH pubkey parsers.\n Supports RSA, DSA and EC curves P-256, P-384 and P-521. Cryptographic signatures\n can either be created and verified manually or via x509 certificates. AES can be\n used in cbc, ctr or gcm mode for symmetric encryption; RSA for asymmetric (public\n key) encryption or EC for Diffie Hellman. High-level envelope functions combine\n RSA and AES for encrypting arbitrary sized data. Other utilities include key\n generators, hash functions (md5, sha1, sha256, etc), base64 encoder, a secure\n random number generator, and 'bignum' math methods for manually performing\n crypto calculations on large multibyte integers.","Published":"2016-12-31","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"OpenStreetMap","Version":"0.3.3","Title":"Access to Open Street Map Raster Images","Description":"Accesses high resolution raster maps using the OpenStreetMap\n protocol. Dozens of road, satellite, and topographic map servers are directly\n supported, including Apple, Mapnik, Bing, and stamen. Additionally raster maps\n may be constructed using custom tile servers. Maps can be\n plotted using either base graphics, or ggplot2. This package is not affiliated\n with the OpenStreetMap.org mapping project.","Published":"2016-09-09","License":"GPL-2 | file LICENCE","snapshot_date":"2017-06-23"} {"Package":"opentraj","Version":"1.0","Title":"Tools for Creating and Analysing Air Trajectory Data","Description":"opentraj uses the Hybrid Single Particle Lagrangian Integrated Trajectory Model (HYSPLIT) for computing simple air parcel trajectories. 
The functions in this package allow users to run HYSPLIT for trajectory calculations, as well as get its results, directly from R without using any GUI interface.","Published":"2014-09-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"openVA","Version":"1.0.3","Title":"Automated Method for Verbal Autopsy","Description":"Implements multiple existing open-source algorithms for coding cause of death from verbal autopsies. It also provides tools for data manipulation tasks commonly used in Verbal Autopsy analysis and implements easy graphical visualization of individual and population level statistics.","Published":"2017-01-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"openxlsx","Version":"4.0.17","Title":"Read, Write and Edit XLSX Files","Description":"Simplifies the creation of Excel .xlsx files by providing a high\n level interface to writing, styling and editing worksheets. Through the use of\n 'Rcpp', read/write times are comparable to the 'xlsx' and 'XLConnect' packages\n with the added benefit of removing the dependency on Java.","Published":"2017-03-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"opera","Version":"1.0","Title":"Online Prediction by Expert Aggregation","Description":"Misc methods to form online predictions, for regression-oriented time-series, by combining a finite set of forecasts provided by the user.","Published":"2016-08-17","License":"LGPL","snapshot_date":"2017-06-23"} {"Package":"operator.tools","Version":"1.6.3","Title":"Utilities for Working with R's Operators","Description":"Provides a collection of utilities that allow programming with \n R's operators. Routines allow classifying operators, \n translating to and from an operator and its underlying function, and inverting \n some operators (e.g. comparison operators), etc. All methods can be extended\n to custom infix operators. 
","Published":"2017-02-28","License":"GPL-2 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"operators","Version":"0.1-8","Title":"Additional Binary Operators","Description":"A set of binary operators for common tasks such as regex\n manipulation.","Published":"2015-07-11","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"OPI","Version":"2.5","Title":"Open Perimetry Interface","Description":"Implementation of the Open Perimetry Interface (OPI) for simulating and controlling visual field machines using R. The OPI is a standard for interfacing with visual field testing machines (perimeters). It specifies basic functions that allow many visual field tests to be constructed. As of February 2016 it is fully implemented on the Octopus 600 and Octopus 900 and partially on the Heidelberg Edge Perimeter, the Kowa AP 7000 and the CrewT imo. It also has a cousin: the R package visualFields, which has tools for analysing and manipulating visual field data.","Published":"2016-07-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"Opportunistic","Version":"1.1","Title":"Broadcasts, Transmissions and Receptions in an Opportunistic\nNetwork","Description":"Computes the expectation of the number of broadcasts, transmissions and receptions considering an Opportunistic transport model. It provides theoretical results and also estimated values based on Monte Carlo simulations.","Published":"2017-05-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ops","Version":"1.0","Title":"Optimal Power Space Transformation","Description":"Comparison of data by Pearson product-moment correlation\n coefficients is prone to outliers. The problem can be\n alleviated by normalizing data with outliers before computing\n the Pearson correlation coefficient. 
The sample provides such\n normalization by optimal power space transformation.","Published":"2012-02-20","License":"Apache License 2.0","snapshot_date":"2017-06-23"} {"Package":"optAUC","Version":"1.0","Title":"Optimal Combinations of Diagnostic Tests Based on AUC","Description":"Searches for optimal linear combination of multiple\n diagnostic tests (markers) that maximizes the area under the\n receiver operating characteristic curve (AUC); performs an\n approximated cross-validation for estimating the AUC associated\n with the estimated coefficients.","Published":"2013-04-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"optband","Version":"0.2.1","Title":"'surv' Object Confidence Bands Optimized by Area","Description":"Given a certain coverage level, obtains simultaneous confidence\n bands for the survival and cumulative hazard functions such that the area\n between is minimized. Produces an approximate solution based on local time\n arguments.","Published":"2017-05-10","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"optbdmaeAT","Version":"1.0.1","Title":"Optimal Block Designs for Two-Colour cDNA Microarray Experiments","Description":"Computes A-, MV-, D- and E-optimal or near-optimal block designs for two-colour cDNA microarray experiments using the linear fixed effects and mixed effects models where the interest is in a comparison of all possible elementary treatment contrasts. The algorithms used in this package are based on the treatment exchange and array exchange algorithms of Debusho, Gemechu and Haines (2016, unpublished). 
The package also provides an optional method of using the graphical user interface (GUI) R package tcltk to ensure that it is user friendly.","Published":"2017-02-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"optBiomarker","Version":"1.0-27","Title":"Estimation of optimal number of biomarkers for two-group\nmicroarray based classifications at a given error tolerance\nlevel for various classification rules","Description":"Estimates optimal number of biomarkers for two-group\n classification based on microarray data","Published":"2013-07-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"optCluster","Version":"1.1.1","Title":"Determine Optimal Clustering Algorithm and Number of Clusters","Description":"Cluster analysis using statistical and biological\n\tvalidation measures for both continuous and count data.","Published":"2017-03-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"optDesignSlopeInt","Version":"1.1","Title":"Optimal Designs for Estimating the Slope Divided by the\nIntercept","Description":"Compute optimal experimental designs\n that measure the slope divided by the intercept.","Published":"2016-07-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"optextras","Version":"2016-8.8","Title":"Tools to Support Optimization Possibly with Bounds and Masks","Description":"Tools to assist in safely applying user generated objective and \n derivative function to optimization programs. These are primarily function \n minimization methods with at most bounds and masks on the parameters.\n Provides a way to check the basic computation of objective functions that \n the user provides, along with proposed gradient and Hessian functions, \n as well as to wrap such functions to avoid failures when inadmissible parameters \n are provided. Check bounds and masks. Check scaling or optimality conditions. \n Perform an axial search to seek lower points on the objective function surface. 
\n Includes forward, central and backward gradient approximation codes.","Published":"2016-08-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"OptGS","Version":"1.1.1","Title":"Near-Optimal and Balanced Group-Sequential Designs for Clinical\nTrials with Continuous Outcomes","Description":"Functions to find near-optimal multi-stage designs for continuous outcomes.","Published":"2015-09-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"OptHedging","Version":"1.0","Title":"Estimation of value and hedging strategy of call and put\noptions","Description":"Estimation of value and hedging strategy of call and put options, based on optimal hedging and Monte Carlo method, from Chapter 3 of 'Statistical Methods for Financial Engineering', by Bruno Remillard, CRC Press, (2013).","Published":"2013-10-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"opticut","Version":"0.1-0","Title":"Likelihood Based Optimal Partitioning for Indicator Species\nAnalysis","Description":"Likelihood based optimal partitioning for indicator\n species analysis. Finding the best binary partition for each species\n based on model selection, possibly controlling for modifying/confounding\n variables as described in Kemencei et al. (2014) .\n The package also implements various measures of uncertainty based on\n binary partitions, optimal multinomial partitioning, and exploratory\n suitability indices, with native support for parallel computations.","Published":"2016-12-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"optifunset","Version":"1.0","Title":"Set Options if Unset","Description":"A single function 'options.ifunset(...)' is contained herewith, which allows the user to set a global option ONLY if it is not already set. 
By this token, for package maintainers this function can be used in preference to the standard 'options(...)' function, making provision for THEIR end user to place 'options(...)' directives within their '.Rprofile' file, which will not be overridden at the point when a package is loaded.","Published":"2015-05-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"optigrab","Version":"0.7.3","Title":"Command-Line Parsing for an R World","Description":"Parse options from the command-line using a simple, clean syntax. \n It requires little or no specification and supports short and long options,\n GNU-, Java- or Microsoft- style syntaxes, verb commands and more. ","Published":"2016-12-05","License":"GPL (>= 2) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"optim.functions","Version":"0.1","Title":"Standard Benchmark Optimization Functions","Description":"A set of standard benchmark optimization functions for R and\n a common interface to sample them.","Published":"2017-03-30","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"OptimalCutpoints","Version":"1.1-3","Title":"Computing optimal cutpoints in diagnostic tests","Description":"This package enables users to compute one or more optimal cutpoints for diagnostic tests or continuous markers. Various approaches for selecting optimal cutoffs have been implemented, including methods based on cost-benefit analysis and diagnostic test accuracy measures (Sensitivity/Specificity, Predictive Values and Diagnostic Likelihood Ratios). Numerical and graphical output for all methods is easily obtained.","Published":"2014-11-02","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"OptimalDesign","Version":"0.2","Title":"Algorithms for D-, A-, and IV-Optimal Designs","Description":"Algorithms for D-, A- and IV-optimal designs of experiments. Some of the functions in this package require the 'gurobi' software and its accompanying R package. 
For their installation, please follow the instructions at and the file gurobi_inst.txt, respectively. ","Published":"2016-11-07","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"OptimaRegion","Version":"0.2","Title":"Confidence Regions for Optima","Description":"Computes confidence regions on the location of\n response surface optima.","Published":"2016-05-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"optimbase","Version":"1.0-9","Title":"R port of the Scilab optimbase module","Description":"Provides a set of commands to manage an abstract\n optimization method. The goal is to provide a building block\n for a large class of specialized optimization methods. This\n package manages: the number of variables, the minimum and\n maximum bounds, the number of non linear inequality\n constraints, the cost function, the logging system, various\n termination criteria, etc...","Published":"2014-03-02","License":"CeCILL-2","snapshot_date":"2017-06-23"} {"Package":"optimization","Version":"1.0-4","Title":"Flexible Optimization","Description":"Flexible optimizer with numerous input specifications. It allows a very detailed parameterization and is therefore useful for specific and complex loss functions, like functions with discrete parameter space. Also visualization tools for validation and analysis of the convergence are included.","Published":"2017-05-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"optimr","Version":"2016-8.16","Title":"A Replacement and Extension of the 'optim' Function","Description":"Provides a test of replacement and extension of the optim()\n function to unify and streamline optimization capabilities in R\n for smooth, possibly box constrained functions of several or\n many parameters. 
This version has a reduced set of methods and is\n intended to be on CRAN.","Published":"2016-08-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"optimsimplex","Version":"1.0-5","Title":"R port of the Scilab optimsimplex module","Description":"Provides a building block for optimization algorithms\n based on a simplex. The optimsimplex package may be used in the\n following optimization methods: the simplex method of Spendley\n et al., the method of Nelder and Mead, Box's algorithm for\n constrained optimization, the multi-dimensional search by\n Torczon, etc...","Published":"2014-02-02","License":"CeCILL-2","snapshot_date":"2017-06-23"} {"Package":"optimus","Version":"0.1.0","Title":"Model Based Diagnostics for Multivariate Cluster Analysis","Description":"Assessment and diagnostics for comparing competing\n clustering solutions, using predictive models. The main intended\n use is for comparing clustering/classification solutions of\n ecological data (e.g. presence/absence, counts, ordinal scores) to\n 1) find an optimal partitioning solution, 2) identify\n characteristic species and 3) refine a classification by merging\n clusters that increase predictive performance. However, in a more\n general sense, this package can do the above for any set of\n clustering solutions for i observations of j variables.","Published":"2017-03-24","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"optimx","Version":"2013.8.7","Title":"A Replacement and Extension of the optim() Function","Description":"Provides a replacement and extension of the optim()\n function to unify and streamline optimization capabilities in R\n for smooth, possibly box constrained functions of several or\n many parameters. 
This is the CRAN version of the package.","Published":"2014-11-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"OptInterim","Version":"3.0.1","Title":"Optimal Two and Three Stage Designs for Single-Arm and Two-Arm\nRandomized Controlled Trials with a Long-Term Binary Endpoint","Description":"Optimal two and three stage designs monitoring\n time-to-event endpoints at a specified timepoint","Published":"2012-12-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"OptionPricing","Version":"0.1","Title":"Option Pricing with Efficient Simulation Algorithms","Description":"Efficient Monte Carlo Algorithms for the price and the sensitivities of Asian and European Options under Geometric Brownian Motion.","Published":"2014-11-08","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"optiRum","Version":"0.37.3","Title":"Financial Functions & More","Description":"This fills the gaps credit analysts and loan modellers at\n Optimum Credit identify in the existing R code body.\n It allows for the production of documentation with less coding,\n replicates a number of Microsoft Excel functions useful for\n modelling loans (without rounding), and other helpful functions\n for producing charts and tables. It also has some additional scales for\n use, including a GBP scale.","Published":"2015-12-31","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"optiscale","Version":"1.1","Title":"Optimal scaling","Description":"Tools for performing an optimal scaling transformation on a data\n vector","Published":"2014-08-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"optiSel","Version":"0.9.1","Title":"Optimum Contribution Selection and Population Genetics","Description":"A framework for the optimization of breeding programs via optimum contribution selection and mate allocation. 
An easy to use set of function for computation of optimum contributions of selection candidates, and of the population genetic parameters to be optimized. These parameters can be estimated using pedigree or genotype information, and include kinships, kinships at native haplotype segments, and breed composition of crossbred individuals. They are suitable for managing genetic diversity, removing introgressed genetic material, and accelerating genetic gain. Additionally, functions are provided for computing genetic contributions from ancestors, inbreeding coefficients, the native effective size, the native genome equivalent, pedigree completeness, and for preparing and plotting pedigrees. ","Published":"2017-05-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"optismixture","Version":"0.1","Title":"Optimal Mixture Weights in Multiple Importance Sampling","Description":"Code for optimal mixture weights in importance sampling. Workhorse\n functions penoptpersp() and penoptpersp.alpha.only() minimize estimated\n variances with and without control variates respectively. It can be used in\n adaptive mixture importance sampling, for example, function batch.estimation() does\n two stages, a pilot estimate of mixing alpha and a following importance\n sampling.","Published":"2015-08-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"optmatch","Version":"0.9-7","Title":"Functions for Optimal Matching","Description":"Distance based bipartite matching using the RELAX-IV minimum cost flow solver,\n oriented to matching of treatment and control groups in observational\n studies. 
Routines are provided to generate distances from generalised linear models (propensity\n score matching), formulas giving variables on which to limit matched distances, stratified or\n exact matching directives, or calipers, alone or in combination.","Published":"2016-12-30","License":"file LICENSE","snapshot_date":"2017-06-23"} {"Package":"optparse","Version":"1.3.2","Title":"Command Line Option Parser","Description":"A command line parser inspired by Python's 'optparse' library to\n be used with Rscript to write \"#!\" shebang scripts that accept short and\n long flag/options.","Published":"2015-10-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"optpart","Version":"2.3-0","Title":"Optimal Partitioning of Similarity Relations","Description":"Contains a set of algorithms for creating\n partitions and coverings of objects largely based on operations\n on (dis)similarity relations (or matrices). There are several\n iterative re-assignment algorithms optimizing different\n goodness-of-clustering criteria. 
In addition, there are\n covering algorithms 'clique' which derives maximal cliques, and\n 'maxpact' which creates a covering of maximally compact sets.\n Graphical analyses and conversion routines are also included.","Published":"2016-09-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"optR","Version":"1.2.5","Title":"Optimization Toolbox for Solving Linear Systems","Description":"Solves linear systems of form Ax=b via Gauss elimination, \n LU decomposition, Gauss-Seidel, Conjugate Gradient Method (CGM) and Cholesky methods.","Published":"2016-11-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"optrcdmaeAT","Version":"1.0.0","Title":"Optimal Row-Column Designs for Two-Colour cDNA Microarray\nExperiments","Description":"Computes A-, MV-, D- and E-optimal or near-optimal row-column designs for two-colour cDNA microarray experiments using the linear fixed effects and mixed effects models where the interest is in a comparison of all pairwise treatment contrasts. The algorithms used in this package are based on the array exchange and treatment exchange algorithms adopted from Debusho, Gemechu and Haines (2016, unpublished) algorithms after adjusting for the row-column designs setup. The package also provides an optional method of using the graphical user interface (GUI) R package tcltk to ensure that it is user friendly.","Published":"2017-04-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"optrees","Version":"1.0","Title":"Optimal Trees in Weighted Graphs","Description":"Finds optimal trees in weighted graphs. 
In\n particular, this package provides solving tools for minimum cost spanning\n tree problems, minimum cost arborescence problems, shortest path tree\n problems and minimum cut tree problem.","Published":"2014-09-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"opusminer","Version":"0.1-0","Title":"OPUS Miner Algorithm for Filtered Top-k Association Discovery","Description":"Provides a simple R interface to the OPUS Miner algorithm (implemented in C++) for finding the top-k productive, non-redundant itemsets from transaction data. The OPUS Miner algorithm uses the OPUS search algorithm to efficiently discover the key associations in transaction data, in the form of self-sufficient itemsets, using either leverage or lift. See for more information in relation to the OPUS Miner algorithm.","Published":"2017-02-16","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ora","Version":"2.0-1","Title":"Convenient Tools for Working with Oracle Databases","Description":"Easy-to-use functions to explore Oracle databases and import data\n into R. 
User interface for the ROracle package.","Published":"2014-04-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"orca","Version":"1.1-1","Title":"Computation of Graphlet Orbit Counts in Sparse Graphs","Description":"Implements orbit counting using a fast combinatorial approach.\n\tCounts orbits of nodes and edges from edge matrix or data frame, or a\n\tgraph object from the graph package.","Published":"2016-07-28","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"ORCI","Version":"1.1","Title":"Several confidence intervals for the odds ratio","Description":"Computes various confidence intervals for the odds ratio of two independent binomial proportions.","Published":"2014-06-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"orclus","Version":"0.2-5","Title":"ORCLUS subspace clustering","Description":"Functions to perform subspace clustering and\n classification.","Published":"2013-02-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ORCME","Version":"2.0.2","Title":"Order Restricted Clustering for Microarray Experiments","Description":"Provides clustering of genes with similar \n dose response (or time course) profiles. It implements the method \n described by Lin et al. (2012).","Published":"2015-07-31","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"orcutt","Version":"2.1","Title":"Estimate Procedure in Case of First Order Autocorrelation","Description":"Solve first order autocorrelation problems using an iterative method. This procedure estimates both autocorrelation and beta coefficients recursively until we reach the convergence (8th decimal). 
The residuals are computed after estimating Beta using the EGLS approach, and Rho is estimated using the previous residuals.","Published":"2017-04-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ordBTL","Version":"0.8","Title":"Modelling comparison data with ordinal response","Description":"This package extends the Bradley-Terry-Luce model for fitting pair\n comparison models with an ordinal response. It is also possible to\n incorporate an order effect, or, equivalently, an effect for the home\n advantage.","Published":"2014-05-28","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ordcrm","Version":"1.0.0","Title":"Likelihood-Based Continual Reassessment Method (CRM) Dose\nFinding Designs","Description":"Provides the setup and calculations needed\n to run a likelihood-based continual reassessment method (CRM)\n dose finding trial and performs simulations to assess design\n performance under various scenarios. 3 dose finding designs\n are included in this package: ordinal proportional odds model\n (POM) CRM, ordinal continuation ratio (CR) model CRM, and the\n binary 2-parameter logistic model CRM.\n These functions allow customization of design characteristics\n to vary sample size, cohort sizes, target dose-limiting\n toxicity (DLT) rates, discrete or continuous dose levels,\n combining ordinal grades 0 and 1 into one category, and\n incorporate safety and/or stopping rules.\n For POM and CR model designs, ordinal toxicity grades are\n specified by common terminology criteria for adverse events\n (CTCAE) version 4.0.\n Function 'pseudodata' creates the necessary starting models\n for these 3 designs, and function 'nextdose' estimates the\n next dose to test in a cohort of patients for a target DLT\n rate.\n We also provide the function 'crmsimulations' to assess the\n performance of these 3 dose finding designs under various\n scenarios.","Published":"2016-11-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} 
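The iterative scheme described in the 'orcutt' entry above (fit Beta by least squares, estimate Rho from the lagged residuals, quasi-difference the data, refit, repeat until Rho converges on the 8th decimal) can be sketched outside R as well. This is a minimal Python illustration under stated assumptions (a single regressor plus intercept, plain least squares at each step); it is not the package's implementation:

```python
import numpy as np

def cochrane_orcutt(x, y, tol=1e-8, max_iter=100):
    """Iterative estimation for y = a + b*x + u, with u_t = rho*u_{t-1} + e_t."""
    X = np.column_stack([np.ones(len(y)), x])      # intercept + regressor
    beta = np.linalg.lstsq(X, y, rcond=None)[0]    # initial OLS estimate
    rho = 0.0
    for _ in range(max_iter):
        resid = y - X @ beta
        # AR(1) coefficient of the residuals
        rho_new = (resid[:-1] @ resid[1:]) / (resid[:-1] @ resid[:-1])
        # EGLS step: regress the quasi-differenced data
        ys = y[1:] - rho_new * y[:-1]
        Xs = X[1:] - rho_new * X[:-1]
        beta = np.linalg.lstsq(Xs, ys, rcond=None)[0]
        if abs(rho_new - rho) < tol:               # Rho has converged
            break
        rho = rho_new
    return beta, rho_new
```

Note that the quasi-differenced intercept column equals the constant (1 - rho), so the coefficient fitted for it is already the intercept on the original scale and needs no rescaling.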
{"Package":"ordDisp","Version":"1.0.1","Title":"Separating Location and Dispersion in Ordinal Regression Models","Description":"Estimate location-shift models or rating-scale models accounting for response styles (RSRS) for the regression analysis of ordinal responses.","Published":"2016-10-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"orddom","Version":"3.1","Title":"Ordinal Dominance Statistics","Description":"Computes ordinal statistics and effect sizes as an\n alternative to mean comparison: Cliff's delta or success rate\n difference (SRD), Vargha and Delaney's A or the Area Under a\n Receiver Operating Characteristic Curve (AUC), the discrete\n type of McGraw & Wong's Common Language Effect Size (CLES) or\n Grissom & Kim's Probability of Superiority (PS), and the Number\n needed to treat (NNT) effect size. Moreover, comparisons to\n Cohen's d are offered based on Huberty & Lowman's Percentage of\n Group (Non-)Overlap considerations.","Published":"2013-02-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ORDER2PARENT","Version":"1.0","Title":"Estimate parent distributions with data of several order\nstatistics","Description":"This package uses B-spline based nonparametric smooth\n estimators to estimate parent distributions given observations\n on multiple order statistics.","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"orderbook","Version":"1.03","Title":"Orderbook visualization/Charting software","Description":"Functions for visualizing and retrieving data for the\n state of an orderbook at a particular period in time.","Published":"2013-04-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"orderedLasso","Version":"1.7","Title":"Ordered Lasso and Time-lag Sparse Regression","Description":"Ordered lasso and time-lag sparse regression. Ordered Lasso fits a\n linear model and imposes an order constraint on the coefficients. 
It writes\n the coefficients as positive and negative parts, and requires that the positive\n parts and negative parts be non-increasing and positive. Time-Lag Lasso\n generalizes the ordered Lasso to a general data matrix with multiple\n predictors. For more details, see Suo, X., Tibshirani, R., (2014) 'An\n Ordered Lasso and Sparse Time-lagged Regression'.","Published":"2014-11-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"orderstats","Version":"0.1.0","Title":"Efficiently Generates Random Order Statistic Variables","Description":"All the methods in this package generate a vector of uniform order statistics using a beta distribution and use an inverse cumulative distribution function for some distribution to give a vector of random order statistic variables for some distribution. This is much more efficient than using a loop since it is directly sampling from the order statistic distribution.","Published":"2017-06-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"OrdFacReg","Version":"1.0.6","Title":"Least Squares, Logistic, and Cox-Regression with Ordered\nPredictors","Description":"In biomedical studies, researchers are often interested in assessing the association between one or more ordinal explanatory variables and an outcome variable, at the same time adjusting for covariates of any type. The outcome variable may be continuous, binary, or represent censored survival times. In the absence of a precise knowledge of the response function, using monotonicity constraints on the ordinal variables improves efficiency in estimating parameters, especially when sample sizes are small. 
This package implements an active set algorithm that efficiently computes such estimators.","Published":"2015-07-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ordiBreadth","Version":"1.0","Title":"Ordinated Diet Breadth","Description":"Calculates ordinated diet breadth with some plotting functions.","Published":"2015-12-04","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"ordinal","Version":"2015.6-28","Title":"Regression Models for Ordinal Data","Description":"Implementation of cumulative link (mixed) models also known\n as ordered regression models, proportional odds models, proportional\n hazards models for grouped survival times and ordered logit/probit/...\n models. Estimation is via maximum likelihood and mixed models are fitted\n with the Laplace approximation and adaptive Gauss-Hermite quadrature.\n Multiple random effect terms are allowed and they may be nested, crossed or\n partially nested/crossed. Restrictions of symmetry and equidistance can be\n imposed on the thresholds (cut-points/intercepts). Standard model\n methods are available (summary, anova, drop-methods, step,\n confint, predict etc.) in addition to profile methods and slice\n methods for visualizing the likelihood function and checking\n convergence.","Published":"2015-06-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ordinalCont","Version":"1.2","Title":"Ordinal Regression Analysis for Continuous Scales","Description":"A regression framework for response variables which are continuous\n self-rating scales such as the Visual Analog Scale (VAS) used in pain\n assessment, or the Linear Analog Self-Assessment (LASA) scales in quality\n of life studies. These scales measure subjects' perception of an intangible\n quantity, and cannot be handled as ratio variables because of their inherent\n non-linearity. We treat them as ordinal variables, measured on a continuous\n scale. 
A function (the g function) connects the scale with an underlying\n continuous latent variable. The link function is the inverse of the CDF of the\n assumed underlying distribution of the latent variable. Currently the logit\n link, which corresponds to a standard logistic distribution, is implemented.","Published":"2017-06-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ordinalForest","Version":"1.0","Title":"Ordinal Forests: Prediction and Class Width Inference with\nOrdinal Target Variables","Description":"Ordinal forests (OF) are a method for ordinal regression with high-dimensional \n and low-dimensional data that is able to predict the values of the ordinal target variable \n for new observations and at the same time estimate the relative widths of the classes of \n the ordinal target variable. Using a (permutation-based) variable importance measure it \n is moreover possible to rank the importances of the covariates.\n OF will be presented in an upcoming technical report by Hornung et al.\n The main functions of the package are: ordfor() (construction of OF), predict.ordfor() \n (prediction of the target variable values of new observations), and plot.ordfor() \n (visualization of the estimated relative widths of the classes of the ordinal target \n variable).","Published":"2017-04-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ordinalgmifs","Version":"1.0.3","Title":"Ordinal Regression for High-Dimensional Data","Description":"Provides a function for fitting cumulative link, adjacent category, forward and backward continuation ratio, and stereotype ordinal response models when the number of parameters exceeds the sample size, using the generalized monotone incremental forward stagewise method.","Published":"2016-11-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"OrdinalLogisticBiplot","Version":"0.4","Title":"Biplot representations of ordinal variables","Description":"Analysis of a matrix of 
polytomous items using Ordinal Logistic Biplots (OLB).\n The OLB procedure extends the binary logistic biplot to ordinal (polytomous) data. \n The individuals are represented as points on a plane and the variables are represented \n as lines rather than vectors as in a classical or binary biplot, specifying the points\n for each of the categories of the variable. \n The set of prediction regions is established by stripes perpendicular to the line \n between the category points, in such a way that the prediction for each individual is given\n by its projection onto the line of the variable.","Published":"2015-01-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ordinalNet","Version":"2.0","Title":"Penalized Ordinal Regression","Description":"Fits ordinal regression models with elastic net penalty by coordinate descent.\n Supported model families include cumulative probability, stopping ratio, continuation ratio,\n and adjacent category. These families are a subset of vector glm's which belong to a model\n class we call the elementwise link multinomial-ordinal (ELMO) class. Each family\n in this class links a vector of covariates to a vector of class probabilities.\n Each of these families has a parallel form, which is appropriate for ordinal response\n data, as well as a nonparallel form that is appropriate for an unordered categorical\n response, or as a more flexible model for ordinal data. The parallel model\n has a single set of coefficients, whereas the nonparallel model has a set of coefficients\n for each response category except the baseline category. It is also possible \n to fit a model with both parallel and nonparallel terms, which we call the semi-parallel \n model. 
The semi-parallel model has the flexibility of the nonparallel model, \n but the elastic net penalty shrinks it toward the parallel model.","Published":"2017-05-08","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"OrdLogReg","Version":"1.1","Title":"Ordinal Logic Regression","Description":"Develops a classification model for ordinal responses based on logic regression.","Published":"2014-09-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"OrdMonReg","Version":"1.0.3","Title":"Compute least squares estimates of one bounded or two ordered\nisotonic regression curves","Description":"We consider the problem of estimating two isotonic regression curves g1* and g2* under the constraint that they are ordered, i.e. g1* <= g2*. Given two sets of n data points y_1, ..., y_n and z_1, ..., z_n that are observed at (the same) deterministic design points x_1, ..., x_n, the estimates are obtained by minimizing the Least Squares criterion L(a, b) = sum_{i=1}^n (y_i - a_i)^2 w1(x_i) + sum_{i=1}^n (z_i - b_i)^2 w2(x_i) over the class of pairs of vectors (a, b) such that a and b are isotonic and a_i <= b_i for all i = 1, ..., n. 
We offer two different approaches to compute the estimates: a projected subgradient algorithm where the projection is calculated using a PAVA as well as Dykstra's cyclical projection algorithm.","Published":"2011-12-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"OrdNor","Version":"2.0","Title":"Concurrent Generation of Ordinal and Normal Data with Given\nCorrelation Matrix and Marginal Distributions","Description":"Implementation of a procedure for generating samples from a mixed distribution of ordinal and normal random variables with pre-specified correlation matrix and marginal distributions.","Published":"2015-11-24","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"ordPens","Version":"0.3-1","Title":"Selection and/or Smoothing of Ordinal Predictors","Description":"Selection and/or smoothing of ordinally scaled independent variables using a group lasso or generalized ridge penalty.","Published":"2015-05-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ore","Version":"1.6.0","Title":"An R Interface to the Onigmo Regular Expression Library","Description":"Provides an alternative to R's built-in functionality for handling\n regular expressions, based on the Onigmo library. Offers first-class\n compiled regex objects, partial matching and function-based substitutions,\n amongst other features.","Published":"2017-04-13","License":"BSD_3_clause + file LICENCE","snapshot_date":"2017-06-23"} {"Package":"ores","Version":"0.2.0","Title":"Connector to the Objective Revision Evaluation Service (ORES)","Description":"A connector to ORES (), an AI project to provide edit scoring for content\n on Wikipedia and other Wikimedia projects. 
This lets a researcher identify\n if edits are likely to be reverted, damaging, or made in good faith.","Published":"2016-12-09","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"OrgMassSpecR","Version":"0.4-6","Title":"Organic Mass Spectrometry","Description":"Organic/biological mass spectrometry data analysis.","Published":"2016-10-19","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"orgR","Version":"0.9.0","Title":"Analyse Text Files Created by Emacs' Org mode","Description":"Provides functionality to process text files created by Emacs' Org mode, and decompose the content into the smallest components (headlines, body, tag, clock entries etc). Emacs is an extensible, customizable text editor and Org mode is for keeping notes, maintaining TODO lists, planning projects. Allows users to analyze org files as data frames in R, e.g., to conveniently group tasks by tag into projects and calculate total working hours. Also provides some help functions like search.parent, gg.pie (visualise working hours in ggplot2) and tree.headlines (visualise headline structure in tree format) to help users manage their complex org files. ","Published":"2014-12-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"orgutils","Version":"0.4-1","Title":"Helper Functions for Org Files","Description":"Helper functions for Org files ():\n a generic function 'toOrg' for transforming R objects into Org\n markup (most useful for data frames; there are also methods for\n Dates/POSIXt) and a function to read Org tables into data frames.","Published":"2017-03-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ORIClust","Version":"1.0-1","Title":"Order-restricted Information Criterion-based Clustering\nAlgorithm","Description":"ORIClust is a user-friendly R-based software package for\n gene clustering. 
Clusters are given by genes matched to\n prespecified profiles across various ordered treatment groups.\n It is particularly useful for analyzing data obtained from\n short time-course or dose-response microarray experiments.","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"orientlib","Version":"0.10.3","Title":"Support for orientation data","Description":"Representations, conversions and display of orientation\n SO(3) data. See the orientlib help topic for details.","Published":"2013-03-22","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"origami","Version":"0.8.0","Title":"Generalized Framework for Cross-Validation","Description":"Provides a general framework for the application of\n cross-validation schemes to particular functions. By allowing arbitrary\n lists of results, origami accommodates a range of cross-validation\n applications.","Published":"2017-06-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"OriGen","Version":"1.4.3","Title":"Fast Spatial Ancestry via Flexible Allele Frequency Surfaces","Description":"Used primarily for estimates of allele frequency surfaces from point estimates. \n It can also place individuals of unknown origin back onto the geographic map with great accuracy. \n Additionally, it can place admixed individuals by estimating contributing fractions at each\n location on a map. Lastly, it can rank SNPs by their ability to differentiate populations. 
\n See \"Fast Spatial Ancestry via Flexible Allele Frequency Surfaces\" (John Michael Ranola, John\n Novembre, Kenneth Lange) in Bioinformatics 2014 for more info.","Published":"2016-01-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"originr","Version":"0.2.0","Title":"Fetch Species Origin Data from the Web","Description":"Get species origin data (whether species is native/invasive) from the\n following sources on the web: Encyclopedia of Life (), Flora\n 'Europaea' (), Global Invasive Species\n Database (), the Native Species Resolver\n (), Integrated Taxonomic\n Information Service (), and Global Register of \n Introduced and Invasive Species ().","Published":"2016-12-07","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"orloca","Version":"4.2","Title":"The package deals with Operations Research LOCational Analysis\nmodels","Description":"This version of the package deals with the min-sum location\n problem, also known as the Fermat--Weber problem. The min-sum location problem\n searches for a point such that the weighted sum of the distances to the\n demand points is minimized.","Published":"2014-06-02","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"orloca.es","Version":"4.1","Title":"Spanish version of orloca package","Description":"Spanish version of the orloca package, which deals with the min-sum\n location problem, also known as the Fermat--Weber problem. 
The min-sum location problem searches for a\n point such that the weighted sum of the distances to the demand\n points is minimized.","Published":"2013-01-11","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"ORMDR","Version":"1.3-2","Title":"ORMDR","Description":"Odds ratio based multifactor-dimensionality reduction\n method for detecting gene-gene interactions","Published":"2012-12-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"oro.dicom","Version":"0.5.0","Title":"Rigorous - DICOM Input / Output","Description":"Data input/output functions for data that conform to the \n Digital Imaging and Communications in Medicine (DICOM) standard, part\n of the Rigorous Analytics bundle.","Published":"2015-04-20","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"oro.nifti","Version":"0.7.2","Title":"Rigorous - NIfTI + ANALYZE + AFNI : Input / Output","Description":"Functions for the input/output and visualization of\n medical imaging data that follow either the ANALYZE, NIfTI or AFNI\n formats. This package is part of the Rigorous Analytics bundle.","Published":"2016-12-31","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"oro.pet","Version":"0.2.3","Title":"Rigorous - Positron Emission Tomography","Description":"Image analysis techniques for positron emission tomography \n (PET) that form part of the Rigorous Analytics bundle. 
","Published":"2014-09-27","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"orQA","Version":"0.2.1","Title":"Order Restricted Assessment Of Microarray Titration Experiments","Description":"Assess repeatability, accuracy and cross-platform\n agreement of titration microarray data based on order\n restricted inference procedures","Published":"2010-12-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"orsifronts","Version":"0.1.1","Title":"Southern Ocean Frontal Distributions (Orsi)","Description":"A data set package with the \"Orsi\" fronts as a\n 'SpatialLinesDataFrame' object. The Orsi et al. (1995) fronts are published at\n the Southern Ocean Atlas Database Page, please see package CITATION for details.","Published":"2015-12-09","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"orsk","Version":"1.0-3","Title":"Converting Odds Ratio to Relative Risk in Cohort Studies with\nPartial Data Information","Description":"Convert the Odds Ratio to the Relative Risk in Cohort Studies with Partial Data Information.","Published":"2017-04-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"orthogonalsplinebasis","Version":"0.1.6","Title":"Orthogonal B-Spline Basis Functions","Description":"Represents the basis functions for B-splines in a simple matrix\n formulation that facilitates taking integrals and derivatives, and\n making the basis functions orthogonal.","Published":"2015-03-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"OrthoPanels","Version":"1.1-0","Title":"Dynamic Panel Models with Orthogonal Reparameterization of Fixed\nEffects","Description":"Implements the orthogonal reparameterization\n approach recommended by Lancaster (2002) to estimate dynamic panel\n models with fixed effects (and optionally: panel specific\n intercepts). 
The approach uses a likelihood-based estimator and\n produces estimates that are asymptotically unbiased as N goes to\n infinity, with a T as low as 2.","Published":"2016-11-11","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"orthopolynom","Version":"1.0-5","Title":"Collection of functions for orthogonal and orthonormal\npolynomials","Description":"A collection of functions to construct sets of orthogonal\n polynomials and their recurrence relations. Additional\n functions are provided to calculate the derivative, integral,\n value and roots of lists of polynomial objects.","Published":"2013-02-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"osc","Version":"1.0.0","Title":"Orthodromic Spatial Clustering","Description":"Allows distance based spatial clustering of georeferenced data by implementing the City Clustering Algorithm - CCA. Multiple versions allow clustering for matrix, raster and single coordinates on a plain (euclidean distance) or on a sphere (great-circle or orthodromic distance).","Published":"2016-05-10","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"OSCV","Version":"1.0","Title":"One-Sided Cross-Validation","Description":"Functions for implementing different versions of the OSCV method in the kernel regression and density estimation frameworks. The package mainly supports the following articles: (1) Savchuk, O.Y., Hart, J.D. (2017). Fully robust one-sided cross-validation for regression functions. Computational Statistics, and (2) Savchuk, O.Y. (2017). One-sided cross-validation for nonsmooth density functions, .","Published":"2017-03-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"osd","Version":"0.1","Title":"Orthogonal Signal Deconvolution for Spectra Deconvolution in\nGC-MS and GCxGC-MS Data","Description":"Compound deconvolution for chromatographic data, including gas chromatography - mass spectrometry (GC-MS) and comprehensive gas chromatography - mass spectrometry (GCxGC-MS). 
The package includes functions to perform independent component analysis - orthogonal signal deconvolution (ICA-OSD), independent component regression (ICR), multivariate curve resolution (MCR-ALS) and orthogonal signal deconvolution (OSD) alone.","Published":"2016-03-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"osDesign","Version":"1.7","Title":"Design and analysis of observational studies","Description":"The osDesign package supports the planning of observational studies. Currently, functionality is focused on the two-phase and case-control designs. Functions in this package provide Monte Carlo based evaluation of operating characteristics, such as power, for estimators of the components of a logistic regression model.","Published":"2014-08-06","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"osi","Version":"0.1.0","Title":"Open Source Initiative API Connector","Description":"A connector to the API maintained by the Open Source Initiative , which\n provides machine-readable metadata about a variety of open source software licenses.","Published":"2016-06-21","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"osmar","Version":"1.1-7","Title":"OpenStreetMap and R","Description":"This package provides infrastructure to access\n OpenStreetMap data from different sources, to work with the data\n in a common R manner, and to convert data into available\n infrastructure provided by existing R packages (e.g., into sp and\n igraph objects).","Published":"2013-11-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"osmdata","Version":"0.0.4","Title":"Import 'OpenStreetMap' Data as Simple Features or Spatial\nObjects","Description":"Download and import of 'OpenStreetMap' ('OSM') data as 'sf' or 'sp'\n objects. 
'OSM' data are extracted from the 'Overpass' web server and\n processed with very fast 'C++' routines for return to 'R'.","Published":"2017-06-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"osmplotr","Version":"0.2.3","Title":"Customisable Images of OpenStreetMap Data","Description":"Customisable images of OpenStreetMap (OSM) data and\n data visualisation using OSM objects.","Published":"2016-07-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"OSMscale","Version":"0.5.1","Title":"Add a Scale Bar to 'OpenStreetMap' Plots","Description":"Functionality to handle and project lat-long coordinates, easily download background maps\n and add a correct scale bar to 'OpenStreetMap' plots in any map projection.","Published":"2017-04-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"osrm","Version":"3.0.1","Title":"Interface Between R and the OpenStreetMap-Based Routing Service\nOSRM","Description":"An interface between R and the OSRM API. OSRM is a routing\n service based on OpenStreetMap data. See for more\n information. A public API exists but one can run its own instance. This package\n allows one to compute distances (travel time and kilometric distance) between points,\n as well as travel time matrices.","Published":"2017-04-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"OTE","Version":"1.0","Title":"Optimal Trees Ensembles for Regression, Classification and Class\nMembership Probability Estimation","Description":"Functions for creating ensembles of optimal trees for regression, classification and class membership probability estimation are given. A few trees are selected from an initial set of trees grown by random forest for the ensemble on the basis of their individual and collective performance. Trees are assessed on out-of-bag data and on an independent training data set for individual and collective performance respectively. 
The prediction functions return estimates of the test responses and their class membership probabilities. Unexplained variations, error rates, confusion matrix, Brier scores, etc. are also returned for the test data.","Published":"2015-07-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"otinference","Version":"0.1.0","Title":"Inference for Optimal Transport","Description":"Sample from the limiting distributions of empirical Wasserstein\n distances under the null hypothesis and under the alternative. Perform a \n two-sample test on multivariate data using these limiting distributions and \n binning.","Published":"2017-03-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"otrimle","Version":"0.4","Title":"Robust Model-Based Clustering","Description":"Performs robust cluster analysis allowing for outliers and noise that cannot be fitted by any cluster. The data are modelled by a mixture of Gaussian distributions and a noise component, which is an improper uniform distribution covering the whole Euclidean space. Parameters are estimated by (pseudo) maximum likelihood. This is fitted by an EM-type algorithm. See Coretto and Hennig (2015) , and Coretto and Hennig (2016) .","Published":"2016-11-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"OTRselect","Version":"1.0","Title":"Variable Selection for Optimal Treatment Decision","Description":"A penalized regression framework that can simultaneously estimate the optimal treatment strategy and identify important variables. Appropriate for either censored or uncensored continuous response.","Published":"2016-03-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"OTUtable","Version":"1.1.0","Title":"North Temperate Lakes - Microbial Observatory 16S Time Series\nData and Functions","Description":"Analyses of OTU tables produced by 16S sequencing, as well as example data. It contains the data and scripts used in the paper Linz, et al. 
(2017) \"Bacterial community composition and dynamics spanning five years in freshwater bog lakes\" (Manuscript submitted, preprint available at ).","Published":"2017-04-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ouch","Version":"2.9-2","Title":"Ornstein-Uhlenbeck Models for Phylogenetic Comparative\nHypotheses","Description":"Fit and compare Ornstein-Uhlenbeck models for evolution along a phylogenetic tree.","Published":"2015-07-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"outbreaker","Version":"1.1-7","Title":"Bayesian Reconstruction of Disease Outbreaks by Combining\nEpidemiologic and Genomic Data","Description":"Bayesian reconstruction of disease outbreaks using epidemiological\n and genetic information.","Published":"2015-12-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"outbreaks","Version":"1.3.0","Title":"A Collection of Disease Outbreak Data","Description":"Empirical or simulated disease outbreak data, provided either as\n RData or as text files.","Published":"2017-05-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"OutbreakTools","Version":"0.1-14","Title":"Basic Tools for the Analysis of Disease Outbreaks","Description":"Implements basic tools for storing, handling and visualizing disease outbreak data, as well as simple analysis tools. OutbreakTools defines the new formal class obkData which can be used to store any case-base outbreak data, and provides summaries for these objects, alongside a range of functions for subsetting and data manipulation. It implements a range of graphics for visualising timelines, maps, contact networks and genetic analyses. 
It also includes a simple case-base outbreak simulation tool.","Published":"2015-12-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"OutlierDC","Version":"0.3-0","Title":"Outlier Detection using quantile regression for Censored Data","Description":"This package provides three algorithms to detect outlying observations for censored survival data. ","Published":"2014-03-24","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"OutlierDM","Version":"1.1.1","Title":"Outlier Detection for Multi-replicated High-throughput Data","Description":"Detecting outlying values such as genes, peptides or samples for multi-replicated high-throughput high-dimensional data","Published":"2014-12-24","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"outliers","Version":"0.14","Title":"Tests for outliers","Description":"A collection of some tests commonly used for identifying\n outliers.","Published":"2011-01-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"OutrankingTools","Version":"1.0","Title":"Functions for Solving Multiple-criteria Decision-making Problems","Description":"Functions to process ''outranking'' ELECTRE methods existing in the literature. 
See, e.g.,\n\t\t\t about the outranking approach and the foundations of ELECTRE methods.","Published":"2014-12-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"outreg","Version":"0.2.2","Title":"Regression Table for Publication","Description":"Create regression tables for publication.\n Currently supports 'lm', 'glm', 'survreg', and 'ivreg' outputs.","Published":"2017-03-14","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"OUwie","Version":"1.50","Title":"Analysis of Evolutionary Rates in an OU Framework","Description":"Calculates and compares rate differences of continuous character evolution under Brownian motion and a new set of Ornstein-Uhlenbeck-based Hansen models that allow the strength of selection and stochastic motion to vary across selective regimes.","Published":"2016-06-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"overlap","Version":"0.3.0","Title":"Estimates of Coefficient of Overlapping for Animal Activity\nPatterns","Description":"Provides functions to fit kernel density functions to\n data on temporal activity patterns of animals; estimate coefficients\n of overlapping of densities for two species; and calculate bootstrap\n estimates of confidence intervals.","Published":"2017-05-27","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"overlapping","Version":"1.5.0","Title":"Estimation of Overlapping in Empirical Distributions","Description":"Functions for estimating the overlapping area of two or more empirical distributions.","Published":"2017-06-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"owmr","Version":"0.7.2","Title":"OpenWeatherMap API Wrapper","Description":"Accesses OpenWeatherMap's (owm) API.\n 'owm' itself is a service providing weather data in the past, in the future and now.\n Furthermore, 'owm' serves weather map layers usable in frameworks like 'leaflet'.\n In order to access the API, you need to sign up for an API key. 
There are free and paid plans.\n Besides functions for fetching weather data from 'owm', 'owmr' supplies\n tools to tidy up fetched data (for fast and simple access) and to show it on leaflet maps.","Published":"2017-01-14","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"oXim","Version":"1.2.1","Title":"Oxycline Index from Matrix Echograms","Description":"Tools for oxycline depth calculation from echogram matrices.","Published":"2017-03-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"OxyBS","Version":"1.5","Title":"Processing of Oxy-Bisulfite Microarray Data","Description":"\n Provides utilities for processing of Oxy-Bisulfite microarray data \n (e.g. via the Illumina Infinium platform, ) \n with tandem arrays, one using conventional\n bisulfite conversion, the other using oxy-bisulfite conversion.","Published":"2017-02-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"oz","Version":"1.0-21","Title":"Plot the Australian Coastline and States","Description":"Functions for plotting Australia's coastline and state\n boundaries.","Published":"2016-12-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"P2C2M","Version":"0.7.6","Title":"Posterior Predictive Checks of Coalescent Models","Description":"Conducts posterior predictive checks of coalescent models using gene and species trees generated by 'BEAST' or '*BEAST'. The functionality of P2C2M can be extended via two third-party R packages that are available from the author websites: 'genealogicalSorting' and 'phybase'. 
To use these optional packages, the installation of the Python libraries 'NumPy' (>= 1.9.0) and 'DendroPy' (= 3.12.0) is required.","Published":"2015-06-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"p2distance","Version":"1.0.1","Title":"Welfare's Synthetic Indicator","Description":"The welfare synthetic indicator provides an ideal tool\n for measuring multi-dimensional concepts such as welfare,\n development, living standards, etc. It enables information from\n the various indicators to be aggregated into a single synthetic\n measure.","Published":"2012-08-09","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"p3state.msm","Version":"1.3","Title":"Analyzing survival data","Description":"Analyzing survival data from an illness-death model","Published":"2012-07-04","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"pa","Version":"1.2-1","Title":"Performance Attribution for Equity Portfolios","Description":"A package that provides tools for conducting performance attribution for equity portfolios. The package uses two methods: the Brinson method and a regression-based analysis. ","Published":"2013-12-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"PAactivPAL","Version":"2.0","Title":"Summarize Daily Physical Activity from 'activPAL' Accelerometer\nData","Description":"Summarize physical activity (Mets, Proportion, Total, etc.) in the given time period from accelerometer data. This package has been tested with data exported from 'activPAL'. 'activPAL' is a wearable device for medical and healthcare research applications developed by PAL Technologies, Glasgow, Scotland. A simulated accelerometer sample dataset is provided in this package. 
See https://github.com/YukunZhang/PAactivPAL for more information.","Published":"2016-04-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"PabonLasso","Version":"1.0","Title":"Pabon Lasso Graphs and Comparing Situations of a Unit in Two\nDifferent Times","Description":"Pabon Lasso is a graphical method for monitoring the efficiency of different wards of a hospital or different hospitals. The Pabon Lasso graph is divided into 4 parts, created by drawing the averages of BTR and BOR: the lower-left part is Zone I, the upper-left is Zone II, the upper-right is Zone III and the remaining part is Zone IV.","Published":"2015-09-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PAC","Version":"1.0.8","Title":"Partition-Assisted Clustering and Multiple Alignments of\nNetworks","Description":"Implements Partition-Assisted Clustering and Multiple Alignments of Networks. It 1) utilizes partition-assisted clustering to find robust and accurate clusters and 2) discovers coherent relationships of clusters across multiple samples. It is particularly useful for analyzing single-cell data sets.","Published":"2017-04-30","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"PACBO","Version":"0.1.0","Title":"Clustering Online Datasets","Description":"A function for clustering online datasets. The number\n of cells is data-driven and need not be chosen in advance by the user.\n The method is introduced and fully described in Le Li, Benjamin Guedj and Sebastien Loustau (2016), \"PAC-Bayesian Online Clustering\" (arXiv preprint: ).","Published":"2016-07-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pacbpred","Version":"0.92.2","Title":"PAC-Bayesian Estimation and Prediction in Sparse Additive\nModels","Description":"This package is intended to perform estimation and\n prediction in high-dimensional additive models, using a sparse\n PAC-Bayesian point of view and an MCMC algorithm. 
The method is\n fully described in Guedj and Alquier (2013), 'PAC-Bayesian\n Estimation and Prediction in Sparse Additive Models',\n Electronic Journal of Statistics, 7, 264--291.","Published":"2013-02-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pack","Version":"0.1-1","Title":"Convert values to/from raw vectors","Description":"Functions to easily convert data to binary formats other programs/machines can understand.","Published":"2008-09-08","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"packagedocs","Version":"0.4.0","Title":"Build Website of Package Documentation","Description":"Build a package documentation and function reference site and use it as the package vignette.","Published":"2016-11-04","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"packagetrackr","Version":"0.1.1","Title":"Track R Package Downloads from RStudio's CRAN Mirror","Description":"Allows you to get and cache R package download log files\n from RStudio's CRAN mirror for analyzing package usage.","Published":"2015-09-23","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"packcircles","Version":"0.2.0","Title":"Circle Packing","Description":"Simple algorithms for circle packing.","Published":"2017-04-06","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"packClassic","Version":"0.5.2","Title":"Toy example of Pack Classic","Description":"This package illustrates the book \"Petit Manuel de\n Programmation Orientee Objet sous R\".","Published":"2009-10-11","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"packHV","Version":"2.1","Title":"A few Useful Functions for Statisticians","Description":"Various useful functions for statisticians: describe data, plot Kaplan-Meier curves with numbers of subjects at risk, compare data sets, display spaghetti-plot, build multi-contingency tables...","Published":"2016-11-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} 
{"Package":"packrat","Version":"0.4.8-1","Title":"A Dependency Management System for Projects and their R Package\nDependencies","Description":"Manage the R packages your project depends\n on in an isolated, portable, and reproducible way.","Published":"2016-09-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"packS4","Version":"0.9.3","Title":"Toy Example of S4 Package","Description":"Illustration of the book \"Petit Manuel de Programmation Orientee Objet sous R\". The english version \"A (Not so) Short Introduction to S4\" is on CRAN, 'Contributed documentation'.","Published":"2015-05-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pacman","Version":"0.4.6","Title":"Package Management Tool","Description":"Tools to more conveniently perform tasks associated with\n add-on packages. pacman conveniently wraps library and package\n related functions and names them in an intuitive and consistent\n fashion. It seeks to combine functionality from lower level\n functions which can speed up workflow.","Published":"2017-05-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"paco","Version":"0.3.1","Title":"Procrustes Application to Cophylogenetic Analysis","Description":"Procrustes analyses to infer co-phylogenetic\n matching between pairs of (ultrametric) phylogenetic trees.","Published":"2016-12-05","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"pacotest","Version":"0.2.2","Title":"Testing for Partial Copulas and the Simplifying Assumption in\nVine Copulas","Description":"Routines for two different test types, the Equal Correlation (ECORR) test and the Vectorial Independence (VI) test are provided. The tests can be applied to check whether a conditional copula coincides with its partial copula. Functions to test whether a regular vine copula satisfies the so-called simplifying assumption or to test a single copula within a regular vine copula to be a (j-1)-th order partial copula are available. 
The ECORR test comes with a decision tree approach to allow testing in high-dimensional settings.","Published":"2017-03-21","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"pact","Version":"0.5.0","Title":"Predictive Analysis of Clinical Trials","Description":"A prediction-based approach to the analysis of data from\n randomized clinical trials is implemented. Based on response and\n covariate data from a randomized clinical trial comparing a new\n experimental treatment E versus a control C, the objective is to\n develop and internally validate a model that can identify subjects\n likely to benefit from E rather than C. Currently,\n survival and binary response types are permitted.","Published":"2016-04-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"Pade","Version":"0.1-4","Title":"Padé Approximant Coefficients","Description":"Given a vector of Taylor series coefficients of sufficient length as input, the function returns the numerator and denominator coefficients for the Padé approximant of appropriate order.","Published":"2015-07-29","License":"GPL (>= 2) | BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"padr","Version":"0.3.0","Title":"Quickly Get Datetime Data Ready for Analysis","Description":"Transforms datetime data into a format ready for analysis.\n It offers two functionalities; aggregating data to a higher level interval\n (thicken) and imputing records where observations were absent (pad). It also\n offers a few functions that assist with filling missing values after padding.","Published":"2017-05-15","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"paf","Version":"1.0","Title":"Attributable Fraction Function for Censored Survival Data","Description":"Calculate unadjusted/adjusted attributable fraction function of a set of covariates for a censored survival outcome from a Cox model using the method proposed by Chen, Lin and Zeng (Biometrika 97, 713-726., 2010). 
","Published":"2014-02-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"pafdR","Version":"1.0","Title":"Book Companion for Processing and Analyzing Financial Data with\nR","Description":"Provides access to material from the book \"Processing and Analyzing Financial Data with R\" by Marcelo Perlin (2017) available at .","Published":"2017-04-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"PAFit","Version":"1.0.0.0","Title":"Generative Mechanism Estimation in Temporal Complex Networks","Description":"Statistical methods for estimating preferential attachment and node fitness generative mechanisms in temporal complex networks are provided. ","Published":"2017-06-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"pagenum","Version":"1.0","Title":"Put Page Numbers on Graphics","Description":"A simple way to add page numbers to base/ggplot/lattice graphics.","Published":"2015-12-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"pageviews","Version":"0.3.0","Title":"An API Client for Wikimedia Traffic Data","Description":"Pageview data from the 'Wikimedia' sites, such as\n 'Wikipedia' , from entire projects to per-article\n levels of granularity, through the new RESTful API and data source .","Published":"2016-10-17","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"PAGI","Version":"1.0","Title":"The package can identify the dysregulated KEGG pathways based on\nglobal influence from the internal effect of pathways and\ncrosstalk between pathways","Description":"The package can identify the dysregulated KEGG pathways\n based on global influence from the internal effect of pathways\n and crosstalk between pathways. (1) The PAGI package can\n prioritize the pathways associated with two biological states\n by statistical significance or FDR. 
(2) The PAGI package can\n evaluate the global influence factor (GIF) score in the global\n gene-gene network constructed based on the relationships of\n genes extracted from each pathway in the KEGG database and the\n overlapping genes between pathways.","Published":"2013-11-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PAGWAS","Version":"2.0","Title":"Pathway Analysis Methods for Genomewide Association Data","Description":"Bayesian hierarchical methods for pathway analysis of genomewide association data: Normal/Bayes factors and Sparse Normal/Adaptive lasso. The Frequentist Fisher's product method is included as well.","Published":"2015-12-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"paintmap","Version":"1.0","Title":"Plotting Paintmaps","Description":"Plots matrices of colours as grids of coloured squares - aka heatmaps, \n\tguaranteeing legible row and column names, \n\twithout transformation of values, \n\twithout re-ordering rows or columns,\n\tand without dendrograms.","Published":"2016-08-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pairedCI","Version":"0.5-4","Title":"Confidence intervals for the ratio of locations and for the\nratio of scales of two paired samples","Description":"The package contains two functions: paired.Loc and\n paired.Scale. A parametric and nonparametric confidence\n interval can be computed for the ratio of locations\n (paired.Loc) and the ratio of scales (paired.Scale). 
The\n samples must be paired and expected values must be positive.","Published":"2012-11-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"PairedData","Version":"1.0.1","Title":"Paired Data Analysis","Description":"This package provides many datasets and a set of graphics\n (based on ggplot2), statistics, effect sizes and hypothesis\n tests for analysing paired data with S4 class.","Published":"2013-04-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pairheatmap","Version":"1.0.1","Title":"A tool for comparing heatmaps","Description":"A tool to compare two heatmaps and discover patterns\n within and across groups. In the context of biology, group can\n be defined based on gene ontology.","Published":"2012-02-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pairsD3","Version":"0.1.0","Title":"D3 Scatterplot Matrices","Description":"Creates an interactive scatterplot matrix using the D3 JavaScript library. See for more information on D3.","Published":"2015-04-28","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"PairViz","Version":"1.2.1","Title":"Visualization using Eulerian tours and Hamiltonian\ndecompositions","Description":"Eulerian tours and Hamiltonian decompositions of complete graphs are used to ameliorate order effects in statistical graphics. ","Published":"2011-12-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"pairwise","Version":"0.4.1","Title":"Rasch Model Parameters by Pairwise Algorithm","Description":"Performs the explicit calculation\n -- not estimation! -- of the Rasch item parameters for dichotomous and\n polytomous item responses, using a pairwise comparison approach. 
Person\n parameters (WLE) are calculated according to Warm's weighted likelihood\n approach.","Published":"2017-02-24","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"pairwiseCI","Version":"0.1-25","Title":"Confidence Intervals for Two Sample Comparisons","Description":"Calculation of the parametric and nonparametric confidence intervals\n for the difference or ratio of location parameters, the nonparametric confidence interval\n for the Behrens-Fisher problem and for the difference, ratio and odds-ratio of binomial\n proportions for comparison of independent samples. Common wrapper functions to split \n data sets and apply confidence intervals or tests to these subsets.\n A by-statement allows calculation of CI separately for the levels of further factors. \n CI are not adjusted for multiplicity.","Published":"2015-08-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"PairwiseD","Version":"0.9.62","Title":"Pairing Up Units and Vectors in Panel Data Setting","Description":"Pairs observations according to a chosen formula and facilitates bilateral analysis of panel data. Pairing is possible for observations, as well as for vectors of observations ordered with respect to time.","Published":"2017-05-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"palaeoSig","Version":"1.1-3","Title":"Significance Tests for Palaeoenvironmental Reconstructions","Description":"Tests if quantitative palaeoenvironmental reconstructions are statistically significant.","Published":"2015-02-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"paleobioDB","Version":"0.5.0","Title":"Download and Process Data from the Paleobiology Database","Description":"Includes 19 functions to wrap each endpoint of\n the PaleobioDB API, plus 8 functions to visualize and process the fossil\n data. 
The API documentation for the Paleobiology Database can be found in\n .","Published":"2016-09-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"paleofire","Version":"1.1.9","Title":"Analysis of Charcoal Records from the Global Charcoal Database","Description":"Tools to extract and analyse charcoal sedimentary data stored in\n the Global Charcoal Database. Main functionalities include data extraction\n and site selection, transformation and interpolation of the charcoal\n records, as well as compositing.","Published":"2016-10-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"paleoMAS","Version":"2.0-1","Title":"Paleoecological Analysis","Description":"Transfer functions and statistical operations for\n paleoecology.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"paleomorph","Version":"0.1.4","Title":"Geometric Morphometric Tools for Paleobiology","Description":"Fill missing symmetrical data with mirroring, calculate Procrustes alignments with or without scaling, and compute standard or vector correlation and covariance matrices (congruence coefficients) of 3D landmarks. Tolerates missing data for all analyses.","Published":"2017-04-19","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"paleotree","Version":"2.7","Title":"Paleontological and Phylogenetic Analyses of Evolution","Description":"Provides tools for transforming, a posteriori time-scaling, and\n modifying phylogenies containing extinct (i.e. fossil) lineages. In particular,\n most users are interested in the functions timePaleoPhy(), bin_timePaleoPhy(),\n cal3TimePaleoPhy() and bin_cal3TimePaleoPhy(), which a posteriori time-scale cladograms of\n fossil taxa into dated phylogenies. This package also contains a large number\n of likelihood functions for estimating sampling and diversification rates from\n different types of data available from the fossil record (e.g. range data,\n occurrence data, etc). 
paleotree users can also simulate diversification and\n sampling in the fossil record using the function simFossilRecord(), which is a\n detailed simulator for branching birth-death-sampling processes composed of\n discrete taxonomic units arranged in ancestor-descendant relationships. Users\n can use simFossilRecord() to simulate diversification in incompletely sampled\n fossil records, under various models of morphological differentiation (i.e.\n the various patterns by which morphotaxa originate from one another), and\n with time-dependent, longevity-dependent and/or diversity-dependent rates of\n diversification, extinction and sampling. Additional functions allow users to\n translate simulated ancestor-descendant data from simFossilRecord() into standard\n time-scaled phylogenies or unscaled cladograms that reflect the relationships\n among taxon units.","Published":"2016-04-13","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"paleoTS","Version":"0.5-1","Title":"Analyze Paleontological Time-Series","Description":"Facilitates analysis of paleontological sequences of trait values from an evolving lineage. Functions are provided to fit, using maximum likelihood, evolutionary models including unbiased random walks, directional evolution, stasis, Ornstein-Uhlenbeck, punctuated change, and evolutionary models in which traits track some measured covariate.","Published":"2015-12-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"palettetown","Version":"0.1.1","Title":"Use Pokemon Inspired Colour Palettes","Description":"Use Pokemon(R) inspired palettes with additional 'ggplot2' scales.\n Palettes are the colours in each Pokemon's sprite, ordered by how common\n they are in the image. 
The first 386 Pokemon are currently provided.","Published":"2016-04-07","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"palinsol","Version":"0.93","Title":"Insolation for Palaeoclimate Studies","Description":"R package to compute Incoming Solar Radiation (insolation) for palaeoclimate studies. Features three solutions: Berger (1978), Berger and Loutre (1991) and Laskar et al. (2004). Computes daily-mean, season-averaged and annual means for all latitudes.","Published":"2016-03-05","License":"file LICENSE","snapshot_date":"2017-06-23"} {"Package":"palm","Version":"1.0.0","Title":"Fitting Point Process Models via the Palm Likelihood","Description":"Functions for the fitting of point process models using the Palm likelihood. First proposed by Tanaka, Ogata, and Stoyan (2008) , maximisation of the Palm likelihood can provide computationally efficient parameter estimation in situations where the full likelihood is intractable. This package is chiefly focused on Neyman-Scott point processes, but can also fit void processes. The development of this package was motivated by the analysis of capture-recapture surveys on which individuals cannot be identified---the data from which can conceptually be seen as a clustered point process. 
As such, some of the functions in this package are specifically for the estimation of cetacean density from two-camera aerial surveys.","Published":"2017-01-30","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"palr","Version":"0.0.6","Title":"Colour Palettes for Data","Description":"Colour palettes for data, based on some well known public data\n sets.","Published":"2016-07-27","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"pals","Version":"1.4","Title":"Color Palettes, Colormaps, and Tools to Evaluate Them","Description":"A comprehensive collection of color palettes, colormaps, and tools to evaluate them.","Published":"2017-06-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"pamctdp","Version":"0.3.2","Title":"Principal Axes Methods for Contingency Tables with Partition\nStructures on Rows and Columns","Description":"Correspondence Analysis of Contingency Tables with Simple\n and Double Structures Superimposed Representations, Intra\n Blocks Correspondence Analysis (IBCA), Weighted Intra Blocks\n Correspondence Analysis (WIBCA).","Published":"2017-01-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pamm","Version":"0.9","Title":"Power Analysis for Random Effects in Mixed Models","Description":"Simulation functions to assess or explore the power of a dataset to estimate significant random effects (intercept or slope) in a mixed model. The functions are based on the \"lme4\" package.","Published":"2015-12-16","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"pampe","Version":"1.1.2","Title":"Implementation of the Panel Data Approach Method for Program\nEvaluation","Description":"Implements the Panel Data Approach Method for program evaluation as developed in Hsiao, Ching and Ki Wan (2012). 
pampe estimates the effect of an intervention by comparing the evolution of the outcome for a unit affected by an intervention or treatment to the evolution of the unit had it not been affected by the intervention.","Published":"2015-11-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"pamr","Version":"1.55","Title":"Pam: prediction analysis for microarrays","Description":"Some functions for sample classification in microarrays.","Published":"2014-08-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"pan","Version":"1.4","Title":"Multiple Imputation for Multivariate Panel or Clustered Data","Description":"Multiple imputation for multivariate panel or clustered data.","Published":"2016-02-10","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"pAnalysis","Version":"2.0","Title":"Benchmarking and Rescaling R2 using Noise Percentile Analysis","Description":"Provides the tools needed to benchmark the R2 value corresponding to a certain acceptable noise level, while also providing a rescaling function based on that noise level that yields a new value of R2, referred to as R2k, which is independent of both the number of degrees of freedom and the noise distribution function.","Published":"2016-01-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"PANDA","Version":"0.9.9","Title":"Preferential Attachment Based Common Neighbor Distribution\nDerived Functional Associations","Description":"PANDA (Preferential Attachment based common Neighbor Distribution derived Associations) is designed to perform the following tasks in PPI networks: (1) identify significantly functionally associated protein pairs, (2) predict GO terms and KEGG pathways for proteins, (3) make a cluster of proteins based on the significant protein pairs, (4) identify subclusters whose members are enriched in KEGG pathways. 
For other types of biological networks, (1) and (3) can still be performed.","Published":"2016-12-05","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"pander","Version":"0.6.0","Title":"An R Pandoc Writer","Description":"Contains some functions catching all messages, stdout and other\n useful information while evaluating R code and other helpers to return user\n specified text elements (like: header, paragraph, table, image, lists etc.)\n in pandoc's markdown or several types of R objects similarly automatically\n transformed to markdown format. Also capable of exporting/converting (the\n resulting) complex pandoc documents to e.g. HTML, PDF, docx or odt. This\n latter reporting feature is supported in brew syntax or with a custom\n reference class with a smarty caching backend.","Published":"2015-11-23","License":"AGPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"pandocfilters","Version":"0.1-1","Title":"Pandoc Filters for R","Description":"The document converter 'pandoc' is widely used\n in the R community. One feature of 'pandoc' is that it can produce and consume\n JSON-formatted abstract syntax trees (AST). This allows one to transform a given\n source document into a JSON-formatted AST, alter it with so-called filters and pass\n the altered JSON-formatted AST back to 'pandoc'. This package provides functions\n which allow writing such filters in native R code. \n Although this package is inspired by the Python package 'pandocfilters' \n , it provides additional convenience functions which make it simple to use the 'pandocfilters' package as a \n report generator. 
Since 'pandocfilters' inherits most of its functionality\n from 'pandoc', it can create documents in many formats \n (for more information see ) but is also bound to the same\n limitations as 'pandoc'.","Published":"2016-04-27","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"panelaggregation","Version":"0.1.1","Title":"Aggregate Longitudinal Survey Data","Description":"Aggregate Business Tendency Survey Data (and other qualitative\n surveys) to time series at various aggregation levels. Run aggregation of\n survey data in a speedy, re-traceable and easily deployable way.\n Aggregation is substantially accelerated by use of data.table.\n This package intends to provide an interface that is less general and abstract than data.table but rather geared towards\n survey researchers.","Published":"2017-01-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"panelAR","Version":"0.1","Title":"Estimation of Linear AR(1) Panel Data Models with\nCross-Sectional Heteroskedasticity and/or Correlation","Description":"The package estimates linear models on panel data structures in the presence of AR(1)-type autocorrelation as well as panel heteroskedasticity and/or contemporaneous correlation. First, AR(1)-type autocorrelation is addressed via a two-step Prais-Winsten feasible generalized least squares (FGLS) procedure, where the autocorrelation coefficients may be panel-specific. A number of common estimators for the autocorrelation coefficient are supported. In case of panel heteroskedasticity, one can choose to use a sandwich-type robust standard error estimator with OLS or a panel weighted least squares estimator after the two-step Prais-Winsten estimator. 
Alternatively, if panels are both heteroskedastic and contemporaneously correlated, the package supports panel-corrected standard errors (PCSEs) as well as the Parks-Kmenta FGLS estimator.","Published":"2014-02-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PanelCount","Version":"1.0.9","Title":"Random Effects and/or Sample Selection Models for Panel Count\nData","Description":"A high performance package implementing random effects and/or sample selection models for panel count data.","Published":"2015-10-24","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"Paneldata","Version":"1.0","Title":"Linear models for panel data","Description":"Linear models for panel data: the fixed effect model and the\n random effect model","Published":"2014-03-20","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"pangaear","Version":"0.3.0","Title":"Client for the 'Pangaea' Database","Description":"Tools to interact with the 'Pangaea' Database\n (), including functions for searching for data,\n fetching 'datasets' by 'dataset' 'ID', and working with the 'Pangaea'\n 'OAI-PMH' service.","Published":"2017-03-08","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"PANICr","Version":"1.0.0","Title":"PANIC Tests of Nonstationarity","Description":"A methodology that makes use of the factor structure of large\n dimensional panels to understand the nature of nonstationarity inherent\n in data. This is referred to as PANIC, Panel Analysis of Nonstationarity\n in Idiosyncratic and Common Components. \n PANIC (2004) includes\n valid pooling methods that allow panel tests to be constructed.\n PANIC (2004) can detect whether the nonstationarity in a series is\n pervasive, or variable specific, or both.\n PANIC (2010) includes\n two new tests on the idiosyncratic component that estimates the pooled\n autoregressive coefficient and sample moment, respectively. 
The PANIC\n model approximates the number of factors based on\n Bai and Ng (2002) .","Published":"2016-09-24","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"PanJen","Version":"1.4","Title":"A Semi-Parametric Test for Specifying Functional Form","Description":"A central decision in a parametric regression is how to specify the relation between a dependent variable and each explanatory variable. This package provides a semi-parametric tool for comparing different transformations of an explanatory variable in a parametric regression. The functions are relevant in situations where you would use a Box-Cox or Box-Tidwell transformation. In contrast to the classic power transformations, the methods in this package allow for theoretically driven user input and the possibility to compare with a non-parametric transformation.","Published":"2017-05-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"papayar","Version":"1.0","Title":"View Medical Research Images using the Papaya JavaScript Library","Description":"Users pass images and objects of class 'nifti' from the 'oro.nifti'\n package to Papaya, an interactive lightweight JavaScript viewer.\n Although many packages can view individual slices or projections of\n image and matrix data, this package allows for quick and easy\n interactive browsing of images. 
The viewer is based on the\n Mango software, which is a lightweight medical image viewer.","Published":"2016-09-28","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"papeR","Version":"1.0-2","Title":"A Toolbox for Writing Pretty Papers and Reports","Description":"A toolbox for writing 'knitr', 'Sweave' or other 'LaTeX'- or 'markdown'-based\n\t reports and to prettify the output of various estimated models.","Published":"2017-02-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"paperplanes","Version":"0.0.1.9","Title":"Distance Recordings from a Paper Plane Folding/Flying Experiment","Description":"This is a data-only package that provides distances from a paper plane experiment.","Published":"2017-02-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"parallelDist","Version":"0.1.1","Title":"Parallel Distance Matrix Computation using Multiple Threads","Description":"A fast parallelized alternative to R's native 'dist' function to\n calculate distance matrices for continuous, binary, and multi-dimensional\n input matrices with support for a broad variety of distance functions from\n the 'stats', 'proxy' and 'dtw' R packages. For ease of use, the 'parDist'\n function extends the signature of the 'dist' function and uses the same\n parameter naming conventions as distance methods of existing R packages.\n The package is mainly implemented in C++ and leverages the 'RcppParallel'\n package to parallelize the distance computations with the help of the\n 'TinyThread' library. Furthermore, the 'Armadillo' linear algebra library\n is used for optimized matrix operations during distance calculations. 
The\n curiously recurring template pattern (CRTP) technique is applied to avoid\n virtual functions, which improves the Dynamic Time Warping calculations\n while keeping the implementation flexible enough to support different step\n patterns and normalization methods.","Published":"2017-06-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ParallelForest","Version":"1.1.0","Title":"Random Forest Classification with Parallel Computing","Description":"R package implementing random forest classification using parallel computing, built with Fortran and OpenMP in the backend.","Published":"2014-07-15","License":"AGPL-3","snapshot_date":"2017-06-23"} {"Package":"parallelize.dynamic","Version":"0.9-1","Title":"Automate parallelization of function calls by means of dynamic\ncode analysis","Description":"Passing a given function name or a call to the\n parallelize/parallelize_call functions analyses and executes\n the code, if possible in parallel. Parallel code execution can\n be performed locally or on remote batch queuing systems.","Published":"2013-05-22","License":"LGPL","snapshot_date":"2017-06-23"} {"Package":"parallelMap","Version":"1.3","Title":"Unified Interface to Parallelization Back-Ends","Description":"Unified parallelization framework for multiple back-ends,\n designed for internal package and interactive usage.\n The main operation is a parallel \"map\" over lists.\n Supports local, multicore, mpi and BatchJobs mode.\n Allows \"tagging\" of the parallel operation\n with a level name that can be later selected by the user to\n switch on parallel execution for exactly this operation.","Published":"2015-06-10","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"parallelMCMCcombine","Version":"1.0","Title":"Methods for combining independent subset Markov chain Monte\nCarlo (MCMC) posterior samples to estimate a posterior density\ngiven the full data set","Description":"Recent Bayesian Markov chain Monte Carlo (MCMC) 
methods have been developed for big data sets that are too large to be analyzed using traditional statistical methods. These methods partition the data into non-overlapping subsets, and perform parallel independent Bayesian MCMC analyses on the data subsets, creating independent subposterior samples for each data subset. These independent subposterior samples are combined through four functions in this package, including averaging across subset samples, weighted averaging across subset samples, and kernel smoothing across subset samples. The four functions assume the user has previously run the Bayesian analysis and has produced the independent subposterior samples outside of the package; the functions use as input the array of subposterior samples. The methods have been demonstrated to be useful for Bayesian MCMC models including Bayesian logistic regression, Bayesian Gaussian mixture models and Bayesian hierarchical Poisson-Gamma models. The methods are appropriate for Bayesian hierarchical models with hyperparameters, as long as data values in a single level of the hierarchy are not split into subsets.","Published":"2014-06-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"parallelML","Version":"1.2","Title":"A Parallel-Voting Algorithm for many Classifiers","Description":"By sampling your data, running the provided classifier on these samples in parallel on your own machine and letting your models vote on a prediction, we return much faster predictions than the regular machine learning algorithm and possibly even more accurate predictions.","Published":"2015-06-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ParallelPC","Version":"1.2","Title":"Parallelised Versions of Constraint Based Causal Discovery\nAlgorithms","Description":"Parallelise constraint based causality discovery and causal inference methods. 
The parallelised algorithms in the package will generate the same results as those of the 'pcalg' package but will be much more efficient. ","Published":"2015-12-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"parallelSVM","Version":"0.1-9","Title":"A Parallel-Voting Version of the Support-Vector-Machine\nAlgorithm","Description":"By sampling your data, running the Support-Vector-Machine algorithm on these samples in parallel on your own machine and letting your models vote on a prediction, we return much faster predictions than the regular Support-Vector-Machine and possibly even more accurate predictions.","Published":"2015-06-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ParallelTree","Version":"0.1.2","Title":"Parallel Tree","Description":"\n Provides two functions: Group_function() and Parallel_Tree(). Group_function() applies a given function (e.g., mean()) to input variable(s) by group across levels. Has additional data management options.\n Parallel_Tree() uses 'ggplot2' to create parallel coordinate plots (technically a facsimile of parallel coordinate plots in a Cartesian coordinate system).\n Used in combination, these functions can create parallel tree plots, a variant of parallel coordinate plots, which are useful for visualizing multilevel data.","Published":"2016-11-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"paramGUI","Version":"2.1.2","Title":"A Shiny GUI for some Parameter Estimation Examples","Description":"Allows specification and fitting of some parameter\n estimation examples inspired by time-resolved spectroscopy via\n a Shiny GUI.","Published":"2017-02-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ParamHelpers","Version":"1.10","Title":"Helpers for Parameters in Black-Box Optimization, Tuning and\nMachine Learning","Description":"Functions for parameter descriptions and operations in black-box\n optimization, tuning and machine learning. 
Parameters can be described\n (type, constraints, defaults, etc.), combined into parameter sets and can in\n general be programmed on. A useful OptPath object (archive) to log function\n evaluations is also provided.","Published":"2017-01-05","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"paramlink","Version":"1.1-0","Title":"Parametric Linkage and Other Pedigree Analysis in R","Description":"A suite of tools for analysing pedigrees with marker data, including parametric linkage analysis, forensic computations, relatedness analysis and marker simulations. The core of the package is an implementation of the Elston-Stewart algorithm for pedigree likelihoods, extended to allow mutations as well as complex inbreeding. Features for linkage analysis include singlepoint LOD scores, power analysis, and multipoint analysis (the latter through a wrapper to the MERLIN software). Forensic applications include exclusion probabilities, genotype distributions and conditional simulations. Data from the Familias software can be imported and analysed in paramlink. Finally, paramlink offers many utility functions for creating, manipulating and plotting pedigrees with or without marker data (the actual plotting is done by the kinship2 package).","Published":"2017-03-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"params","Version":"0.6.1","Title":"Simplify Parameters","Description":"An interface to simplify organizing parameters used in a package,\n using external configuration files. This attempts to provide a cleaner\n alternative to options().","Published":"2016-04-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"paran","Version":"1.5.1","Title":"Horn's Test of Principal Components/Factors","Description":"paran is an implementation of Horn's technique for\n numerically and graphically evaluating the components or\n factors retained in a principal components analysis (PCA) or\n common factor analysis (FA). 
Horn's method contrasts\n eigenvalues produced through a PCA or FA on a number of random\n data sets of uncorrelated variables with the same number of\n variables and observations as the experimental or observational\n data set to produce eigenvalues for components or factors that\n are adjusted for the sample error-induced inflation. Components\n with adjusted eigenvalues greater than one are retained. paran\n may also be used to conduct parallel analysis following\n Glorfeld's (1995) suggestions to reduce the likelihood of\n over-retention.","Published":"2012-09-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"parboost","Version":"0.1.4","Title":"Distributed Model-Based Boosting","Description":"Distributed gradient boosting based on the mboost package. The\n parboost package is designed to scale up component-wise functional\n gradient boosting in a distributed memory environment by splitting the\n observations into disjoint subsets, or alternatively using bootstrap\n samples (bagging). Each cluster node then fits a boosting model to its\n subset of the data. These boosting models are combined in an ensemble,\n either with equal weights, or by fitting a (penalized) regression\n model on the predictions of the individual models on the complete\n data.","Published":"2015-05-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"parcor","Version":"0.2-6","Title":"Regularized estimation of partial correlation matrices","Description":"The package estimates the matrix of partial correlations\n based on different regularized regression methods: lasso,\n adaptive lasso, PLS, and Ridge Regression. 
In addition, the\n package provides model selection for lasso, adaptive lasso and\n Ridge regression based on cross-validation.","Published":"2014-09-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ParDNAcopy","Version":"2.0","Title":"Parallel implementation of the \"segment\" function of package\n\"DNAcopy\"","Description":"Parallelized version of the \"segment\" function from Bioconductor package \"DNAcopy\", utilizing multi-core computation on host CPU","Published":"2014-12-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ParentOffspring","Version":"1.0","Title":"Conduct the Parent-Offspring Test Using Monomorphic SNP Markers","Description":"Conduct the Parent-Offspring Test Using Monomorphic SNP Markers. The similarity to the parents is computed for each offspring, and a plot of similarity for all offspring is produced. One can keep the offspring above some threshold for the similarity for further studies.","Published":"2013-09-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ParetoPosStable","Version":"1.1","Title":"Computing, Fitting and Validating the PPS Distribution","Description":"Statistical functions to describe a Pareto Positive Stable (PPS) \n distribution and fit it to real data. 
Graphical and statistical tools to \n validate the fits are included.","Published":"2015-09-02","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"parfm","Version":"2.7.5","Title":"Parametric Frailty Models","Description":"Fits Parametric Frailty Models by maximum marginal likelihood.\n Possible baseline hazards:\n exponential, Weibull, inverse Weibull (Fréchet),\n Gompertz, lognormal, log-skew-normal, and loglogistic.\n Possible Frailty distributions:\n gamma, positive stable, inverse Gaussian and lognormal.","Published":"2017-04-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"parfossil","Version":"0.2.0","Title":"Parallelized functions for palaeoecological and\npalaeogeographical analysis","Description":"The package provides a number of easily parallelized\n functions from the fossil package. This package is designed to\n be used with some type of parallel computing backend, such as\n multicore, snow or MPI.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"parlitools","Version":"0.0.4","Title":"Tools for Analysing UK Politics","Description":"Provides various tools for analysing UK political data, including creating political cartograms and retrieving data.","Published":"2017-06-05","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"parma","Version":"1.5-3","Title":"Portfolio Allocation and Risk Management Applications","Description":"Provision of a set of models and methods for use in the allocation and management of capital in financial portfolios.","Published":"2016-08-25","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"parmigene","Version":"1.0.2","Title":"Parallel Mutual Information estimation for Gene Network\nreconstruction","Description":"The package provides a parallel estimation of the mutual\n information based on entropy estimates from k-nearest neighbors\n distances and algorithms for the reconstruction of gene\n regulatory 
networks.","Published":"2012-07-23","License":"AGPL-3","snapshot_date":"2017-06-23"} {"Package":"PARSE","Version":"0.1.0","Title":"Model-Based Clustering with Regularization Methods for\nHigh-Dimensional Data","Description":"Model-based clustering and identifying informative features based on regularization methods. The package includes three regularization methods - PAirwise Reciprocal fuSE (PARSE) penalty proposed by Wang, Zhou and Hoeting (2016), the adaptive L1 penalty (APL1) and the adaptive pairwise fusion penalty (APFP). Heatmaps are included to show the identification of informative features.","Published":"2016-06-11","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"parsec","Version":"1.1.2","Title":"Partial Orders in Socio-Economics","Description":"Implements basic partial order tools for multidimensional poverty evaluation with ordinal variables. Its main goal is to provide socio-economic scholars with an integrated set of elementary functions for multidimensional poverty evaluation, based on ordinal information. The package is organized in four main parts. The first two comprise functions for data management and basic partial order analysis; the third and the fourth are devoted to evaluation and implement both the poset-based approach and a more classical counting procedure.","Published":"2016-05-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"parsedate","Version":"1.1.3","Title":"Recognize and Parse Dates in Various Formats, Including All ISO\n8601 Formats","Description":"Parse dates automatically, without the need to\n specify a format. 
Currently it includes the git date parser.\n It can also recognize and parse all ISO 8601 formats.","Published":"2017-03-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"parsemsf","Version":"0.1.0","Title":"Parse Thermo MSF Files and Estimate Protein Abundances","Description":"Provides functions for parsing Thermo MSF files produced by Proteome Discoverer 1.4.x (see for more information). This package makes it easy to view individual peptide information, including peak areas, and to map peptides to locations within the parent protein sequence. This package also estimates protein abundances from peak areas and across multiple technical replicates. The author of this package is not affiliated with ThermoFisher Scientific in any way.","Published":"2017-01-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"partDSA","Version":"0.9.14","Title":"Partitioning Using Deletion, Substitution, and Addition Moves","Description":"A novel tool for generating a piecewise\n constant estimation list of increasingly complex predictors\n based on an intensive and comprehensive search over the entire\n covariate space.","Published":"2017-01-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"partialAR","Version":"1.0.10","Title":"Partial Autoregression","Description":"A time series is said to be partially autoregressive if it can be represented as a sum of a random walk and an autoregressive sequence without unit roots. This package fits partially autoregressive time series, where the autoregressive component is AR(1). 
This may be of use in modeling certain financial time series.","Published":"2017-04-18","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"partialCI","Version":"1.1.0","Title":"Partial Cointegration","Description":"A collection of time series is partially cointegrated if a linear combination of these time series can be found so that the residual spread is partially autoregressive - meaning that it can be represented as a sum of an autoregressive series and a random walk. This concept is useful in modeling certain sets of financial time series and beyond, as it allows for the spread to contain transient and permanent components alike. Partial cointegration has been introduced by Clegg and Krauss (2016), along with a large-scale empirical application to financial market data. The partialCI package comprises estimation, testing, and simulation routines for partial cointegration models in state space. Clegg et al. (2017) provide an in-depth discussion of the package functionality as well as illustrative examples in the fields of finance and macroeconomics.","Published":"2017-04-25","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"Partiallyoverlapping","Version":"1.0","Title":"Partially Overlapping Samples t-Tests","Description":"The \"partially overlapping samples t-tests\", for the comparison\n of means for two samples which include both paired observations and independent\n observations. [See Derrick, B., Russ, B., Toher, D. & White P (2017). Test\n statistics for the comparison of means for two samples which include both paired\n observations and independent observations. 
Journal of Modern Applied Statistical\n Methods, 16(1)].","Published":"2017-01-04","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"partialOR","Version":"0.9","Title":"Partial Odds Ratio","Description":"Computes Odds Ratio adjusted for a vector of possibly\n continuous covariates","Published":"2013-01-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"partitionMap","Version":"0.5","Title":"Partition Maps","Description":"Low-dimensional embedding, using Random Forests for\n multiclass classification","Published":"2013-01-18","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"partitionMetric","Version":"1.1","Title":"Compute a distance metric between two partitions of a set","Description":"partitionMetric computes a distance between two partitions\n of a set.","Published":"2014-03-02","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"partitions","Version":"1.9-18","Title":"Additive Partitions of Integers","Description":"Additive partitions of integers. Enumerates the\n partitions, unequal partitions, and restricted partitions of an\n integer; the three corresponding partition functions are also\n given. Set partitions are now included.","Published":"2015-08-06","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"partools","Version":"1.1.6","Title":"Tools for the 'Parallel' Package","Description":"Miscellaneous utilities for parallelizing large\n computations. Alternative to MapReduce.\n File splitting and distributed operations such as sort and aggregate.\n \"Software Alchemy\" method for parallelizing most statistical methods,\n presented in N. Matloff, Parallel Computation for Data Science,\n Chapman and Hall, 2015. 
Includes a debugging aid.","Published":"2017-04-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"partsm","Version":"1.1-2","Title":"Periodic Autoregressive Time Series Models","Description":"This package provides basic functions to fit and predict periodic autoregressive time series models. These models are discussed in the book P.H. Franses (1996) \"Periodicity and Stochastic Trends in Economic Time Series\", Oxford University Press. The data set analyzed in that book is also provided. NOTE: the package was orphaned for several years. It is now only maintained, but no major enhancements are expected, and the maintainer cannot provide any support. ","Published":"2014-03-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"party","Version":"1.2-3","Title":"A Laboratory for Recursive Partytioning","Description":"A computational toolbox for recursive partitioning.\n The core of the package is ctree(), an implementation of\n conditional inference trees which embed tree-structured \n regression models into a well defined theory of conditional\n inference procedures. This non-parametric class of regression\n trees is applicable to all kinds of regression problems, including\n nominal, ordinal, numeric, censored as well as multivariate response\n variables and arbitrary measurement scales of the covariates. \n Based on conditional inference trees, cforest() provides an\n implementation of Breiman's random forests. The function mob()\n implements an algorithm for recursive partitioning based on\n parametric models (e.g. linear models, GLMs or survival\n regression) employing parameter instability tests for split\n selection. Extensible functionality for visualizing tree-structured\n regression models is available. The methods are described in\n Hothorn et al. (2006) ,\n Zeileis et al. (2008) and \n Strobl et al. 
(2007) .","Published":"2017-04-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"partykit","Version":"1.1-1","Title":"A Toolkit for Recursive Partytioning","Description":"A toolkit with infrastructure for representing, summarizing, and\n visualizing tree-structured regression and classification models. This\n unified infrastructure can be used for reading/coercing tree models from\n different sources ('rpart', 'RWeka', 'PMML') yielding objects that share\n functionality for print()/plot()/predict() methods. Furthermore, new and improved\n reimplementations of conditional inference trees (ctree()) and model-based\n recursive partitioning (mob()) from the 'party' package are provided based\n on the new infrastructure.","Published":"2016-09-20","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"parviol","Version":"1.1","Title":"Parviol","Description":"Parviol combines parallel coordinates and violin plot","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PAS","Version":"1.2","Title":"Polygenic Analysis System (PAS)","Description":"An R package for polygenic trait analysis","Published":"2013-08-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PASenseWear","Version":"1.0","Title":"Summarize Daily Physical Activity from 'SenseWear' Accelerometer\nData","Description":"Provide summary table of daily physical activity and per-person/grouped heat map for accelerometer data from SenseWear Armband. 
See for more information about SenseWear Armband.","Published":"2016-08-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"pass","Version":"1.0","Title":"Prediction and Stability Selection of Tuning Parameters","Description":"To implement two methods, Kappa and PASS, for selecting\n tuning parameters in regularized procedures such as LASSO,\n SCAD, adaptive LASSO, aiming for variable selection in\n regularized regression","Published":"2013-01-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"password","Version":"1.0-0","Title":"Create Random Passwords","Description":"Create random passwords of letters, numbers and punctuation.","Published":"2016-03-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"pasta","Version":"0.1.0","Title":"Noodlyfied Pasting of Strings","Description":"Intuitive and readable infix functions to paste strings together.","Published":"2016-12-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"pastecs","Version":"1.3-18","Title":"Package for Analysis of Space-Time Ecological Series","Description":"Regulation, decomposition and analysis of space-time series. The pastecs library is a PNEC-Art4 and IFREMER (Benoit Beliaeff ) initiative to bring PASSTEC 2000 (http://www.obs-vlfr.fr/~enseigne/anado/passtec/passtec.htm) functionalities to R.","Published":"2014-03-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pastis","Version":"0.1-2","Title":"Phylogenetic Assembly with Soft Taxonomic Inferences","Description":"A pre-processor for mrBayes that assimilates sequences, taxonomic\n information and tree constraints as per xxx. The main functions of\n interest for most users will be pastis_simple, pastis_main and conch. The\n main analysis is conducted with pastis_simple or pastis_main followed by a\n manual execution of mrBayes (>3.2). 
The placement of taxa not contained in\n the tree constraint can be investigated using conch.","Published":"2013-09-11","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"PASWR","Version":"1.1","Title":"PROBABILITY and STATISTICS WITH R","Description":"Data and functions for the book PROBABILITY and STATISTICS\n WITH R.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PASWR2","Version":"1.0.2","Title":"Probability and Statistics with R, Second Edition","Description":"Functions and data sets for the text Probability and Statistics\n with R, Second Edition.","Published":"2016-01-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"patchDVI","Version":"1.9.1616","Title":"Package to Patch .dvi or .synctex Files","Description":"Functions to patch specials in .dvi files,\n or entries in .synctex files. Works with \"concordance=TRUE\" \n in Sweave or knitr to link sources to previews.","Published":"2015-06-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"patchPlot","Version":"0.1.5","Title":"Scatterplots of image patches","Description":"Functions to generate scatterplots with image patches\n instead of usual glyphs, with associated utilities.","Published":"2013-03-11","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"patchSynctex","Version":"0.1-4","Title":"Communication Between Editor and Viewer for Literate Programs","Description":"This utility eases the debugging of literate documents\n\t ('noweb' files) by patching the synchronization information\n\t (the '.synctex(.gz)' file) produced by 'pdflatex' with\n\t concordance information produced by 'Sweave' or 'knitr';\n\t this allows for bilateral communication\n\t between a text editor (visualizing the 'noweb' source) and\n\t a viewer (visualizing the resultant 'PDF'), thus bypassing\n\t the intermediate 'TeX' file.","Published":"2016-12-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} 
{"Package":"PATHChange","Version":"1.0","Title":"A Tool for Identification of Differentially Expressed Pathways\nusing Multi-Statistic Comparison","Description":"An R tool suited to Affymetrix microarray data that combines three different statistical tests (Bootstrap, Fisher exact and Wilcoxon signed rank) to evaluate genetic pathway alterations.","Published":"2016-06-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"pathClass","Version":"0.9.4","Title":"Classification using biological pathways as prior knowledge","Description":"pathClass is a collection of classification methods that\n use information about feature connectivity in a biological\n network as an additional source of information. This additional\n knowledge is incorporated into the classification a priori.\n Several authors have shown that this approach significantly\n increases the classification performance.","Published":"2013-07-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pathdiagram","Version":"0.1.9","Title":"Basic functions for drawing path diagrams","Description":"Implementation of simple functions to draw\n basic path diagrams just for visualization purposes.","Published":"2013-07-29","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"pathmapping","Version":"1.0.2","Title":"Compute Deviation and Correspondence Between Spatial Paths","Description":"Functions to compute and display the area-based deviation between spatial paths and to compute a mapping based on minimizing area and distance-based cost. For details, see: Mueller, S. T., Perelman, B. S., & Veinott, E. S. 
(2016) .","Published":"2017-03-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"pathmox","Version":"0.2.0","Title":"Pathmox Approach of Segmentation Trees in Partial Least Squares\nPath Modeling","Description":"pathmox, the cousin package of plspm, provides a very\n interesting solution for handling segmentation variables\n in PLS Path Modeling: segmentation trees in PLS Path Modeling.","Published":"2013-12-10","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"pathological","Version":"0.1-2","Title":"Path Manipulation Utilities","Description":"Utilities for paths, files and directories.","Published":"2017-02-15","License":"Unlimited","snapshot_date":"2017-06-23"} {"Package":"PathSelectMP","Version":"1.0","Title":"Backwards Variable Selection for Paths using M Plus","Description":"Primarily for use with datasets containing only categorical variables, although continuous variables may be included as independent variables in paths. Using M Plus, backward variable selection is performed on all Total, Total Indirect, and then Direct effects until none of these effects have p-values greater than the specified target p-value. If there are missing values in the data, imputations are performed using the Mice package. 
Then selection is performed with the imputed data sets, and results are averaged.","Published":"2016-05-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"patPRO","Version":"1.1.0","Title":"Visualizing Temporal Microbiome Data","Description":"Quickly and easily visualize longitudinal microbiome profiles using standard output from the QIIME microbiome analysis toolkit (see for more information).","Published":"2016-02-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"patternator","Version":"0.1.0","Title":"Feature Extraction from Female Brown Anole Lizard Dorsal\nPatterns","Description":"Provides a set of functions to efficiently recognize and clean the continuous dorsal pattern of a female brown anole lizard (Anolis sagrei) traced from 'ImageJ', an open platform for scientific image analysis (see for more information), and extract common features such as the pattern sinuosity indices, coefficient of variation, and max-min width.","Published":"2017-06-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PatternClass","Version":"1.7.1","Title":"Class-Focused Pattern Metric Comparisons using Simulation","Description":"Provides tools for estimating composition and configuration parameters from a categorical (binary) landscape map (grid) and then simulates a selected number of statistically similar landscapes. Class-focused pattern metrics are computed for each simulated map to produce empirical distributions against which statistical comparisons can be made. The code permits the analysis of single maps or pairs of maps.","Published":"2016-10-28","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"patternize","Version":"0.0.1","Title":"Quantification of Color Pattern Variation","Description":"Quantification of variation in organismal color patterns as obtained from image data. \n Patternize defines homology between pattern positions across images either through \n fixed landmarks or image registration. 
Pattern identification is performed by \n categorizing the distribution of colors using either an RGB threshold or unsupervised \n image segmentation.","Published":"2017-03-29","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"patternplot","Version":"0.1","Title":"Versatile Pie Chart using Patterns, Colors, and Images","Description":"Creates aesthetically pleasing and informative pie charts. \n It can plot pie charts either in black and white or in colors, \n with or without filled patterns. On the one hand, black and white pie charts filled \n\t\t\t with patterns are useful for publications, especially when an increasing \n\t\t\t number of journals only accept black and white figures or charge a \n\t\t\t significant amount for a color figure. On the other hand, colorful pie\n\t\t\t charts with or without patterns are useful for print design, online publishing,\n\t\t\t or poster and 'PowerPoint' presentations. 'patternplot' allows the flexibility of a\n\t\t\t variety of combinations of patterns and colors to choose from. It also has the\n\t\t\t ability to fill in the slices with any external images in 'png' and 'jpeg' formats.\n\t\t\t In summary, 'patternplot' allows the users to be as creative as they can while\n\t\t\t creating pie charts!","Published":"2016-12-20","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"pauwels2014","Version":"1.0","Title":"Bayesian Experimental Design for Systems Biology","Description":"Implementation of a Bayesian active learning strategy to carry out sequential experimental design in the context of biochemical network kinetic parameter estimation. This package gathers functions and pre-computed data sets to reproduce results presented in Pauwels E. et. al published in BMC Systems Biology, 2014. Scripts are given to compute all results from scratch or to draw pictures based on pre-computed data sets. 
","Published":"2014-08-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"pavo","Version":"1.1.0","Title":"Perceptual Analysis, Visualization and Organization of Spectral\nColor Data in R","Description":"A cohesive framework for parsing, analyzing and organizing color from spectral data.","Published":"2017-05-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pawacc","Version":"1.2.2","Title":"Physical Activity with Accelerometers","Description":"This is a collection of functions to process, format and store accelerometer data.","Published":"2017-02-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PAWL","Version":"0.5","Title":"Implementation of the PAWL algorithm","Description":"Implementation of the Parallel Adaptive Wang-Landau\n algorithm. Also implemented for comparison: parallel adaptive\n Metropolis-Hastings, SMC sampler.","Published":"2012-11-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pawls","Version":"1.0.0","Title":"Penalized Adaptive Weighted Least Squares Regression","Description":"Efficient algorithms for fitting weighted least squares regression with \\eqn{L_{1}}{L1} regularization on both the\n coefficients and weight vectors, which is able to perform simultaneous variable selection \n and outlier detection efficiently.","Published":"2017-05-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"pbapply","Version":"1.3-2","Title":"Adding Progress Bar to '*apply' Functions","Description":"A lightweight package that adds\n a progress bar to vectorized R functions\n ('*apply'). The implementation can easily be added\n to functions where showing the progress is\n useful (e.g. bootstrap). 
The type and style of the\n progress bar (with percentages or remaining time)\n can be set through options.\n Supports several parallel processing backends.","Published":"2017-03-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"pbatR","Version":"2.2-9","Title":"P2BAT","Description":"This package provides data analysis via the pbat program,\n and an alternative internal implementation of the power\n calculations via simulation only. For analysis, this package\n provides a frontend to the PBAT software, automatically reading\n in the output from the pbat program and displaying the\n corresponding figure when appropriate (i.e. PBAT-logrank). It\n includes support for multiple processes and clusters. For\n analysis, users must download PBAT (developed by Christoph\n Lange) and accept its license, available on the PBAT webpage.\n Both the data analysis and power calculations have command line\n and graphical interfaces using tcltk.","Published":"2013-03-30","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"PBD","Version":"1.4","Title":"Protracted Birth-Death Model of Diversification","Description":"Conducts maximum likelihood analysis and simulation of the\n protracted birth-death model of diversification. See\n Etienne, R.S. & J. Rosindell 2012 ;\n Lambert, A., H. Morlon & R.S. Etienne 2014, ;\n Etienne, R.S., H. Morlon & A. Lambert 2014, .","Published":"2017-05-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"pbdBASE","Version":"0.4-5","Title":"Programming with Big Data -- Base Wrappers for Distributed\nMatrices","Description":"An interface to and extensions for the 'PBLAS' and\n 'ScaLAPACK' numerical libraries. This enables R to utilize\n distributed linear algebra for codes written in the 'SPMD' fashion.\n This interface is deliberately low-level and mimics the style of\n the native libraries it wraps. 
For a much higher level way of\n managing distributed matrices, see the 'pbdDMAT' package.","Published":"2016-10-13","License":"Mozilla Public License 2.0","snapshot_date":"2017-06-23"} {"Package":"pbdDEMO","Version":"0.3-1","Title":"Programming with Big Data -- Demonstrations and Examples Using\n'pbdR' Packages","Description":"A set of demos of 'pbdR' packages, together with a useful,\n unifying vignette.","Published":"2016-10-25","License":"Mozilla Public License 2.0","snapshot_date":"2017-06-23"} {"Package":"pbdDMAT","Version":"0.4-2","Title":"'pbdR' Distributed Matrix Methods","Description":"A set of classes for managing distributed matrices, and\n a collection of methods for computing linear algebra and\n statistics. Computation is handled mostly by routines from the\n 'pbdBASE' package, which itself relies on the 'ScaLAPACK' and\n 'PBLAS' numerical libraries for distributed computing.","Published":"2016-10-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pbdMPI","Version":"0.3-3","Title":"Programming with Big Data -- Interface to MPI","Description":"An efficient interface to MPI by utilizing S4\n classes and methods with a focus on Single Program/Multiple Data\n ('SPMD')\n parallel programming style, which is intended for batch parallel\n execution.","Published":"2016-12-18","License":"Mozilla Public License 2.0","snapshot_date":"2017-06-23"} {"Package":"pbdNCDF4","Version":"0.1-4","Title":"Programming with Big Data -- Interface to Parallel Unidata\nNetCDF4 Format Data Files","Description":"This package adds collective parallel read and write capability\n to the R package ncdf4 version 1.8. Typical use is as a\n parallel NetCDF4 file reader in SPMD style programming. 
Each R\n process reads and writes its own data in a synchronized\n collective mode, resulting in faster parallel performance.\n Performance improvement is conditional on a parallel file system.","Published":"2014-06-22","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"pbdPROF","Version":"0.3-1","Title":"Programming with Big Data --- MPI Profiling Tools","Description":"MPI profiling tools.","Published":"2016-09-23","License":"Mozilla Public License 2.0","snapshot_date":"2017-06-23"} {"Package":"pbdRPC","Version":"0.1-1","Title":"Programming with Big Data -- Remote Procedure Call","Description":"A very light implementation yet secure for remote procedure calls\n with unified interface via ssh (OpenSSH) or plink/plink.exe (PuTTY).","Published":"2017-01-01","License":"Mozilla Public License 2.0","snapshot_date":"2017-06-23"} {"Package":"pbdSLAP","Version":"0.2-2","Title":"Programming with Big Data -- Scalable Linear Algebra Packages","Description":"Utilizing scalable linear algebra packages mainly\n including BLACS, PBLAS, and ScaLAPACK in double precision via\n pbdMPI based on ScaLAPACK version 2.0.2.","Published":"2016-09-25","License":"Mozilla Public License 2.0","snapshot_date":"2017-06-23"} {"Package":"pbdZMQ","Version":"0.2-6","Title":"Programming with Big Data -- Interface to ZeroMQ","Description":"'ZeroMQ' is a well-known library for high-performance\n asynchronous messaging in scalable, distributed applications. This\n package provides high level R wrapper functions to easily utilize\n 'ZeroMQ'. We mainly focus on interactive client/server programming\n frameworks. For convenience, a minimal 'ZeroMQ' library (4.1.0 rc1)\n is shipped with 'pbdZMQ', which can be used if no system installation\n of 'ZeroMQ' is available. 
A few wrapper functions compatible with\n 'rzmq' are also provided.","Published":"2017-05-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"PBIBD","Version":"1.2","Title":"Partially Balanced Incomplete Block Designs","Description":"It constructs four series of PBIB designs and also assists in calculating the efficiencies of PBIB Designs with any number of associate classes. This will help the researchers in adopting a PBIB design and calculating the efficiencies of any PBIB design very quickly and efficiently.","Published":"2017-01-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PBImisc","Version":"1.0","Title":"A Set of Datasets Used in My Classes or in the Book 'Modele\nLiniowe i Mieszane w R, Wraz z Przykladami w Analizie Danych'","Description":"A set of datasets and functions used in the book\n 'Modele liniowe i mieszane w R, wraz z przykladami w analizie danych'.\n Datasets either come from real studies or are created to be as similar \n as possible to real studies.","Published":"2016-02-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pbivnorm","Version":"0.6.0","Title":"Vectorized Bivariate Normal CDF","Description":"Provides a vectorized R function for calculating\n probabilities from a standard bivariate normal CDF.","Published":"2015-01-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pbkrtest","Version":"0.4-7","Title":"Parametric Bootstrap and Kenward Roger Based Methods for Mixed\nModel Comparison","Description":"Tests in mixed effects models. Attention is on mixed effects models\n as implemented in the 'lme4' package. 
This package implements a parametric\n bootstrap test and a Kenward Roger modification of F-tests for linear mixed\n effects models and a parametric bootstrap test for generalized linear mixed\n models.","Published":"2017-03-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pbmcapply","Version":"1.2.2","Title":"Tracking the Progress of Mc*pply with Progress Bar","Description":"A light-weight package that helps you track and visualize\n the progress of parallel versions of vectorized R functions (mc*apply).\n Parallelization (mc.cores > 1) works only on *nix (Linux, Unix such as macOS) systems, because Windows lacks the fork() functionality that is essential for mc*apply.","Published":"2017-05-30","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"PBNPA","Version":"0.0.1","Title":"Permutation Based Non-Parametric Analysis of CRISPR Screen Data","Description":"Implements permutation based non-parametric analysis of CRISPR (Clustered \n Regularly Interspaced Short Palindromic Repeats) screen data.","Published":"2016-12-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pbo","Version":"1.3.4","Title":"Probability of Backtest Overfitting","Description":"Following the method of Bailey et al., computes for a collection\n of candidate models the probability of backtest overfitting, the\n performance degradation and probability of loss, and the stochastic\n dominance.","Published":"2014-05-31","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"pBrackets","Version":"1.0","Title":"Plot Brackets","Description":"Adds different kinds of brackets to a plot, including braces, chevrons, parentheses or square brackets.","Published":"2014-10-17","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"pbs","Version":"1.1","Title":"Periodic B Splines","Description":"Periodic B Splines Basis","Published":"2013-06-08","License":"GPL-2","snapshot_date":"2017-06-23"} 
{"Package":"PBSadmb","Version":"0.68.104","Title":"ADMB for R Using Scripts or GUI","Description":"R Support for ADMB (Automatic Differentiation Model Builder)","Published":"2014-04-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PBSddesolve","Version":"1.12.2","Title":"Solver for Delay Differential Equations","Description":"Routines for solving systems of delay differential equations by\n interfacing numerical routines written by Simon N. Wood, with contributions\n by Benjamin J. Cairns. These numerical routines first appeared in Simon\n Wood's 'solv95' program. This package includes a vignette and a complete\n user's guide. 'PBSddesolve' originally appeared on CRAN under the name\n 'ddesolve'. That version is no longer supported. The current name emphasizes\n a close association with other PBS packages, particularly 'PBSmodelling'.","Published":"2016-09-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PBSmapping","Version":"2.69.76","Title":"Mapping Fisheries Data and Spatial Analysis Tools","Description":"This software has evolved from fisheries research conducted at the\n Pacific Biological Station (PBS) in `Nanaimo', British Columbia, Canada. It\n extends the R language to include two-dimensional plotting features similar\n to those commonly available in a Geographic Information System (GIS).\n Embedded C code speeds algorithms from computational geometry, such as\n finding polygons that contain specified point events or converting between\n longitude-latitude and Universal Transverse Mercator (UTM) coordinates.\n Additionally, we include `C++' code developed by Angus Johnson for the `Clipper'\n library. Also included are data for a global shoreline and other\n data sets in the public domain. 
The R directory `.../library/PBSmapping/doc'\n offers a complete user's guide, which should be consulted to use package\n functions effectively.","Published":"2015-04-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PBSmodelling","Version":"2.67.266","Title":"GUI Tools Made Easy: Interact with Models and Explore Data","Description":"Provides software to facilitate the design, testing, and operation\n of computer models. It focuses particularly on tools that make it easy to\n construct and edit a customized graphical user interface (GUI). Although our\n simplified GUI language depends heavily on the R interface to the Tcl/Tk\n package, a user does not need to know Tcl/Tk. Examples illustrate models\n built with other R packages, including PBSmapping, PBSddesolve, and BRugs. \n A complete user's guide `PBSmodelling-UG.pdf' shows how to use this package\n effectively.","Published":"2015-01-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pca3d","Version":"0.10","Title":"Three Dimensional PCA Plots","Description":"Functions simplifying presentation of PCA models in a 3D interactive representation using 'rgl'.","Published":"2017-02-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"PCA4TS","Version":"0.1","Title":"Segmenting Multiple Time Series by Contemporaneous Linear\nTransformation","Description":"To seek for a contemporaneous linear transformation for\n a multivariate time series such that the transformed series is segmented\n into several lower-dimensional subseries, and those subseries are\n uncorrelated with each other both contemporaneously and serially.","Published":"2015-08-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"pcaBootPlot","Version":"0.2.0","Title":"Create 2D Principal Component Plots with Bootstrapping","Description":"Draws a 2D principal component plot using the first 2 principal\n components from the original and bootstrapped data to give some sense of\n 
variability.","Published":"2015-06-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"pcadapt","Version":"3.0.4","Title":"Fast Principal Component Analysis for Outlier Detection","Description":"Methods to detect genetic markers involved in biological\n adaptation. 'pcadapt' provides statistical tools for outlier detection based on\n Principal Component Analysis. Implements the method described in (Luu, 2016)\n .","Published":"2017-01-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PCADSC","Version":"0.8.0","Title":"Tools for Principal Component Analysis-Based Data Structure\nComparisons","Description":"A suite of non-parametric, visual tools for assessing differences in data structures\n for two datasets that contain different observations of the same variables. These tools are all \n based on Principal Component Analysis (PCA) and thus effectively address differences in the structures\n of the covariance matrices of the two datasets. The PCADSC tools consist of easy-to-use, \n intuitive plots that each focus on different aspects of the PCA decompositions. The cumulative eigenvalue\n (CE) plot describes differences in the variance components (eigenvalues) of the deconstructed covariance matrices. The\n angle plot presents the information loss when moving from the PCA decomposition of one dataset to the \n PCA decomposition of the other. The chroma plot describes the loading patterns of the two datasets, thereby\n presenting the relative weighting and importance of the variables from the original dataset. ","Published":"2017-04-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"pcaL1","Version":"1.5.1","Title":"L1-Norm PCA Methods","Description":"Implementations of several methods for principal component analysis \n using the L1 norm. The package depends on COIN-OR Clp version >= \n 1.12.0. 
The methods implemented are \n PCA-L1 (Kwak 2008) , \n L1-PCA (Ke and Kanade 2003, 2005) , \n L1-PCA* (Brooks, Dula, and Boone 2013) , \n L1-PCAhp (Visentin, Prestwich and Armagan 2016) \n , \n wPCA (Park and Klabjan 2016),\n awPCA (Park and Klabjan 2016),\n PCA-Lp (Kwak 2014) , and\n SharpEl1-PCA (Brooks and Dula, submitted).","Published":"2017-04-10","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"pcalg","Version":"2.4-5","Title":"Methods for Graphical Models and Causal Inference","Description":"Functions for causal structure\n learning and causal inference using graphical models. The main algorithms\n for causal structure learning are PC (for observational data without hidden\n variables), FCI and RFCI (for observational data with hidden variables),\n and GIES (for a mix of data from observational studies\n (i.e. observational data) and data from experiments\n involving interventions (i.e. interventional data) without hidden\n variables). For causal inference the IDA algorithm, the Generalized\n Backdoor Criterion (GBC) and the Generalized Adjustment Criterion (GAC)\n are implemented.","Published":"2017-02-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PCAmixdata","Version":"2.2","Title":"Multivariate Analysis of Mixed Data","Description":"Principal Component Analysis, orthogonal rotation and multiple factor analysis for a mixture of quantitative and qualitative variables.","Published":"2014-12-05","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"pcaPA","Version":"2.0.2","Title":"Parallel Analysis for Ordinal and Numeric Data using Polychoric\nand Pearson Correlations with S3 Classes","Description":"A set of functions to perform parallel analysis for\n principal components analysis intended mainly for large data\n sets. 
It performs a parallel analysis of continuous, ordered\n (including dichotomous/binary as a special case) or mixed type\n of data associated with a principal components analysis.\n Polychoric correlations among ordered variables, Pearson\n correlations among continuous variables and polyserial\n correlation between mixed type variables (one ordered and one\n continuous) are used. Whenever the use of polyserial or\n polychoric correlations yields a non positive definite\n correlation matrix, the resulting matrix is transformed into\n the nearest positive definite matrix. This is a continued work \n based on a previous version developed at the Colombian Institute \n for the evaluation of education - ICFES.","Published":"2016-09-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pcaPP","Version":"1.9-61","Title":"Robust PCA by Projection Pursuit","Description":"Provides functions for robust PCA by projection pursuit.","Published":"2016-10-11","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"pcdpca","Version":"0.2.1","Title":"Dynamic Principal Components for Periodically Correlated\nFunctional Time Series","Description":"Method extends multivariate dynamic principal components to periodically correlated multivariate time series.","Published":"2016-11-27","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"PCDSpline","Version":"1.0","Title":"Semiparametric regression analysis of panel count data using\nmonotone splines","Description":"Semiparametric regression analysis of panel count data under the non-homogeneous Poisson\n process model with and without Gamma frailty using monotone splines.","Published":"2014-06-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pcev","Version":"1.1.1","Title":"Principal Component of Explained Variance","Description":"Principal component of explained variance (PCEV) is a statistical tool for the analysis of a multivariate\n response vector. 
It is a dimension-reduction technique, similar to Principal\n component analysis (PCA), which seeks to maximize the proportion of\n variance (in the response vector) being explained by a set of covariates.","Published":"2016-12-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PCFAM","Version":"1.0","Title":"Computation of Ancestry Scores with Mixed Families and Unrelated\nIndividuals","Description":"We provide several algorithms to compute the genotype ancestry scores (such as eigenvector projections) in the case where highly correlated individuals are involved.","Published":"2017-03-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"pcg","Version":"1.1","Title":"Preconditioned Conjugate Gradient Algorithm for solving Ax=b","Description":"The package solves the linear system of equations Ax=b using the Preconditioned Conjugate Gradient Algorithm, where A is a real symmetric positive definite matrix. A suitable preconditioner matrix may be provided by the user. This can also be used to minimize the quadratic function (x'Ax)/2-bx for unknown x.","Published":"2014-04-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PCGSE","Version":"0.4","Title":"Principal Component Gene Set Enrichment","Description":"Contains logic for computing the statistical association of variable groups, i.e., gene sets, with respect to the principal components of genomic data.","Published":"2017-03-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pch","Version":"1.3","Title":"Piecewise Constant Hazards Models for Censored and Truncated\nData","Description":"Using piecewise constant hazards models is a very flexible approach\n for the analysis of survival data. 
The time line is divided into sub-intervals;\n for each interval, a different hazard is estimated using Poisson regression.","Published":"2016-11-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"PCICt","Version":"0.5-4","Title":"Implementation of POSIXct work-alike for 365 and 360 day\ncalendars","Description":"This package implements a work-alike to R's POSIXct class\n which implements 360- and 365-day calendars in addition to the\n Gregorian calendar.","Published":"2013-06-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"pcIRT","Version":"0.2.2","Title":"IRT Models for Polytomous and Continuous Item Responses","Description":"Estimates the multidimensional polytomous Rasch model\n (Rasch, 1961) and the Continuous Rating Scale model (Mueller, 1987).","Published":"2016-04-30","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"PCIT","Version":"1.5-3","Title":"Partial Correlation Coefficient with Information Theory","Description":"Apply Partial Correlation coefficient with Information\n Theory (PCIT) to a correlation matrix.\n The PCIT algorithm identifies meaningful correlations to define\n edges in a weighted network. The algorithm can be applied to\n any correlation-based network including but not limited to gene\n co-expression networks.\n To reduce compute time by making use of multiple compute cores,\n simply run PCIT on a computer which has multiple cores and also\n has the Rmpi package installed. PCIT will then auto-detect the\n multicore environment and run in parallel mode without the need\n to rewrite your scripts. 
This makes scripts, using PCIT, portable\n across single core (or no Rmpi package installed) computers\n which will run in serial mode and multicore (with Rmpi package\n installed) computers which will run in parallel mode.","Published":"2015-02-16","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"pcnetmeta","Version":"2.4","Title":"Patient-Centered Network Meta-Analysis","Description":"Provides functions to perform arm-based network meta-analysis for datasets with binary, continuous, and count outcomes.","Published":"2016-04-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pco","Version":"1.0.1","Title":"Panel Cointegration Tests","Description":"Computation of the Pedroni (1999) panel cointegration test statistics. Reported are the empirical and the standardized values. ","Published":"2015-07-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"PCovR","Version":"2.7","Title":"Principal Covariates Regression","Description":"Analyzing regression data with many and/or highly collinear predictor variables, by simultaneously reducing the predictor variables to a limited number of components and regressing the criterion variables on these components. 
Several rotation options are provided in this package, as well as model selection options.","Published":"2017-06-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PCPS","Version":"1.0.3","Title":"Principal Coordinates of Phylogenetic Structure","Description":"Set of functions for analysis of Principal Coordinates of\n Phylogenetic Structure (PCPS).","Published":"2016-04-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"pcrcoal","Version":"1.2.0","Title":"Implementing the Coalescent Approach to PCR Simulation Developed\nby Weiss and Von Haeseler (NAR, 1997)","Description":"Implementing the Coalescent Approach to PCR Simulation.","Published":"2016-09-09","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"pcrsim","Version":"1.0.2","Title":"Simulation of the Forensic DNA Process","Description":"Simulate the forensic DNA process: generate random or fixed DNA\n profiles, create forensic samples including mixtures of diploid and haploid\n cells, simulate DNA extraction, normalization, degradation, amplification\n including stutters and inter-locus balance, and capillary electrophoresis.\n DNA profiles are visualized as electropherograms and saved in tables.\n The command pcrsim() opens up a graphical user interface which allows the user\n to create projects, to enter, load, and save parameters required for the simulation.\n The simulation is transparent and the parameters used in each step of the simulation\n can be viewed in the result tables.","Published":"2017-03-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"PCS","Version":"1.2","Title":"Calculate the probability of correct selection (PCS)","Description":"Given k populations (can be in thousands), what is the probability that a given subset of size t contains the true top t populations? 
This package finds this probability and offers three tuning parameters (G, d, L) to relax the definition.","Published":"2013-08-16","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"pcse","Version":"1.9","Title":"Panel-Corrected Standard Error Estimation in R","Description":"This package contains a function to estimate\n panel-corrected standard errors. Data may contain balanced or\n unbalanced panels.","Published":"2013-11-07","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"PCSinR","Version":"0.1.0","Title":"Parallel Constraint Satisfaction Networks in R","Description":"Parallel Constraint Satisfaction (PCS) models are an increasingly\n common class of models in Psychology, with applications to reading and word\n recognition (McClelland & Rumelhart, 1981), judgment and decision making\n (Glöckner & Betsch, 2008; Glöckner, Hilbig, & Jekel, 2014), and several\n other fields (e.g. Read, Vanman, & Miller, 1997). In each of these fields,\n they provide a quantitative model of psychological phenomena, with precise\n predictions regarding choice probabilities, decision times, and often the degree\n of confidence. This package provides the necessary functions to create and\n simulate basic Parallel Constraint Satisfaction networks within R.","Published":"2016-10-19","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"pdc","Version":"1.0.3","Title":"Permutation Distribution Clustering","Description":"Permutation Distribution Clustering is a clustering method for time series. Dissimilarity of time series is formalized as the divergence between their permutation distributions. 
The permutation distribution was proposed as a measure of the complexity of a time series.","Published":"2015-09-28","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"pdfCluster","Version":"1.0-2","Title":"Cluster analysis via nonparametric density estimation","Description":"The package performs cluster analysis via nonparametric density \n estimation. Operationally, the kernel method is used throughout to estimate\n the density. Diagnostic methods for evaluating the quality of the clustering \n are available. The package also includes a routine to estimate the \n probability density function obtained by the kernel method, given a set of\n data with arbitrary dimensions.","Published":"2014-04-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"pdfetch","Version":"0.2.1","Title":"Fetch Economic and Financial Time Series Data from Public\nSources","Description":"Download economic and financial time series from public sources, \n including the St Louis Fed's FRED system, Yahoo Finance, the US Bureau of Labor Statistics, \n the US Energy Information Administration, the World Bank, Eurostat, the European Central Bank,\n the Bank of England, the UK's Office of National Statistics, Deutsche Bundesbank, and INSEE.","Published":"2017-04-19","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"pdfsearch","Version":"0.1.1","Title":"Search Tools for PDF Files","Description":"Includes functions for keyword search of pdf files. 
There is\n also a wrapper that includes searching of all files within a single\n directory.","Published":"2016-12-16","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"pdftables","Version":"0.1","Title":"Programmatic Conversion of PDF Tables","Description":"Allows the user to convert PDF tables to formats more amenable to\n analysis ('.csv', '.xml', or '.xlsx') by wrapping the PDFTables API.\n In order to use the package, the user needs to sign up for an API account\n on the PDFTables website ().\n The package works by taking a PDF file as input, uploading it to PDFTables,\n and returning a file with the extracted data.","Published":"2016-02-15","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"pdftools","Version":"1.3","Title":"Text Extraction, Rendering and Converting of PDF Documents","Description":"Utilities based on 'libpoppler' for extracting text, fonts, attachments and \n metadata from a PDF file. Also supports high quality rendering of PDF documents into\n PNG, JPEG, TIFF format, or into raw bitmap vectors for further processing in R.","Published":"2017-06-12","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"pdist","Version":"1.2","Title":"Partitioned Distance Function","Description":"Computes the Euclidean distance between rows of a matrix X\n and rows of another matrix Y. Previously, this could be done\n by binding the two matrices together and calling 'dist', but\n this creates unnecessary computation by computing the distances\n between a row of X and another row of X, and likewise for Y.\n pdist strictly computes distances across the two matrices, not\n within the same matrix, making computations significantly\n faster for certain use cases.","Published":"2013-02-03","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"PDM","Version":"0.1","Title":"Photogrammetric Distances Measurer","Description":"Measures real distances in pictures. 
With the PDM() function, you can choose a '*.jpg' file, specify the scale measure in mm, the starting and finishing points on the graphical scale, the name of the measure, and the starting and finishing points of the measures. Afterwards, the user is asked for a new measure.","Published":"2016-08-21","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"pdmod","Version":"1.0","Title":"Proximal/distal modeling framework for Pavlovian conditioning\nphenomena","Description":"This package provides a model of Pavlovian conditioning phenomena, such as response extinction and spontaneous recovery, and partial reinforcement extinction effects. Competing proximal and distal reward predictions, computed using fast and slow learning rates, combine according to their uncertainties and the recency of information. The resulting mean prediction drives the response rate.","Published":"2014-03-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pdolsms","Version":"0.2","Title":"Panel Dynamic OLS Estimation of Cointegrating Vectors","Description":"Estimates panel data cointegrating relationships following the estimator of Mark and Sul (2003), .","Published":"2016-01-26","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"pdp","Version":"0.5.2","Title":"Partial Dependence Plots","Description":"A general framework for constructing partial dependence (i.e., \n marginal effect) plots from various types of machine learning models in R.","Published":"2017-03-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PdPDB","Version":"1.0","Title":"Pattern Discovery in PDB Structures of Metalloproteins","Description":"Looks for amino acid and/or nucleotide patterns coordinated to a given prosthetic centre. It also accounts for small molecule ligands. 
Files have to be in the local file system and contain the '.pdb' extension.","Published":"2016-11-18","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"PDQutils","Version":"0.1.6","Title":"PDQ Functions via Gram Charlier, Edgeworth, and Cornish Fisher\nApproximations","Description":"A collection of tools for approximating the 'PDQ' functions\n (respectively, the cumulative distribution, density, and quantile) of\n probability distributions via classical expansions involving moments and\n cumulants.","Published":"2017-03-18","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"pdR","Version":"1.5","Title":"Threshold Model and Unit Root Tests in Panel Data","Description":"Threshold model, panel version of Hylleberg et al.(1990) seasonal unit root tests, and panel unit root test of Chang(2002).","Published":"2017-02-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PDSCE","Version":"1.2","Title":"Positive definite sparse covariance estimators","Description":"A package to compute and tune some positive definite and\n sparse covariance estimators","Published":"2013-06-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"pdSpecEst","Version":"1.0.0","Title":"Positive-Definite Wavelet-Based Multivariate Spectral Analysis","Description":"Implementation of wavelet-based multivariate spectral density estimation and clustering methods in the Riemannian manifold of Hermitian and positive-definite matrices.","Published":"2017-01-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Peacock.test","Version":"1.0","Title":"Two and Three Dimensional Kolmogorov-Smirnov Two-Sample Tests","Description":"The original definition of the two and three dimensional Kolmogorov-Smirnov two-sample\n test statistics given by Peacock (1983) is implemented. Two R-functions: peacock2 and peacock3, \n are provided to compute the test statistics in two and three dimensional spaces, respectively. 
\n Note the Peacock test is different from the Fasano and Franceschini test (1987). The latter is \n a variant of the Peacock test.","Published":"2016-07-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"peacots","Version":"1.3","Title":"Periodogram Peaks in Correlated Time Series","Description":"Calculates the periodogram of a time series, maximum-likelihood fits an Ornstein-Uhlenbeck state space (OUSS) null model and evaluates the statistical significance of periodogram peaks against the OUSS null hypothesis. The OUSS is a parsimonious model for stochastically fluctuating variables with linear stabilizing forces, subject to uncorrelated measurement errors. Contrary to the classical white noise null model for detecting cyclicity, the OUSS model can account for temporal correlations typically occurring in ecological and geological time series.","Published":"2016-11-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"PeakError","Version":"2017.06.19","Title":"Compute the Annotation Error of Peak Calls","Description":"Chromatin immunoprecipitation DNA sequencing results in genomic\n tracks that show enriched regions or peaks where proteins are bound.\n This package implements fast C code that computes the true and false\n positives with respect to a database of annotated regions.","Published":"2017-06-19","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"peakPick","Version":"0.11","Title":"Peak Picking Methods Inspired by Biological Data","Description":"Biologically inspired methods for\n detecting peaks in one-dimensional data, such as time series or genomics data.\n The algorithms were originally designed by Weber, Ramachandran, and Henikoff,\n see documentation.","Published":"2015-12-04","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"peakRAM","Version":"1.0.2","Title":"Monitor the Total and Peak RAM Used by an Expression or Function","Description":"When working with big data sets, RAM conservation is 
critically\n important. However, it is not always enough to just monitor the\n size of the objects created. So-called \"copy-on-modify\" behavior,\n characteristic of R, means that some expressions or functions may\n require an unexpectedly large amount of RAM overhead. For example,\n replacing a single value in a matrix duplicates that matrix in the\n back-end, making this task require twice as much RAM as that used\n by the matrix itself. This package makes it easy to monitor the total\n and peak RAM used so that developers can quickly identify and\n eliminate RAM-hungry code.","Published":"2017-01-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Peaks","Version":"0.2","Title":"Peaks","Description":"Spectrum manipulation: background estimation, Markov\n smoothing, deconvolution and peaks search functions. Ported\n from ROOT/TSpectrum class.","Published":"2012-10-29","License":"LGPL","snapshot_date":"2017-06-23"} {"Package":"PeakSegDP","Version":"2017.06.20","Title":"Dynamic Programming Algorithm for Peak Detection in ChIP-Seq\nData","Description":"A quadratic time dynamic programming algorithm\n can be used to compute an approximate solution to the problem of\n finding the most likely changepoints\n with respect to the Poisson likelihood, subject\n to a constraint on the number of segments, where the changes must\n alternate: up, down, up, down, etc. 
For more info read\n \n \"PeakSeg: constrained optimal segmentation and supervised penalty learning\n for peak detection in count data\" by TD Hocking et al,\n proceedings of ICML2015.","Published":"2017-06-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"PeakSegOptimal","Version":"2017.06.20","Title":"Optimal Segmentation Subject to Up-Down Constraints","Description":"Computes optimal changepoint models using the\n Poisson likelihood for non-negative count data,\n subject to the PeakSeg constraint:\n the first change must be up, second change down, third change up, etc.\n For more info about the models and algorithms,\n read \"A log-linear time algorithm for constrained changepoint detection\"\n by TD Hocking et al.","Published":"2017-06-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"pear","Version":"1.2","Title":"Package for Periodic Autoregression Analysis","Description":"Package for estimating periodic autoregressive models.\n Datasets: monthly ozone and Fraser riverflow. Plots: periodic\n versions of boxplot, auto/partial correlations, moving-average\n expansion.","Published":"2011-05-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pearson7","Version":"1.0-2","Title":"Maximum Likelihood Inference for the Pearson VII Distribution\nwith Shape Parameter 3/2","Description":"Supports maximum likelihood inference for the Pearson VII\n distribution with shape parameter 3/2 and free location and scale\n parameters. 
This distribution is relevant when estimating the velocity of\n processive motor proteins with random detachment.","Published":"2016-06-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PearsonDS","Version":"1.0","Title":"Pearson Distribution System","Description":"Implementation of the Pearson distribution system, including full\n support for the (d,p,q,r)-family of functions for probability distributions \n and fitting via method of moments and maximum likelihood method.","Published":"2017-03-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PearsonICA","Version":"1.2-4","Title":"Independent component analysis using score functions from the\nPearson system","Description":"The Pearson-ICA algorithm is a mutual information-based\n method for blind separation of statistically independent source\n signals. It has been shown that the minimization of mutual\n information leads to iterative use of score functions, i.e.\n derivatives of log densities. The Pearson system allows\n adaptive modeling of score functions. The flexibility of the\n Pearson system makes it possible to model a wide range of\n source distributions including asymmetric distributions. 
The\n algorithm is designed especially for problems with asymmetric\n sources but it works for symmetric sources as well.","Published":"2009-06-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"pec","Version":"2.5.3","Title":"Prediction Error Curves for Risk Prediction Models in Survival\nAnalysis","Description":"Validation of risk predictions obtained from survival models and\n competing risk models based on censored data using inverse weighting and\n cross-validation.","Published":"2017-03-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pedantics","Version":"1.5","Title":"Functions to facilitate power and sensitivity analyses for\ngenetic studies of natural populations","Description":"Contains functions for sensitivity and power analysis, for calculating statistics describing pedigrees from wild populations, and for viewing pedigrees","Published":"2014-01-23","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"PedCNV","Version":"0.1","Title":"An implementation for association analysis with CNV data","Description":"An implementation for association analysis with CNV data in R. It\n provides two methods for association study: first, the observed probe\n intensity measurement can be directly used to detect the association of CNV\n with phenotypes of interest. Second, the most probable copy number is\n estimated with the proposed likelihood and the association of the most\n probable copy number with phenotype is tested. 
This method can be applied\n to both independent and correlated populations.","Published":"2014-01-02","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"pedgene","Version":"2.9","Title":"Gene-Level Statistics for Pedigree Data","Description":"Gene-level association tests with disease status for pedigree data: kernel and burden association statistics.","Published":"2015-10-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pedigree","Version":"1.4","Title":"Pedigree functions","Description":"Pedigree related functions","Published":"2013-11-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pedigreemm","Version":"0.3-3","Title":"Pedigree-based mixed-effects models","Description":"Fit pedigree-based mixed-effects models.","Published":"2014-06-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pedometrics","Version":"0.6-6","Title":"Pedometric Tools and Techniques","Description":"Functions to employ many of the tools and techniques used in the \n field of pedometrics.","Published":"2015-12-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PeerPerformance","Version":"2.1.2","Title":"Luck-Corrected Peer Performance Analysis in R","Description":"Provides functions to perform the peer performance\n analysis of funds' returns as described in Ardia and Boudt (2016) .","Published":"2017-02-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pegas","Version":"0.10","Title":"Population and Evolutionary Genetics Analysis System","Description":"Functions for reading, writing, plotting, analysing, and manipulating allelic and haplotypic data, and for the analysis of population nucleotide sequences and micro-satellites including coalescence analyses.","Published":"2017-05-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PEGroupTesting","Version":"1.0","Title":"Population Proportion Estimation using Group Testing","Description":"The population 
proportion using group testing can be estimated by different methods. Four functions including p.mle(), p.gart(), p.burrow() and p.order() are provided to implement four estimating methods including the maximum likelihood estimate, Gart's estimate, Burrow's estimate, and order statistic estimate. ","Published":"2016-09-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"PEIP","Version":"2.0-1","Title":"Geophysical Inverse Theory and Optimization","Description":"Several functions introduced in Aster et al.'s book on inverse theory. The functions are often translations of MATLAB code developed by the authors to illustrate concepts of inverse theory as applied to geophysics. Generalized inversion, tomographic inversion algorithms (conjugate gradients, 'ART' and 'SIRT'), non-linear least squares, first and second order Tikhonov regularization, roughness constraints, and procedures for estimating smoothing parameters are included. Includes a wrapper for the FORTRAN based 'LSQR' (Paige and Saunders) routine.","Published":"2015-06-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PEMM","Version":"1.0","Title":"A Penalized EM algorithm incorporating missing-data mechanism","Description":"This package provides functions to perform multivariate Gaussian parameter estimation based on data with abundance-dependent missingness. It implements a penalized Expectation-Maximization (EM) algorithm. 
The package is tailored for but not limited to proteomics data applications, in which a large proportion of the data are often missing-not-at-random with lower values (or absolute values) more likely to be missing.","Published":"2014-01-25","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"pems.utils","Version":"0.2.17.8","Title":"Portable Emissions (and Other Mobile) Measurement System\nUtilities","Description":"Utility functions for the handling, analysis and visualisation \n of data from portable emissions measurement systems ('PEMS') and other \n similar mobile activity monitoring devices. The package includes a dedicated \n 'pems' data class that manages many of the quality control, unit handling \n and data archiving issues that can hinder efforts to standardise 'PEMS' \n research.","Published":"2016-03-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"penalized","Version":"0.9-50","Title":"L1 (Lasso and Fused Lasso) and L2 (Ridge) Penalized Estimation\nin GLMs and in the Cox Model","Description":"Fitting possibly high dimensional penalized\n regression models. The penalty structure can be any combination\n of an L1 penalty (lasso and fused lasso), an L2 penalty (ridge) and a\n positivity constraint on the regression coefficients. 
The\n supported regression models are linear, logistic and Poisson\n regression and the Cox Proportional Hazards model.\n Cross-validation routines allow optimization of the tuning\n parameters.","Published":"2017-02-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"penalizedLDA","Version":"1.1","Title":"Penalized Classification using Fisher's Linear Discriminant","Description":"Implements the penalized LDA proposal of \"Witten and Tibshirani (2011), Penalized classification using Fisher's linear discriminant, to appear in Journal of the Royal Statistical Society, Series B\".","Published":"2015-07-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"penalizedSVM","Version":"1.1","Title":"Feature Selection SVM using penalty functions","Description":"This package provides feature selection SVM using penalty\n functions. The smoothly clipped absolute deviation (SCAD),\n 'L1-norm', 'Elastic Net' ('L1-norm' and 'L2-norm') and 'Elastic\n SCAD' (SCAD and 'L2-norm') penalties are available. 
The tuning\n parameters can be found using either a fixed grid or an interval\n search.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"penaltyLearning","Version":"2017.06.14","Title":"Penalty Learning","Description":"Implementations of algorithms from \n Learning Sparse Penalties for Change-point Detection\n using Max Margin Interval Regression, by\n Hocking, Rigaill, Vert, Bach\n \n published in proceedings of ICML2013.","Published":"2017-06-19","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"pencopula","Version":"0.3.5","Title":"Flexible Copula Density Estimation with Penalized Hierarchical\nB-Splines","Description":"Flexible copula density estimation with penalized hierarchical B-Splines.","Published":"2014-02-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pencopulaCond","Version":"0.2","Title":"Estimating Non-Simplified Vine Copulas Using Penalized Splines","Description":"Estimating Non-Simplified Vine Copulas Using Penalized Splines.","Published":"2017-05-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PenCoxFrail","Version":"1.0.1","Title":"Regularization in Cox Frailty Models","Description":"A regularization approach for Cox Frailty Models by penalization methods is provided.","Published":"2016-05-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pendensity","Version":"0.2.10","Title":"Density Estimation with a Penalized Mixture Approach","Description":"Estimation of univariate (conditional) densities using penalized B-splines with automatic selection of optimal smoothing parameter.","Published":"2016-03-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"penDvine","Version":"0.2.4","Title":"Flexible Pair-Copula Estimation in D-Vines using Bivariate\nPenalized Splines","Description":"Flexible Pair-Copula Estimation in D-vines using Bivariate Penalized Splines.","Published":"2015-07-09","License":"GPL (>= 
2)","snapshot_date":"2017-06-23"} {"Package":"penMSM","Version":"0.99","Title":"Estimating Regularized Multi-state Models Using L1 Penalties","Description":"Structured fusion Lasso penalized estimation of multi-state models with the penalty applied to absolute effects and absolute effect differences (i.e., effects on transition-type specific hazard rates).","Published":"2015-01-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"penRvine","Version":"0.2","Title":"Flexible R-Vines Estimation Using Bivariate Penalized Splines","Description":"Offers routines for estimating densities and copula distribution of R-vines using penalized splines.","Published":"2017-05-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pensim","Version":"1.2.9","Title":"Simulation of high-dimensional data and parallelized repeated\npenalized regression","Description":"Simulation of continuous, correlated high-dimensional data with time to event or binary response, and parallelized functions for Lasso, Ridge, and Elastic Net penalized regression with repeated starts and two-dimensional tuning of the Elastic Net.","Published":"2014-03-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"peperr","Version":"1.1-7","Title":"Parallelised Estimation of Prediction Error","Description":"Package peperr is designed for prediction error estimation\n through resampling techniques, possibly accelerated by parallel\n execution on a compute cluster. 
Newly developed model fitting\n routines can be easily incorporated.","Published":"2013-04-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"peplib","Version":"1.5.1","Title":"Peptide Library Analysis Methods","Description":"This package provides a variety of methods for dealing\n with analysis of peptide library data, including clustering,\n motif finding, and QSAR model fitting.","Published":"2013-04-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PepPrep","Version":"1.1.0","Title":"Insilico peptide mutation, digestion and homologous comparison","Description":"Amino acid exchange based on single nucleotide variant (SNV) information and tryptic digestion on peptide sequence. Searching for homologs by comparison of tryptic digested peptide sequences.","Published":"2014-09-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"PepSAVIms","Version":"0.9.1","Title":"PepSAVI-MS Data Analysis","Description":"An implementation of the data processing and data analysis portion\n of a pipeline named the PepSAVI-MS which is currently under development by\n the Hicks laboratory at the University of North Carolina. The statistical\n analysis package presented herein provides a collection of software tools\n used to facilitate the prioritization of putative bioactive peptides from a\n complex biological matrix. Tools are provided to deconvolute mass\n spectrometry features into a single representation for each peptide charge\n state, filter compounds to include only those possibly contributing to the\n observed bioactivity, and prioritize these remaining compounds for those\n most likely contributing to each bioactivity data set.","Published":"2016-12-17","License":"CC BY-NC-SA 4.0","snapshot_date":"2017-06-23"} {"Package":"peptider","Version":"0.2.2","Title":"Evaluation of Diversity in Nucleotide Libraries","Description":"Evaluation of diversity in peptide libraries, including NNN, NNB,\n NNK/S, and 20/20 schemes. 
Custom encoding schemes can also be defined.\n Metrics for evaluation include expected coverage, relative efficiency, and\n the functional diversity of the library. Peptide-level inclusion\n probabilities are computable for both the native and custom encoding\n schemes.","Published":"2015-09-16","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"Peptides","Version":"2.2","Title":"Calculate Indices and Theoretical Physicochemical Properties of\nProtein Sequences","Description":"Includes functions to calculate several physicochemical properties and indices for amino-acid sequences as well as to read and plot 'XVG' output files from the 'GROMACS' molecular dynamics package.","Published":"2017-06-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"pequod","Version":"0.0-5","Title":"Moderated Regression Package","Description":"Moderated regression with mean and residual centering and simple slopes analysis.","Published":"2016-02-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"perARMA","Version":"1.6","Title":"Periodic Time Series Analysis","Description":"Identification, model fitting and estimation for time series with periodic structure.\n Additionally procedures for simulation of periodic processes\n and real data sets are included.","Published":"2016-02-25","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"Perc","Version":"0.1.2","Title":"Using Percolation and Conductance to Find Information Flow\nCertainty in a Direct Network","Description":"To find the certainty of dominance interactions with indirect\n interactions being considered.","Published":"2016-12-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"perccal","Version":"1.0","Title":"Implementing Double Bootstrap Linear Regression Confidence\nIntervals Using the 'perc-cal' Method","Description":"Contains functions which allow the user to compute confidence intervals quickly using the double bootstrap-based percentile calibrated 
('perc-cal') method for linear regression coefficients. 'perccal_interval()' is the primary user-facing function within this package.","Published":"2016-06-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"PerFit","Version":"1.4.1","Title":"Person Fit","Description":"Several person-fit statistics (PFSs) are offered. These statistics allow assessing whether\n individual response patterns to tests or questionnaires are (im)plausible given \n the other respondents in the sample or given a specified item response theory model. Some PFSs apply to \n dichotomous data, such as the likelihood-based PFSs (lz, lz*) and the group-based PFSs \n (personal biserial correlation, caution index, (normed) number of Guttman errors, \n agreement/disagreement/dependability statistics, U3, ZU3, NCI, Ht). PFSs suitable to polytomous data include\n extensions of lz, U3, and (normed) number of Guttman errors.","Published":"2016-10-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PerfMeas","Version":"1.2.1","Title":"PerfMeas: Performance Measures for ranking and classification\ntasks","Description":"Package that implements different performance measures for classification and ranking tasks. AUC, precision at a given recall, F-score for single and multiple classes are available.","Published":"2014-09-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PerformanceAnalytics","Version":"1.4.3541","Title":"Econometric tools for performance and risk analysis","Description":"Collection of econometric functions for\n performance and risk analysis. This package aims to aid\n practitioners and researchers in utilizing the latest\n research in analysis of non-normal return streams. 
In\n general, it is most tested on return (rather than\n price) data on a regular scale, but most functions will\n work with irregular return data as well, and increasing\n numbers of functions will work with P&L or price data\n where possible.","Published":"2014-09-16","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"performanceEstimation","Version":"1.1.0","Title":"An Infra-Structure for Performance Estimation of Predictive\nModels","Description":"An infra-structure for estimating the predictive performance of\n predictive models. In this context, it can also be used to compare and/or select\n among different alternative ways of solving one or more predictive tasks. The\n main goal of the package is to provide a generic infra-structure to estimate\n the values of different metrics of predictive performance using different\n estimation procedures. These estimation tasks can be applied to any solutions\n (workflows) to the predictive tasks. The package provides easy to use standard\n workflows that allow the usage of any available R modeling algorithm together\n with some pre-defined data pre-processing steps and also prediction post-\n processing methods. 
It also provides means for addressing issues related to\n the statistical significance of the observed differences.","Published":"2016-10-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pergola","Version":"1.0","Title":"Toolbox for Polyploid Genetic Data","Description":"Provides tools for linkage mapping in polyploids.\n It implements the method PERGOLA, which is a fast, deterministic method to\n calculate the order of markers in a linkage group.","Published":"2016-04-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"PeriodicTable","Version":"0.1.1","Title":"Periodic Table of the Elements","Description":"Provides a dataset containing properties for chemical elements.\n Helper functions are also provided to access some atomic properties.","Published":"2017-01-24","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"perm","Version":"1.0-0.0","Title":"Exact or Asymptotic permutation tests","Description":"Perform exact or asymptotic permutation tests.","Published":"2010-07-29","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"PermAlgo","Version":"1.1","Title":"Permutational Algorithm to Simulate Survival Data","Description":"This version of the permutational algorithm generates a\n dataset in which event and censoring times are conditional on\n a user-specified list of covariates, some or all of which are\n time-dependent.","Published":"2015-04-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"PerMallows","Version":"1.13","Title":"Permutations and Mallows Distributions","Description":"Includes functions to work with the Mallows and Generalized Mallows\n Models. The considered distances are Kendall's-tau, Cayley, Hamming and Ulam\n and it includes functions for making inference, sampling and learning such\n distributions, some of which are novel in the literature. 
As a by-product,\n PerMallows also includes operations for permutations, paying special attention\n to those related to the Kendall's-tau, Cayley, Ulam and Hamming distances. It\n is also possible to generate random permutations at a given distance, or with\n a given number of inversions, or cycles, or fixed points, or even with a given\n length of the LIS (longest increasing subsequence).","Published":"2017-04-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"permDep","Version":"1.0-0","Title":"Permutation Tests for General Dependent Truncation","Description":"Implementations of a permutation approach to hypothesis testing for quasi-independence of truncation time and failure time. The implemented approaches are powerful against non-monotone alternatives and thereby offer protection against erroneous assumptions of quasi-independence. The proposed tests use either a conditional or an unconditional method to evaluate the permutation p-value. The conditional method was first developed in Tsai (1980) and Efron and Petrosian (1992) . The unconditional method provides a valid approximation to the conditional method, yet is computationally simpler and does not hold the size of each risk set fixed. Users also have an option to carry out the proposed permutation tests in a parallel computing fashion. ","Published":"2017-04-28","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"permGPU","Version":"0.14.9","Title":"Using GPUs in Statistical Genomics","Description":"Can be used to carry out\n permutation resampling inference in the context of RNA\n microarray studies.","Published":"2016-02-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"permGS","Version":"0.2.4","Title":"Permutational Group Sequential Test for Time-to-Event Data","Description":"Permutational group-sequential tests for time-to-event data based on the log-rank test statistic. 
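The Kendall's-tau distance on permutations named above is simply the number of pairwise inversions, i.e. pairs ordered one way in one permutation and the other way in the other. It can be computed directly; a minimal sketch, independent of the 'PerMallows' API:

```python
from itertools import combinations

def kendall_tau_distance(p, q):
    """Count pairwise disagreements (inversions) between two
    permutations of the same items: the Kendall's-tau distance
    used by Mallows models."""
    pos_q = {item: i for i, item in enumerate(q)}
    # A pair (a, b) counts when a precedes b in p but follows b in q.
    return sum(1 for (a, b) in combinations(p, 2) if pos_q[a] > pos_q[b])

kendall_tau_distance([1, 2, 3, 4], [1, 2, 3, 4])  # identity: 0
kendall_tau_distance([1, 2, 3, 4], [4, 3, 2, 1])  # reversal: n*(n-1)/2 = 6
```

This O(n^2) pairwise count is fine for illustration; the distance can also be computed in O(n log n) with a merge-sort style inversion count.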
Supports an exact permutation test when the censoring distributions are equal in the treatment and control groups, and approximate imputation-permutation methods when the censoring distributions are different. ","Published":"2017-06-22","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"permPATH","Version":"1.1","Title":"Permutation Based Gene Expression Pathway Analysis","Description":"Can be used to carry out permutation based gene expression pathway analysis. This work was supported by a National Institute of Allergy and Infectious Disease/National Institutes of Health contract (No. HHSN272200900059C). ","Published":"2017-06-09","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"permubiome","Version":"1.1","Title":"A Permutation Based Test for Biomarker Discovery in Microbiome\nData","Description":"All the functions compiled in this package were created to perform permutation-based non-parametric analysis on microbiome data for biomarker discovery. The test executes thousands of pairwise comparisons after randomly shuffling the data into the different study groups.","Published":"2016-03-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"permutations","Version":"1.0-2","Title":"Permutations of a Finite Set","Description":"Manipulates invertible functions from a finite set to itself. Can transform from word form to cycle form and back.","Published":"2017-01-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"permute","Version":"0.9-4","Title":"Functions for Generating Restricted Permutations of Data","Description":"A set of restricted permutation designs for freely exchangeable, line transects (time series), and spatial grid designs plus permutation of blocks (groups of samples) is provided. 'permute' also allows split-plot designs, in which the whole-plots or split-plots or both can be freely-exchangeable or one of the restricted designs. 
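The shuffle-and-compare idea shared by the permutation-test packages above ('permDep', 'permGS', 'permPATH', 'permubiome') follows one generic recipe: compute an observed statistic, reshuffle group labels many times, and report how often the shuffled statistic is at least as extreme. An illustrative sketch of that recipe, not any package's implementation:

```python
import random

def permutation_test(group_a, group_b, n_perm=10000, seed=1):
    """Two-sample permutation test on the absolute difference of means.

    Labels are shuffled n_perm times to build the null distribution;
    the returned p-value uses add-one smoothing so it is never zero.
    """
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    n_a, hits = len(group_a), 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random relabeling of the pooled sample
        a, b = pooled[:n_a], pooled[n_a:]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)

p = permutation_test([1.1, 1.3, 1.2, 1.4], [2.0, 2.2, 2.1, 1.9], n_perm=2000)
```

Restricted designs, like those 'permute' generates, change only which relabelings are admissible; the p-value computation is the same.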
The 'permute' package is modelled after the permutation schemes of 'Canoco 3.1' (and later) by Cajo ter Braak.","Published":"2016-09-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"perry","Version":"0.2.0","Title":"Resampling-based prediction error estimation for regression\nmodels","Description":"Tools that allow developers to write functions for\n prediction error estimation with minimal programming effort and\n assist users with model selection in regression problems.","Published":"2013-03-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"persiandictionary","Version":"1.0","Title":"English to Persian dictionary","Description":"Translate words from English to Persian (Over 67,000\n words)","Published":"2013-04-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"PersomicsArray","Version":"1.0","Title":"Automated Persomics Array Image Extraction","Description":"Automated identification of printed array positions from high content \n microscopy images and the export of those positions as individual images \n written to output as multi-layered tiff files.","Published":"2016-09-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"personalized","Version":"0.0.2","Title":"Estimation and Validation Methods for Subgroup Identification\nand Personalized Medicine","Description":"Provides functions for fitting and validation of subgroup\n identification and personalized medicine models under the general subgroup\n identification framework of Chen et al. 
(2017) .\n This package is intended for use in both randomized controlled trials and\n observational studies.","Published":"2017-06-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"personograph","Version":"0.1.3","Title":"Pictographic Representation of Treatment Effects","Description":"Visualizes treatment effects using person icons, similar to Cates (NNT) charts.","Published":"2015-11-07","License":"LGPL (>= 2.0, < 3) | Mozilla Public License","snapshot_date":"2017-06-23"} {"Package":"perspectev","Version":"1.1","Title":"Permutation of Species During Turnover Events","Description":"Provides a robust framework for analyzing the extent to which differential survival with respect to higher level trait variation is reducible to lower level variation. In addition to its primary test, it also provides functions for simulation-based power analysis, reading in common data set formats, and visualizing results. Temporarily contains an edited version of function hr.mcp() from package 'wild1', written by Glen Sargeant. For tutorial see: http://evolve.zoo.ox.ac.uk/Evolve/Perspectev.html.","Published":"2015-08-09","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"perturb","Version":"2.05","Title":"Tools for evaluating collinearity","Description":"\"perturb\" evaluates collinearity by adding random noise to\n selected variables. 
\"colldiag\" calculates condition numbers and\n variance decomposition proportions to test for collinearity and\n uncover its sources.","Published":"2012-02-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pesticides","Version":"0.1","Title":"Analysis of single serving and composite pesticide residue\nmeasurements","Description":"Single item and composite pesticide residue measurements\n of fifteen commodity-pesticide combinations plus analytical\n tools.","Published":"2012-10-29","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"PET","Version":"0.4.9","Title":"Simulation and Reconstruction of PET Images","Description":"This package implements different analytic/direct and\n iterative reconstruction methods of Peter Toft. It also offers\n the possibility to simulate PET data.","Published":"2010-08-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pETM","Version":"0.1.5","Title":"Penalized Exponential Tilt Model","Description":"In the analysis of high-dimensional DNA methylation data, a penalized exponential tilt model can identify differentially methylated loci between cases and controls, using network based regularization. It is able to detect any differences in means only, in variances only, or in both means and variances. ","Published":"2016-05-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"petrinetR","Version":"0.1.0","Title":"Building, Visualizing, Exporting and Replaying Petri Nets","Description":"Functions for the construction of Petri Nets. Petri Nets can be replayed by firing enabled transitions.\n Silent transitions will be hidden by the execution handler. 
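Replaying a Petri net by "firing enabled transitions", as described for 'petrinetR', is a token game: a transition is enabled when each of its input places holds enough tokens, and firing it consumes those tokens and produces tokens on the output places. A toy sketch of that marking game, not the 'petrinetR' API:

```python
from collections import Counter

def enabled(marking, transition):
    """A transition is enabled when every input place holds enough tokens."""
    return all(marking[p] >= n for p, n in transition["in"].items())

def fire(marking, transition):
    """Fire an enabled transition: consume input tokens, produce outputs.
    Returns a new marking, leaving the old one untouched."""
    if not enabled(marking, transition):
        raise ValueError("transition not enabled")
    m = Counter(marking)
    for p, n in transition["in"].items():
        m[p] -= n
    for p, n in transition["out"].items():
        m[p] += n
    return m

start = Counter({"p1": 1})
t1 = {"in": {"p1": 1}, "out": {"p2": 1, "p3": 1}}  # a parallel split
after = fire(start, t1)  # p1 consumed; p2 and p3 each hold one token
```

A replay of an event log then reduces to firing the transition for each event in turn and checking `enabled` at every step.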
Also includes functionalities for the visualization of Petri Nets and\n export of Petri Nets to PNML (Petri Net Markup Language) files.","Published":"2016-08-18","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"pewdata","Version":"0.2.0","Title":"Reproducible Retrieval of Pew Research Center Datasets","Description":"Reproducible, programmatic retrieval of survey datasets from the\n Pew Research Center.","Published":"2016-09-24","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"pez","Version":"1.1-1","Title":"Phylogenetics for the Environmental Sciences","Description":"Eco-phylogenetic and community phylogenetic analyses.\n Keeps community ecological and phylogenetic data matched up and\n comparable using 'comparative.comm' objects. Wrappers for common\n community phylogenetic indices ('pez.shape', 'pez.evenness',\n 'pez.dispersion', and 'pez.dissimilarity' metrics). Implementation\n of Cavender-Bares (2004) correlation of phylogenetic and\n ecological matrices ('fingerprint.regression'). Phylogenetic\n Generalised Linear Mixed Models (PGLMMs; 'pglmm') following Ives &\n Helmus (2011) and Rafferty & Ives (2013). Simulation of null\n assemblages, traits, and phylogenies ('scape', 'sim.meta.comm').","Published":"2016-01-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"pfa","Version":"1.1","Title":"Estimates False Discovery Proportion Under Arbitrary Covariance\nDependence","Description":"Estimate the false discovery proportion (FDP) by the Principal Factor Approximation method with general known and unknown covariance dependence.","Published":"2016-07-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"pgam","Version":"0.4.12","Title":"Poisson-Gamma Additive Models","Description":"This work is an extension of the state space model for\n Poisson count data, the Poisson-Gamma model, towards a\n semiparametric specification. 
Just like the generalized\n additive models (GAM), cubic splines are used for covariate\n smoothing. The semiparametric models are fitted by an iterative\n process that combines maximization of likelihood and\n backfitting algorithm.","Published":"2012-01-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PGEE","Version":"1.5","Title":"Penalized Generalized Estimating Equations in High-Dimension","Description":"Fits penalized generalized estimating equations to longitudinal data with high-dimensional covariates.","Published":"2017-04-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pgee.mixed","Version":"0.1.0","Title":"Penalized Generalized Estimating Equations for Bivariate Mixed\nOutcomes","Description":"Perform simultaneous estimation and variable selection for correlated bivariate\n mixed outcomes (one continuous outcome and one binary outcome per cluster) using\n penalized generalized estimating equations. In addition, clustered Gaussian and binary\n outcomes can also be modeled. 
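Penalized estimating-equation methods such as those in 'PGEE' and 'pgee.mixed' apply shrinkage operators to coefficients inside their fitting loops. The simplest such operator is the LASSO proximal (soft-thresholding) operator; this is the textbook formula, not code from either package:

```python
def soft_threshold(z, lam):
    """LASSO proximal (soft-thresholding) operator:
    sign(z) * max(|z| - lam, 0).

    Shrinks a coefficient toward zero by lam and sets it exactly
    to zero when its magnitude is below lam, which is how LASSO-type
    penalties perform variable selection.
    """
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

soft_threshold(3.0, 1.0)  # shrinks toward zero: 2.0
soft_threshold(0.4, 1.0)  # small coefficients become exactly zero: 0.0
```

SCAD and MCP replace this with nonconvex thresholding rules that shrink large coefficients less, reducing the bias that uniform LASSO shrinkage introduces.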
The SCAD, MCP, and LASSO penalties are supported.\n Cross-validation can be performed to find the optimal regularization parameter(s).","Published":"2016-12-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"PGICA","Version":"1.0","Title":"Parallel Group ICA Algorithm","Description":"A Group ICA Algorithm that can run in parallel on an SGE platform or multi-core PCs.","Published":"2014-11-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"pgirmess","Version":"1.6.7","Title":"Data Analysis in Ecology","Description":"Miscellaneous functions for data analysis in ecology, with special emphasis on spatial data.","Published":"2017-03-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pglm","Version":"0.1-2","Title":"panel generalized linear model","Description":"Estimation of panel models for glm-like models: this includes binomial models (logit and probit), count models (Poisson and negbin), and ordered models (logit and probit).","Published":"2013-12-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pGLS","Version":"0.0-1","Title":"Generalized Least Square in comparative Phylogenetics","Description":"Based on the Generalized Least Square model for\n comparative Phylogenetics (ref).","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PGM2","Version":"1.0-1","Title":"Nested Resolvable Designs and their Associated Uniform Designs","Description":"Construction method of nested resolvable designs from \n a projective geometry defined on a Galois field of order 2. The obtained\n resolvable designs are used to build uniform designs. The presented results\n are based on A. Boudraa et al. 
(See references).","Published":"2016-12-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"pGMGM","Version":"1.0","Title":"Estimating Multiple Gaussian Graphical Models (GGM) in Penalized\nGaussian Mixture Models (GMM)","Description":"This is an R and C code implementation of the New-SP and New-JGL method of Gao et al. (2016) to perform model-based clustering and multiple graph estimation.","Published":"2016-06-28","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"pgmm","Version":"1.2","Title":"Parsimonious Gaussian Mixture Models","Description":"Carries out model-based clustering or classification using parsimonious Gaussian mixture models.","Published":"2015-04-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pgnorm","Version":"2.0","Title":"The p-Generalized Normal Distribution","Description":"Evaluation of the pdf and the cdf of the univariate,\n noncentral, p-generalized normal distribution. Sampling from\n the univariate, noncentral, p-generalized normal distribution\n using either the p-generalized polar method, the p-generalized\n rejecting polar method, the Monty Python method, the Ziggurat\n method or the method of Nardon and Pianca. The package also\n includes routines for the simulation of the bivariate,\n p-generalized uniform distribution and the simulation of the\n corresponding angular distribution.","Published":"2015-11-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pgraph","Version":"0.8","Title":"Build Dependency Graphs using Projection","Description":"Implements a general framework for creating dependency graphs using projection. Both lasso and sparse additive model projections are implemented. 
Both Pearson correlation and distance covariance are used to generate the graph.","Published":"2016-10-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"pgs","Version":"0.4-0","Title":"Precision of Geometric Sampling","Description":"Computation of mean squared errors of stereological predictors.","Published":"2013-12-11","License":"CeCILL","snapshot_date":"2017-06-23"} {"Package":"ph2bayes","Version":"0.0.1","Title":"Bayesian Single-Arm Phase II Designs","Description":"An implementation of Bayesian single-arm phase II\n design methods for binary outcome based on posterior\n probability and predictive probability.","Published":"2016-01-18","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ph2bye","Version":"0.1.4","Title":"Phase II Clinical Trial Design Using Bayesian Methods","Description":"Calculate the Bayesian posterior/predictive probability and\n determine the sample size and stopping boundaries for single-arm Phase II design.","Published":"2016-08-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ph2mult","Version":"0.1.1","Title":"Phase II Clinical Trial Design for Multinomial Endpoints","Description":"Provide multinomial design methods under intersection-union test (IUT) and union-intersection test (UIT) scheme for Phase II trial. The design types include : Minimax (minimize the maximum sample size), Optimal (minimize the expected sample size), Admissible (minimize the Bayesian risk) and Maxpower (maximize the exact power level).","Published":"2016-11-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"phalen","Version":"1.0","Title":"Phalen Algorithms and Functions","Description":"The phalen package contains (1) clustering and \n partitioning algorithms; (2) penalty functions for numeric \n vectors; (3) a ranking function; and (4) color palettes and \n functions. 
","Published":"2013-09-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"phangorn","Version":"2.2.0","Title":"Phylogenetic Analysis in R","Description":"Phylogenetic analysis in R: Estimation of phylogenetic\n trees and networks using Maximum Likelihood, Maximum Parsimony,\n distance methods and Hadamard conjugation.","Published":"2017-04-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"phantom","Version":"0.1.2","Title":"Gene Set Pareto Heterogeneity Analysis of Time-Course Gene\nExpression Data","Description":"Pareto front based statistical tool for detecting heterogeneity in gene sets and biological modules from time-course data.","Published":"2017-06-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"PharmPow","Version":"1.0","Title":"Pharmacometric Power calculations for mixed study designs","Description":"This package contains functions performing power calculations for mixed (sparse/dense sampled) pharmacokinetic study designs. The input data for these functions is tailored for NONMEM .phi files.","Published":"2014-03-13","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"phase1RMD","Version":"1.0.7","Title":"Repeated Measurement Design for Phase I Clinical Trial","Description":"Implements our Bayesian phase I repeated measurement design that accounts for multidimensional toxicity endpoints from multiple treatment cycles. The package also provides a novel design to account for both multidimensional toxicity endpoints and early-stage efficacy endpoints in the phase I design. For both designs, functions are provided to recommend the next dosage selection based on the data collected in the available patient cohorts and to simulate trial characteristics given design parameters. Yin, Jun, et al. 
(2017) .","Published":"2017-04-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"phaseR","Version":"1.3","Title":"Phase Plane Analysis of One and Two Dimensional Autonomous ODE\nSystems","Description":"phaseR is an R package for the qualitative analysis of one and\n two dimensional autonomous ODE systems, using phase plane methods. Programs\n are available to identify and classify equilibrium points, plot the\n direction field, and plot trajectories for multiple initial conditions. In\n the one dimensional case, a program is also available to plot the phase\n portrait, whilst in the two dimensional case a program is additionally\n available to plot nullclines. Many example systems are provided for the\n user.","Published":"2014-07-16","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"PhaseType","Version":"0.1.3","Title":"Inference for Phase-type Distributions","Description":"Functions to perform Bayesian inference on absorption time\n data for Phase-type distributions. There are plans to expand this to\n include frequentist inference and simulation tools.","Published":"2012-10-16","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"phcfM","Version":"1.2","Title":"Modelling anthropogenic deforestation","Description":"phcfM is an R package for modelling anthropogenic\n deforestation. It was initially developed to obtain REDD+\n baseline scenarios of deforestation for the \"programme\n holistique de conservation des forets a Madagascar\" (after which\n the package is named). Parameter inference is done in a\n hierarchical Bayesian framework. 
Markov chain Monte Carlo\n (MCMC) algorithms are coded in C++ using the Scythe statistical library to\n maximize computational efficiency.","Published":"2013-04-09","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"pheatmap","Version":"1.0.8","Title":"Pretty Heatmaps","Description":"Implementation of heatmaps that offers more control\n over dimensions and appearance.","Published":"2015-12-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"phenability","Version":"2.0","Title":"Nonparametric Stability Analysis","Description":"An alternative to carrying out phenotypic adaptability and stability analyses, taking into account nonparametric statistics. Can be used as a robust approach, less sensitive to departures from common genotypic, environmental, and GxE effects data assumptions (e.g., normal distribution of errors).","Published":"2015-02-24","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"phenex","Version":"1.4-5","Title":"Auxiliary Functions for Phenological Data Analysis","Description":"Provides some easy-to-use functions for \n\tspatial analyses of (plant-) phenological data \n\tsets and satellite observations of vegetation.","Published":"2017-05-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PHENIX","Version":"1.3.1","Title":"Phenotypic Integration Index","Description":"Provides functions to estimate the size-controlled phenotypic integration index, a novel method by Torices & Méndez (2014) to solve problems due to individual size when estimating integration (namely, larger individuals have larger components, which will drive a correlation between components only due to resource availability that might obscure the observed measures of integration). 
In addition, the package also provides the classical estimation by Wagner (1984), bootstrapping and jackknife methods to calculate confidence intervals and a significance test for both integration indices.","Published":"2017-06-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"phenmod","Version":"1.2-3","Title":"Auxiliary functions for phenological data processing, modelling\nand result handling","Description":"Provides functions to preprocess phenological data, for modelling and result handling.","Published":"2013-08-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pheno","Version":"1.6","Title":"Auxiliary functions for phenological data analysis","Description":"Provides some easy-to-use functions for time series\n analyses of (plant-) phenological data sets. These functions\n mainly deal with the estimation of combined phenological time\n series and are usually wrappers for functions that are already\n implemented in other R packages adapted to the special\n structure of phenological data and the needs of phenologists.\n Some date conversion functions to handle Julian dates are also\n provided.","Published":"2012-10-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pheno2geno","Version":"1.3.1","Title":"High-Throughput Generation of Genetic Markers and Maps from\nMolecular Phenotypes for Crosses Between Inbred Strains","Description":"High-throughput generation of genetic markers from molecular phenotypes for crosses between inbred strains. 
These markers can be used to saturate an existing genetic map or to create a new one.","Published":"2015-03-25","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"phenology","Version":"5.4","Title":"Tools to Manage a Parametric Function that Describes Phenology","Description":"Functions used to fit and test the phenology of species based on counts.","Published":"2017-03-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"phenopix","Version":"2.3.1","Title":"Process Digital Images of a Vegetation Cover","Description":"A collection of functions to process digital images, depict greenness index trajectories and extract relevant phenological stages. ","Published":"2017-06-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"PhenotypeSimulator","Version":"0.1.2","Title":"Flexible Phenotype Simulation from Different Genetic and Noise\nModels","Description":"Phenotype simulator allows for the flexible simulation of \n phenotypes under different models, including fixed and background genetic \n effects as well as correlated, fixed and background noise effects. Different \n phenotypic effects can be combined into a final phenotype while controlling \n for the proportion of variance explained by each of the components. For each \n component, the number of variables, their distribution and the design of \n their effect across traits can be customised. The final simulated phenotypes\n and their components can be automatically saved into .rds or .csv files. In\n addition, for simulated genotypes, export into plink format is possible. 
","Published":"2017-06-12","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"PHeval","Version":"0.5.3","Title":"Evaluation of the Proportional Hazards Assumption with a\nStandardized Score Process","Description":"Provides tools for the evaluation of the goodness of fit and the predictive capacity of the proportional hazards model.","Published":"2015-12-16","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"phia","Version":"0.2-1","Title":"Post-Hoc Interaction Analysis","Description":"Analysis of terms in linear, generalized and mixed linear models, \n\ton the basis of multiple comparisons of factor contrasts. Specially suited \n\tfor the analysis of interaction terms.","Published":"2015-11-07","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"philentropy","Version":"0.0.3","Title":"Similarity and Distance Quantification Between Probability\nFunctions","Description":"Computes 46 optimized distance and similarity measures for comparing probability functions. These comparisons between probability functions have their foundations in a broad range of scientific disciplines from mathematics to ecology. 
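Distance and similarity measures between probability vectors of the kind 'philentropy' computes, such as the Kullback-Leibler divergence and its symmetrized Jensen-Shannon variant, follow directly from their textbook formulas; a generic sketch, not the package's code:

```python
from math import log

def kl_divergence(p, q):
    """Kullback-Leibler divergence (in nats) between two discrete
    probability vectors; terms with p_i = 0 contribute nothing."""
    return sum(pi * log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jensen_shannon(p, q):
    """Symmetrized, smoothed variant of KL divergence, bounded by log 2."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

kl_divergence([0.5, 0.5], [0.5, 0.5])  # identical distributions: 0.0
```

Comparing against the mixture `m` is what makes Jensen-Shannon well defined even when the two supports are disjoint, where plain KL divergence diverges.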
The aim of this package is to provide a core framework for clustering, classification, statistical inference, goodness-of-fit, non-parametric statistics, information theory, and machine learning tasks that are based on comparing univariate or multivariate probability functions.","Published":"2017-05-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"phmm","Version":"0.7-5","Title":"Proportional Hazards Mixed-effects Model (PHMM)","Description":"Fits a proportional hazards model incorporating random effects using\n an EM algorithm with Markov chain Monte Carlo at the E-step.","Published":"2013-09-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"phonenumber","Version":"0.2.2","Title":"Convert Letters to Numbers and Back as on a Telephone Keypad","Description":"Convert English letters to numbers or numbers to English letters as \n on a telephone keypad. When converting letters to numbers, a character \n vector is returned with \"A,\" \"B,\" or \"C\" becoming 2, \"D,\" \"E\", or \"F\" \n becoming 3, etc. 
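The keypad mapping described for 'phonenumber' ("A"/"B"/"C" becoming 2, "D"/"E"/"F" becoming 3, and so on) is a fixed lookup table; a generic sketch of the letters-to-numbers direction, not that package's code:

```python
# Standard telephone keypad groups; 7 and 9 carry four letters each.
KEYPAD = {
    2: "ABC", 3: "DEF", 4: "GHI", 5: "JKL",
    6: "MNO", 7: "PQRS", 8: "TUV", 9: "WXYZ",
}
LETTER_TO_DIGIT = {ch: str(d) for d, letters in KEYPAD.items() for ch in letters}

def letters_to_numbers(text):
    """Map letters to their keypad digits; other characters pass through."""
    return "".join(LETTER_TO_DIGIT.get(ch.upper(), ch) for ch in text)

letters_to_numbers("CALLME")  # -> "225563"
```

The reverse direction is one-to-many: a digit maps back to its whole letter group, which is why the package returns a vector of candidates per digit rather than a single string.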
When converting numbers to letters, a character vector is \n returned with multiple elements (i.e., \"2\" becomes a vector of \"A,\" \"B,\" and \n \"C\").","Published":"2015-09-07","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"phonics","Version":"0.7.4","Title":"Phonetic Spelling Algorithms","Description":"Provides a collection of phonetic algorithms including\n Soundex, Metaphone, NYSIIS, Caverphone, and others.","Published":"2016-06-05","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"phonR","Version":"1.0-7","Title":"Tools for Phoneticians and Phonologists","Description":"Tools for phoneticians and phonologists, including functions for normalization and plotting of vowels.","Published":"2016-08-25","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"phonTools","Version":"0.2-2.1","Title":"Tools for Phonetic and Acoustic Analyses","Description":"Contains tools for the organization, display, and analysis of the sorts of data frequently encountered in phonetics research and experimentation, including the easy creation of IPA vowel plots, and the creation and manipulation of WAVE audio files.","Published":"2015-07-31","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"photobiology","Version":"0.9.15","Title":"Photobiological Calculations","Description":"Definitions of classes, methods, operators and functions for use in\n photobiology and radiation meteorology and climatology. Calculation of\n effective (weighted) and not-weighted irradiances/doses, fluence rates,\n transmittance, reflectance, absorptance, absorbance and diverse\n ratios and other derived quantities from spectral data. Local maxima and\n minima. Conversion between energy- and photon-based units. Wavelength\n interpolation. Astronomical calculations related to solar angles and day\n length. 
Colours and vision.","Published":"2017-04-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"photobiologyInOut","Version":"0.4.13","Title":"Read Spectral and Logged Data from Foreign Files","Description":"Functions for reading, and in some cases writing, foreign files \n containing spectral data from spectrometers and their associated software, \n output from daylight simulation models in common use, and some spectral \n data repositories. As well as functions for exchange of spectral data with \n other R packages.","Published":"2017-03-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"photobiologyLamps","Version":"0.4.1","Title":"Spectral Data of Light Emission by Lamps","Description":"Spectral emission data for some frequently used lamps excluding \n LEDs. Original data for incandescent and different types of discharge lamps \n are included. ","Published":"2016-10-23","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"photobiologyLEDs","Version":"0.4.2","Title":"Spectral Data for Light-Emitting-Diodes","Description":"Spectral emission data for some frequently used light emitting\n diodes.","Published":"2016-10-23","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"photobiologyPlants","Version":"0.4.1","Title":"Plant Photobiology Related Functions and Data","Description":"Provides functions for quantifying visible (VIS) and ultraviolet\n (UV) radiation in relation to the photoreceptors Phytochromes,\n Cryptochromes, and UVR8 which are present in plants. 
It also\n includes data sets on the optical properties of plants.","Published":"2017-03-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"photobiologyWavebands","Version":"0.4.2","Title":"Waveband Definitions for UV, VIS, and IR Radiation","Description":"Constructors of waveband objects for commonly used biological\n spectral weighting functions (BSWFs) and for different wavebands describing\n named ranges of wavelengths in the ultraviolet (UV), visible (VIS)\n and infrared (IR) regions of the electromagnetic spectrum.","Published":"2017-03-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"phrasemachine","Version":"1.1.2","Title":"Simple Phrase Extraction","Description":"Simple noun phrase extraction using part-of-speech information.\n Takes a collection of un-processed documents as input and returns a set of noun\n phrases associated with those documents.","Published":"2017-05-29","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"phreeqc","Version":"3.3.10","Title":"R Interface to Geochemical Modeling Software","Description":"A geochemical modeling program developed by the US Geological\n Survey that is designed to perform a wide variety of aqueous geochemical\n calculations, including speciation, batch-reaction, one-dimensional\n reactive-transport, and inverse geochemical calculations.","Published":"2017-01-28","License":"GPL-3 + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"phtt","Version":"3.1.2","Title":"Panel Data Analysis with Heterogeneous Time Trends","Description":"The package provides estimation procedures for panel data with large dimensions n, T, and general forms of unobservable heterogeneous effects. Particularly, the estimation procedures are those of Bai (2009) and Kneip, Sickles, and Song (2012), which complement one another very well: both models assume the unobservable heterogeneous effects to have a factor structure. 
The method of Bai (2009) assumes that the factors are stationary, whereas the method of Kneip et al. (2012) allows the factors to be non-stationary. Additionally, the 'phtt' package provides a wide range of dimensionality criteria in order to estimate the number of the unobserved factors simultaneously with the remaining model parameters.","Published":"2014-08-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"phuassess","Version":"1.1","Title":"Proportional Habitat Use Assessment","Description":"Assessment of habitat selection by means of the permutation-based combination of sign tests (Fattorini et al., 2014 ). To exemplify the application of this procedure, habitat selection is assessed for a population of European Brown Hares settled in central Italy.","Published":"2016-12-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PhViD","Version":"1.0.8","Title":"PharmacoVigilance Signal Detection","Description":"A collection of several pharmacovigilance signal detection methods extended to the multiple comparison setting.","Published":"2016-11-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Phxnlme","Version":"1.0.0","Title":"Run Phoenix NLME and Perform Post-Processing","Description":"Calls 'Phoenix NLME' (non-linear mixed effects), a population\n modeling and simulation software, for pharmacokinetics and pharmacodynamics\n analyses and conducts post-processing of the results. This includes creation of\n various diagnostic plots, bootstrap and visual predictive checks. See for more\n information about 'Phoenix NLME'.","Published":"2015-11-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"phybreak","Version":"0.1.1","Title":"Analysis of Outbreaks with Sequence Data","Description":"Implementation of the outbreak analysis method described by \n Klinkenberg et al. (2016) . 
\n Simulate outbreaks, analyse datasets by creating samples from the \n posterior distribution with a Markov-Chain Monte Carlo sampler, \n and summarize the output.","Published":"2016-09-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"phyclust","Version":"0.1-19","Title":"Phylogenetic Clustering (Phyloclustering)","Description":"Phylogenetic clustering (phyloclustering) is an evolutionary\n Continuous Time Markov Chain model-based approach to identify\n population structure from molecular data without assuming\n linkage equilibrium. The package phyclust (Chen 2011) provides a\n convenient implementation of phyloclustering for DNA and SNP data,\n capable of clustering individuals into subpopulations and identifying\n molecular sequences representative of those subpopulations. It is\n designed in C for performance, interfaced with R for visualization,\n and incorporates other popular open source programs including\n ms (Hudson 2002) ,\n seq-gen (Rambaut and Grassly 1997)\n ,\n Hap-Clustering (Tzeng 2005) and\n PAML baseml (Yang 1997, 2007) ,\n ,\n for simulating data, additional analyses, and searching the best tree.\n See the phyclust website for more information, documentation, and\n examples.","Published":"2017-04-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"phyext2","Version":"0.0.4","Title":"An Extension (for Package 'SigTree') of Some of the Classes in\nPackage 'phylobase'","Description":"Based on (but not identical to) the no-longer-maintained package 'phyext', provides enhancements to 'phylobase' classes, specifically for use by package 'SigTree'; provides classes and methods which help users manipulate branch-annotated trees (as in 'SigTree'); also provides support for a few other extra features.","Published":"2015-07-27","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"PhyInformR","Version":"1.0","Title":"Rapid Calculation of Phylogenetic Information Content","Description":"Enables rapid calculation of 
phylogenetic information content using the latest advances in phylogenetic informativeness based theory. These advances include modifications that incorporate uneven branch lengths and any model of nucleotide substitution to provide assessments of the phylogenetic utility of any given dataset or dataset partition. Also provides new tools for data visualization and routines optimized for rapid statistical calculations, including approaches making use of Bayesian posterior distributions and parallel processing. Users can apply these approaches toward screening datasets for phylogenetic/genomic information content.","Published":"2016-11-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"phylin","Version":"1.1.1","Title":"Spatial Interpolation of Genetic Data","Description":"The spatial interpolation of genetic distances between\n\t samples is based on a modified kriging method that\n\t accepts a genetic distance matrix and generates a map of\n\t probability of lineage presence. This package also offers\n\t tools to generate a map of potential contact zones\n\t between groups with user-defined thresholds in the tree\n\t to account for old and recent divergence. Additionally,\n\t it has functions for IDW interpolation using genetic data\n\t and midpoints.","Published":"2015-11-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"phylobase","Version":"0.8.4","Title":"Base Package for Phylogenetic Structures and Comparative Data","Description":"Provides a base S4 class for comparative methods, incorporating\n one or more trees and trait data.","Published":"2017-04-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"phylocanvas","Version":"0.1.0","Title":"Interactive Phylogenetic Trees Using the 'Phylocanvas'\nJavaScript Library","Description":"Create and customize interactive phylogenetic trees using the 'phylocanvas' JavaScript library and the 'htmlwidgets' package. 
These trees can be used directly from the R console, from 'RStudio', in Shiny apps, and in R Markdown documents. See for more information on the 'phylocanvas' library.","Published":"2017-02-03","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"phyloclim","Version":"0.9-4","Title":"Integrating Phylogenetics and Climatic Niche Modeling","Description":"This package implements some recently developed methods in phyloclimatic modeling.","Published":"2013-07-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"phylocurve","Version":"2.0.8","Title":"Phylogenetic Comparative Methods for High-Dimensional Traits","Description":"Tools for studying the evolution of high-dimensional traits\n (morphometric, function-valued, etc.) including ancestral state reconstruction,\n estimating phylogenetic signal, and assessing correlated trait evolution. Visit\n for more information.","Published":"2017-06-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"phylodyn","Version":"0.9.0","Title":"Statistical Tools for Phylodynamics","Description":"Statistical tools for reconstructing population size from genetic\n sequence data.","Published":"2017-05-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"PhylogeneticEM","Version":"1.0.1","Title":"Automatic Shift Detection using a Phylogenetic EM","Description":"\n Implementation of the automatic shift detection method for\n Brownian Motion (BM) or Ornstein–Uhlenbeck (OU) models of trait evolution on\n phylogenies. 
Some tools to handle equivalent shift configurations are also\n available.","Published":"2017-05-01","License":"GPL (>= 2) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"PHYLOGR","Version":"1.0.8","Title":"Functions for Phylogenetically Based Statistical Analyses","Description":"Manipulation and analysis of phylogenetically simulated\n data sets and phylogenetically based analyses using GLS.","Published":"2014-11-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"phylogram","Version":"1.0.1","Title":"Dendrograms for Evolutionary Analysis","Description":"Contains functions for importing and exporting 'dendrogram' \n objects in parenthetic text format, and several \n functions for command-line tree manipulation. \n With an emphasis on speed and computational efficiency, \n the package also includes a suite of tools for rapidly computing \n distance matrices and building large trees using fast alignment-free \n 'k-mer' counting and divisive clustering techniques.","Published":"2017-06-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"phyloland","Version":"1.3","Title":"Modelling Competitive Exclusion and Limited Dispersal in a\nStatistical Phylogeographic Framework","Description":"The phyloland package models a space colonization process mapped onto a phylogeny. It aims at estimating limited dispersal and ecological competitive exclusion in a Bayesian MCMC statistical phylogeographic framework (please refer to the phyloland-package help for details).","Published":"2014-09-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"phylolm","Version":"2.5","Title":"Phylogenetic Linear Regression","Description":"Provides functions for fitting phylogenetic linear models and phylogenetic generalized linear models. The computation uses an algorithm that is linear in the number of tips in the tree. The package also provides functions for simulating continuous or binary traits along the tree. 
Other tools include functions to test the adequacy of a population tree.","Published":"2016-10-17","License":"GPL (>= 2) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"PhyloMeasures","Version":"2.1","Title":"Fast and Exact Algorithms for Computing Phylogenetic\nBiodiversity Measures","Description":"Given a phylogenetic tree T and an assemblage S of species represented as \n a subset of tips in T, we want to compute a measure of the diversity \n of the species in S with respect to T. The current package offers \n efficient algorithms that can process large phylogenetic data for several such measures. \n Most importantly, the package includes algorithms for computing \n efficiently the standardized versions of phylogenetic measures and their p-values, which are \n essential for null model comparisons. Among other functions, \n the package provides efficient computation of richness-standardized versions \n for indices such as the net relatedness index (NRI), \n nearest taxon index (NTI), phylogenetic\n diversity index (PDI), and the corresponding indices of two-sample measures. \n The package also introduces a new\n single-sample measure, the Core Ancestor Cost (CAC); the package provides\n functions for computing the value and the standardized index of the CAC, as well as\n an extra function that can compute exactly \n any statistical moment of the measure. 
The package supports computations\n under different null models, including abundance-weighted models.","Published":"2017-01-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"phylometrics","Version":"0.0.1","Title":"Estimating Statistical Errors of Phylogenetic Metrics","Description":"Provides functions to estimate statistical errors of phylogenetic\n metrics particularly to detect binary trait influence on diversification, as\n well as a function to simulate trees with fixed number of sampled taxa and trait\n prevalence.","Published":"2015-11-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"phylopath","Version":"0.2.3","Title":"Perform Phylogenetic Path Analysis","Description":"A comprehensive and easy to use R implementation of confirmatory\n phylogenetic path analysis as described by Von Hardenberg and Gonzalez-Voyer\n (2012) .","Published":"2017-03-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"phylosignal","Version":"1.1","Title":"Exploring the Phylogenetic Signal in Continuous Traits","Description":"A collection of tools to explore the phylogenetic signal in univariate and multivariate data. The package provides functions to plot traits data against a phylogenetic tree, different measures and tests for the phylogenetic signal, methods to describe where the signal is located and a phylogenetic clustering method.","Published":"2015-10-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"phylosim","Version":"3.0.2","Title":"Flexible Simulations of Biological Sequence Evolution","Description":"An extensible object-oriented framework for the Monte Carlo simulation of sequence evolution written in 100 percent R. 
It is built on top of the R.oo and ape packages and uses Gillespie's direct method to simulate substitutions, insertions and deletions.","Published":"2016-09-08","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"phylotate","Version":"1.1","Title":"Phylogenies with Annotations","Description":"Functions to read and write APE-compatible phylogenetic\n trees in NEXUS and Newick formats, while preserving annotations.","Published":"2017-04-24","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"phylotools","Version":"0.1.2","Title":"Phylogenetic tools for Eco-phylogenetics","Description":"Building supermatrices for DNA barcodes using different\n genes, calculating the inequality among lineages and\n phylogenetic similarity for very large datasets using slicing\n methods by invoking Phylocom.","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"phyloTop","Version":"2.0.1","Title":"Calculating Topological Properties of Phylogenies","Description":"Tools for calculating and viewing topological properties of phylogenetic trees.","Published":"2017-04-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"phyndr","Version":"0.1.0","Title":"Matches Tip and Trait Data","Description":"Use topological or taxonomic information to maximize the overlap of phylogenetic and comparative data.","Published":"2015-08-18","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"phyreg","Version":"0.7","Title":"Implements the Phylogenetic Regression of Grafen (1989)","Description":"Provides general linear model facilities (single y-variable, multiple x-variables with arbitrary mixture of continuous and categorical and arbitrary interactions) for cross-species data. The theory is in A. Grafen (1989, Proc. R. Soc. 
B 326, 119-157) and aims to cope with both recognised phylogeny (closely related species tend to be similar) and unrecognised phylogeny (a polytomy usually indicates ignorance about the true sequence of binary splits).","Published":"2014-02-08","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"PhysActBedRest","Version":"1.0","Title":"Marks Periods of 'Bedrest' in Actigraph Accelerometer Data","Description":"Contains a function to categorize accelerometer readings collected in free-living conditions (e.g., for 24 hours/day for 7 days), preprocessed and compressed as counts (unit-less value) in a specified time period termed epoch (e.g., 1 minute) as either bedrest (sleep) or active. The input is a matrix with a timestamp column and a column with the number of counts per epoch. The output is the input data with an additional column termed bedrest. In the bedrest column each line (epoch) contains a function-generated classification 'br' or 'a' denoting bedrest/sleep and activity, respectively. The package is designed to be used after the wear/nonwear marking function in the 'PhysicalActivity' package. ","Published":"2016-04-06","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"PhysicalActivity","Version":"0.1-1","Title":"Process Physical Activity Accelerometer Data","Description":"This package contains functions to classify monitor wear and nonwear time intervals in accelerometer data collected to assess physical activity in free-living conditions. The package also contains functions to make plots for accelerometer data, and to obtain the summary of daily monitor wear time and the mean of monitor wear time during valid days. 
A monitored day is considered valid if the total minutes of classified monitor wear time per day is greater than a user-defined cutoff.","Published":"2011-11-16","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"physiology","Version":"0.2.2","Title":"Calculate Physiological Characteristics of Adults and Children","Description":"A variety of formulae are provided for estimation of height,\n weight and fluid compartments of adults and children. Each formula is\n referenced to the original publication. Warnings can be given for\n estimation based on input data outside of normal ranges. Future functions\n will cover more material with a focus on anaesthesia, critical\n care and peri-operative medicine.","Published":"2015-01-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"PhySortR","Version":"1.0.7","Title":"A Fast, Flexible Tool for Sorting Phylogenetic Trees","Description":"Screens and sorts phylogenetic trees in both traditional and\n extended Newick format. Allows for the fast and flexible screening (within\n a tree) of Exclusive clades that comprise only the target taxa and/or Non-\n Exclusive clades that include a defined portion of non-target taxa.","Published":"2016-05-16","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"phytools","Version":"0.6-00","Title":"Phylogenetic Tools for Comparative Biology (and Other Things)","Description":"Package contains various functions for phylogenetic analysis.\n\tThis functionality is concentrated in the phylogenetic analysis of \n\tcomparative data from species. 
For example, the package includes\n\tfunctions for Bayesian and ML ancestral state estimation; visual\n\tsimulation of trait evolution; fitting models of trait evolution\n\twith multiple Brownian rates and correlations; visualizing \n\tdiscrete and continuous character evolution using colors or \n\tprojections into trait space; identifying the location of a change\n\tin the rate of character evolution on the tree; fast Brownian motion\n\tsimulation and simulation under several other models of \n\tcontinuous trait evolution; fitting a model of correlated binary\n\ttrait evolution; locating the position of a fossil or a recently\n\textinct lineage on a tree using continuous character data with ML;\n\tplotting lineage accumulation through time, including across \n\tmultiple trees (such as a Bayesian posterior sample); conducting\n\tan analysis called stochastic character mapping, in which character\n\thistories for a discrete trait are sampled from their posterior\n\tprobability distribution under a model; conducting a multiple \n\t(i.e., partial) Mantel test; fitting a phylogenetic regression model\n\twith error in predictor and response variables; conducting a\n\tphylogenetic principal components analysis, a phylogenetic\n\tregression, a reduced major axis regression, a phylogenetic\n\tcanonical correlation analysis, and a phylogenetic ANOVA; projecting \n\ta tree onto a geographic map; simulating discrete character \n\thistories on the tree; fitting a model in which a discrete character \n\tevolves under the threshold model; visualization of cospeciation; and \n\ta simple statistical test for cospeciation between two trees. In \n\taddition to this phylogenetic comparative method functionality, the \n\tpackage also contains functions for a wide range of other purposes in \n\tphylogenetic biology. 
For instance, functionality in this package \n\tincludes (but is not restricted to): adding taxa to a tree \n\t(including randomly, everywhere, or automatically to genera); \n\tgenerating all bi- and multi-furcating trees for a set of taxa; \n\treducing a phylogeny to its backbone tree; dropping tips or adding \n\ttips to special types of phylogenetic trees; exporting a tree as an \n\tXML file; converting a tree with a mapped character to a tree with\n\tsingleton nodes and one character state per edge; estimating a\n\tphylogeny using the least squares method; simulating birth-death\n\ttrees under a range of conditions; rerooting trees; computing a \n\tconsensus tree under multiple methods, including via minimization\n\tof the distance to other trees in the set; a wide range of \n\tvisualizations of trees; and a variety of other manipulations and \n\tanalyses that phylogenetic biologists may find useful for their \n\tresearch.","Published":"2017-03-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"phytotools","Version":"1.0","Title":"Phytoplankton Production Tools","Description":"Fits PE and RLC data to one of four published PE models.\n\tSimulates incident irradiance as a function of time and space.\n\tCalculates phytoplankton production by transposing modeled PE or RLC data \n\tto a water column with a user-defined theoretical in-situ irradiance field.","Published":"2015-02-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"pi0","Version":"1.4-0","Title":"Estimating the Proportion of True Null Hypotheses for FDR","Description":"Methods for estimating the proportion of true null hypotheses, i.e., the pi0, when a very large number of hypotheses are simultaneously tested, especially for the purpose of (local) false discovery rate control for microarray data. It also contains functions to estimate the distribution of noncentrality parameters from a large number of parametric tests. 
","Published":"2015-07-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"picante","Version":"1.6-2","Title":"R tools for integrating phylogenies and ecology","Description":"Phylocom integration, community analyses, null-models, traits and evolution in R","Published":"2014-03-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"picasso","Version":"0.5.4","Title":"Pathwise Calibrated Sparse Shooting Algorithm","Description":"Computationally efficient tools for fitting generalized linear models with convex or non-convex penalties. Users can enjoy the superior statistical properties of non-convex penalties such as SCAD and MCP, which have significantly less estimation error and overfitting compared to convex penalties such as lasso and ridge. Computation is handled by multi-stage convex relaxation and the PathwIse CAlibrated Sparse Shooting algOrithm (PICASSO), which exploits warm start initialization, active set updating, and a strong rule for coordinate preselection to boost computation, and attains linear convergence to a unique sparse local optimum with optimal statistical properties. The computation is memory-optimized using the sparse matrix output.","Published":"2016-10-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"pid","Version":"0.36","Title":"Process Improvement using Data","Description":"A collection of scripts and data files for the statistics text: \n \"Process Improvement using Data\". 
The package contains code for designed \n experiments, data sets and other convenience functions used in the book.","Published":"2015-08-07","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"piecewiseSEM","Version":"1.2.1","Title":"Piecewise Structural Equation Modeling","Description":"Implements piecewise structural equation models.","Published":"2016-12-08","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"pifpaf","Version":"1.0.0","Title":"Potential Impact Fraction and Population Attributable Fraction\nfor Cross-Sectional Data","Description":"Uses a generalized method to estimate the Potential Impact Fraction (PIF) and the Population Attributable Fraction (PAF) from cross-sectional data. It creates point-estimates, confidence intervals, and estimates of variance. In addition it generates plots for conducting sensitivity analysis. The estimation method corresponds to Zepeda-Tello, Camacho-García-Formentí, et al. 2017. 'Nonparametric Methods to Estimate the Potential Impact Fraction from Cross-sectional Data'. Unpublished manuscript. 
This package was developed under funding by Bloomberg Philanthropies.","Published":"2017-05-31","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"PIGE","Version":"0.9","Title":"Self-contained gene set analysis for gene- and\npathway-environment interaction analysis","Description":"Extension of the ARTP package for gene- and pathway-environment\n interaction","Published":"2013-12-30","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"PIGShift","Version":"1.0.1","Title":"Polygenic Inverse Gamma Shifts","Description":"Fits models of gene expression evolution to expression data from\n coregulated groups of genes, assuming inverse gamma distributed rate\n variation.","Published":"2015-12-07","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"Pijavski","Version":"1.0","Title":"Global Univariate Minimization","Description":"Global univariate minimization of Lipschitz functions is performed by using the Pijavski method, which was published in Pijavski (1972) .","Published":"2016-03-12","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"pim","Version":"2.0.1","Title":"Fit Probabilistic Index Models","Description":"Fit a probabilistic index model as described in \n Thas et al . The interface to the \n modeling function has changed in this new version. The old version is\n still available at R-Forge. You can install the old package\n using install.packages('pimold', repos = 'http://R-Forge.R-project.org').","Published":"2017-04-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pinbasic","Version":"1.1.0","Title":"Fast and Stable Estimation of the Probability of Informed\nTrading (PIN)","Description":"Utilities for fast and stable estimation of the probability of \n informed trading (PIN) in the model introduced by Easley et al. (2002) \n are implemented. Since the basic model developed \n by Easley et al. 
(1996) is nested in the \n former due to equating the intensity of uninformed buys and sells, functions \n can also be applied to this simpler model structure, if needed. \n State-of-the-art factorization of the model likelihood function as well as \n most recent algorithms for generating initial values for optimization routines are implemented. \n In total, two likelihood factorizations and three methodologies for \n starting values are included. \n Furthermore, functions for simulating datasets of daily aggregated buys and sells, \n calculating confidence intervals for the probability of informed trading and posterior probabilities \n of trading days' conditions are available. ","Published":"2017-03-02","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"pinfsc50","Version":"1.1.0","Title":"Sequence ('FASTA'), Annotation ('GFF') and Variants ('VCF') for\n17 Samples of 'P. Infestans' and 1 'P. Mirabilis'","Description":"Genomic data for the plant pathogen \"Phytophthora infestans.\" It\n includes a variant file ('VCF'), a sequence file ('FASTA') and an annotation file\n ('GFF'). This package is intended to be used as example data for packages that\n work with genomic data.","Published":"2016-12-02","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"pingr","Version":"1.1.2","Title":"Check if a Remote Computer is Up","Description":"Check if a remote computer is up. It can either\n just call the system ping command, or check a specified\n TCP port.","Published":"2017-03-02","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"pinnacle.API","Version":"2.0.9","Title":"A Wrapper for the Pinnacle API","Description":"An interface to the API by Pinnacle that allows Pinnacle customers to interact with the sports market data in R. See for more information. The Pinnacle API can be used to place wagers, retrieve line information, and retrieve account information. Please be aware that the TOC of Pinnacle apply . 
An account with Pinnacle is necessary to use the Pinnacle API. ","Published":"2017-06-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"pinyin","Version":"1.0.2","Title":"Convert Chinese Characters into Pinyin","Description":"Convert Chinese characters into Pinyin (the official romanization system for Standard Chinese in mainland China, Malaysia, Singapore, and Taiwan. See for details).","Published":"2017-06-19","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"pipe.design","Version":"0.5.1","Title":"Dual-Agent Dose Escalation for Phase I Trials using the PIPE\nDesign","Description":"Implements the Product of Independent beta Probabilities dose Escalation (PIPE) design for dual-agent Phase I trials as described in Mander AP, Sweeting MJ (2015) .","Published":"2017-06-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pipefittr","Version":"0.1.2","Title":"Convert Nested Functions to Pipes","Description":"To take nested function calls and convert them to a more readable form using pipes from package 'magrittr'.","Published":"2016-09-14","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"pipeGS","Version":"0.1","Title":"Permutation p-Value Estimation for Gene Set Tests","Description":"Code for various permutation p-value estimation methods for gene set tests. The description of corresponding methods can be found in the dissertation of Yu He (2016) \"Efficient permutation P-value estimation for gene set tests\" . 
One of the methods also corresponds to the paper \"Permutation p-value approximation via generalized Stolarsky invariance\" .","Published":"2016-11-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"pipeliner","Version":"0.1.1","Title":"Machine Learning Pipelines for R","Description":"A framework for defining 'pipelines' of functions for applying data transformations, \n model estimation and inverse-transformations, resulting in predicted value generation (or \n model-scoring) functions that automatically apply the entire pipeline of functions required to go\n from input to predicted output.","Published":"2016-12-19","License":"Apache License 2.0","snapshot_date":"2017-06-23"} {"Package":"pipeR","Version":"0.6.1.3","Title":"Multi-Paradigm Pipeline Implementation","Description":"Provides various styles of function chaining methods: Pipe\n operator, Pipe object, and pipeline function, each representing a distinct\n pipeline model yet sharing a largely common set of features: A value can be\n piped to the first unnamed argument of a function and to the dot symbol in an\n enclosed expression. The syntax is designed to make the pipeline more\n readable and friendly to a wide range of operations.","Published":"2016-04-04","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"PIPS","Version":"1.0.1","Title":"Predicted Interval Plots","Description":"Generate Predicted Interval Plots. Simulate and plot\n confidence intervals of an effect estimate given observed data\n and a hypothesis about the distribution of future data.","Published":"2012-09-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"pirate","Version":"1.0.0","Title":"Generated Effect Modifier","Description":"An implementation of the generated effect modifier (GEM) method. This method constructs composite variables by linearly combining pre-treatment scalar patient characteristics to create optimal treatment effect modifiers in linear models. 
The optimal linear combination is called a GEM. Treatment is assumed to have been assigned at random. For reference, see E Petkova, T Tarpey, Z Su, and RT Ogden. Generated effect modifiers (GEMs) in randomized clinical trials. Biostatistics (First published online: July 27, 2016, ).","Published":"2016-11-08","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"pitchRx","Version":"1.8.2","Title":"Tools for Harnessing 'MLBAM' 'Gameday' Data and Visualizing\n'pitchfx'","Description":"With 'pitchRx', one can easily obtain Major League Baseball Advanced\n Media's 'Gameday' data (as well as store it in a remote database). The\n 'Gameday' website hosts a wealth of data in XML format, but perhaps most\n interesting is 'pitchfx'. Among other things, 'pitchfx' data can be used to\n recreate a baseball's flight path from a pitcher's hand to home plate. With\n pitchRx, one can easily create animations and interactive 3D 'scatterplots'\n of the baseball's flight path. 'pitchfx' data is also commonly used to\n generate a static plot of baseball locations at the moment they cross home\n plate. These plots, sometimes called strike-zone plots, can also refer to a\n plot of event probabilities over the same region. 'pitchRx' provides an easy\n and robust way to generate strike-zone plots using the 'ggplot2' package.","Published":"2015-12-09","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"piton","Version":"0.1.1","Title":"Parsing Expression Grammars in Rcpp","Description":"A wrapper around the 'Parsing Expression Grammar Template Library', a C++11 library for generating\n Parsing Expression Grammars, that makes it accessible within Rcpp. 
With this, developers can implement\n their own grammars and easily expose them in R packages.","Published":"2017-06-06","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"PivotalR","Version":"0.1.18.3","Title":"A Fast, Easy-to-Use Tool for Manipulating Tables in Databases\nand a Wrapper of MADlib","Description":"Provides an R interface for the Pivotal Data stack\n running on 'PostgreSQL', 'Greenplum' or 'Apache HAWQ (incubating)'\n databases with parallel and distributed computation ability for big data\n processing. 'PivotalR' provides an R interface to various database\n operations on tables or views. These operations are almost the same as\n the corresponding native R operations. Thus users of R do not need\n to learn 'SQL' when they operate on objects in the database. It also\n provides a wrapper for 'Apache MADlib (incubating)', which is an open-\n source library for parallel and scalable in-database analytics.","Published":"2017-02-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pivottabler","Version":"0.3.0","Title":"Create Pivot Tables in R","Description":"Create regular pivot tables with just a few lines of R. \n More complex pivot tables can also be created, e.g. pivot tables\n with irregular layouts, multiple calculations and/or derived \n calculations based on multiple data frames.","Published":"2017-06-01","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"pixiedust","Version":"0.7.5","Title":"Tables so Beautifully Fine-Tuned You Will Believe It's Magic","Description":"The introduction of the 'broom' package has made converting model\n objects into data frames as simple as a single function. While the 'broom'\n package focuses on providing tidy data frames that can be used in advanced\n analysis, it deliberately stops short of providing functionality for reporting\n models in publication-ready tables. 
'pixiedust' provides this functionality with\n a programming interface intended to be similar to 'ggplot2's system of layers\n with fine tuned control over each cell of the table. Options for output include\n printing to the console and to the common markdown formats (markdown, HTML, and\n LaTeX). With a little 'pixiedust' (and happy thoughts) tables can really fly.","Published":"2017-05-07","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"pixmap","Version":"0.4-11","Title":"Bitmap Images (``Pixel Maps'')","Description":"Functions for import, export, plotting and other\n manipulations of bitmapped images.","Published":"2011-07-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"PK","Version":"1.3-3","Title":"Basic Non-Compartmental Pharmacokinetics","Description":"Estimation of pharmacokinetic parameters using non-compartmental theory.","Published":"2016-01-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"pkgconfig","Version":"2.0.1","Title":"Private Configuration for 'R' Packages","Description":"Set configuration options on a per-package basis.\n Options set by a given package only apply to that package,\n other packages are unaffected.","Published":"2017-03-21","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"pkgcopier","Version":"0.0.1","Title":"Copy Local R Packages to Another Environment","Description":"\"Copy\" local R package information to a temporary cloud space and \"paste\" your favorite R packages to a new environment.","Published":"2016-06-02","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"pkggraph","Version":"0.2.0","Title":"A Consistent and Intuitive Platform to Explore the Dependencies\nof Packages on the Comprehensive R Archive Network Like\nRepositories","Description":"Interactively explore various dependencies of a package(s) (on the Comprehensive R Archive Network Like repositories) and perform analysis using tidy philosophy. 
Most of the functions return a 'tibble' object (an enhancement of 'dataframe') which can be used for further analysis. The package offers functions to produce 'network' and 'igraph' dependency graphs. The 'plot' method produces a static plot based on 'ggnetwork' and the 'plotd3' function produces an interactive D3 plot based on 'networkD3'.","Published":"2017-06-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"pkgKitten","Version":"0.1.4","Title":"Create Simple Packages Which Do not Upset R Package Checks","Description":"Provides a function kitten() which creates cute little \n packages which pass R package checks. This sets it apart from \n package.skeleton() which it calls, and which leaves imperfect files \n behind. As this is not exactly helpful for beginners, kitten() offers \n an alternative.","Published":"2016-11-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pkgmaker","Version":"0.22","Title":"Package development utilities","Description":"This package provides some low-level utilities to use for package\n development. It currently provides managers for multiple package-specific\n options and registries, vignette, unit test and bibtex related utilities.\n It serves as a base package for packages like NMF, RcppOctave, doRNG, and\n as an incubator package for other general purpose utilities that will\n eventually be packaged separately.\n It is still under heavy development and changes in the interface(s) are\n more than likely to happen.","Published":"2014-05-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PKgraph","Version":"1.7","Title":"Model diagnostics for population pharmacokinetic models","Description":"PKgraph provides a graphical user interface for population\n pharmacokinetic model diagnosis. It also provides an integrated\n and comprehensive platform for the analysis of pharmacokinetic\n data including exploratory data analysis, goodness of model\n fit, model validation and model comparison.
Results from a\n variety of model fitting software, including NONMEM,\n Monolix, SAS and R, can be used. PKgraph is programmed in R,\n and uses the R packages lattice and ggplot2 for static graphics,\n and rggobi for interactive graphics.","Published":"2012-03-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PKI","Version":"0.1-3","Title":"Public Key Infrastructure for R Based on the X.509 Standard","Description":"PKI functions such as verifying certificates, RSA encryption and signing which can be used to build PKI infrastructure and perform cryptographic tasks.","Published":"2015-07-28","License":"GPL-2 | GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"pkmon","Version":"0.9","Title":"Least-Squares Estimator under k-Monotony Constraint for Discrete\nFunctions","Description":"We implement two least-squares estimators under k-monotony constraint using a method based on the Support Reduction Algorithm from Groeneboom et al (2008). The first one is a projection estimator on the set of k-monotone discrete functions. The second one is a projection on the set of k-monotone discrete probabilities. This package provides functions to generate samples from the spline basis from Lefevre and Loisel (2013), and from mixtures of splines.","Published":"2016-09-24","License":"CC BY 4.0","snapshot_date":"2017-06-23"} {"Package":"PKNCA","Version":"0.8.1","Title":"Perform Pharmacokinetic Non-Compartmental Analysis","Description":"Compute standard Non-Compartmental Analysis (NCA)\n parameters and summarize them.
In addition to this core work, it\n also provides standardized plotting routines, basic assessments\n for biocomparison or drug interaction, and model-based estimation\n routines for calculating doses to reach specific values of AUC or\n Cmax.","Published":"2017-02-27","License":"AGPL-3","snapshot_date":"2017-06-23"} {"Package":"PKPDmodels","Version":"0.3.2","Title":"Pharmacokinetic/pharmacodynamic models","Description":"Provides functions to evaluate common\n pharmacokinetic/pharmacodynamic models and their gradients.","Published":"2012-01-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pkr","Version":"0.1.0","Title":"Pharmacokinetics in R","Description":"Conduct a noncompartmental analysis as closely as possible to the most widely used commercial software for pharmacokinetic analysis, i.e. 'Phoenix(R) WinNonlin(R)'.\n Some features are\n 1) CDISC SDTM terms\n 2) Automatic slope selection with the same criterion as WinNonlin(R)\n 3) Supporting both 'linear-up linear-down' and 'linear-up log-down' method\n 4) Interval(partial) AUCs with 'linear' or 'log' interpolation method\n * Reference: Gabrielsson J, Weiner D. Pharmacokinetic and Pharmacodynamic Data Analysis - Concepts and Applications. 5th ed. 2016. (ISBN:9198299107).","Published":"2017-03-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"PKreport","Version":"1.5","Title":"A reporting pipeline for checking population pharmacokinetic\nmodel assumption","Description":"PKreport aims to 1) provide an automatic pipeline for users\n to visualize data and models. It creates a flexible R framework\n with automatically generated R scripts to save time and cost\n for later usage; 2) implement an archive-oriented management\n tool for users to store, retrieve and modify figures.
3) offer\n powerful and convenient service to generate high-quality graphs\n based on two R packages: lattice and ggplot2.","Published":"2014-03-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pks","Version":"0.4-0","Title":"Probabilistic Knowledge Structures","Description":"Fitting and testing probabilistic knowledge structures,\n especially the basic local independence model (BLIM, Doignon & Falmagne,\n 1999), using the minimum discrepancy maximum likelihood (MDML) method.","Published":"2016-04-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pla","Version":"0.2","Title":"Parallel Line Assays","Description":"Parallel Line Assays: Completely randomized design,\n Randomized Block design, and Latin squares design.\n Balanced data are fitted as described in the Ph.Eur.\n In the presence of missing values complete data analysis can be\n performed (with computation of Fieller's confidence intervals for\n the estimated potency), or imputation of values can be applied.\n The package contains a script such that a pdf-document with a\n report of an analysis of an assay can be produced from an input file\n with data of the assay. Here no knowledge of R is needed by the user.","Published":"2015-09-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"plac","Version":"0.1.1","Title":"A Pairwise Likelihood Augmented Cox Estimator for Left-Truncated\nData","Description":"A semi-parametric estimation method for the Cox model\n with left-truncated data using augmented information\n from the marginal of truncation times.","Published":"2016-04-30","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"placement","Version":"0.1.1","Title":"Tools for Accessing the Google Maps API","Description":"The main functions in this package are drive_time\n\t(used for calculating distances between physical addresses or coordinates) and\n\tgeocode_url (used for estimating the lat/long coordinates\n\tof a physical address).
Optionally, it generates the cryptographic signatures necessary\n\tfor making API calls with a Google for Work/Premium account within the geocoding process.\n\tThese accounts have larger quota limits than the \"standard_api\" and, thus, this package\n\tmay be useful for individuals seeking to submit large batch jobs within R to the Google Maps API.\n\tPlacement also provides methods for accessing the standard API using a (free) Google API key\n\t(see: ).","Published":"2016-07-28","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"plan","Version":"0.4-2","Title":"Tools for project planning","Description":"Supports the creation of burndown charts and Gantt diagrams.","Published":"2013-09-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"planar","Version":"1.6","Title":"Multilayer Optics","Description":"Solves the electromagnetic problem of reflection and transmission at a planar multilayer interface. Also computed are the decay rates and emission profile for a dipolar emitter.","Published":"2016-02-29","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"Planesmuestra","Version":"0.1","Title":"Functions for Calculating Dodge Romig, MIL STD 105E and MIL STD\n414 Acceptance Sampling Plan","Description":"Calculates an acceptance sampling plan (sample size and acceptance number) based on MIL STD 105E, Dodge Romig and MIL STD 414 tables and procedures. The arguments for each function are related to lot size, inspection level and quality level. The specific plan operating curve (OC) is calculated by the binomial distribution.
","Published":"2016-11-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"planor","Version":"1.3-7","Title":"Generation of Regular Factorial Designs","Description":"Automatic generation of regular factorial designs, including fractional designs, orthogonal block designs, row-column designs and split-plots.","Published":"2017-06-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"plantecophys","Version":"1.1-8","Title":"Modelling and Analysis of Leaf Gas Exchange Data","Description":"Coupled leaf gas exchange model, A-Ci curve simulation and\n fitting, Ball-Berry stomatal conductance models, \n leaf energy balance using Penman-Monteith, Cowan-Farquhar\n optimization, humidity unit conversions.","Published":"2016-08-23","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"plaqr","Version":"1.1","Title":"Partially Linear Additive Quantile Regression","Description":"Estimation, prediction, thresholding, and plotting for partially linear additive quantile regression. Intuitive functions for fitting and plotting partially linear additive quantile regression models. Uses and works with functions from the 'quantreg' package.","Published":"2016-12-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PlasmaMutationDetector","Version":"1.5.2","Title":"Tumor Mutation Detection in Plasma","Description":"Aims at detecting single nucleotide variation\n (SNV) and insertion/deletion (INDEL) in circulating tumor DNA (ctDNA), used\n as a surrogate marker for the tumor, at each base position of a Next Generation\n Sequencing (NGS) analysis.
Mutations are assessed by comparing the minor-allele\n frequency at each position to the measured PER in control samples.","Published":"2016-09-09","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"Plasmidprofiler","Version":"0.1.6","Title":"Visualization of Plasmid Profile Results","Description":"Contains functions developed to combine the results of querying a plasmid database using\n short-read sequence typing with the results of a blast analysis against the query results.","Published":"2017-01-06","License":"Apache License 2.0","snapshot_date":"2017-06-23"} {"Package":"plater","Version":"1.0.0","Title":"Read, Tidy, and Display Data from Microtiter Plates","Description":"Tools for interacting with data from experiments done in microtiter\n plates. Easily read in plate-shaped data and convert it to tidy format, \n combine plate-shaped data with tidy data, and view tidy data in plate shape. ","Published":"2016-10-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"platetools","Version":"0.0.2","Title":"Tools and Plots for Multi-Well Plates","Description":"Collection of functions for working with multi-well microtitre\n plates, mainly 96, 384 and 1536 well plates.","Published":"2016-10-05","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"PlayerRatings","Version":"1.0-1","Title":"Dynamic Updating Methods for Player Ratings Estimation","Description":"Implements schemes for estimating player or \n team skill based on dynamic updating. Implemented methods include \n Elo, Glicko and Stephenson. 
Contains pdf documentation of a \n reproducible analysis using approximately two million chess \n matches.","Published":"2016-01-04","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"playwith","Version":"0.9-54","Title":"A GUI for interactive plots using GTK+","Description":"A GTK+ graphical user interface for editing and\n interacting with R plots.","Published":"2012-08-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pleiades","Version":"0.2.0","Title":"Interface to the 'Pleiades' 'Archeological' Database","Description":"Provides a set of functions for interacting with the\n 'Pleiades' () 'API', including \n getting status data, places data, and creating a 'GeoJSON' \n based map on 'GitHub' 'gists'.","Published":"2017-06-15","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"pleio","Version":"1.1","Title":"Pleiotropy Test for Multiple Traits on a Genetic Marker","Description":"Perform tests for pleiotropy of multiple traits on genotypes for a genetic marker.","Published":"2016-08-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"plfm","Version":"2.2.1","Title":"Probabilistic Latent Feature Analysis","Description":"Functions for estimating probabilistic latent feature models with a disjunctive, conjunctive or additive mapping rule on (aggregated) binary three-way data.","Published":"2017-06-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"plfMA","Version":"1.0.4","Title":"A GUI to View, Design and Export Various Graphs of Data","Description":"Provides a graphical user interface for viewing and designing various types of graphs of the data. 
The graphs can be saved in different image formats.","Published":"2017-05-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"plgp","Version":"1.1-7","Title":"Particle Learning of Gaussian Processes","Description":"Sequential Monte Carlo inference for fully Bayesian\n Gaussian process (GP) regression and classification models by\n particle learning (PL). The sequential nature of inference\n and the active learning (AL) hooks provided facilitate thrifty \n sequential design (by entropy) and optimization\n (by improvement) for classification and\n regression models, respectively.\n This package essentially provides a generic\n PL interface, and functions (arguments to the interface) which\n implement the GP models and AL heuristics. Functions for \n a special, linked, regression/classification GP model and \n an integrated expected conditional improvement (IECI) statistic \n are provided for optimization in the presence of unknown constraints.\n Separable and isotropic Gaussian, and single-index correlation\n functions are supported.\n See the examples section of ?plgp and demo(package=\"plgp\") \n for an index of demos.","Published":"2014-12-02","License":"LGPL","snapshot_date":"2017-06-23"} {"Package":"plink","Version":"1.5-1","Title":"IRT Separate Calibration Linking Methods","Description":"Item response theory based methods are used to compute\n linking constants and conduct chain linking of unidimensional\n or multidimensional tests for multiple groups under a common\n item design. The unidimensional methods include the Mean/Mean,\n Mean/Sigma, Haebara, and Stocking-Lord methods for dichotomous\n (1PL, 2PL and 3PL) and/or polytomous (graded response, partial\n credit/generalized partial credit, nominal, and multiple-choice\n model) items.
The multidimensional methods include the least\n squares method and extensions of the Haebara and Stocking-Lord\n method using single or multiple dilation parameters for\n multidimensional extensions of all the unidimensional\n dichotomous and polytomous item response models. The package\n also includes functions for importing item and/or ability\n parameters from common IRT software, conducting IRT true score\n and observed score equating, and plotting item response\n curves/surfaces, vector plots, information plots, and comparison \n plots for examining parameter drift.","Published":"2017-04-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PLIS","Version":"1.1","Title":"Multiplicity control using Pooled LIS statistic","Description":"PLIS is a multiple testing procedure for testing several\n groups of hypotheses. Linear dependency is expected from the\n hypotheses within the same group and is modeled by hidden\n Markov Models. It is noted that, for PLIS, a smaller p value\n does not necessarily imply more significance because of\n dependency among the hypotheses. A typical application of PLIS\n is to analyze genome wide association studies datasets, where\n SNPs from the same chromosome are treated as a group and\n exhibit strong linear genomic dependency.","Published":"2012-08-11","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"plm","Version":"1.6-5","Title":"Linear Models for Panel Data","Description":"A set of estimators and tests for panel data.","Published":"2016-12-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"plmDE","Version":"1.0","Title":"Additive partially linear models for differential gene\nexpression analysis","Description":"A set of tools for identifying genes whose differential\n expression is associated with measurements of other covariates\n on a continuous scale. 
These methods rely on generalized\n additive partially linear models which can be fitted\n efficiently using a B-spline basis approximation. Still under\n development: methods for interfacing with objects extending the\n eSet class and a function to pass linear models in edgeR and\n DESeq format.","Published":"2012-05-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PLMIX","Version":"1.0","Title":"Bayesian Analysis of Finite Mixtures of Plackett-Luce Models for\nPartial Rankings/Orderings","Description":"Fit finite mixtures of Plackett-Luce models for partial top rankings/orderings within the Bayesian framework. It provides MAP point estimates via the EM algorithm and posterior MCMC simulations via Gibbs Sampling. It also fits the MLE as a special case of the noninformative Bayesian analysis with vague priors.","Published":"2016-12-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"plmm","Version":"0.1-1","Title":"Partially Linear Mixed Effects Model","Description":"This package fits the partially linear mixed effects model\n (semiparametric random intercept model) using kernel\n regression, without distributional assumptions for the random\n terms. The estimation procedure is of an iterative generalized least\n squares type. A nonparametric heteroskedastic variance function\n is allowed for the regression error. Bootstrap resampling is\n provided for inference.
The package implements bandwidth\n selection by an alternative cross validation for correlated\n data.","Published":"2012-11-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pln","Version":"0.2-1","Title":"Polytomous logit-normit (graded logistic) model estimation","Description":"Performs bivariate composite likelihood and full\n information maximum likelihood estimation for polytomous\n logit-normit (graded logistic) item response theory (IRT)\n models.","Published":"2013-01-25","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"plogr","Version":"0.1-1","Title":"The 'plog' C++ Logging Library","Description":"\n A simple header-only logging library for C++.\n Add 'LinkingTo: plogr' to 'DESCRIPTION', and '#include ' in your C++ modules to use it.","Published":"2016-09-24","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"PLordprob","Version":"1.0","Title":"Multivariate Ordered Probit Model via Pairwise Likelihood","Description":"Multivariate ordered probit model, i.e. the extension of the scalar ordered probit model where the observed variables have dimension greater than one. Estimation of the parameters is done via maximization of the pairwise likelihood, a special case of the composite likelihood obtained as the product of bivariate marginal distributions. The package uses the Fortran 77 subroutine SADMVN by Alan Genz, with minor adaptations made by Adelchi Azzalini in his \"mnormt\" package for evaluating the two-dimensional Gaussian integrals involved in the pairwise log-likelihood. Optimization of the latter objective function is performed via a quasi-Newton box-constrained optimization algorithm, as implemented in nlminb.","Published":"2014-10-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"plot3D","Version":"1.1","Title":"Plotting Multi-Dimensional Data","Description":"Functions for viewing 2-D and 3-D data, including perspective plots, slice plots, surface plots, scatter plots, etc.
Includes data sets from oceanography.","Published":"2016-01-13","License":"GPL (>= 3.0)","snapshot_date":"2017-06-23"} {"Package":"plot3Drgl","Version":"1.0.1","Title":"Plotting Multi-Dimensional Data - Using 'rgl'","Description":"The 'rgl' implementation of plot3D functions.","Published":"2016-01-18","License":"GPL (>= 3.0)","snapshot_date":"2017-06-23"} {"Package":"plotfunctions","Version":"1.2","Title":"Various Functions to Facilitate Visualization of Data and\nAnalysis","Description":"When analyzing data, plots are a helpful tool for visualizing data and interpreting statistical models. This package provides a set of simple tools for building plots incrementally, starting with an empty plot region, and adding bars, data points, regression lines, error bars, gradient legends, density distributions in the margins, and even pictures. The package builds further on R graphics by simply combining functions and settings in order to reduce the amount of code to produce for the user. As a result, the package does not use formula input or special syntax, but can be used in combination with default R plot functions. Note: Most of the functions were part of the package 'itsadug', which is now split in two packages: 1. the package 'itsadug', which contains the core functions for visualizing and evaluating nonlinear regression models, and 2. the package 'plotfunctions', which contains more general plot functions.","Published":"2017-04-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"plotGoogleMaps","Version":"2.2","Title":"Plot Spatial or Spatio-Temporal Data Over Google Maps","Description":"Provides an interactive plot device for handling the geographic data for web browsers, designed for the automatic creation of web maps as a combination of users' data and Google Maps layers. 
","Published":"2015-02-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"plotKML","Version":"0.5-8","Title":"Visualization of Spatial and Spatio-Temporal Objects in Google\nEarth","Description":"Writes sp-class, spacetime-class, raster-class and similar spatial and spatio-temporal objects to KML following some basic cartographic rules.","Published":"2017-05-14","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"plotluck","Version":"1.1.0","Title":"'ggplot2' Version of \"I'm Feeling Lucky!\"","Description":"Examines the characteristics of a data frame and a formula to\n automatically choose the most suitable type of plot out of the following supported\n options: scatter, violin, box, bar, density, hexagon bin, spine plot, and\n heat map. The aim of the package is to let the user focus on what to plot,\n rather than on the \"how\" during exploratory data analysis. It also automates\n handling of observation weights, logarithmic axis scaling, reordering of\n factor levels, and overlaying smoothing curves and median lines. Plots are\n drawn using 'ggplot2'.","Published":"2016-11-13","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"plotly","Version":"4.7.0","Title":"Create Interactive Web Graphics via 'plotly.js'","Description":"Easily translate 'ggplot2' graphs to an interactive web-based version and/or create custom web-based visualizations directly from R. Once uploaded to a 'plotly' account, 'plotly' graphs (and the data behind them) can be viewed and modified in a web browser.","Published":"2017-05-28","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"plotMCMC","Version":"2.0-0","Title":"MCMC Diagnostic Plots","Description":"Markov chain Monte Carlo diagnostic plots. 
The purpose of the\n package is to combine existing tools from the 'coda' and 'lattice' packages,\n and make it easy to adjust graphical details.","Published":"2014-03-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"plotMElm","Version":"0.1.4","Title":"Plot Marginal Effects from Linear Models","Description":"Plot marginal effects for interactions estimated\n from linear models.","Published":"2016-06-01","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"plotmo","Version":"3.3.3","Title":"Plot a Model's Response and Residuals","Description":"Plot model surfaces for a wide variety of models\n using partial dependence plots and other techniques.\n Also plot model residuals and other information on the model.","Published":"2017-05-05","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"plotpc","Version":"1.0.4","Title":"Plot Principal Component Histograms Around a Scatter Plot","Description":"Plot principal component histograms around a bivariate\n scatter plot.","Published":"2015-09-27","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"PlotPrjNetworks","Version":"1.0.0","Title":"Useful Networking Tools for Project Management","Description":"Useful set of tools for plotting network diagrams in any kind of project.","Published":"2015-07-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"plotprotein","Version":"1.0","Title":"Development of Visualization Tools for Protein Sequence","Description":"Draws an image of amino acid changes at the protein level and automatically lays out functional elements such as domains and mutation sites.","Published":"2017-05-09","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"PlotRegionHighlighter","Version":"1.0","Title":"Creates an envelope that surrounds a set of points plotted in a\ntwo dimensional space","Description":"Creates an envelope around a set of plotted points.
The\n envelope is compact with a boundary that is continuous, smooth\n and convex. Each point is represented as a circle and the\n circles and connecting lines are the solution to the multiple\n pulley problem. This method can be used to highlight regions in\n a two-dimensional space.","Published":"2013-04-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"plotrix","Version":"3.6-5","Title":"Various Plotting Functions","Description":"Lots of plots, various labeling, axis and color scaling functions.","Published":"2017-05-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"plotROC","Version":"2.0.1","Title":"Generate Useful ROC Curve Charts for Print and Interactive Use","Description":"Most ROC curve plots obscure the cutoff values and inhibit\n interpretation and comparison of multiple curves. This attempts to address\n those shortcomings by providing plotting and interactive tools. Functions\n are provided to generate an interactive ROC curve plot for web use, and\n print versions. 
A Shiny application implementing the functions is also\n included.","Published":"2016-02-05","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"plotrr","Version":"0.2.0","Title":"Making Visual Exploratory Data Analysis Easier","Description":"Functions for making visual exploratory data analysis easier.","Published":"2017-02-12","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"plotSEMM","Version":"2.2","Title":"Graphing Nonlinear Relations Among Latent Variables from\nStructural Equation Mixture Models","Description":"Contains a graphical user interface to generate the diagnostic\n plots proposed by Bauer (2005) and Pek & Chalmers (2015) to investigate\n nonlinear bivariate relationships in latent regression models using structural\n equation mixture models (SEMMs).","Published":"2016-05-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"plotwidgets","Version":"0.4","Title":"Spider Plots, ROC Curves, Pie Charts and More for Use in Other\nPlots","Description":"Small self-contained plots for use in larger plots or to\n delegate plotting in other functions. Also contains a number of\n alternative color palettes and HSL color space based tools to modify\n colors or palettes.","Published":"2016-09-06","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"plRasch","Version":"1.0","Title":"Log Linear by Linear Association models and Rasch family models\nby pseudolikelihood estimation","Description":"Fit Log Linear by Linear Association models and Rasch family models by pseudolikelihood estimation.","Published":"2014-01-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PLRModels","Version":"1.1","Title":"Statistical inference in partial linear regression models","Description":"This package provides statistical inference tools applied to Partial Linear Regression (PLR)\n models.
Specifically, point estimation, confidence intervals estimation, bandwidth selection, goodness-of-fit tests and analysis of\n covariance are considered. \n Kernel-based methods, combined with ordinary least squares estimation, are used and time series \n errors are allowed. In addition, these techniques are also implemented for both parametric (linear) \n and nonparametric regression models.","Published":"2014-01-01","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"pls","Version":"2.6-0","Title":"Partial Least Squares and Principal Component Regression","Description":"Multivariate regression methods\n\tPartial Least Squares Regression (PLSR), Principal Component\n\tRegression (PCR) and Canonical Powered Partial Least Squares (CPPLS).","Published":"2016-12-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"PLSbiplot1","Version":"0.1","Title":"The Partial Least Squares (PLS) Biplot","Description":"Principal Component Analysis (PCA) biplots, Covariance monoplots\n and biplots, Partial Least Squares (PLS) biplots, Partial Least Squares for\n Generalized Linear Model (PLS-GLM) biplots, Sparse Partial Least Squares\n (SPLS) biplots and Sparse Partial Least Squares for Generalized Linear\n Model (SPLS-GLM) biplots.","Published":"2014-11-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"plsdepot","Version":"0.1.17","Title":"Partial Least Squares (PLS) Data Analysis Methods","Description":"plsdepot contains different methods for PLS analysis of\n one or two data tables such as Tucker's Inter-Battery, NIPALS,\n SIMPLS, SIMPLS-CA, PLS Regression, and PLS Canonical Analysis.\n The main reference for this software is the awesome book (in\n French) 'La Regression PLS: Theorie et Pratique' by Michel\n Tenenhaus.","Published":"2012-11-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"plsdof","Version":"0.2-7","Title":"Degrees of Freedom and Statistical Inference for Partial Least\nSquares Regression","Description":"The plsdof 
package provides Degrees of Freedom estimates\n for Partial Least Squares (PLS) Regression. Model selection for\n PLS is based on various information criteria (AIC, BIC, GMDL)\n or on cross-validation. Estimates for the mean and covariance\n of the PLS regression coefficients are available. They allow\n the construction of approximate confidence intervals and the\n application of test procedures. Further, cross-validation\n procedures for Ridge Regression and Principal Components\n Regression are available.","Published":"2014-09-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"plsgenomics","Version":"1.3-2","Title":"PLS Analyses for Genomics","Description":"Routines for PLS-based genomic analyses,\n implementing PLS methods for classification with\n microarray data and prediction of transcription factor\n activities from combined ChIP-chip analysis. The >=1.2-1\n versions include two new classification methods for microarray\n data: GSIM and Ridge PLS. The >=1.3 versions include a\n new classification method combining variable selection and\n compression in the logistic regression context: RIRLS-SPLS; and\n an adaptive version of the sparse PLS.","Published":"2017-05-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"plspm","Version":"0.4.9","Title":"Tools for Partial Least Squares Path Modeling (PLS-PM)","Description":"Partial Least Squares Path Modeling (PLS-PM)\n analysis for both metric and\n non-metric data, as well as REBUS analysis.","Published":"2017-04-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"plspm.formula","Version":"1.0.1","Title":"Formula Based PLS Path Modeling","Description":"The main objective is to make PLS Path Modeling with R easy, using the package 'plspm'. 
It automatically computes the inner matrix and the outer list that the 'plspm' function needs, simply by specifying the model using formulas.","Published":"2015-12-30","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"plspolychaos","Version":"1.1-1","Title":"Sensitivity Indexes from Polynomial Chaos Expansions and PLS","Description":"Computation of sensitivity indexes by using a method based on a truncated Polynomial Chaos Expansion of the response and regression PLS, for computer models with correlated continuous inputs, whatever the input distribution. The truncated Polynomial Chaos Expansion is built from the multivariate Legendre orthogonal polynomials. \n The number of runs (rows) can be smaller than the number of monomials. It is possible to select only the most significant monomials. \n Of course, this package can also be used if the inputs are independent. Note that, when they are independent and uniformly distributed, the package 'polychaosbasics' is more appropriate. ","Published":"2017-03-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"plsRbeta","Version":"0.2.0","Title":"Partial Least Squares Regression for Beta Regression Models","Description":"Provides Partial least squares Regression for (weighted) beta regression models and k-fold cross-validation of such models using various criteria. It allows for missing data in the explanatory variables. 
Bootstrap confidence interval construction is also available.","Published":"2014-12-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"plsRcox","Version":"1.7.2","Title":"Partial Least Squares Regression for Cox Models and Related\nTechniques","Description":"Provides Partial least squares Regression and various regular, sparse, or kernel techniques for fitting Cox models in high-dimensional settings.","Published":"2014-12-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"plsRglm","Version":"1.1.1","Title":"Partial Least Squares Regression for Generalized Linear Models","Description":"Provides (weighted) Partial least squares Regression for generalized linear models and repeated k-fold cross-validation of such models using various criteria. It allows for missing data in the explanatory variables. Bootstrap confidence interval construction is also available.","Published":"2014-12-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"plsVarSel","Version":"0.9.1","Title":"Variable Selection in Partial Least Squares","Description":"Interfaces and methods for variable selection in Partial Least\n Squares. The methods include filter methods, wrapper methods and embedded\n methods. Both regression and classification are supported.","Published":"2016-10-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pltesim","Version":"0.1.2","Title":"Simulate Probabilistic Long-Term Effects in Models with Temporal\nDependence","Description":"Calculates and depicts probabilistic long-term effects\n in binary models with temporal dependence variables. The package performs\n two tasks. First, it calculates the change in the probability of the event\n occurring given a change in a theoretical variable. Second, it calculates\n the rolling difference in the future probability of the event for two\n scenarios: one where the event occurred at a given time and one where the\n event does not occur. 
The package is consistent with the recent movement to\n depict meaningful and easy-to-interpret quantities of interest with the\n requisite measures of uncertainty. It is the first to make it easy for\n researchers to interpret short- and long-term effects of explanatory\n variables in binary autoregressive models, which can have important\n implications for the correct interpretation of these models.","Published":"2017-03-15","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"plugdensity","Version":"0.8-3","Title":"Plug-in Kernel Density Estimation","Description":"Kernel density estimation with global bandwidth selection\n\t via \"plug-in\".","Published":"2011-11-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"plumber","Version":"0.3.2","Title":"An API Generator for R","Description":"Gives the ability to automatically generate and serve an HTTP API\n from R functions using the annotations in the R documentation around your\n functions.","Published":"2017-05-22","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"plumbr","Version":"0.6.9","Title":"Mutable and dynamic data models","Description":"The base R data.frame, like any vector, is\n copied upon modification. This behavior is at odds with\n that of GUIs and interactive graphics. To rectify this,\n plumbr provides a mutable, dynamic tabular data model.\n Models may be chained together to form the complex\n plumbing necessary for sophisticated graphical\n interfaces. 
Also included is a\n general framework for linking datasets; a typical\n use case would be a linked brush.","Published":"2014-02-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"plus","Version":"1.0","Title":"Penalized Linear Unbiased Selection","Description":"Efficient procedures for fitting entire regression\n sequences with different model types.","Published":"2012-05-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"plusser","Version":"0.4-0","Title":"A Google+ Interface for R","Description":"plusser provides an API interface to Google+ so that posts,\n profiles and pages can be automatically retrieved.","Published":"2014-04-27","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"plyr","Version":"1.8.4","Title":"Tools for Splitting, Applying and Combining Data","Description":"A set of tools that solves a common set of problems: you\n need to break a big problem down into manageable pieces, operate on each\n piece and then put all the pieces back together. For example, you might\n want to fit a model to each spatial location or time point in your study,\n summarise data by panels or collapse high-dimensional arrays to simpler\n summary statistics. The development of 'plyr' has been generously supported\n by 'Becton Dickinson'.","Published":"2016-06-08","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"PMA","Version":"1.0.9","Title":"Penalized Multivariate Analysis","Description":"Performs Penalized Multivariate Analysis: a penalized\n matrix decomposition, sparse principal components analysis, and\n sparse canonical correlation analysis, described in the\n following papers: (1) Witten, Tibshirani and Hastie (2009) A\n penalized matrix decomposition, with applications to sparse\n principal components and canonical correlation analysis.\n Biostatistics 10(3):515-534. 
(2) Witten and Tibshirani (2009)\n Extensions of sparse canonical correlation analysis, with\n applications to genomic data. Statistical Applications in\n Genetics and Molecular Biology 8(1): Article 28.","Published":"2013-03-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pmc","Version":"1.0.2","Title":"Phylogenetic Monte Carlo","Description":"Monte Carlo based model choice for applied phylogenetics of\n continuous traits. Method described in Carl Boettiger, Graham Coop,\n Peter Ralph (2012) Is your phylogeny informative? Measuring\n the power of comparative methods, Evolution 66 (7)\n 2240-51. doi:10.1111/j.1558-5646.2011.01574.x.","Published":"2016-12-05","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"pmcgd","Version":"1.1","Title":"pmcgd","Description":"Parsimonious Mixtures of Contaminated Gaussian Distributions","Published":"2013-12-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"pmclust","Version":"0.1-9","Title":"Parallel Model-Based Clustering using\nExpectation-Gathering-Maximization Algorithm for Finite Mixture\nGaussian Model","Description":"Aims to utilize model-based clustering (unsupervised)\n for high dimensional and ultra large data, especially in a distributed\n manner. The code employs pbdMPI to perform an\n expectation-gathering-maximization algorithm\n for finite mixture Gaussian\n models. Unstructured dispersion matrices are assumed in the\n Gaussian models. The implementation defaults to the single program\n multiple data programming model. 
The code can be executed\n through pbdMPI and is independent of most MPI applications.\n See the High Performance\n Statistical Computing website for more information, documents\n and examples.","Published":"2016-12-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PMCMR","Version":"4.1","Title":"Calculate Pairwise Multiple Comparisons of Mean Rank Sums","Description":"The Kruskal and Wallis one-way analysis of variance by ranks \n\t or van der Waerden's normal score test can be employed, \n\t if the data do not meet the assumptions \n\t for one-way ANOVA. Provided that significant differences \n\t were detected by the omnibus test, one may be interested \n\t in applying post-hoc tests for pairwise multiple comparisons \n\t (such as Nemenyi's test, Dunn's test, Conover's test,\n\t van der Waerden's test). Similarly, one-way ANOVA with repeated \n\t measures that is also referred to as ANOVA with unreplicated \n\t block design can also be conducted via the Friedman-Test \n\t or the Quade-test. The consequent post-hoc pairwise \n\t multiple comparison tests according to Nemenyi, Conover and Quade\n\t are also provided in this package. Finally, Durbin's test for \n\t a two-way balanced incomplete block design (BIBD) is also given\n\t in this package.","Published":"2016-01-06","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"pmhtutorial","Version":"1.0.0","Title":"Minimal Working Examples for Particle Metropolis-Hastings","Description":"Routines for state estimation in a linear\n Gaussian state space model and a simple stochastic volatility model using\n particle filtering. Parameter inference is also carried out in these models\n using the particle Metropolis-Hastings algorithm that includes the particle\n filter to provide an unbiased estimator of the likelihood. 
This package is \n a collection of minimal working examples of these algorithms and is only \n meant for educational use and as a start for learning to use them on your own.","Published":"2016-01-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"pMineR","Version":"0.31","Title":"Process Mining in Medicine","Description":"Allows building and training simple Process Mining (PM) models. The aim is to support PM specifically for the clinical domain from both administrative and clinical data.","Published":"2017-02-07","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"pmlr","Version":"1.0","Title":"Penalized Multinomial Logistic Regression","Description":"Extends the approach proposed by Firth (1993) for bias\n reduction of MLEs in exponential family models to the\n multinomial logistic regression model with general covariate\n types. Modification of the logistic regression score function\n to remove first-order bias is equivalent to penalizing the\n likelihood by the Jeffreys prior, and yields penalized maximum\n likelihood estimates (PLEs) that always exist. Hypothesis\n testing is conducted via likelihood ratio statistics. Profile\n confidence intervals (CI) are constructed for the PLEs.","Published":"2010-04-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pmml","Version":"1.5.2","Title":"Generate PMML for Various Models","Description":"The Predictive Model Markup Language (PMML) is an XML-based\n language which provides a way for applications to define statistical and\n data mining models and to share models between PMML-compliant applications.\n More information about PMML and the Data Mining Group can be found at http://\n www.dmg.org. 
The generated PMML can be imported into any PMML-consuming\n application, such as the Zementis ADAPA and UPPI scoring engines, which allow\n predictive models built in R to be deployed and executed on site, in the cloud\n (Amazon, IBM, and FICO), in-database (IBM Netezza, Pivotal, Sybase IQ, Teradata\n and Teradata Aster) or Hadoop (Datameer and Hive).","Published":"2017-02-27","License":"GPL (>= 2.1)","snapshot_date":"2017-06-23"} {"Package":"pmmlTransformations","Version":"1.3.1","Title":"Transforms Input Data from a PMML Perspective","Description":"Allows for data to be transformed before using\n it to construct models. Builds structures to allow functions in\n the PMML package to output transformation details in\n addition to the model in the resulting PMML file.","Published":"2017-02-27","License":"GPL (>= 2.1)","snapshot_date":"2017-06-23"} {"Package":"pmr","Version":"1.2.5","Title":"Probability Models for Ranking Data","Description":"Descriptive statistics (mean rank, pairwise frequencies, and marginal matrix), Analytic Hierarchy Process models (with Saaty's and Koczkodaj's inconsistencies), probability models (Luce models, distance-based models, and rank-ordered logit models) and visualization with multidimensional preference analysis for ranking data are provided. Currently, only complete rankings are supported by this package.","Published":"2015-05-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"png","Version":"0.1-7","Title":"Read and write PNG images","Description":"This package provides an easy and simple way to read, write and display bitmap images stored in the PNG format. 
It can read and write both files and in-memory raw vectors.","Published":"2013-12-03","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"pnmtrem","Version":"1.3","Title":"Probit-Normal Marginalized Transition Random Effects Models","Description":"An R package for Probit-Normal Marginalized Transition\n Random Effects Models","Published":"2013-05-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pnn","Version":"1.0.1","Title":"Probabilistic neural networks","Description":"The program pnn implements the algorithm proposed by\n Specht (1990). It is written in the R statistical language. It\n solves a common problem in automatic learning. Knowing a set of\n observations described by a vector of quantitative variables,\n we classify them into a given number of groups. Then, the\n algorithm is trained with this dataset and should guess\n afterwards the group of any new observation. This neural\n network has the main advantage of beginning generalization\n instantaneously even with a small set of known observations. It\n is delivered with four functions (learn, smooth, perf and\n guess) and a dataset. The functions are documented with\n examples and provided with unit tests.","Published":"2013-05-07","License":"AGPL","snapshot_date":"2017-06-23"} {"Package":"pocrm","Version":"0.9","Title":"Dose Finding in Drug Combination Phase I Trials Using PO-CRM","Description":"Provides functions to implement and simulate the partial order continual reassessment method (PO-CRM) for use in Phase I trials of combinations of agents. 
Provides a function for generating a set of initial guesses (skeleton) for the toxicity probabilities at each combination that correspond to the set of possible orderings of the toxicity probabilities specified by the user.","Published":"2015-04-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"POET","Version":"2.0","Title":"Principal Orthogonal ComplEment Thresholding (POET) Method","Description":"Estimate large covariance matrices in approximate factor\n models by thresholding principal orthogonal complements.","Published":"2016-06-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"pogit","Version":"1.1.0","Title":"Bayesian Variable Selection for a Poisson-Logistic Model","Description":"Bayesian variable selection for regression models of under-reported\n count data as well as for (overdispersed) Poisson, negative binomial and\n binomial logit regression models using spike and slab priors.","Published":"2016-04-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"PogromcyDanych","Version":"1.5","Title":"PogromcyDanych / DataCrunchers is the Massive Open Online Course\nthat Brings R and Statistics to the People","Description":"The data sets used in the online course ,,PogromcyDanych''. You can process data in many ways. The course Data Crunchers will introduce you to this variety. For this reason we will work on datasets of different sizes (from several to several hundred thousand rows), with various levels of complexity (from two to two thousand columns) and prepared in different formats (text data, quantitative data and qualitative data). All of these data sets were gathered in a single big package called PogromcyDanych to facilitate access to them. 
It contains all sorts of data sets such as data about offer prices of cars, results of opinion polls, information about changes in stock market indices, data about names given to newborn babies, ski jumping results, or information about treatment outcomes of breast cancer patients.","Published":"2015-03-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"poibin","Version":"1.2","Title":"The Poisson Binomial Distribution","Description":"This package implements both the exact and approximation\n methods for computing the cdf of the Poisson binomial\n distribution. It also provides the pmf, quantile function, and\n random number generation for the Poisson binomial distribution.","Published":"2013-02-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"PoiClaClu","Version":"1.0.2","Title":"Classification and clustering of sequencing data based on a\nPoisson model","Description":"Implements the methods described in the paper, Witten (2011) Classification and Clustering of Sequencing Data using a Poisson Model, Annals of Applied Statistics 5(4) 2493-2518.","Published":"2013-12-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"poilog","Version":"0.4","Title":"Poisson lognormal and bivariate Poisson lognormal distribution","Description":"Functions for obtaining the density, random deviates \n and maximum likelihood estimates of the Poisson lognormal \n distribution and the bivariate Poisson lognormal distribution.","Published":"2008-04-29","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"pointdensityP","Version":"0.2.1","Title":"Point Density for Geospatial Data","Description":"For every spatial point in a list, calculates the density \n and temporal tendency (average age) of points within a user defined \n neighborhood; also supports point density and temporal tendency \n visualization using 'ggmap'.","Published":"2015-06-03","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} 
{"Package":"pointRes","Version":"1.1.3","Title":"Analyzing Pointer Years and Components of Resilience","Description":"Functions to calculate and plot event and pointer years as well as components of resilience. Designed for dendroecological applications, but also suitable to analyze patterns in other ecological time series.","Published":"2016-12-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"poio","Version":"0.0-3","Title":"Input/Output Functionality for \"PO\" and \"POT\" Message\nTranslation Files","Description":"Read and write PO and POT files, for package translations.","Published":"2017-01-29","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"PoisBinNonNor","Version":"1.1","Title":"Data Generation with Poisson, Binary and Continuous Components","Description":"Generation of multiple count, binary and continuous variables simultaneously \n given the marginal characteristics and association structure. Throughout the package,\n the word 'Poisson' is used to imply count data under the assumption of Poisson distribution.","Published":"2016-05-13","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"poisbinom","Version":"1.0.1","Title":"A Faster Implementation of the Poisson-Binomial Distribution","Description":"Provides the probability, distribution, and quantile functions and random number generator for the Poisson-Binomial distribution. This package relies on FFTW to implement the discrete Fourier transform, so that it is much faster than the existing implementation of the same algorithm in R.","Published":"2017-05-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PoisBinOrd","Version":"1.2","Title":"Data Generation with Poisson, Binary and Ordinal Components","Description":"Generation of multiple count, binary and ordinal variables simultaneously \n given the marginal characteristics and association structure. 
Throughout the package,\n the word 'Poisson' is used to imply count data under the assumption of Poisson distribution.","Published":"2016-05-13","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"PoisBinOrdNonNor","Version":"1.3","Title":"Generation of Up to Four Different Types of Variables","Description":"Generation of a chosen number of count, binary, ordinal, and continuous (via Fleishman polynomials) random variables, with specified correlations and marginal properties.","Published":"2017-03-28","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"PoisBinOrdNor","Version":"1.4","Title":"Data Generation with Poisson, Binary, Ordinal and Normal\nComponents","Description":"Generation of multiple count, binary, ordinal and normal variables \n simultaneously given the marginal characteristics and association structure.","Published":"2017-03-28","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"poisDoubleSamp","Version":"1.1","Title":"Confidence Intervals with Poisson Double Sampling","Description":"Functions to create confidence intervals for ratios of Poisson\n rates under misclassification using double sampling.","Published":"2015-02-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"PoisNonNor","Version":"1.3","Title":"Simultaneous Generation of Count and Continuous Data","Description":"Generation of count (assuming Poisson distribution) and continuous data (using Fleishman polynomials) simultaneously. ","Published":"2017-03-28","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"PoisNor","Version":"1.1","Title":"Simultaneous Generation of Multivariate Data with Poisson and\nNormal Marginals","Description":"Generates multivariate data with count and continuous variables with a pre-specified correlation matrix. The count and continuous variables are assumed to have Poisson and normal marginals, respectively. 
The data generation mechanism is a combination of the normal to anything principle and a connection between Poisson and normal correlations in the mixture. ","Published":"2016-07-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"poisson","Version":"1.0","Title":"Simulating Homogeneous & Non-Homogeneous Poisson Processes","Description":"Contains functions and classes for simulating, plotting and analysing homogeneous and non-homogeneous Poisson processes.","Published":"2015-10-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"poisson.glm.mix","Version":"1.2","Title":"Fit high dimensional mixtures of Poisson GLMs","Description":"High dimensional mixtures of Poisson Generalized Linear models with three different parameterizations of Poisson means are considered. Moreover, partitioning the response variables into a set of blocks is possible. The package estimates parameters via the EM algorithm. For an efficient initialization, a random splitting small-EM is introduced.","Published":"2014-04-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"PoissonSeq","Version":"1.1.2","Title":"Significance analysis of sequencing data based on a Poisson log\nlinear model","Description":"This package implements a method for normalization,\n testing, and false discovery rate estimation for RNA-sequencing\n data. The description of the method is in Li J, Witten DM,\n Johnstone I, Tibshirani R (2012). Normalization, testing, and\n false discovery rate estimation for RNA-sequencing data.\n Biostatistics 13(3): 523-38. We estimate the sequencing depths\n of experiments using a new method based on a Poisson\n goodness-of-fit statistic, calculate a score statistic on the\n basis of a Poisson log-linear model, and then estimate the\n false discovery rate using a modified version of the permutation\n plug-in method. More detailed instructions as well as sample\n data are available at\n http://www.stanford.edu/~junli07/research.html. 
In this\n version, we changed the way of calculating log fold change for\n two-class data. The FDR estimation part remains unchanged.","Published":"2012-10-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"poistweedie","Version":"1.0","Title":"Poisson-Tweedie exponential family models","Description":"Simulation of Poisson-Tweedie models.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"poLCA","Version":"1.4.1","Title":"Polytomous variable Latent Class Analysis","Description":"Latent class analysis and latent class regression models \n for polytomous outcome variables. Also known as latent structure analysis.","Published":"2014-01-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"polidata","Version":"0.1.0","Title":"Political Data Interface in R","Description":"This package provides easy access to various political data APIs\n directly from R. For example, you can access Google Civic Information API\n (https://developers.google.com/civic-information/) or Sunlight Congress API\n (https://sunlightlabs.github.io/congress/) for US Congress data, and POPONG\n API (http://data.popong.com/) for South Korea National Assembly data.","Published":"2014-09-16","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"poliscidata","Version":"1.2.0","Title":"Datasets and Functions Featured in Pollock and Edwards R\nCompanion to Essentials of Political Analysis","Description":"Bundles the datasets and functions used in the book by Philip Pollock and Barry Edwards, An R Companion to Essentials of Political Analysis, Second Edition.","Published":"2016-10-25","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"pollen","Version":"0.52.00","Title":"Analysis of Aerobiological Data","Description":"Supports analysis of aerobiological data. 
Available features include determination of pollen season limits, replacement of outliers (Kasprzyk and Walanus (2014) ), and calculation of growing degree days.","Published":"2017-04-17","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"pollstR","Version":"2.0.0","Title":"Client for the HuffPost Pollster API","Description":"Client for the HuffPost Pollster API, which provides\n access to U.S. polls on elections and political opinion.","Published":"2017-02-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"polmineR","Version":"0.7.3","Title":"Toolkit for Corpus Analysis","Description":"Library for corpus analysis using the Corpus Workbench as an\n efficient back end for indexing and querying large corpora. The package offers\n functionality to flexibly create partitions and to carry out basic statistical\n operations (count, co-occurrences etc.). The original full text of documents\n can be reconstructed and inspected at any time. Beyond that, the package is\n intended to serve as an interface to packages implementing advanced statistical\n procedures. Respective data structures (document term matrices, term co-\n occurrence matrices etc.) can be created based on the indexed corpora.","Published":"2017-05-25","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"polspline","Version":"1.1.12","Title":"Polynomial Spline Routines","Description":"Routines for the polynomial spline fitting routines\n hazard regression, hazard estimation with flexible tails, logspline,\n lspec, polyclass, and polymars, by C. 
Kooperberg and co-authors.","Published":"2015-07-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"polyaAeppli","Version":"2.0","Title":"Implementation of the Polya-Aeppli distribution","Description":"Functions for evaluating the mass density, cumulative distribution function, quantile function and random variate generation for the Polya-Aeppli distribution, also known as the geometric compound Poisson distribution.","Published":"2014-03-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"polyapost","Version":"1.5","Title":"Simulating from the Polya Posterior","Description":"Simulate via Markov chain Monte Carlo (hit-and-run algorithm)\n a Dirichlet distribution conditioned to satisfy a finite set of linear\n equality and inequality constraints (hence to lie in a convex polytope\n that is a subset of the unit simplex).","Published":"2017-06-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"polychaosbasics","Version":"1.1-1","Title":"Sensitivity Indexes Calculated from Polynomial Chaos Expansions","Description":"Computation of sensitivity indexes by using a method based on a truncated Polynomial Chaos Expansion of the response.\n The necessary condition of the method is: the inputs must be uniformly and independently sampled. Since the inputs are uniformly distributed, the truncated Polynomial Chaos Expansion is built from the multivariate Legendre orthogonal polynomials.","Published":"2017-03-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Polychrome","Version":"0.9.3","Title":"Qualitative Palettes with Many Colors","Description":"Tools for creating, viewing, and assessing qualitative\n palettes with many (20-30 or more) colors.","Published":"2017-06-06","License":"Apache License (== 2.0)","snapshot_date":"2017-06-23"} {"Package":"polyclip","Version":"1.6-1","Title":"Polygon Clipping","Description":"R port of Angus Johnson's open source library Clipper. 
Performs polygon clipping operations (intersection, union, set minus, symmetric difference) for polygonal regions of arbitrary complexity, including holes. Computes offset polygons (spatial buffer zones, morphological dilations, Minkowski dilations) for polygonal regions and polygonal lines. Computes Minkowski Sum of general polygons. There is a function for removing self-intersections from polygon data.","Published":"2017-03-22","License":"BSL","snapshot_date":"2017-06-23"} {"Package":"polycor","Version":"0.7-9","Title":"Polychoric and Polyserial Correlations","Description":"Computes polychoric and polyserial correlations by quick "two-step" methods or ML, \n optionally with standard errors; tetrachoric and biserial correlations are special cases.","Published":"2016-08-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"polyCub","Version":"0.6.0","Title":"Cubature over Polygonal Domains","Description":"The following methods for cubature (numerical integration)\n over polygonal domains are currently implemented:\n the two-dimensional midpoint rule as a simple wrapper around\n as.im.function() from package 'spatstat' (Baddeley and Turner, 2005),\n the product Gauss cubature by Sommariva and Vianello (2007),\n an adaptive cubature for isotropic functions via line integrate()\n along the boundary (Meyer and Held, 2014),\n and quasi-exact methods specific to the integration of the\n bivariate Gaussian density over polygonal and circular domains\n (based on formulae from the Abramowitz and Stegun (1972) handbook).\n For cubature over simple hypercubes, the packages 'cubature' and\n 'R2Cuba' are more appropriate.","Published":"2017-05-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"polyfreqs","Version":"1.0.2","Title":"Bayesian Population Genomics in Autopolyploids","Description":"Implements a Gibbs sampling algorithm to perform Bayesian inference\n on biallelic SNP frequencies, genotypes and heterozygosity (observed and\n expected) in a 
population of autopolyploids. See the published paper in\n Molecular Ecology Resources: Blischak et al. (2016) .","Published":"2016-12-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"polynom","Version":"1.3-9","Title":"A Collection of Functions to Implement a Class for Univariate\nPolynomial Manipulations","Description":"A collection of functions to implement a class for univariate\n polynomial manipulations.","Published":"2016-12-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"PolynomF","Version":"0.94","Title":"Polynomials in R","Description":"Implements univariate polynomial operations in R","Published":"2010-07-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"PolyPatEx","Version":"0.9.2","Title":"Paternity Exclusion in Autopolyploid Species","Description":"Functions to perform paternity exclusion via allele\n matching, in autopolyploid species having ploidy 4, 6, or 8. The\n marker data used can be genotype data (copy numbers known) or\n 'allelic phenotype data' (copy numbers not known).","Published":"2016-04-11","License":"GPL-3 + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"polypoly","Version":"0.0.2","Title":"Helper Functions for Orthogonal Polynomials","Description":"Tools for reshaping, plotting, and manipulating matrices of orthogonal polynomials.","Published":"2017-05-27","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"polysat","Version":"1.7-0","Title":"Tools for Polyploid Microsatellite Analysis","Description":"A collection of tools to handle microsatellite data of\n any ploidy (and samples of mixed ploidy) where allele copy number is not\n known in partially heterozygous genotypes. It can import and export data in\n ABI 'GeneMapper', 'Structure', 'ATetra', 'Tetrasat'/'Tetra', 'GenoDive', 'SPAGeDi',\n 'POPDIST', 'STRand', and binary presence/absence formats. 
It can calculate\n pairwise distances between individuals using a stepwise mutation model or\n infinite alleles model, with or without taking ploidies and allele frequencies\n into account. These distances can be used for the calculation of clonal\n diversity statistics or used for further analysis in R. Allelic diversity\n statistics and Polymorphic Information Content are also available. polysat can \n assist the user in estimating the ploidy of samples, and it can estimate allele \n frequencies in populations, calculate pairwise or global differentiation statistics \n based on those frequencies, and export allele frequencies to 'SPAGeDi' and 'adegenet'. \n Functions are also included for assigning alleles to isoloci in cases where one pair \n of microsatellite primers amplifies alleles from two or more independently\n segregating isoloci.","Published":"2017-05-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"polySegratio","Version":"0.2-4","Title":"Simulate and test marker dosage for dominant markers in\nautopolyploids","Description":"Perform classic chi-squared tests and Ripol et al(1999)\n binomial confidence interval approach for autopolyploid\n dominant markers. Also, dominant markers may be generated\n for families of offspring where either one or both of the\n parents possess the marker. Missing values and\n misclassified markers may be generated at random.","Published":"2014-03-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"polySegratioMM","Version":"0.6-3","Title":"Bayesian mixture models for marker dosage in autopolyploids","Description":"Fits Bayesian mixture models to estimate marker dosage for dominant markers on autopolyploids using JAGS (1.0 or greater) as outlined in Baker et al (2010). 
May be used in conjunction with polySegratio for simulation studies and comparison with standard methods.","Published":"2014-03-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"PolyTrend","Version":"1.2","Title":"Trend Classification Algorithm","Description":"This algorithm classifies trends into linear, quadratic, cubic, concealed and no-trend types. The "concealed trends" are those trends that possess quadratic or cubic forms, but the net change from the start of the time period to the end of the time period has not been significant. The "no-trend" category includes simple linear trends with a statistically insignificant slope coefficient.","Published":"2016-05-31","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"polywog","Version":"0.4-0","Title":"Bootstrapped Basis Regression with Oracle Model Selection","Description":"Routines for flexible functional form estimation via basis\n regression, with model selection via the adaptive LASSO or SCAD to prevent\n overfitting.","Published":"2014-04-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pom","Version":"1.1","Title":"POM - Patch Occupancy Models","Description":"This package fits a patch occupancy model","Published":"2013-05-20","License":"GNU General Public License","snapshot_date":"2017-06-23"} {"Package":"POMaSPU","Version":"1.0.0","Title":"Adaptive Association Tests for Multiple Phenotypes using\nProportional Odds Model (POM-aSPU)","Description":"The POM-aSPU test evaluates an association between an ordinal response and multiple phenotypes, for details see Kim and Pan (2017) .","Published":"2017-06-20","License":"GNU General Public License (>= 3)","snapshot_date":"2017-06-23"} {"Package":"Pomic","Version":"1.0.3","Title":"Pattern Oriented Modelling Information Criterion","Description":"Calculations of an information criterion are proposed to check the quality of simulation results of ABM/IBM or other non-linear rule-based models. 
The POMDEV measure is based on the KL divergence and likelihood theory. It indicates the deviance of simulation results from field observations. Once POMDEV scores and Metropolis-Hastings sampling on different model versions have been performed, POMIC scores can be calculated. This method is still under development and further work is needed to incorporate the assessment of multiple patterns.","Published":"2016-05-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"pomp","Version":"1.12","Title":"Statistical Inference for Partially Observed Markov Processes","Description":"Tools for working with partially observed Markov process (POMP) models (also known as stochastic dynamical systems, hidden Markov models, and nonlinear, non-Gaussian, state-space models). The package provides facilities for implementing POMP models, simulating them, and fitting them to time series data by a variety of frequentist and Bayesian methods. It is also a versatile platform for implementation of inference methods for general POMP models.","Published":"2017-04-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pooh","Version":"0.3-2","Title":"Partial Orders and Relations","Description":"Finds equivalence classes corresponding to a symmetric relation\n or undirected graph. Finds total order consistent with partial order\n or directed graph (so-called topological sort).","Published":"2017-03-27","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"pop","Version":"0.1","Title":"A Flexible Syntax for Population Dynamic Modelling","Description":"Population dynamic models underpin a range of analyses and applications in ecology and epidemiology. The various approaches for analysing population dynamics models (MPMs, IPMs, ODEs, POMPs, PVA) each require the model to be defined in a different way. This makes it difficult to combine different modelling approaches and data types to solve a given problem. 
'pop' aims to provide a flexible and easy-to-use common interface for constructing population dynamic models and enabling them to be fitted and analysed in many different ways.","Published":"2016-06-07","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"pop.wolf","Version":"0.1","Title":"Models for Simulating Wolf Populations","Description":"Simulate the dynamics of wolf populations using a specific Individual-Based Model (IBM) compiled in C.","Published":"2016-04-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"popbio","Version":"2.4.3","Title":"Construction and Analysis of Matrix Population Models","Description":"Construct and analyze projection matrix models from a demography study of marked individuals classified by age or stage. The package covers methods described in Matrix Population Models by Caswell (2001) and Quantitative Conservation Biology by Morris and Doak (2002).","Published":"2016-04-01","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"popdemo","Version":"0.2-3","Title":"Demographic Modelling Using Projection Matrices","Description":"Tools for modelling populations and demography using matrix projection \n models (MPMs). Designed to build on similar tools already available in 'popbio'. \n Specific foci are on indices of transient dynamics and use of control theory \n approaches, but 'popdemo' may also be useful for other implementations of MPMs, \n or matrix models in a more general sense. ","Published":"2016-05-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PopED","Version":"0.3.2","Title":"Population (and Individual) Optimal Experimental Design","Description":"Optimal experimental designs for both population and individual\n studies based on nonlinear mixed-effect models. Often this is based on a\n computation of the Fisher Information Matrix. 
This package was developed\n for pharmacometric problems, and examples and predefined models are available\n for these types of systems.","Published":"2016-12-12","License":"LGPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"popEpi","Version":"0.4.1","Title":"Functions for Epidemiological Analysis using Population Data","Description":"Enables computation of epidemiological statistics where e.g. \n counts or mortality rates of the reference population are used. Currently \n supported: excess hazard models, rates, mean survival times, relative \n survival, as well as standardized incidence and mortality ratios (SIRs/SMRs), \n all of which can be easily adjusted for e.g. age. \n Fast splitting and aggregation of 'Lexis' objects (from package 'Epi') \n and other computations achieved using 'data.table'. ","Published":"2016-11-08","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"PopGenKit","Version":"1.0","Title":"Useful functions for (batch) file conversion and data resampling\nin microsatellite datasets","Description":"There are two main purposes to this package. The first is\n to allow batch conversion of Genepop (Rousset 2008) input files\n for use with Arlequin (Excoffier and Lischer 2010), which has a\n simple GUI to analyze batch files. Two commonly used simulation\n software, BottleSim (Kuo & Janzen 2003) and Easypop (Balloux\n 2001) produce Genepop output files that can be analyzed this\n way. There are also functions to convert to and from BottleSim\n format, to quickly produce allele frequency tables or to\n convert a file directly for use in ordination analyses (e.g.\n principal component analysis). 
This package also includes\n functions to calculate allele rarefaction curves, confidence\n intervals on heterozygosity and allelic richness with\n resampling strategies (bootstrap and jackknife).","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"PopGenome","Version":"2.2.3","Title":"An Efficient Swiss Army Knife for Population Genomic Analyses","Description":"Provides efficient tools for population genomics data analysis,\n\table to process individual loci, large sets of loci, or whole genomes. PopGenome not only \n\timplements a wide range of population genetics statistics, but also facilitates the easy \n\timplementation of new algorithms by other researchers. PopGenome is optimized for speed via \n\tthe seamless integration of C code.","Published":"2017-03-24","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"PopGenReport","Version":"3.0.0","Title":"A Simple Framework to Analyse Population and Landscape Genetic\nData","Description":"Provides a beginner-friendly framework to analyse population genetic\n data. Based on 'adegenet' objects it uses 'knitr' to create comprehensive reports on spatial genetic data. \n For detailed information on how to use the package refer to the comprehensive\n tutorials or visit .","Published":"2017-02-01","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"popKorn","Version":"0.3-0","Title":"For interval estimation of mean of selected populations","Description":"Provides a suite of tools for various methods of estimating\n confidence intervals for the mean of selected populations.","Published":"2014-07-11","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"poplite","Version":"0.99.17.3","Title":"Tools for Simplifying the Population and Querying of SQLite\nDatabases","Description":"Provides objects and accompanying methods which facilitate populating and querying SQLite databases. 
","Published":"2017-02-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"poppr","Version":"2.4.1","Title":"Genetic Analysis of Populations with Mixed Reproduction","Description":"Population genetic analyses for hierarchical analysis of partially\n clonal populations built upon the architecture of the 'adegenet' package.","Published":"2017-04-14","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"popprxl","Version":"0.1.3","Title":"Read GenAlEx Files Directly from Excel","Description":"GenAlEx is a popular Excel macro for genetic analysis and the\n 'poppr' R package allows import of GenAlEx formatted CSV data for genetic\n data analysis in R. This package allows for the import of GenAlEx formatted\n Excel files, serving as a small 'poppr' add on for those who have trouble or\n simply do not want to export their data into CSV format. ","Published":"2016-12-05","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"popRange","Version":"1.1.3","Title":"popRange: A spatially and temporally explicit forward genetic\nsimulator","Description":"Runs a forward genetic simulator","Published":"2014-10-13","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"popReconstruct","Version":"1.0-4","Title":"Reconstruct Human Populations of the Recent Past","Description":"Implements the Bayesian hierarchical model described by Wheldon, Raftery, Clark and Gerland (see: http://www.csss.washington.edu/Papers/wp108.pdf) for simultaneously estimating age-specific population counts, fertility rates, mortality rates and net international migration flows, at the national level.","Published":"2014-12-05","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"popsom","Version":"4.2","Title":"Functions for Constructing and Evaluating Self-Organizing Maps","Description":"State of the art functions for constructing and evaluating self-organizing maps.","Published":"2017-06-01","License":"GPL","snapshot_date":"2017-06-23"} 
{"Package":"poptrend","Version":"0.1.0","Title":"Estimate Smooth and Linear Trends from Population Count Survey\nData","Description":"Functions to estimate and plot smooth or linear population trends, or population indices, \n from animal or plant count survey data.","Published":"2016-12-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"population","Version":"0.1","Title":"Models for Simulating Populations","Description":"Run population simulations using an Individual-Based Model (IBM) compiled in C.","Published":"2015-12-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"PopVar","Version":"1.2.1","Title":"Genomic Breeding Tools: Genetic Variance Prediction and\nCross-Validation","Description":"The main attribute of 'PopVar' is the prediction of genetic variance in bi-parental populations, from which the package derives its name. 'PopVar' contains a set of functions that use phenotypic and genotypic data from a set of candidate parents to 1) predict the mean, genetic variance, and superior progeny value of all, or a defined set of pairwise bi-parental crosses, and 2) perform cross-validation to estimate genome-wide prediction accuracy of multiple statistical models. More details are available in Mohammadi, Tiede, and Smith (2015). Crop Sci. doi:10.2135/cropsci2015.01.0030. A dataset 'think_barley.rda' is included for reference and examples.","Published":"2015-07-08","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"portes","Version":"2.1-4","Title":"Portmanteau Tests for Univariate and Multivariate Time Series\nModels","Description":"Simulate a univariate/multivariate data from seasonal and nonseasonal time series models. 
It implements the well-known univariate and multivariate portmanteau test statistics based on the asymptotic distributions and the Monte-Carlo significance tests.","Published":"2017-05-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"portfolio","Version":"0.4-7","Title":"Analysing equity portfolios","Description":"Classes for analysing and implementing equity portfolios.","Published":"2015-01-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PortfolioAnalytics","Version":"1.0.3636","Title":"Portfolio Analysis, Including Numerical Methods for Optimization\nof Portfolios","Description":"Portfolio optimization and analysis routines and graphics.","Published":"2015-04-19","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"PortfolioEffectEstim","Version":"1.4","Title":"High Frequency Price Estimators by PortfolioEffect","Description":"R interface to PortfolioEffect cloud service for estimating\n high frequency price variance, quarticity, microstructure noise variance, \n and other metrics in both aggregate and rolling window flavors. \n Constructed estimators could use client-side market data or access \n HF intraday price history for all major US Equities.\n See for more information on the \n PortfolioEffect high frequency portfolio analytics platform.","Published":"2016-09-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"PortfolioEffectHFT","Version":"1.8","Title":"High Frequency Portfolio Analytics by PortfolioEffect","Description":"R interface to PortfolioEffect cloud service for backtesting\n high frequency trading (HFT) strategies, intraday portfolio analysis\n and optimization. Includes auto-calibrating model pipeline for market\n microstructure noise, risk factors, price jumps/outliers, tail risk\n (high-order moments) and price fractality (long memory). Constructed\n portfolios could use client-side market data or access HF intraday price\n history for all major US Equities. 
See \n for more information on the PortfolioEffect high frequency portfolio\n analytics platform.","Published":"2017-03-24","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"PortfolioOptim","Version":"1.0.3","Title":"Small/Large Sample Portfolio Optimization","Description":"Two functions for financial portfolio optimization by linear programming are provided. One function implements Benders decomposition algorithm and can be used for very large data sets. The other, applicable for moderate sample sizes, finds optimal portfolio which has the smallest distance to a given benchmark portfolio.","Published":"2017-04-20","License":"GNU General Public License version 3","snapshot_date":"2017-06-23"} {"Package":"portfolioSim","Version":"0.2-7","Title":"Framework for simulating equity portfolio strategies","Description":"Classes that serve as a framework for designing equity\n portfolio simulations.","Published":"2013-07-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PortRisk","Version":"1.1.0","Title":"Portfolio Risk Analysis","Description":"Risk Attribution of a portfolio with Volatility Risk Analysis.","Published":"2015-11-01","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"PoSI","Version":"1.0","Title":"Valid Post-Selection Inference for Linear LS Regression","Description":"\n In linear LS regression, calculate for a given design matrix\n the multiplier K of coefficient standard errors such that the\n confidence intervals [b - K*SE(b), b + K*SE(b)] have a\n guaranteed coverage probability for all coefficient estimates\n b in any submodels after performing arbitrary model selection.","Published":"2017-01-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"postGIStools","Version":"0.2.1","Title":"Tools for Interacting with 'PostgreSQL' / 'PostGIS' Databases","Description":"Functions to convert geometry and 'hstore' data types from\n 'PostgreSQL' into standard R objects, as well as to simplify\n the import of 
R data frames (including spatial data frames) into 'PostgreSQL'.","Published":"2016-10-10","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"POT","Version":"1.1-6","Title":"Generalized Pareto Distribution and Peaks Over Threshold","Description":"Some functions useful to perform a Peak Over Threshold\n analysis in univariate and bivariate cases. A user's guide is\n available.","Published":"2016-11-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"potts","Version":"0.5-7","Title":"Markov Chain Monte Carlo for Potts Models","Description":"Do Markov chain Monte Carlo (MCMC) simulation of Potts models\n (Potts, 1952, ),\n which are the multi-color generalization of Ising models\n (so, as a special case, also simulates Ising models).\n Use the Swendsen-Wang algorithm (Swendsen and Wang, 1987,\n ) so MCMC is fast.\n Do maximum composite likelihood estimation of parameters\n (Besag, 1975, ,\n Lindsay, 1988, ).","Published":"2017-03-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PottsUtils","Version":"0.3-2","Title":"Utility Functions of the Potts Models","Description":"A package including several functions related to the Potts models.","Published":"2014-08-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"POUMM","Version":"1.3.0","Title":"The Phylogenetic Ornstein-Uhlenbeck Mixed Model","Description":"The Phylogenetic Ornstein-Uhlenbeck Mixed Model (POUMM) allows one to \n estimate the phylogenetic heritability of continuous traits, to test \n hypotheses of neutral evolution versus stabilizing selection, to quantify \n the strength of stabilizing selection, to estimate measurement error and to\n make predictions about the evolution of a phenotype and phenotypic variation \n in a population. 
The package implements combined maximum likelihood and \n Bayesian inference of the univariate Phylogenetic Ornstein-Uhlenbeck Mixed \n Model, fast parallel likelihood calculation, maximum likelihood \n inference of the genotypic values at the tips, functions for summarizing and\n plotting traces and posterior samples, functions for simulation of a univariate \n continuous trait evolution along a phylogenetic tree. A quick example on\n using the POUMM package can be found in the README. More elaborate\n examples and use-cases are provided in the vignette \n \"A User Guide to The POUMM R-package\".","Published":"2017-06-15","License":"GPL (>= 3.0)","snapshot_date":"2017-06-23"} {"Package":"powell","Version":"1.0-0","Title":"Powell's UObyQA algorithm","Description":"Optimizes a function using Powell's UObyQA algorithm.","Published":"2006-08-19","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"PoweR","Version":"1.0.5","Title":"Computation of Power and Level Tables for Hypothesis Tests","Description":"Functions for the computation of power and level tables for hypothesis tests, in LaTeX format, functions to build explanatory graphs for studying power of test statistics.","Published":"2016-02-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Power2Stage","Version":"0.4-5","Title":"Power and Sample-Size Distribution of 2-Stage Bioequivalence\nStudies","Description":"Contains functions to obtain the operational characteristics of \n bioequivalence studies with 2-stage designs (TSD) via simulations.","Published":"2017-01-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"powerAnalysis","Version":"0.2.1","Title":"Power Analysis in Experimental Design","Description":"Basic functions for power analysis and effect size calculation.","Published":"2017-02-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"powerbydesign","Version":"1.0.3","Title":"Power Estimates for ANOVA Designs","Description":"Functions for 
bootstrapping the power of ANOVA designs\n based on estimated means and standard deviations of the conditions.\n Please refer to the documentation of the boot.power.anova() function\n for further details.","Published":"2016-10-07","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"powerCompRisk","Version":"0.1.1","Title":"Power Analysis Tool for Joint Testing Hazards with Competing\nRisks Data","Description":"A power analysis tool for jointly testing the cause-1 cause-specific hazard and the any-cause hazard with competing risks data.","Published":"2017-05-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"powerEQTL","Version":"0.1.3","Title":"Power and Sample Size Calculation for eQTL Analysis","Description":"Power and sample size calculation for eQTL analysis\n based on ANOVA or simple linear regression. It can also calculate power/sample size \n for testing the association of a SNP to a continuous type phenotype.","Published":"2017-01-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"powerGWASinteraction","Version":"1.1.3","Title":"Power Calculations for GxE and GxG Interactions for GWAS","Description":"Analytical power calculations for GxE and GxG interactions for case-control studies of candidate genes and genome-wide association studies (GWAS). This includes power calculation for four two-step screening and testing procedures. It can also calculate power for GxE and GxG without any screening. ","Published":"2015-07-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"poweRlaw","Version":"0.70.0","Title":"Analysis of Heavy Tailed Distributions","Description":"An implementation of maximum likelihood estimators for a variety\n of heavy tailed distributions, including both the discrete and continuous\n power law distributions. 
Additionally, a goodness-of-fit based approach is\n used to estimate the lower cut-off for the scaling region.","Published":"2016-12-22","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"powerMediation","Version":"0.2.7","Title":"Power/Sample Size Calculation for Mediation Analysis","Description":"Functions to \n calculate power and sample size for testing\n (1) mediation effects; \n (2) the slope in a simple linear regression; \n (3) odds ratio in a simple logistic regression;\n (4) mean change for longitudinal study with 2 time points;\n (5) interaction effect in 2-way ANOVA; and\n (6) the slope in a simple Poisson regression.","Published":"2017-02-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PowerNormal","Version":"1.0.0","Title":"Power Normal Distribution","Description":"Miscellaneous functions for a descriptive analysis and \n initial inference for the power parameter of the Power \n Normal (PN) distribution. This collection will be extended \n to more distributions in the power family and the \n three-parameter model.","Published":"2017-06-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"powerpkg","Version":"1.5","Title":"Power analyses for the affected sib pair and the TDT design","Description":"(1) To estimate the power of testing for linkage using an\n affected sib pair design, as a function of the recurrence risk\n ratios. We will use analytical power formulae as implemented in\n R. These are based on a Mathematica notebook created by Martin\n Farrall. (2) To examine how the power of the transmission\n disequilibrium test (TDT) depends on the disease allele\n frequency, the marker allele frequency, the strength of the\n linkage disequilibrium, and the magnitude of the genetic\n effect. We will use an R program that implements the power\n formulae of Abel and Muller-Myhsok (1998). 
These formulae allow\n one to quickly compute power of the TDT approach under a\n variety of different conditions. This R program was modeled on\n Martin Farrall's Mathematica notebook.","Published":"2012-10-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"powerplus","Version":"3.1","Title":"Exponentiation Operations","Description":"Computation of matrix and scalar exponentiation.","Published":"2017-05-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"powerSurvEpi","Version":"0.0.9","Title":"Power and Sample Size Calculation for Survival Analysis of\nEpidemiological Studies","Description":"Functions to calculate power and\n sample size for testing main effect or interaction effect in\n the survival analysis of epidemiological studies\n (non-randomized studies), taking into account the \n correlation between the covariate of the\n interest and other covariates. Some calculations also take \n into account the competing risks and stratified analysis. \n This package also includes\n a set of functions to calculate power and sample size\n for testing main effect in the survival analysis of \n randomized clinical trials.","Published":"2015-07-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PowerTOST","Version":"1.4-5","Title":"Power and Sample Size Based on Two One-Sided t-Tests (TOST) for\n(Bio)Equivalence Studies","Description":"Contains functions to calculate power and sample size for\n various study designs used for bioequivalence studies. \n See function known.designs() for study designs covered. 
\n Moreover the package contains functions for power and sample size \n based on 'expected' power in case of uncertain (estimated) variability \n and/or uncertain theta0.\n -----\n Added are functions for the power and sample size for the ratio of \n two means with normally distributed data on the original scale \n (based on Fieller's confidence ('fiducial') interval).\n -----\n Contains further functions for power and sample size calculations based on\n non-inferiority t-test. This is not a TOST procedure but eventually useful \n if the question of 'non-superiority' must be evaluated.\n The power and sample size calculations based on non-inferiority test may \n also performed via 'expected' power in case of uncertain (estimated) \n variability and/or uncertain theta0.\n -----\n Contains functions power.scABEL() and sampleN.scABEL() to calculate power \n and sample size for the BE decision via scaled (widened) BE acceptance \n limits (EMA recommended) based on simulations.\n Contains also functions scABEL.ad() and sampleN.scABEL.ad() to iteratively\n adjust alpha in order to maintain the overall consumer risk in ABEL studies\n and adapt the sample size for the loss in power.\n Contains further functions power.RSABE() and sampleN.RSABE() to calculate \n power and sample size for the BE decision via reference scaled ABE criterion \n according to the FDA procedure based on simulations.\n Contains further functions power.NTIDFDA() and sampleN.NTIDFDA() to calculate \n power and sample size for the BE decision via the FDA procedure for NTID's \n based on simulations.\n Contains further functions power.HVNTID() and sampleN.HVNTID() to calculate \n power and sample size for the BE decision via the FDA procedure for \n highly variable NTID's (see FDA Dabigatran / rivaroxaban guidances)\n -----\n Contains functions for power analysis of a sample size plan for ABE \n (pa.ABE()), scaled ABE (pa.scABE()) and scaled ABE for NTID's (pa.NTIDFDA())\n analysing power if deviating 
from assumptions of the plan.\n -----\n Contains further functions for power calculations / sample size estimation\n for dose proportionality studies using the Power model.","Published":"2017-05-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PowerUpR","Version":"0.1.3","Title":"Power Analysis Tools for Multilevel Randomized Experiments","Description":"\n Statistical power analysis tools for designing multilevel randomized experiments.\n Includes functions to calculate statistical power (1 - type II error), minimum detectable effect size (MDES),\n minimum required sample size (MRSS), functions to solve constrained optimal sample allocation (COSA) problems,\n and to visualize duo or trio relationships between statistical power, MDES, MRSS, and a component of COSA.","Published":"2017-02-17","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"PP","Version":"0.6.1","Title":"Person Parameter Estimation","Description":"The PP package includes estimation of (MLE, WLE, MAP, EAP, ROBUST)\n person parameters for the 1,2,3,4-PL model and the GPCM (generalized\n partial credit model). The parameters are estimated under the assumption\n that the item parameters are known and fixed. The package is useful e.g. 
in\n the case that items from an item pool / item bank with known item parameters\n are administered to a new population of test-takers and an ability\n estimation for every test-taker is needed.","Published":"2017-05-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ppcor","Version":"1.1","Title":"Partial and Semi-Partial (Part) Correlation","Description":"Calculates partial and semi-partial\n (part) correlations along with p-values.","Published":"2015-12-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ppiPre","Version":"1.9","Title":"Predict Protein-Protein Interactions Based on Functional and\nTopological Similarities","Description":"Computing similarities between proteins based on their GO annotation, KEGG annotation and PPI network topology. It integrates seven features (TCSS, IntelliGO, Wang, KEGG, Jaccard, RA and AA) to predict PPIs using an SVM classifier. Some internal functions to calculate GO semantic similarities are re-used from R package GOSemSim authored by Guangchuang Yu.","Published":"2015-07-28","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ppls","Version":"1.6-1","Title":"Penalized Partial Least Squares","Description":"This package contains linear and nonlinear regression\n methods based on Partial Least Squares and Penalization\n Techniques. Model parameters are selected via cross-validation,\n and confidence intervals and tests for the regression\n coefficients can be conducted via jackknifing.","Published":"2014-09-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ppmlasso","Version":"1.1","Title":"Point Process Models with LASSO Penalties","Description":"Toolkit for fitting point process models with sequences of LASSO penalties (\"regularisation paths\"). Regularisation paths of Poisson point process models or area-interaction models can be fitted with LASSO, adaptive LASSO or elastic net penalties. 
A number of criteria are available to judge the bias-variance tradeoff.","Published":"2015-01-30","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"pps","Version":"0.94","Title":"Functions for PPS sampling","Description":"The pps package contains functions to select samples using\n PPS (probability proportional to size) sampling. It also\n includes a function for stratified simple random sampling, a\n function to compute joint inclusion probabilities for\n Sampford's method of PPS sampling, and a few utility functions.\n The user's guide pps-ug.pdf is included.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PPtree","Version":"2.3.0","Title":"Projection pursuit classification tree","Description":"Projection pursuit classification tree using LDA, Lr or PDA projection pursuit index","Published":"2014-05-08","License":"LGPL-2.1","snapshot_date":"2017-06-23"} {"Package":"PPtreeViz","Version":"2.0.1","Title":"Projection Pursuit Classification Tree Visualization","Description":"Tools for exploring projection pursuit classification tree using\n various projection pursuit indexes.","Published":"2016-12-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pqantimalarials","Version":"0.2","Title":"web tool for estimating under-five deaths caused by poor-quality\nantimalarials in sub-Saharan Africa","Description":"This package allows users to calculate the number of\n under-five child deaths caused by consumption of poor quality\n antimalarials across 39 sub-Saharan nations. The package supports one\n function, which starts an interactive web tool created using\n the shiny R package. 
The web tool runs locally on the user's machine.\n The web tool allows users to set input parameters (prevalence of poor\n quality antimalarials, case fatality rate of children who take poor\n quality antimalarials, and sample size) which are then used to perform\n an uncertainty analysis following the Latin hypercube\n sampling scheme. Users can download the output figures as PDFs, and the\n output data as CSVs. Users can also download their input parameters\n for reference. This package was designed to accompany the analysis\n presented in:\n J. Patrick Renschler, Kelsey Walters, Paul Newton, Ramanan Laxminarayan\n \"Estimated under-five deaths associated with poor-quality\n antimalarials in sub-Saharan Africa\", 2014. Paper submitted.","Published":"2014-09-04","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"prabclus","Version":"2.2-6","Title":"Functions for Clustering of Presence-Absence, Abundance and\nMultilocus Genetic Data","Description":"Distance-based parametric bootstrap tests for clustering with \n spatial neighborhood information. Some distance measures, \n clustering of presence-absence, abundance and multilocus genetic data \n for species delimitation, nearest neighbor \n based noise detection. Try package?prabclus for an overview. 
","Published":"2015-01-14","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"pracma","Version":"2.0.7","Title":"Practical Numerical Math Functions","Description":"\n Provides a large number of functions from numerical analysis and\n linear algebra, numerical optimization, differential equations,\n time series, plus some well-known special mathematical functions.\n Uses 'MATLAB' function names where appropriate to simplify porting.","Published":"2017-06-21","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"PracTools","Version":"0.4","Title":"Tools for Designing and Weighting Survey Samples","Description":"Contains functions for sample size calculation for survey samples using stratified or clustered one-, two-, and three-stage sample designs. Other functions compute variance components for multistage designs and sample sizes in two-phase designs. A number of example datasets are included.","Published":"2016-10-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pragma","Version":"0.1.3","Title":"Provides a pragma / directive / keyword syntax for R","Description":"pragma allows for the use of pragmas (also sometimes called\n directives or keywords). These allow assigning arbitrary\n functionality to a word without requiring the standard function\n call syntax, i.e. with parens.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"prais","Version":"0.1.1","Title":"Prais-Winsten Estimation Procedure for AR(1) Serial Correlation","Description":"The Prais-Winsten estimation procedure takes into account serial correlation of type AR(1) in a linear model. The procedure is an iterative method that recursively estimates the beta coefficients and the error autocorrelation of the specified model until convergence of rho, i.e. the AR(1) coefficient, is attained. 
All estimates are obtained by OLS.","Published":"2015-03-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"praise","Version":"1.0.0","Title":"Praise Users","Description":"Build friendly R packages that\n praise their users if they have done something\n good, or if they just need it to feel better.","Published":"2015-08-11","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"praktikum","Version":"0.1","Title":"Kvantitatiivsete meetodite praktikumi asjad / Functions used in\nthe course \"Quantitative methods in behavioural sciences\"\n(SHPH.00.004), University of Tartu","Description":"Useful functions for the quantitative models course\n (SHPH.00.004)","Published":"2014-02-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"prc","Version":"2015.6-24","Title":"Paired Response Curve","Description":"Estimation, prediction and testing for analyzing serial dilution assay data using paired response curve.","Published":"2015-06-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"prcbench","Version":"0.7.3","Title":"Testing Workbench for Precision-Recall Curves","Description":"A testing workbench for evaluating precision-recall curves under various conditions.","Published":"2017-04-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"prclust","Version":"1.3","Title":"Penalized Regression-Based Clustering Method","Description":"Clustering is unsupervised and exploratory in nature. Yet, it can be performed through penalized regression with grouping pursuit. In this package, we provide two algorithms for fitting the penalized regression-based clustering (PRclust) with non-convex grouping penalties, such as group truncated lasso, MCP and SCAD. One algorithm is based on quadratic penalty and difference convex method. Another algorithm is based on difference convex and ADMM, called DC-ADMM, which is more efficient. 
Generalized cross validation and a stability-based method are provided to select the tuning parameters. Rand index, adjusted Rand index and Jaccard index are provided to estimate the agreement between estimated cluster memberships and the truth.","Published":"2016-12-13","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"prcr","Version":"0.1.4","Title":"Person-Centered Analysis","Description":"Provides an easy-to-use yet adaptable set of tools to conduct person-centered analysis using a two-step clustering procedure. As described in Bergman and El-Khouri (1999) , hierarchical clustering is performed to determine the initial partition for the subsequent k-means clustering procedure.","Published":"2017-05-17","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"pre","Version":"0.2","Title":"Prediction Rule Ensembles","Description":"Derives prediction rule ensembles (PREs). Largely follows the \n procedure for deriving PREs as described in Friedman & Popescu (2008; \n ), with several adjustments and improvements. The \n main function pre() derives a prediction rule ensemble. Functions print(), \n plot(), coef() and importance() can be used to inspect the generated ensemble. \n Function predict() generates predicted values. Functions singleplot() and \n pairplot() depict dependence of the output on specified predictor variables. \n Function cvpre() performs full cross validation of a pre to calculate the \n expected prediction error. 
Functions interact() and bsnullinteract() can be \n used to assess interaction effects of predictor variables.","Published":"2017-04-26","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"precintcon","Version":"2.3.0","Title":"Precipitation Intensity, Concentration and Anomaly Analysis","Description":"It contains functions to analyze the precipitation\n intensity, concentration and anomaly.","Published":"2016-07-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"precrec","Version":"0.8.0","Title":"Calculate Accurate Precision-Recall and ROC (Receiver Operator\nCharacteristics) Curves","Description":"Accurate calculations and visualization of precision-recall and ROC (Receiver Operator Characteristics)\n curves.","Published":"2017-06-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"predatory","Version":"1.1","Title":"Tools for Detecting Predatory Publishers and Journals","Description":"Allows the user to check and find (allegedly) predatory journals based on Beall's list available at .\n As part of a research project, the data from the website has been scraped with web scraping algorithms and manual work. \n The package includes a search function and direct access to this database of predatory journals. The use of this tool\n should facilitate the detection of predatory publications by researchers and librarians.","Published":"2017-01-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"PredictABEL","Version":"1.2-2","Title":"Assessment of Risk Prediction Models","Description":"PredictABEL includes functions to assess the performance of\n risk models. 
The package contains functions for the various measures that are\n used in empirical studies, including univariate and multivariate odds ratios\n (OR) of the predictors, the c-statistic (or area under the receiver operating\n characteristic (ROC) curve (AUC)), Hosmer-Lemeshow goodness of fit test,\n reclassification table, net reclassification improvement (NRI) and\n integrated discrimination improvement (IDI). Also included are functions\n to create plots, such as risk distributions, ROC curves, calibration plot,\n discrimination box plot and predictiveness curves. In addition to functions\n to assess the performance of risk models, the package includes functions to\n obtain weighted and unweighted risk scores as well as predicted risks using\n logistic regression analysis. These logistic regression functions are\n specifically written for models that include genetic variables, but they\n can also be applied to models that are based on non-genetic risk factors only.\n Finally, the package includes a function to construct a simulated dataset with \n genotypes, genetic risks, and disease status for a hypothetical population, which \n is used for the evaluation of genetic risk models.","Published":"2014-12-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"prediction","Version":"0.2.0","Title":"Tidy, Type-Safe 'prediction()' Methods","Description":"A one-function package containing 'prediction()', a type-safe alternative to 'predict()' that always returns a data frame. The package currently supports common model types (e.g., \"lm\", \"glm\") from the 'stats' package, as well as numerous other model classes from other add-on packages. 
See the README or main package documentation page for a complete listing.","Published":"2017-04-19","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"predictionInterval","Version":"1.0.0","Title":"Prediction Interval Functions for Assessing Replication Study\nResults","Description":"A common problem faced by journal reviewers and authors is the question of\n whether the results of a replication study are consistent with the original\n published study. One solution to this problem is to examine the effect size\n from the original study and generate the range of effect sizes that could\n reasonably be obtained (due to random sampling) in a replication attempt\n (i.e., calculate a prediction interval). This package has functions that calculate\n the prediction interval for the correlation (i.e., r),\n standardized mean difference (i.e., d-value), and mean.","Published":"2016-08-20","License":"MIT License + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"PredictiveRegression","Version":"0.1-4","Title":"Prediction Intervals for Three Basic Statistical Models","Description":"Three prediction algorithms described in the paper\n \"On-line predictive linear regression\" Annals of Statistics 37,\n 1566 - 1590 (2009)","Published":"2012-10-29","License":"file LICENSE","snapshot_date":"2017-06-23"} {"Package":"predictmeans","Version":"0.99","Title":"Calculate Predicted Means for Linear Models","Description":"This package provides functions to diagnose \n and make inferences from various linear models, such as those obtained from 'aov', \n 'lm', 'glm', 'gls', 'lme', and 'lmer'. Inferences include predicted \n means and standard errors, contrasts, multiple comparisons, permutation tests and graphs. 
","Published":"2014-09-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PredictTestbench","Version":"1.1.3","Title":"Test Bench for Comparison of Data Prediction Models","Description":"Provides a Testbench for comparison of prediction models. This\n package is inspired by the 'imputeTestbench' package\n . It compares prediction\n models with reference to RMSE, MAE or MAPE parameters. It allows users to add newly\n proposed methods to the test bench and to compare them with other methods. The function\n 'prediction_append()' allows adding multiple methods to the existing\n methods available in the test bench. One/two step ahead prediction is also possible\n in the testbench.","Published":"2016-12-20","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"predmixcor","Version":"1.1-1","Title":"Classification rule based on Bayesian mixture models with\nfeature selection bias corrected","Description":"\"train_predict_mix\" predicts the binary response with\n binary features","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PredPsych","Version":"0.1","Title":"Predictive Approaches in Psychology","Description":"\n Recent years have seen an increased interest in novel methods\n for analyzing quantitative data from experimental psychology. Currently, however, they lack an\n established and accessible software framework. Many existing implementations provide no guidelines,\n consisting only of small code snippets or sets of packages. In addition, the use of existing packages\n often requires advanced programming experience. 'PredPsych' is a user-friendly toolbox based on\n machine learning predictive algorithms. 
It comprises multiple functionalities for multivariate\n analyses of quantitative behavioral data based on machine learning models.","Published":"2017-02-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"prefeR","Version":"0.1.1","Title":"R Package for Pairwise Preference Elicitation","Description":"Allows users to derive multi-objective weights from pairwise comparisons, which\n research shows is more repeatable, transparent, and intuitive than other techniques. These weights\n can be used to rank existing alternatives or to define a multi-objective utility function for optimization.","Published":"2017-02-23","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"preference","Version":"0.1.0","Title":"2-Stage Clinical Trial Design and Analysis","Description":"Design and analyze two-stage randomized trials with a continuous\n outcome measure. The package contains functions to compute the required sample\n size needed to detect a given preference, treatment, and selection effect;\n alternatively, the package contains functions that can report the study power\n given a fixed sample size. Finally, analysis functions are provided to test each\n effect using either summary data (i.e. means, variances) or raw study data.","Published":"2017-06-16","License":"LGPL-2","snapshot_date":"2017-06-23"} {"Package":"prefmod","Version":"0.8-33","Title":"Utilities to Fit Paired Comparison Models for Preferences","Description":"Generates design matrix for analysing real paired comparisons and derived paired comparison data (Likert type items/ratings or rankings) using a loglinear approach. Fits loglinear Bradley-Terry model (LLBT) exploiting an eliminate feature. Computes pattern models for paired comparisons, rankings, and ratings. Some treatment of missing values (MCAR and MNAR). 
Fits latent class (mixture) models for paired comparison, rating and ranking patterns using a non-parametric ML approach.","Published":"2015-11-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PreKnitPostHTMLRender","Version":"0.1.0","Title":"Pre-Knitting Processing and Post HTML-Rendering Processing","Description":"Dynamize headers or R code within 'Rmd' files to prevent proliferation of 'Rmd' files for similar reports. Add in external HTML document within 'rmarkdown' rendered HTML doc.","Published":"2016-06-06","License":"GPL (>= 2) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"PReMiuM","Version":"3.1.4","Title":"Dirichlet Process Bayesian Clustering, Profile Regression","Description":"Bayesian clustering using a Dirichlet process mixture model. This model is an alternative to regression models, non-parametrically linking a response vector to covariate data through cluster membership. The package allows Bernoulli, Binomial, Poisson, Normal, survival and categorical response, as well as Normal and discrete covariates. It also allows for fixed effects in the response model, where a spatial CAR (conditional autoregressive) term can also be included. Additionally, predictions may be made for the response, and missing values for the covariates are handled. Several samplers and label switching moves are implemented along with diagnostic tools to assess convergence. A number of R functions for post-processing of the output are also provided. In addition to fitting mixtures, it may additionally be of interest to determine which covariates actively drive the mixture components. This is implemented in the package as variable selection. 
","Published":"2016-12-28","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"prepdat","Version":"1.0.8","Title":"Preparing Experimental Data for Statistical Analysis","Description":"Prepares data for statistical analysis (e.g., analysis of variance;\n ANOVA) by enabling the user to easily and quickly merge (using the\n file_merge() function) raw data files into one merged table and then\n aggregate the merged table (using the prep() function) into a finalized\n table while keeping track of and summarizing every step of the preparation.\n The finalized table contains several possibilities for dependent measures of\n the dependent variable. Most suitable when measuring variables in an\n interval or ratio scale (e.g., reaction-times) and/or discrete values such\n as accuracy. Main functions included are file_merge() and prep(). The\n file_merge() function vertically merges individual data files (in a long\n format) in which each line is a single observation to one single dataset.\n The prep() function aggregates the single dataset according to any\n combination of grouping variables (i.e., between-subjects and\n within-subjects independent variables, respectively), and returns a data\n frame with a number of dependent measures for further analysis for each cell\n according to the combination of provided grouping variables. Dependent\n measures for each cell include among others means before and after rejecting\n all values according to a flexible standard deviation criterion, number of\n rejected values according to the flexible standard deviation criterion,\n proportions of rejected values according to the flexible standard deviation\n criterion, number of values before rejection, means after rejecting values\n according to procedures described in Van Selst & Jolicoeur (1994; suitable\n when measuring reaction-times), standard deviations, medians, means according\n to any percentile (e.g., 0.05, 0.25, 0.75, 0.95) and harmonic means. 
The data\n frame prep() returns can also be exported as a txt file to be used for\n statistical analysis in other statistical programs.","Published":"2016-09-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"preprocomb","Version":"0.3.0","Title":"Tools for Preprocessing Combinations","Description":"Preprocessing is often the most time-consuming phase in data analysis\n and preprocessing transformations are interdependent in unexpected\n ways. This package helps to make preprocessing faster and more effective. It\n provides an S4 framework for creating and evaluating preprocessing combinations\n for classification, clustering and outlier detection. The framework supports\n adding user-defined preprocessors and preprocessing phases. Default preprocessors\n can be used for low variance removal, missing value imputation, scaling, outlier\n removal, noise smoothing, feature selection and class imbalance correction.","Published":"2016-06-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"preprosim","Version":"0.2.0","Title":"Lightweight Data Quality Simulation for Classification","Description":"Data quality simulation can be used to check the robustness of data\n analysis findings and learn about the impact of data quality contaminations on\n classification. This package helps to add contaminations (noise, missing values,\n outliers, low variance, irrelevant features, class swap (inconsistency), class\n imbalance and decrease in data volume) to data and then evaluate the simulated\n data sets for classification accuracy. 
As a lightweight solution, simulation runs\n can be set up with no or minimal up-front effort.","Published":"2016-07-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"preproviz","Version":"0.2.0","Title":"Tools for Visualization of Interdependent Data Quality Issues","Description":"Data quality issues such as missing values and outliers are often\n interdependent, which makes preprocessing time-consuming and leads to\n suboptimal performance in knowledge discovery tasks. This package supports\n preprocessing decision making by visualizing interdependent data quality issues\n through means of feature construction. The user can define his own application\n domain specific constructed features that express the quality of a data point\n such as number of missing values in the point or use nine default features.\n The outcome can be explored with plot methods and the feature constructed data\n acquired with get methods.","Published":"2016-07-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"prereg","Version":"0.2.0","Title":"R Markdown Templates to Preregister Scientific Studies","Description":"The R Markdown templates in this package are based on the Center\n for Open Science Preregistration Challenge and the 'AsPredicted.org' questions.\n They are, thus, particularly suited to draft preregistration documents for\n these programs but can also be used internally.","Published":"2016-09-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"PresenceAbsence","Version":"1.1.9","Title":"Presence-Absence Model Evaluation","Description":"This package provides a set of functions useful when\n evaluating the results of presence-absence models. Package\n includes functions for calculating threshold dependent measures\n such as confusion matrices, pcc, sensitivity, specificity, and\n Kappa, and produces plots of each measure as the threshold is\n varied. 
It will calculate optimal threshold choice according to\n a choice of optimization criteria. It also includes functions\n to plot the threshold independent ROC curves along with the\n associated AUC (area under the curve).","Published":"2012-08-17","License":"Unlimited","snapshot_date":"2017-06-23"} {"Package":"presens","Version":"2.1.0","Title":"Interface for PreSens Fiber Optic Data","Description":"Makes output files from select PreSens Fiber Optic Oxygen\n Transmitters easier to work with in R. See for more\n information about PreSens (Precision Sensing GmbH). Note: this package is\n neither created nor maintained by PreSens.","Published":"2016-07-29","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"preseqR","Version":"3.1.0","Title":"Predicting the Number of Species in a Random Sample","Description":"The relation between the number of species and the number of individuals in a random sample is a classic problem dating back to Fisher (1943) . We generalize this problem to estimate the number of species represented at least r times in a random sample. In particular when r=1, it becomes the classic problem. We use a mixture of Poisson processes to model sampling procedures and apply a nonparametric empirical Bayes approach to obtain an estimator. 
For more information on preseqR, see Deng C, Daley T and Smith AD (2015) and Deng C and Smith AD (2016) .","Published":"2017-05-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"PResiduals","Version":"0.2-4","Title":"Probability-Scale Residuals and Residual Correlations","Description":"Computes probability-scale residuals and residual correlations\n for continuous, ordinal, binary, count, and time-to-event data.","Published":"2016-06-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"preText","Version":"0.5.0","Title":"Diagnostics to Assess the Effects of Text Preprocessing\nDecisions","Description":"Functions to assess the effects of different text preprocessing decisions on the inferences drawn from the resulting document-term matrices they generate.","Published":"2017-05-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"prettycode","Version":"1.0.0","Title":"Pretty Print R Code in the Terminal","Description":"Replace the standard print method for functions with one that\n performs syntax highlighting, using ANSI colors, if the terminal\n supports them.","Published":"2017-01-27","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"prettydoc","Version":"0.2.0","Title":"Creating Pretty Documents from R Markdown","Description":"Creating tiny yet beautiful documents and vignettes from R\n Markdown. The package provides the 'html_pretty' output format as an\n alternative to the 'html_document' and 'html_vignette' engines that\n convert R Markdown into HTML pages. Various themes and syntax highlight\n styles are supported.","Published":"2016-09-01","License":"Apache License (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"prettyGraphs","Version":"2.1.5","Title":"publication-quality graphics","Description":"prettyGraphs contains simple, crisp graphics. 
Graphics produced by prettyGraphs are publication-quality.","Published":"2013-12-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"prettymapr","Version":"0.2.1","Title":"Scale Bar, North Arrow, and Pretty Margins in R","Description":"Automates the process of creating a scale bar and north arrow in\n any package that uses base graphics to plot in R. Bounding box tools help find\n and manipulate extents. Finally, there is a function to automate the process\n of setting margins, plotting the map, scale bar, and north arrow, and resetting\n graphic parameters upon completion.","Published":"2017-04-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"prettyR","Version":"2.2","Title":"Pretty Descriptive Stats","Description":"Functions for conventionally formatting descriptive stats,\n reshaping data frames and formatting R output as HTML.","Published":"2015-10-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"prettyunits","Version":"1.0.2","Title":"Pretty, Human Readable Formatting of Quantities","Description":"Pretty, human readable formatting of quantities.\n Time intervals: 1337000 -> 15d 11h 23m 20s.\n Vague time intervals: 2674000 -> about a month ago.\n Bytes: 1337 -> 1.34 kB.","Published":"2015-07-13","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"prevalence","Version":"0.4.0","Title":"Tools for Prevalence Assessment Studies","Description":"The prevalence package provides Frequentist and Bayesian methods for prevalence assessment studies. IMPORTANT: the truePrev functions in the prevalence package call on JAGS (Just Another Gibbs Sampler), which therefore has to be available on the user's system. 
JAGS can be downloaded from http://mcmc-jags.sourceforge.net/.","Published":"2015-04-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PrevMap","Version":"1.4.1","Title":"Geostatistical Modelling of Spatially Referenced Prevalence Data","Description":"Provides functions for both likelihood-based\n and Bayesian analysis of spatially referenced prevalence data, and is\n also an extension of the 'geoR' package.","Published":"2017-06-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"prevR","Version":"3.3","Title":"Estimating Regional Trends of a Prevalence from a DHS","Description":"Spatial estimation of a prevalence surface\n or a relative risks surface, using data from a Demographic and Health\n Survey (DHS) or an analog survey.","Published":"2016-02-23","License":"CeCILL","snapshot_date":"2017-06-23"} {"Package":"pRF","Version":"1.2","Title":"Permutation Significance for Random Forests","Description":"Estimate False Discovery Rates (FDRs) for importance metrics from\n random forest runs.","Published":"2016-01-11","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"prim","Version":"1.0.16","Title":"Patient Rule Induction Method (PRIM)","Description":"Patient Rule Induction Method (PRIM) for bump hunting in high-dimensional data.","Published":"2015-09-08","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"primefactr","Version":"0.1.0","Title":"Use Prime Factorization for Computations","Description":"Use Prime Factorization for simplifying computations,\n for instance for ratios of large factorials.","Published":"2016-08-10","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"primer","Version":"1.0","Title":"Functions and data for A Primer of Ecology with R","Description":"Functions are primarily functions for systems of ordinary\n differential equations, difference equations, and eigenanalysis\n and projection of demographic matrices; data are for 
examples.","Published":"2012-05-16","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"primerTree","Version":"1.0.3","Title":"Visually Assessing the Specificity and Informativeness of Primer\nPairs","Description":"Identifies potential target sequences for a given set of primers\n and generates taxonomically annotated phylogenetic trees with the predicted\n amplification products.","Published":"2016-01-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"primes","Version":"0.1.0","Title":"Generate and Test for Prime Numbers","Description":"Functions to test whether a number is prime and generate the prime numbers within a specified range. Based around\n an implementation of Wilson's theorem for testing for an integer's primality.","Published":"2015-06-22","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"PRIMME","Version":"2.1-0","Title":"Eigenvalues and Singular Values and Vectors from Large Matrices","Description":"\n R interface to PRIMME, a C library for computing a few\n eigenvalues and their corresponding eigenvectors of a real symmetric or complex\n Hermitian matrix. It can also compute singular values and vectors of a square\n or rectangular matrix. It can find largest, smallest, or interior\n singular/eigenvalues and can use preconditioning to accelerate convergence. ","Published":"2017-04-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"PRIMsrc","Version":"0.7.0","Title":"PRIM Survival Regression Classification","Description":"Performs a unified treatment of Bump Hunting by Patient Rule Induction Method (PRIM) in Survival, Regression and Classification settings (SRC). The current version is a development release that only implements the case of a survival response. 
New features will be added as soon as they are available.","Published":"2017-05-29","License":"GPL (>= 3) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"princurve","Version":"1.1-12","Title":"Fits a Principal Curve in Arbitrary Dimension","Description":"Fits a principal curve to a data matrix in arbitrary\n dimensions.","Published":"2013-04-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"prinsimp","Version":"0.8-8","Title":"Finding and plotting simple basis vectors for multivariate data","Description":"Provides capabilities beyond principal components\n analysis to focus on finding structure in low variability\n subspaces. Constructs and plots simple basis vectors for\n pre-defined and user-defined measures of simplicity.","Published":"2013-11-02","License":"AGPL-3","snapshot_date":"2017-06-23"} {"Package":"printr","Version":"0.1","Title":"Automatically Print R Objects to Appropriate Formats According\nto the 'knitr' Output Format","Description":"Extends the S3 generic function knit_print() in 'knitr'\n to automatically print some objects using an appropriate format such as\n Markdown or LaTeX. For example, data frames are automatically printed as\n tables, and the help() pages can also be rendered in 'knitr' documents.","Published":"2017-05-19","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"prioritylasso","Version":"0.1.0","Title":"Analyzing Multiple Omics Data with an Offset Approach","Description":"Fits successive Lasso models for several blocks of (omics) data with different priorities and takes the predicted values as an offset for the next block.","Published":"2017-03-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"prism","Version":"0.0.7","Title":"Access Data from the Oregon State Prism Climate Project","Description":"Allows users to access the Oregon State Prism climate data. Using the web service API, data can easily be downloaded in bulk and loaded into R for spatial analysis. 
Some user friendly visualizations are also provided.","Published":"2015-11-16","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"PRISMA","Version":"0.2-6","Title":"Protocol Inspection and State Machine Analysis","Description":"Loads and processes huge text\n corpora processed with the sally toolbox ().\n sally acts as a very fast preprocessor which splits the text files into\n tokens or n-grams. These output files can then be read with the PRISMA\n package which applies testing-based token selection and has some\n replicate-aware, highly tuned non-negative matrix factorization and\n principal component analysis implementation which allows the processing of\n very big data sets even on desktop machines.","Published":"2017-02-28","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"PRISMAstatement","Version":"1.0.1","Title":"Plot Flow Charts According to the \"PRISMA\" Statement","Description":"Plot a PRISMA flow chart\n describing the identification, screening, eligibility and inclusion of studies in\n systematic reviews. PRISMA is an evidence-based minimum set of items for reporting\n in systematic reviews and meta-analyses. PRISMA focuses on the reporting of reviews\n evaluating randomized trials, but can also be used as a basis for reporting\n systematic reviews of other types of research, particularly evaluations of\n interventions.","Published":"2016-10-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"PrivateLR","Version":"1.2-21","Title":"Differentially Private Regularized Logistic Regression","Description":"PrivateLR implements two differentially private algorithms for \n estimating L2-regularized logistic regression coefficients. A randomized\n algorithm F is epsilon-differentially private (C. 
Dwork, Differential\n Privacy, ICALP 2006), if \n |log(P(F(D) in S)) - log(P(F(D') in S))| <= epsilon\n for any pair D, D' of datasets that differ in exactly one element, any\n set S, and the randomness is taken over the choices F makes. ","Published":"2014-10-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"prLogistic","Version":"1.2","Title":"Estimation of Prevalence Ratios using Logistic Models","Description":"Estimation of prevalence ratios using logistic models and confidence intervals with delta and bootstrap methods.","Published":"2013-09-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pro","Version":"0.1.1","Title":"Point-Process Response Model for Optogenetics","Description":"Optogenetics is a new tool to study neuronal circuits that have been genetically modified to allow stimulation by flashes of light. This package implements the methodological framework, Point-process Response model for Optogenetics (PRO), for analyzing data from these experiments. This method provides explicit nonlinear transformations to link the flash point-process with the spiking point-process. Such response functions can be used to provide important and interpretable scientific insights into the properties of the biophysical process that governs neural spiking in response to optogenetic stimulation.","Published":"2015-09-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"prob","Version":"1.0-0","Title":"Elementary Probability on Finite Sample Spaces","Description":"\n A framework for performing elementary probability\n calculations on finite sample spaces, which may be represented by data frames\n or lists. 
Functionality includes setting up sample spaces, counting tools,\n defining probability spaces, performing set algebra, calculating probability\n and conditional probability, tools for simulation and checking the law of\n large numbers, adding random variables, and finding marginal distributions.\n Characteristic functions for all base R distributions are included.","Published":"2017-02-16","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"probemod","Version":"0.2.1","Title":"Statistical Tools for Probing Moderation Effects","Description":"Contains functions that are useful for probing moderation effects (or interactions) including techniques such as pick-a-point (also known as spotlight analysis) and Johnson-Neyman (also known as floodlight analysis). A plot function is also provided to facilitate visualization of results from each of these techniques.","Published":"2015-04-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"probFDA","Version":"1.0.1","Title":"Probabilistic Fisher Discriminant Analysis","Description":"Probabilistic Fisher discriminant analysis (pFDA) is a probabilistic version of the popular and powerful Fisher linear discriminant analysis for dimensionality reduction and classification.","Published":"2015-05-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ProbForecastGOP","Version":"1.3.2","Title":"Probabilistic weather forecast using the GOP method","Description":"The ProbForecastGOP package contains a main function,\n called ProbForecastGOP, and other functions to produce\n probabilistic weather forecasts of weather fields using the\n Geostatistical Output Perturbation (GOP) method of Gel,\n Raftery, and Gneiting (JASA, 2004).","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ProbitSpatial","Version":"1.0","Title":"Probit with Spatial Dependence, SAR and SEM Models","Description":"Binomial Spatial Probit models for big 
data.","Published":"2016-03-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"probsvm","Version":"1.00","Title":"probsvm: Class probability estimation for Support Vector\nMachines","Description":"This package provides multiclass conditional probability\n estimation for the SVM, which is free of\n distributional assumptions.","Published":"2013-05-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ProbYX","Version":"1.1-0","Title":"Inference for the Stress-Strength Model R = P(Y<X)","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"proccalibrad","Version":"0.14","Title":"Extraction of Bands from MODIS Calibrated Radiances MOD02 NRT","Description":"Package for processing downloaded MODIS Calibrated radiances\n Product HDF files. Specifically, MOD02 calibrated radiance product files, and\n the associated MOD03 geolocation files (for MODIS-TERRA). The package will be\n most effective if the user installs MRTSwath (MODIS Reprojection Tool for swath\n products; , and\n adds the directory with the MRTSwath executable to the default R PATH by editing\n ~/.Rprofile.","Published":"2016-07-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"processcontrol","Version":"0.1.0","Title":"Statistical Process Control Charts","Description":"Generate time series chart for individual values with mean and +/-\n 3 standard deviation lines and the corresponding mR chart with the upper control\n limit. Also execute the 8 Shewhart stability run tests and display the violations.","Published":"2016-03-31","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"processmapR","Version":"0.1.0","Title":"Construct Process Maps Using Event Data","Description":"Visualization of process maps based on\n event logs, in the form of directed graphs. 
Part of the 'bupaR' framework.","Published":"2017-06-19","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"processmonitR","Version":"0.1.0","Title":"Building Process Monitoring Dashboards","Description":"Functions for constructing dashboards for business process monitoring. Building on the event log objects class from package 'bupaR'. Allows the user to assemble custom shiny dashboards based on process data.","Published":"2017-06-18","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"processx","Version":"2.0.0","Title":"Execute and Control System Processes","Description":"Portable tools to run system processes in the background.\n It can check if a background process is running; wait on a background\n process to finish; get the exit status of finished processes; kill\n background processes and their children; restart processes. It can read\n the standard output and error of the processes, using non-blocking\n connections. 'processx' can poll a process for standard output or\n error, with a timeout. It can also poll several processes at once.","Published":"2017-05-30","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ProDenICA","Version":"1.0","Title":"Product Density Estimation for ICA using tilted Gaussian density\nestimates","Description":"A direct and flexible method for estimating an ICA model.\n This approach estimates the densities for each component\n directly via a tilted Gaussian. The tilt functions are\n estimated via a GAM Poisson model. Details can be found in\n \"Elements of Statistical Learning (2nd Edition)\" Section 14.7.4","Published":"2010-05-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"prodigenr","Version":"0.3.0","Title":"Research Project Directory Generator","Description":"Create a project directory structure, along with typical files\n for that project. This allows projects to be quickly and easily created,\n as well as for them to be standardized. 
Designed specifically with scientists\n in mind (mainly bio-medical researchers, but likely applies to other fields).","Published":"2016-07-29","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"prodlim","Version":"1.6.1","Title":"Product-Limit Estimation for Censored Event History Analysis","Description":"Fast and user friendly implementation of nonparametric estimators\n for censored event history (survival) analysis. Kaplan-Meier and\n Aalen-Johansen methods.","Published":"2017-03-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"productivity","Version":"0.2.0","Title":"Indices of Productivity Using Data Envelopment Analysis (DEA)","Description":"\n Various transitive measures of productivity and profitability, in levels and changes, are computed. \n In addition to the classic Malmquist productivity index, the 'productivity' package also contains the \n multiplicatively complete and transitive Färe-Primont and Lowe indices.\n These indices are also decomposed into different components providing insightful information on \n the sources of productivity and profitability improvements. \n In the use of Malmquist productivity index, the technological change index is further decomposed \n into bias technological change components.\n For the transitive Färe-Primont and Lowe measures, it is possible to rule out technological change.\n The package also allows the user to prohibit negative technological change.\n All the estimations are based on the nonparametric Data Envelopment Analysis (DEA) and several \n assumptions regarding returns to scale are available (i.e. CRS, VRS, NIRS, NDRS).\n The package allows parallel computing by default, depending on the user's computer configuration.","Published":"2017-05-11","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"productplots","Version":"0.1.1","Title":"Product Plots for R","Description":"Framework for visualising tables of counts, proportions\n and probabilities. 
The framework is called product plots, alluding to\n the computation of area as a product of height and width, and the\n statistical concept of generating a joint distribution from the\n product of conditional and marginal distributions. The framework,\n with extensions, is sufficient to encompass over 20 visualisations\n previously described in fields of statistical graphics and 'infovis',\n including bar charts, mosaic plots, 'treemaps', equal area plots and\n fluctuation diagrams.","Published":"2016-07-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"prof.tree","Version":"0.1.0","Title":"An Alternative Display Profiling Data as Tree Structure","Description":"An alternative data structure for the profiling information\n generated by Rprof().","Published":"2016-05-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PROFANCY","Version":"1.0","Title":"The package can prioritize candidate disease metabolites based\non global functional relationships between metabolites in the\ncontext of metabolic pathways","Description":"The package can prioritize the candidate disease metabolites based on the assumption \n that functionally related metabolites tend to associate with the same or similar \n diseases in the context of metabolic pathways. The PROFANCY package (1) prioritizes the \n disease metabolites from global functional similarity and local modularity of the \n metabolic network; (2) allows users to select default metabolites or input their \n metabolites of interest as seed nodes or candidate nodes; (3) can prioritize the candidate \n metabolites in KEGG or EHMN metabolic network. ","Published":"2013-07-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"profdpm","Version":"3.3","Title":"Profile Dirichlet Process Mixtures","Description":"This package facilitates profile inference (inference at\n the posterior mode) for a class of product partition models\n (PPM). 
The Dirichlet process mixture is currently the only\n available member of this class. These methods search for the\n maximum posterior (MAP) estimate for the data partition in a\n PPM.","Published":"2013-05-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ProfessR","Version":"2.3-5","Title":"Grades Setting and Exam Maker","Description":"Programs to determine student grades and create\n examinations from question banks. Programs will create numerous\n multiple choice exams, randomly shuffled, for different versions of the same question list.","Published":"2017-03-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ProfileLikelihood","Version":"1.1","Title":"Profile Likelihood for a Parameter in Commonly Used Statistical\nModels","Description":"This package provides profile likelihoods for a parameter of interest in commonly used statistical models. The models include linear models, generalized linear models, proportional odds models, linear mixed-effects models, and linear models for longitudinal responses fitted by generalized least squares. 
The package also provides plots for normalized profile likelihoods as well as the maximum profile likelihood estimates and the kth likelihood support intervals.","Published":"2011-11-19","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"profileModel","Version":"0.5-9","Title":"Tools for profiling inference functions for various model\nclasses","Description":"profileModel provides tools that can be used to calculate, evaluate, plot and use for inference the profiles of *arbitrary* inference functions for *arbitrary* 'glm'-like fitted models with linear predictors.","Published":"2013-08-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"profileR","Version":"0.3-4","Title":"Profile Analysis of Multivariate Data in R","Description":"A suite of multivariate methods and data visualization tools to implement profile analysis and cross-validation techniques described in Davison & Davenport (2002) , Bulut (2013) , and other published and unpublished resources. The package includes routines to perform criterion-related profile analysis, profile analysis via multidimensional scaling, moderated profile analysis, profile analysis by group, and a within-person factor model to derive score profiles.","Published":"2017-02-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"profilr","Version":"0.1.0","Title":"Quickly Profile Data in R","Description":"Allows users to quickly and reliably profile data in R\n using convenience functions. The profiled data is returned as a data.frame\n and provides a wealth of common and uncommon summary statistics.","Published":"2015-09-09","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ProFit","Version":"1.0.2","Title":"Fit Projected 2D Profiles to Galaxy Images","Description":"Get data / Define model / ??? / ProFit! 
ProFit is a Bayesian galaxy fitting tool that uses a fast C++ image generation library and a flexible interface to a large number of likelihood samplers.","Published":"2017-03-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"profmem","Version":"0.4.0","Title":"Simple Memory Profiling for R","Description":"A simple and light-weight API for memory profiling of R expressions. The profiling is built on top of R's built-in memory profiler ('utils::Rprofmem()'), which records every memory allocation done by R (also native code).","Published":"2016-09-15","License":"LGPL (>= 2.1)","snapshot_date":"2017-06-23"} {"Package":"profr","Version":"0.3.1","Title":"An alternative display for profiling information","Description":"profr provides an alternative data structure\n and visual rendering for the profiling information\n generated by Rprof.","Published":"2014-04-22","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"proftools","Version":"0.99-2","Title":"Profile Output Processing Tools for R","Description":"Tools for examining Rprof profile output.","Published":"2016-01-13","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"profvis","Version":"0.3.3","Title":"Interactive Visualizations for Profiling R Code","Description":"Interactive visualizations for profiling R code.","Published":"2017-01-14","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"progenyClust","Version":"1.2","Title":"Finding the Optimal Cluster Number Using Progeny Clustering","Description":"Implementing the Progeny Clustering algorithm, the 'progenyClust' package assesses the clustering stability and identifies the optimal clustering number for a given data matrix. It uses k-means clustering as a default, provides a tailored hierarchical clustering function, and can be customized to work with other clustering algorithms and different parameter settings. 
The package includes a main function progenyClust(), plot and summary methods for 'progenyClust' object, a function hclust.progenyClust() for hierarchical clustering, and two example datasets (test and cell) for testing. ","Published":"2016-04-12","License":"AGPL-3","snapshot_date":"2017-06-23"} {"Package":"ProgGUIinR","Version":"0.0-4","Title":"support package for \"Programming Graphical User Interfaces in R\"","Description":"sample code, appendices and functions for the text\n Programming GUIs in R","Published":"2014-07-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"prognosticROC","Version":"0.7","Title":"Prognostic ROC curves for evaluating the predictive capacity of\na binary test","Description":"The prognostic ROC curve is an alternative graphical approach to represent the discriminative capacity of the marker: a receiver operating characteristic (ROC) curve obtained by plotting 1 minus the survival in the high-risk group against 1 minus the survival in the low-risk group. This package contains functions to assess the prognostic ROC curve. The user can enter the survival according to a model previously estimated or the user can also enter individual survival data for estimating the prognostic ROC curve by using the Kaplan-Meier estimator. The area under the curve (AUC) corresponds to the probability that a patient in the low-risk group has a longer lifetime than a patient in the high-risk group. The prognostic ROC curve provides complementary information compared to survival curves. The AUC is assessed by using the trapezoidal rule. 
When survival curves do not reach 0, the prognostic ROC curve is incomplete and the extrapolations of the AUC are performed by assuming pessimistic, optimistic and non-informative situations.","Published":"2013-11-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"progress","Version":"1.1.2","Title":"Terminal Progress Bars","Description":"Configurable progress bars; they may include percentage,\n elapsed time, and/or the estimated completion time. They work in\n terminals, in 'Emacs' 'ESS', 'RStudio', 'Windows' 'Rgui' and the\n 'macOS' 'R.app'. The package also provides a 'C++' 'API', that works\n with or without 'Rcpp'.","Published":"2016-12-14","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"proj4","Version":"1.0-8","Title":"A simple interface to the PROJ.4 cartographic projections\nlibrary","Description":"A simple interface to lat/long projection and datum\n transformation of the PROJ.4 cartographic projections library.\n It allows transformation of geographic coordinates from one\n projection and/or datum to another.","Published":"2012-08-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ProjectTemplate","Version":"0.7","Title":"Automates the Creation of New Statistical Analysis Projects","Description":"Provides functions to\n automatically build a directory structure for a new R\n project. Using this structure, 'ProjectTemplate'\n automates data loading, preprocessing, library\n importing and unit testing.","Published":"2016-08-11","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"ProliferativeIndex","Version":"1.0.0","Title":"Calculates and Analyzes the Proliferative Index","Description":"Provides functions for calculating and analyzing the proliferative \n index (PI) from an RNA-seq dataset. As described in Ramaker & Lasseigne, \n et al. 
bioRxiv, 2016 .","Published":"2017-02-16","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ProNet","Version":"1.0.0","Title":"Biological Network Construction, Visualization and Analyses","Description":"High-throughput experiments are now widely used in biological research, which improves both the quality and quantity of omics data. Network-based presentation of these data has become a popular way in data analyses. This package mainly provides functions for biological network construction, visualization and analyses. Networks can be constructed either from experimental data or from a set of proteins and integrated PPI database. Based on them, users can perform traditional visualization, along with the subcellular localization based ones for Homo sapiens and Arabidopsis thaliana. Furthermore, analyses including topological statistics, functional module clustering and GO profiling can also be achieved. ","Published":"2015-10-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"prop.comb.RR","Version":"1.2","Title":"Analyzing Combination of Proportions and Relative Risk","Description":"Carrying out inferences about any linear combination of proportions and the ratio of two proportions.","Published":"2017-03-08","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"propagate","Version":"1.0-4","Title":"Propagation of Uncertainty","Description":"Propagation of uncertainty using higher-order Taylor expansion and Monte Carlo simulation.","Published":"2014-09-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PropCIs","Version":"0.2-5","Title":"Various confidence interval methods for proportions","Description":"Computes two-sample confidence intervals for single, paired and independent proportions","Published":"2014-04-16","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"PropClust","Version":"1.4-2","Title":"Propensity Clustering and Decomposition","Description":"This package implements propensity 
clustering and\n decomposition. Propensity decomposition can be viewed on the\n one hand as a generalization of the eigenvector-based\n approximation of correlation networks, and on the other hand as\n a generalization of random multigraph models and\n conformity-based decompositions.","Published":"2013-12-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"properties","Version":"0.0-8","Title":"Parse Java Properties Files for R Service Bus Applications","Description":"The properties package allows parsing of Java properties files\n in the context of R Service Bus applications.","Published":"2015-12-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"prophet","Version":"0.1.1","Title":"Automatic Forecasting Procedure","Description":"Implements a procedure for forecasting time series data based on\n an additive model where non-linear trends are fit with yearly and weekly\n seasonality, plus holidays. It works best with daily periodicity data with\n at least one year of historical data. Prophet is robust to missing data,\n shifts in the trend, and large outliers.","Published":"2017-04-19","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"proportion","Version":"2.0.0","Title":"Inference on Single Binomial Proportion and Bayesian\nComputations","Description":"Abundant statistical literature has revealed the importance of constructing and evaluating various methods for constructing confidence intervals (CI) for single binomial proportion (p). We comprehensively provide procedures in frequentist (approximate with or without adding pseudo counts or continuity correction or exact) and in Bayesian cultures. 
Evaluation procedures for CI warrant active computational attention and required summaries pertaining to four criteria (coverage probability, expected length, p-confidence, p-bias, and error) are implemented.","Published":"2017-05-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"propOverlap","Version":"1.0","Title":"Feature (gene) selection based on the Proportional Overlapping\nScores","Description":"A package for selecting the most relevant features (genes) in high-dimensional binary classification problems. The discriminative features are identified by analyzing the overlap between the expression values across both classes. The package includes functions for measuring the proportional overlapping score for each gene, avoiding the effect of outliers. The overlap measure used is the one defined in the \"Proportional Overlapping Score (POS)\" technique for feature selection. A gene mask which represents a gene's classification power can also be produced for each gene (feature). The set size of the selected genes might be set by the user. The minimum set of genes that correctly classify the maximum number of the given tissue samples (observations) can also be produced.","Published":"2014-09-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"propr","Version":"3.0.4","Title":"Calculating Proportionality Between Vectors of Compositional\nData","Description":"The bioinformatic evaluation of gene co-expression often begins with\n correlation-based analyses. However, this approach lacks statistical validity\n when applied to relative data. This includes, for example, biological count data\n generated by high-throughput RNA-sequencing, chromatin immunoprecipitation (ChIP),\n ChIP-sequencing, Methyl-Capture sequencing, and other techniques. This package\n implements two metrics, phi [Lovell et al (2015) ]\n and rho [Erb and Notredame (2016) ], to provide\n valid alternatives to correlation for relative data. 
Unlike correlation, these\n metrics give the same result for both relative and absolute data. Pairs that are\n strongly proportional in relative space are also strongly correlated in absolute\n space. Proportionality avoids the pitfall of spurious correlation.","Published":"2017-06-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"PropScrRand","Version":"1.1","Title":"Propensity score methods for assigning treatment in randomized\ntrials","Description":"This package contains functions to run propensity-biased allocation to balance covariate distributions in sequential trials and propensity-constrained randomization to balance covariate distributions in trials with known baseline covariates at time of randomization. Currently this package only supports trials comparing two groups.","Published":"2013-11-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"PROreg","Version":"1.0","Title":"Patient Reported Outcomes Regression Analysis","Description":"Offers a variety of tools, such as specific plots and regression model approaches, for analyzing different patient reported questionnaires. Specially, mixed-effects models based on the beta-binomial distribution are implemented to deal with binomial data with over-dispersion (see Najera-Zuloaga J., Lee D.-J. and Arostegui I. (2017) ).","Published":"2017-05-06","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"PROscorer","Version":"0.0.1","Title":"Functions to Score Commonly-Used Patient-Reported Outcome (PRO)\nMeasures and Other Psychometric Instruments","Description":"An extensible repository of accurate, up-to-date functions to score \n commonly used patient-reported outcome (PRO), quality of life (QOL), and \n other psychometric and psychological measures. 'PROscorer', together with \n the 'PROscorerTools' package, is a system to facilitate the incorporation of \n PRO measures into research studies and clinical settings in a scientifically \n rigorous and reproducible manner. 
These packages and their vignettes are \n intended to help establish and promote \"best practices\" to improve the \n planning, scoring, and reporting of PRO-like measures in research. \n The 'PROscorer' \"Instrument Descriptions\" vignette contains descriptions of \n each instrument scored by 'PROscorer', complete with references. These \n instrument descriptions are suitable for inclusion in formal study protocol \n documents, grant proposals, and manuscript Method sections. Each \n 'PROscorer' function is composed of helper functions from the \n 'PROscorerTools' package, and users are encouraged to contribute new \n functions to 'PROscorer'. More scoring functions are currently in \n development and will be added in future updates.","Published":"2017-05-16","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"PROscorerTools","Version":"0.0.1","Title":"Tools to Score Patient-Reported Outcome (PRO) and Other\nPsychometric Measures","Description":"Provides a reliable and flexible toolbox to score \n patient-reported outcome (PRO), Quality of Life (QOL), and other \n psychometric measures. The guiding philosophy is that scoring errors can \n be eliminated by using a limited number of well-tested, well-behaved \n functions to score PRO-like measures. The workhorse of the package is \n the 'scoreScale' function, which can be used to score most single-scale \n measures. It can reverse code items that need to be reversed before \n scoring and pro-rate scores for missing item data. Currently, three \n different types of scores can be output: summed item scores, mean item \n scores, and scores scaled to range from 0 to 100. The 'PROscorerTools' \n functions can be used to write new functions that score more complex \n measures. In fact, 'PROscorerTools' functions are the building blocks of \n the scoring functions in the 'PROscorer' package (which is a repository \n of functions that score specific commonly-used instruments). 
Users are \n encouraged to use 'PROscorerTools' to write scoring functions for their \n favorite PRO-like instruments, and to submit these functions for \n inclusion in 'PROscorer' (a tutorial vignette will be added soon). The \n long-term vision for the 'PROscorerTools' and 'PROscorer' packages is to \n provide an easy-to-use system to facilitate the incorporation of PRO \n measures into research studies in a scientifically rigorous and \n reproducible manner. These packages and their vignettes are intended to \n help establish and promote \"best practices\" for scoring and describing \n PRO-like measures in research. ","Published":"2017-05-15","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"prospectr","Version":"0.1.3","Title":"Miscellaneous functions for processing and sample selection of\nvis-NIR diffuse reflectance data","Description":"The package provides functions for pretreatment and sample\n selection of visible and near infrared diffuse reflectance spectra","Published":"2014-02-14","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"ProteinDescriptors","Version":"0.1.0","Title":"Generates Various Protein Descriptors for Machine Learning\nAlgorithms","Description":"An implementation of protein descriptors in R. 
These descriptors\n combine the advantages of being fixed length and capturing partial sequential\n effects: protein sequences of various lengths are described with fixed-length\n vectors that are suitable for machine learning algorithms while still including\n partial sequential effects.","Published":"2016-03-03","License":"BSD 3-clause License + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"proteomicdesign","Version":"2.0","Title":"Optimization of a multi-stage proteomic study","Description":"This package provides functions to identify the optimal\n solution that maximizes the number of detectable differentiated\n proteins from a multi-stage clinical proteomic study.","Published":"2013-01-04","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"proteomics","Version":"0.2","Title":"Statistical Analysis of High Throughput Proteomics Data","Description":"Provides methods for making inference in isobaric labelled\n LC-MS/MS experiments, i.e. iTRAQ experiments. It provides a function that\n reasonably parses a CSV-export from Proteome Discoverer(TM) into a data\n frame that can be easily handled in R. Functions and methods are provided\n for quality control, filtering, normalization, and the calculation of response\n variables for further analysis. The merging of multiple iTRAQ experiments\n with respect to a reference is also covered.","Published":"2014-11-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"protiq","Version":"1.2","Title":"Protein (identification and) quantification based on peptide\nevidence","Description":"Method for protein quantification based on identified and\n quantified peptides. protiq can be used for absolute and relative protein\n quantification. Input peptide abundance scores can come from various\n sources, including SRM transition areas and intensities or spectral counts\n derived from shotgun experiments. 
The package is still being extended to\n also include the model for protein identification, MIPGEM, presented in\n Gerster, S., Qeli, E., Ahrens, C.H. and Buehlmann, P. (2010). Protein and\n gene model inference based on statistical modeling in k-partite graphs.\n Proceedings of the National Academy of Sciences 107(27):12101-12106.","Published":"2013-08-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"proto","Version":"1.0.0","Title":"Prototype Object-Based Programming","Description":"An object oriented system using object-based, also\n\tcalled prototype-based, rather than class-based object oriented ideas.","Published":"2016-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"protoclass","Version":"1.0","Title":"Interpretable classification with prototypes","Description":"Greedy algorithm described in Bien and Tibshirani (2011)\n Prototype Selection for Interpretable Classification. Annals of\n Applied Statistics. 5(4). 2403-2424","Published":"2013-05-31","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"protoclust","Version":"1.5","Title":"Hierarchical Clustering with Prototypes","Description":"Performs minimax linkage hierarchical clustering. Every cluster\n has an associated prototype element that represents that cluster as\n described in Bien, J., and Tibshirani, R. 
(2011), \"Hierarchical Clustering\n with Prototypes via Minimax Linkage,\" accepted for publication in The\n Journal of the American Statistical Association, DOI:\n 10.1198/jasa.2011.tm10183.","Published":"2015-02-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"PROTOLIDAR","Version":"0.1","Title":"PRocess TOol LIdar DAta in R","Description":"PROTOLIDAR package contains functions for analyze the\n LIDAR scan of plants (grapevine) and make 3D maps in GRASS GIS.","Published":"2013-01-09","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"protolite","Version":"1.6","Title":"Fast and Simple Object Serialization to Protocol Buffers","Description":"Optimized C++ implementations for reading and writing protocol-buffers.\n Currently supports 'rexp.proto' for serializing R objects and 'geobuf.proto' for\n geojson data. This lightweight package is complementary to the much larger\n 'RProtoBuf' package which provides a full featured toolkit for working with\n protocol-buffers in R.","Published":"2017-03-16","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"proton","Version":"1.0","Title":"The Proton Game","Description":"'The Proton Game' is a console-based data-crunching game for younger and older data scientists.\n Act as a data-hacker and find Slawomir Pietraszko's credentials to the Proton server.\n You have to solve four data-based puzzles to find the login and password.\n There are many ways to solve these puzzles. 
You may use loops, data filtering, ordering, aggregation or other tools.\n Only basic knowledge of R is required to play the game, yet the more functions you know, the more approaches you can try.\n Knowledge of dplyr is not required but may be very helpful.\n This game is linked with the \"Pietraszko's Cave\" story available at http://biecek.pl/BetaBit/Warsaw.\n It's a part of the Beta and Bit series.\n You will find more about the Beta and Bit series at http://biecek.pl/BetaBit.","Published":"2015-11-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"prototest","Version":"1.1","Title":"Inference on Prototypes from Clusters of Features","Description":"Procedures for testing for group-wide signal in clusters of variables. Tests can be performed for single groups in isolation (univariate) or multiple groups together (multivariate). Specific tests include the exact and approximate (un)selective likelihood ratio tests described in Reid et al (2015), the selective F test and marginal screening prototype test of Reid and Tibshirani (2015). The user may pre-specify columns to be included in prototype formation, or allow the function to select them itself. A mixture of these two is also possible. Any variable selection is accounted for using the selective inference framework. Options for non-sampling and hit-and-run null reference distributions.","Published":"2016-04-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"protr","Version":"1.4-0","Title":"Generating Various Numerical Representation Schemes for Protein\nSequences","Description":"Comprehensive toolkit for generating various numerical\n features of protein sequences described in Xiao et al. (2015)\n . 
For full functionality,\n the software 'ncbi-blast+' is needed, see\n \n for more information.","Published":"2017-06-06","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ProTrackR","Version":"0.3.4","Title":"Manipulate and Play 'ProTracker' Modules","Description":"'ProTracker' is a popular music tracker to sequence\n music on a Commodore Amiga machine. This package offers the\n opportunity to import, export, manipulate and play 'ProTracker'\n module files. Even though the file format could be considered\n archaic, it still remains popular to this date. This package\n intends to contribute to this popularity and thereby\n keep the legacy of 'ProTracker' and the Commodore Amiga\n alive.","Published":"2016-11-25","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"protViz","Version":"0.2.31","Title":"Visualizing and Analyzing Mass Spectrometry Related Data in\nProteomics","Description":"Helps with quality checks, visualizations \n and analysis of mass spectrometry data, coming from proteomics \n experiments. The package is developed, tested and used at the Functional \n Genomics Center Zurich. We use this package mainly for prototyping, \n teaching, and having fun with proteomics data. But it can also be \n used to do data analysis for small scale data sets.","Published":"2017-05-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"proustr","Version":"0.1.0","Title":"Marcel Proust's Text from 'A La Recherche Du Temps Perdu'","Description":"Texts from Marcel Proust's collection \"A La Recherche Du Temps Perdu\". \n The novels contained in this collection are \"Du côté de chez Swann \", \"A l'ombre des jeunes filles en fleurs\",\n \"Le Côté de Guermantes\", \"Sodome et Gomorrhe I et II\", \"La Prisonnière\", \"Albertine disparue\", \n and \"Le Temps retrouvé\". 
Inspired by the 'janeaustenr' package.","Published":"2017-06-06","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"provenance","Version":"1.7","Title":"Statistical Toolbox for Sedimentary Provenance Analysis","Description":"Bundles a number of established statistical methods to facilitate\n the visual interpretation of large datasets in sedimentary geology. Includes\n functionality for adaptive kernel density estimation, multidimensional scaling,\n generalised procrustes analysis and individual differences scaling using a\n variety of dissimilarity measures. Univariate provenance proxies, such as\n single-grain ages or (isotopic) compositions are compared with the Kolmogorov-\n Smirnov, Kuiper or Sircombe-Hazelton L2 distances. Categorical provenance\n proxies, such as mineralogical, petrographic or chemical compositions are\n compared with the Aitchison and Bray-Curtis distances. Also included are tools\n to plot compositional data on ternary diagrams, to calculate the sample size\n required for specified levels of statistical precision, and to assess the\n effects of hydraulic sorting on detrital compositions. Includes an intuitive\n query-based user interface for users who are not proficient in R.","Published":"2017-05-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"proxy","Version":"0.4-17","Title":"Distance and Similarity Measures","Description":"Provides an extensible framework for the efficient calculation of auto- and cross-proximities, along with implementations of the most popular ones. ","Published":"2017-02-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"prozor","Version":"0.2.3","Title":"Minimal Protein Set Explaining Peptide Spectrum Matches","Description":"Determine minimal protein set explaining\n peptide spectrum matches. 
Utility functions for creating libraries with decoys.\n Peptide FDR estimation for search results is planned.","Published":"2017-04-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"PRROC","Version":"1.3","Title":"Precision-Recall and ROC Curves for Weighted and Unweighted Data","Description":"Computes the areas under the precision-recall (PR) and ROC curve for weighted (e.g., soft-labeled) and unweighted data. In contrast to other implementations, the interpolation between points of the PR curve is done by a non-linear piecewise function. In addition to the areas under the curves, the curves themselves can also be computed and plotted by a specific S3-method. References: Davis and Goadrich (2006) ; Keilwagen et al. (2014) ; Grau et al. (2015) .","Published":"2017-04-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"pRSR","Version":"3.1.1","Title":"Test of Periodicity using Response Surface Regression","Description":"Tests periodicity in short time series using response surface regression.","Published":"2016-05-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pryr","Version":"0.1.2","Title":"Tools for Computing on the Language","Description":"Useful tools to pry back the covers of R and understand the\n language at a deeper level.","Published":"2015-06-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Przewodnik","Version":"0.16.12","Title":"Datasets and Functions Used in the Book 'Przewodnik po Pakiecie\nR'","Description":"Data sets and functions used in the Polish book \n \"Przewodnik po pakiecie R\" (The Hitchhiker's Guide to R). \n See more at . 
Among others you will find here \n data about housing prices, cancer patients, running times and many others.","Published":"2016-11-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"PSAboot","Version":"1.1.4","Title":"Bootstrapping for Propensity Score Analysis","Description":"Bootstrapping for propensity score analysis and matching.","Published":"2016-12-05","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"PSAgraphics","Version":"2.1.1","Title":"Propensity Score Analysis Graphics","Description":"A collection of functions that primarily produce graphics\n to aid in a Propensity Score Analysis (PSA). Functions\n include: cat.psa and box.psa to test balance within strata of\n categorical and quantitative covariates, circ.psa for a\n representation of the estimated effect size by stratum,\n loess.psa that provides a graphic and loess based effect size\n estimate, and various balance functions that provide measures\n of the balance achieved via a PSA in a categorical covariate.","Published":"2012-03-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"psbcGroup","Version":"1.3","Title":"Penalized Parametric and Semiparametric Bayesian Survival Models\nwith Shrinkage and Grouping Priors","Description":"Algorithms for fitting penalized parametric and semiparametric Bayesian survival models with shrinkage and grouping priors.","Published":"2016-03-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PSCBS","Version":"0.62.0","Title":"Analysis of Parent-Specific DNA Copy Numbers","Description":"Segmentation of allele-specific DNA copy number data and detection of regions with abnormal copy number within each parental chromosome. 
Both tumor-normal paired and tumor-only analyses are supported.","Published":"2016-11-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pscl","Version":"1.4.9","Title":"Political Science Computational Laboratory, Stanford University","Description":"Bayesian analysis of item-response theory (IRT) models,\n\t roll call analysis; computing highest density regions; maximum\n\t likelihood estimation of zero-inflated and hurdle models for count\n\t data; goodness-of-fit measures for GLMs; data sets used\n\t in writing and teaching at the Political Science\n\t Computational Laboratory; seats-votes curves.","Published":"2015-03-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"pscore","Version":"0.1-2","Title":"Standardizing Physiological Composite Risk Endpoints","Description":"Provides a number of functions to\n simplify and automate the scoring, comparison, and evaluation of\n different ways of creating composites of data. It is particularly\n aimed at facilitating the creation of physiological composites of\n metabolic syndrome symptom score (MSSS) and allostatic load (AL).","Published":"2015-06-24","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"psd","Version":"1.0-1","Title":"Adaptive, Sine-Multitaper Power Spectral Density Estimation","Description":"Produces power spectral density estimates through iterative\n refinement of the optimal number of sine-tapers at each frequency. This\n optimization procedure is based on the method of Riedel and Sidorenko\n (1995), which minimizes the Mean Square Error (sum of variance and bias)\n at each frequency, but modified for computational stability.","Published":"2015-03-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"psData","Version":"0.2.2","Title":"Download Regularly Maintained Political Science Data Sets","Description":"This R package includes functions for gathering commonly used and\n regularly maintained data sets in political science. 
It also includes\n functions for combining components from these data sets into variables that\n have been suggested in the literature, but are not regularly maintained.","Published":"2016-09-03","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"pse","Version":"0.4.7","Title":"Parameter Space Exploration with Latin Hypercubes","Description":"Functions for creating Latin Hypercubes with\n prescribed correlations and performing parameter space exploration.\n Also implements the PLUE method.\n Based on the package sensitivity, by Gilles Pujol,\n Bertrand Iooss & Alexandre Janon.","Published":"2017-06-11","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"pseudo","Version":"1.1","Title":"Pseudo - observations","Description":"Various functions for computing pseudo-observations for\n censored data regression","Published":"2012-03-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"pseval","Version":"1.3.0","Title":"Methods for Evaluating Principal Surrogates of Treatment\nResponse","Description":"Contains the core methods for the evaluation of principal\n surrogates in a single clinical trial. Provides a flexible interface for\n defining models for the risk given treatment and the surrogate, the models\n for integration over the missing counterfactual surrogate responses, and the\n estimation methods. Estimated maximum likelihood and pseudo-score can be used\n for estimation, and the bootstrap for inference. 
A variety of post-estimation\n summary methods are provided, including print, summary, plot, and testing.","Published":"2016-09-23","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"PSF","Version":"0.4","Title":"Forecasting of Univariate Time Series Using the Pattern\nSequence-Based Forecasting (PSF) Algorithm","Description":"Pattern Sequence Based Forecasting (PSF) takes univariate\n time series data as input and assists in forecasting its future values.\n This algorithm forecasts the behavior of time series\n based on similarity of pattern sequences. Initially, clustering is done with the\n labeling of samples from the database. The labels associated with samples are then\n used for forecasting the future behaviour of time series data. Further\n technical details and references regarding PSF are discussed in the vignette.","Published":"2017-04-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"psgp","Version":"0.3-6","Title":"Projected Spatial Gaussian Process (psgp) methods","Description":"Implements projected sparse Gaussian process kriging for the intamap package.","Published":"2014-05-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pSI","Version":"1.1","Title":"Specificity Index Statistic","Description":"This package contains functions to calculate the Specificity Index statistic, which can be used for comparative quantitative analysis to identify genes enriched in specific cell populations across a large number of profiles, as well as perform numerous post-processing operations. NOTE: Supplementary data (human & mouse expression sets, calculated pSI datasets, etc.) 
can be found in pSI.data package located at the following URL: http://genetics.wustl.edu/jdlab/psi_package/","Published":"2014-01-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"psidR","Version":"1.4","Title":"Build Panel Data Sets from PSID Raw Data","Description":"Makes it easy to build panel data in wide format from Panel Survey\n of Income Dynamics (PSID) delivered raw data. Deals with data downloaded and\n pre-processed by 'Stata' or 'SAS', or can optionally download directly from\n the PSID server using the 'SAScii' package. 'psidR' takes care of merging\n data from each wave onto a cross-period index file, so that individuals can be\n followed over time. The user must specify which years they are interested in,\n and the PSID variable names (e.g. ER21003) for each year (they differ in each\n year). There are different panel data designs and sample subsetting criteria\n implemented (\"SRC\", \"SEO\", \"immigrant\" and \"latino\" samples).","Published":"2016-10-29","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"PsiHat","Version":"1.0","Title":"Several Local False Discovery Rate Estimators","Description":"Suite of R functions for the estimation of local false discovery rate (LFDR) using several methods.","Published":"2015-10-05","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"pso","Version":"1.0.3","Title":"Particle Swarm Optimization","Description":"The package provides an implementation of PSO consistent\n with the standard PSO 2007/2011 by Maurice Clerc et al.\n Additionally a number of ancillary routines are provided for\n easy testing and graphics.","Published":"2012-09-02","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"psoptim","Version":"1.0","Title":"Particle Swarm Optimization","Description":"Particle swarm optimization - a basic variant.","Published":"2016-01-31","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"pspearman","Version":"0.3-0","Title":"Spearman's rank 
correlation test","Description":"Spearman's rank correlation test with precomputed exact\n null distribution for n <= 22.","Published":"2014-03-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"pspline","Version":"1.0-18","Title":"Penalized Smoothing Splines","Description":"Smoothing splines with penalties on order m derivatives.","Published":"2017-06-12","License":"Unlimited","snapshot_date":"2017-06-23"} {"Package":"pssm","Version":"1.1","Title":"Piecewise Exponential Model for Time to Progression and Time\nfrom Progression to Death","Description":"Estimates parameters of a piecewise exponential model for time to progression and time from progression to death with interval censoring of the time to progression and covariates for each distribution using proportional hazards.","Published":"2017-03-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"PST","Version":"0.94","Title":"Probabilistic Suffix Trees and Variable Length Markov Chains","Description":"Provides a framework for analysing state sequences with probabilistic suffix trees (PST), the construction that stores variable length Markov chains (VLMC). Besides functions for learning and optimizing VLMC models, the PST library includes many additional tools to analyse sequence data with these models: visualization tools, functions for sequence prediction and artificial sequences generation, as well as for context and pattern mining. The package is specifically adapted to the field of social sciences by allowing to learn VLMC models from sets of individual sequences possibly containing missing values, and by accounting for case weights. The library also allows to compute probabilistic divergence between two models, and to fit segmented VLMC, where sub-models fitted to distinct strata of the learning sample are stored in a single PST. 
This software results from research work executed within the framework of the Swiss National Centre of Competence in Research LIVES, which is financed by the Swiss National Science Foundation. The authors are grateful to the Swiss National Science Foundation for its financial support.","Published":"2017-02-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Pstat","Version":"1.0","Title":"Assessing Pst Statistics","Description":"Calculating Pst values to assess differentiation among populations from a set of quantitative traits is the primary purpose of this package. The Pst value is an index that measures the level of phenotypic differentiation among populations (Leinonen et al., 2006). The bootstrap method provides confidence intervals and distribution histograms of Pst. Variations of Pst as a function of the parameter c/h^2 are studied as well. Finally, the package proposes different transformations, especially to eliminate any variation resulting from allometric growth (calculation of residuals from linear regressions, Reist standardizations or Aitchison transformation).","Published":"2017-02-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"pstest","Version":"0.1.1","Title":"Specification Tests for Parametric Propensity Score Models","Description":"The propensity score is one of the most widely used tools in studying the causal effect\n of a treatment, intervention, or policy. Given that the propensity score is usually unknown,\n it has to be estimated, implying that the reliability of many treatment effect estimators depends\n on the correct specification of the (parametric) propensity score. 
This package provides\n data-driven nonparametric diagnostic tools for detecting propensity score misspecification.","Published":"2016-11-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"PsumtSim","Version":"0.4","Title":"Simulations of grouped responses relative to baseline","Description":"Functions to simulate Poisson or Normally distributed responses relative to \n a baseline and compute achieved significance level and powers for tests on the\n simulated responses.","Published":"2013-10-01","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"psy","Version":"1.1","Title":"Various procedures used in psychometry","Description":"Kappa, ICC, Cronbach alpha, screeplot, mtmm","Published":"2012-06-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"psych","Version":"1.7.5","Title":"Procedures for Psychological, Psychometric, and Personality\nResearch","Description":"A general purpose toolbox for personality, psychometric theory and experimental psychology. Functions are primarily for multivariate analysis and scale construction using factor analysis, principal component analysis, cluster analysis and reliability analysis, although others provide basic descriptive statistics. Item Response Theory is done using factor analysis of tetrachoric and polychoric correlations. Functions for analyzing data at multiple levels include within and between group statistics, including correlations and factor analysis. Functions for simulating and testing particular item and test structures are included. Several functions serve as a useful front end for structural equation modeling. Graphical displays of path diagrams, factor analysis and structural equation models are created using basic graphics. Some of the functions are written to support a book on psychometric theory as well as publications in personality research. 
For more information, see the web page.","Published":"2017-05-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"psychometric","Version":"2.2","Title":"Applied Psychometric Theory","Description":"Contains functions useful for correlation theory,\n meta-analysis (validity-generalization), reliability, item\n analysis, inter-rater reliability, and classical utility","Published":"2010-08-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"psychomix","Version":"1.1-4","Title":"Psychometric Mixture Models","Description":"Psychometric mixture models based on 'flexmix' infrastructure. At the moment Rasch mixture models\n with different parameterizations of the score distribution (saturated vs. mean/variance specification),\n Bradley-Terry mixture models, and MPT mixture models are implemented. These mixture models can be estimated\n with or without concomitant variables. See vignette('raschmix', package = 'psychomix') for details on the\n Rasch mixture models.","Published":"2017-04-07","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"psychotools","Version":"0.4-2","Title":"Infrastructure for Psychometric Modeling","Description":"Infrastructure for psychometric modeling such as data classes\n (for item response data and paired comparisons), basic model fitting\n functions (for Bradley-Terry, Rasch, partial credit, rating scale,\n multinomial processing tree models), extractor functions for different types\n of parameters (item, person, threshold, discrimination), unified inference\n and visualizations, and various datasets for illustration. 
Intended as a\n common lightweight and efficient toolbox for psychometric modeling and a\n common building block for fitting psychometric mixture models in package\n \"psychomix\" and trees based on psychometric models in package \"psychotree\".","Published":"2016-09-09","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"psychotree","Version":"0.15-1","Title":"Recursive Partitioning Based on Psychometric Models","Description":"Recursive partitioning based on psychometric models,\n employing the general MOB algorithm (from package partykit) to obtain\n Bradley-Terry trees, Rasch trees, rating scale and partial credit trees, and\n MPT trees.","Published":"2016-09-09","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"psyphy","Version":"0.1-9","Title":"Functions for analyzing psychophysical data in R","Description":"An assortment of functions that could be useful in analyzing data from psychophysical experiments. It includes functions for calculating d' from several different experimental designs, links for m-alternative forced-choice (mafc) data to be used with the binomial family in glm (and possibly other contexts) and self-Start functions for estimating gamma values for CRT screen calibrations.","Published":"2014-01-22","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"psytabs","Version":"1.0","Title":"Produce Well-Formatted Tables for Psychological Research","Description":"Produces tables conforming to \"psychological style\" (i.e. APA style) based on standard R output. The resulting tables can be exported to '.rtf', '.html' or '.doc' format. ","Published":"2016-04-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"PTAk","Version":"1.2-12","Title":"Principal Tensor Analysis on k Modes","Description":"A multiway method to decompose a tensor (array) of any order, as a generalisation of SVD also supporting non-identity metrics and penalisations. 2-way SVD with these extensions is also available. 
The package also includes some other multiway methods: PCAn (Tucker-n) and PARAFAC/CANDECOMP with these extensions.","Published":"2015-07-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PTE","Version":"1.5","Title":"Personalized Treatment Evaluator","Description":"We provide inference for personalized medicine models. Namely, we answer the questions: (1) how much better does a purported personalized recommendation engine for treatments do over a business-as-usual approach and (2) is that difference statistically significant?","Published":"2017-06-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ptest","Version":"1.0-8","Title":"Periodicity Tests in Short Time Series","Description":"Implements p-value computations using an approximation to the cumulative distribution function for a variety of tests for periodicity. These tests include harmonic regression tests with normal and double exponential errors as well as modifications of Fisher's g test. An accompanying vignette illustrates the application of these tests.","Published":"2016-11-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ptinpoly","Version":"2.4","Title":"Point-In-Polyhedron Test (2D and 3D)","Description":"Function 'pip3d' tests whether a point in 3D space is\n within, exactly on, or outside an enclosed surface defined by a triangular mesh.\n Function 'pip2d' tests whether a point in 2D space is within, exactly on, or outside a polygon.","Published":"2014-08-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"PtProcess","Version":"3.3-12","Title":"Time Dependent Point Process Modelling","Description":"Fits and analyses time dependent marked point process models with an emphasis on earthquake modelling. For a more detailed introduction to the package, see the topic \"PtProcess\". 
A list of recent changes can be found in the topic \"Change Log\".","Published":"2016-09-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ptstem","Version":"0.0.3","Title":"Stemming Algorithms for the Portuguese Language","Description":"Wraps a collection of stemming algorithms for the Portuguese\n Language.","Published":"2017-01-11","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ptw","Version":"1.9-12","Title":"Parametric Time Warping","Description":"Parametric Time Warping aligns patterns, i.e. it aims to\n put corresponding features at the same locations. The algorithm\n searches for an optimal polynomial describing the warping. It\n is possible to align one sample to a reference, several samples\n to the same reference, or several samples to several\n references. One can choose between calculating individual\n warpings, or one global warping for a set of samples and one\n reference. Two optimization criteria are implemented: RMS (Root\n Mean Square error) and WCC (Weighted Cross Correlation). Both\n\twarping of peak profiles and of peak lists are supported.","Published":"2017-03-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ptwikiwords","Version":"0.0.3","Title":"Words Used in Portuguese Wikipedia","Description":"Contains a dataset of words used in 15.000 randomly extracted pages\n from the Portuguese Wikipedia ().","Published":"2016-10-08","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"PTXQC","Version":"0.82.6","Title":"Quality Report Generation for MaxQuant Results","Description":"Generates Proteomics (PTX) quality control (QC) reports for shotgun LC-MS data analyzed with the \n MaxQuant software suite (see ).\n Reports are customizable (target thresholds, subsetting) and available in HTML or PDF format.\n Published in J. 
Proteome Res., Proteomics Quality Control: Quality Control Software for MaxQuant Results (2015) .","Published":"2017-06-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ptycho","Version":"1.1-4","Title":"Bayesian Variable Selection with Hierarchical Priors","Description":"\n Bayesian variable selection for linear regression models using hierarchical\n priors. There is a prior that combines information across responses and one\n that combines information across covariates, as well as a standard spike and\n slab prior for comparison. An MCMC sampler draws from the marginal posterior\n distribution for the 0-1 variables indicating if each covariate belongs to the\n model for each response.","Published":"2015-11-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PubBias","Version":"1.0","Title":"Performs simulation study to look for publication bias, using a\ntechnique described by Ioannidis and Trikalinos; Clin Trials.\n2007;4(3):245-53","Description":"I adapted a method designed by Ioannidis and Trikalinos, which\n compares the observed number of positive studies in a meta-analysis with\n the expected number, if the summary measure of effect, averaged over the\n individual studies, were assumed true. Excess in the observed number of\n positive studies, compared to the expected, is taken as evidence of\n publication bias. The observed number of positive studies, at a given level\n for statistical significance, is calculated by applying Fisher's exact test\n to the reported 2x2 table data of each constituent study, doubling the\n Fisher one-sided P-value to make a two-sided test. The corresponding\n expected number of positive studies was obtained by summing the statistical\n powers of each study. The statistical power depended on a given measure of\n effect which, here, was the pooled odds ratio of the\n meta-analysis. 
By simulating each constituent study, with the given odds ratio, and\n the same number of treated and non-treated as in the real study, the power\n of the study is estimated as the proportion of simulated studies that are\n positive, again by a Fisher's exact test. The number of events in\n the treated and untreated groups was simulated with binomial sampling. In the\n untreated group, the binomial proportion was the percentage of actual\n events reported in the study and, in the treated group, the binomial\n sampling proportion was the untreated percentage multiplied by the risk\n ratio which was derived from the assumed common odds ratio. The statistical\n significance for judging a positive study may be varied, and large\n differences between expected and observed numbers of positive studies around\n the level of 0.05 significance constitute evidence of publication bias.\n The difference between the observed and expected is tested by chi-square. A\n chi-square test P-value for the difference below 0.05 is suggestive of\n publication bias; however, a less stringent level of 0.1 is often used in\n studies of publication bias as the number of published studies is usually\n small.","Published":"2013-11-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"pubmed.mineR","Version":"1.0.9","Title":"Text Mining of PubMed Abstracts","Description":"Text mining of PubMed Abstracts (text and XML) from .","Published":"2017-04-25","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"PubMedWordcloud","Version":"0.3.3","Title":"'Pubmed' Word Clouds","Description":"Create a word cloud using the abstract of publications from 'Pubmed'.","Published":"2017-01-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pubprint","Version":"0.2.1","Title":"Printing Results of Statistical Computing in a Publishable Way","Description":"Takes the output of statistical tests and transforms it into a\n publish-friendly pattern. 
Currently only APA (American Psychological\n Association) style is supported with output to HTML, LaTeX, Markdown and\n plain text. It is easily customizable, extendable and can be used well\n with 'knitr'. Additionally, 'pubprint' offers a memory system that allows\n users to save and retrieve results of computations.","Published":"2016-05-24","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"pullword","Version":"0.2","Title":"R Interface to Pullword Service","Description":"R Interface to Pullword Service for natural language processing\n in Chinese. It enables users to extract valuable words from text using deep learning models. \n For more details please visit the official site (in Chinese) http://pullword.com/.","Published":"2016-07-23","License":"LGPL","snapshot_date":"2017-06-23"} {"Package":"pulsar","Version":"0.2.5","Title":"Parallel Utilities for Lambda Selection along a Regularization\nPath","Description":"Model selection for penalized graphical models using the Stability Approach to Regularization Selection ('StARS'), with options for speed-ups including Bounded StARS (B-StARS), batch computing, and other stability metrics (e.g., graphlet stability G-StARS).","Published":"2016-08-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pumilioR","Version":"1.3.1","Title":"Pumilio in R","Description":"R package to query and get data out of a Pumilio sound archive system (http://ljvillanueva.github.io/pumilio/).","Published":"2016-11-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"PurBayes","Version":"1.3","Title":"Bayesian Estimation of Tumor Purity and Clonality","Description":"PurBayes is an MCMC-based algorithm that uses\n next-generation sequencing data to estimate tumor purity and\n clonality for paired tumor-normal data.","Published":"2013-05-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"purge","Version":"0.2.1","Title":"Purge Training Data from Models","Description":"Enables the 
removal of training data from fitted R models while\n retaining predict functionality. The purged models are more portable as their\n memory footprints do not scale with the training sample size.","Published":"2017-02-12","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"purrr","Version":"0.2.2.2","Title":"Functional Programming Tools","Description":"Make your pure functions purr with the 'purrr' package. This\n package completes R's functional programming tools with missing features\n present in other programming languages.","Published":"2017-05-11","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"purrrlyr","Version":"0.0.2","Title":"Tools at the Intersection of 'purrr' and 'dplyr'","Description":"Some functions at the intersection of 'dplyr' and 'purrr' that \n formerly lived in 'purrr'.","Published":"2017-05-13","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"pushoverr","Version":"1.0.0","Title":"Send Push Notifications using Pushover","Description":"Send push notifications to mobile devices or the desktop using Pushover. 
These notifications can display job status, results, scraped web data, or any other text or numeric data.","Published":"2016-11-23","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"PVAClone","Version":"0.1-6","Title":"Population Viability Analysis with Data Cloning","Description":"Likelihood based population viability analysis in the\n presence of observation error and missing data.\n The package can be used to fit, compare, predict,\n and forecast various growth model types using data cloning.","Published":"2016-03-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"pvar","Version":"2.2.2","Title":"Calculation and Application of p-Variation","Description":"Calculation of the p-variation of finite sample data.","Published":"2016-12-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"pvclass","Version":"1.4","Title":"P-Values for Classification","Description":"Computes nonparametric p-values for the potential class\n memberships of new observations as well as cross-validated\n p-values for the training data. 
The p-values are based on\n permutation tests applied to an estimated Bayesian likelihood\n ratio, using a plug-in statistic for the Gaussian model, 'k\n nearest neighbors', 'weighted nearest neighbors' or\n 'penalized logistic regression'.\n Additionally, it provides graphical displays and quantitative\n analyses of the p-values.","Published":"2017-06-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pvclust","Version":"2.0-0","Title":"Hierarchical Clustering with P-Values via Multiscale Bootstrap\nResampling","Description":"An implementation of multiscale bootstrap resampling for\n assessing the uncertainty in hierarchical cluster analysis.\n It provides AU (approximately unbiased) p-value as well as\n BP (bootstrap probability) value for each cluster in a dendrogram.","Published":"2015-10-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pvsR","Version":"0.3","Title":"An R package to interact with the Project Vote Smart API for\nscientific research","Description":"The pvsR package facilitates data retrieval from Project\n Vote Smart's rich online data base on US politics via the Project Vote\n Smart application programming interface (PVS API). The functions in this\n package cover most PVS API classes and methods and return the\n requested data in a data frame.","Published":"2014-09-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"PWD","Version":"1.0","Title":"Time Series Regression Using the Power Weighted Densities (PWD)\nApproach","Description":"Contains functions which allow the user to perform time series regression quickly using the Power Weighted Densities (PWD) approach. 
alphahat_LR_one_Rcpp() is the main workhorse function within this package.","Published":"2016-02-29","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"PWEALL","Version":"1.1.0","Title":"Design and Monitoring of Survival Trials Accounting for Complex\nSituations","Description":"Calculates various functions needed for designing and monitoring survival trials\n accounting for complex situations such as delayed treatment effect, treatment crossover, non-uniform accrual,\n and different censoring distributions between groups. The event time distribution is assumed to be a\n piecewise exponential (PWE) distribution and the entry time is assumed to follow a piecewise uniform distribution.","Published":"2017-03-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pweight","Version":"0.0.1","Title":"P-Value Weighting","Description":"This R package contains open source implementations\n of several p-value weighting methods, including Spjotvoll, exponential\n and Bayes weights. These are methods for improving power in multiple testing\n via the use of prior information.","Published":"2015-07-29","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"PWFSLSmoke","Version":"0.99.9","Title":"Utilities for Working with Air Quality Monitoring Data","Description":"Utilities for working with air quality monitoring data\n with a focus on small particulates (PM2.5) generated by wildfire\n smoke. Functions are provided for downloading available data from\n the United States Environmental Protection Agency (US EPA) and \n its AirNow air quality site. 
Additional sources of PM2.5 data \n made accessible by the package include: AIRSIS (password protected),\n the Western Regional Climate Center (WRCC) and the open source site OpenAQ.","Published":"2017-03-11","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"pwr","Version":"1.2-1","Title":"Basic Functions for Power Analysis","Description":"Power analysis functions along the lines of Cohen (1988).","Published":"2017-03-25","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"pwr2","Version":"1.0","Title":"Power and Sample Size Analysis for One-way and Two-way ANOVA\nModels","Description":"User-friendly functions for power and sample size analysis in one-way and two-way ANOVA settings that take either effect size or delta and sigma as arguments. In addition, a function for plotting power curves is available, so power comparisons can be easily visualized by statisticians and clinical researchers.","Published":"2017-05-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"pwrAB","Version":"0.1.0","Title":"Power Analysis for AB Testing","Description":"Power analysis for AB testing. The calculations are based on Welch's unequal variances t-test,\n which is generally preferred over the Student's t-test when sample sizes and variances of the two groups are\n unequal, which is frequently the case in AB testing. 
In such situations, the Student's t-test will give \n biased results due to using the pooled standard deviation, unlike Welch's t-test.","Published":"2017-06-06","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"PwrGSD","Version":"2.000","Title":"Power in a Group Sequential Design","Description":"Tools for the evaluation of interim analysis plans for sequentially\n monitored trials on a survival endpoint; tools to construct efficacy and \n futility boundaries, to derive the power of a sequential design at a specified\n alternative, and a template for evaluating the performance of candidate plans at a \n set of time-varying alternatives.","Published":"2014-03-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pwrRasch","Version":"0.1-2","Title":"Statistical Power Simulation for Testing the Rasch Model","Description":"Statistical power simulation for testing the Rasch Model based on a three-way analysis of variance design with mixed classification.","Published":"2015-09-28","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"pwt","Version":"7.1-1","Title":"Penn World Table (Versions 5.6, 6.x, 7.x)","Description":"The Penn World Table provides purchasing power parity and\n\tnational income accounts converted to international prices for\n\t189 countries for some or all of the years 1950-2010.","Published":"2013-07-28","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"pwt8","Version":"8.1-1","Title":"Penn World Table (Version 8.x)","Description":"The Penn World Table 8.x provides information on relative levels of\n\tincome, output, inputs, and productivity for 167 countries\n\tbetween 1950 and 2011.","Published":"2017-01-04","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"pwt9","Version":"9.0-0","Title":"Penn World Table (Version 9.x)","Description":"The Penn World Table 9.x provides information on relative levels of\n\tincome, output, inputs, and productivity for 182 countries\n\tbetween 1950 and 
2014.","Published":"2017-01-04","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"pxR","Version":"0.42.2","Title":"PC-Axis with R","Description":"Provides a set of functions for reading and writing PC-Axis files, used by different statistical organizations around the globe for data dissemination.","Published":"2017-01-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"pxweb","Version":"0.6.3","Title":"R Interface to the PX-Web/PC-Axis API","Description":"Generic interface for the PX-Web/PC-Axis API. The PX-Web/PC-Axis\n API is used by organizations such as Statistics Sweden and Statistics\n Finland to disseminate data. The R package can interact with all\n PX-Web/PC-Axis APIs to fetch information about the data hierarchy, extract\n metadata and extract and parse statistics to R data.frame format. PX-Web is\n a solution to disseminate PC-Axis data files in dynamic tables on the web.\n Since 2013 PX-Web contains an API to disseminate PC-Axis files.","Published":"2016-12-05","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"pycno","Version":"1.2","Title":"Pycnophylactic Interpolation","Description":"Given a SpatialPolygonsDataFrame and a set of populations for each polygon,\n compute a population density estimate based on Tobler's pycnophylactic interpolation\n algorithm. The result is a SpatialGridDataFrame. ","Published":"2014-08-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"pyramid","Version":"1.4","Title":"Functions to draw population pyramid","Description":"Draws population pyramids using (1) a data.frame or (2) vectors.\n\t The former is named pyramid() and the latter pyramids(), a wrapper\n\t function of pyramid(). 
pyramidf() is the function to draw a population\n\t pyramid within a specified frame.","Published":"2014-09-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"PythonInR","Version":"0.1-3","Title":"Use Python from Within R","Description":"Interact with Python from within R.","Published":"2015-11-19","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"qap","Version":"0.1-1","Title":"Heuristics for the Quadratic Assignment Problem (QAP)","Description":"Implements heuristics for the Quadratic Assignment Problem (QAP). Currently only a simulated annealing heuristic is available.","Published":"2017-02-27","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"qat","Version":"0.74","Title":"Quality Assurance Toolkit","Description":"Functions for scientific quality assurance of meteorological data.","Published":"2016-07-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"qboxplot","Version":"0.1","Title":"Quantile-Based Boxplot","Description":"Produce quantile-based box-and-whisker plot(s).","Published":"2017-03-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"QCA","Version":"2.6","Title":"Qualitative Comparative Analysis","Description":"An extensive set of functions to perform Qualitative Comparative Analysis:\n crisp sets ('csQCA'), temporal ('tQCA'), multivalue sets ('mvQCA')\n and fuzzy sets ('fsQCA'), using a GUI - graphical user interface.\n 'QCA' is a methodology that bridges the qualitative and quantitative divide\n in social science research. 
It uses a Boolean algorithm that results in a\n minimal causal combination which explains a given phenomenon.","Published":"2017-03-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"QCAfalsePositive","Version":"1.1.1","Title":"Tests for Type I Error in Qualitative Comparative Analysis (QCA)","Description":"Implements tests for Type I error in Qualitative Comparative Analysis (QCA) that take into account the multiple hypothesis tests inherent in the procedure. Tests can be carried out on three variants of QCA: crisp-set QCA (csQCA), multi-value QCA (mvQCA) and fuzzy-set QCA (fsQCA). For fsQCA, the fsQCApermTest() command implements a permutation test that provides 95% confidence intervals for the number of counterexamples and degree of consistency, respectively. The distributions of permuted values can be plotted against the observed values. For csQCA and mvQCA, simple binomial tests are implemented in csQCAbinTest() and mvQCAbinTest(), respectively.","Published":"2015-05-19","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"QCAGUI","Version":"2.5","Title":"Qualitative Comparative Analysis GUI","Description":"This is an obsolete version of the package QCAGUI, which is now merged with package QCA. ","Published":"2016-11-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"QCApro","Version":"1.1-1","Title":"Professional Functionality for Performing and Evaluating\nQualitative Comparative Analysis","Description":"The 'QCApro' package provides professional functionality for performing configurational comparative research with Qualitative Comparative Analysis (QCA), including crisp-set, multi-value, and fuzzy-set QCA. 
It also offers advanced tools for sensitivity diagnostics and methodological evaluations of QCA.","Published":"2016-07-07","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"QCAtools","Version":"0.2.3","Title":"Helper Functions for QCA in R","Description":"Helper functions for Qualitative Comparative Analysis: evaluate and\n plot Boolean formulae on fuzzy set score data, apply Boolean operations, compute\n consistency and coverage measures.","Published":"2017-01-03","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"qcc","Version":"2.6","Title":"Quality Control Charts","Description":"Shewhart quality control charts for continuous, attribute and count data. Cusum and EWMA charts. Operating characteristic curves. Process capability analysis. Pareto chart and cause-and-effect chart. Multivariate control charts.","Published":"2014-10-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"QCEWAS","Version":"1.1-0","Title":"Fast and Easy Quality Control of EWAS Results Files","Description":"Tools for (automated and manual) quality control of\n the results of Epigenome-Wide Association Studies.","Published":"2016-12-09","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"QCGWAS","Version":"1.0-8","Title":"Quality Control of Genome Wide Association Study results","Description":"Tools for (automated and manual) quality control of\n the results of Genome Wide Association Studies.","Published":"2014-02-12","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"qclust","Version":"1.0","Title":"Robust Estimation of Gaussian Mixture Models","Description":"Robust estimation of Gaussian mixture models fitted by a modified EM algorithm, robust clustering and classification.","Published":"2015-04-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"qcr","Version":"1.0","Title":"Quality Control Review","Description":"Allows the user to generate Shewhart-type charts and to obtain\n numerical results of interest for a process 
quality control\n (involving continuous, attribute or count data).\n This package provides basic functionality for univariable and multivariable\n quality control analysis, including: xbar, xbar-one, S, R, ewma, cusum,\n mewma, mcusum and T2 charts. Nonparametric multivariate\n control charts are also available, as are parametric and nonparametric process capability indices.","Published":"2016-07-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"QCSimulator","Version":"0.0.1","Title":"A 5-Qubit Quantum Computing Simulator","Description":"Simulates a 5 qubit Quantum Computer and evaluates quantum circuits with 1- and 2-qubit\n quantum gates.","Published":"2016-07-02","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"QCSIS","Version":"0.1","Title":"Sure Independence Screening via Quantile Correlation and\nComposite Quantile Correlation","Description":"Quantile correlation-sure independence screening (QC-SIS) and composite quantile correlation-sure independence screening (CQC-SIS) for ultrahigh-dimensional data.","Published":"2015-12-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"qdap","Version":"2.2.5","Title":"Bridging the Gap Between Qualitative Data and Quantitative\nAnalysis","Description":"Automates many of the tasks associated with quantitative\n discourse analysis of transcripts containing discourse\n including frequency counts of sentence types, words, sentences,\n turns of talk, syllables and other assorted analysis tasks. The\n package provides parsing tools for preparing transcript data.\n Many functions enable the user to aggregate data by any number\n of grouping variables, providing analysis and seamless\n integration with other R packages that undertake higher level\n analysis and visualization of text. This affords the user a\n more efficient and targeted analysis. 
'qdap' is designed for\n transcript analysis; however, many functions are applicable to\n other areas of Text Mining/Natural Language Processing.","Published":"2016-06-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"qdapDictionaries","Version":"1.0.6","Title":"Dictionaries and Word Lists for the 'qdap' Package","Description":"A collection of dictionaries and word lists for use with\n the 'qdap' package.","Published":"2015-05-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"qdapRegex","Version":"0.7.2","Title":"Regular Expression Removal, Extraction, and Replacement Tools","Description":"A collection of regular expression tools associated with\n the 'qdap' package that may be useful outside of the context of\n discourse analysis. Tools include\n removal/extraction/replacement of abbreviations, dates, dollar\n amounts, email addresses, hash tags, numbers, percentages,\n citations, person tags, phone numbers, times, and zip codes.","Published":"2017-04-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"qdapTools","Version":"1.3.3","Title":"Tools for the 'qdap' Package","Description":"A collection of tools associated with the 'qdap' package\n that may be useful outside of the context of text analysis.","Published":"2017-06-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"qdm","Version":"0.1-0","Title":"Fitting a Quadrilateral Dissimilarity Model to Same-Different\nJudgments","Description":"This package provides different specifications of a Quadrilateral\n Dissimilarity Model which can be used to fit same-different judgments\n in order to get a predicted matrix that satisfies regular minimality\n [Colonius & Dzhafarov, 2006, Measurement and representations of\n sensations, Erlbaum]. 
From such a matrix, Fechnerian distances can be\n computed.","Published":"2014-10-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"QFASA","Version":"1.0.2","Title":"Quantitative Fatty Acid Signature Analysis","Description":"Accurate estimates of the diets of predators are required\n in many areas of ecology, but for many species current methods are\n imprecise, limited to the last meal, and often biased. The diversity\n of fatty acids and their patterns in organisms, coupled with the\n narrow limitations on their biosynthesis, properties of digestion in\n monogastric animals, and the prevalence of large storage reservoirs of\n lipid in many predators, led us to propose the use of quantitative\n fatty acid signature analysis (QFASA) to study predator diets.","Published":"2016-10-29","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"qfasar","Version":"1.2.0","Title":"Quantitative Fatty Acid Signature Analysis in R","Description":"An implementation of Quantitative Fatty Acid Signature\n Analysis (QFASA) in R. QFASA is a method of estimating the diet\n composition of predators. The fundamental unit of information in\n QFASA is a fatty acid signature (signature), which is a vector of\n proportions describing the composition of fatty acids within lipids.\n Signature data from at least one predator and from samples of all\n potential prey types are required. Calibration coefficients, which\n adjust for the differential metabolism of individual fatty acids by\n predators, are also required. Given those data inputs, a predator\n signature is modeled as a mixture of prey signatures and its diet\n estimate is obtained as the mixture that minimizes a measure of\n distance between the observed and modeled signatures. 
A variety of\n estimation options and simulation capabilities are implemented.\n Please refer to the vignette for additional details and references.","Published":"2017-01-10","License":"Unlimited","snapshot_date":"2017-06-23"} {"Package":"QFRM","Version":"1.0.1","Title":"Pricing of Vanilla and Exotic Option Contracts","Description":"\n Option pricing (financial derivatives) techniques mainly following the textbook 'Options, Futures and Other Derivatives', 9th ed., by John C. Hull, 2014, Prentice Hall. Implementations are via the binomial tree option model (BOPM), the Black-Scholes model, Monte Carlo simulations, etc. \n This package is a result of the Quantitative Financial Risk Management course (STAT 449 and STAT 649) at Rice University, Houston, TX, USA, taught by Oleg Melnikov, statistics PhD student, as of Spring 2015.","Published":"2015-07-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"qGaussian","Version":"0.1.4","Title":"The q-Gaussian Distribution","Description":"Density, distribution function, quantile function and \n random generation for the q-Gaussian distribution with parameters mu and sig.","Published":"2017-03-08","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"QGglmm","Version":"0.5.1","Title":"Estimate Quantitative Genetics Parameters from Generalised\nLinear Mixed Models","Description":"Computes various quantitative genetics parameters from Generalised Linear Mixed Model (GLMM) estimates. 
In particular, it yields the observed phenotypic mean, phenotypic variance and additive genetic variance.","Published":"2016-10-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"qgraph","Version":"1.4.3","Title":"Graph Plotting Methods, Psychometric Data Visualization and\nGraphical Model Estimation","Description":"Can be used to visualize data as networks and provides an interface for visualizing weighted graphical models.","Published":"2017-03-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"qgtools","Version":"1.0","Title":"Tools for Quantitative Genetics Data Analyses","Description":"Two linear mixed model approaches, REML (restricted maximum likelihood) and MINQUE (minimum norm quadratic unbiased estimation), and several resampling techniques are integrated for various quantitative genetics analyses. With these two types of approaches, various unbalanced data structures, missing data, and any irregular genetic mating designs can be analyzed and statistically tested. This package also offers fast computations for many large data sets. Other functions will be added to this R tool in the future.","Published":"2014-09-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"qha","Version":"0.0.8","Title":"Qualitative Harmonic Analysis","Description":"Multivariate description of the state changes of a qualitative variable by \n Correspondence Analysis and Clustering. See:\n Deville, J.C., & Saporta, G. (1983). \n Correspondence analysis, with an extension towards nominal time series. \n Journal of econometrics, 22(1-2), 169-189.\n Corrales, M.L., & Pardo, C.E. (2015) . \n Analisis de datos longitudinales cualitativos con analisis de correspondencias y clasificacion. 
\n Comunicaciones en Estadistica, 8(1), 11-32.","Published":"2016-09-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"QICD","Version":"1.2.0","Title":"Estimate the Coefficients for Non-Convex Penalized Quantile\nRegression Model by using QICD Algorithm","Description":"An extremely fast algorithm, \"QICD\" (Iterative Coordinate \n Descent), for high-dimensional nonconvex penalized quantile \n regression. This algorithm combines the coordinate descent algorithm \n in the inner iteration with the majorization minimization step \n in the outer step. For each inner univariate minimization problem, \n we only need to compute a one-dimensional weighted median, \n which ensures fast computation. Tuning parameter selection is based \n on two different methods: cross validation and BIC for the \n quantile regression model. Details are described in Peng, B. and Wang, L. (2015) \n .","Published":"2017-04-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"qicharts","Version":"0.5.4","Title":"Quality Improvement Charts","Description":"Functions for making run charts and basic Shewhart control\n charts for measure and count data.\n The main function, qic(), creates run and control charts and has a\n simple interface with a rich set of options to control data analysis\n and plotting, including options for automatic data aggregation by\n subgroups, easy analysis of before-and-after data, exclusion of one\n or more data points from analysis, and splitting charts into\n sequential time periods.\n Missing values and empty subgroups are handled gracefully.","Published":"2017-02-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"qiimer","Version":"0.9.4","Title":"Work with QIIME Output Files in R","Description":"Open QIIME output files in R, compute statistics, and\n create plots from the data.","Published":"2015-12-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"qiitr","Version":"0.1.0","Title":"R Interface to Qiita 
API","Description":"Qiita is a technical knowledge sharing and collaboration platform for programmers.\n See for more information.","Published":"2016-11-12","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"qlcData","Version":"0.1.0","Title":"Processing Data for Quantitative Language Comparison (QLC)","Description":"This is a collection of functions to read, recode, and transcode data.","Published":"2015-10-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"qlcMatrix","Version":"0.9.5","Title":"Utility Sparse Matrix Functions for Quantitative Language\nComparison","Description":"Extension of the functionality of the Matrix package for using sparse matrices. Some of the functions are very general, while others are highly specific to the special data format used for quantitative language comparison (QLC).","Published":"2015-10-09","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"qlcVisualize","Version":"0.1.0","Title":"Visualization for Quantitative Language Comparison (QLC)","Description":"Collection of visualizations as used in quantitative language comparison.","Published":"2015-10-23","License":"ACM | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"qLearn","Version":"1.0","Title":"Estimation and inference for Q-learning","Description":"Functions to implement Q-learning for estimating optimal\n dynamic treatment regimes from two stage sequentially\n randomized trials, and to perform inference via m-out-of-n\n bootstrap for parameters indexing the optimal regime.","Published":"2012-03-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"qmap","Version":"1.0-4","Title":"Statistical Transformations for Post-Processing Climate Model\nOutput","Description":"Empirical adjustment of the distribution of variables originating from (regional) climate model simulations using quantile mapping.","Published":"2016-05-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} 
{"Package":"qmethod","Version":"1.5.2","Title":"Analysis of Subjective Perspectives Using Q Methodology","Description":"Analysis of Q methodology, used to identify distinct perspectives existing within a group.\n This methodology is used across social, health and environmental sciences to understand diversity of attitudes, discourses, or decision-making styles (for more information, see ).\n A single function runs the full analysis. Each step can be run separately using the corresponding functions: for automatic flagging of Q-sorts (manual flagging is optional), for statement scores, for distinguishing and consensus statements, and for general characteristics of the factors.\n Additional functions are available to import and export data, to print and plot, to import raw data from individual *.CSV files, and to make printable cards.\n The package also offers functions to print Q cards and to generate Q distributions for study administration.\n The package uses principal components and it allows manual or automatic flagging, a number of mathematical methods for rotation, and a number of correlation coefficients for the initial correlation matrix.\n See further details in the package documentation, and in the web pages below, which include a cookbook, guidelines for more advanced analysis (how to perform manual flagging or change the sign of factors), data management, and a beta graphical user interface for online and offline use.","Published":"2017-03-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"qmrparser","Version":"0.1.5","Title":"Parser combinator in R","Description":"Basic functions for building parsers, with an application to PC-AXIS format files.","Published":"2014-12-21","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"QNB","Version":"1.1.8","Title":"Differential RNA Methylation Analysis for Count-Based\nSmall-Sample Sequencing Data with a Quad-Negative Binomial\nModel","Description":"As a newly emerged research area, 
RNA epigenetics has drawn increasing \n attention recently for the participation of RNA methylation and other \n modifications in a number of crucial biological processes. Thanks to high \n throughput sequencing techniques, such as m6A-Seq, transcriptome-wide RNA \n methylation profiles are now available in the form of count-based data, with \n which it is often of interest to study the dynamics in the epitranscriptomic \n layer. However, the sample size of an RNA methylation experiment is usually \n very small due to its costs; additionally, there usually exist a large \n number of genes whose methylation level cannot be accurately estimated due \n to their low expression level, making differential RNA methylation analysis \n a difficult task.\n We present QNB, a statistical approach for differential RNA methylation \n analysis with count-based small-sample sequencing data. The method is based \n on 4 independent negative binomial distributions with their variances and \n means linked by local regressions. QNB showed improved performance on \n simulated and real m6A-Seq datasets when compared with competing algorithms. 
\n The QNB model is also applicable to other datasets related to RNA \n modifications, including but not limited to RNA bisulfite sequencing, \n m1A-Seq, Par-CLIP, RIP-Seq, etc. Please don't hesitate to contact \n us if you have any questions.","Published":"2017-05-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"QoLR","Version":"1.0.3","Title":"Analysis of Health-Related Quality of Life in Oncology","Description":"To generate the scores of the EORTC QLQ-C30 questionnaire and supplementary modules and to determine the time to health-related quality of life score deterioration in longitudinal analysis.","Published":"2017-05-02","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"QPBoot","Version":"0.2","Title":"Model Validation using Quantile Spectral Analysis and Parametric\nBootstrap","Description":"Provides functionality for model validation by computing a\n parametric bootstrap and comparing the Quantile Spectral Densities.","Published":"2017-06-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"qpcR","Version":"1.4-0","Title":"Modelling and analysis of real-time PCR data","Description":"Model fitting, optimal model selection and calculation of various features that are essential in the analysis of quantitative real-time polymerase chain reaction (qPCR).","Published":"2014-09-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"qPCR.CT","Version":"1.1","Title":"qPCR data analysis and plot package","Description":"Uses 2^ddCT methods to calculate relative gene expression;\n data files can be exported from a Bio-Rad qPCR machine, and the results\n can be plotted with error bars. 
Version 1.1 adds the GroupPlot function, which can\n plot all groups at once.","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"QPot","Version":"1.1","Title":"Quasi-Potential Analysis for Stochastic Differential Equations","Description":"Tools to 1) simulate and visualize stochastic differential\n equations and 2) determine stability of equilibria using the ordered-upwind\n method to compute the quasi-potential.","Published":"2016-04-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"qqman","Version":"0.1.4","Title":"Q-Q and Manhattan Plots for GWAS Data","Description":"Create Q-Q and Manhattan plots for GWAS data from PLINK results.","Published":"2017-03-16","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"QQperm","Version":"1.0.1","Title":"Permutation Based QQ Plot and Inflation Factor Estimation","Description":"Provides users with the necessary utility functions to generate permutation-based QQ plots and also estimate the inflation factor based on the empirical NULL distribution. While it has general utility, it is particularly helpful when the skewness of the Fisher's Exact test in sparse data situations with imbalanced case-control sample sizes renders the reliance on the uniform chi-square expected distribution inappropriate.","Published":"2016-10-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"qqtest","Version":"1.1.1","Title":"Self Calibrating Quantile-Quantile Plots for Visual Testing","Description":"Provides the function qqtest which incorporates uncertainty in its\n qqplot display(s) so that the user might have a better sense of the\n evidence against the specified distributional hypothesis. qqtest draws a\n quantile quantile plot for visually assessing whether the data come from a\n test distribution that has been defined in one of many ways. 
The vertical\n axis plots the data quantiles, the horizontal those of a test distribution.\n The default behaviour generates 1000 samples from the test distribution and\n overlays the plot with shaded pointwise interval estimates for the ordered\n quantiles from the test distribution. A small number of independently\n generated exemplar quantile plots can also be overlaid. Both the interval\n estimates and the exemplars provide different comparative information to\n assess the evidence provided by the qqplot for or against the hypothesis\n that the data come from the test distribution (default is normal or\n gaussian). Finally, a visual test of significance (a lineup plot) can also\n be displayed to test the null hypothesis that the data come from the test\n distribution.","Published":"2016-02-25","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"qqvases","Version":"1.0.0","Title":"Animated Normal Quantile-Quantile Plots","Description":"Presents an explanatory animation of normal quantile-quantile plots based on a water-filling analogy. The animation presents a normal QQ plot as the parametric plot of the water levels in vases defined by two distributions. The distributions decorate the axes in the normal QQ plot and are optionally shown as vases adjacent to the plot. The package draws QQ plots for several distributions, either as samples or continuous functions.","Published":"2016-09-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"QRAGadget","Version":"0.1.0","Title":"A 'Shiny' Gadget for Interactive 'QRA' Visualizations","Description":"Upload raster data and easily create interactive quantitative risk analysis 'QRA' visualizations. 
Select\n from numerous color palettes, base-maps, and different configurations.","Published":"2016-09-24","License":"Apache License | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"qrage","Version":"1.0","Title":"Tools that Create D3 JavaScript Force Directed Graph from R","Description":"Tools that create D3 JavaScript force directed graph from R. D3 JavaScript was created by Michael Bostock. See http://d3js.org/ and, more specifically for Force Directed Graph https://github.com/mbostock/d3/wiki/Force-Layout.","Published":"2015-07-27","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"QRank","Version":"1.0","Title":"A Novel Quantile Regression Approach for eQTL Discovery","Description":"A Quantile Rank-score based test for the identification of expression quantitative trait loci.","Published":"2017-01-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"qrcm","Version":"2.1","Title":"Quantile Regression Coefficients Modeling","Description":"Parametric modeling of quantile regression coefficient functions.\n Can be used with censored and truncated data.","Published":"2017-03-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"qrcode","Version":"0.1.1","Title":"QRcode Generator for R","Description":"Create QRcode in R.","Published":"2015-08-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"QRegVCM","Version":"1.0","Title":"Quantile Regression in Varying-Coefficient Models","Description":"Quantile regression in varying-coefficient models (VCM) using one particular nonparametric technique called P-splines. The functions can be applied on three types of VCM; (1) Homoscedastic VCM, (2) Simple heteroscedastic VCM, and (3) General heteroscedastic VCM. 
","Published":"2016-07-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"qrencoder","Version":"0.1.0","Title":"Quick Response Code (QR Code) / Matrix Barcode Creator","Description":"Quick Response codes (QR codes) are a type of matrix bar code and can be\n used to authenticate transactions, provide access to multi-factor authentication\n services and enable general data transfer in an image. QR codes use four standardized \n encoding modes (numeric, alphanumeric, byte/binary, and kanji) to efficiently store \n data. Matrix barcode generation is performed efficiently in C via the included\n 'libqrencoder' library created by Kentaro Fukuchi.","Published":"2016-09-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"qrfactor","Version":"1.4","Title":"Simultaneous simulation of Q and R mode factor analyses with\nSpatial data","Description":"The qrfactor package simultaneously runs both Q and R mode factor analyses. The package contains only one function called qrfactor() that can perform PCA, R-mode Factor Analysis, Q-mode Factor Analysis, Simultaneous R- and Q-mode Factor Analysis, Principal Coordinate Analysis, as well as Multidimensional Scaling (MDS). Loadings and scores can easily be computed from the simulation. The plot.qrfactor() function offers several annotated biplots for all possible combinations of eigenvectors, loadings, and scores. 
Input data include shapefiles, tables and data frames.","Published":"2014-01-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"qrjoint","Version":"1.0-0","Title":"Joint Estimation in Linear Quantile Regression","Description":"Joint estimation of quantile-specific intercept and slope parameters in a linear regression setting.","Published":"2016-03-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"qrLMM","Version":"1.3","Title":"Quantile Regression for Linear Mixed-Effects Models","Description":"Quantile regression (QR) for Linear \n Mixed-Effects Models via the asymmetric Laplace distribution (ALD). \n It uses the Stochastic Approximation of the EM (SAEM) algorithm for \n deriving exact maximum likelihood estimates and full inference results \n for the fixed-effects and variance components. \n It also provides graphical summaries for assessing the algorithm \n convergence and fitting results.","Published":"2017-03-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"QRM","Version":"0.4-13","Title":"Provides R-Language Code to Examine Quantitative Risk Management\nConcepts","Description":"Accompanying package to the book\n Quantitative Risk Management: Concepts, Techniques and Tools by\n Alexander J. 
McNeil, Rüdiger Frey, and Paul Embrechts.","Published":"2016-03-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"qrmdata","Version":"2016-01-03-1","Title":"Data Sets for Quantitative Risk Management Practice","Description":"Various data sets (stocks, stock indices, constituent data, FX,\n zero-coupon bond yield curves, volatility, commodities) for Quantitative\n Risk Management practice.","Published":"2016-05-29","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"qrmix","Version":"0.9.0","Title":"Quantile Regression Mixture Models","Description":"Implements the robust algorithm for fitting finite mixture models based on quantile regression proposed by Emir et al., 2017 (unpublished).","Published":"2017-05-03","License":"LGPL","snapshot_date":"2017-06-23"} {"Package":"qrmtools","Version":"0.0-7","Title":"Tools for Quantitative Risk Management","Description":"Functions and data sets for reproducing selected results from\n the book \"Quantitative Risk Management: Concepts, Techniques and Tools\".\n Furthermore, new developments and auxiliary functions for Quantitative\n Risk Management practice.","Published":"2017-06-16","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"qrng","Version":"0.0-3","Title":"(Randomized) Quasi-Random Number Generators","Description":"Functionality for generating (randomized) quasi-random numbers in\n high dimensions.","Published":"2016-06-11","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"qrNLMM","Version":"1.4","Title":"Quantile Regression for Nonlinear Mixed-Effects Models","Description":"Quantile regression (QR) for Nonlinear\n Mixed-Effects Models via the asymmetric Laplace distribution (ALD). 
\n It uses the Stochastic Approximation of the EM (SAEM) algorithm for \n deriving exact maximum likelihood estimates and full inference results \n for the fixed-effects and variance components.\n It also provides graphical summaries for assessing the algorithm\n convergence and fitting results.","Published":"2017-03-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"qrnn","Version":"1.1.3","Title":"Quantile Regression Neural Network","Description":"Fit a quantile regression neural network with optional\n left censoring using a variant of the finite smoothing\n algorithm.","Published":"2015-07-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"qrsvm","Version":"0.2.1","Title":"SVM Quantile Regression with the Pinball Loss","Description":"Quantile Regression (QR) using Support Vector Machines under the pinball loss. Estimation is based on \"Nonparametric Quantile Regression\" by I. Takeuchi, Q.V. Le, T. Sears, and A.J. Smola (2004). The implementation relies on the 'quadprog' package, kernel functions from the 'kernlab' package, and nearPD from the 'Matrix' package to find the nearest positive definite kernel matrix. The package estimates quantiles individually; an implementation of non-crossing constraints is planned. The function multqrsvm() now supports a parallel backend for faster fitting. ","Published":"2017-05-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"QSARdata","Version":"1.3","Title":"Quantitative Structure Activity Relationship (QSAR) Data Sets","Description":"Molecular descriptors and outcomes for several public domain data sets.","Published":"2013-07-16","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"qtbase","Version":"1.0.12","Title":"Interface Between R and Qt","Description":"Dynamic bindings to the Qt library for calling Qt\n methods and extending Qt classes from R. 
Other packages build upon 'qtbase'\n to provide special-purpose high-level interfaces to specific parts of Qt.","Published":"2017-04-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"qte","Version":"1.2.0","Title":"Quantile Treatment Effects","Description":"Provides several methods for computing the Quantile Treatment\n Effect (QTE) and Quantile Treatment Effect on the Treated (QTET). The main cases\n covered are (i) Treatment is randomly assigned, (ii) Treatment is as good as\n randomly assigned after conditioning on some covariates (also called conditional\n independence or selection on observables), (iii) Identification is based on a\n Difference in Differences assumption (several varieties are available in the\n package).","Published":"2017-06-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"qtl","Version":"1.41-6","Title":"Tools for Analyzing QTL Experiments","Description":"Analysis of experimental crosses to identify genes\n (called quantitative trait loci, QTLs) contributing to variation in\n quantitative traits.","Published":"2017-06-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"qtlbook","Version":"0.18-5","Title":"Datasets for the R/qtl Book","Description":"Datasets for the book, A Guide to QTL Mapping with R/qtl.","Published":"2016-05-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"qtlc","Version":"1.0","Title":"Densitometric Analysis of Thin-Layer Chromatography Plates","Description":"Densitometric evaluation of the photo-archived quantitative thin-layer chromatography (TLC) plates.","Published":"2016-01-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"qtlcharts","Version":"0.9-6","Title":"Interactive Graphics for QTL Experiments","Description":"Web-based interactive charts (using D3.js) for the analysis of\n experimental crosses to identify genetic loci (quantitative trait\n loci, QTL) contributing to variation in quantitative 
traits.","Published":"2017-06-01","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"qtlDesign","Version":"0.941","Title":"Design of QTL experiments","Description":"Tools for the design of QTL experiments","Published":"2012-10-29","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"qtlhot","Version":"0.9.0","Title":"Inference for QTL Hotspots","Description":"Functions to infer co-mapping trait hotspots and causal models","Published":"2013-09-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"qtlmt","Version":"0.1-4","Title":"Tools for Mapping Multiple Complex Traits","Description":"Provides tools for joint analysis of multiple traits in a backcross (BC) or recombinant inbred lines (RIL) population. It can be used to select an optimal subset of traits for multiple-trait mapping, analyze multiple traits via the SURE model, which can associate different QTL with different traits, and perform multiple-trait composite multiple-interval mapping.","Published":"2015-11-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"qtlnet","Version":"1.3.6","Title":"Causal Inference of QTL Networks","Description":"Functions to Simultaneously Infer Causal Graphs and Genetic Architecture","Published":"2014-03-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"QTLRel","Version":"0.2-15","Title":"Tools for Mapping of Quantitative Traits of Genetically Related\nIndividuals and Calculating Identity Coefficients from\nPedigrees","Description":"This software provides tools for quantitative trait mapping in populations such as advanced intercross lines where relatedness among individuals should not be ignored. It can estimate background genetic variance components, impute missing genotypes, simulate genotypes, perform a genome scan for putative quantitative trait loci (QTL), and plot mapping results. 
It also has functions to calculate identity coefficients from pedigrees, especially suitable for pedigrees that consist of a large number of generations, or estimate identity coefficients from genotypic data in certain circumstances.","Published":"2015-11-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Qtools","Version":"1.2","Title":"Utilities for Quantiles","Description":"This is a collection of functions for unconditional and conditional quantiles. These include methods for transformation-based quantile regression, quantile-based measures of location, scale and shape, methods for quantiles of discrete variables, quantile-based multiple imputation, and restricted quantile regression.","Published":"2017-02-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"qtpaint","Version":"0.9.1","Title":"Qt-Based Painting Infrastructure","Description":"Low-level interface to functionality in Qt for efficiently drawing\n dynamic graphics and handling basic user input.","Published":"2015-11-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"qtutils","Version":"0.1-3","Title":"Miscellaneous Qt-based utilities","Description":"Miscellaneous Qt-based tools for R","Published":"2012-05-23","License":"LGPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"QuACN","Version":"1.8.0","Title":"QuACN: Quantitative Analysis of Complex Networks","Description":"Quantitative Analysis of Complex Networks. This package offers a set of topological network measures to analyze complex Networks structurally.","Published":"2014-11-19","License":"LGPL","snapshot_date":"2017-06-23"} {"Package":"quad","Version":"1.0","Title":"Exact permutation moments of quadratic form statistics","Description":"This package gives you the exact first four permutation moments for the most commonly used quadratic form statistics, which need not be positive definite. 
The extension of this work to quadratic forms greatly expands the utility of density approximations for these problems, including for high-dimensional applications, where the statistics must be extreme in order to exceed stringent testing thresholds. Approximate p-values are obtained by matching the exact moments to the Pearson family of distributions using the PearsonDS package.","Published":"2014-07-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"quadmesh","Version":"0.1.0","Title":"Quadrangle Mesh","Description":"Gridded data need not be regular; quadmesh creates a mesh from rasters. Future versions may be more general.","Published":"2016-06-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"quadprog","Version":"1.5-5","Title":"Functions to solve Quadratic Programming Problems","Description":"This package contains routines and documentation for\n solving quadratic programming problems.","Published":"2013-04-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"quadprogXT","Version":"0.0.1","Title":"Quadratic Programming with Absolute Value Constraints","Description":"Extends the quadprog package to solve quadratic programs with\n absolute value constraints and absolute values in the objective function.","Published":"2017-05-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"quadrupen","Version":"0.2-5","Title":"Sparsity by Worst-Case Quadratic Penalties","Description":"Fits classical sparse regression models with\n efficient active set algorithms by solving quadratic problems. 
Also\n provides a few methods for model selection purposes (cross-validation,\n stability selection).","Published":"2017-03-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"qualCI","Version":"0.1","Title":"Causal Inference with Qualitative and Ordinal Information on\nOutcomes","Description":"Exact one-sided p-values and confidence intervals for an outcome variable defined on an interval measurement scale with only qualitative and ordinal information available. ","Published":"2014-07-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"QualInt","Version":"1.0.0","Title":"Test for Qualitative Interactions","Description":"Used for testing for qualitative interactions between\n treatment effects and patient subgroups. Here the term treatment effect\n means a comparison result of two treatments within each patient subgroup.\n Models included in this package are Gaussian, binomial and Cox models.\n Methods included here are the interval-based graphical approach and the Gail-Simon\n LRT.","Published":"2014-10-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"qualityTools","Version":"1.55","Title":"Statistical Methods for Quality Science","Description":"Contains methods associated with the Define, Measure, Analyze, Improve and Control (i.e. DMAIC) cycle of the Six Sigma Quality Management methodology. It covers distribution fitting, normal and non-normal process capability indices, techniques for Measurement Systems Analysis, especially gage capability indices and Gage Repeatability (i.e. Gage RR) and Reproducibility studies, factorial and fractional factorial designs, as well as response surface methods including the use of desirability functions. Improvement via Six Sigma is a project-based strategy that covers 5 phases: Define - Pareto Chart; Measure - Probability and Quantile-Quantile Plots, Process Capability Indices for various distributions, and Gage RR; Analyze - 
Pareto Chart, Multi-Vari Chart, Dot Plot; Improve - Full and fractional factorial, response surface and mixture designs, as well as the desirability approach for simultaneous optimization of more than one response variable, Normal, Pareto and Lenth Plots of effects, as well as Interaction Plots; Control - Quality Control Charts can be found in the 'qcc' package. The focus is on teaching the statistical methodology used in the Quality Sciences.","Published":"2016-02-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"qualpalr","Version":"0.4.1","Title":"Automatic Generation of Qualitative Color Palettes","Description":"Automatic generation of distinct qualitative color palettes,\n optionally adapted to color blindness. It takes a subspace of the HSL color\n space as input and projects it to the DIN99d color space, where it selects\n and returns colors that are maximally distinct.","Published":"2017-05-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"qualtRics","Version":"2.0","Title":"Download Qualtrics Survey Data Directly into R","Description":"Qualtrics \n allows users to collect online data through surveys.\n This package contains convenience functions to pull\n survey results straight into R using the Qualtrics\n API. See for more \n information about the Qualtrics API.","Published":"2017-06-16","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"qualV","Version":"0.3-3","Title":"Qualitative Validation Methods","Description":"Qualitative methods for the validation of dynamic models.\n It contains (i) an orthogonal set of deviance measures for absolute,\n relative and ordinal scale and (ii) approaches accounting for time\n shifts. The first approach transforms time to take time delays and speed\n differences into account. 
The second divides the time series into\n interval units according to their main features and finds the longest\n common subsequence (LCS) using a dynamic programming algorithm.","Published":"2017-06-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"qualvar","Version":"0.1.0","Title":"Implements Indices of Qualitative Variation Proposed by Wilcox\n(1973)","Description":"Implements indices of qualitative variation proposed by Wilcox (1973).","Published":"2015-08-19","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"Quandl","Version":"2.8.0","Title":"API Wrapper for Quandl.com","Description":"Functions for interacting directly with the Quandl API to offer\n data in a number of formats usable in R, downloading a zip with all data from a\n Quandl database, and the ability to search. This R package uses the Quandl API.\n For more information go to https://www.quandl.com/docs/api. For more help on the\n package itself go to https://www.quandl.com/help/r.","Published":"2016-04-23","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"quantable","Version":"0.2.2","Title":"Streamline Descriptive Analysis of Quantitative Data Matrices","Description":"Methods which streamline the descriptive analysis of quantitative\n matrices. Matrix columns are samples while rows are features, e.g. proteins or genes.","Published":"2016-07-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"quantchem","Version":"0.13","Title":"Quantitative chemical analysis: calibration and evaluation of\nresults","Description":"Statistical evaluation of calibration curves by different\n regression techniques: ordinary, weighted, robust (up to 4th\n order polynomial). Log-log and Box-Cox transform, estimation\n of optimal power and weighting scheme. Tests for\n heteroscedasticity and normality of residuals. Different kinds of\n plots commonly used in illustrating calibrations. 
Easy \"inverse\n prediction\" of concentration by given responses and statistical\n evaluation of results (comparison of precision and accuracy by\n common tests).","Published":"2012-08-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"quanteda","Version":"0.9.9-65","Title":"Quantitative Analysis of Textual Data","Description":"A fast, flexible framework for the management, processing, and\n quantitative analysis of textual data in R.","Published":"2017-05-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"quantification","Version":"0.2.0","Title":"Quantification of Qualitative Survey Data","Description":"Provides different functions for quantifying qualitative survey data. It supports the Carlson-Parkin method, the regression approach, the balance approach and the conditional expectations method.","Published":"2016-11-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"QuantifQuantile","Version":"2.2","Title":"Estimation of Conditional Quantiles using Optimal Quantization","Description":"Estimation of conditional quantiles using optimal quantization.\n Construction of an optimal grid of N quantizers, estimation of conditional\n quantiles and data driven selection of the size N of the grid. Graphical\n illustrations for the selection of N and of resulting estimated curves or\n surfaces when the dimension of the covariate is one or two.","Published":"2015-08-13","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"quantileDA","Version":"1.1","Title":"Quantile Classifier","Description":"Code for centroid, median and quantile classifiers.","Published":"2016-02-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"QuantileGradeR","Version":"0.1.1","Title":"Quantile-Adjusted Restaurant Grading","Description":"Implementation of the food safety restaurant grading system adopted by Public Health - Seattle & King County (see Ashwood, Z.C., Elias, B., and Ho, D.E. 
\"Improving the Reliability of Food Safety Disclosure: A Quantile Adjusted Restaurant Grading System for Seattle-King County\" (working paper)). As reported in the accompanying paper, this package allows jurisdictions to easily implement refinements that address common challenges with unadjusted grading systems. First, in contrast to unadjusted grading, where the most recent single routine inspection is the primary determinant of a grade, grading inputs are allowed to be flexible. For instance, it is straightforward to base the grade on average inspection scores across multiple inspection cycles. Second, the package can identify quantile cutoffs by inputting substantively meaningful regulatory thresholds (e.g., the proportion of establishments receiving sufficient violation points to warrant a return visit). Third, the quantile adjustment equalizes the proportion of establishments in a flexible number of grading categories (e.g., A/B/C) across areas (e.g., ZIP codes, inspector areas) to account for inspector differences. Fourth, the package implements a refined quantile adjustment that addresses two limitations with the stats::quantile() function when applied to inspection score datasets with large numbers of score ties. The quantile adjustment algorithm iterates over quantiles until, over all restaurants in all areas, grading proportions are within a tolerance of desired global proportions. In addition, the package allows a modified definition of \"quantile\" from \"Nearest Rank\". 
Instead of requiring that at least p[1]% of restaurants receive the top grade and at least (p[1]+p[2])% of restaurants receive the top or second best grade for quantiles p, the algorithm searches for cutoffs so that as close as possible to p[1]% of restaurants receive the top grade, and as close as possible to p[2]% of restaurants receive the second top grade.","Published":"2017-02-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"quantmod","Version":"0.4-10","Title":"Quantitative Financial Modelling Framework","Description":"Specify, build, trade, and analyse quantitative financial trading strategies.","Published":"2017-06-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"quantoptr","Version":"0.1.2","Title":"Algorithms for Quantile- And Mean-Optimal Treatment Regimes","Description":"Estimation methods for optimal treatment regimes under three different criteria, namely marginal quantile, marginal mean, and mean absolute difference. For the first two criteria, both one-stage and two-stage estimation methods are implemented. A doubly robust estimator for estimating the quantile-optimal treatment regime is also included. ","Published":"2017-03-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"QuantPsyc","Version":"1.5","Title":"Quantitative Psychology Tools","Description":"Contains functions useful for data screening, testing\n moderation, mediation and estimating power.","Published":"2012-03-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"quantreg","Version":"5.33","Title":"Quantile Regression","Description":"Estimation and inference methods for models of conditional quantiles: \n Linear and nonlinear parametric and non-parametric (total variation penalized) models \n for conditional quantiles of a univariate response and several methods for handling\n censored survival data. 
Portfolio selection methods based on expected shortfall\n risk are also included.","Published":"2017-04-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"quantreg.nonpar","Version":"1.0","Title":"Nonparametric Series Quantile Regression","Description":"Implements the nonparametric quantile regression method developed by Belloni, Chernozhukov, and Fernandez-Val (2011) for partially linear quantile models. Provides point estimates of the conditional quantile function and its derivatives based on series approximations to the nonparametric part of the model. Provides pointwise and uniform confidence intervals using analytic and resampling methods.","Published":"2016-04-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"quantregForest","Version":"1.3-5","Title":"Quantile Regression Forests","Description":"Quantile Regression Forests is a tree-based ensemble\n method for estimation of conditional quantiles. It is\n particularly well suited for high-dimensional data. Predictor\n variables of mixed classes can be handled. The package is\n dependent on the package 'randomForest', written by Andy Liaw.","Published":"2016-05-19","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"quantregGrowth","Version":"0.3-2","Title":"Growth Charts via Regression Quantiles","Description":"Fits non-crossing regression quantiles as a function of linear covariates and smooth terms via B-splines with difference penalties. 
","Published":"2016-09-20","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"quantspec","Version":"1.2-1","Title":"Quantile-Based Spectral Analysis of Time Series","Description":"Methods to determine, smooth and plot quantile periodograms for\n univariate and multivariate time series.","Published":"2016-03-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"QuantTools","Version":"0.5.5","Title":"Enhanced Quantitative Trading Modelling","Description":"Download and organize historical market data from multiple sources like Yahoo (), Google (), Finam (), MOEX () and IQFeed (). Code your trading algorithms in modern C++11 with a powerful event-driven tick processing API including trading costs and exchange communication latency and transform detailed data seamlessly into R. In just a few lines of code you will be able to visualize every step of your trading model from tick data to multi-dimensional heat maps.","Published":"2017-05-19","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"QuantumClone","Version":"1.0.0.4","Title":"Clustering Mutations using High Throughput Sequencing (HTS) Data","Description":"Using HTS data, clusters mutations in order to recreate putative\n clones from the data provided. It requires genotype at the location of the\n variant as well as the depth of coverage and number of reads supporting the\n mutation. Additional information may be provided, such as the contamination\n in the tumor sample. 
This package also provides a function QuantumCat() which\n simulates data obtained from tumor sequencing.","Published":"2017-03-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"quarrint","Version":"1.0.0","Title":"Interaction Prediction Between Groundwater and Quarry Extension\nUsing Discrete Choice Models and Artificial Neural Networks","Description":"An implementation of two interaction indices between extractive\n activity and groundwater resources based on hazard and vulnerability\n parameters used in the assessment of natural hazards. One index is based\n on a discrete choice model and the other relies on an artificial\n neural network.","Published":"2016-11-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"QuasiSeq","Version":"1.0-8","Title":"Analyzing RNA Sequencing Count Tables Using Quasi-Likelihood","Description":"Identify differentially expressed genes in RNA-seq count data using quasi-Poisson or quasi-negative binomial models with 'QL', 'QLShrink' and 'QLSpline' methods (Lund, Nettleton, McCarthy, and Smyth, 2012).","Published":"2015-05-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"questionr","Version":"0.6.1","Title":"Functions to Make Surveys Processing Easier","Description":"Set of functions to make the processing and analysis of\n surveys easier: interactive shiny apps and addins for data recoding,\n contingency tables, dataset metadata handling, and several convenience\n functions.","Published":"2017-06-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"queuecomputer","Version":"0.8.1","Title":"Computationally Efficient Queue Simulation","Description":"Implementation of a computationally efficient method for\n simulating queues with arbitrary arrival and service times.","Published":"2017-04-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"queueing","Version":"0.2.10","Title":"Analysis of Queueing Networks and Models","Description":"It provides versatile 
tools for analysis of birth and death based Markovian Queueing Models\n and Single and Multiclass Product-Form Queueing Networks.\n It implements M/M/1, M/M/c, M/M/Infinite, M/M/1/K, M/M/c/K, M/M/c/c, M/M/1/K/K, M/M/c/K/K, M/M/c/K/m, M/M/Infinite/K/K,\n Multiple Channel Open Jackson Networks, Multiple Channel Closed Jackson Networks,\n Single Channel Multiple Class Open Networks, Single Channel Multiple Class Closed Networks\n and Single Channel Multiple Class Mixed Networks.\n It also provides B-Erlang, C-Erlang and Engset calculators.\n This work is dedicated to the memory of D. Sixto Rios Insua.","Published":"2017-05-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"quhomology","Version":"1.1.0","Title":"Calculation of Homology of Quandles, Racks, Biquandles and\nBiracks","Description":"This calculates the Quandle, Rack and Degenerate Homology groups of\n Racks and Biracks (as well as Quandles and Biquandles). In addition, a test is\n provided to ascertain if a given set with one or two given functions is indeed a\n biquandle or not.","Published":"2016-05-02","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"QUIC","Version":"1.1","Title":"Regularized sparse inverse covariance matrix estimation","Description":"Use Newton's method and coordinate descent to solve the\n regularized inverse covariance matrix estimation problem.\n Please refer to: Sparse Inverse Covariance Matrix Estimation\n Using Quadratic Approximation, Cho-Jui Hsieh, Matyas A. Sustik,\n Inderjit S. Dhillon, Pradeep Ravikumar, Advances in Neural\n Information Processing Systems 24, 2011, p. 
2330--2338.","Published":"2012-05-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"quickmapr","Version":"0.2.0","Title":"Quickly Map and Explore Spatial Data","Description":"While analyzing geospatial data, easy visualization is often\n needed that allows for quick plotting and simple interactivity.\n Additionally, visualizing geospatial data in projected coordinates is also\n desirable. The 'quickmapr' package provides a simple method to visualize 'sp'\n and 'raster' objects and allows for basic zooming, panning, identifying,\n labeling, selecting, and measuring spatial objects. Importantly, it does \n not require that the data be in geographic coordinates.","Published":"2016-09-17","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"quickmatch","Version":"0.1.2","Title":"Quick Generalized Full Matching","Description":"\n Provides functions for constructing near-optimal generalized full matching.\n Generalized full matching is an extension of the original full matching method\n to situations with more intricate study designs. The package is made with\n large data sets in mind and derives matches more than an order of magnitude\n quicker than other methods.","Published":"2017-05-20","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"quickpsy","Version":"0.1.4","Title":"Fits Psychometric Functions for Multiple Groups","Description":"Quickly fits and plots psychometric functions (normal, logistic,\n Weibull or any function defined by the user) for multiple groups.","Published":"2016-10-27","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"quickReg","Version":"1.0.0","Title":"Build Regression Models Quickly and Display the Results Using\n'ggplot2'","Description":"A set of functions to extract results from regression models and\n plot the effect size using 'ggplot2' seamlessly. 
While 'broom' is useful to\n convert statistical analysis objects into tidy data frames, 'coefplot' is adept at showing\n multivariate regression results. Given a specific outcome, this package can build regression models\n automatically, extract results into a data frame and provide a quicker way to summarize\n models' statistical findings using 'ggplot2'.","Published":"2016-08-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"quint","Version":"1.2.1","Title":"Qualitative Interaction Trees","Description":"Grows a qualitative interaction tree. Quint is a tool for subgroup analysis, suitable for data from a two-arm randomized controlled trial.","Published":"2016-12-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"quipu","Version":"1.9.0","Title":"Summary charts of micro satellite profiles for a set of\nbiological samples","Description":"Gene banks increasingly use molecular markers for routine\n characterization of plant collections and farmer managed diversity. The\n gene bank of the International Potato Center presently uses a\n micro-satellite marker kit to produce molecular profiles for potato\n accessions. We have been searching for a compact graphical representation\n that shows both molecular diversity and accession characteristics - thus\n permitting biologists and collection curators to have a simple way to\n interpret molecular data. Inspired by the ancient Andean data recording\n system we devised a graph that allows for standardized representation while\n leaving room for updates of the marker kit and the collection of\n accessions. 
The graph has been used in several catalogs of potatoes.","Published":"2014-04-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"qut","Version":"1.2","Title":"Quantile Universal Threshold","Description":"Selection of a threshold parameter based on the Quantile Universal Threshold (QUT) for GLM-lasso and Square-root lasso to obtain a sparse model \n with a good compromise between high true positive rate and low false discovery rate.","Published":"2017-03-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"qVarSel","Version":"1.0","Title":"Variables Selection for Clustering and Classification","Description":"For a given data matrix A and cluster centers/prototypes collected in the matrix P, the functions described here select a subset of statistical variables Q that best explains/justifies P as prototypes. The functions are useful to reduce the data dimension for classification and to discard masking variables for clustering.","Published":"2014-06-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"qvcalc","Version":"0.9-0","Title":"Quasi Variances for Factor Effects in Statistical Models","Description":"Functions to compute quasi variances and associated measures of approximation error.","Published":"2016-03-30","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"QVM","Version":"0.1.1","Title":"Questionnaires Validation Module","Description":"Implement a multivariate analysis interface for questionnaire validation of Likert-type scale variables.","Published":"2016-07-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"qwraps2","Version":"0.2.4","Title":"Quick Wraps 2","Description":"A collection of (wrapper) functions the creator found useful\n for quickly placing data summaries and formatted regression results into\n '.Rnw' or '.Rmd' files. Functions for generating commonly used graphics,\n such as receiver operating curves or Bland-Altman plots, are also provided\n by 'qwraps2'. 
'qwraps2' is an updated version of the package 'qwraps'. The\n original version 'qwraps' was never submitted to CRAN but can be found at\n . The implementation and limited scope\n of the functions within 'qwraps2' is\n fundamentally different from 'qwraps'.","Published":"2016-12-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"QZ","Version":"0.1-6","Title":"Generalized Eigenvalues and QZ Decomposition","Description":"Generalized eigenvalues and QZ decomposition\n (generalized Schur form) for an N-by-N non-symmetric\n matrix A or paired matrices (A,B) with an eigenvalue reordering\n mechanism. The package is mainly based on the complex*16 and double\n precision routines of the LAPACK library (version 3.4.2).","Published":"2017-05-14","License":"Mozilla Public License 2.0","snapshot_date":"2017-06-23"} {"Package":"R.cache","Version":"0.12.0","Title":"Fast and Light-Weight Caching (Memoization) of Objects and\nResults to Speed Up Computations","Description":"Memoization can be used to speed up repetitive and computationally expensive function calls. The first time a function that implements memoization is called the results are stored in a cache memory. The next time the function is called with the same set of parameters, the results are retrieved from the cache, avoiding repeating the calculations. With this package, any R object can be cached in a key-value storage where the key can be an arbitrary set of R objects. The cache memory is persistent (on the file system).","Published":"2015-11-12","License":"LGPL (>= 2.1)","snapshot_date":"2017-06-23"} {"Package":"R.devices","Version":"2.15.1","Title":"Unified Handling of Graphics Devices","Description":"Functions for creating plots and image files in a unified way\n regardless of output format (EPS, PDF, PNG, SVG, TIFF, WMF, etc.). Default\n device options as well as scales and aspect ratios are controlled in a uniform\n way across all device types. Switching output format requires minimal changes\n in code. 
This package is ideal for large-scale batch processing, because it\n will never leave open graphics devices or incomplete image files behind, even on\n errors or user interrupts.","Published":"2016-11-10","License":"LGPL (>= 2.1)","snapshot_date":"2017-06-23"} {"Package":"R.filesets","Version":"2.11.0","Title":"Easy Handling of and Access to Files Organized in Structured\nDirectories","Description":"A file set refers to a set of files located in one or more directories on the file system. This package provides classes and methods to locate, setup, subset, navigate and iterate such sets. The API is designed such that these classes can be extended via inheritance to provide a richer API for special file formats. Moreover, a specific name format is defined such that filenames and directories can be considered to have full names which consist of a name followed by comma-separated tags. This adds additional flexibility to identify file sets and individual files. NOTE: This package's API should be considered to be in a beta stage. Its main purpose is currently to support the aroma.* packages, where it is one of the main core components; if you decide to build on top of this package, please contact the author first.","Published":"2017-02-28","License":"LGPL (>= 2.1)","snapshot_date":"2017-06-23"} {"Package":"R.huge","Version":"0.9.0","Title":"Methods for Accessing Huge Amounts of Data [deprecated]","Description":"DEPRECATED. Do not start building new projects based on this package. Cross-platform alternatives are the following packages: bigmemory (CRAN), ff (CRAN), BufferedMatrix (Bioconductor). The main usage of it was inside the aroma.affymetrix package. (The package currently provides a class representing a matrix where the actual data is stored in a binary format on the local file system. 
This way the size limit of the data is set by the file system and not the memory.)","Published":"2015-02-22","License":"LGPL (>= 2.1)","snapshot_date":"2017-06-23"} {"Package":"r.jive","Version":"2.1","Title":"Perform JIVE Decomposition for Multi-Source Data","Description":"Performs the JIVE decomposition on a list of data sets when the data share a dimension, returning low-rank matrices that capture the joint and individual structure of the data [O'Connell, MJ and Lock, EF (2016) ]. It provides two methods of rank selection when the rank is unknown, a permutation test and a BIC selection algorithm. Also included in the package are three plotting functions for visualizing the variance attributed to each data source: a bar plot that shows the percentages of the variability attributable to joint and individual structure, a heatmap that shows the structure of the variability, and principal component plots. ","Published":"2017-04-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"R.matlab","Version":"3.6.1","Title":"Read and Write MAT Files and Call MATLAB from Within R","Description":"Methods readMat() and writeMat() for reading and writing MAT files. For users with MATLAB v6 or newer installed (either locally or on a remote host), the package also provides methods for controlling MATLAB (trademark) via R and sending and retrieving data between R and MATLAB.","Published":"2016-10-20","License":"LGPL (>= 2.1)","snapshot_date":"2017-06-23"} {"Package":"R.methodsS3","Version":"1.7.1","Title":"S3 Methods Simplified","Description":"Methods that simplify the setup of S3 generic functions and S3 methods. A major effort has been made to make the definition of methods as simple as possible with a minimum of maintenance for package developers. For example, generic functions are created automatically, if missing, and naming conflicts are automatically resolved, if possible. The method setMethodS3() is a good start for those who in the future may want to migrate to S4. 
This is a cross-platform package implemented in pure R that generates standard S3 methods.","Published":"2016-02-16","License":"LGPL (>= 2.1)","snapshot_date":"2017-06-23"} {"Package":"R.oo","Version":"1.21.0","Title":"R Object-Oriented Programming with or without References","Description":"Methods and classes for object-oriented programming in R with or without references. A large effort has been made to make the definition of methods as simple as possible with a minimum of maintenance for package developers. The package has been developed since 2001 and is now considered very stable. This is a cross-platform package implemented in pure R that defines standard S3 classes without any tricks.","Published":"2016-11-01","License":"LGPL (>= 2.1)","snapshot_date":"2017-06-23"} {"Package":"R.rsp","Version":"0.41.0","Title":"Dynamic Generation of Scientific Reports","Description":"The RSP markup language makes any text-based document come alive. RSP provides a powerful markup for controlling the content and output of LaTeX, HTML, Markdown, AsciiDoc, Sweave and knitr documents (and more), e.g. 'Today's date is <%=Sys.Date()%>'. Contrary to many other literate programming languages, with RSP it is straightforward to loop over mixtures of code and text sections, e.g. in month-by-month summaries. RSP has also several preprocessing directives for incorporating static and dynamic contents of external files (local or online) among other things. Functions rstring() and rcat() make it easy to process RSP strings, rsource() sources an RSP file as if it were an R script, while rfile() compiles it (even online) into its final output format, e.g. rfile('report.tex.rsp') generates 'report.pdf' and rfile('report.md.rsp') generates 'report.html'. RSP is ideal for self-contained scientific reports and R package vignettes. 
It's easy to use - if you know how to write an R script, you'll be up and running within minutes.","Published":"2017-04-16","License":"LGPL (>= 2.1)","snapshot_date":"2017-06-23"} {"Package":"R.utils","Version":"2.5.0","Title":"Various Programming Utilities","Description":"Utility functions useful when programming and developing R packages.","Published":"2016-11-07","License":"LGPL (>= 2.1)","snapshot_date":"2017-06-23"} {"Package":"R0","Version":"1.2-6","Title":"Estimation of R0 and Real-Time Reproduction Number from\nEpidemics","Description":"Estimation of reproduction numbers for disease outbreaks, based on\n incidence data. The R0 package implements several documented methods. It is\n therefore possible to compare estimations according to the methods used.\n Depending on the methods requested by the user, basic reproduction number\n (commonly denoted as R0) or real-time reproduction number (referred to as\n R(t)) is computed, along with a 95% Confidence Interval. Plotting outputs\n will give different graphs depending on the methods requested: basic\n reproductive number estimations will only show the epidemic curve\n (collected data) and an adjusted model, whereas real-time methods will also\n show the R(t) variations throughout the outbreak time period. Sensitivity\n analysis tools are also provided, and allow for investigating effects of\n varying Generation Time distribution or time window on estimates.","Published":"2015-05-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"R1magic","Version":"0.3.2","Title":"Compressive Sampling: Sparse Signal Recovery Utilities","Description":"Utilities for sparse signal recovery suitable for compressed sensing. 
L1, L2 and TV penalties, DFT basis matrix, simple sparse signal generator, mutual cumulative coherence between two matrices and examples, Lp complex norm, scaling back regression coefficients.","Published":"2015-04-19","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"R2admb","Version":"0.7.15","Title":"'ADMB' to R Interface Functions","Description":"A series of functions to call 'AD Model Builder' (i.e.,\n compile and run models) from within R, read the results back\n into R as 'admb' objects, and provide standard accessors (i.e.\n coef(), vcov(), etc.)","Published":"2016-12-07","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"R2BayesX","Version":"1.1-0","Title":"Estimate Structured Additive Regression Models with 'BayesX'","Description":"An R interface to estimate structured additive regression (STAR) models with 'BayesX'.","Published":"2016-11-17","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"R2Cuba","Version":"1.1-0","Title":"Multidimensional Numerical Integration","Description":"It is a wrapper around the Cuba-1.6 library by Thomas Hahn available from the URL http://www.feynarts.de/cuba/. Implements four general-purpose multidimensional integration algorithms: Vegas, Suave, Divonne and Cuhre.","Published":"2015-10-02","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"r2d2","Version":"1.0-0","Title":"Bivariate (Two-Dimensional) Confidence Region and Frequency\nDistribution","Description":"This package provides generic functions to analyze the distribution\n of two continuous variables: 'conf2d' to calculate a smooth empirical\n confidence region, and 'freq2d' to calculate a frequency distribution.","Published":"2014-03-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"r2dRue","Version":"1.0.4","Title":"2d Rain Use Efficience model","Description":"2dRUE is a methodology to make a diagnostic of land\n condition in a large territory during a given time period. 
The\n following projects have funded this package: DeSurvey IP (EC\n FP6 Integrated Project contract No. 003950), DesertWatch (ESA\n DUE contract No. 18487/04/I-LG) and MesoTopos (Junta de\n Andalucia PE ref. RNM-4023).","Published":"2013-06-28","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"R2G2","Version":"1.0-2","Title":"Converting R CRAN outputs into Google Earth","Description":"Converting R CRAN outputs into Google Earth.","Published":"2013-04-23","License":"LGPL","snapshot_date":"2017-06-23"} {"Package":"r2glmm","Version":"0.1.1","Title":"Computes R Squared for Mixed (Multilevel) Models","Description":"The model R squared and semi-partial R squared for the linear and\n generalized linear mixed model (LMM and GLMM) are computed with confidence\n limits. The R squared measure from Edwards et al. (2008) \n is extended to the GLMM using penalized quasi-likelihood (PQL) estimation\n (see Jaeger et al. 2016 ). Three methods\n of computation are provided and described as follows. First, the\n Kenward-Roger approach. Due to some inconsistency between the 'pbkrtest'\n package and the 'glmmPQL' function, the Kenward-Roger approach in the\n 'r2glmm' package is limited to the LMM. Second, the method introduced\n by Nakagawa and Schielzeth (2013) \n and later extended by Johnson (2014) .\n The 'r2glmm' package only computes marginal R squared for the LMM and does\n not generalize the statistic to the GLMM; however, confidence limits and\n semi-partial R squared for fixed effects are useful additions. Lastly, an\n approach using standardized generalized variance (SGV) can be used for\n covariance model selection. 
Package installation instructions can be found\n in the readme file.","Published":"2016-11-28","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"R2GUESS","Version":"1.7","Title":"Wrapper Functions for GUESS","Description":"Wrapper functions for GUESS, a GPU-enabled sparse Bayesian variable\n selection method for linear regression based analysis of possibly\n multivariate/correlated outcomes.","Published":"2016-01-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"R2HTML","Version":"2.3.2","Title":"HTML Exportation for R Objects","Description":"Includes HTML function and methods to write in an HTML\n file. Thus, making HTML reports is easy. Includes a function\n that allows redirection on the fly, which appears to be very\n useful for teaching purposes, as students can keep a copy of\n the produced output recording all that they did during the course.\n The package comes with a vignette describing how to write HTML\n reports for statistical analysis. Finally, a driver for 'Sweave'\n allows parsing HTML flat files containing R code and\n automatically writing the corresponding outputs (tables and\n graphs).","Published":"2016-06-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"R2jags","Version":"0.5-7","Title":"Using R to Run 'JAGS'","Description":"Providing wrapper functions to implement Bayesian analysis in JAGS. 
Some major features include monitoring convergence of an MCMC model using the Rubin and Gelman Rhat statistic, automatically running an MCMC model until it converges, and implementing parallel processing of an MCMC model for multiple chains.","Published":"2015-08-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"r2lh","Version":"0.7","Title":"R to LaTeX and HTML","Description":"Generates univariate and bivariate analyses in LaTeX or\n HTML formats.","Published":"2011-07-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"R2MLwiN","Version":"0.8-3","Title":"Running 'MLwiN' from Within R","Description":"An R command interface to the 'MLwiN' multilevel\n modelling software package.","Published":"2016-09-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"R2OpenBUGS","Version":"3.2-3.2","Title":"Running OpenBUGS from R","Description":"Using this package,\n it is possible to call a BUGS model, summarize inferences and\n convergence in a table and graph, and save the simulations in arrays for easy access\n in R. ","Published":"2017-02-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"R2PPT","Version":"2.1","Title":"Simple R Interface to Microsoft PowerPoint using rcom or\nRDCOMClient","Description":"R2PPT provides a simple set of wrappers to easily use rcom\n or RDCOMClient for generating Microsoft PowerPoint\n presentations.","Published":"2012-08-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"R2STATS","Version":"0.68-38","Title":"A GTK GUI for fitting and comparing GLM and GLMM in R","Description":"R2STATS is a gWidgetsRGtk2 GUI for fitting and comparing GLM or GLMM (based on Douglas Bates' lme4 package) in R. It is designed to make comparisons between numerous models easy, both numerically and graphically, which may be useful for teaching. Relevant plots are automatically produced for each model family. 
R2STATS is *not* a generic graphical interface for R, but a GUI for statistical modelling in a model comparison approach.","Published":"2014-04-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"r2stl","Version":"1.0.0","Title":"r2stl, R package for visualizing data using a 3D printer","Description":"Package r2stl converts R data to STL (stereolithography) files\n that can be used to feed a 3-dimensional printer. The\n 3-dimensional output from an R function can be materialized\n into a solid surface in a plastic material, therefore allowing\n more detailed examination. There are many possible uses for\n this new R tool, such as to examine mathematical expressions\n with very irregular shapes, to aid teaching people with\n impaired vision, to create raised relief maps from digital\n elevation maps (DEMs), to bridge the gap between mathematical\n tools and rapid prototyping, and many more. Ian Walker created\n the function \"r2stl\" and Jose' Gama assembled the package.","Published":"2012-10-05","License":"CC BY-NC-SA 3.0","snapshot_date":"2017-06-23"} {"Package":"R2SWF","Version":"0.9-1","Title":"Convert R Graphics to Flash Animations","Description":"Using the Ming library\n (http://www.libming.org/) to create Flash animations.\n Users can either use the SWF device swf() to generate an SWF file\n directly through plotting functions like plot() and lines(),\n or convert images of other formats (SVG, PNG, JPEG) into SWF.","Published":"2015-09-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"R2ucare","Version":"1.0.0","Title":"Goodness-of-Fit Tests for Capture-Recapture Models","Description":"Performs goodness-of-fit tests for capture-recapture models. 
Also contains several functions to process capture-recapture data.","Published":"2017-04-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"R2wd","Version":"1.5","Title":"Write MS-Word documents from R","Description":"This package uses either the statconnDCOM server (via the\n rcom package) or the RDCOMClient to communicate with MS-Word\n via the COM interface.","Published":"2012-03-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"R2WinBUGS","Version":"2.1-21","Title":"Running 'WinBUGS' and 'OpenBUGS' from 'R' / 'S-PLUS'","Description":"Invoke a 'BUGS' model in 'OpenBUGS' or 'WinBUGS', a class \"bugs\" for 'BUGS' \n results and functions to work with that class.\n Function write.model() allows a 'BUGS' model file to be written. \n The class and auxiliary functions could be used with other MCMC programs, including 'JAGS'.","Published":"2015-07-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"R330","Version":"1.0","Title":"An R package for Stats 330","Description":"This is a collection of useful functions and data for\n Stats 330","Published":"2012-06-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"R4CouchDB","Version":"0.7.5","Title":"A R Convenience Layer for CouchDB 2.0","Description":"Provides a collection of functions for basic\n database and document management operations such as add, get, list access\n or delete. Every cdbFunction() gets and returns a list() containing the\n connection setup. 
Such a list can be generated by cdbIni().","Published":"2017-03-01","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"R4dfp","Version":"0.2-4","Title":"4dfp MRI Image Read and Write Routines","Description":"This package provides an R interface with 2-part 4dfp MRI images\n (.4dfp.ifh and .4dfp.img files.)","Published":"2013-09-16","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"r4ss","Version":"1.24.0","Title":"R Code for Stock Synthesis","Description":"A collection of R functions for use with Stock Synthesis, a\n fisheries stock assessment modeling platform written in ADMB by Dr. Richard\n D. Methot at the NOAA Northwest Fisheries Science Center. The functions\n include tools for summarizing and plotting results, manipulating files,\n visualizing model parameterizations, and various other common stock\n assessment tasks.","Published":"2015-12-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"R6","Version":"2.2.2","Title":"Classes with Reference Semantics","Description":"The R6 package allows the creation of classes with reference\n semantics, similar to R's built-in reference classes. Compared to reference\n classes, R6 classes are simpler and lighter-weight, and they are not built\n on S4 classes so they do not require the methods package. 
These classes\n allow public and private members, and they support inheritance, even when\n the classes are defined in different packages.","Published":"2017-06-17","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"R6Frame","Version":"0.1.0","Title":"R6 Wrapper for Data Frames","Description":"Provides an R6 \"frame\" around data which allows one to create more\n complex objects/operations based on the underlying data.","Published":"2016-05-31","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RAC","Version":"1.1.1","Title":"R Package for Aqua Culture","Description":"Solves the bioenergetic balance for different aquaculture sea fish (Sea Bream and Sea Bass) and shellfish (Mussel and Clam) both at individual and population scale.","Published":"2017-03-20","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"race","Version":"0.1.59","Title":"Racing methods for the selection of the best","Description":"Implementation of some racing methods for the empirical\n selection of the best. 
If the R package `rpvm' is installed\n (and if PVM is available, properly configured, and\n initialized), the evaluation of the candidates is performed in\n parallel on different hosts.","Published":"2012-04-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RAD","Version":"0.3","Title":"Fit RAD models to biological data","Description":"Fit a variety of models to Rank Abundance Data","Published":"2012-06-28","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RADami","Version":"1.1-2","Title":"Phylogenetic Analysis of RADseq Data","Description":"Implements import, export, manipulation, visualization, and downstream\n (post-clustering) analysis of RADseq data, integrating with the 'pyRAD' package by Deren Eaton.","Published":"2017-02-11","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"RADanalysis","Version":"0.5.5","Title":"Normalization and Study of Rank Abundance Distributions","Description":"It has tools for normalization of rank abundance\n distributions (RAD) to a desired number of ranks using the MaxRank\n Normalization method.\n RADs are commonly used in biology/ecology and mathematically equivalent\n to complementary cumulative distributions (CCDFs) which are used in\n physics, linguistics and sociology and more generally in data science.","Published":"2016-03-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"radar","Version":"1.0.0","Title":"Fundamental Formulas for Radar","Description":"Fundamental formulas for Radar, for attenuation, range, velocity,\n effectiveness, power, scatter, doppler, geometry, radar equations, etc.\n Based on Nick Guy's Python package PyRadarMet.","Published":"2014-12-02","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"radarchart","Version":"0.3.1","Title":"Radar Chart from 'Chart.js'","Description":"Create interactive radar charts using the 'Chart.js' 'JavaScript' library\n and the 'htmlwidgets' package. 
'Chart.js' is a \n lightweight library that supports several types of simple chart using the 'HTML5' \n canvas element. This package provides an R interface specifically to the \n radar chart, sometimes called a spider chart, for visualising multivariate data.","Published":"2016-12-20","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"radiant","Version":"0.8.0","Title":"Business Analytics using R and Shiny","Description":"A platform-independent browser-based interface for business\n analytics in R, based on the shiny package. The application combines the\n functionality of radiant.data, radiant.design, radiant.basics,\n radiant.model, and radiant.multivariate.","Published":"2017-04-29","License":"AGPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"radiant.basics","Version":"0.8.0","Title":"Basics Menu for Radiant: Business Analytics using R and Shiny","Description":"The Radiant Basics menu includes interfaces for probability calculation, central limit theorem simulation, comparing means and proportions, goodness-of-fit testing, cross-tabs, and correlation. The application extends the functionality in radiant.data.","Published":"2017-04-27","License":"AGPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"radiant.data","Version":"0.8.1","Title":"Data Menu for Radiant: Business Analytics using R and Shiny","Description":"The Radiant Data menu includes interfaces for loading, saving,\n viewing, visualizing, summarizing, transforming, and combining data. It also\n contains functionality to generate reproducible reports of the analyses\n conducted in the application.","Published":"2017-04-25","License":"AGPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"radiant.design","Version":"0.8.0","Title":"Design Menu for Radiant: Business Analytics using R and Shiny","Description":"The Radiant Design menu includes interfaces for design of\n experiments, sampling, and sample size calculation. 
The application extends\n the functionality in radiant.data.","Published":"2017-04-27","License":"AGPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"radiant.model","Version":"0.8.0","Title":"Model Menu for Radiant: Business Analytics using R and Shiny","Description":"The Radiant Model menu includes interfaces for linear and logistic\n regression, Neural Networks, model evaluation, decision analysis, and\n simulation. The application extends the functionality in radiant.data.","Published":"2017-04-28","License":"AGPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"radiant.multivariate","Version":"0.8.0","Title":"Multivariate Menu for Radiant: Business Analytics using R and\nShiny","Description":"The Radiant Multivariate menu includes interfaces for perceptual\n mapping, factor analysis, cluster analysis, and conjoint analysis. The\n application extends the functionality in radiant.data.","Published":"2017-04-29","License":"AGPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"radiomics","Version":"0.1.2","Title":"Radiomic Image Processing Toolbox","Description":"Functions to extract first and second order statistics from\n images.","Published":"2016-05-04","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RadioSonde","Version":"1.4","Title":"Tools for plotting skew-T diagrams and wind profiles","Description":"RadioSonde is a collection of programs for reading and\n plotting SKEW-T,log p diagrams and wind profiles for data\n collected by radiosondes (the typical weather balloon-borne\n instrument), which we will call \"flights\", \"sondes\", or\n \"profiles\" throughout the associated documentation. The raw\n data files are in a common format that has a header followed by\n specific variables. Use \"help(ExampleSonde)\" for the full\n explanation of the data files. 
","Published":"2014-09-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"radir","Version":"1.0.2","Title":"Inverse-Regression Estimation of Radioactive Doses","Description":"Radioactive doses estimation using individual chromosomal aberrations information. See Higueras M, Puig P, Ainsbury E, Rothkamm K. (2015) .","Published":"2017-01-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"radmixture","Version":"0.0.1","Title":"Calculate Population Stratification","Description":"Implementation of ADMIXTURE for individual ancestry inference in R. Specifically, ADMIXTURE is a software tool for maximum likelihood estimation of individual ancestries from multilocus SNP genotype datasets, see . Users can use 'radmixture' to calculate ancestry components with different public datasets. It is very convenient and fast for personal genotype data. For more details, see .","Published":"2017-03-13","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RadOnc","Version":"1.1.3","Title":"Analytical Tools for Radiation Oncology","Description":"Designed for the import, analysis, and visualization of dosimetric and volumetric data in Radiation Oncology, the tools herein enable import of dose-volume histogram information from multiple treatment planning system platforms and 3D structural representations and dosimetric information from 'DICOM-RT' files. These tools also enable subsequent visualization and statistical analysis of these data.","Published":"2017-03-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RadTran","Version":"1.0","Title":"Radon and Soil Gas Transport in 2D Porous Medium","Description":"Contains 4 different functions for radon and soil gas transport in a porous medium.","Published":"2015-01-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Radviz","Version":"0.7.0","Title":"Project Multidimensional Data in 2D Space","Description":"An implementation of the radviz projection in R. 
It enables the visualization of\n multidimensional data while maintaining the relation to the original dimensions.\n This package provides functions to create and plot radviz projections, and a number of summary\n plots that enable comparison and analysis. For reference see Ankerst et al. (1996) \n for original implementation, \n see Di Caro et al. (2010) for the original method for dimensional\n anchor arrangements.","Published":"2016-12-08","License":"CC BY-NC-SA 4.0","snapshot_date":"2017-06-23"} {"Package":"RAdwords","Version":"0.1.12","Title":"Loading Google Adwords Data into R","Description":"Aims at loading Google Adwords data into R. Adwords is an online\n advertising service that enables advertisers to display advertising copy to web\n users (see for more information). \n Therefore the package implements three main features. First, the package\n provides an authentication process for R with the Google Adwords API (see \n for more information) via OAUTH2.\n Second, the package offers an interface to apply the Adwords query language in\n R and query the Adwords API with ad-hoc reports. Third, the received data are\n transformed into suitable data formats for further data processing and data\n analysis.","Published":"2017-04-06","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rafalib","Version":"1.0.0","Title":"Convenience Functions for Routine Data Exploration","Description":"A series of shortcuts for routine tasks originally developed by Rafael A. Irizarry to facilitate data exploration. 
","Published":"2015-08-09","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"rags2ridges","Version":"2.2","Title":"Ridge Estimation of Precision Matrices from High-Dimensional\nData","Description":"Proper L2-penalized ML estimators for the\n precision matrix as well as supporting functions to employ these estimators\n in a graphical modeling setting.","Published":"2017-04-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ragt2ridges","Version":"0.2.4","Title":"Ridge Estimation of Vector Auto-Regressive (VAR) Processes","Description":"Ridge maximum likelihood estimation of vector auto-regressive processes and supporting functions for their exploitation.","Published":"2017-04-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ragtop","Version":"0.5","Title":"Pricing Equity Derivatives with Extensions of Black-Scholes","Description":"Algorithms to price American and European\n equity options, convertible bonds and a\n variety of other financial derivatives. It uses an\n extension of the usual Black-Scholes model in which\n jump to default may occur at a probability specified\n by a power-law link between stock price and hazard\n rate as found in the paper by Takahashi, Kobayashi,\n and Nakagawa (2001) . We\n use ideas and techniques from Andersen and\n Buffum (2002) and\n Linetsky (2006) .","Published":"2016-09-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RAHRS","Version":"1.0.2","Title":"Data Fusion Filters for Attitude Heading Reference System (AHRS)\nwith Several Variants of the Kalman Filter and the Mahoney and\nMadgwick Filters","Description":"Data fusion filters for Attitude Heading Reference System (AHRS) based on\n Vlad Maximov's GyroLib AHRS library (quaternion based linearized/extended/unscented Kalman filter,\n Euler based LKF, gyro-free with vector matching, SVD calibration and EKF calibration),\n Sebastian O.H. Madgwick AHRS algorithms and Sebastian O.H. 
Madgwick's implementation of the Mahony et al. AHRS algorithm.","Published":"2015-07-18","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"rainbow","Version":"3.4","Title":"Rainbow Plots, Bagplots and Boxplots for Functional Data","Description":"Functions and data sets for functional data display and outlier detection.","Published":"2016-01-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"raincpc","Version":"0.4","Title":"Obtain and Analyze Rainfall Data from the Climate Prediction\nCenter","Description":"The Climate Prediction Center's (CPC) rainfall data for the\n world (1979 to present, 50 km resolution) and the USA (1948 to\n present, 25 km resolution) is one of the few high quality, long\n term, observation based, daily rainfall products available for free.\n Although raw data is available at CPC's ftp site, obtaining,\n processing and visualizing the data is not straightforward. There are\n more than 12,000 files for the world and about 24,000 files for the USA.\n Moreover, file formats and file extensions have not been consistent.\n This package provides functionality to download, process and visualize\n over 35 years of global rainfall data and over 65 years of USA rainfall\n data from CPC.","Published":"2014-08-29","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rainfreq","Version":"0.3","Title":"Rainfall Frequency (Design Storm) Estimates from the US National\nWeather Service","Description":"Estimates of rainfall at desired frequency (e.g., 1% annual\n chance or 100-year return period) and desired duration (e.g.,\n 24-hour duration) are often required in the design of dams and other\n hydraulic structures, catastrophe risk modeling, environmental\n planning and management. One major source of such estimates for the\n USA is the NOAA National Weather Service's (NWS) division of\n Hydrometeorological Design Studies Center (HDSC). 
Raw data from\n NWS-HDSC is available at 1-km resolution and comes as a huge number\n of GIS files. This package provides functionality to easily access\n and analyze the 1-km GIS files provided by NWS' PF Data Server for\n the entire USA. This package also comes with datasets on record point\n rainfall measurements provided by NWS-HDSC.","Published":"2014-11-29","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rakeR","Version":"0.1.2","Title":"Easy Spatial Microsimulation (Raking) in R","Description":"Functions for performing spatial microsimulation ('raking')\n in R.","Published":"2016-11-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rAltmetric","Version":"0.7.0","Title":"Retrieves Altmetrics Data for Any Published Paper from\n'Altmetric.com'","Description":"Provides a programmatic interface to the citation information and alternate metrics provided by 'Altmetric'. Data from Altmetric allows researchers to immediately track the impact of their published work, without having to wait for citations. This allows for faster engagement with the audience interested in your work. For more information, visit .","Published":"2017-04-19","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RAM","Version":"1.2.1.3","Title":"R for Amplicon-Sequencing-Based Microbial-Ecology","Description":"Characterizing environmental microbiota diversity using amplicon-based next generation sequencing (NGS) data. Functions are developed to manipulate operational taxonomic unit (OTU) tables, perform descriptive and inferential statistics, and generate publication-quality plots.","Published":"2016-01-02","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"Ramble","Version":"0.1.1","Title":"Parser Combinator for R","Description":"Parser generator for R using combinatory parsers. 
It\n is inspired by combinatory parsers developed in Haskell.","Published":"2016-10-29","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"Rambo","Version":"1.1","Title":"The Random Subgraph Model","Description":"Estimate the parameters, the number of classes and cluster vertices of a random network into groups with homogeneous connection profiles. The clustering is performed for directed graphs with typed edges (edges are assumed to be drawn from multinomial distributions) for which a partition of the vertices is available.","Published":"2013-11-13","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"rAmCharts","Version":"2.1.3","Title":"JavaScript Charts API Tool","Description":"API for using 'AmCharts' Library. Based on 'htmlwidgets', it\n provides a global architecture to generate 'JavaScript' source code for charts.\n Most of classes in the library have their equivalent in R with S4 classes;\n for those classes, not all properties have been referenced but can easily be\n added in the constructors. Complex properties (e.g. 'JavaScript' object) can\n be passed as named list. See examples at . and for more information\n about the library. The package includes the free version of 'AmCharts'\n Library. Its only limitation is a small link to the web site displayed on\n your charts. If you enjoy this library, do not hesitate to refer to this\n page to purchase a licence, and thus\n support its creators and get a period of Priority Support. See also for more information about 'AmCharts' company.","Published":"2017-01-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ramcmc","Version":"0.1.0","Title":"Robust Adaptive Metropolis Algorithm","Description":"Function for adapting the shape of the random walk Metropolis proposal\n as specified by robust adaptive Metropolis algorithm by Vihola (2012) . 
\n The package also includes fast functions for rank-one Cholesky update and downdate.\n These functions can be used directly from R or the corresponding C++ header files \n can be easily linked to other R packages.","Published":"2016-11-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ramidst","Version":"0.1.0","Title":"An Interface to the AMIDST Toolbox for Data Stream Processing","Description":"Offers a link to some of the functionality of the\n AMIDST toolbox for handling data streams.\n More precisely, the package provides inference and concept drift detection\n using hybrid Bayesian networks.","Published":"2016-10-21","License":"Apache License 2.0","snapshot_date":"2017-06-23"} {"Package":"ramify","Version":"0.3.3","Title":"Additional Matrix Functionality","Description":"Additional matrix functionality for R including: (1) wrappers for \n the base matrix function that allow matrices to be created from character\n strings and lists (the former is especially useful for creating block\n matrices), (2) better printing of large matrices via the generic \"pretty\" \n print function, and (3) a number of convenience functions for users more\n familiar with other scientific languages like 'Julia', 'Matlab'/'Octave', or\n 'Python'+'NumPy'.","Published":"2016-12-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RAMP","Version":"2.0.1","Title":"Regularized Generalized Linear Models with Interaction Effects","Description":"Provides an efficient procedure for fitting the entire solution\n path for high-dimensional regularized quadratic generalized linear models with\n interaction effects under the strong or weak heredity constraint.","Published":"2017-05-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RAMpath","Version":"0.4","Title":"Structural Equation Modeling Using the Reticular Action Model\n(RAM) Notation","Description":"A rewrite of the RAMpath software developed by John McArdle and Steven Boker as an R package. 
In addition to performing regular SEM analysis through the R package lavaan, RAMpath has unique features. First, it can generate path diagrams according to a given model. Second, it can display path tracing rules through path diagrams and decompose total effects into their respective direct and indirect effects as well as decompose variance and covariance into individual bridges. Furthermore, RAMpath can fit dynamic system models automatically based on latent change scores and generate vector field plots based upon results obtained from a bivariate dynamic system. Starting version 0.4, RAMpath can conduct power analysis for both univariate and bivariate latent change score models.","Published":"2016-10-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ramps","Version":"0.6-14","Title":"Bayesian Geostatistical Modeling with RAMPS","Description":"Bayesian geostatistical modeling of Gaussian processes using a reparameterized and marginalized posterior sampling (RAMPS) algorithm designed to lower autocorrelation in MCMC samples. 
Package performance is tuned for large spatial datasets.","Published":"2016-06-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ramsvm","Version":"2.0","Title":"Reinforced Angle-Based Multicategory Support Vector Machines","Description":"Provides a solution path for Reinforced Angle-based Multicategory Support Vector Machines, with linear learning, polynomial learning, and Gaussian kernel learning.","Published":"2016-01-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"randaes","Version":"0.3","Title":"Random number generator based on AES cipher","Description":"The deterministic part of the Fortuna cryptographic\n pseudorandom number generator, described by Schneier & Ferguson\n \"Practical Cryptography\"","Published":"2012-01-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"randgeo","Version":"0.2.0","Title":"Generate Random 'WKT' or 'GeoJSON'","Description":"Generate random positions (latitude/longitude), \n Well-known text ('WKT') points or polygons, or 'GeoJSON' points or \n polygons. ","Published":"2017-02-28","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RandMeta","Version":"0.1.0","Title":"Efficient Numerical Algorithm for Exact Inference in Meta\nAnalysis","Description":"A novel numerical algorithm that provides functionality for estimating the exact 95% confidence interval of the location parameter in the random effects model, and is much faster than the naive method. 
Works best when the number of studies is between 6 and 20.","Published":"2017-04-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"randNames","Version":"0.2.3","Title":"Package Provides Access to Fake User Data","Description":"Generates random names with additional information including fake\n SSNs, gender, location, zip, age, address, and nationality.","Published":"2016-07-02","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"random","Version":"0.2.6","Title":"True Random Numbers using RANDOM.ORG","Description":"The true random number service provided by the RANDOM.ORG\n website created by Mads Haahr samples atmospheric noise via radio tuned to\n an unused broadcasting frequency together with a skew correction algorithm\n due to John von Neumann. More background is available in the included\n vignette based on an essay by Mads Haahr. In its current form, the package\n offers functions to retrieve random integers, randomized sequences and\n random strings.","Published":"2017-02-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"random.polychor.pa","Version":"1.1.4-2","Title":"A Parallel Analysis with Polychoric Correlation Matrices","Description":"The function performs a parallel analysis using simulated polychoric correlation matrices. The nth-percentile of the eigenvalues distribution obtained from both the randomly generated and the real data polychoric correlation matrices is returned. A plot comparing the two types of eigenvalues (real and simulated) will help determine the number of real eigenvalues that outperform random data. The function is based on the idea that if real data are non-normal and the polychoric correlation matrix is needed to perform a Factor Analysis, then the Parallel Analysis method used to choose a non-random number of factors should also be based on randomly generated polychoric correlation matrices and not on Pearson correlation matrices. 
Random data sets are simulated assuming either a uniform or a multinomial distribution, or via the bootstrap method of resampling (i.e., random permutations of cases). Multigroup Parallel Analysis is also made available for random (uniform and multinomial distribution, with or without difficulty factor) and bootstrap methods. An option to choose between default or full output is also available, as well as a parameter to print Fit Statistics (Chi-squared, TLI, RMSEA, RMR and BIC) for the factor solutions indicated by the Parallel Analysis. ","Published":"2016-07-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"randomcoloR","Version":"1.0.0","Title":"Generate Attractive Random Colors","Description":"Simple methods to generate attractive random colors. The random\n colors are from a wrapper of 'randomColor.js'\n . In addition, it also generates\n optimally distinct colors based on k-means (inspired by 'IWantHue'\n ).","Published":"2016-03-29","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"RandomFields","Version":"3.1.50","Title":"Simulation and Analysis of Random Fields","Description":"Methods for the inference on and the simulation of Gaussian fields are provided, as well as methods for the simulation of extreme value random fields.","Published":"2017-04-17","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"RandomFieldsUtils","Version":"0.3.25","Title":"Utilities for the Simulation and Analysis of Random Fields","Description":"Various utilities are provided that might be used in spatial statistics and elsewhere. It delivers a method for solving linear equations that checks the sparsity of the matrix before any algorithm is used. 
Furthermore, it includes the Struve functions.","Published":"2017-04-14","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"randomForest","Version":"4.6-12","Title":"Breiman and Cutler's Random Forests for Classification and\nRegression","Description":"Classification and regression based on a forest of trees\n using random inputs.","Published":"2015-10-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"randomForest.ddR","Version":"0.1.2","Title":"Distributed 'randomForest' for Big Data using 'ddR' API","Description":"Distributed training and prediction of random forest models based upon 'randomForest' package using 'ddR' (Distributed Data Structures) API in the 'ddR' package.","Published":"2017-03-10","License":"GPL (>= 2) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"randomForestSRC","Version":"2.4.2","Title":"Random Forests for Survival, Regression and Classification\n(RF-SRC)","Description":"A unified treatment of Breiman's random forests for survival, regression and classification problems based on Ishwaran and Kogalur's random survival forests (RSF) package. The package runs in both serial and parallel (OpenMP) modes. Now extended to include multivariate and unsupervised forests.","Published":"2017-03-07","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"randomGLM","Version":"1.02-1","Title":"Random General Linear Model Prediction","Description":"The package implements a bagging predictor based on\n general linear models","Published":"2013-01-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"randomizationInference","Version":"1.0.3","Title":"Flexible Randomization-Based Inference","Description":"Allows the user to conduct randomization-based inference for a wide variety of experimental scenarios. 
The package leverages a potential outcomes framework to output randomization-based p-values and null intervals for test statistics geared toward any estimands of interest, according to the specified null and alternative hypotheses. Users can define custom randomization schemes so that the randomization distributions are accurate for their experimental settings. The package also creates visualizations of randomization distributions and can test multiple test statistics simultaneously.","Published":"2015-01-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"randomizeBE","Version":"0.3-3","Title":"Create a Random List for Crossover Studies","Description":"Contains a function to randomize subjects or patients into groups of \n sequences (treatment sequences).\n If a blocksize is given, the randomization will be done within blocks.\n The randomization may be controlled by a Wald-Wolfowitz runs test.\n Functions to obtain the p-value of that test are included.\n The package is mainly intended for randomization of bioequivalence studies\n but may also be used for other clinical crossover studies.\n Contains two helper functions sequences() and williams() to get the sequences \n of commonly used designs in BE studies.","Published":"2017-03-22","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"randomizeR","Version":"1.3","Title":"Randomization for Clinical Trials","Description":"This tool enables the user to choose a randomization procedure\n based on sound scientific criteria. It comprises the generation of\n randomization sequences as well as the assessment of randomization procedures\n based on carefully selected criteria. 
Furthermore, randomizeR provides a\n function for the comparison of randomization procedures.","Published":"2016-06-23","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"randomizr","Version":"0.6.0","Title":"Easy to Use Tools for Common Forms of Random Assignment and\nSampling","Description":"Generates random assignments for common experimental designs and \n\t random samples for common sampling designs.","Published":"2017-05-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"randomLCA","Version":"1.0-11","Title":"Random Effects Latent Class Analysis","Description":"Fits standard and random effects latent class models. The single level random effects model is described in Qu et al and the two level random effects model in Beath and Heller . Examples are given for their use in diagnostic testing.","Published":"2017-03-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"randomNames","Version":"1.0-0.0","Title":"Function for Generating Random Names and a Dataset","Description":"Function for generating random gender and ethnicity correct first and/or last names. Names are chosen proportionally based upon their probability of appearing in a large scale data base of real names.","Published":"2017-06-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"randomUniformForest","Version":"1.1.5","Title":"Random Uniform Forests for Classification, Regression and\nUnsupervised Learning","Description":"Ensemble model, for classification, regression\n\tand unsupervised learning, based on a forest of unpruned \n\tand randomized binary decision trees. Each tree is grown \n\tby sampling, with replacement, a set of variables at each node. \n\tEach cut-point is generated randomly, according to the continuous \n\tUniform distribution. For each tree, data are either bootstrapped \n\tor subsampled. The unsupervised mode introduces clustering, dimension reduction\n\tand variable importance, using a three-layer engine. 
Random Uniform Forests are mainly \n\taimed at lowering correlation between trees (or tree residuals), to provide a deep analysis \n\tof variable importance and to allow native distributed and incremental learning.","Published":"2015-02-16","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RandPro","Version":"0.1.0","Title":"Random Projection","Description":"Performs random projection using the Johnson-Lindenstrauss (JL) Lemma (see William B. Johnson and Joram Lindenstrauss (1984) ). Random Projection is a technique where data in the high dimensional space are projected into the low dimensional space using the JL transform. The original high dimensional data matrix is multiplied by the low dimensional random matrix, which results in a reduced matrix. The random matrix can be generated as a Gaussian matrix or a sparse matrix.","Published":"2017-01-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"randstr","Version":"0.2.0","Title":"Generate Random Strings","Description":"Generate random strings of a dictated size of symbol set and\n distribution of the lengths of strings.","Published":"2016-03-29","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"randtests","Version":"1.0","Title":"Testing randomness in R","Description":"Several non-parametric randomness tests for numeric sequences.","Published":"2014-11-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"randtoolbox","Version":"1.17","Title":"Toolbox for Pseudo and Quasi Random Number Generation and RNG\nTests","Description":"Provides (1) pseudo random generators - general linear congruential generators, multiple recursive generators and generalized feedback shift register (SF-Mersenne Twister algorithm and WELL generators); (2) quasi random generators - the Torus algorithm, the Sobol sequence, the Halton sequence (including the Van der Corput sequence) and (3) some RNG tests - the gap test, the serial test, the poker test. 
The package depends on rngWELL package but it can be provided without this dependency on demand to the maintainer. For true random number generation, use the 'random' package, for Latin Hypercube Sampling (a hybrid QMC method), use the 'lhs' package. A number of RNGs and tests for RNGs are also provided by 'RDieHarder', all available on CRAN. There is also a small stand-alone package 'rngwell19937' for the WELL19937a RNG. ","Published":"2015-07-30","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RandVar","Version":"1.0.1","Title":"Implementation of Random Variables","Description":"Implements random variables by means of S4 classes and methods.","Published":"2017-04-23","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"rangeBuilder","Version":"1.4","Title":"Occurrence Filtering, Geographic and Taxonomic Standardization\nand Generation of Species Range Polygons","Description":"Provides tools for filtering occurrence records, generating alpha-hull-derived range polygons and mapping species distributions. ","Published":"2017-05-31","License":"ACM","snapshot_date":"2017-06-23"} {"Package":"rangeMapper","Version":"0.3-1","Title":"A Platform for the Study of Macro-Ecology of Life History Traits","Description":"Tools for easy generation of (life-history) traits maps based on\n species range (extent-of-occurrence) maps.","Published":"2017-01-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rangemodelR","Version":"1.0.1","Title":"Mid-Domain Effect and Species Richness Patterns","Description":"Generates expected values of species richness, with continuous or\n scattered ranges, for data across one or two dimensions.","Published":"2016-08-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ranger","Version":"0.8.0","Title":"A Fast Implementation of Random Forests","Description":"A fast implementation of Random Forests, particularly suited for high\n dimensional data. 
Ensembles of classification, regression, survival and\n probability prediction trees are supported. Data from genome-wide association\n studies can be analyzed efficiently. In addition to data frames, datasets of\n class 'gwaa.data' (R package 'GenABEL') can be directly analyzed.","Published":"2017-06-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"RankAggreg","Version":"0.5","Title":"Weighted rank aggregation","Description":"This package performs aggregation of ordered lists based\n on the ranks using several different algorithms: Borda count,\n Cross-Entropy Monte Carlo algorithm, Genetic algorithm, and a\n brute force algorithm (for small problems)","Published":"2014-09-01","License":"LGPL","snapshot_date":"2017-06-23"} {"Package":"Rankcluster","Version":"0.94","Title":"Model-Based Clustering for Multivariate Partial Ranking Data","Description":"Implementation of a model-based clustering algorithm for\n ranking data. Multivariate rankings as well as partial rankings are taken\n into account. This algorithm is based on an extension of the Insertion\n Sorting Rank (ISR) model for ranking data, which is a meaningful and\n effective model parametrized by a position parameter (the modal ranking,\n quoted by mu) and a dispersion parameter (quoted by pi). The heterogeneity\n of the rank population is modelled by a mixture of ISR, whereas conditional\n independence assumption is considered for multivariate rankings.","Published":"2016-07-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rankdist","Version":"1.1.2","Title":"Distance Based Ranking Models","Description":"Implements distance based probability models for ranking data. 
\n The supported distance metrics include Kendall distance, Spearman distance, Footrule distance, Hamming distance,\n Weighted-tau distance and Weighted Kendall distance.\n Phi-component model and mixture models are also supported.","Published":"2015-09-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rankFD","Version":"0.0.1","Title":"Rank-Based Tests for General Factorial Designs","Description":"The rankFD() function calculates the Wald-type statistic (WTS) and the ANOVA-type\n\t statistic (ATS) for nonparametric factorial designs, e.g., for count, ordinal or score data\n\t in a crossed design with an arbitrary number of factors.","Published":"2016-06-28","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"rankhazard","Version":"1.1.0","Title":"Rank-Hazard Plots","Description":"Rank-hazard plots (Karvanen and Harrell, 2009) visualize the relative importance of covariates in a proportional hazards model. The key idea is to rank the covariate values and plot the relative hazard as a function of ranks scaled to the interval [0,1]. The relative hazard is plotted with respect to the reference hazard, which can be, e.g., the hazard related to the median of the covariate.","Published":"2016-05-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RankingProject","Version":"0.1.1","Title":"The Ranking Project: Visualizations for Comparing Populations","Description":"Functions to generate plots and tables for comparing independently-\n sampled populations. 
Companion package to \"A Primer on Visualizations\n for Comparing Populations, Including the Issue of Overlapping Confidence\n Intervals\" by Wright, Klein, and Wieczorek (2017, in press).","Published":"2017-03-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RankResponse","Version":"3.1.1","Title":"Ranking Responses in a Single Response Question or a Multiple\nResponse Question","Description":"Methods for ranking responses of a single response question or a multiple response question.","Published":"2014-09-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RANKS","Version":"1.0","Title":"Ranking of Nodes with Kernelized Score Functions","Description":"Implementation of Kernelized score functions and other semi-supervised learning algorithms for node label ranking in biomolecular networks. RANKS can be easily applied to a large set of different relevant problems in computational biology, ranging from automatic protein function prediction, to gene disease prioritization and drug repositioning, and more generally to any bioinformatics problem that can be formalized as a node label ranking problem in a graph. The modular nature of the implementation allows users to experiment with different score functions and kernels and to easily compare the results with baseline network-based methods such as label propagation and random walk algorithms, as well as to enlarge the algorithmic scheme by adding novel user-defined score functions and kernels.","Published":"2015-12-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RANN","Version":"2.5.1","Title":"Fast Nearest Neighbour Search (Wraps ANN Library) Using L2\nMetric","Description":"Finds the k nearest neighbours for every point in a given dataset\n in O(N log N) time using Arya and Mount's ANN library (v1.1.3). There is\n support for approximate as well as exact searches, fixed radius searches\n and 'bd' as well as 'kd' trees. 
The distance is computed using the L2\n (Euclidean) metric. Please see package 'RANN.L1' for the same\n functionality using the L1 (Manhattan, taxicab) metric.","Published":"2017-05-21","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"RANN.L1","Version":"2.5","Title":"Fast Nearest Neighbour Search (Wraps ANN Library) Using L1\nMetric","Description":"Finds the k nearest neighbours for every point in a given dataset\n in O(N log N) time using Arya and Mount's ANN library (v1.1.3). There is\n support for approximate as well as exact searches, fixed radius searches\n and 'bd' as well as 'kd' trees. The distance is computed using the L1\n (Manhattan, taxicab) metric. Please see package 'RANN' for the same\n functionality using the L2 (Euclidean) metric.","Published":"2015-05-04","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"RAP","Version":"1.1","Title":"Reversal Association Pattern","Description":"To find the reversal association between variables.","Published":"2013-05-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rapiclient","Version":"0.1.2","Title":"Dynamic OpenAPI/Swagger Client","Description":"Access services specified in OpenAPI (formerly Swagger) format.\n It is not a code generator. Client is generated dynamically as a list of R \n functions.","Published":"2017-02-14","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RApiDatetime","Version":"0.0.3","Title":"R API Datetime","Description":"Access to the C-level R date and datetime code is provided for\n C-level API use by other packages via registration of native functions.\n Client packages simply include a single header 'RApiDatetime.h' provided\n by this package, and also 'import' it. The R Core group is the original\n author of the code made available with slight modifications by this package. 
","Published":"2017-04-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RapidPolygonLookup","Version":"0.1","Title":"Polygon lookup using kd trees","Description":"Facilitates efficient polygon search using kd trees.\n Coordinate level spatial data can be aggregated to higher geographical\n identities like census blocks, ZIP codes or police district boundaries.\n This process requires mapping each point in the given data set to a\n particular identity of the desired geographical hierarchy. Unless efficient\n data structures are used, this can be a daunting task. The operation\n point.in.polygon() from the package sp is computationally expensive.\n Here, we exploit kd-trees as an efficient nearest neighbor search algorithm\n to dramatically reduce the effective number of polygons being searched.","Published":"2014-01-28","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"RAPIDR","Version":"0.1.1","Title":"Reliable Accurate Prenatal non-Invasive Diagnosis R package","Description":"Package to perform non-invasive fetal testing for aneuploidies\n using sequencing count data from cell-free DNA.","Published":"2014-11-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"RApiSerialize","Version":"0.1.0","Title":"R API Serialization","Description":"This package provides other packages with access to the internal \n R serialization code. Access to this code is provided at the C function\n level by using the native function registration mechanism. Client\n packages simply include a single header file RApiSerializeAPI.h provided by\n this package.\n\n This package builds on the Rhpc package by Junji Nakano and Ei-ji Nakama\n which also includes a (partial) copy of the file src/main/serialize.c from R\n itself. 
\n\n The R Core group is the original author of the serialization code made\n available by this package.","Published":"2014-04-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RAppArmor","Version":"2.0.2","Title":"Bindings to AppArmor and Security Related Linux Tools","Description":"Bindings to various methods in the kernel for enforcing security\n restrictions. AppArmor can apply mandatory access control (MAC) policies on\n a given task (process) via security profiles with detailed ACL definitions. \n In addition the package has kernel bindings for setting the process hardware\n resource limits (rlimit), uid, gid, affinity and priority. The high level R\n function 'eval.secure' builds on these methods to do dynamic sandboxing:\n it evaluates a single R expression within a temporary fork which acts as a \n sandbox by enforcing fine grained restrictions without affecting the main R \n process. Recent versions of this package can also be installed on systems\n without libapparmor, in which case some features are automatically disabled.","Published":"2016-05-17","License":"Apache License (== 2.0)","snapshot_date":"2017-06-23"} {"Package":"rappdirs","Version":"0.3.1","Title":"Application Directories: Determine Where to Save Data, Caches,\nand Logs","Description":"An easy way to determine which directories on the user's computer\n you should use to save data, caches and logs. A port of Python's 'Appdirs'\n (\\url{https://github.com/ActiveState/appdirs}) to R.","Published":"2016-03-28","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rapport","Version":"1.0","Title":"A Report Templating System","Description":"Facilitating the creation of reproducible statistical\n report templates. Once created, rapport templates can be exported to\n various external formats (HTML, LaTeX, PDF, ODT etc.) 
with pandoc as the\n converter backend.","Published":"2015-11-18","License":"AGPL-3","snapshot_date":"2017-06-23"} {"Package":"rapportools","Version":"1.0","Title":"Miscellaneous (stats) helper functions with sane defaults for\nreporting","Description":"Helper functions that act as wrappers to more advanced statistical\n methods with the advantage of having sane defaults for quick reporting.","Published":"2014-01-07","License":"AGPL-3","snapshot_date":"2017-06-23"} {"Package":"raptr","Version":"0.0.3","Title":"Representative and Adequate Prioritization Toolkit in R","Description":"Biodiversity is in crisis. The overarching aim of conservation \n is to preserve biodiversity patterns and processes. To this end, protected \n areas are established to buffer species and preserve biodiversity processes. \n But resources are limited and so protected areas must be cost-effective. This \n package contains tools to generate plans for protected areas (prioritizations),\n using spatially explicit targets for biodiversity patterns and processes. \n To obtain solutions in a feasible amount of time, this package uses the \n commercial 'Gurobi' software package (obtained from ). 
\n Additionally, the 'rgurobi' package can also be installed to provide extra \n functionality (obtained from ).","Published":"2016-11-25","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"RArcInfo","Version":"0.4-12","Title":"Functions to import data from Arc/Info V7.x binary coverages","Description":"This package uses the functions written by Daniel \n Morissette to read geographical information in Arc/Info \n V 7.x format and E00 files to import the coverages into R variables.","Published":"2011-11-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rareGE","Version":"0.1","Title":"Testing Gene-Environment Interaction for Rare Genetic Variants","Description":"Tests gene-environment interaction for rare genetic variants using Sequence Kernel Association Test (SKAT) type gene-based tests. Includes two tests for the interaction term only, and one joint test for genetic main effects and gene-environment interaction.","Published":"2014-07-30","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rareNMtests","Version":"1.1","Title":"Ecological and biogeographical null model tests for comparing\nrarefaction curves","Description":"Randomization tests for the statistical comparison of \\emph{i} = two or more individual-based, sample-based or coverage-based rarefaction curves. The ecological null hypothesis is that the \\emph{i} samples were all drawn randomly from a single assemblage, with (necessarily) a single underlying species abundance distribution. 
The biogeographic null hypothesis is that the \\emph{i} samples were all drawn from different assemblages that, nonetheless, share similar species richness and species abundance distributions","Published":"2014-10-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rarhsmm","Version":"1.0.4","Title":"Regularized Autoregressive Hidden Semi Markov Model","Description":"Fit Gaussian hidden Markov (or semi-Markov) models with / without autoregressive coefficients and with / without regularization. The fitting algorithm for the hidden Markov model is illustrated by Rabiner (1989) . The shrinkage estimation on the covariance matrices is based on the method by Ledoit et al. (2004) . The shrinkage estimation on the autoregressive coefficients uses the elastic net shrinkage detailed in Zou et al. (2005) .","Published":"2017-05-07","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"Rarity","Version":"1.3-6","Title":"Calculation of Rarity Indices for Species and Assemblages of\nSpecies","Description":"Allows calculation of rarity weights for species and indices of rarity for assemblages of species according to different methods (Leroy et al. 2012, Insect. Conserv. Divers. 5:159-168 ; Leroy et al. 2013, Divers. Distrib. 19:794-803 ). ","Published":"2016-12-23","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"rARPACK","Version":"0.11-0","Title":"Solvers for Large Scale Eigenvalue and SVD Problems","Description":"Previously an R wrapper of the 'ARPACK' library\n , and now a shell of the\n R package 'RSpectra', an R interface to the 'Spectra' library\n for solving large scale\n eigenvalue/vector problems. 
The current version of 'rARPACK'\n simply imports and exports the functions provided by 'RSpectra'.\n New users of 'rARPACK' are advised to switch to the 'RSpectra' package.","Published":"2016-03-10","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RaschSampler","Version":"0.8-8","Title":"Rasch Sampler","Description":"MCMC based sampling of binary matrices with fixed margins as used in exact Rasch model tests. ","Published":"2015-07-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rasclass","Version":"0.2.2","Title":"Supervised Raster Image Classification","Description":"Software to perform supervised and pixel based raster image classification. It has been designed to facilitate land-cover analysis. Five classification algorithms can be used: Maximum Likelihood Classification, Multinomial Logistic Regression, Neural Networks, Random Forests and Support Vector Machines. The output includes the classified raster and standard classification accuracy assessment such as the accuracy matrix, the overall accuracy and the kappa coefficient. An option for in-sample verification is available.","Published":"2016-05-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rase","Version":"0.3-2","Title":"Range Ancestral State Estimation for Phylogeography and\nComparative Analyses","Description":"Implements the Range Ancestral State Estimation for phylogeography described in Quintero, I., Keil, P., Jetz, W., & Crawford, F. W. (2015) . It also includes Bayesian inference of ancestral states under a Brownian Motion model of character evolution and Maximum Likelihood estimation of rase for n-dimensional data. 
Visualizing functions in 3D are implemented using the rgl package.","Published":"2017-03-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"raster","Version":"2.5-8","Title":"Geographic Data Analysis and Modeling","Description":"Reading, writing, manipulating, analyzing and modeling of gridded spatial data. The package implements basic and high-level functions. Processing of very large files is supported.","Published":"2016-06-02","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"rasterImage","Version":"0.3.0","Title":"An Improved Wrapper of Image()","Description":"This is a wrapper function for image(), which makes reasonable\n raster plots with nice axis and other useful features.","Published":"2016-09-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rasterKernelEstimates","Version":"1.0.1","Title":"Kernel Based Estimates on in-Memory Raster Images","Description":"Performs kernel based estimates on in-memory raster images \n from the raster package. These kernel estimates include local means\n variances, modes, and quantiles. All results are in the form of \n raster images, preserving original resolution and projection attributes.","Published":"2016-08-04","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rasterVis","Version":"0.41","Title":"Visualization Methods for Raster Data","Description":"Methods for enhanced visualization and interaction with raster data. It implements visualization methods for quantitative data and categorical data, both for univariate and multivariate rasters. It also provides methods to display spatiotemporal rasters, and vector fields. 
See the website for examples.","Published":"2016-12-10","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"RateDistortion","Version":"1.01","Title":"Routines for Solving Rate-Distortion Problems","Description":"An implementation of routines for solving rate-distortion problems.\n Rate-distortion theory is a field within information theory that\n examines optimal lossy compression. That is, given that some\n information must be lost, how can a communication channel be designed\n that minimizes the cost of communication error? Rate-distortion\n theory is concerned with the optimal (minimal cost) solution to such\n tradeoffs. An important tool for solving rate-distortion problems is\n the Blahut algorithm, developed by Richard Blahut and described in:\n\n Blahut, R. E. (1972). Computation of channel capacity and\n rate-distortion functions. IEEE Transactions on Information Theory,\n IT-18(4), 460-473.\n\n This package implements the basic Blahut algorithm, and additionally contains a number of `helper' functions, including a routine for searching for an information channel that minimizes cost subject to a constraint on information rate.","Published":"2015-08-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ratelimitr","Version":"0.3.9","Title":"Rate Limiting for R","Description":"Allows to limit the rate at which one or more functions can be called.","Published":"2017-06-02","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rateratio.test","Version":"1.0-2","Title":"Exact rate ratio test","Description":"A function which performs exact rate ratio tests and returns an object of class htest.","Published":"2014-01-22","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"raters","Version":"2.0.1","Title":"A Modification of Fleiss' Kappa in Case of Nominal and Ordinal\nVariables","Description":"The kappa statistic implemented by Fleiss is a very popular index for assessing the reliability of agreement among multiple 
observers. It is used both in the psychological and in the psychiatric field. Other fields of application are typically medicine, biology and engineering. Unfortunately, the kappa statistic may behave inconsistently in the case of strong agreement between raters, since the index then assumes lower values than would be expected. We propose a modification of the kappa statistic implemented by Fleiss for nominal and ordinal variables. Monte Carlo simulations are used both to test statistical hypotheses and to calculate percentile bootstrap confidence intervals based on the proposed statistic for nominal and ordinal data.","Published":"2014-12-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ratesci","Version":"0.2-0","Title":"Confidence Intervals for Comparisons of Binomial or Poisson\nRates","Description":"Computes confidence intervals for the rate (or risk)\n difference (\"RD\") or rate ratio (or relative risk, \"RR\") for \n binomial proportions or Poisson rates, or for odds ratio \n (\"OR\", binomial only). Also confidence intervals for a single \n binomial or Poisson rate, and intervals for matched pairs. \n Includes asymptotic score methods including skewness corrections, \n which have been developed in Laud (2017, in press)\n from Miettinen & Nurminen (1985) and \n Gart & Nam (1988) . Also includes MOVER methods\n (Method Of Variance Estimates Recovery), derived from the\n Newcombe method but using equal-tailed Jeffreys intervals,\n and generalised for incorporating prior information. \n Also methods for stratified calculations (e.g. meta-analysis),\n either assuming fixed effects or incorporating stratum\n heterogeneity.","Published":"2017-04-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"RatingScaleReduction","Version":"1.1","Title":"Rating Scale Reduction Procedure","Description":"Describes a new procedure for reducing items in a rating scale, called Rating Scale Reduction (RSR). 
A new stop criterion (stop global max) is added to the RSR procedure.","Published":"2017-04-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rationalfun","Version":"0.1-0","Title":"Manipulation of Rational Functions","Description":"This package provides several functions to\n manipulate rational functions, including basic\n arithmetic operators, derivatives and integrals with\n EXPLICIT forms.","Published":"2011-11-02","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"RAtmosphere","Version":"1.1","Title":"Standard Atmospheric Profiles","Description":"This package provides an easy way to produce atmospheric profiles of Pressure,\n Temperature and Density according to the standard atmosphere 1976. \n It also provides profiles of molecular volume backscatter coefficients for standard atmosphere,\n and approximate estimates of sunset, sunrise and solar zenith angle.","Published":"2014-01-15","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"rattle","Version":"4.1.0","Title":"Graphical User Interface for Data Mining in R","Description":"The R Analytic Tool To Learn Easily (Rattle) provides a \n Gnome (RGtk2) based interface to R functionality for data mining. \n The aim is to provide a simple and intuitive interface \n that allows a user to quickly load data from a CSV file \n (or via ODBC), transform and explore the data, \n build and evaluate models, and export models as PMML (predictive\n modelling markup language) or as scores. All of this while knowing little \n about R. All R commands are logged and commented through the log tab. Thus they\n are available to the user as a script file or as an aid for the user to \n learn R or to copy-and-paste directly into R itself. \n Rattle also exports a number of utility \n functions and the graphical user interface, invoked as rattle(), does \n not need to be run to deploy these. 
","Published":"2016-01-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rAverage","Version":"0.5-3","Title":"Parameter Estimation for the Averaging Model of Information\nIntegration Theory","Description":"Functions to estimate parameters of averaging models of Anderson's Information Integration Theory.","Published":"2017-04-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rAvis","Version":"0.1.4","Title":"Interface to the Bird-Watching Dataset Proyecto AVIS","Description":"Interface to database. \n It provides means to download data filtered by species, order,\n family, and several other criteria. Provides also basic functionality to\n plot exploratory maps of the datasets.","Published":"2015-06-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"raw","Version":"0.1.4","Title":"R Actuarial Workshops","Description":"In order to facilitate R instruction for actuaries, we have organized several \n sets of publicly available data of interest to non-life actuaries. In addition, we suggest \n a set of packages, which most practicing actuaries will use routinely. 
Finally, there is \n an R markdown skeleton for basic reserve analysis.","Published":"2016-11-29","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"rbamtools","Version":"2.16.6","Title":"Read and Write BAM (Binary Alignment) Files","Description":"Provides an interface to functions of the 'SAMtools' C-Library by Heng Li.","Published":"2017-03-24","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"rBayesianOptimization","Version":"1.1.0","Title":"Bayesian Optimization of Hyperparameters","Description":"A pure R implementation of Bayesian Global Optimization with Gaussian Processes.","Published":"2016-09-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rbcb","Version":"0.1.1","Title":"R Interface to Brazilian Central Bank Web Services","Description":"The Brazilian Central Bank API delivers many datasets regarding economic\n activity, regional economy, international economy, public finances, credit\n indicators and many more. For more information please see .\n These datasets can be accessed through 'rbcb' functions and can be obtained in\n different data structures common to R ('tibble', 'data.frame', 'xts', ...).","Published":"2017-05-24","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rbefdata","Version":"0.3.5","Title":"BEFdata R package","Description":"Basic R package to access data structures offered by any\n BEFdata portal instance.","Published":"2013-11-18","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rbenchmark","Version":"1.0.0","Title":"Benchmarking Routine for R","Description":"rbenchmark is inspired by the Perl module Benchmark, and\n is intended to facilitate benchmarking of arbitrary R code. The\n library consists of just one function, benchmark, which is a\n simple wrapper around system.time. 
Given a specification of\n the benchmarking process (counts of replications, evaluation\n environment) and an arbitrary number of expressions, benchmark\n evaluates each of the expressions in the specified environment,\n replicating the evaluation as many times as specified, and\n returning the results conveniently wrapped into a data frame.","Published":"2012-08-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Rbent","Version":"0.1.0","Title":"Robust Bent Line Regression","Description":"An implementation of robust bent line regression. It can fit the bent line regression and test the existence of a change point,\n as described in the paper \"Feipeng Zhang and Qunhua Li (2016). Robust bent line regression, submitted.\"","Published":"2016-06-08","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"rBeta2009","Version":"1.0","Title":"The Beta Random Number and Dirichlet Random Vector Generating\nFunctions","Description":"The package contains functions to generate random numbers\n from the beta distribution and random vectors from the\n Dirichlet distribution.","Published":"2012-03-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rbgm","Version":"0.0.4","Title":"Tools for 'Box Geometry Model' (BGM) Files and Topology for the\nAtlantis Ecosystem Model","Description":"Facilities for working with Atlantis box-geometry model (BGM) \n files. Atlantis is a deterministic, biogeochemical, whole-of-ecosystem model. \n Functions are provided to read from BGM files directly, preserving their \n internal topology, as well as helper functions to generate 'Spatial' objects.\n This functionality aims to simplify the creation and modification of box \n and geometry as well as the ability to integrate with other data sources. 
","Published":"2016-07-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rbhl","Version":"0.8.0","Title":"Interface to the 'Biodiversity' 'Heritage' Library","Description":"Interface to 'Biodiversity' 'Heritage' Library ('BHL')\n () 'API'\n (). 'BHL' is a\n repository of 'digitized' literature on 'biodiversity'\n studies, including 'floras', research papers, and more.","Published":"2017-04-12","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rbi","Version":"0.7.0","Title":"R Interface to LibBi","Description":"Provides a complete R interface to LibBi, a library for Bayesian inference (see for more information). This includes functions for manipulating LibBi models, for reading and writing LibBi input/output files, for converting LibBi output to provide traces for use with the coda package, and for running LibBi from R.","Published":"2017-02-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"RbioRXN","Version":"1.5.1","Title":"Process Rhea, KEGG, MetaCyc, Unipathway Biochemical Reaction\nData","Description":"To facilitate retrieving and processing biochemical reaction data such as Rhea, MetaCyc, KEGG and Unipathway, the package provides the functions to download and parse data, instantiate generic reaction and check mass-balance. The package aims to construct an integrated metabolic network and genome-scale metabolic model.","Published":"2015-05-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rbiouml","Version":"1.7","Title":"Interact with BioUML Server","Description":"Functions for connecting to BioUML server, querying BioUML repository and launching BioUML analyses.","Published":"2015-09-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rbison","Version":"0.5.4","Title":"Interface to the 'USGS' 'BISON' 'API'","Description":"Interface to the 'USGS' 'BISON' ()\n 'API', a 'database' for species occurrence data. 
Data comes from\n species in the United States from participating data providers. You can get\n data via 'taxonomic' and location-based queries. A simple function\n is provided to help visualize data.","Published":"2017-04-10","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"Rbitcoin","Version":"0.9.2","Title":"R & bitcoin integration","Description":"Utilities related to Bitcoin. Unified markets API interface\n (bitstamp, kraken, btce, bitmarket). Both public and private API calls.\n Integration of data structures for all markets. Supports SSL. Read Rbitcoin\n documentation (command: ?btc) for more information.","Published":"2014-09-01","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rbitcoinchartsapi","Version":"1.0.4","Title":"R Package for the BitCoinCharts.com API","Description":"An R package for the BitCoinCharts.com API.","Published":"2014-06-14","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"Rblpapi","Version":"0.3.6","Title":"R Interface to 'Bloomberg'","Description":"An R Interface to 'Bloomberg' is provided via the 'Blp API'.","Published":"2017-04-20","License":"file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rbmn","Version":"0.9-2","Title":"Handling Linear Gaussian Bayesian Networks","Description":"Creation, manipulation, simulation of linear Gaussian Bayesian\n networks from text files and more...","Published":"2013-08-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RBMRB","Version":"2.0.4","Title":"BMRB Data Access and Visualization","Description":"The Biological Magnetic Resonance Data Bank (BMRB,) collects, annotates, archives, and disseminates (worldwide in the public domain) the important spectral and quantitative data derived from NMR (Nuclear Magnetic Resonance) spectroscopic investigations of biological macromolecules and metabolites. 
This package provides an interface to the BMRB database for easy data access and includes a minimal set of data visualization functions. Users are encouraged to make their own data visualizations using BMRB data. ","Published":"2017-03-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rbokeh","Version":"0.5.0","Title":"R Interface for Bokeh","Description":"A native R plotting library that provides a flexible declarative interface for creating interactive web-based graphics, backed by the Bokeh visualization library .","Published":"2016-10-12","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"Rborist","Version":"0.1-7","Title":"Extensible, Parallelizable Implementation of the Random Forest\nAlgorithm","Description":"Scalable decision tree training and prediction.","Published":"2017-06-17","License":"MPL (>= 2) | GPL (>= 2) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rbounds","Version":"2.1","Title":"Perform Rosenbaum Bounds Sensitivity Tests for Matched and\nUnmatched Data","Description":"Takes matched and unmatched data and calculates Rosenbaum bounds for the treatment effect. Calculates bounds for binary outcome data, Hodges-Lehmann point estimates, Wilcoxon signed-rank test for matched data and matched IV estimators, Wilcoxon rank sum test, and for data with multiple matched controls. The package is also designed to work with the Matching package and operate on Match() objects.","Published":"2014-12-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RBPcurve","Version":"1.2","Title":"The Residual-Based Predictiveness Curve","Description":"The RBP curve is a visual tool to assess the\n performance of prediction models.","Published":"2017-05-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rbugs","Version":"0.5-9","Title":"Fusing R and OpenBugs and Beyond","Description":"Functions to prepare files needed for running BUGS in\n batch-mode, and running BUGS from R. 
Support for Linux and\n Windows systems with OpenBugs is emphasized.","Published":"2013-04-09","License":"GPL (>= 3.0)","snapshot_date":"2017-06-23"} {"Package":"rbundler","Version":"0.3.7","Title":"Rbundler manages an application's dependencies systematically\nand repeatedly","Description":"Rbundler manages a project-specific library for dependency\n package installation. By specifying dependencies in a DESCRIPTION file\n in a project's root directory, one may install and use dependencies\n in a repeatable fashion without requiring manual maintenance.\n rbundler creates a project-specific R library in\n `PROJECT_ROOT/.Rbundle` (by default) and a project-specific\n `R_LIBS_USER` value, set in `PROJECT_ROOT/.Renviron`. It supports\n dependency management for R standard \"Depends\", \"Imports\",\n \"Suggests\", and \"LinkingTo\" package dependencies. rbundler also\n attempts to validate and install versioned dependencies, such\n as \">=\", \"==\", \"<=\". Note that, due to the way R manages package\n installation, differing nested versioned dependencies are not\n allowed. For example, if your project depends on packages A (== 1),\n and B (== 2), but package A depends on B (== 1), then a nested\n dependency violation will cause rbundler to error out.","Published":"2014-05-08","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rbvs","Version":"1.0.2","Title":"Ranking-Based Variable Selection","Description":"Implements the Ranking-Based Variable Selection\n algorithm for variable selection in high-dimensional data.","Published":"2015-12-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RCA","Version":"2.0","Title":"Relational Class Analysis","Description":"Relational Class Analysis (RCA) is a method for detecting\n heterogeneity in attitudinal data (as described in Goldberg\n A., 2011, Am. J. 
Soc, 116(5)).","Published":"2016-02-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RCALI","Version":"0.2-18","Title":"Calculation of the Integrated Flow of Particles Between Polygons","Description":"Calculate the flow of particles between polygons by two integration methods: integration by a cubature method and integration on a grid of points.","Published":"2017-04-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rcanvec","Version":"0.2.1","Title":"Access and Plot CanVec and CanVec+ Data for Rapid Basemap\nCreation in Canada","Description":"Provides an interface to the National Topographic System (NTS),\n which is the way in which a number of freely available Canadian datasets are\n organized. CanVec and CanVec+ datasets, which include all data used to create\n Canadian topographic maps, are two such datasets that are useful in creating\n vector-based maps for locations across Canada. This package searches CanVec\n data by location, plots it using pretty defaults, and exports it to human-\n readable shapefiles for use in another GIS.","Published":"2016-09-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Rcapture","Version":"1.4-2","Title":"Loglinear Models for Capture-Recapture Experiments","Description":"Estimation of abundance and other demographic parameters for closed \n populations, open populations and the robust design in capture-recapture experiments \n using loglinear models. ","Published":"2014-08-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rCarto","Version":"0.8","Title":"This package builds maps with a full cartographic layout","Description":"This package makes some maps using shapefiles and\n dataframes. 
Five kinds of maps are available: proportional\n circles, proportional circles colored by a discretized\n quantitative variable, proportional circles colored by the\n modalities of a qualitative variable, choropleth and typology.","Published":"2013-03-20","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"RCassandra","Version":"0.1-3","Title":"R/Cassandra Interface","Description":"This package provides a direct interface (without the\n\t use of Java) to the most basic functionality of Apache\n\t Cassandra, such as login, updates and queries.","Published":"2013-12-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rCAT","Version":"0.1.5","Title":"Conservation Assessment Tools","Description":"A set of tools to help with species conservation assessments (Red List threat assessments). Includes tools for Extent of Occurrence, Area of Occupancy, Minimum Enclosing Rectangle, a geographic Projection Wizard and species batch processing.","Published":"2017-01-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rCBA","Version":"0.0.1","Title":"CBA Classifier for R","Description":"Provides implementations of rule pruning algorithms based on the \"Classification Based on Associations\" (CBA). It can be used for building classification models from association rules. Rules are pruned in the order of precedence given by the sort criteria and a default rule is added. CBA was originally proposed by Liu, B., Hsu, W. and Ma, Y. (1998). Integrating Classification and Association Rule Mining. Proceedings KDD-98, New York, 27-31 August. AAAI. pp. 80-86.","Published":"2015-12-11","License":"Apache License (== 2.0)","snapshot_date":"2017-06-23"} {"Package":"rcbalance","Version":"1.8.4","Title":"Large, Sparse Optimal Matching with Refined Covariate Balance","Description":"Tools for large, sparse optimal matching of treated units\n\tand control units in observational studies. 
Provisions are\n\tmade for refined covariate balance constraints, which include\n\tfine and near-fine balance as special cases. Matches are \n\toptimal in the sense that they are computed as solutions to\n\tnetwork optimization problems rather than by greedy algorithms. ","Published":"2017-06-17","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rcbsubset","Version":"1.1.2","Title":"Optimal Subset Matching with Refined Covariate Balance","Description":"Tools for optimal subset matching of treated units\n\tand control units in observational studies, with support\n\tfor refined covariate balance constraints (including\n\tfine and near-fine balance as special cases). A close \n\trelative is the 'rcbalance' package. ","Published":"2016-02-20","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rcc","Version":"1.0.0","Title":"Parametric Bootstrapping to Control Rank Conditional Coverage","Description":"Functions to implement the parametric and non-parametric bootstrap \n confidence interval methods described in Morrison and Simon (2017) .","Published":"2017-02-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rccdates","Version":"1.0.0","Title":"Date Functions for Swedish Cancer Data","Description":"Identify, convert and handle dates as used within the Swedish cancer register and associated cancer quality registers in Sweden. The cancer register in particular sometimes uses nonstandard date variables where day and/or month can be \"00\" or where the date format is a mixture of \"%Y-%m-%d\", \"%Y%m%d\" and \"%y%V\" (two digit year and week number according to ISO 8601, which is not completely supported by R). These dates must be approximated to valid dates before being used in, for example, survival analysis. 
The package also includes some convenient functions for calculating \"lead times\" (relying on 'difftime') and introduces a \"year\" class with relevant S3-methods to handle yearly cohorts.","Published":"2016-07-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rccmisc","Version":"0.3.7","Title":"Miscellaneous R Functions for Swedish Regional Cancer Centers","Description":"Functions either required by other Swedish Regional Cancer Center packages or standalone functions outside the scope of other packages.","Published":"2016-12-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rcdd","Version":"1.2","Title":"Computational Geometry","Description":"R interface to (some of) cddlib\n ().\n Converts back and forth between two representations of a convex polytope:\n as the solution of a set of linear equalities and inequalities and as\n the convex hull of a set of points and rays.\n Also does linear programming and redundant generator elimination\n (for example, convex hull in n dimensions). All functions can use exact\n infinite-precision rational arithmetic.","Published":"2017-06-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rcdk","Version":"3.3.8","Title":"Interface to the CDK Libraries","Description":"Allows the user to access functionality in the\n CDK, a Java framework for chemoinformatics. This allows the user to load\n molecules, evaluate fingerprints, calculate molecular descriptors and so on.\n In addition the CDK API allows the user to view structures in 2D.","Published":"2016-11-26","License":"LGPL","snapshot_date":"2017-06-23"} {"Package":"rcdklibs","Version":"2.0","Title":"The CDK Libraries Packaged for R","Description":"An R interface to the Chemistry Development Kit, a Java library\n for chemoinformatics. Given the size of the library itself, this package is\n not expected to change very frequently. To make use of the CDK within R, it is\n suggested that you use the 'rcdk' package. 
Note that it is possible to directly\n interact with the CDK using 'rJava'. However, 'rcdk' exposes functionality in a more\n idiomatic way. The CDK library itself is released as LGPL and the sources can be\n obtained from .","Published":"2017-06-11","License":"LGPL","snapshot_date":"2017-06-23"} {"Package":"RCEIM","Version":"0.3","Title":"R Cross Entropy Inspired Method for Optimization","Description":"An implementation of a stochastic heuristic method for performing multidimensional function optimization. The method is inspired by the Cross-Entropy Method. It does not rely on derivatives, nor does it impose particularly strong requirements on the function to be optimized. Additionally, it takes advantage of multi-core processing to enable optimization of time-consuming functions.","Published":"2017-04-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcellData","Version":"1.3-2","Title":"Example Dataset for 'Rcell' Package","Description":"Example dataset for 'Rcell' package. Contains images and cell data object. ","Published":"2015-05-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Rcereal","Version":"1.2.1","Title":"C++11 Header Files for 'cereal'","Description":"To facilitate using 'cereal' with Rcpp.\n 'cereal' is a header-only C++11 serialization library.\n 'cereal' takes arbitrary data types and reversibly turns them into\n different representations, such as compact binary encodings, XML,\n or JSON. 'cereal' was designed to be fast, light-weight, and easy\n to extend - it has no external dependencies and can be easily\n bundled with other code or used standalone. 
Please see\n for more information.","Published":"2017-01-26","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RcextTools","Version":"0.1.0","Title":"Analytical Procedures in Support of Brazilian Public Sector\nExternal Auditing","Description":"Set of analytical procedures based on advanced data analysis in support of Brazil's public sector external control activity.","Published":"2017-06-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Rcgmin","Version":"2013-2.21","Title":"Conjugate Gradient Minimization of Nonlinear Functions","Description":"Conjugate gradient minimization of nonlinear functions\n with box constraints incorporating the Dai/Yuan update. This\n implementation should be used in place of the \"CG\" algorithm\n of the optim() function.","Published":"2014-12-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rchallenge","Version":"1.3.0","Title":"A Simple Data Science Challenge System","Description":"A simple data science challenge system using R Markdown and Dropbox .\n It requires no network configuration, does not depend on external platforms\n such as Kaggle, and can be easily installed on a personal computer.","Published":"2016-10-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rchess","Version":"0.1","Title":"Chess Move Generation/Validation, Piece Placement/Movement,\nand Check/Checkmate/Stalemate Detection","Description":"R package for chess validations, piece movements and check\n detection. 
Also integrates functions to plot chess boards given\n Forsyth-Edwards and Portable Game notations.","Published":"2015-11-05","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RchivalTag","Version":"0.0.5","Title":"Analyzing Archival Tagging Data","Description":"A set of functions to generate, access and analyze standard data products from archival tagging data.","Published":"2017-06-08","License":"CC BY-NC-SA 4.0","snapshot_date":"2017-06-23"} {"Package":"Rchoice","Version":"0.3-1","Title":"Discrete Choice (Binary, Poisson and Ordered) Models with Random\nParameters","Description":"An implementation of the simulated maximum likelihood method for the\n estimation of Binary (Probit and Logit), Ordered (Probit and Logit) and\n Poisson models with random parameters for cross-sectional and longitudinal\n data.","Published":"2016-10-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rChoiceDialogs","Version":"1.0.6","Title":"rChoiceDialogs Collection","Description":"Collection of portable choice dialog widgets.","Published":"2014-09-11","License":"LGPL (>= 2.1)","snapshot_date":"2017-06-23"} {"Package":"RChronoModel","Version":"0.4","Title":"Post-Processing of the Markov Chain Simulated by ChronoModel or\nOxcal","Description":"Provides a list of functions for the statistical analysis and the post-processing of the Markov Chains simulated by ChronoModel (see for more information). ChronoModel is user-friendly software for constructing a chronological model in a Bayesian framework. Its output is a sampled Markov chain from the posterior distribution of the dates composing the chronology. 
The functions can also be applied to the analysis of MCMC output generated by the OxCal software.","Published":"2017-01-12","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"rcicr","Version":"0.3.4.1","Title":"Reverse-Correlation Image-Classification Toolbox","Description":"Functions to generate stimuli and analyze data of reverse correlation image classification experiments (psychophysical tasks aimed at visualizing cognitive mental representations of faces).","Published":"2016-07-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RCircos","Version":"1.2.0","Title":"Circos 2D Track Plot","Description":"A simple and flexible way to generate Circos 2D track plot images for genomic data visualization is implemented in this package. The types of plots include heatmap, histogram, lines, scatterplot and tiles; plot items for further decoration include connector, link (lines and ribbons), and text (gene) label. All functions require only the R graphics package that comes with the base R installation. ","Published":"2016-09-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rclimateca","Version":"0.2","Title":"Fetch Climate Data from Environment Canada","Description":"The Environment Canada climate archives \n are an important source of data for climate researchers in Canada and worldwide.\n The repository contains temperature, precipitation, and wind data for more than\n 8,000 locations. 
The functions in this package simplify the process of downloading,\n subsetting, and manipulating these data for the purposes of more efficient workflows\n in climate research.","Published":"2017-02-27","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"RClimMAWGEN","Version":"1.1","Title":"RClimMAWGEN (R Climate Index Multi-site Auto-regressive Weather\nGENerator): a package to generate time series of climate\nindices from RMAWGEN generations","Description":"This package contains wrapper functions and methods which allow one to\n use the \"climdex.pcic\" and \"RMAWGEN\" packages. With this simple approach it is\n possible to calculate climate change indices, suggested by the WMO-CCL,\n CLIVAR, ETCCDMI (http://www.climdex.org), on stochastic generations of\n temperature and precipitation time series, obtained by the application of\n RMAWGEN. Each index can be applied to both observed data and to synthetic\n time series produced by the Weather Generator, over a reference period\n (e.g. 1981-2010, as in the example). 
It also contains functions and methods\n to evaluate the consistency of the generated climate change index time series\n by statistical tests. Bugs/comments/questions/collaboration of any kind are\n warmly welcomed.","Published":"2014-01-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rClinicalCodes","Version":"1.0.1","Title":"R tools for integrating with the www.clinicalcodes.org\nrepository","Description":"R tools for integrating with the www.clinicalcodes.org web\n repository.","Published":"2014-06-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RClone","Version":"1.0.2","Title":"Partially Clonal Populations Analysis","Description":"R version of 'GenClone' (a computer program to analyse genotypic data, test for clonality and describe spatial clonal organization, Arnaud-Haond & Belkhir 2007, ), this package allows clone handling as 'GenClone' does, plus the possibility to work with several populations, MultiLocus Lineages (MLL) custom definition and use, and p-value calculation for psex statistic (probability of originating from distinct sexual events) and psex_Fis statistic (taking into account Hardy-Weinberg equilibrium departure) as 'MLGsim'/'MLGsim2' (a program for detecting clones using a simulation approach, Stenberg et al. 2003).","Published":"2016-06-06","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"Rclusterpp","Version":"0.2.3","Title":"Linkable C++ Clustering","Description":"Provides flexible native clustering routines that can be\n linked against in downstream packages.","Published":"2013-11-06","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rCMA","Version":"1.1","Title":"R-to-Java Interface for 'CMA-ES'","Description":"Tool for providing access to the Java version 'CMAEvolutionStrategy' of\n Nikolaus Hansen. 
'CMA-ES' is the Covariance Matrix Adaptation Evolution Strategy,\n see https://www.lri.fr/~hansen/cmaes_inmatlab.html#java.","Published":"2015-04-30","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"rcmdcheck","Version":"1.2.1","Title":"Run 'R CMD check' from 'R' and Capture Results","Description":"Run 'R CMD check' from 'R' programmatically, and capture the\n results of the individual checks.","Published":"2016-09-28","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"Rcmdr","Version":"2.3-2","Title":"R Commander","Description":"\n A platform-independent basic-statistics GUI (graphical user interface) for R, based on the tcltk package.","Published":"2017-01-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcmdrMisc","Version":"1.0-5","Title":"R Commander Miscellaneous Functions","Description":"\n Various statistical, graphics, and data-management functions used by the Rcmdr package in the R Commander GUI for R. ","Published":"2016-08-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcmdrPlugin.BCA","Version":"0.9-8","Title":"Rcmdr Plug-In for Business and Customer Analytics","Description":"An Rcmdr \"plug-in\" to accompany the book Customer and \n\t\tBusiness Analytics: Applied Data Mining for Business Decision\n\t\tMaking Using R by Daniel S. Putler and Robert E. Krider.","Published":"2014-09-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcmdrPlugin.BiclustGUI","Version":"1.1.0","Title":"'Rcmdr' Plug-in GUI for Biclustering","Description":"A plug-in for R Commander ('Rcmdr'). The package is a Graphical\n User Interface (GUI) in which several biclustering methods can be executed,\n followed by diagnostics and plots of the results. Further, the GUI also has\n the possibility to connect the methods to more general diagnostic packages for\n biclustering. 
Biclustering methods from 'biclust', 'fabia', 's4vd', 'iBBiG',\n 'isa2', 'BiBitR', 'rqubic' and 'BicARE' are implemented. Additionally, 'superbiclust' and\n 'BcDiag' are also implemented to be able to further investigate results. The\n GUI also provides a couple of extra utilities to export, save, search through\n and plot the results. 'RcmdrPlugin.BiclustGUI' also provides a very specific\n framework for biclustering in which new methods, diagnostics and plots can be\n added. Scripts were prepared so that R-package developers can freely design\n their own dialogs in the GUI which can then be added by the maintainer of\n 'RcmdrPlugin.BiclustGUI'. These scripts do not require any knowledge of 'tcltk'\n and 'Rcmdr' and are easy to fill in.","Published":"2017-01-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"RcmdrPlugin.coin","Version":"1.0-22","Title":"Rcmdr Coin Plug-In","Description":"This package provides an Rcmdr \"plug-in\" based on coin (Conditional Inference Procedures in a Permutation Test Framework).","Published":"2014-09-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcmdrPlugin.depthTools","Version":"1.3","Title":"R commander Depth Tools Plug-In","Description":"This package provides an Rcmdr plug-in based on the\n depthTools package, which implements different robust\n statistical tools for the description and analysis of gene\n expression data based on the Modified Band Depth, namely, the\n scale curves for visualizing the dispersion of one or various\n groups of samples (e.g. 
types of tumors), a rank test to decide\n whether two groups of samples come from a single distribution\n and two supervised classification methods, the DS\n and TAD methods.","Published":"2013-06-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcmdrPlugin.DoE","Version":"0.12-3","Title":"R Commander Plugin for (industrial) Design of Experiments","Description":"The package provides a platform-independent GUI for design of experiments.\n It is implemented as a plugin to the R-Commander, which is a more general \n graphical user interface for statistics in R based on tcl/tk. \n DoE functionality can be accessed through the menu Design that is added to the \n R-Commander menus.","Published":"2014-08-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcmdrPlugin.doex","Version":"0.2.0","Title":"Rcmdr plugin for Stat 4309 course","Description":"This package provides an Rcmdr \"plug-in\" based on the\n Design of experiments class Stat 4309","Published":"2011-08-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcmdrPlugin.EACSPIR","Version":"0.2-2","Title":"R-Commander Plugin for the 'EACSPIR' Manual","Description":"This package provides a graphical user interface (GUI) for some of the statistical procedures covered in a course on 'Applied Statistics for the Social Sciences using the R software' (EACSPIR). 
The GUI has been developed as a plugin for the R-Commander program.","Published":"2015-03-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcmdrPlugin.EBM","Version":"1.0-10","Title":"Rcmdr Evidence Based Medicine Plug-in Package","Description":"Rcmdr plug-in GUI extension for Evidence Based Medicine medical indicators calculations (Sensitivity, specificity, absolute risk reduction, relative risk, ...).","Published":"2015-10-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcmdrPlugin.EcoVirtual","Version":"1.0","Title":"Rcmdr EcoVirtual Plugin","Description":"An Rcmdr \"plug-in\" for the EcoVirtual package, designed primarily for teaching ecological models using simulations.","Published":"2016-07-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcmdrPlugin.epack","Version":"1.2.5","Title":"Rcmdr plugin for time series","Description":"This package provides an Rcmdr \"plug-in\" based on the time\n series functions. Contributors: G. Jay Kerns, John Fox, and\n Richard Heiberger.","Published":"2012-03-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcmdrPlugin.Export","Version":"0.3-1","Title":"Export R Output to LaTeX or HTML","Description":"Export Rcmdr output to LaTeX or HTML code. The\n plug-in was originally intended to facilitate exporting Rcmdr\n output to formats other than ASCII text and to provide R\n novices with an easy-to-use, easy-to-access reference on\n exporting R objects to formats suited for printed output. 
The\n package documentation contains several pointers on creating\n reports, either by using conventional word processors or\n LaTeX/LyX.","Published":"2015-10-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcmdrPlugin.EZR","Version":"1.35","Title":"R Commander Plug-in for the EZR (Easy R) Package","Description":"EZR (Easy R) adds a variety of statistical functions, including survival analyses, ROC analyses, metaanalyses, sample size calculation, and so on, to the R commander. EZR enables point-and-click easy access to statistical functions, especially for medical statistics. EZR is platform-independent and runs on Windows, Mac OS X, and UNIX. Its complete manual is available only in Japanese (Chugai Igakusha, ISBN: 978-4-498-10901-8 or Nankodo, ISBN: 978-4-524-26158-1), but a report that introduces EZR was published in Bone Marrow Transplantation (Nature Publishing Group) as an Open Access article. This report can be used as a simple manual. It can be freely downloaded from the journal website as shown below.","Published":"2017-02-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcmdrPlugin.FactoMineR","Version":"1.6-0","Title":"Graphical User Interface for FactoMineR","Description":"Rcmdr Plugin for the 'FactoMineR' package.","Published":"2016-01-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcmdrPlugin.FuzzyClust","Version":"1.1","Title":"R Commander Plug-in for Fuzzy Clustering Methods (Fuzzy C-Means\nand Gustafson Kessel)","Description":"The R Commander Plug-in for Fuzzy Clustering Methods. This Plug-\n in provides a graphical user interface for 2 methods of fuzzy clustering (Fuzzy C-\n Means/FCM and Gustafson Kessel-Babuska). For cluster validation, this plug-\n in uses the Xie-Beni index, MPC index, and CE index. For statistical testing (testing\n for significant differences between groupings/clusterings), this plug-in uses MANOVA\n analysis with Pillai's trace statistic. 
To stabilize the results, this package provides a\n soft-voting cluster ensemble function. Visualizations of the results are provided via a plugin that must be loaded in the Rcmdr file.","Published":"2016-09-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RcmdrPlugin.GWRM","Version":"1.0.1","Title":"R Commander Plug-in for Fitting Generalized Waring Regression\nModels","Description":"Provides an Rcmdr plug-in based on the 'GWRM' package.","Published":"2016-02-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcmdrPlugin.HH","Version":"1.1-46","Title":"Rcmdr Support for the HH Package","Description":"Rcmdr menu support for many of the functions in the HH package.\n The focus is on menu items for functions we use in our introductory\n courses.","Published":"2016-06-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcmdrPlugin.IPSUR","Version":"0.2-1","Title":"An IPSUR Plugin for the R Commander","Description":"\n This package is an R Commander plugin that accompanies IPSUR, an Introduction to Probability and Statistics Using R.","Published":"2014-09-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcmdrPlugin.KMggplot2","Version":"0.2-4","Title":"R Commander Plug-in for Data Visualization with 'ggplot2'","Description":"A GUI front-end for 'ggplot2' supports Kaplan-Meier plot, histogram,\n Q-Q plot, box plot, errorbar plot, scatter plot, line chart, pie chart,\n bar chart, contour plot, and distribution plot.","Published":"2016-12-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RcmdrPlugin.lfstat","Version":"0.8.1","Title":"Rcmdr Plug-in for Low Flow Analysis","Description":"Provides an Rcmdr \"plug-in\" based on the lfstat package for low flow analysis.","Published":"2016-08-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcmdrPlugin.MA","Version":"0.0-2","Title":"Graphical User Interface for Conducting Meta-Analyses in R","Description":"Easy-to-use interface for conducting 
meta-analysis in R. This\n package is an Rcmdr-plugin, which allows the user to conduct analyses in a\n menu-driven, graphical user interface environment (e.g., CMA, SPSS). It\n uses recommended procedures as described in The Handbook of Research\n Synthesis and Meta-Analysis (Cooper, Hedges, & Valentine, 2009).","Published":"2014-09-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RcmdrPlugin.mosaic","Version":"1.0-7","Title":"Adds menu items to produce mosaic plots and assoc plots to Rcmdr","Description":"Rcmdr menu items to display mosaic and assoc plots. Allows\n the user to visually restructure the underlying structables. Developed\n after extended discussions with Rich Heiberger.","Published":"2013-01-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RcmdrPlugin.MPAStats","Version":"1.2.1","Title":"R Commander Plug-in for MPA Statistics","Description":"Extends R Commander with a unified menu of new and pre-existing \n statistical functions related to public management and policy analysis \n statistics. Functions and menus have been renamed according to the \n usage in PMGT 630 in the Master of Public Administration program at\n Brigham Young University.","Published":"2016-05-14","License":"GPL (> 2)","snapshot_date":"2017-06-23"} {"Package":"RcmdrPlugin.NMBU","Version":"1.8.7","Title":"R Commander Plug-in for University Level Applied Statistics","Description":"An R Commander \"plug-in\" extending functionality of linear models\n and providing an interface to Partial Least Squares Regression and Linear and\n Quadratic Discriminant analysis. 
Several statistical summaries are extended,\n predictions are offered for additional types of analyses, and extra plots, tests\n and mixed models are available.","Published":"2016-10-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcmdrPlugin.orloca","Version":"4.1","Title":"orloca Rcmdr Plug-in","Description":"This package provides a GUI for the orloca package; it is\n developed as an Rcmdr plug-in.","Published":"2013-01-21","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"RcmdrPlugin.PcaRobust","Version":"1.1.4","Title":"R Commander Plug-in for Robust Principal Component Analysis","Description":"The R Commander plug-in for robust principal component analysis, providing a graphical user interface for principal component analysis (PCA) with the Hubert algorithm.","Published":"2016-09-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RcmdrPlugin.plotByGroup","Version":"0.1-0","Title":"Rcmdr plots by group using lattice","Description":"Rcmdr menu support for some of the graphics by group in\n the lattice package","Published":"2013-01-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcmdrPlugin.pointG","Version":"0.6.6","Title":"Graphical POINT of view for questionnaire data Rcmdr Plug-In","Description":"This package provides an Rcmdr \"plug-in\" to analyze questionnaire data.","Published":"2014-02-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcmdrPlugin.qual","Version":"2.2.6","Title":"Rcmdr plugin for quality control course","Description":"This package provides an Rcmdr \"plug-in\" based on the\n Quality control class Stat 4300","Published":"2013-09-14","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"RcmdrPlugin.RMTCJags","Version":"1.0-2","Title":"R MTC Jags 'Rcmdr' Plugin","Description":"Mixed Treatment Comparison is a methodology to compare health strategies (drugs, treatments, devices) directly and/or indirectly. 
This package provides an 'Rcmdr' plugin to perform Mixed Treatment Comparison for binary outcome using BUGS code from Bristol University (Lu and Ades).","Published":"2016-05-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcmdrPlugin.ROC","Version":"1.0-18","Title":"Rcmdr Receiver Operator Characteristic Plug-In Package","Description":"Rcmdr GUI extension plug-in for Receiver Operator Characteristic tools from the pROC and ROCR packages. It also adds an Rcmdr GUI extension for the Hosmer and Lemeshow GOF test from the ResourceSelection package.","Published":"2015-02-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcmdrPlugin.sampling","Version":"1.1","Title":"Tools for sampling in Official Statistical Surveys","Description":"This package includes tools for calculating sample sizes and \n selecting samples using various sampling designs. This package is an extension\n of RcmdrPlugin.EHESsampling which was developed as part of the EHES pilot project.\n The EHES Pilot project has received funding from the European Commission and \n DG Sanco. The views expressed here are those of the authors and they do not represent \n the Commission's official position. ","Published":"2013-08-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcmdrPlugin.SCDA","Version":"1.1","Title":"Rcmdr Plugin for Designing and Analyzing Single-case Experiments","Description":"Provides a GUI for the SCVA, SCRT and SCMA packages. 
The package is written as an Rcmdr plugin.","Published":"2015-01-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcmdrPlugin.seeg","Version":"1.0","Title":"Rcmdr Plugin for seeg","Description":"Supports the textbook Acevedo, M.F. (2013) \"Data Analysis\n and Statistics for Geography, Environmental Science, and\n Engineering\", CRC Press.","Published":"2013-01-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RcmdrPlugin.SLC","Version":"0.2","Title":"SLC Rcmdr Plug-in","Description":"This package provides a GUI for the SLC package; it is\n written as an Rcmdr plug-in.","Published":"2013-01-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcmdrPlugin.SM","Version":"0.3.1","Title":"Rcmdr Sport Management Plug-In","Description":"This package provides an Rcmdr \"plug-in\" for studying\n sport management data.","Published":"2012-12-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcmdrPlugin.sos","Version":"0.3-0","Title":"Efficiently search the R help pages","Description":"Rcmdr interface to the 'sos' package. The plug-in renders\n the 'sos' searching functionality easily accessible via the Rcmdr\n menus. It also simplifies the task of performing multiple searches and \n subsequently obtaining the union or the intersection of the results. 
","Published":"2013-12-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcmdrPlugin.steepness","Version":"0.3-2","Title":"Steepness Rcmdr Plug-in","Description":"This package provides a GUI for the steepness package; it\n is written as an Rcmdr plug-in.","Published":"2014-10-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcmdrPlugin.survival","Version":"1.1-1","Title":"R Commander Plug-in for the 'survival' Package","Description":"An R Commander plug-in for the survival\n package, with dialogs for Cox models, parametric survival regression models,\n estimation of survival curves, and testing for differences in survival\n curves, along with data-management facilities and a variety of tests, \n diagnostics and graphs.","Published":"2016-08-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcmdrPlugin.TeachingDemos","Version":"1.1-0","Title":"Rcmdr Teaching Demos Plug-in","Description":"Provides an Rcmdr \"plug-in\" based on the TeachingDemos package, and is primarily for illustrative purposes.","Published":"2016-11-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcmdrPlugin.temis","Version":"0.7.8","Title":"Graphical Integrated Text Mining Solution","Description":"An 'R Commander' plug-in providing an integrated solution to perform\n a series of text mining tasks such as importing and cleaning a corpus, and\n analyses like terms and documents counts, vocabulary tables, terms\n co-occurrences and documents similarity measures, time series analysis,\n correspondence analysis and hierarchical clustering. 
Corpora can be imported\n from spreadsheet-like files, directories of raw text files, 'Twitter' queries,\n as well as from 'Dow Jones Factiva', 'LexisNexis', 'Europresse' and 'Alceste' files.","Published":"2016-11-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcmdrPlugin.UCA","Version":"4.1-1","Title":"UCA Rcmdr Plug-in","Description":"Some extensions to Rcmdr (R Commander): a randomness test, a variance test for one normal sample, and predictions using the active model, made by the R-UCA project and used in teaching statistics at the University of Cadiz (UCA).","Published":"2017-05-22","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"RCMIP5","Version":"1.2.0","Title":"Tools for Manipulating and Summarizing CMIP5 Data","Description":"Working with CMIP5 data can be tricky, forcing scientists to write\n custom scripts and programs. The `RCMIP5` package aims to ease this\n process, providing a standard, robust, and high-performance set of scripts\n to (i) explore what data have been downloaded, (ii) identify missing data,\n (iii) average (or apply other mathematical operations) across experimental\n ensembles, (iv) produce both temporal and spatial statistical summaries,\n and (v) produce easy-to-work-with graphical and data summaries.","Published":"2016-07-30","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"Rcolombos","Version":"2.0.2","Title":"Interface to Colombos Compendia using the Exposed REST API","Description":"Provides programmatic access to Colombos, a web based\n interface for exploring and analyzing comprehensive organism-specific\n cross-platform expression compendia of bacterial organisms.","Published":"2015-11-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"RColorBrewer","Version":"1.1-2","Title":"ColorBrewer Palettes","Description":"Provides color schemes for maps (and other graphics)\n designed by Cynthia Brewer as described at 
http://colorbrewer2.org","Published":"2014-12-07","License":"Apache License 2.0","snapshot_date":"2017-06-23"} {"Package":"rcompanion","Version":"1.5.6","Title":"Functions to Support Extension Education Program Evaluation","Description":"Functions and datasets to support \"Summary and Analysis of\n Extension Education Program Evaluation in R\" and \"An R\n Companion for the Handbook of Biological Statistics\". \n Vignettes are available at .","Published":"2017-04-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"RConics","Version":"1.0","Title":"Computations on Conics","Description":"Solve some conic related problems (intersection of conics with lines and conics, arc length of an ellipse, polar lines, etc.). ","Published":"2014-12-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rcoreoa","Version":"0.1.0","Title":"Client for the CORE API","Description":"Client for the CORE API ().\n CORE () aggregates open access research\n outputs from repositories and journals worldwide and makes them\n available to the public.","Published":"2017-06-20","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rcorpora","Version":"1.2.0","Title":"A Collection of Small Text Corpora of Interesting Data","Description":"A collection of small text corpora of interesting data.\n It contains all data sets from https://github.com/dariusk/corpora.\n Some examples:\n names of animals: birds, dinosaurs, dogs; foods: beer categories,\n pizza toppings; geography: English towns, rivers, oceans;\n humans: authors, US presidents, occupations; science: elements,\n planets; words: adjectives, verbs, proverbs, US president quotes.","Published":"2016-05-02","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"Rcplex","Version":"0.3-3","Title":"R Interface to CPLEX","Description":"R interface to CPLEX solvers for linear, quadratic, and (linear and quadratic) mixed integer programs. Support for quadratically constrained programming is available. 
See the file \"INSTALL\" for details on how to install the Rcplex package in Linux/Unix-like and Windows systems. Support for sparse matrices is provided by an S3-style class \"simple_triplet_matrix\" from package slam and by objects from the Matrix package class hierarchy.","Published":"2016-06-12","License":"LGPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"RCPmod","Version":"2.154","Title":"Regions of Common Profiles Modelling with Mixtures-of-Experts","Description":"Identifies regions of common (species) profiles (RCPs), possibly when sampling artefacts are present. Within a region the probability of sampling all species remains approximately constant. This is performed using mixtures-of-experts models. The package also contains associated methods, such as diagnostics.","Published":"2016-09-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Rcpp","Version":"0.12.11","Title":"Seamless R and C++ Integration","Description":"The 'Rcpp' package provides R functions as well as C++ classes which\n offer a seamless integration of R and C++. Many R data types and objects can be\n mapped back and forth to C++ equivalents which facilitates both writing of new\n code as well as easier integration of third-party libraries. Documentation \n about 'Rcpp' is provided by several vignettes included in this package, via the \n 'Rcpp Gallery' site at , the paper by Eddelbuettel and \n Francois (2011, JSS), and the book by Eddelbuettel (2013, Springer); see \n 'citation(\"Rcpp\")' for details on these last two.","Published":"2017-05-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Rcpp11","Version":"3.1.2.0","Title":"R and C++11","Description":"Rcpp11 includes a header only C++11 library that facilitates \n integration between R and modern C++. 
","Published":"2014-11-10","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RcppAnnoy","Version":"0.0.8","Title":"'Rcpp' Bindings for 'Annoy', a Library for Approximate Nearest\nNeighbors","Description":"'Annoy' is a small C++ library for Approximate Nearest Neighbors \n written for efficient memory usage as well as the ability to load from / save to\n disk. This package provides an R interface by relying on the 'Rcpp' package,\n exposing the same interface as the original Python wrapper to 'Annoy'. See\n for more on 'Annoy'. 'Annoy' is released\n under Version 2.0 of the Apache License. Also included is a small Windows\n port of 'mmap' which is released under the MIT license.","Published":"2016-10-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcppAPT","Version":"0.0.3","Title":"'Rcpp' Interface to the APT Package Manager","Description":"The 'APT Package Management System' provides Debian and\n Debian-derived Linux systems with a powerful system to resolve package\n dependencies. This package offers access directly from R.","Published":"2016-12-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcppArmadillo","Version":"0.7.900.2.0","Title":"'Rcpp' Integration for the 'Armadillo' Templated Linear Algebra\nLibrary","Description":"'Armadillo' is a templated C++ linear algebra library (by Conrad\n Sanderson) that aims towards a good balance between speed and ease of use. Integer,\n floating point and complex numbers are supported, as well as a subset of\n trigonometric and statistics functions. Various matrix decompositions are\n provided through optional integration with LAPACK and ATLAS libraries.\n The 'RcppArmadillo' package includes the header files from the templated\n 'Armadillo' library. Thus users do not need to install 'Armadillo' itself in\n order to use 'RcppArmadillo'. 
From release 7.800.0 on, 'Armadillo' is\n licensed under Apache License 2; previous releases were licensed under\n MPL 2.0 from version 3.800.0 onwards and LGPL-3 prior to that;\n 'RcppArmadillo' (the 'Rcpp' bindings/bridge to Armadillo) is licensed under\n the GNU GPL version 2 or later, as is the rest of 'Rcpp'. Note that\n Armadillo requires a fairly recent compiler; for the g++ family at least\n version 4.6.* is required. ","Published":"2017-06-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcppBDT","Version":"0.2.3","Title":"Rcpp bindings for the Boost Date_Time library","Description":"This package provides R with access to Boost Date_Time\n functionality by using Rcpp modules. \n\n Functionality from Boost Date_Time for dates, durations (both for days \n and datetimes), timezones, and posix time (\"ptime\") is provided. The posix\n time implementation can support high-resolution of up to nano-second\n precision by using 96 bits (instead of R's 64) to represent a ptime object.","Published":"2014-04-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcppBlaze","Version":"0.1.1","Title":"'Rcpp' Integration for the 'Blaze' High-Performance C++ Math\nLibrary","Description":"'Blaze' is an open-source, high-performance C++ math library\n for dense and sparse arithmetic. With its state-of-the-art Smart Expression\n Template implementation 'Blaze' combines the elegance and ease of use of a\n domain-specific language with 'HPC'-grade performance, making it one of the most\n intuitive and fastest C++ math libraries available. 
The 'Blaze' library offers:\n - high performance through the integration of 'BLAS' libraries and manually\n tuned 'HPC' math kernels - vectorization by 'SSE', 'SSE2', 'SSE3', 'SSSE3', 'SSE4', \n 'AVX', 'AVX2', 'AVX-512', 'FMA', and 'SVML' - parallel execution by 'OpenMP', C++11 \n threads and 'Boost' threads ('Boost' threads are disabled in 'RcppBlaze') - the \n intuitive and easy to use API of a domain specific language - unified arithmetic \n with dense and sparse vectors and matrices - thoroughly tested matrix and vector \n arithmetic - completely portable, high quality C++ source code. The 'RcppBlaze' \n package includes the header files from the 'Blaze' library, disabling some \n functionality related to linking to the thread and system libraries, which makes \n 'RcppBlaze' a header-only library. Therefore, users do not need to install \n 'Blaze' and the dependency 'Boost'. 'Blaze' is licensed under the New (Revised) \n BSD license, while 'RcppBlaze' (the 'Rcpp' bindings/bridge to 'Blaze') is licensed \n under the GNU GPL version 2 or later, as is the rest of 'Rcpp'. Note that since \n 'Blaze' has committed to C++14, which is not used by most R users,\n from version 3.0, we will use version 2.6 of 'Blaze', which is C++98 compatible, \n to support the most compilers and systems.","Published":"2017-02-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcppCCTZ","Version":"0.2.3","Title":"'Rcpp' Bindings for the 'CCTZ' Library","Description":"'Rcpp' Access to the 'CCTZ' timezone library is provided. 'CCTZ' is\n a C++ library for translating between absolute and civil times using the rules\n of a time zone. The 'CCTZ' source code, released under the Apache 2.0 License,\n is included in this package. 
See for more\n details.","Published":"2017-06-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcppClassic","Version":"0.9.6","Title":"Deprecated 'classic' Rcpp API","Description":"The RcppClassic package provides a deprecated C++ library which\n facilitates the integration of R and C++. \n\n New projects should use the new Rcpp API in the Rcpp package.","Published":"2015-01-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcppClassicExamples","Version":"0.1.1","Title":"Examples using RcppClassic to interface R and C++","Description":"The Rcpp package contains a C++ library that facilitates\n the integration of R and C++ in various ways via a rich API.\n This API was preceded by an earlier version which has been\n deprecated since 2010 (but is still supported to provide\n backwards compatibility in the package RcppClassic). This\n package RcppClassicExamples provides usage examples for the\n older, deprecated API. There is also a corresponding\n RcppExamples package with examples for the newer, current API,\n which we strongly recommend as the basis for all new\n development.","Published":"2012-12-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcppCNPy","Version":"0.2.6","Title":"Read-Write Support for 'NumPy' Files via 'Rcpp'","Description":"The 'cnpy' library written by Carl Rogers provides read and write\n facilities for files created with (or for) the 'NumPy' extension for 'Python'.\n Vectors and matrices of numeric types can be read or written to and from\n files as well as compressed files. 
Support for integer files is available if\n the package has been built with -std=c++11 which is the default starting\n with release 0.2.3 following the release of R 3.1.0, and available on all\n platforms following the release of R 3.3.0 with the updated 'Rtools'.","Published":"2016-09-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcppDE","Version":"0.1.5","Title":"Global Optimization by Differential Evolution in C++","Description":"An efficient C++ based implementation of the 'DEoptim'\n function which performs global optimization by differential evolution. \n Its creation was motivated by trying to see if the old adage \"easier,\n shorter, faster: pick any two\" could in fact be extended to achieving all\n three goals while moving the code from plain old C to modern C++. The\n initial version did in fact do so, but a good part of the gain was due to \n an implicit code review which eliminated a few inefficiencies which have\n since been fixed in 'DEoptim'.","Published":"2016-01-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcppDL","Version":"0.0.5","Title":"Deep Learning Methods via Rcpp","Description":"This package is based on the C++ code from Yusuke Sugomori,\n which implements basic machine learning methods with \n many layers (deep learning), including dA (Denoising Autoencoder), \n SdA (Stacked Denoising Autoencoder), RBM (Restricted Boltzmann machine) and \n DBN (Deep Belief Nets).","Published":"2014-12-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"RcppEigen","Version":"0.3.3.3.0","Title":"'Rcpp' Integration for the 'Eigen' Templated Linear Algebra\nLibrary","Description":"R and 'Eigen' integration using 'Rcpp'.\n 'Eigen' is a C++ template library for linear algebra: matrices, vectors,\n numerical solvers and related algorithms. 
It supports dense and sparse\n matrices on integer, floating point and complex numbers, decompositions of\n such matrices, and solutions of linear systems. Its performance on many\n algorithms is comparable with some of the best implementations based on\n 'Lapack' and level-3 'BLAS'. The 'RcppEigen' package includes the header\n files from the 'Eigen' C++ template library (currently version 3.3.3). Thus\n users do not need to install 'Eigen' itself in order to use 'RcppEigen'.\n Since version 3.1.1, 'Eigen' is licensed under the Mozilla Public License\n (version 2); earlier versions were licensed under the GNU LGPL version 3 or\n later. 'RcppEigen' (the 'Rcpp' bindings/bridge to 'Eigen') is licensed under\n the GNU GPL version 2 or later, as is the rest of 'Rcpp'.","Published":"2017-05-01","License":"GPL (>= 2) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RcppExamples","Version":"0.1.8","Title":"Examples using 'Rcpp' to Interface R and C++","Description":"Examples for Seamless R and C++ integration.\n The 'Rcpp' package contains a C++ library that facilitates the integration of\n R and C++ in various ways. This package provides some usage examples.\n\n Note that the documentation in this package currently does not cover all the\n features in the package. It is not even close. On the other hand, the site\n is regrouping a large number of examples for 'Rcpp'.","Published":"2016-11-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcppFaddeeva","Version":"0.1.0","Title":"'Rcpp' Bindings for the 'Faddeeva' Package","Description":"Access to a family of Gauss error functions for arbitrary complex arguments is provided via the 'Faddeeva' package by Steven G. 
Johnson (see for more information).","Published":"2015-06-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcppGetconf","Version":"0.0.2","Title":"'Rcpp' Interface for Querying System Configuration Variables","Description":"The 'getconf' command-line tool provided by 'libc' allows\n querying of a large number of system variables. This package provides\n similar functionality.","Published":"2016-08-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcppGSL","Version":"0.3.2","Title":"'Rcpp' Integration for 'GNU GSL' Vectors and Matrices","Description":"'Rcpp' integration for 'GNU GSL' vectors and matrices\n The 'GNU Scientific Library' (or 'GSL') is a collection of numerical routines for\n scientific computing. It is particularly useful for C and C++ programs as it\n provides a standard C interface to a wide range of mathematical routines. There\n are over 1000 functions in total with an extensive test suite. The 'RcppGSL'\n package provides an easy-to-use interface between 'GSL' data structures and\n R using concepts from 'Rcpp' which is itself a package that eases the\n interfaces between R and C++. This package also serves as a prime example of\n how to build a package that uses 'Rcpp' to connect to another third-party\n library. The 'autoconf' script, 'inline' plugin and example package can all\n be used as a stanza to write a similar package against another library.","Published":"2017-03-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcppHMM","Version":"1.0.1","Title":"Rcpp Hidden Markov Model","Description":"Collection of functions to evaluate sequences, decode hidden states and estimate parameters from a single or multiple sequences of a discrete time Hidden Markov Model. The observed values can be modeled by a multinomial distribution for categorical emissions, a mixture of Gaussians for continuous data and also a mixture of Poissons for discrete values. 
It includes functions for random initialization, simulation, backward or forward sequence evaluation, Viterbi or forward-backward decoding and parameter estimation using an Expectation-Maximization approach.","Published":"2017-02-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcppHoney","Version":"0.1.6","Title":"Iterator Based Expression Template Expansion of Standard\nOperators","Description":"Creates an easy way to use expression templates with R\n semantics on any iterator based structure.","Published":"2016-12-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcppMLPACK","Version":"1.0.10-6","Title":"'Rcpp' Integration for the 'MLPACK' Library","Description":"'MLPACK' is an intuitive, fast, scalable C++ machine learning\n library, meant to be a machine learning analog to 'LAPACK'. It\n aims to implement a wide array of machine learning methods\n and function as a Swiss army knife for machine learning\n researchers: 'MLPACK' is available from ;\n sources are included in the package.","Published":"2016-12-30","License":"LGPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcppMsgPack","Version":"0.1.1","Title":"'MsgPack' C++ Header Files","Description":"'MessagePack' is an efficient binary serialization format.\n It lets you exchange data among multiple languages like 'JSON'. But it is\n faster and smaller. Small integers are encoded into a single byte, and\n typical short strings require only one extra byte in addition to the strings\n themselves. This package provides headers from the 'msgpack-c'\n implementation for C and C++(11) for use by R, particularly 'Rcpp'. The\n included 'msgpack-c' headers are licensed under the Boost Software License\n (Version 1.0); the code added by this package as well the R integration are\n licensed under the GPL (>= 2). See the files 'COPYRIGHTS' and 'AUTHORS' for\n a full list of copyright holders and contributors to 'msgpack-c'. 
","Published":"2017-06-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcppNumerical","Version":"0.3-1","Title":"'Rcpp' Integration for Numerical Computing Libraries","Description":"A collection of open source libraries for numerical computing\n (numerical integration, optimization, etc.) and their integration with\n 'Rcpp'.","Published":"2016-12-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcppOctave","Version":"0.18.1","Title":"Seamless Interface to Octave -- And Matlab","Description":"Direct interface to Octave. The primary goal is to facilitate the\n port of Matlab/Octave scripts to R. The package enables calling any Octave\n function from R, as well as browsing their documentation, passing\n variables between R and Octave, and using R core RNGs in Octave, which ensures\n that stochastic computations are also reproducible.","Published":"2015-10-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcppParallel","Version":"4.3.20","Title":"Parallel Programming Tools for 'Rcpp'","Description":"High level functions for parallel programming with 'Rcpp'.\n For example, the 'parallelFor()' function can be used to convert the work of\n a standard serial \"for\" loop into a parallel one and the 'parallelReduce()'\n function can be used for accumulating aggregate or other values.","Published":"2016-08-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RcppProgress","Version":"0.3","Title":"An Interruptible Progress Bar with OpenMP Support for C++ in R\nPackages","Description":"Allows displaying a progress bar in the R\n console for long-running computations taking place in C++ code,\n and supports interrupting those computations even in multithreaded\n code, typically using OpenMP.","Published":"2017-01-05","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"RcppQuantuccia","Version":"0.0.1","Title":"R Bindings to the 'Quantuccia' Header-Only Essentials 
of\n'QuantLib'","Description":"'QuantLib' bindings are provided for R using 'Rcpp' and the\n header-only 'Quantuccia' variant (put together by Peter Caspers) offering\n an essential subset of 'QuantLib'. See the included file 'AUTHORS' for a full\n list of contributors to both 'QuantLib' and 'Quantuccia'.","Published":"2017-04-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcppRedis","Version":"0.1.7","Title":"'Rcpp' Bindings for 'Redis' using the 'hiredis' Library","Description":"Connection to the 'Redis' key/value store using the\n C-language client library 'hiredis'. 'MsgPack' encoding is optional\n if the 'RcppMsgPack' package is detected. You can install it from\n the 'ghrr' drat repository listed below.","Published":"2016-04-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcppRoll","Version":"0.2.2","Title":"Efficient Rolling / Windowed Operations","Description":"Provides fast and efficient routines for\n common rolling / windowed operations. Routines for the\n efficient computation of windowed mean, median,\n sum, product, minimum, maximum, standard deviation\n and variance are provided.","Published":"2015-04-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcppShark","Version":"3.1.1","Title":"R Interface to the Shark Machine Learning Library","Description":"An R interface to the C++/Boost Shark machine learning library.","Published":"2017-03-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcppSMC","Version":"0.1.5","Title":"Rcpp Bindings for Sequential Monte Carlo","Description":"R access to the Sequential Monte Carlo Template Classes\n by Johansen (Journal of Statistical Software, 2009, v30, i6) is provided.\n At present, two additional examples have been added, and the first \n example from the JSS paper has been extended. 
Further integration \n and extensions are planned.","Published":"2017-03-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcppStreams","Version":"0.1.1","Title":"'Rcpp' Integration of the 'Streamulus' 'DSEL' for Stream\nProcessing","Description":"The 'Streamulus' (template, header-only) library by\n Irit Katriel (at )\n provides a very powerful yet convenient framework for stream\n processing. This package connects 'Streamulus' to R by providing \n both the header files and all examples.","Published":"2016-08-05","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"RcppTOML","Version":"0.1.3","Title":"'Rcpp' Bindings to Parser for Tom's Obvious Markup Language","Description":"The configuration format defined by 'TOML' (which expands to\n \"Tom's Obvious Markup Language\") specifies an excellent format (described at\n ) suitable for both human editing as well\n as the common uses of a machine-readable format. This package uses 'Rcpp' to\n connect the 'cpptoml' parser written by Chase Geigle (in modern C++11) to R. ","Published":"2017-04-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcppXts","Version":"0.0.4","Title":"Interface the xts API via Rcpp","Description":"This package provides access to some of the C level\n functions of the xts package.\n\n In its current state, the package is mostly a proof-of-concept to\n support adding useful functions, and does not yet add any of\n its own.","Published":"2013-01-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RcppZiggurat","Version":"0.1.3","Title":"'Rcpp' Integration of Different \"Ziggurat\" Normal RNG\nImplementations","Description":"The Ziggurat generator for normally distributed random numbers,\n originally proposed by Marsaglia and Tsang (JSS, 2000), has been improved \n upon a few times starting with Leong et al (JSS, 2005). This package provides\n an aggregation in order to compare different implementations. 
The goal is to\n provide a 'faster but good enough' alternative for use with R and C++ code. \n\n The package is still in an early state. Unless you know what you are doing,\n sticking with the generators provided by R may be a good idea as these have\n been extremely diligently tested. ","Published":"2015-07-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rcqp","Version":"0.4","Title":"Interface to the Corpus Query Protocol","Description":"Implements Corpus Query Protocol functions based on the\n CWB software. Relies on CWB (GPL v2), PCRE (BSD licence), glib2\n (LGPL).","Published":"2016-06-12","License":"GPL-2 | file LICENCE","snapshot_date":"2017-06-23"} {"Package":"Rcrawler","Version":"0.1.1","Title":"Web Crawler and Scraper","Description":"Performs parallel web crawling and web scraping. It is designed to crawl, parse and store web pages to produce data that can be directly used for analysis applications. For details see Khalil and Fakir (2017) .","Published":"2017-06-17","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RCriteo","Version":"1.0.2","Title":"Loading Criteo Data into R","Description":"Aims at loading Criteo online advertising campaign data into R.\n Criteo is an online advertising service that enables\n advertisers to display commercial ads to web users. The package provides\n an authentication process for R with the Criteo API . Moreover, the package features an\n interface to query campaign data from the Criteo API. The data can be downloaded\n and will be transformed into an R data frame.","Published":"2016-07-07","License":"GPL (>= 2) | MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"Rcriticor","Version":"1.1","Title":"Critical Periods","Description":"Pierre's correlogram. Research of critical periods in the past. Integrates a time series in a given window. 
","Published":"2017-06-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rcrossref","Version":"0.7.0","Title":"Client for Various 'CrossRef' 'APIs'","Description":"Client for various 'CrossRef' 'APIs', including 'metadata' search\n with their old and newer search 'APIs', get 'citations' in various formats\n (including 'bibtex', 'citeproc-json', 'rdf-xml', etc.), convert 'DOIs'\n to 'PMIDs', and 'vice versa', get citations for 'DOIs', and get links to\n full text of articles when available.","Published":"2017-04-28","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rcrypt","Version":"0.1.1","Title":"Symmetric File Encryption Using GPG","Description":"Provides easy symmetric file encryption using GPG with\n cryptographically strong defaults. Only symmetric encryption is \n supported. GPG is pre-installed with most Linux distributions. \n Windows users will need to install 'Gpg4win' (http://www.gpg4win.org/). \n OS X users will need to install 'GPGTools' (https://gpgtools.org/).","Published":"2015-09-20","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"Rcsdp","Version":"0.1.55","Title":"R Interface to the CSDP Semidefinite Programming Library","Description":"R interface to the CSDP semidefinite programming library. Installs version 6.1.1 of CSDP from the COIN-OR website if required. An existing installation of CSDP may be used by passing the proper configure arguments to the installation command. See the INSTALL file for further details.","Published":"2016-04-25","License":"CPL-1.0","snapshot_date":"2017-06-23"} {"Package":"rcss","Version":"1.2","Title":"Convex Switching Systems","Description":"The numerical treatment of optimal switching problems in a finite time setting when the state evolves as a controlled Markov chain consisting of an uncontrolled continuous component following linear dynamics and a controlled Markov chain taking values in a finite set. 
The reward functions are assumed to be convex and Lipschitz continuous in the continuous state. The action set is finite.","Published":"2017-01-30","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"Rcssplot","Version":"0.2.0.0","Title":"Styling of Graphics using Cascading Style Sheets","Description":"Provides a means to style plots through cascading style sheets.\n This separates the aesthetics from the data crunching in plots and charts.","Published":"2017-03-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rCUR","Version":"1.3","Title":"CUR decomposition package","Description":"Functions and objects for CUR matrix decomposition.","Published":"2012-07-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rcure","Version":"0.1.0","Title":"Robust Cure Models for Survival Analysis","Description":"Implements robust cure models for survival analysis by incorporating\n a weakly informative prior in the logistic part of cure models. Estimates\n prognostic accuracy, i.e. AUC, k-index and c-index, with bootstrap confidence\n intervals for cure models.","Published":"2017-01-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"RCurl","Version":"1.95-4.8","Title":"General Network (HTTP/FTP/...) Client Interface for R","Description":"A wrapper for 'libcurl' \n\tProvides functions to allow one to compose general HTTP requests\n and provides convenient functions to fetch URIs, get & post\n forms, etc. and process the results returned by the Web server.\n This provides a great deal of control over the HTTP/FTP/...\n connection and the form of the request while providing a\n higher-level interface than is available just using R socket\n connections. 
Additionally, the underlying implementation is\n robust and extensive, supporting FTP/FTPS/TFTP (uploads and\n downloads), SSL/HTTPS, telnet, dict, ldap, and also supports\n cookies, redirects, authentication, etc.","Published":"2016-03-01","License":"BSD","snapshot_date":"2017-06-23"} {"Package":"rcv","Version":"0.2.0","Title":"Ranked Choice Voting","Description":"A collection of ranked choice voting data and functions to \n manipulate, run elections with, and visualize this data and others. \n It can bring in raw data, transform it into a ballot you can read, \n and return election results for an RCV contest.","Published":"2017-06-14","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"Rd2md","Version":"0.0.2","Title":"Markdown Reference Manuals","Description":"The native R functionalities only allow PDF exports of reference manuals. This shall be extended by converting the package documentation files into markdown files and combining them into a markdown version of the package reference manual.","Published":"2017-05-22","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"Rd2roxygen","Version":"1.6.2","Title":"Convert Rd to 'Roxygen' Documentation","Description":"Functions to convert Rd to 'roxygen' documentation. It can parse an\n Rd file to a list, create the 'roxygen' documentation and update the original\n R script (e.g. the one containing the definition of the function)\n accordingly. This package also provides utilities that can help developers\n build packages using 'roxygen' more easily. 
The 'formatR' package can be used\n to reformat the R code in the examples sections so that the code will be\n more readable.","Published":"2017-03-30","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"rda","Version":"1.0.2-2","Title":"Shrunken Centroids Regularized Discriminant Analysis","Description":"Shrunken Centroids Regularized Discriminant Analysis for\n classification purposes in high-dimensional data.","Published":"2012-07-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RDataCanvas","Version":"0.1","Title":"Basic Runtime Support for Datacanvas.io","Description":"Provides basic functionality for writing a module\n for http://datacanvas.io, a big data\n analytics platform that helps data scientists build, manage\n and share data pipelines.","Published":"2014-12-09","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rdatacite","Version":"0.1.0","Title":"'DataCite' Client for 'OAI-PMH' Methods and their Search 'API'","Description":"Client for the web service methods provided\n by 'DataCite', including functions to interface with\n their 'OAI-PMH' 'metadata' service, and a 'RESTful' search 'API'. The 'API'\n is backed by 'SOLR', allowing expressive queries, including faceting,\n statistics on variables, and 'more-like-this' queries.","Published":"2016-02-12","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rdatamarket","Version":"0.6.5","Title":"Data access API for DataMarket.com","Description":"Fetches data from DataMarket.com, either as\n timeseries in zoo form (dmseries) or as long-form data\n frames (dmlist). 
Metadata including dimension structure\n is fetched with dminfo, or just the dimensions with\n dmdims.","Published":"2014-11-24","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rdataretriever","Version":"1.0.0","Title":"R Interface to the Data Retriever","Description":"Provides an R interface to the Data Retriever\n via the Data Retriever's\n command line interface. The Data Retriever automates the\n tasks of finding, downloading, and cleaning public datasets,\n and then stores them in a local database.","Published":"2017-03-11","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rdd","Version":"0.57","Title":"Regression Discontinuity Estimation","Description":"Provides the tools to undertake estimation in\n Regression Discontinuity Designs. Both sharp and fuzzy designs are\n supported. Estimation is accomplished using local linear regression.\n A provided function will utilize Imbens-Kalyanaraman optimal\n bandwidth calculation. A function is also included to test the\n assumption of no-sorting effects.","Published":"2016-03-14","License":"Apache License (== 2.0)","snapshot_date":"2017-06-23"} {"Package":"rddensity","Version":"0.2","Title":"Manipulation Testing Based on Density Discontinuity","Description":"Density discontinuity test (a.k.a. manipulation test) is commonly employed in regression discontinuity designs and other treatment effect settings to detect whether there is evidence suggesting perfect self-selection (manipulation) around a cutoff where a treatment/policy assignment changes. 
This package provides tools for conducting the aforementioned statistical test: rddensity() to construct local polynomial based density discontinuity test given a prespecified cutoff, rdbwdensity() to perform bandwidth selection, and rdplotdensity() to construct density plot near the cutoff.","Published":"2017-06-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rddtools","Version":"0.4.0","Title":"Toolbox for Regression Discontinuity Design ('RDD')","Description":"Set of functions for Regression Discontinuity Design ('RDD'), for\n data visualisation, estimation and testing.","Published":"2015-07-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rDEA","Version":"1.2-5","Title":"Robust Data Envelopment Analysis (DEA) for R","Description":"Data Envelopment Analysis for R, estimating robust DEA scores without and with environmental variables and doing returns-to-scale tests.","Published":"2016-11-25","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"rdefra","Version":"0.3.4","Title":"Interact with the UK AIR Pollution Database from DEFRA","Description":"Get data from DEFRA's UK-AIR website . 
It scrapes the HTML content.","Published":"2017-03-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rdetools","Version":"1.0","Title":"Relevant Dimension Estimation (RDE) in Feature Spaces","Description":"The package provides functions for estimating the relevant\n dimension of a data set in feature spaces, applications to\n model selection, graphical illustrations and prediction.","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rdian","Version":"0.1.1","Title":"Client Library for The Guardian","Description":"A client library for 'The Guardian' (https://www.guardian.com/)\n and their API, this package allows users to search for Guardian articles and\n retrieve both the content and metadata.","Published":"2016-02-05","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"Rdice","Version":"1.0.0","Title":"A Collection of Functions to Experiment with Dice Rolls","Description":"A collection of functions to simulate\n dice rolls and the like. In particular, experiments and exercises can\n be performed looking at combinations and permutations of values in dice\n rolls and coin flips, together with the corresponding frequencies of\n occurrences. When applying each function, the user has to input the\n number of times (rolls, flips) to toss the dice. 
Needless to say, the more\n the tosses, the more the frequencies approximate the actual probabilities.\n Moreover, the package provides functions to generate non-transitive sets\n of dice (like Efron's) and to check whether a given set of dice is non-transitive\n with given probability.","Published":"2016-09-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RDIDQ","Version":"1.0","Title":"Performs Quality Checks on Data","Description":"The package has many functions that help to perform\n various quality checks on the data. It also provides many\n functions that help in performing extrapolative data analysis.","Published":"2012-12-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RDieHarder","Version":"0.1.3","Title":"R interface to the dieharder RNG test suite","Description":"The RDieHarder package provides an R interface to \n the dieharder suite of random number generators and tests that \n was developed by Robert G. Brown and David Bauer, extending \n earlier work by George Marsaglia and others.","Published":"2014-02-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rdist","Version":"0.0.2","Title":"Calculate Pairwise Distances","Description":"A common framework for calculating distance matrices.","Published":"2017-05-12","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"Rdistance","Version":"1.3.2","Title":"Distance Sampling Analyses","Description":"Analysis of distance sampling data collected on line transect surveys. Estimates distance-based detection functions and abundances.","Published":"2015-07-22","License":"GNU General Public License","snapshot_date":"2017-06-23"} {"Package":"rdlocrand","Version":"0.2","Title":"Local Randomization Methods for RD Designs","Description":"The regression discontinuity (RD) design is a popular quasi-experimental design for causal inference and policy evaluation. 
Under the local randomization approach, RD designs can be interpreted as randomized experiments inside a window around the cutoff. This package provides tools to perform randomization inference for RD designs under local randomization: rdrandinf() to perform hypothesis testing using randomization inference, rdwinselect() to select a window around the cutoff in which randomization is likely to hold, rdsensitivity() to assess the sensitivity of the results to different window lengths and null hypotheses and rdrbounds() to construct Rosenbaum bounds for sensitivity to unobserved confounders.","Published":"2017-04-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RDML","Version":"0.9-6","Title":"Importing Real-Time Thermo Cycler (qPCR) Data from RDML Format\nFiles","Description":"Imports real-time thermo cycler (qPCR) data from Real-time PCR\n Data Markup Language (RDML) and transforms to the appropriate formats of\n the 'qpcR' and 'chipPCR' packages. Contains a dendrogram visualization \n for the structure of RDML object and GUI for RDML editing.","Published":"2017-04-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rDNA","Version":"1.31","Title":"R Bindings for the Discourse Network Analyzer","Description":"Control the Java software Discourse Network Analyzer (DNA) from \n within R. Network matrices, statement frequency time series and attributes \n of actors can be transferred directly into R.","Published":"2016-07-16","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rDNAse","Version":"1.1-1","Title":"Generating Various Numerical Representation Schemes of DNA\nSequences","Description":"Comprehensive toolkit for generating various numerical representation schemes of DNA sequence. 
The descriptors and similarity\n scores included are extensively used in bioinformatics and chemogenomics.","Published":"2016-07-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rdnb","Version":"0.1-1","Title":"R Interface to the 'Deutsche Nationalbibliothek (German National\nLibrary) API'","Description":"A wrapper for the 'Deutsche Nationalbibliothek (German National\n Library) API', available at . The German National Library is\n the German central archival library, collecting, archiving, bibliographically\n classifying all German and German-language publications, foreign\n publications about Germany, translations of German works, and the works of\n German-speaking emigrants published abroad between 1933 and 1945. A personal\n access token is required for usage.","Published":"2017-02-14","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RDocumentation","Version":"0.8.0","Title":"Integrate R with 'RDocumentation.org'","Description":"Wraps around the default help functionality in R. Instead of plain documentation files, documentation will now show up as it does on 'RDocumentation.org', a platform that shows R documentation from CRAN, GitHub and Bioconductor, together with informative stats to assess the package quality and possibilities to discuss packages.","Published":"2016-11-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rdomains","Version":"0.1.5","Title":"Get the Category of Content Hosted by a Domain","Description":"Get the category of content hosted by a domain. 
Use Shallalist, \n Virustotal (which provides access to lots of services), \n McAfee, Alexa, \n DMOZ, or validated machine learning classifiers based on \n Shallalist data to learn about the kind of content hosted by a domain.","Published":"2016-11-03","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RDota2","Version":"0.1.6","Title":"An R Steam API Client for Valve's Dota2","Description":"An R API Client for Valve's Dota2. RDota2 can be easily used \n to connect to the Steam API and retrieve data for Valve's popular video \n game Dota2. You can find out more about Dota2 at \n .","Published":"2016-10-30","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rdoxygen","Version":"1.0.0","Title":"Create Doxygen Documentation for Source Code","Description":"Create doxygen documentation for source code in R packages. \n Includes an RStudio Addin that allows triggering the doxygenize process.","Published":"2017-05-25","License":"GPL-2 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"Rdpack","Version":"0.4-20","Title":"Update and Manipulate Rd Documentation Objects","Description":"Functions for manipulation of Rd objects, including\n function reprompt() for updating existing Rd\n documentation for functions, methods and classes and\n function rebib() for import of references from 'bibtex'\n files. 
There is also a macro for importing 'bibtex'\n references which can be used in Rd files and 'roxygen'\n comments without importing this package.","Published":"2016-07-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rdpla","Version":"0.1.0","Title":"Client for the Digital Public Library of America ('DPLA')","Description":"Interact with the Digital Public Library of America\n ('DPLA') 'REST' 'API'\n from R, including search\n and more.","Published":"2016-10-14","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rdrobust","Version":"0.98","Title":"Robust Data-Driven Statistical Inference in\nRegression-Discontinuity Designs","Description":"Regression-discontinuity (RD) designs are quasi-experimental research designs popular in social, behavioral and natural sciences. The RD design is usually employed to study the (local) causal effect of a treatment, intervention or policy. This package provides tools for data-driven graphical and analytical statistical inference in RD designs: rdrobust() to construct local-polynomial point estimators and robust confidence intervals for average treatment effects at the cutoff in Sharp, Fuzzy and Kink RD settings, rdbwselect() to perform bandwidth selection for the different procedures implemented, and rdplot() to conduct exploratory data analysis (RD plots).","Published":"2017-06-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Rdrools","Version":"1.0.1","Title":"A Rules Engine for R Based on 'Drools'","Description":"An R interface for using the popular Java-based Drools, which is a Business Rule Management System (See for more information). This package allows you to run a set of rules written in DRL format on the data using the Drools engine. Thanks to Mu Sigma for their continued support throughout the development of the package. 
","Published":"2017-06-19","License":"Apache License 2.0","snapshot_date":"2017-06-23"} {"Package":"Rdroolsjars","Version":"1.0.1","Title":"Rdrools JARs","Description":"External jars required for package 'Rdrools'.","Published":"2017-06-19","License":"Apache License 2.0","snapshot_date":"2017-06-23"} {"Package":"rdrop2","Version":"0.7.0","Title":"Programmatic Interface to the 'Dropbox' API","Description":"Provides full programmatic access to the Dropbox file hosting platform (dropbox.com), including support for all standard file operations.","Published":"2015-07-31","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rdryad","Version":"0.2.0","Title":"Access for Dryad Web Services","Description":"Interface to the Dryad Solr API, their OAI-PMH service, and functions to\n fetch datasets.","Published":"2015-12-23","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RDS","Version":"0.7-8","Title":"Respondent-Driven Sampling","Description":"Provides functionality for carrying out estimation\n with data collected using Respondent-Driven Sampling. This includes\n Heckathorn's RDS-I and RDS-II estimators as well as Gile's Sequential\n Sampling estimator. The package is part of the \"RDS Analyst\" suite of\n packages for the analysis of respondent-driven sampling data.","Published":"2016-12-27","License":"LGPL-2.1","snapshot_date":"2017-06-23"} {"Package":"Rdsdp","Version":"1.0.4-2","Title":"R Interface to DSDP Semidefinite Programming Library","Description":"R interface to DSDP semidefinite programming library. The DSDP software is a free open source implementation of an interior-point method for semidefinite programming. It provides primal and dual solutions, exploits low-rank structure and sparsity in the data, and has relatively low memory requirements for an interior-point method. 
","Published":"2016-04-24","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"Rdsm","Version":"2.1.1","Title":"Threads Environment for R","Description":"Provides a threads-type programming environment for R.\n The package gives the R programmer the clearer, more concise\n shared memory world view, and in some cases gives superior\n performance as well. In addition, it enables parallel processing on\n very large, out-of-core matrices. ","Published":"2014-10-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RDSTK","Version":"1.1","Title":"An R wrapper for the Data Science Toolkit API","Description":"This package provides an R interface to Pete Warden's Data\n Science Toolkit. See www.datasciencetoolkit.org for more\n information. The source code for this package can be found at\n github.com/rtelmore/RDSTK Happy hacking!","Published":"2013-05-15","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RDStreeboot","Version":"1.0","Title":"RDS Tree Bootstrap Method","Description":"A tree bootstrap method for estimating uncertainty in respondent-driven samples (RDS). Quantiles are estimated by multilevel resampling in such a way that preserves the dependencies of and accounts for the high variability of the RDS process.","Published":"2016-11-24","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"Rdtq","Version":"0.1","Title":"Density Tracking by Quadrature","Description":"Implementation of density tracking by quadrature (DTQ) algorithms for stochastic differential equations (SDEs). DTQ algorithms numerically compute the density function of the solution of an SDE with user-specified drift and diffusion functions. The calculation does not require generation of sample paths, but instead proceeds in a deterministic fashion by repeatedly applying quadrature to the Chapman-Kolmogorov equation associated with a discrete-time approximation of the SDE. The DTQ algorithm is provably convergent. 
For several practical problems of interest, we have found the DTQ algorithm to be fast, accurate, and easy to use.","Published":"2016-11-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rDVR","Version":"0.1.1","Title":"The rDVR package allows you to start, stop, and save a video\nserver from within R","Description":"The rDVR package allows you to start, stop, and save a video server\n from within R. It does this by way of a REST interface to a Java service.\n The jar binary relies on the screen recorder included in the Monte Media\n Library (CC BY 3.0 licence) developed by Werner Randelshofer\n (http://www.randelshofer.ch/monte/). The REST interface was modified\n from https://github.com/tuenti/VideoRecorderService which has an Apache\n licence.","Published":"2014-04-03","License":"AGPL-3","snapshot_date":"2017-06-23"} {"Package":"rdwd","Version":"0.8.0","Title":"Select and Download Climate Data from 'DWD' (German Weather\nService)","Description":"Handle climate data from the 'DWD' ('Deutscher Wetterdienst', see \n for more information).\n Choose files with 'selectDWD()', download and process data sets with 'dataDWD()' and 'readDWD()'.","Published":"2017-06-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"reactR","Version":"0.1.2","Title":"React Helpers","Description":"Make it easy to use 'react' in R with helper\n dependency functions, embedded 'Babel' transpiler,\n and examples. 
Please note the separate 'react'\n BSD license \n when using 'react' in your projects.","Published":"2017-04-10","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ReacTran","Version":"1.4.2","Title":"Reactive transport modelling in 1D, 2D and 3D","Description":"Routines for developing models that describe reaction and advective-diffusive transport in one, two or three dimensions.\n Includes transport routines in porous media, in estuaries, and in bodies with variable shape.","Published":"2014-12-26","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"read.dbc","Version":"1.0.5","Title":"Read Data Stored in DBC (Compressed DBF) Files","Description":"Functions for reading and decompressing the DBC (compressed DBF) files. Please note that this is the file format used by the Brazilian Ministry of Health (DATASUS) to publish healthcare datasets. It is not related to the FoxPro or CANdb DBC file formats.","Published":"2016-09-16","License":"AGPL-3","snapshot_date":"2017-06-23"} {"Package":"readability","Version":"0.1.1","Title":"Calculate Readability Scores","Description":"Calculate readability scores by grouping variables. Readability is\n an approximation of the ease with which a reader parses and comprehends a\n written text. These scores use text attributes such as syllable counts,\n number of words, and number of characters to calculate an approximate\n grade level reading ease for the text. The readability scores that are\n calculated include: Flesch Kincaid, Gunning Fog Index, Coleman Liau,\n SMOG, and Automated Readability Index.","Published":"2017-01-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"readbitmap","Version":"0.1-4","Title":"Simple Unified Interface to Read Bitmap Images (BMP,JPEG,PNG)","Description":"Identifies and reads Windows BMP, JPEG and PNG format bitmap\n images. Identification defaults to the use of the magic number embedded in\n the file rather than the file extension. 
Reading of JPEG and PNG images\n depends on the libjpeg and libpng libraries. See file INSTALL for details if\n necessary.","Published":"2014-09-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"readBrukerFlexData","Version":"1.8.5","Title":"Reads Mass Spectrometry Data in Bruker *flex Format","Description":"Reads data files acquired by Bruker Daltonics' matrix-assisted\n laser desorption/ionization-time-of-flight mass spectrometer of the *flex\n series.","Published":"2017-04-22","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"readbulk","Version":"1.1.0","Title":"Read and Combine Multiple Data Files","Description":"Combine multiple data files from a common directory.\n The data files will be read into R and bound together, creating a\n single large data.frame. A general function is provided along with\n a specific function for data that was collected using the open-source\n experiment builder 'OpenSesame' .","Published":"2016-10-25","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"reader","Version":"1.0.6","Title":"Suite of Functions to Flexibly Read Data from Files","Description":"A set of functions to simplify reading data from files. The main function, reader(), should read most common R datafile types without needing any parameters except the filename. Other functions provide simple ways of handling file paths and extensions, and automatically detecting file format and structure.","Published":"2017-01-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"readHAC","Version":"1.0","Title":"Read Acoustic HAC Format","Description":"Read Acoustic HAC format.","Published":"2017-02-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"readJDX","Version":"0.2.3","Title":"Import Data in the JCAMP-DX Format","Description":"Import data written in the JCAMP-DX format. This is an instrument-independent format used in the field of spectroscopy. Examples include IR, NMR, and Raman spectroscopy. 
See the vignette for background and supported formats. The official JCAMP-DX site is .","Published":"2016-12-22","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"readMLData","Version":"0.9-7","Title":"Reading Machine Learning Benchmark Data Sets in Different\nFormats","Description":"Functions for reading data sets in different formats\n for testing machine learning tools are provided. This allows to run\n a loop over several data sets in their original form, for example\n if they are downloaded from UCI Machine Learning Repository.\n The data are not part of the package and have to be downloaded\n separately.","Published":"2015-01-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"readMzXmlData","Version":"2.8.1","Title":"Reads Mass Spectrometry Data in mzXML Format","Description":"Functions for reading mass spectrometry data in mzXML format.","Published":"2015-09-16","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"readODS","Version":"1.6.4","Title":"Read and Write ODS Files","Description":"Import ODS (OpenDocument Spreadsheet) into R as a data frame. Also support writing data frame into ODS file.","Published":"2016-11-27","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"readOffice","Version":"0.2.2","Title":"Read Text Out of Modern Office Files","Description":"Reads in text from 'unstructured' modern Microsoft Office files (XML based files) such as Word and PowerPoint.\n This does not read in structured data (from Excel or Access) as there are many other great packages to that do so already.","Published":"2017-03-08","License":"Unlimited","snapshot_date":"2017-06-23"} {"Package":"readr","Version":"1.1.1","Title":"Read Rectangular Text Data","Description":"The goal of 'readr' is to provide a fast and friendly way to read\n rectangular data (like 'csv', 'tsv', and 'fwf'). 
It is designed to flexibly\n parse many types of data found in the wild, while still cleanly failing when\n data unexpectedly changes.","Published":"2017-05-16","License":"GPL (>= 2) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"readstata13","Version":"0.9.0","Title":"Import 'Stata' Data Files","Description":"Function to read and write the 'Stata' file format.","Published":"2017-05-05","License":"GPL-2 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"readtext","Version":"0.50","Title":"Import and Handling for Plain and Formatted Text Files","Description":"Functions for importing and handling text files and formatted text\n files with additional meta-data, such including '.csv', '.tab', '.json', '.xml',\n '.pdf', '.doc', '.docx', '.xls', '.xlsx', and others.","Published":"2017-05-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"readxl","Version":"1.0.0","Title":"Read Excel Files","Description":"Import excel files into R. Supports '.xls' via the embedded\n 'libxls' C library and '.xlsx' via\n the embedded 'RapidXML' C++ library . Works on\n Windows, Mac and Linux without external dependencies.","Published":"2017-04-18","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"RealVAMS","Version":"0.3-3","Title":"Multivariate VAM Fitting","Description":"Fits a multivariate value-added model (VAM), see Broatch and Lohr (2012) , with normally distributed test scores and a binary outcome indicator. A pseudo-likelihood approach, Wolfinger (1993) , is used for the estimation of this joint generalized linear mixed model. This material is based upon work supported by the National Science Foundation under grants DRL-1336027 and DRL-1336265. 
","Published":"2017-03-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"reams","Version":"0.1","Title":"Resampling-Based Adaptive Model Selection","Description":"Resampling methods for adaptive linear model selection.\n These can be thought of as extensions of the Akaike information\n criterion that account for searching among candidate models.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Rearrangement","Version":"2.1","Title":"Monotonize Point and Interval Functional Estimates by\nRearrangement","Description":"The rearrangement operator (Hardy,\n Littlewood, and Polya 1952) for univariate, bivariate, and\n trivariate point estimates of monotonic functions. The package\n additionally provides a function that creates simultaneous\n confidence intervals for univariate functions and applies the\n rearrangement operator to these confidence intervals.","Published":"2016-03-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"REAT","Version":"1.3.2","Title":"Regional Economic Analysis Toolbox","Description":"Collection of models and analysis methods used in regional and urban economics and (quantitative) economic geography, e.g. measures of inequality, regional disparities and convergence, regional specialization as well as accessibility and spatial interaction models. 
","Published":"2017-05-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"REBayes","Version":"0.85","Title":"Empirical Bayes Estimation and Inference in R","Description":"Kiefer-Wolfowitz maximum likelihood estimation for mixture models\n and some other density estimation and regression methods based on convex\n optimization.","Published":"2017-05-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rebird","Version":"0.4.0","Title":"R Client for the eBird Database of Bird Observations","Description":"A programmatic client for the eBird database, including functions\n for searching for bird observations by geographic location (latitude,\n longitude), eBird hotspots, location identifiers, by notable sightings, by\n region, and by taxonomic name.","Published":"2017-04-26","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rebmix","Version":"2.9.2","Title":"Finite Mixture Modeling, Clustering & Classification","Description":"R functions for random univariate and multivariate finite mixture model generation, estimation, clustering and classification. Variables can be continuous, discrete, independent or dependent and may follow normal, lognormal, Weibull, gamma, binomial, Poisson, Dirac or circular von Mises parametric families.","Published":"2017-06-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rebus","Version":"0.1-3","Title":"Build Regular Expressions in a Human Readable Way","Description":"Build regular expressions piece by piece using human readable code.\n This package is designed for interactive use. 
For package development, use\n the rebus.* dependencies.","Published":"2017-04-25","License":"Unlimited","snapshot_date":"2017-06-23"} {"Package":"rebus.base","Version":"0.0-3","Title":"Core Functionality for the 'rebus' Package","Description":"Build regular expressions piece by piece using human readable code.\n This package contains core functionality, and is primarily intended to be\n used by package developers.","Published":"2017-04-25","License":"Unlimited","snapshot_date":"2017-06-23"} {"Package":"rebus.datetimes","Version":"0.0-1","Title":"Date and Time Extensions for the 'rebus' Package","Description":"Build regular expressions piece by piece using human readable code.\n This package contains date and time functionality, and is primarily intended\n to be used by package developers.","Published":"2015-12-16","License":"Unlimited","snapshot_date":"2017-06-23"} {"Package":"rebus.numbers","Version":"0.0-1","Title":"Numeric Extensions for the 'rebus' Package","Description":"Build regular expressions piece by piece using human readable code.\n This package contains number-related functionality, and is primarily intended\n to be used by package developers.","Published":"2015-12-16","License":"Unlimited","snapshot_date":"2017-06-23"} {"Package":"rebus.unicode","Version":"0.0-2","Title":"Unicode Extensions for the 'rebus' Package","Description":"Build regular expressions piece by piece using human readable code.\n This package contains Unicode functionality, and is primarily intended to be\n used by package developers.","Published":"2017-01-03","License":"Unlimited","snapshot_date":"2017-06-23"} {"Package":"RECA","Version":"1.3","Title":"Relevant Component Analysis for Supervised Distance Metric\nLearning","Description":"Relevant Component Analysis (RCA) tries to find a linear\n transformation of the feature space such that the effect of irrelevant\n variability is reduced in the transformed space.","Published":"2016-11-12","License":"GPL (>= 
2)","snapshot_date":"2017-06-23"} {"Package":"recexcavAAR","Version":"0.3.0","Title":"3D Reconstruction of Archaeological Excavations","Description":"A toolset for 3D reconstruction and analysis of excavations. It provides methods to reconstruct natural and artificial surfaces based on field measurements. This makes it possible to spatially contextualize documented subunits and features. Intended to be part of a 3D visualization workflow.","Published":"2017-02-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rechonest","Version":"1.2","Title":"R Interface to Echo Nest API","Description":"The 'Echo nest' is the industry's leading\n music intelligence company, providing developers with a deep understanding of\n music content and music fans. This package can be used to access artists' data\n including songs, blogs, news, reviews, etc. Song data including audio summary,\n style, danceability, tempo, etc. can also be accessed.","Published":"2016-03-18","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ReCiPa","Version":"3.0","Title":"Redundancy Control in Pathways databases","Description":"Pathways in a database could have many redundancies among\n them. This package allows the user to set a maximum value for\n the proportion of these redundancies.","Published":"2012-12-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"recluster","Version":"2.8","Title":"Ordination Methods for the Analysis of Beta-Diversity Indices","Description":"Beta-diversity indices provide dissimilarity matrices with particular distributions of data requiring specific treatment. For example, the high frequency of ties and zero values in turnover indices produces hierarchical cluster dendrograms whose topology and bootstrap supports are affected by the order of rows in the original matrix. Moreover, biogeographical regionalization can be facilitated by a combination of hierarchical clustering and multi-dimensional scaling. 
The recluster package provides robust techniques to analyze patterns of similarity in species composition.","Published":"2015-03-06","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"recmap","Version":"0.5.20","Title":"Compute the Rectangular Statistical Cartogram","Description":"Provides an interface and a C++ implementation of the RecMap MP2\n construction heuristic (see 'citation(\"recmap\")' for details). This algorithm\n draws maps according to a given statistical value (e.g. election results,\n population or epidemiological data). The basic idea of the RecMap algorithm is\n that each map region (e.g. different countries) is represented by a\n rectangle. The area of each rectangle represents the statistical value given\n as input (maintaining zero cartographic error). Documentation about RecMap is\n provided by a vignette included in this package.","Published":"2017-04-10","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"recoder","Version":"0.1","Title":"A Simple and Flexible Recoder","Description":"Simple, easy to use, and flexible functionality for recoding variables. 
It allows for simple piecewise definition of transformations.","Published":"2015-07-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"recommenderlab","Version":"0.2-2","Title":"Lab for Developing and Testing Recommender Algorithms","Description":"Provides a research infrastructure to test and develop\n recommender algorithms including UBCF, IBCF, FunkSVD and association\n rule-based algorithms.","Published":"2017-04-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"recommenderlabBX","Version":"0.1-1","Title":"Book-Crossing Dataset (BX) for 'recommenderlab'","Description":"Provides the Book-Crossing Dataset for the package recommenderlab.","Published":"2015-07-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"recommenderlabJester","Version":"0.1-2","Title":"Jester Dataset for 'recommenderlab'","Description":"Provides the Jester Dataset for package recommenderlab.","Published":"2016-05-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"reconstructr","Version":"2.0.0","Title":"Session Reconstruction and Analysis","Description":"Functions to reconstruct sessions from web log or other user trace data\n and calculate various metrics around them, producing tabular\n output that is compatible with 'dplyr' or 'data.table' centered processes.","Published":"2016-09-07","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RecordLinkage","Version":"0.4-10","Title":"Record Linkage in R","Description":"Provides functions for linking and de-duplicating data sets.\n Methods based on a stochastic approach are implemented as well as \n classification algorithms from the machine learning domain. 
","Published":"2016-07-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Records","Version":"1.0","Title":"Record Values and Record Times","Description":"Functions for generating k-record values and k-record\n times","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"recosystem","Version":"0.4.1","Title":"Recommender System using Matrix Factorization","Description":"R wrapper of the 'libmf' library\n (http://www.csie.ntu.edu.tw/~cjlin/libmf/) for recommender\n system using matrix factorization. It is typically used to\n approximate an incomplete matrix using the product of two\n matrices in a latent space. Other common names for this task\n include \"collaborative filtering\", \"matrix completion\",\n \"matrix recovery\", etc. High performance multi-core parallel\n computing is supported in this package.","Published":"2017-03-21","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"red","Version":"1.1.1","Title":"IUCN Redlisting Tools","Description":"Includes algorithms to facilitate the assessment of extinction\n risk of species according to the IUCN (International Union for Conservation of\n Nature, see for more information) red list criteria.","Published":"2017-06-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"reda","Version":"0.3.1","Title":"Recurrent Event Data Analysis","Description":"Functions that fit gamma frailty model with spline or piecewise\n constant baseline rate function for recurrent event data, compute and\n plot parametric mean cumulative function (MCF) from a fitted model\n as well as nonparametric sample MCF (Nelson-Aalen estimator) are provided.\n Most functions are S4 methods that produce S4 class objects.","Published":"2016-12-16","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"REdaS","Version":"0.9.3","Title":"Companion Package to the Book 'R: Einführung durch angewandte\nStatistik'","Description":"Provides functions used 
in the 'R: Einführung durch angewandte Statistik' (second edition).","Published":"2015-11-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"redcapAPI","Version":"1.3","Title":"R Interface to REDCap","Description":"Access data stored in REDCap databases using the Application\n Programming Interface (API). REDCap (Research Electronic Data CAPture) is\n a web application for building and managing online surveys and databases\n developed at Vanderbilt University. The API allows users to access data\n and project meta data (such as the data dictionary) from the web\n programmatically. The redcapAPI package facilitates the process of\n accessing data with options to prepare an analysis-ready data set\n consistent with the definitions in a database's data dictionary.","Published":"2015-03-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"REDCapR","Version":"0.9.8","Title":"Interaction Between R and REDCap","Description":"Encapsulates functions to streamline calls from R to the REDCap\n API. REDCap (Research Electronic Data CAPture) is a web application for\n building and managing online surveys and databases developed at Vanderbilt\n University. The Application Programming Interface (API) offers an avenue\n to access and modify data programmatically, improving the capacity for\n literate and reproducible programming.","Published":"2017-05-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RedditExtractoR","Version":"2.0.2","Title":"Reddit Data Extraction Toolkit","Description":"Reddit is an online bulletin board and a social networking website\n where registered users can submit and discuss content. This package uses\n the Reddit API to extract Reddit data. Note that due to the API\n limitations, the number of comments that can be extracted is limited to 500 per\n thread. 
The package consists of 4 functions, one for extracting relevant URLs,\n one for extracting features out of given URLs, one that does both together and\n one that constructs graphs based on the structure of a thread.","Published":"2015-12-05","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"reddPrec","Version":"0.3","Title":"Reconstruction of Daily Data - Precipitation","Description":"Computes quality control for daily precipitation datasets, reconstructs the original series by estimating precipitation in missing values, creates new series in a specified pair of coordinates and creates grids.","Published":"2017-01-30","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"redist","Version":"1.3-1","Title":"Markov Chain Monte Carlo Methods for Redistricting Simulation","Description":"Enables researchers to sample redistricting plans from a pre-\n specified target distribution using a Markov Chain Monte Carlo algorithm.\n The package allows for the implementation of various constraints in the\n redistricting process such as geographic compactness and population parity\n requirements. The algorithm also can be used in combination with efficient\n simulation methods such as simulated and parallel tempering algorithms. Tools\n for analysis such as inverse probability reweighting and plotting functionality\n are included. The package implements methods described in Fifield, Higgins, Imai\n and Tarr (2016) ``A New Automated Redistricting Simulator Using Markov Chain\n Monte Carlo,'' working paper available at .","Published":"2017-03-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"redland","Version":"1.0.17-9","Title":"RDF Library Bindings in R","Description":"Provides methods to parse, query and serialize information\n stored in the Resource Description Framework (RDF). RDF is described at .\n This package supports RDF by implementing an R interface to the Redland RDF C library,\n described at . 
In brief, RDF provides a structured graph\n consisting of Statements composed of Subject, Predicate, and Object Nodes.","Published":"2016-12-15","License":"Apache License 2.0","snapshot_date":"2017-06-23"} {"Package":"rEDM","Version":"0.5.4","Title":"Applications of Empirical Dynamic Modeling from Time Series","Description":"A new implementation of EDM algorithms based on research software previously developed for internal use in the Sugihara Lab (UCSD/SIO). Contains C++ compiled objects that use time delay embedding to perform state-space reconstruction and nonlinear forecasting and an R interface to those objects using 'Rcpp'. It supports both the simplex projection method from Sugihara & May (1990) and the S-map algorithm in Sugihara (1994) . In addition, this package implements convergent cross mapping as described in Sugihara et al. (2012) and multiview embedding as described in Ye & Sugihara (2016) . ","Published":"2016-11-16","License":"file LICENSE","snapshot_date":"2017-06-23"} {"Package":"Redmonder","Version":"0.2.0","Title":"Microsoft(r)-Inspired Color Palettes","Description":"Provide color schemes for maps (and other graphics) based on the\n color palettes of several Microsoft(r) products. Forked from 'RColorBrewer' v1.1-2.","Published":"2017-01-04","License":"Apache License 2.0","snapshot_date":"2017-06-23"} {"Package":"redR","Version":"1.0.0","Title":"REgularization by Denoising (RED)","Description":"Regularization by Denoising uses a denoising engine to solve\n many image reconstruction ill-posed inverse problems. This is a R\n implementation of the algorithm developed by Romano et.al. (2016) . Currently,\n only the gradient descent optimization framework is implemented. Also,\n only the median filter is implemented as a denoiser engine. However,\n (almost) any denoiser engine can be plugged in. There are currently available\n 3 reconstruction tasks: denoise, deblur and super-resolution. 
And again,\n any other task can be easily plugged into the main function 'RED'.","Published":"2017-06-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"redux","Version":"1.0.0","Title":"R Bindings to 'hiredis'","Description":"A 'hiredis' wrapper that includes support for\n transactions, pipelining, blocking subscription, serialisation of\n all keys and values, 'Redis' error handling with R errors.\n Includes an automatically generated 'R6' interface to the full\n 'hiredis' 'API'. Generated functions are faithful to the\n 'hiredis' documentation while attempting to match R's argument\n semantics. Serialisation must be explicitly done by the user, but\n both binary and text-mode serialisation is supported.","Published":"2017-05-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"REEMtree","Version":"0.90.3","Title":"Regression Trees with Random Effects for Longitudinal (Panel)\nData","Description":"This package estimates regression trees with random\n effects as a way to use data mining techniques to describe\n longitudinal or panel data.","Published":"2011-08-08","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"ref","Version":"0.99","Title":"References for R","Description":"small package with functions for creating references,\n reading from and writing to references and a memory efficient\n refdata type that transparently encapsulates matrices and\n data.frames","Published":"2013-12-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"referenceIntervals","Version":"1.1.1","Title":"Reference Intervals","Description":"This is a collection of tools to allow the medical professional to\n calculate appropriate reference ranges (intervals) with confidence intervals around\n the limits for diagnostic purposes.","Published":"2014-09-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"RefFreeEWAS","Version":"2.1","Title":"EWAS using Reference-Free DNA Methylation Mixture Deconvolution","Description":"\n 
Reference-free method for conducting EWAS while deconvoluting DNA methylation arising as mixtures of cell types. \n The older method (Houseman et al., 2014,) is similar to surrogate variable analysis (SVA and ISVA), except that it makes additional use of a biological mixture assumption.\n The newer method (Houseman et al., 2016, ) is similar to non-negative matrix factorization, with additional constraints and additional utilities.","Published":"2017-02-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"refGenome","Version":"1.7.3","Title":"Gene and Splice Site Annotation Using Annotation Data from\nEnsembl and UCSC Genome Browsers","Description":"Contains functionality for import and managing of downloaded genome annotation Data from Ensembl genome browser (European Bioinformatics Institute) and from UCSC genome browser (University of California, Santa Cruz) and annotation routines for genomic positions and splice site positions.","Published":"2017-03-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"refimpact","Version":"0.1.0","Title":"API Wrapper for the UK REF 2014 Impact Case Studies Database","Description":"Provides wrapper functions around the UK Research\n Excellence Framework 2014 Impact Case Studies Database API\n . The database contains relevant publication and\n research metadata about each case study as well as several paragraphs of\n text from the case study submissions. Case studies in the database are\n licenced under a CC-BY 4.0 licence\n .","Published":"2016-09-26","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RefManageR","Version":"0.13.1","Title":"Straightforward 'BibTeX' and 'BibLaTeX' Bibliography Management","Description":"Provides tools for importing and working with bibliographic\n references. 
It greatly enhances the 'bibentry' class by providing a class\n 'BibEntry' which stores 'BibTeX' and 'BibLaTeX' references, supports 'UTF-8'\n encoding, and can be easily searched by any field, by date ranges, and by\n various formats for name lists (author by last names, translator by full names,\n etc.). Entries can be updated, combined, sorted, printed in a number of styles,\n and exported. 'BibTeX' and 'BibLaTeX' '.bib' files can be read into 'R' and\n converted to 'BibEntry' objects. Interfaces to 'NCBI Entrez', 'CrossRef', and\n 'Zotero' are provided for importing references and references can be created\n from locally stored 'PDF' files using 'Poppler'. Includes functions for citing\n and generating a bibliography with hyperlinks for documents prepared with\n 'RMarkdown' or 'RHTML'.","Published":"2016-11-13","License":"GPL-2 | GPL-3 | BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"refnr","Version":"0.1.0","Title":"Refining Data Table Using a Set of Formulas","Description":"A tool for refining data frame with formulas.","Published":"2016-04-19","License":"Apache License 2.0 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"refset","Version":"0.1.0","Title":"Subsets with Reference Semantics","Description":"Provides subsets with reference semantics, i.e. subsets\n which automatically reflect changes in the original object, and which\n optionally update the original object when they are changed.","Published":"2016-12-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"refund","Version":"0.1-16","Title":"Regression with Functional Data","Description":"Methods for regression for functional\n data, including function-on-scalar, scalar-on-function, and\n function-on-function regression. 
Some of the functions are applicable to\n image data.","Published":"2016-08-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"refund.shiny","Version":"0.3.0","Title":"Interactive Plotting for Functional Data Analyses","Description":"Interactive plotting for functional data analyses.","Published":"2016-11-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"refund.wave","Version":"0.1","Title":"Wavelet-Domain Regression with Functional Data","Description":"Methods for regressing scalar responses on functional or image predictors, via transformation to the wavelet domain and back.","Published":"2014-07-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"regclass","Version":"1.5","Title":"Tools for an Introductory Class in Regression and Modeling","Description":"Contains basic tools for visualizing, interpreting, and building regression models. It has been designed for use with the book Introduction to Regression and Modeling with R by Adam Petrie, Cognella Publishers, ISBN: 978-1-63189-250-9 .","Published":"2017-01-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RegClust","Version":"1.0","Title":"Cluster analysis via regression coefficients","Description":"This package clusters regression coefficients using the methods of clustering through linear regression models (CLM) (Qin and Self 2006). Maximum likelihood approach is used to infer the parameters for each cluster. Bayesian information criterion (BIC) combined with Bootstrapped maximum volume (BMV) criterion are used to determine the number of clusters. 
","Published":"2014-02-24","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"reGenotyper","Version":"1.2.0","Title":"Detecting Mislabeled Samples in Genetic Data","Description":"Detecting mislabeled samples in genetic data.","Published":"2015-05-05","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"REGENT","Version":"1.0.6","Title":"Risk Estimation for Genetic and Environmental Traits","Description":"Produces population distribution of disease risk and statistical risk categories, and predicts risks for individuals with genotype information.","Published":"2015-08-19","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"regexPipes","Version":"0.0.1","Title":"Wrappers Around 'base::grep()' for Use with Pipes","Description":"Provides wrappers around base::grep() where the first argument\n is standardized to take the data object. This makes it less of a pain to use\n regular expressions with 'magrittr' or other pipe operators.","Published":"2016-09-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"regexr","Version":"1.1.0","Title":"Readable Regular Expressions","Description":"An R framework for constructing and managing human\n readable regular expressions. It aims to provide tools that\n enable the user to write regular expressions in a way that is\n similar to the ways R code is written. 
The tools allow the user\n to (1) write in smaller, modular, named, sub-expressions, (2)\n write top to bottom, rather than a single string (3) comment\n individual chunks, (4) indent expressions to clearly present\n regular expression groups, (5) add vertical line spaces and R\n comments (i.e., #), and (6) test the validity of the\n concatenated expression and the modular sub-expressions.","Published":"2015-08-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"reghelper","Version":"0.3.3","Title":"Helper Functions for Regression Analysis","Description":"A set of functions used to automate commonly used methods in\n regression analysis. This includes plotting interactions, calculating simple\n slopes, calculating standardized coefficients, etc. See the reghelper\n documentation for more information, documentation, and examples.","Published":"2017-04-07","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"registry","Version":"0.3","Title":"Infrastructure for R Package Registries","Description":"Provides a generic infrastructure for creating and using registries.","Published":"2015-07-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"reglogit","Version":"1.2-4","Title":"Simulation-Based Regularized Logistic Regression","Description":"Regularized (polychotomous) logistic regression \n by Gibbs sampling. The package implements subtly different \n MCMC schemes with varying efficiency depending on the data type \n (binary v. binomial, say) and the desired estimator (regularized maximum\n likelihood, or Bayesian maximum a posteriori/posterior mean, etc.) 
through a \n unified interface.","Published":"2015-06-22","License":"LGPL","snapshot_date":"2017-06-23"} {"Package":"regnet","Version":"0.1.1","Title":"Network-Based Regularization for Generalized Linear Models","Description":"Network-based regularization has achieved success in variable selections for \n high-dimensional biological data, due to its ability to incorporate the correlations \n among genomic features. This package provides procedures for fitting network-based \n regularization, minimax concave penalty (MCP) and lasso penalty for generalized linear \n models. This first version, regnet0.1.1, focuses on binary outcomes. Functions for \n continuous, survival outcomes and other regularization methods will be included in the \n forthcoming upgraded version. ","Published":"2017-05-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"regpro","Version":"0.1.1","Title":"Nonparametric Regression","Description":"\n Tools are provided for\n (1) nonparametric regression (kernel, local linear),\n (2) semiparametric regression (single index, additive models), and\n (3) quantile regression (linear, kernel).","Published":"2016-01-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"regress","Version":"1.3-15","Title":"Gaussian Linear Models with Linear Covariance Structure","Description":"Functions to fit Gaussian linear model by maximising the\n residual log likelihood where the covariance structure can be\n written as a linear combination of known matrices. Can be used\n for multivariate models and random effects models. Easy\n straight forward manner to specify random effects models,\n including random interactions. Code now optimised to use\n Sherman Morrison Woodbury identities for matrix inversion in\n random effects models. 
We've added the ability to fit models\n using any kernel as well as a function to return the mean and\n covariance of random effects conditional on the data (BLUPs).","Published":"2017-04-21","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"RegressionFactory","Version":"0.7.2","Title":"Expander Functions for Generating Full Gradient and Hessian from\nSingle-Slot and Multi-Slot Base Distributions","Description":"The expander functions rely on the mathematics developed for the Hessian-definiteness invariance theorem for linear projection transformations of variables, described in authors' paper, to generate the full, high-dimensional gradient and Hessian from the lower-dimensional derivative objects. This greatly relieves the computational burden of generating the regression-function derivatives, which in turn can be fed into any optimization routine that utilizes such derivatives. The theorem guarantees that Hessian definiteness is preserved, meaning that reasoning about this property can be performed in the low-dimensional space of the base distribution. This is often a much easier task than its equivalent in the full, high-dimensional space. Definiteness of Hessian can be useful in selecting optimization/sampling algorithms such as Newton-Raphson optimization or its sampling equivalent, the Stochastic Newton Sampler. Finally, in addition to being a computational tool, the regression expansion framework is of conceptual value by offering new opportunities to generate novel regression problems.","Published":"2016-09-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"regRSM","Version":"0.5","Title":"Random Subspace Method (RSM) for Linear Regression","Description":"Performs Random Subspace Method (RSM) for high-dimensional linear regression to obtain variable importance measures. 
The final model is chosen based on a validation set or the Generalized Information Criterion.","Published":"2015-09-11","License":"LGPL-2 | LGPL-3 | GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"regsel","Version":"0.2","Title":"Variable Selection and Regression","Description":"Functions for fitting linear and generalized linear models with variable selection. The functions can automatically do Stepwise Regression, Lasso or Elastic Net as variable selection methods. Lasso and Elastic Net are improved and handle factors better (they can either include or exclude all factor levels).","Published":"2016-03-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"regsem","Version":"0.8.1","Title":"Regularized Structural Equation Modeling","Description":"Uses both ridge and lasso penalties (and extensions) to penalize\n specific parameters in structural equation models. The package offers additional\n cost functions, cross validation, and other extensions beyond traditional structural\n equation models.","Published":"2017-06-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"regspec","Version":"2.4","Title":"Non-Parametric Bayesian Spectrum Estimation for Multirate Data","Description":"Computes linear Bayesian spectral estimates from multirate\n\tdata for second-order stationary time series. Provides credible\n\tintervals and methods for plotting various spectral estimates.","Published":"2016-10-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"regsubseq","Version":"0.12","Title":"Detect and Test Regular Sequences and Subsequences","Description":"For a sequence of event occurrence times, we are interested in\n finding subsequences in it that are too \"regular\". We define regular as being\n significantly different from a homogeneous Poisson process. The departure from\n the Poisson process is measured using an L1 distance. 
See Di and Perlman 2007\n for more details.","Published":"2014-03-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"regtest","Version":"0.05","Title":"Regression testing","Description":"Functions for unary and binary regression tests","Published":"2012-08-24","License":"file LICENSE","snapshot_date":"2017-06-23"} {"Package":"regtools","Version":"1.0.1","Title":"Regression Tools","Description":"Tools for linear, nonlinear and nonparametric regression\n and classification. Parametric fit assessment using\n nonparametric methods. One vs. All and All vs. All\n multiclass classification. Nonparametric regression for\n general dimension, locally-linear option. Nonlinear \n regression with the Eicker-White method for dealing with \n heteroscedasticity, k-NN for general dimension and \n general descriptive functions.","Published":"2016-11-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rehh","Version":"2.0.2","Title":"Searching for Footprints of Selection using Haplotype\nHomozygosity Based Tests","Description":"Functions for the detection of footprints of selection on\n dense SNP data using Extended Haplotype Homozygosity (EHH)\n based tests. The package includes computation of EHH, iHS\n (within population) and Rsb or XP-EHH (across pairs of populations)\n statistics. Various plotting functions are also included to\n facilitate visualization and interpretation of the results.","Published":"2016-11-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rehh.data","Version":"1.0.0","Title":"Data Only: Searching for Footprints of Selection using Haplotype\nHomozygosity Based Tests","Description":"Contains example data for the 'rehh' package. ","Published":"2016-11-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ReinforcementLearning","Version":"1.0.1","Title":"Model-Free Reinforcement Learning","Description":"Performs model-free reinforcement learning in R. 
This implementation enables the learning\n of an optimal policy based on sample sequences consisting of states, actions and rewards. In \n addition, it supplies multiple predefined reinforcement learning algorithms, such as experience \n replay.","Published":"2017-06-19","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ReIns","Version":"1.0.4","Title":"Functions from \"Reinsurance: Actuarial and Statistical Aspects\"","Description":"Functions from the book \"Reinsurance: Actuarial and Statistical Aspects\" (2017) by Hansjoerg Albrecher, Jan Beirlant and Jef Teugels .","Published":"2017-06-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"reinstallr","Version":"0.1.4","Title":"Search and Install Missing Packages","Description":"Searches R files for packages that are not installed and runs install.packages.","Published":"2016-12-11","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rel","Version":"1.3.1","Title":"Reliability Coefficients","Description":"Derives point estimates with confidence intervals for Bennett et al.'s S, Cohen's kappa, Conger's kappa, Fleiss' kappa, Gwet's AC, intraclass correlation coefficients, Krippendorff's alpha, Scott's pi, the standard error of measurement, and weighted kappa.","Published":"2017-05-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rela","Version":"4.1","Title":"Item Analysis Package with Standard Errors","Description":"Item analysis with alpha standard error and principal axis\n factoring for continuous variable scales (with plots).","Published":"2009-10-27","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"relabeLoadings","Version":"1.0","Title":"Relabel Loadings from MCMC Output for Confirmatory Factor\nAnalysis","Description":"In confirmatory factor analysis (CFA), structural constraints\n typically ensure that the model is identified up to all possible reflections,\n i.e., column sign changes of the matrix of loadings. 
Such reflection invariance\n is problematic for Bayesian CFA when the reflection modes are not well separated\n in the posterior distribution. Imposing rotational constraints -- fixing\n some loadings to be zero or positive in order to pick a factor solution that\n corresponds to one reflection mode -- may not provide a satisfactory solution\n for Bayesian CFA. The function 'relabel' uses the relabeling algorithm of\n Erosheva and Curtis to correct for sign invariance in MCMC draws from CFA\n models. The MCMC draws should come from Bayesian CFA models that are fit without\n rotational constraints.","Published":"2016-11-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"relaimpo","Version":"2.2-2","Title":"Relative importance of regressors in linear models","Description":"relaimpo provides several metrics for assessing relative importance in linear models. These can be printed, plotted and bootstrapped. The recommended metric is lmg, which provides a decomposition of the model explained variance into non-negative contributions. There is a version of this package available that additionally provides a new and also recommended metric called pmvd. 
If you are a non-US user, you can download this extended version from Ulrike Groemping's web site.","Published":"2013-09-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Relatedness","Version":"1.4","Title":"An Algorithm to Infer Relatedness","Description":"Inference of relatedness coefficients from a bi-allelic genotype matrix using a Maximum Likelihood estimation.","Published":"2016-06-03","License":"AGPL-3","snapshot_date":"2017-06-23"} {"Package":"relations","Version":"0.6-6","Title":"Data Structures and Algorithms for Relations","Description":"Data structures and algorithms for k-ary relations with\n arbitrary domains, featuring relational algebra, predicate functions,\n and fitters for consensus relations.","Published":"2015-10-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"relax","Version":"1.3.15","Title":"relax -- R Editor for Literate Analysis and lateX","Description":"package relax contains some functions for report\n writing, presentation, and programming: relax(), tangleR(),\n weaveR(), (g)slider(). \"relax\" is written in R and Tcl/Tk.\n relax creates a new window (top level Tcl/Tk widget) that consists\n of two text fields and some buttons and menus.\n Text (chunks) and code (chunks) are inserted in the upper text field (report field).\n Code chunks are evaluated by clicking on EvalRCode.\n Results are shown in the lower text field (output field) and\n will be transferred to the report field by pressing on Insert.\n In this way you get correct reports. These reports can be\n loaded again for presentation, modification and result checking.\n tangleR() and weaveR() implement a plain kind of tangling\n and weaving. 
gslider() and slider() are designed to define sliders for interactive\n experiments in a simple way.\n The syntax rules of code chunks and text chunks are defined by \n the noweb system proposed by Norman Ramsey\n (http://www.eecs.harvard.edu/~nr/noweb/intro.html).","Published":"2014-03-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"relaxnet","Version":"0.3-2","Title":"Relaxation of glmnet models (as in relaxed lasso, Meinshausen\n2007)","Description":"Extends the glmnet package with \"relaxation\", done by running glmnet once on the entire predictor matrix, then again on each different subset of variables from along the regularization path. Relaxation may lead to improved prediction accuracy for truly sparse data generating models, as well as fewer false positives (i.e. fewer noncontributing predictors in the final model). Penalty may be lasso (alpha = 1) or elastic net (0 < alpha < 1). For this version, family may be \"gaussian\" or \"binomial\" only. Takes advantage of fast FORTRAN code from the glmnet package.","Published":"2013-08-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"relaxo","Version":"0.1-2","Title":"Relaxed Lasso","Description":"Relaxed Lasso is a generalisation of the Lasso shrinkage\n technique for linear regression. Both variable selection and\n parameter estimation is achieved by regular Lasso, yet both\n steps do not necessarily use the same penalty parameter. The\n results include all standard Lasso solutions but allow often\n for sparser models while having similar or even slightly better\n predictive performance if many predictor variables are present.\n The package depends on the LARS package.","Published":"2012-06-01","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"reldist","Version":"1.6-6","Title":"Relative Distribution Methods","Description":"Tools for the comparison of distributions. 
This includes nonparametric estimation of the relative distribution PDF and CDF and numerical summaries as described in "Relative Distribution Methods in the Social Sciences" by Mark S. Handcock and Martina Morris, Springer-Verlag, 1999, ISBN 0387987789.","Published":"2016-10-09","License":"GPL-3 + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"relen","Version":"1.0.1","Title":"Compute Relative Entropy","Description":"This function computes the relative entropy (H) as an index for qualitative variation of a factor.","Published":"2015-11-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"relevent","Version":"1.0-4","Title":"Relational Event Models","Description":"Tools to fit relational event models.","Published":"2015-03-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Reliability","Version":"0.0-2","Title":"Functions for estimating parameters in software reliability\nmodels","Description":"Functions for estimating parameters in software reliability models.\n Only infinite failure models are implemented so far. ","Published":"2009-02-01","License":"Unlimited","snapshot_date":"2017-06-23"} {"Package":"ReliabilityTheory","Version":"0.1.5","Title":"Tools for Structural Reliability Analysis","Description":"A variety of tools useful for performing structural\n reliability analysis, such as with structure function and\n system signatures. 
Plans to expand more widely.","Published":"2015-10-14","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"reliaR","Version":"0.01","Title":"Package for some probability distributions","Description":"A collection of utilities for some reliability\n models/probability distributions.","Published":"2011-01-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"relimp","Version":"1.0-5","Title":"Relative Contribution of Effects in a Regression Model","Description":"Functions to facilitate inference on the relative importance of predictors in a linear or generalized linear model, and a couple of useful Tcl/Tk widgets.","Published":"2016-03-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"relMix","Version":"1.2.3","Title":"Relationship Inference for DNA Mixtures","Description":"Makes relationship inference involving DNA mixtures with unknown profiles. ","Published":"2017-04-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"relSim","Version":"0.2-0","Title":"Relative Simulator","Description":"A set of tools to explore the behaviour of statistics used for forensic DNA interpretation when close relatives are involved. 
The package also offers some useful tools for exploring other forensic DNA situations.","Published":"2015-04-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"relsurv","Version":"2.0-9","Title":"Relative Survival","Description":"Various functions for relative survival analysis.","Published":"2016-04-25","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"RelValAnalysis","Version":"1.0","Title":"Relative Value Analysis","Description":"Classes and functions for analyzing the performance of portfolios relative to a benchmark.","Published":"2014-06-26","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"rem","Version":"1.2.8","Title":"Relational Event Models (REM)","Description":"Calculate endogenous network effects in event sequences and fit relational event models (REM): Using network event sequences (where each tie between a sender and a target in a network is time-stamped), REMs can measure how networks form and evolve over time. Endogenous patterns such as popularity effects, inertia, similarities, cycles or triads can be calculated and analyzed over time.","Published":"2017-06-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rematch","Version":"1.0.1","Title":"Match Regular Expressions with a Nicer 'API'","Description":"A small wrapper on 'regexpr' to extract the matches and\n captured groups from the match of a regular expression to a character\n vector.","Published":"2016-04-21","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rematch2","Version":"2.0.1","Title":"Tidy Output from Regular Expression Matching","Description":"Wrappers on 'regexpr' and 'gregexpr' to return the match\n results in tidy data frames.","Published":"2017-06-20","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"remindR","Version":"0.0.1","Title":"Insert and Extract \"Reminders\" from Function Comments","Description":"Insert/extract text \"reminders\" into/from function source code 
\n comments or as the \"comment\" attribute of any object. \n The former can be handy in development as reminders of e.g. argument\n requirements, expected objects in the calling environment, required options\n settings, etc. The latter can be used to provide information of the object and \n as simple manual \"tooltips\" for users, among other things.","Published":"2017-03-07","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"remix","Version":"2.1","Title":"Remix your data","Description":"remix provides remix, a quick and easy function for\n describing datasets. It can be view as a mix of cast (in\n package reshape) and summary.formula (in package Hmisc).","Published":"2011-10-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rEMM","Version":"1.0-11","Title":"Extensible Markov Model for Modelling Temporal Relationships\nBetween Clusters","Description":"Implements TRACDS (Temporal Relationships \n between Clusters for Data Streams), a generalization of \n Extensible Markov Model (EMM). TRACDS adds a temporal or order model\n to data stream clustering by superimposing a dynamically adapting\n Markov Chain. Also provides an implementation of EMM (TRACDS on top of tNN \n data stream clustering). 
Development of this \n package was supported in part by NSF IIS-0948893 and R21HG005912 from \n the National Human Genome Research Institute.","Published":"2015-07-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"remMap","Version":"0.2-0","Title":"Regularized Multivariate Regression for Identifying Master\nPredictors","Description":"remMap is developed for fitting multivariate response regression models under the high-dimension-low-sample-size setting ","Published":"2015-01-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"remote","Version":"1.2.1","Title":"Empirical Orthogonal Teleconnections in R","Description":"Empirical orthogonal teleconnections in R.\n 'remote' is short for 'R(-based) EMpirical Orthogonal TEleconnections'.\n It implements a collection of functions to facilitate empirical\n orthogonal teleconnection analysis. Empirical Orthogonal Teleconnections\n (EOTs) denote a regression based approach to decompose spatio-temporal\n fields into a set of independent orthogonal patterns. They are quite\n similar to Empirical Orthogonal Functions (EOFs) with EOTs producing\n less abstract results. In contrast to EOFs, which are orthogonal in both\n space and time, EOT analysis produces patterns that are orthogonal in\n either space or time.","Published":"2016-09-17","License":"GPL (>= 3) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"remoter","Version":"0.3-2","Title":"Remote R: Control a Remote R Session from a Local One","Description":"A set of utilities for controlling a remote R session\n from a local one. Simply set up a server (see package vignette\n for more details) and connect to it from your local R session,\n including 'RStudio'. Network communication is handled\n by the 'ZeroMQ' library by way of the 'pbdZMQ' package. 
The \n client/server framework is a custom 'REPL'.","Published":"2016-04-29","License":"BSD 2-clause License + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"remotes","Version":"1.0.0","Title":"R Package Installation from Remote Repositories, Including\n'GitHub'","Description":"Download and install R packages stored in 'GitHub',\n 'BitBucket', or plain 'subversion' or 'git' repositories. This package\n is a lightweight replacement of the 'install_*' functions in 'devtools'.\n Indeed most of the code was copied over from 'devtools'.","Published":"2016-09-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"REndo","Version":"1.2","Title":"Fitting Linear Models with Endogenous Regressors using Latent\nInstrumental Variables","Description":"Fits linear models with endogenous regressor using latent\n instrumental variable approaches. The methods included in the package\n are Lewbel's (1997) higher moments approach as well as Lewbel's (2012) \n heteroskedasticity approach, Park and Gupta's (2012) joint estimation method\n that uses Gaussian copula and Kim and Frees's (2007) multilevel generalized\n method of moment approach that deals with endogeneity in a multilevel setting.\n These are statistical techniques to address the endogeneity problem where no\n external instrumental variables are needed.\n This version: \n - solves an error occurring when using the multilevelIV() function with two levels, random intercept. \n - returns the AIC and BIC for copulaCorrection() (method 1) and latentIV() methods.\n - residuals and fitted values can be saved by users for latentIV() and copulaCorrection() methods.\n - improves the summary methods for copulaCorrection() and multilevelIV() functions.","Published":"2017-04-10","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"Renext","Version":"3.1-0","Title":"Renewal Method for Extreme Values Extrapolation","Description":"Peaks Over Threshold (POT) or 'methode du renouvellement'. 
The distribution for the exceedances can be chosen, and heterogeneous data (including historical data or block data) can be used in a Maximum-Likelihood framework. ","Published":"2016-07-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RenextGUI","Version":"1.4-0","Title":"GUI for Renext","Description":"Graphical User Interface for Renext.","Published":"2016-07-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rentrez","Version":"1.1.0","Title":"Entrez in R","Description":"Provides an R interface to the NCBI's EUtils API\n allowing users to search databases like GenBank and PubMed, process the\n results of those searches and pull data into their R sessions.","Published":"2017-06-01","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"Reol","Version":"1.55","Title":"R interface to the Encyclopedia of Life","Description":"An R interface to the Encyclopedia of Life API. Includes functions for downloading and extracting information off the EOL pages.","Published":"2014-04-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ReorderCluster","Version":"1.0","Title":"Reordering the dendrogram according to the class labels","Description":"Tools for performing the leaf reordering for the dendrogram that preserves the hierarchical clustering result and at the same time tries to group instances from the same class together.","Published":"2014-07-23","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"RepeatABEL","Version":"1.1","Title":"GWAS for Multiple Observations on Related Individuals","Description":"Performs genome-wide association studies on individuals that are both related and have repeated measurements.","Published":"2016-08-19","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"repeated","Version":"1.1.0","Title":"Non-Normal Repeated Measurements Models","Description":"Various functions to fit models for non-normal repeated\n 
measurements.","Published":"2017-02-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RepeatedHighDim","Version":"2.0.0","Title":"Global tests for expression data of high-dimensional sets of\nmolecular features","Description":"Global tests for expression data of high-dimensional sets of\n molecular features.","Published":"2013-08-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"repfdr","Version":"1.1-3","Title":"Replicability Analysis for Multiple Studies of High Dimension","Description":"Estimation of Bayes and local Bayes false discovery rates for\n replicability analysis.","Published":"2015-04-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"repijson","Version":"0.1.0","Title":"Tools for Handling EpiJSON (Epidemiology Data) Files","Description":"Supplies classes and routines to convert data to and from EpiJSON files. This package provides conversion functions for data.frame, sp and obkData.","Published":"2015-06-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"replicatedpp2w","Version":"0.1-1","Title":"Two-Way ANOVA-Like Method to Analyze Replicated Point Patterns","Description":"Test for effects of both individual factors and their interaction on replicated spatial patterns in a two factorial design.","Published":"2015-04-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"replicationInterval","Version":"2.0.1","Title":"Replication Interval Functions","Description":"A common problem faced by journal reviewers and authors is the question of\n whether the results of a replication study are consistent with the original\n published study. One solution to this problem is to examine the effect size\n from the original study and generate the range of effect sizes that could\n reasonably be obtained (due to random sampling) in a replication attempt\n (i.e., calculate a replication interval). 
If a replication effect size falls\n outside the replication interval, then that effect likely did not occur\n due to the effects of sampling error alone. Alternatively, if a replication\n effect size falls within the replication interval, then the replication\n effect could have reasonably occurred due to the effects of sampling error\n alone. This package has functions that calculate the replication interval for \n the correlation (i.e., r), standardized mean difference (i.e., d-value), and mean. \n The calculations used in version 2.0.0 and onward differ from past calculations \n due to feedback during the journal review process. The new calculations allow \n for a more precise interpretation of the replication interval.","Published":"2016-05-26","License":"MIT License + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"replyr","Version":"0.4.0","Title":"Diligent Use of Big Data for R","Description":"Methods to diligently use 'dplyr' remote data sources ('SQL' databases,\n 'Spark' 2.0.0 and above).\n Adds convenience functions to make such tasks more like\n working with an in-memory R 'data.frame'.","Published":"2017-06-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"repmis","Version":"0.5","Title":"Miscellaneous Tools for Reproducible Research","Description":"Tools to load 'R' packages\n and automatically generate BibTeX files citing them as well as load and\n cache plain-text and 'Excel' formatted data stored on 'GitHub', and\n from other sources.","Published":"2016-02-07","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"repo","Version":"2.0.2","Title":"A Data-Centered Data Flow Manager","Description":"A data manager meant to avoid manual storage/retrieval of\n data to/from the file system. It builds one (or more) centralized\n repository where R objects are stored with annotations, tags,\n dependency notes, provenance traces. 
It also provides navigation\n tools to easily locate, load and edit previously stored resources.","Published":"2016-05-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"repolr","Version":"3.4","Title":"Repeated Measures Proportional Odds Logistic Regression","Description":"Fits linear models to repeated ordinal scores using GEE methodology.","Published":"2016-02-27","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ReporteRs","Version":"0.8.8","Title":"Microsoft Word and PowerPoint Documents Generation","Description":"Create 'Microsoft Word' document (>=2007) and \n 'Microsoft PowerPoint' document (>=2007) from R. There are\n several features to let you format and present R outputs ; e.g. Editable\n Vector Graphics, functions for complex tables reporting, reuse of corporate\n template document. You can use the package as a tool for fast reporting\n and as a tool for reporting automation. The package does not require\n any installation of Microsoft product to be able to write Microsoft files.","Published":"2017-01-09","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ReporteRsjars","Version":"0.0.2","Title":"External jars required for package ReporteRs","Description":"External jars required for package ReporteRs. 
ReporteRs is an \n\tR package for creating Microsoft Word document (>=2007), Microsoft \n\tPowerpoint document (>=2007) and HTML documents from R.","Published":"2014-08-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"reportr","Version":"1.2.2","Title":"A General Message and Error Reporting System","Description":"Provides a system for reporting messages, which provides certain useful features over the standard R system, such as the incorporation of output consolidation, message filtering, expression substitution, automatic generation of stack traces for debugging, and conditional reporting based on the current \"output level\".","Published":"2016-10-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"reportReg","Version":"0.1.0","Title":"An Easy Way to Report Regression Analysis","Description":"Provides an easy way to report the results of regression analysis, including:\n 1. Proportional hazards regression model from function 'coxph' of package 'survival';\n 2. Ordered logistic regression from function 'polr' of package 'MASS';\n 3. Binary logistic regression from function 'glm' of package 'stats';\n 4. Linear regression from function 'lm' of packages 'stats'.","Published":"2017-05-19","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"reportROC","Version":"1.0","Title":"An Easy Way to Report ROC Analysis","Description":"Provides an easy way to report the results of ROC analysis, including:\n 1. an ROC curve. 2. 
the value of Cutoff, SEN (sensitivity), SPE (specificity),\n AUC (Area Under Curve), AUC.SE (the standard error of AUC), \n PLR (positive likelihood ratio), NLR (negative likelihood ratio), \n PPV (positive predictive value), NPV (negative predictive value).","Published":"2017-04-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"reportRx","Version":"1.0","Title":"Tools for automatically generating reproducible clinical report","Description":"reportRx is a set of tools that integrates with LaTeX and knitr to\n automatically generate reproducible clinical reports. Functions to\n automatically produce demographic tables, outcome summaries, univariate and\n multivariate analysis results and more are included.","Published":"2013-12-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"reports","Version":"0.1.4","Title":"Assist the Workflow of Writing Academic Articles and Other\nReports","Description":"Assists in writing reports and\n presentations by providing a framework that brings together\n existing R, LaTeX/.docx and Pandoc tools. The package is\n designed to be used with RStudio, MiKTex/Tex Live/LibreOffice,\n knitr, knitcitations, Pandoc and pander. The user will want to\n download these free programs/packages to maximize the\n effectiveness of the reports package. Functions with two\n letter names are general text formatting functions for copying\n text from articles for inclusion as a citation.","Published":"2014-12-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"reporttools","Version":"1.1.2","Title":"Generate LaTeX Tables of Descriptive Statistics","Description":"These functions are especially helpful when writing reports of data analysis using Sweave.","Published":"2015-07-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"REPPlab","Version":"0.9.4","Title":"R Interface to 'EPP-Lab', a Java Program for Exploratory\nProjection Pursuit","Description":"An R Interface to 'EPP-lab' v1.0. 
'EPP-lab' is a Java program for\n projection pursuit using genetic algorithms written by Alain Berro and S. Larabi\n Marie-Sainte and is included in the package. The 'EPP-lab' sources are available\n under https://github.com/fischuu/EPP-lab.git.","Published":"2016-08-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"repr","Version":"0.12.0","Title":"Serializable Representations","Description":"String and binary representations of objects for several formats /\n mime types.","Published":"2017-04-07","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"represent","Version":"1.0","Title":"Determine the representativity of two multidimensional data sets","Description":"Contains workhorse function jrparams(), as well as two\n helper functions Mboxtest() and JRsMahaldist(), and four\n example data sets.","Published":"2012-03-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"represtools","Version":"0.1.2","Title":"Reproducible Research Tools","Description":"Reproducible research tools automates the creation of an analysis directory structure and work flow. There are R markdown\n skeletons which encapsulate typical analytic work flow steps. Functions will create appropriate modules which may\n pass data from one step to another.","Published":"2016-08-30","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"reprex","Version":"0.1.1","Title":"Prepare Reproducible Example Code for Sharing","Description":"Convenience wrapper that uses the 'rmarkdown' package to render\n small snippets of code to target formats that include both code and output.\n The goal is to encourage the sharing of small, reproducible, and runnable\n examples on code-oriented websites, such as and\n , or in email. 
'reprex' also extracts clean, runnable R\n code from various common formats, such as copy/paste from an R session.","Published":"2017-01-13","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"reproducer","Version":"0.1.8","Title":"Reproduce Statistical Analyses and Meta-Analyses","Description":"Includes data analysis functions (e.g., to calculate effect sizes and 95% Confidence Intervals (CI) on Standardised Effect Sizes (d) for ABBA cross-over repeated-measures experimental designs), data presentation functions (e.g., density curve overlaid on histogram), and the data sets analyzed in different research papers in software engineering (e.g., related to software defect prediction or multi-site experiment concerning the extent to which structured abstracts were clearer and more complete than conventional abstracts) to streamline reproducible research in software engineering. ","Published":"2017-02-12","License":"CC BY 4.0","snapshot_date":"2017-06-23"} {"Package":"REPTILE","Version":"1.0","Title":"Regulatory DNA Element Prediction","Description":"Predicting regulatory DNA elements based on epigenomic signatures. This package is more of a set of building blocks than a direct solution. REPTILE regulatory prediction pipeline is built on this R package. See for more information.","Published":"2016-06-21","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"REQS","Version":"0.8-12","Title":"R/EQS Interface","Description":"This package contains the function run.eqs() which calls\n an EQS script file, executes the EQS estimation, and, finally,\n imports the results as R objects. These two steps can be\n performed separately: call.eqs() calls and executes EQS,\n whereas read.eqs() imports existing EQS outputs as objects into\n R. 
It requires EQS 6.2 (build 98 or higher).","Published":"2012-10-29","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"request","Version":"0.1.0","Title":"High Level 'HTTP' Client","Description":"High level and easy 'HTTP' client for 'R'. Provides functions for\n building 'HTTP' queries, including query parameters, body requests, headers,\n authentication, and more.","Published":"2016-01-03","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"requireR","Version":"1.0.0.1","Title":"R Source Code Modularizer","Description":"Modularizes source code. Keeps the global environment clean\n and makes interdependencies explicit. Inspired by 'RequireJS'.","Published":"2017-01-05","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rerddap","Version":"0.4.2","Title":"General Purpose Client for 'ERDDAP' Servers","Description":"General purpose R client for 'ERDDAP' servers. Includes\n functions to search for 'datasets', get summary information on\n 'datasets', and fetch 'datasets', in either 'csv' or 'netCDF' format.\n 'ERDDAP' information: \n .","Published":"2017-05-12","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"REREFACT","Version":"1.0","Title":"Reordering and/or Reflecting Factors for Simulation Studies with\nExploratory Factor Analysis","Description":"Executes a post-rotation algorithm that REorders and/or REflects FACTors (REREFACT) for each replication of a simulation study with exploratory factor analysis.","Published":"2016-04-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"reReg","Version":"1.0-0","Title":"Recurrent Event Regression","Description":"A collection of regression models for recurrent event process and failure time. 
","Published":"2015-10-31","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"resample","Version":"0.4","Title":"Resampling Functions","Description":"Bootstrap, permutation tests, and other resampling functions,\n\tfeaturing easy-to-use syntax.","Published":"2015-04-12","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"resampledata","Version":"0.2.0","Title":"Data Sets for Mathematical Statistics with Resampling in R","Description":"Package of data sets from \"Mathematical Statistics\n with Resampling in R\" (2011) by Laura Chihara and Tim Hesterberg.","Published":"2016-08-17","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"resemble","Version":"1.2.2","Title":"Regression and Similarity Evaluation for Memory-Based Learning\nin Spectral Chemometrics","Description":"Implementation of functions for spectral similarity/dissimilarity\n analysis and memory-based learning (MBL) for non-linear modeling\n in complex spectral datasets. In chemometrics MBL is also known as local modeling.","Published":"2016-03-04","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"reservoir","Version":"1.1.5","Title":"Tools for Analysis, Design, and Operation of Water Supply\nStorages","Description":"Measure single-storage water supply system performance using resilience,\n reliability, and vulnerability metrics; assess storage-yield-reliability\n relationships; determine no-fail storage with sequent peak analysis; optimize\n release decisions for water supply, hydropower, and multi-objective reservoirs\n using deterministic and stochastic dynamic programming; generate inflow\n replicates using parametric and non-parametric models; evaluate inflow\n persistence using the Hurst coefficient.","Published":"2016-04-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"reshape","Version":"0.8.6","Title":"Flexibly Reshape Data","Description":"Flexibly restructure and aggregate data using \n just two functions: melt 
and cast.","Published":"2016-10-21","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"reshape2","Version":"1.4.2","Title":"Flexibly Reshape Data: A Reboot of the Reshape Package","Description":"Flexibly restructure and aggregate data using just two\n functions: melt and 'dcast' (or 'acast').","Published":"2016-10-22","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"reshapeGUI","Version":"0.1.0","Title":"A GUI for the reshape2 and plyr packages","Description":"A tool for learning how to use the functions, melt,\n acast/dcast, and ddply.","Published":"2011-05-31","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"ResistorArray","Version":"1.0-28","Title":"electrical properties of resistor networks","Description":"electrical properties of resistor networks.","Published":"2012-11-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ResourceSelection","Version":"0.3-2","Title":"Resource Selection (Probability) Functions for Use-Availability\nData","Description":"Resource Selection (Probability) Functions\n for use-availability wildlife data\n based on weighted distributions as described in\n Lele and Keim (2006) ,\n Lele (2009) ,\n and Solymos & Lele (2016) .","Published":"2017-02-28","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"respirometry","Version":"0.4.0","Title":"Tools for Conducting Respirometry Experiments","Description":"Provides tools to enable the researcher to more precisely conduct\n respirometry experiments. Strong emphasis is on aquatic respirometry. Tools\n focus on helping the researcher setup and conduct experiments. Analysis of the\n resulting data is not a focus since analyses are often specific to a particular\n setup, and thus are better created by the researcher individually. 
This\n package provides tools for intermittent, flow-through, and closed respirometry\n techniques.","Published":"2017-05-27","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"RESS","Version":"1.3","Title":"Integrates R and Essentia","Description":"Contains three functions that query AuriQ Systems' Essentia Database and return the results in R. 'essQuery' takes a single Essentia command and captures the output in R, where you can save the output to a dataframe or stream it directly into additional analysis. 'read.essentia' takes an Essentia script and captures the output csv data into R, where you can save the output to a dataframe or stream it directly into additional analysis. 'capture.essentia' takes a file containing any number of Essentia commands and captures the output of the specified statements into R dataframes. Essentia can be downloaded for free at http://www.auriq.com/documentation/source/install/index.html.","Published":"2015-10-28","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"REST","Version":"1.0.1","Title":"RcmdrPlugin Easy Script Templates","Description":"Contains easy scripts which can be used to quickly create GUI windows for 'Rcmdr' Plugins. 
No knowledge about Tcl/Tk is required to make use of these scripts (These scripts are a generalisation of the template scripts in the 'RcmdrPlugin.BiclustGUI' package).","Published":"2015-04-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"restfulr","Version":"0.0.11","Title":"R Interface to RESTful Web Services","Description":"Models a RESTful service as if it were a nested R list.","Published":"2017-04-21","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"restimizeapi","Version":"1.0.0","Title":"Functions for Working with the 'www.estimize.com' Web Services","Description":"Provides the user with functions to develop their trading strategy,\n uncover actionable trading ideas, and monitor consensus shifts with\n crowdsourced earnings and economic estimate data directly from\n . Further information regarding the web services this\n package invokes can be found at .","Published":"2015-05-23","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"restlos","Version":"0.2-2","Title":"Robust Estimation of Location and Scatter","Description":"The restlos package provides algorithms for robust estimation of location (mean and mode) and scatter based on minimum spanning trees (pMST), self-organizing maps (Flood Algorithm), Delaunay triangulations (RDELA), and nested minimum volume convex sets (MVCH). 
The functions are also suitable for outlier detection.","Published":"2015-08-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"restrictedMVN","Version":"1.0","Title":"Multivariate Normal Restricted by Affine Constraints","Description":"A fast Gibbs sampler for multivariate normal with affine constraints.","Published":"2016-12-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"restriktor","Version":"0.1-55","Title":"Restricted Statistical Estimation and Inference for Linear\nModels","Description":"Allow for easy-to-use testing of linear equality and inequality \n restrictions about parameters and effects in linear, robust linear and generalized linear statistical models.","Published":"2017-02-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"resumer","Version":"0.0.3","Title":"Build Resumes with R","Description":"Using a database, LaTeX and R easily build attractive resumes.","Published":"2016-08-03","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rethinker","Version":"1.0.0","Title":"RethinkDB Client","Description":"Simple, native RethinkDB client.","Published":"2015-12-10","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"reticulate","Version":"0.9","Title":"R Interface to Python","Description":"R interface to Python modules, classes, and functions. When calling\n into Python R data types are automatically converted to their equivalent Python\n types. When values are returned from Python to R they are converted back to R\n types. 
Compatible with all versions of Python >= 2.7.","Published":"2017-06-23","License":"Apache License 2.0","snapshot_date":"2017-06-23"} {"Package":"retimes","Version":"0.1-2","Title":"Reaction Time Analysis","Description":"Reaction time analysis by maximum likelihood","Published":"2013-07-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"retistruct","Version":"0.5.10","Title":"Retinal Reconstruction Program","Description":"Reconstructs retinae by morphing a flat surface with\n cuts (a dissected flat-mount retina) onto a curvilinear surface (a\n standard retinal shape). It can estimate the position of a point on the\n intact adult retina to within 8 degrees of arc (3.6% of nasotemporal axis).\n The coordinates in reconstructed retinae can be transformed to visuotopic\n coordinates.","Published":"2015-02-16","License":"CC BY-NC-SA 4.0","snapshot_date":"2017-06-23"} {"Package":"reutils","Version":"0.2.3","Title":"Talk to the NCBI EUtils","Description":"An interface to NCBI databases such as PubMed, GenBank, or GEO\n powered by the Entrez Programming Utilities (EUtils). The nine EUtils\n provide programmatic access to the NCBI Entrez query and database\n system for searching and retrieving biological data.","Published":"2016-09-03","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"reval","Version":"2.0.0","Title":"Repeated Function Evaluation for Sensitivity Analysis","Description":"Simplified scenario testing and sensitivity analysis with R via a\n generalized function for one-factor-at-a-time (OFAT) sensitivity analysis,\n evaluation of parameter sets and (sampled) parameter permutations. 
Options\n for formatting output and parallel processing are also provided.","Published":"2015-05-26","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"revdbayes","Version":"1.1.0","Title":"Ratio-of-Uniforms Sampling for Bayesian Extreme Value Analysis","Description":"Provides functions for the Bayesian analysis of extreme value\n models. The 'rust' package is\n used to simulate a random sample from the required posterior distribution.\n The functionality of 'revdbayes' is similar to the 'evdbayes' package\n , which uses Markov Chain\n Monte Carlo ('MCMC') methods for posterior simulation. See the 'revdbayes' \n website for more information, documentation and examples.","Published":"2017-03-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"revealedPrefs","Version":"0.2","Title":"Revealed Preferences and Microeconomic Rationality","Description":"Computation of (direct and indirect) revealed preferences, fast non-parametric tests of rationality axioms (WARP, SARP, GARP), simulation of axiom-consistent data, and detection of axiom-consistent subpopulations.","Published":"2014-11-29","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"revealjs","Version":"0.9","Title":"R Markdown Format for 'reveal.js' Presentations","Description":"R Markdown format for 'reveal.js' presentations, a framework \n for easily creating beautiful presentations using HTML.","Published":"2017-03-13","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RevEcoR","Version":"0.99.3","Title":"Reverse Ecology Analysis on Microbiome","Description":"An implementation of the reverse ecology framework. Reverse ecology\n refers to the use of genomics to study ecology with no a priori assumptions\n about the organism(s) under consideration, linking organisms to their\n environment. 
It allows researchers to reconstruct the metabolic networks and\n study the ecology of poorly characterized microbial species from their\n genomic information, and has substantial potentials for microbial community\n ecological analysis.","Published":"2016-04-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"revgeo","Version":"0.11","Title":"Reverse Geocoding with the Photon Geocoder for OpenStreetMap and\nGoogle Maps","Description":"Function revgeo() allows you to use the Photon geocoder for OpenStreetMap and Google Maps to reverse geocode coordinate pairs with minimal hassle. ","Published":"2017-05-03","License":"GPL (>= 3.0)","snapshot_date":"2017-06-23"} {"Package":"reweight","Version":"1.2.1","Title":"Adjustment of Survey Respondent Weights","Description":"Adjusts the weights of survey respondents so that the\n marginal distributions of certain variables fit more closely to\n those from a more precise source (e.g. Census Bureau's data).","Published":"2012-04-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rex","Version":"1.1.1","Title":"Friendly Regular Expressions","Description":"A friendly interface for the construction of regular expressions.","Published":"2016-12-05","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"Rexperigen","Version":"0.2.1","Title":"R Interface to Experigen","Description":"Provides convenience functions to communicate with\n an Experigen server: Experigen ()\n is an online framework for creating linguistic experiments,\n and it stores the results on a dedicated server. 
This package can be\n used to retrieve the results from the server, and it is especially\n helpful with registered experiments, as authentication with the server\n has to happen.","Published":"2016-08-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rexpokit","Version":"0.24.1","Title":"R wrappers for EXPOKIT; other matrix functions","Description":"This package wraps some of the matrix exponentiation utilities from EXPOKIT (http://www.maths.uq.edu.au/expokit/), a FORTRAN library that is widely recommended for matrix exponentiation (Sidje RB, 1998. \"Expokit: A Software Package for Computing Matrix Exponentials.\" ACM Trans. Math. Softw. 24(1): 130-156). EXPOKIT includes functions for exponentiating both small, dense matrices, and large, sparse matrices (in sparse matrices, most of the cells have value 0). Rapid matrix exponentiation is useful in phylogenetics when we have a large number of states (as we do when we are inferring the history of transitions between the possible geographic ranges of a species), but is probably useful in other ways as well.","Published":"2013-07-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Rfacebook","Version":"0.6.15","Title":"Access to Facebook API via R","Description":"Provides an interface to the Facebook API.","Published":"2017-05-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Rfast","Version":"1.8.1","Title":"Fast R Functions","Description":"A collection of fast (utility) functions for data analysis. 
Column- and row- wise means, medians, variances, minimums, maximums, many t, F and G-square tests, many regressions (normal, logistic, Poisson), are some of the many fast functions.","Published":"2017-05-15","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"RFc","Version":"0.1-2","Title":"Client for FetchClimate Web Service","Description":"Returns environmental data such as air temperature,\n precipitation rate and wind speed from the FetchClimate Web service ()\n based on user specified arguments such as geographical regions or coordinates and time bounds.","Published":"2016-06-23","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rFDSN","Version":"0.0.0","Title":"Get Seismic Data from the International Federation of Digital\nSeismograph Networks","Description":"This package facilitates searching for and downloading seismic time series in miniSEED format (a minimalist version of the Standard for the Exchange of Earthquake Data) from International Federation of Digital Seismograph Networks repositories. 
This package can also be used to gather information about seismic networks (stations, channels, locations, etc) and find historical earthquake data (origins, magnitudes, etc).","Published":"2014-09-29","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"rFerns","Version":"2.0.2","Title":"Random Ferns Classifier","Description":"An R implementation of the random ferns classifier by Ozuysal et\n al., modified for generic and multi-label classification and featuring OOB error\n approximation and importance measure.","Published":"2016-07-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"RFGLS","Version":"1.1","Title":"Rapid Feasible Generalized Least Squares","Description":"RFGLS uses a generalized least-squares method to perform single-marker association analysis, \n in datasets of nuclear families containing parents, twins, and/or adoptees","Published":"2013-09-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RFgroove","Version":"1.1","Title":"Importance Measure and Selection for Groups of Variables with\nRandom Forests","Description":"Variable selection tools for groups of variables and functional data based on a new grouped variable importance with random forests.","Published":"2016-03-17","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"rfigshare","Version":"0.3.7","Title":"An R Interface to 'figshare'","Description":"An interface to 'figshare' (http://figshare.com), a scientific repository to archive and assign 'DOIs' to data, software, figures, and more.","Published":"2015-06-15","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"RFinanceYJ","Version":"0.3.1","Title":"RFinanceYJ","Description":"Japanese stock market from Yahoo!-finance-Japan","Published":"2013-08-13","License":"BSD 3-clause License","snapshot_date":"2017-06-23"} {"Package":"RFinfer","Version":"0.2.0","Title":"Inference for Random Forests","Description":"A set of add on tools for the 'randomForest' 
package.","Published":"2016-06-10","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"rfishbase","Version":"2.1.2","Title":"R Interface to 'FishBase'","Description":"A programmatic interface to , re-written\n based on an accompanying 'RESTful' API. Access tables describing over 30,000\n species of fish, their biology, ecology, morphology, and more. This package also\n supports experimental access to data, which contains\n nearly 200,000 species records for all types of aquatic life not covered by\n 'FishBase.'","Published":"2017-04-19","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"rfisheries","Version":"0.2","Title":"'Programmatic Interface to the 'openfisheries.org' API'","Description":"A programmatic interface to 'openfisheries.org'. This package is\n part of the 'rOpenSci' suite (http://ropensci.org).","Published":"2016-02-19","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"Rfit","Version":"0.23.0","Title":"Rank Estimation for Linear Models","Description":"R estimation and inference for linear models. Estimation\n is for general scores and a library of commonly used score\n functions is included.","Published":"2016-08-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rflann","Version":"1.3","Title":"Basic R Interface to the 'FLANN' C++ Library","Description":"Basic R interface for the 'FLANN' C++ library version 1.8.4 written by Marius Muja\n and David Lowe. K-nearest neighbours searching and radius searching.","Published":"2017-01-29","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"RFLPtools","Version":"1.6","Title":"Tools to analyse RFLP data","Description":"RFLPtools provides functions to analyse DNA fragment samples \n (i.e. derived from RFLP-analysis) and standalone BLAST report files \n (i.e. 
DNA sequence analysis).","Published":"2014-08-14","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"RFmarkerDetector","Version":"1.0.1","Title":"Multivariate Analysis of Metabolomics Data using Random Forests","Description":"A collection of tools for multivariate analysis of metabolomics\n data, which includes several preprocessing methods (normalization, scaling)\n and various exploration and data visualization techniques (Principal\n Components Analysis and Multi Dimensional Scaling). The core of the package\n is the Random Forest algorithm used for the construction, optimization and\n validation of classification models with the aim of identifying potentially\n relevant biomarkers.","Published":"2016-02-29","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rfml","Version":"0.1.0","Title":"MarkLogic NoSQL Database Server in-Database Analytics for R","Description":"Functionality required to efficiently use R with MarkLogic NoSQL Database Server, . Many basic and complex R operations are pushed down into the database, which removes the main memory boundary of R and allows you to make full use of MarkLogic Server. In order to use the package you need MarkLogic Server version 8 or higher.","Published":"2016-03-19","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"Rfmtool","Version":"1.2","Title":"Fuzzy Measure Tools for R","Description":"Various tools for handling fuzzy measures, calculating Shapley value and Interaction index, Choquet and Sugeno integrals, as well as fitting fuzzy measures to empirical data are provided. Construction of fuzzy measures from empirical data is done by solving a linear programming problem using the 'lpsolve' package, whose source in C, adapted to the R environment, \n is included. The description of the basic theory of fuzzy measures is in the manual in the Doc folder in this package. 
","Published":"2016-03-19","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"rfoaas","Version":"1.1.0","Title":"R Interface to 'FOAAS'","Description":"R access to the 'FOAAS' (F... Off As A Service) web service is provided.","Published":"2016-08-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RFOC","Version":"3.4-3","Title":"Graphics for Spherical Distributions and Earthquake Focal\nMechanisms","Description":"Graphics for statistics on a sphere, as applied to geological fault data, crystallography, earthquake focal mechanisms, radiation patterns, ternary plots and geographical/geological maps. Non-double couple plotting of focal spheres and source type maps are included for statistical analysis of moment tensors.","Published":"2017-03-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RForcecom","Version":"1.1","Title":"Data Integration Feature for Force.com and Salesforce.com","Description":"Insert, update,\n retrieve, delete and bulk operate datasets with a SaaS based CRM\n Salesforce.com and a PaaS based application platform Force.com from R.","Published":"2016-07-19","License":"Apache License 2.0","snapshot_date":"2017-06-23"} {"Package":"rfordummies","Version":"0.1.3","Title":"Code Examples to Accompany the Book \"R for Dummies\"","Description":"Contains all the code examples in the book \"R for Dummies\" (1st\n edition). You can view the table of contents as well as the sample code for each\n chapter.","Published":"2016-12-05","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"rforensicbatwing","Version":"1.3","Title":"BATWING for calculating forensic trace-suspect match\nprobabilities","Description":"A modified version (with great help from Ian J. Wilson) of Ian J. 
Wilson's program BATWING for calculating forensic trace-suspect match probabilities.","Published":"2014-06-27","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"rForest","Version":"0.1","Title":"Forest Inventory and Analysis","Description":"Set of tools designed for forest inventory analysis.","Published":"2016-12-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RFormatter","Version":"0.1.1","Title":"R Source Code Formatter","Description":"The R Formatter formats R source code. It is very much based on\n formatR, but tries to improve it by heuristics. For example, spaces can be\n forced around the division operator \"/\".","Published":"2016-05-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rfPermute","Version":"2.1.5","Title":"Estimate Permutation p-Values for Random Forest Importance\nMetrics","Description":"Estimate significance of importance metrics\n for a Random Forest model by permuting the response\n variable. Produces null distribution of importance\n metrics for each predictor variable and p-value of\n observed. Provides summary and visualization functions for 'randomForest' \n results.","Published":"2016-10-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RFreak","Version":"0.3-0","Title":"R/FrEAK interface","Description":"An R interface to a modified version of the Free Evolutionary Algorithm Kit FrEAK. FrEAK is a toolkit written in Java to design and analyze evolutionary algorithms. Both the R interface and an extended version of FrEAK are contained in the RFreak package. For more information on FrEAK see http://sourceforge.net/projects/freak427/.","Published":"2014-02-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rFSA","Version":"0.1.0","Title":"Feasible Solution Algorithm for Finding Best Subsets and\nInteractions","Description":"Uses the lm() and glm() functions to fit models\n generated from a feasible solution algorithm. 
The feasible solution algorithm comes up with model forms of a\n specific type that can have fixed variables, higher order interactions and their\n lower order terms.","Published":"2016-10-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rFTRLProximal","Version":"1.0.0","Title":"FTRL-Proximal Algorithm","Description":"An efficient C++ based implementation of \"Follow The (Proximally) Regularized Leader\" online learning algorithm.\n This algorithm was proposed in McMahan et al. (2013) .","Published":"2016-12-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rfUtilities","Version":"2.1-0","Title":"Random Forests Model Selection and Performance Evaluation","Description":"Utilities for Random Forest model selection, class balance\n correction, significance test, cross validation and partial dependency\n plots.","Published":"2017-06-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"RGA","Version":"0.4.2","Title":"A Google Analytics API Client","Description":"Provides functions for accessing and retrieving data from the\n Google Analytics APIs (https://developers.google.com/analytics/). Supports\n OAuth 2.0 authorization. Package provides access to the Management, Core\n Reporting, Multi-Channel Funnels Reporting, Real Time Reporting and\n Metadata APIs. Access to all the Google Analytics accounts which the user\n has access to. Auto-pagination to return more than 10,000 rows of the\n results by combining multiple data requests. Also package provides\n shiny app to explore the core reporting API dimensions and metrics.","Published":"2016-04-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Rga4gh","Version":"0.1.1","Title":"An Interface to the GA4GH API","Description":"An Interface to the GA4GH API that allows users to easily GET responses and POST requests to\n GA4GH Servers. 
See for more information about the GA4GH project.","Published":"2016-11-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rgabriel","Version":"0.7","Title":"Gabriel Multiple Comparison Test and Plot the Confidence\nInterval on Barplot","Description":"This package was created to analyze multi-level one-way\n experimental designs. It is designed to handle vectorized\n observation and factor data where there are unequal sample\n sizes and population variance homogeneity cannot be assumed.\n To conduct the Gabriel test, create two vectors: one for your \n observations and one for the factor level of each observation. \n The function, rgabriel, conducts the test and saves the output as\n a vector to input into the gabriel.plot function, which produces \n a confidence interval plot for multiple comparisons.","Published":"2013-12-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rgam","Version":"0.6.3","Title":"Robust Generalized Additive Model","Description":"Robust Generalized Additive Model","Published":"2014-01-31","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"rGammaGamma","Version":"1.0.12","Title":"Gamma convolutions for methylation array background correction","Description":"This package implements a Gamma convolution model for\n background correction.","Published":"2013-11-11","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"Rgb","Version":"1.5.1","Title":"The R Genome Browser","Description":"Classes and methods to efficiently handle (slice, annotate, draw ...) genomic features (such as genes or transcripts), and an interactive interface to browse them.","Published":"2017-04-21","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"rgbif","Version":"0.9.8","Title":"Interface to the Global 'Biodiversity' Information Facility\n'API'","Description":"A programmatic interface to the Web Service methods\n provided by the Global Biodiversity Information Facility ('GBIF';\n ). 
'GBIF' is a database\n of species occurrence records from sources all over the globe.\n 'rgbif' includes functions for searching for taxonomic names,\n retrieving information on data providers, getting species occurrence\n records, and getting counts of occurrence records.","Published":"2017-04-18","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RGBM","Version":"1.0-7","Title":"LS-TreeBoost and LAD-TreeBoost for Gene Regulatory Network\nReconstruction","Description":"Provides an implementation of Regularized LS-TreeBoost & LAD-TreeBoost algorithm for Regulatory Network inference from any type of expression data (Microarray/RNA-seq etc). See Mall et al (2017) .","Published":"2017-05-08","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"Rgbp","Version":"1.1.2","Title":"Hierarchical Modeling and Frequency Method Checking on\nOverdispersed Gaussian, Poisson, and Binomial Data","Description":"We utilize approximate Bayesian machinery to fit two-level conjugate hierarchical models on overdispersed Gaussian, Poisson, and Binomial data and evaluate whether the resulting approximate Bayesian interval estimates for random effects meet the nominal confidence levels via frequency coverage evaluation. The data that Rgbp assumes comprise the observed sufficient statistic for each random effect, such as an average or a proportion of each group, without population-level data. The approximate Bayesian tool equipped with the adjustment for density maximization produces approximate point and interval estimates for model parameters including second-level variance component, regression coefficients, and random effect. For the Binomial data, the package provides an option to produce posterior samples of all the model parameters via the acceptance-rejection method. 
The package provides a quick way to evaluate coverage rates of the resultant Bayesian interval estimates for random effects via a parametric bootstrapping, which we call frequency method checking.","Published":"2017-06-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RGCCA","Version":"2.1.2","Title":"Regularized and Sparse Generalized Canonical Correlation\nAnalysis for Multiblock Data","Description":"Multiblock data analysis concerns the analysis of several sets of variables (blocks) observed on the same group of individuals. The main aims of the RGCCA package are: (i) to study the relationships between blocks and (ii) to identify subsets of variables of each block which are active in their relationships with the other blocks. ","Published":"2017-05-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rgcvpack","Version":"0.1-4","Title":"R Interface for GCVPACK Fortran Package","Description":"Thin plate spline fitting and prediction","Published":"2013-11-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rgdal","Version":"1.2-7","Title":"Bindings for the Geospatial Data Abstraction Library","Description":"Provides bindings to Frank Warmerdam's Geospatial Data Abstraction Library (GDAL) (>= 1.6.3) and access to projection/transformation operations from the PROJ.4 library. The GDAL and PROJ.4 libraries are external to the package, and, when installing the package from source, must be correctly installed first. Both GDAL raster and OGR vector map data can be imported into R, and GDAL raster data and OGR vector data exported. Use is made of classes defined in the sp package. Windows and Mac Intel OS X binaries (including GDAL, PROJ.4 and Expat) are provided on CRAN. 
","Published":"2017-04-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RGENERATE","Version":"1.3.5","Title":"Tools to Generate Vector Time Series","Description":"A method 'generate()' is implemented in this package for the random\n generation of vector time series according to models obtained by 'RMAWGEN',\n 'vars' or other packages. This package was created to generalize the\n algorithms of the 'RMAWGEN' package for the analysis and generation of any\n environmental vector time series.","Published":"2017-02-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RGENERATEPREC","Version":"1.2","Title":"Tools to Generate Daily-Precipitation Time Series","Description":"The method 'generate()' is extended for spatial multi-site\n stochastic generation of daily precipitation. It generates precipitation\n occurrence in several sites using logit regression (Generalized Linear\n Models) and D.S. Wilks' approach (Journal of Hydrology, 1998).","Published":"2017-02-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RGenetics","Version":"0.1","Title":"R packages for genetics research","Description":"R packages for genetics research","Published":"2013-11-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rgenoud","Version":"5.7-12.4","Title":"R Version of GENetic Optimization Using Derivatives","Description":"A genetic algorithm plus derivative optimizer.","Published":"2015-07-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rgeoapi","Version":"1.1.0","Title":"Get Information from the GeoAPI","Description":"Provides access to information from \n about French \n \"Communes\", \"Departements\" and \"Regions\".","Published":"2016-10-30","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rgeolocate","Version":"1.0.0","Title":"IP Address Geolocation","Description":"Connectors to online and offline sources for taking IP addresses\n and geolocating them to country, city, timezone and other 
geographic ranges. For\n individual connectors, see the package index.","Published":"2017-02-11","License":"Apache License (== 2.0)","snapshot_date":"2017-06-23"} {"Package":"rgeos","Version":"0.3-23","Title":"Interface to Geometry Engine - Open Source (GEOS)","Description":"Interface to Geometry Engine - Open Source (GEOS) using the C API for topology operations on geometries. The GEOS library is external to the package, and, when installing the package from source, must be correctly installed first. Windows and Mac Intel OS X binaries are provided on CRAN.","Published":"2017-04-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rgexf","Version":"0.15.3","Title":"Build, Import and Export GEXF Graph Files","Description":"Create, read and write GEXF (Graph Exchange XML Format) graph files (used in Gephi and others). Using the XML package, it allows the user to easily build/read graph files including attributes, GEXF viz attributes (such as color, size, and position), network dynamics (for both edges and nodes) and edge weighting. Users can build/handle graphs element-by-element or massively through data-frames, visualize the graph on a web browser through \"sigmajs\" (a javascript library) and interact with the igraph package.","Published":"2015-03-24","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"rggobi","Version":"2.1.21","Title":"Interface Between R and 'GGobi'","Description":"A command-line interface to 'GGobi', an interactive and dynamic\n graphics package. 
'Rggobi' complements the graphical user interface of\n 'GGobi' providing a way to fluidly transition between analysis and\n exploration, as well as automating common tasks.","Published":"2016-08-31","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rgho","Version":"1.0.1","Title":"Access WHO Global Health Observatory Data from R","Description":"Access WHO Global Health Observatory\n ()\n data from R via the Athena web service\n (),\n an application program interface providing\n a simple query interface to the World\n Health Organization's data and statistics content.","Published":"2017-01-18","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"RGIFT","Version":"0.1-5","Title":"Create quizzes in GIFT Format","Description":"This package provides some functions to create quizzes\n in the GIFT format. This format is used by several Virtual Learning\n Environments such as Moodle.","Published":"2014-04-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rgl","Version":"0.98.1","Title":"3D Visualization Using OpenGL","Description":"Provides medium to high level functions for 3D interactive graphics, including\n functions modelled on base graphics (plot3d(), etc.) as well as functions for\n constructing representations of geometric objects (cube3d(), etc.). Output\n may be on screen using OpenGL, or to various standard 3D file formats including\n WebGL, PLY, OBJ, STL as well as 2D image formats, including PNG, Postscript, SVG, PGF.","Published":"2017-03-08","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"rglobi","Version":"0.2.9","Title":"R Interface to Global Biotic Interactions","Description":"A programmatic interface to the web service methods\n provided by Global Biotic Interactions (GloBI). GloBI provides \n access to spatial-temporal species interaction records from \n sources all over the world. 
rglobi provides methods to search \n species interactions by location, interaction type, and \n taxonomic name. In addition, it supports Cypher, a graph query\n language, to allow for executing custom queries on the GloBI \n aggregate species interaction data set.","Published":"2016-03-09","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"Rglpk","Version":"0.6-3","Title":"R/GNU Linear Programming Kit Interface","Description":"R interface to the GNU Linear Programming Kit.\n 'GLPK' is open source software for solving large-scale linear programming (LP),\n mixed integer linear programming ('MILP') and other related problems.","Published":"2017-05-18","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"rglwidget","Version":"0.2.1","Title":"'rgl' in 'htmlwidgets' Framework","Description":"The contents of this package have\n been merged into rgl, so it is no longer needed.","Published":"2016-08-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Rgnuplot","Version":"1.0.3","Title":"R Interface for Gnuplot","Description":"Interface for gnuplot.\n Based on gnuplot_i version 1.11, the GPL code from Nicolas Devillard.","Published":"2015-07-29","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"rGoodData","Version":"0.1.0","Title":"GoodData API Client Package","Description":"Export raw reports from the 'GoodData' business intelligence platform \n (see for more information).","Published":"2017-03-22","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"RGoogleAnalytics","Version":"0.1.1","Title":"R Wrapper for the Google Analytics API","Description":"Provides functions for accessing and retrieving data from the\n Google Analytics API.","Published":"2014-08-16","License":"Apache License 2.0","snapshot_date":"2017-06-23"} {"Package":"RGoogleAnalyticsPremium","Version":"0.1.1","Title":"Unsampled Data in R for Google Analytics Premium Accounts","Description":"It fires a query to the API to get the unsampled data in R 
for Google Analytics Premium Accounts. It retrieves data from the Google Drive document and stores it on the local drive. The path to the Excel file is returned by this package. The user can read data from the Excel file into R using the read.csv() function.","Published":"2015-11-02","License":"Apache License 2.0","snapshot_date":"2017-06-23"} {"Package":"RGoogleFit","Version":"0.3.1","Title":"R Interface to Google Fit API","Description":"Provides an interface to Google Fit REST API v1 (see ). ","Published":"2017-05-29","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RgoogleMaps","Version":"1.4.1","Title":"Overlays on Static Maps","Description":"Serves two purposes: (i) Provide a\n comfortable R interface to query the Google server for static\n maps, and (ii) Use the map as a background image to overlay\n plots within R. This requires proper coordinate scaling.","Published":"2016-09-18","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"rgp","Version":"0.4-1","Title":"R genetic programming framework","Description":"RGP is a simple modular Genetic Programming (GP) system built in\n pure R. In addition to general GP tasks, the system supports Symbolic\n Regression by GP through the familiar R model formula interface. GP\n individuals are represented as R expressions, an (optional) type system\n enables domain-specific function sets containing functions of diverse\n domain- and range types. 
A basic set of genetic operators for variation\n (mutation and crossover) and selection is provided.","Published":"2014-08-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rgpui","Version":"0.1-2","Title":"UI for the RGP genetic programming framework","Description":"RGP UI provides a modern web-based user interface to the modular\n Genetic Programming (GP) system RGP.","Published":"2014-12-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rgr","Version":"1.1.13","Title":"Applied Geochemistry EDA","Description":"Geological Survey of Canada (GSC) functions for exploratory data analysis with applied geochemical data, with special application to the estimation of background ranges and identification of outliers, 'anomalies', to support mineral exploration and environmental studies. Additional functions are provided to support analytical data QA/QC, ANOVA for investigations of field sampling and analytical variability, and utility tasks. NOTE: function caplot() for concentration-area plots employs package 'akima', however, 'akima' is only licensed for not-for-profit use. Therefore, not-for-profit users of 'rgr' will have to independently make package 'akima' available through library(....); and use of function caplot() by for-profit users will fail.","Published":"2016-01-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RGraphics","Version":"2.0-14","Title":"Data and Functions from the Book R Graphics, Second Edition","Description":"Data and Functions from the book R Graphics, Second Edition. There is a function to produce each figure in the book, plus several functions, classes, and methods defined in Chapter 8. ","Published":"2016-03-03","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"RGraphM","Version":"0.1.5","Title":"Graph Matching Library for R","Description":"This is a wrapper package for the graph matching library 'graphm'. The original 'graphm' C/C++ library can be found in . 
Latest version ( 0.52 ) of this library is slightly modified to fit 'Rcpp' usage and included in the source package. The development version of the package is also available at .","Published":"2017-06-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rgrass7","Version":"0.1-9","Title":"Interface Between GRASS 7 Geographical Information System and R","Description":"Interpreted interface between GRASS 7 geographical \n information system and R, based on starting R from within the GRASS GIS\n environment, or running free-standing R in a temporary GRASS location;\n the package provides facilities for using all GRASS commands from the \n R command line. This package may not be used for GRASS 6, for which\n spgrass6 should be used.","Published":"2016-10-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rGroovy","Version":"1.0","Title":"Groovy Language Integration","Description":"Integrates the Groovy scripting language with the R Project for Statistical Computing.","Published":"2015-10-31","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"RGtk2","Version":"2.20.33","Title":"R Bindings for Gtk 2.8.0 and Above","Description":"Facilities in the R language for programming\n graphical interfaces using Gtk, the Gimp Tool Kit.","Published":"2017-05-10","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"RGtk2Extras","Version":"0.6.1","Title":"Data frame editor and dialog making wrapper for RGtk2","Description":"Useful add-ons for RGtk2","Published":"2012-10-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rgw","Version":"0.1.0","Title":"Goodman-Weare Affine-Invariant Sampling","Description":"Implementation of the affine-invariant method of Goodman & Weare (2010) , a method of producing Monte-Carlo samples from a target distribution.","Published":"2016-10-10","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RH2","Version":"0.2.3","Title":"DBI/RJDBC interface to h2 
Database","Description":"DBI/RJDBC interface to h2 database. h2 version 1.3.175 is included.","Published":"2014-09-14","License":"Mozilla Public License 1.1","snapshot_date":"2017-06-23"} {"Package":"rhandsontable","Version":"0.3.4","Title":"Interface to the 'Handsontable.js' Library","Description":"An R interface to the 'Handsontable' JavaScript library, which is a\n minimalist Excel-like data grid editor. See for details.","Published":"2016-11-03","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rHealthDataGov","Version":"1.0.1","Title":"Retrieve data sets from the HealthData.gov data API","Description":"An R interface for the HealthData.gov data API. For each data resource, you can filter results (server-side) to select subsets of data.","Published":"2014-06-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RHMS","Version":"1.1","Title":"Hydrologic Modelling System for R Users","Description":"Hydrologic modelling system is an object oriented tool which enables R users to simulate and analyze hydrologic events. The package proposes functions and methods for construction, simulation, visualization, and calibration of hydrologic systems.","Published":"2017-04-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rhnerm","Version":"1.1","Title":"Random Heteroscedastic Nested Error Regression","Description":"Performs the random heteroscedastic nested error regression model described in Kubokawa, Sugasawa, Ghosh and Chaudhuri (2016) .","Published":"2016-12-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rhoR","Version":"1.1.0.0","Title":"Rho for Inter Rater Reliability","Description":"Rho is used to test the generalization of inter rater reliability\n (IRR) statistics. 
Calculating rho starts by generating a large number of\n simulated, fully-coded data sets: a sizable collection of hypothetical\n populations, all of which have a kappa value below a given threshold -- which\n indicates unacceptable agreement. Then kappa is calculated on a sample from\n each of those sets in the collection to see if it is equal to or higher than\n the kappa in the real sample. If less than five percent of the distribution\n of samples from the simulated data sets is greater than the actual observed kappa,\n the null hypothesis is rejected and one can conclude that if the two raters had\n coded the rest of the data, we would have acceptable agreement (kappa above the\n threshold).","Published":"2017-02-16","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rhosp","Version":"1.07","Title":"Side Effect Risks in Hospital : Simulation and Estimation","Description":"Evaluating the risk (that a patient experiences a side effect) during hospitalization is the main purpose of this package. Several methods (parametric, non-parametric and De Vielder estimation) to estimate the risk constant (R) are implemented in this package. There are also functions to simulate the different models of this issue in order to quantify the previous estimators. 
It is necessary to read at least the first six pages of the report to understand the topic.","Published":"2015-07-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Rhpc","Version":"0.15-244","Title":"Permits *apply() Style Dispatch for 'HPC'","Description":"Apply-style functions using 'MPI' provide a better 'HPC' environment in R.\n This package supports long vectors and can deal with moderately large data.","Published":"2015-09-01","License":"AGPL-3","snapshot_date":"2017-06-23"} {"Package":"RHPCBenchmark","Version":"0.1.0","Title":"Benchmarks for High-Performance Computing Environments","Description":"Microbenchmarks for determining the run time\n performance of aspects of the R programming environment and packages\n relevant to high-performance computation. The benchmarks are divided into\n three categories: dense matrix linear algebra kernels, sparse matrix linear\n algebra kernels, and machine learning functionality.","Published":"2017-05-23","License":"Apache License 2.0 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RhpcBLASctl","Version":"0.15-148","Title":"Control the Number of Threads on 'BLAS'","Description":"Controls the number of threads on 'BLAS' (aka 'GotoBLAS', 'ACML' and 'MKL').\n It is also possible to control the number of threads in 'OpenMP',\n and to get the number of logical and physical cores where feasible.","Published":"2015-05-28","License":"AGPL-3","snapshot_date":"2017-06-23"} {"Package":"rHpcc","Version":"1.0","Title":"Interface between HPCC and R","Description":"rHpcc is an R package providing an interface between R and\n HPCC. Familiarity with ECL (Enterprise Control Language) is a\n must to use this package. HPCC is a massively parallel-processing\n computing platform that solves Big Data problems. ECL is the\n Enterprise Control Language designed specifically for huge data\n projects using the HPCC platform. Its extreme scalability comes\n from a design that allows you to leverage every query you\n create for re-use in 
subsequent queries as needed. To do this,\n ECL takes a dictionary approach to building queries wherein\n each ECL definition defines an Attribute. Each previously\n defined Attribute can then be used in succeeding ECL Attribute\n definitions as the language extends itself as you use it.","Published":"2012-08-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RHRV","Version":"4.2.3","Title":"Heart Rate Variability Analysis of ECG Data","Description":"Allows users to import data files containing heartbeat positions in the most broadly used formats, to remove outliers or points with unacceptable physiological values present in the time series, to plot HRV data, and to perform time domain, frequency domain and nonlinear HRV analysis.","Published":"2017-02-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RHT","Version":"1.0","Title":"Regularized Hotelling's T-square Test for Pathway (Gene Set)\nAnalysis","Description":"This package offers functions to perform regularized\n Hotelling's T-square test for pathway or gene set analysis. The\n package is tailored for but not limited to proteomics data, in\n which sample sizes are often small, a large proportion of the\n data are missing and/or correlations may be present.","Published":"2012-10-29","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"ri","Version":"0.9","Title":"ri: R package for performing randomization-based inference for\nexperiments","Description":"This package provides a set of tools for conducting exact\n or approximate randomization-based inference for experiments of\n arbitrary design. The primary functionality of the package is\n in the generation, manipulation and use of permutation matrices\n implied by given experimental designs. 
Among other features,\n the package facilitates estimation of average treatment\n effects, constant effects variance estimation, randomization\n inference for significance testing against sharp null\n hypotheses and visualization of data and results.","Published":"2012-05-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RI2by2","Version":"1.3","Title":"Randomization Inference for Treatment Effects on a Binary\nOutcome","Description":"Computes an attributable-effects-based confidence interval, permutation\n test confidence interval, or asymptotic confidence interval for the\n average treatment effect on a binary outcome.","Published":"2016-10-21","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"RIA","Version":"1.1.0","Title":"Radiomics Image Analysis Toolbox for Grayscale Images","Description":"Radiomics image analysis toolbox for grayscale 2D and 3D images. RIA calculates first-order,\n gray level co-occurrence matrix, gray level run length matrix and geometry-based statistics.\n Almost all calculations are done using vectorized formulas to optimize run speeds. Calculation\n of several thousands of parameters only takes minutes on a single core of a conventional PC.","Published":"2017-06-08","License":"AGPL-3","snapshot_date":"2017-06-23"} {"Package":"riceware","Version":"0.4","Title":"A Diceware Passphrase Implementation","Description":"The Diceware method can be used to generate strong passphrases.\n In short, you roll a 6-faced die 5 times in a row, and the number obtained is\n matched against a dictionary of easily remembered words. 
By combining 7\n words thus generated, you obtain a password that is relatively easy to remember,\n but would take several million years (on average) for a powerful computer to guess.","Published":"2015-05-21","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rich","Version":"1.0.1","Title":"Computes and Compares Species Richnesses","Description":"Computes rarefaction curves, cumulated and mean species\n richness. Compares these estimates by means of randomization\n tests.","Published":"2016-08-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ridge","Version":"2.2","Title":"Ridge Regression with Automatic Selection of the Penalty\nParameter","Description":"Linear and logistic ridge regression functions. Additionally includes special functions for \n genome-wide single-nucleotide polymorphism (SNP) data.","Published":"2017-02-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RidgeFusion","Version":"1.0-3","Title":"R Package for Ridge Fusion in Statistical Learning","Description":"This package implements ridge fusion methodology for inverse covariance matrix estimation for use in quadratic discriminant analysis. The package also contains functions for model-based clustering using ridge fusion for inverse matrix estimation, as well as tuning parameter selection functions. We have also implemented QDA using joint inverse covariance estimation.","Published":"2014-09-19","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ridigbio","Version":"0.3.5","Title":"Interface to the iDigBio Data API","Description":"An interface to iDigBio's search API that allows downloading\n specimen records. Searches are returned as a data.frame. Other functions\n such as the metadata end points return lists of information. iDigBio is a US\n project focused on digitizing and serving museum specimen collections on the\n web. 
See for information on iDigBio.","Published":"2017-02-16","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"Ridit","Version":"1.1","Title":"Ridit Analysis (An extension of the Kruskal-Wallis Test.)","Description":"An extension of the Kruskal-Wallis Test that allows\n selection of an arbitrary reference group. Also provides the mean ridit\n for each group. The mean ridit of a group is an estimate of the\n probability that a random observation from that group will be\n greater than or equal to a random observation from the reference\n group.","Published":"2012-10-15","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"riem","Version":"0.1.1","Title":"Accesses Weather Data from the Iowa Environment Mesonet","Description":"Allows users to get weather data from Automated Surface Observing System (ASOS) stations (airports) around the\n world thanks to the Iowa Environment Mesonet website.","Published":"2016-09-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rif","Version":"0.2.0","Title":"Client for 'Neuroscience' Information Framework 'APIs'","Description":"Client for 'Neuroscience' Information Framework ('NIF') 'APIs'\n (; ).\n Package includes functions for each 'API' route, and gives back data\n in tidy data.frame format.","Published":"2017-05-16","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RIFS","Version":"0.1-5","Title":"Random Iterated Function System (RIFS)","Description":"The RIFS package provides functionality for generating and\n plotting prefractals in R^n with various protofractal sets and\n partition coefficients for iterative segments.","Published":"2012-06-05","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"RImageJROI","Version":"0.1.1","Title":"Read 'ImageJ' Region of Interest (ROI) Files","Description":"Provides functions to read 'ImageJ' (http://imagej.nih.gov/ij/)\n Region of Interest (ROI) files, to plot the ROIs and to convert them to\n 'spatstat' (http://spatstat.org/) spatial 
patterns.","Published":"2015-05-07","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"RImagePalette","Version":"0.1.1","Title":"Extract the Colors from Images","Description":"A pure R implementation of the median cut algorithm.\n Extracts the dominant colors from an image, and turns them into\n a scale for use in plots or for fun!","Published":"2016-01-05","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RImpala","Version":"0.1.6","Title":"Using Cloudera 'Impala' Through 'R'","Description":"Cloudera 'Impala' is a massively parallel processing (MPP) SQL query engine that runs natively in Apache Hadoop. 'RImpala' facilitates the connection and execution of distributed queries through 'R'. 'Impala' supports JDBC integration which 'RImpala' utilizes to establish the connection between 'R' and 'Impala'. Thanks to Mu Sigma for their continued support throughout the development of the package.","Published":"2015-05-07","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rinat","Version":"0.1.5","Title":"Access iNaturalist Data Through APIs","Description":"A programmatic interface to the API provided by the iNaturalist website to download species occurrence data submitted by citizen scientists.","Published":"2017-03-10","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rindex","Version":"0.12","Title":"Indexing for R","Description":"Index structures allow quickly accessing elements from\n large collections. While B-trees are optimized for disk databases\n and T-trees for RAM databases, this package uses hybrid static indexing, which\n is well suited to R.","Published":"2012-08-24","License":"file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ring","Version":"1.0.0","Title":"Circular / Ring Buffers","Description":"Circular / ring buffers in R and C. 
There are a couple\n of different buffers here with different implementations that\n represent different trade-offs.","Published":"2017-04-24","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RInno","Version":"0.0.3","Title":"An Installation Framework for Shiny Apps","Description":"Installs shiny apps using Inno Setup, an open-source tool that builds installers for Windows programs.","Published":"2017-03-31","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RInside","Version":"0.2.14","Title":"C++ Classes to Embed R in C++ Applications","Description":"C++ classes to embed R in C++ applications.\n The 'RInside' package makes it easier to have \"R inside\" your C++ application\n by providing a C++ wrapper class around the R interpreter.\n\n As R itself is embedded into your application, a shared library build of R\n is required. This works on Linux, OS X and even on Windows provided you use\n the same tools used to build R itself. \n\n Numerous examples are provided in the eight subdirectories of the examples/\n directory of the installed package: standard, mpi (for parallel computing),\n qt (showing how to embed 'RInside' inside a Qt GUI application), wt (showing\n how to build a \"web-application\" using the Wt toolkit), armadillo (for\n 'RInside' use with 'RcppArmadillo') and eigen (for 'RInside' use with 'RcppEigen').\n The examples use GNUmakefile(s) with GNU extensions, so GNU make is required\n (and will use the GNUmakefile automatically).\n\n Doxygen-generated documentation of the C++ classes is available at the\n 'RInside' website as well.","Published":"2017-05-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RInSp","Version":"1.2","Title":"R Individual Specialization (RInSp)","Description":"Functions to calculate several ecological indices of individual and population niche width\n (Araujo's E, clustering and pairwise similarity among individuals, IS, Petraitis' W, and 
Roughgarden's\n WIC/TNW) to assess individual specialization based on data of resource use. Resource use can be\n quantified by counts of categories, measures of mass/length or proportions. Monte Carlo resampling\n procedures are available for hypothesis testing against multinomial null models.","Published":"2015-02-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rintrojs","Version":"0.1.2","Title":"Wrapper for the 'Intro.js' Library","Description":"A wrapper for the 'Intro.js' library (For more info: ). \n This package makes it easy to include step-by-step introductions, and clickable hints in a 'Shiny' \n application. It supports both static introductions in the UI, and programmatic introductions from \n the server-side. ","Published":"2016-12-02","License":"AGPL-3","snapshot_date":"2017-06-23"} {"Package":"rio","Version":"0.5.5","Title":"A Swiss-Army Knife for Data I/O","Description":"Streamlined data import and export by making assumptions that\n the user is probably willing to make: 'import()' and 'export()' determine\n the data structure from the file extension, reasonable defaults are used for\n data import and export (e.g., 'stringsAsFactors=FALSE'), web-based import is\n natively supported (including from SSL/HTTPS), compressed files can be read\n directly without explicit decompression, and fast import packages are used where\n appropriate. 
An additional convenience function, 'convert()', provides a simple\n method for converting between file types.","Published":"2017-06-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rioja","Version":"0.9-15","Title":"Analysis of Quaternary Science Data","Description":"Functions for the analysis of Quaternary science data, including\n constrained clustering, WA, WAPLS, IKFA, MLRC and MAT transfer \n functions, and stratigraphic diagrams.","Published":"2017-06-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Rip46","Version":"1.0.2","Title":"Utils for IP4 and IP6 Addresses","Description":"Utility functions and S3 classes for IPv4 and IPv6 addresses, including \n conversion to and from binary representation.","Published":"2015-09-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ripa","Version":"2.0-2","Title":"R Image Processing and Analysis","Description":"Provides various functions for image processing and analysis. With this package it is possible to process and analyse RGB, LAN (multispectral) and AVIRIS (hyperspectral) images. This package also provides functions for reading JPEG files, extracted from the archived 'rimage' package.","Published":"2014-05-31","License":"GPL (>= 2) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rIsing","Version":"0.1.0","Title":"High-Dimensional Ising Model Selection","Description":"Fits an Ising model to a binary dataset using L1 regularized\n logistic regression and extended BIC. Also includes a fast lasso logistic\n regression function for high-dimensional problems. Uses the 'libLBFGS'\n optimization library by Naoaki Okazaki.","Published":"2016-11-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Risk","Version":"1.0","Title":"Computes 26 Financial Risk Measures for Any Continuous\nDistribution","Description":"Computes 26 financial risk measures for any continuous distribution. 
The 26 financial risk measures include value at risk, expected shortfall due to Artzner et al. (1999) , tail conditional median due to Kou et al. (2013) , expectiles due to Newey and Powell (1987) , beyond value at risk due to Longin (2001) , expected proportional shortfall due to Belzunce et al. (2012) , elementary risk measure due to Ahmadi-Javid (2012) , omega due to Shadwick and Keating (2002), sortino ratio due to Rollinger and Hoffman (2013), kappa due to Kaplan and Knowles (2004), Wang (1998)'s risk measures, Stone (1973)'s risk measures, Luce (1980)'s risk measures, Sarin (1987)'s risk measures, Bronshtein and Kurelenkova (2009)'s risk measures.","Published":"2017-06-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RiskPortfolios","Version":"2.1.1","Title":"Computation of Risk-Based Portfolios","Description":"Collection of functions designed to compute risk-based portfolios as described \n in Ardia et al. (2016) and Ardia et al. (2017) .","Published":"2017-02-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"riskR","Version":"1.1","Title":"Risk Management","Description":"Computes risk measures from data, as well as performs risk management procedures.","Published":"2015-12-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"riskRegression","Version":"1.3.7","Title":"Risk Regression Models and Prediction Scores for Survival\nAnalysis with Competing Risks","Description":"Implementation of the following methods for event history analysis.\n Risk regression models for survival endpoints also in the presence of\n competing risks are fitted using binomial regression based on a time sequence\n of binary event status variables. 
A formula interface for the Fine-Gray regression \n model and an interface for the combination of cause-specific Cox regression models.\n A toolbox for assessing and\n comparing performance of risk predictions (risk markers and risk prediction models).\n Prediction performance is measured by the Brier score and the area under the ROC curve\n for binary, possibly time-dependent, outcomes.\n Inverse probability of censoring weighting and pseudo values are used to deal with right censored data.\n Lists of risk markers and lists of risk models are assessed simultaneously.\n Cross-validation repeatedly splits the data, trains the risk prediction models on one part of each split\n and then summarizes and compares the performance across splits. ","Published":"2017-03-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"risksetROC","Version":"1.0.4","Title":"Riskset ROC curve estimation from censored survival data","Description":"Compute time-dependent incident/dynamic accuracy measures\n (ROC curve, AUC, integrated AUC) from censored survival data\n under the proportional or non-proportional hazard assumption of\n Heagerty & Zheng (Biometrics, Vol 61 No 1, 2005, PP 92-105).","Published":"2012-09-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"riskSimul","Version":"0.1","Title":"Risk Quantification for Stock Portfolios under the T-Copula\nModel","Description":"Implements efficient simulation procedures to estimate tail loss probabilities and conditional excess for a stock portfolio. The log-returns are assumed to follow a t-copula model with generalized hyperbolic or t marginals. ","Published":"2014-11-09","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"RISmed","Version":"2.1.7","Title":"Download Content from NCBI Databases","Description":"A set of tools to extract bibliographic content from the National\n Center for Biotechnology Information (NCBI) databases, including PubMed. 
The\n name RISmed is a portmanteau of RIS (for Research Information Systems, a common\n tag format for bibliographic data) and PubMed.","Published":"2017-06-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Ritc","Version":"1.0.2","Title":"Isothermal Titration Calorimetry (ITC) Data Analysis","Description":"Implements the simulation and regression of\n integrated Isothermal Titration Calorimetry (ITC) data using\n the most commonly used one-to-one binding reaction model.","Published":"2016-09-11","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rite","Version":"0.3.4","Title":"The Right Editor to Write R","Description":"A simple yet powerful script editor built natively in R with tcltk.","Published":"2014-08-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ritis","Version":"0.5.4","Title":"Integrated Taxonomic Information System Client","Description":"An interface to the Integrated Taxonomic Information System ('ITIS')\n (). Includes functions to work with the 'ITIS' REST\n 'API' methods (), as well as the\n 'Solr' web service ().","Published":"2016-10-13","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RItools","Version":"0.1-15","Title":"Randomization Inference Tools","Description":"Tools for randomization inference.","Published":"2016-05-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"riv","Version":"2.0-4","Title":"Robust instrumental variables estimator","Description":"Finds a robust instrumental variables estimator using a\n high breakdown point S-estimator of multivariate location\n and scatter matrix.","Published":"2013-10-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"riverdist","Version":"0.14.0","Title":"River Network Distance Computation and Applications","Description":"Reads river network shape files and computes network distances.\n Also included are a variety of computation and graphical tools designed \n for fisheries telemetry research, such 
as minimum home range, kernel density \n estimation, and clustering analysis using empirical k-functions with \n a bootstrap envelope. Tools are also provided for editing the river \n networks, meaning there is no reliance on external software.","Published":"2017-03-31","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rivernet","Version":"1.1","Title":"Read, Analyze and Plot River Networks","Description":"Functions for reading, analysing and plotting river networks.\n For this package, river networks consist of sections and nodes with associated attributes, \n e.g. to characterise their morphological, chemical and biological state.\n The package provides functions to read this data from text files, to analyse the network\n structure and network paths and regions consisting of sections and nodes that fulfill\n prescribed criteria, and to plot the river network and associated properties.","Published":"2017-05-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"riverplot","Version":"0.6","Title":"Sankey or Ribbon Plots","Description":"Sankey plots are a type of diagram that is convenient for\n illustrating how flows of information, resources etc. separate and join,\n much like observing how rivers split and merge. For example, they can be\n used to compare different clusterings.","Published":"2017-02-17","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"rivervis","Version":"0.46.0","Title":"River Visualisation Tool","Description":"A flexible and efficient tool to \n visualise both quantitative and qualitative data from river surveys. \n It can be used to produce diagrams with the topological structure of \n the river network.","Published":"2015-05-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Rivivc","Version":"0.9","Title":"In vitro in vivo correlation linear level A","Description":"This package is devoted to the IVIVC linear level A with\n numerical deconvolution method. 
The latter works for\n unequal and incompatible timepoints between impulse and\n response curves. A numerical convolution method is also\n available. Application domains include pharmaceutical industry\n QA/QC and R&D together with academic research.","Published":"2012-10-05","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"rivr","Version":"1.2","Title":"Steady and Unsteady Open-Channel Flow Computation","Description":"A tool for undergraduate and graduate courses in open-channel\n hydraulics. Provides functions for computing normal and critical depths,\n steady-state water surface profiles (e.g. backwater curves) and unsteady flow\n computations (e.g. flood wave routing).","Published":"2016-03-15","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"RJaCGH","Version":"2.0.4","Title":"Reversible Jump MCMC for the Analysis of CGH Arrays","Description":"Bayesian analysis of CGH microarrays fitting Hidden Markov\n Chain models. The selection of the number of states is made via\n their posterior probability computed by Reversible Jump Markov\n Chain Monte Carlo Methods. Also returns probabilistic common\n regions for gains/losses.","Published":"2015-07-10","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rjade","Version":"0.1","Title":"A Clean, Whitespace-Sensitive Template Language for Writing HTML","Description":"Jade is a high performance template engine heavily influenced by\n Haml and implemented with JavaScript for node and browsers.","Published":"2015-02-18","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RJafroc","Version":"0.1.1","Title":"Analysis of Data Acquired Using the Receiver Operating\nCharacteristic Paradigm and Its Extensions","Description":"A common task in medical imaging is assessing whether a new imaging system or device is an improvement over an existing one. Observer performance methodology, such as receiver operating characteristic analysis, is widely used for this purpose. 
Receiver operating characteristic studies are often required for regulatory approval of new devices. The purpose of this work is to provide software for the analysis of data acquired using the receiver operating characteristic paradigm and its location specific extensions. It is an enhanced implementation of existing Windows software (http://www.devchakraborty.com). In this paradigm the radiologist rates each image for confidence in presence of disease. The images are typically split equally between actually non-diseased and diseased. A common figure of merit is the area under the receiver operating characteristic curve, which has the physical interpretation as the probability that a diseased image is rated higher than a non-diseased one. In receiver operating characteristic studies a number of radiologists (readers) rate images in two or more treatments, and the object of the analysis is to determine the significance of the inter-treatment difference between reader-averaged figures of merit. In the free-response paradigm the reader marks the locations of suspicious regions and rates each region for confidence in presence of disease, and credit for detection is only given if a true lesion is correctly localized. In the region of interest paradigm each image is divided into a number of regions and the reader rates each region. Each paradigm requires definition of a valid figure of merit that rewards correct decisions and penalizes incorrect ones, and specialized significance testing procedures are applied. The package reads data in all currently used data formats including Excel. Significance testing uses two models in widespread use, a jackknife pseudo-value based model and an analysis of variance model with correlated errors. 
Included are tools for (1) calculating a variety of free-response figures of merit; (2) sample size estimation for planning a future study based on pilot data; (3) viewing empirical operating characteristics in receiver operating characteristic and free-response paradigms; (4) producing formatted report files; and (5) saving a data file in appropriate format for analysis with alternate software. In addition to open-source access to the functions, the package includes a graphical interface for users already familiar with the Windows software, who simply wish to run the program.","Published":"2015-08-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rjags","Version":"4-6","Title":"Bayesian Graphical Models using MCMC","Description":"Interface to the JAGS MCMC library.","Published":"2016-02-19","License":"GPL (== 2)","snapshot_date":"2017-06-23"} {"Package":"rJava","Version":"0.9-8","Title":"Low-Level R to Java Interface","Description":"Low-level interface to Java VM very much like .C/.Call and friends. Allows creation of objects, calling methods and accessing fields.","Published":"2016-01-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RJDBC","Version":"0.2-5","Title":"Provides access to databases through the JDBC interface","Description":"RJDBC is an implementation of R's DBI interface using JDBC as a back-end. 
This allows R to connect to any DBMS that has a JDBC driver.","Published":"2014-12-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rje","Version":"1.9","Title":"Miscellaneous useful functions","Description":"A series of useful functions, some available in different\n forms in other packages, but which have been extended, sped up, or\n otherwise modified in some way considered useful to the author.","Published":"2014-08-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rjmcmc","Version":"0.2.2","Title":"Reversible-Jump MCMC Using Post-Processing","Description":"Performs reversible-jump MCMC (Green, 1995)\n , specifically the restriction introduced by\n Barker & Link (2013) . By utilising\n a 'universal parameter' space, RJMCMC is treated as a Gibbs sampling\n problem. Previously-calculated posterior distributions are used to\n quickly estimate posterior model probabilities. Jacobian matrices are\n found using automatic differentiation.","Published":"2017-03-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rJPSGCS","Version":"0.2-7","Title":"R-interface to Gene Drop Simulation from JPSGCS","Description":"R-interface to gene drop programs from Alun Thomas' Java Programs for Statistical Genetics and Computational Statistics (JPSGCS)","Published":"2014-12-08","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"Rjpstatdb","Version":"0.1","Title":"R interface of the Gateway to Advanced and User-friendly\nStatistics Service","Description":"R interface to statistical database organized by Japanese government (http://statdb.nstac.go.jp/)","Published":"2013-08-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RJSDMX","Version":"1.7","Title":"R Interface to SDMX Web Services","Description":"Provides functions to retrieve data and metadata from providers \n\t\t\t that disseminate data by means of SDMX web services. 
\n\t\t\t SDMX (Statistical Data and Metadata eXchange) is a standard that \n\t\t\t has been developed with the aim of simplifying the exchange of \n\t\t\t statistical information. \n\t\t\t More about the SDMX standard and the SDMX Web Services \n\t\t\t can be found at: http://sdmx.org .","Published":"2017-03-09","License":"EUPL","snapshot_date":"2017-06-23"} {"Package":"rjson","Version":"0.2.15","Title":"JSON for R","Description":"Converts R objects into JSON objects and vice versa.","Published":"2014-11-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rjsonapi","Version":"0.1.0","Title":"Consumer for APIs that Follow the JSON API Specification","Description":"Consumer for APIs that Follow the JSON API Specification\n (). Package mostly consumes data - with experimental\n support for serving JSON API data.","Published":"2017-01-09","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RJSONIO","Version":"1.3-0","Title":"Serialize R objects to JSON, JavaScript Object Notation","Description":"This is a package that allows conversion to and from \n data in Javascript object notation (JSON) format.\n This allows R objects to be inserted into Javascript/ECMAScript/ActionScript code\n and allows R programmers to read and convert JSON content to R objects.\n This is an alternative to the rjson package. Originally, that was too slow for converting large R objects to JSON\n and was not extensible. rjson's performance is now similar to this package, and perhaps slightly faster in some cases.\n This package uses methods and is readily extensible by defining methods for different classes, \n vectorized operations, and C code and callbacks to R functions for deserializing JSON objects to R. \n The two packages intentionally share the same basic interface. This package (RJSONIO) has many additional\n options to allow customizing the generation and processing of JSON content.\n This package uses libjson rather than implementing yet another JSON parser. 
The aim is to support\n other general projects by building on their work, providing feedback and benefiting from their ongoing development.","Published":"2014-07-28","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RJSplot","Version":"2.1","Title":"Interactive Graphs with R","Description":"Creates interactive graphs with 'R'. It joins the data analysis power of R and the visualization libraries of JavaScript in one package.","Published":"2017-05-19","License":"CC BY-NC-SA 4.0","snapshot_date":"2017-06-23"} {"Package":"rjstat","Version":"0.3.0","Title":"Read and Write 'JSON-stat' Data Sets","Description":"Read and write the 'JSON-stat' format (http://json-stat.org) to and\n from (lists of) R data frames. Not all features are supported, especially\n the extensive metadata features of 'JSON-stat'.","Published":"2016-05-13","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rJython","Version":"0.0-4","Title":"R interface to Python via Jython","Description":"R interface to Python via Jython allowing R to call python\n code.","Published":"2012-07-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rkafka","Version":"1.0","Title":"Using Apache 'Kafka' Messaging Queue Through 'R'","Description":"Apache 'Kafka' is an open-source message broker project developed by the Apache Software Foundation which can be thought of as a distributed, partitioned, replicated commit log service. At a high level, producers send messages over the network to the 'Kafka' cluster which in turn serves them up to consumers. See for more information. Functions included in this package enable: 1. Creating a 'Kafka' producer 2. Writing messages to a topic 3. Closing the 'Kafka' producer 4. Creating a 'Kafka' consumer 5. Reading messages from a topic 6. Closing the 'Kafka' consumer. 
The jars required for this package are included in a separate package 'rkafkajars'. Thanks to Mu Sigma for their continued support throughout the development of the package.","Published":"2015-04-13","License":"Apache License 2.0 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rkafkajars","Version":"1.1","Title":"External Jars Required for Package 'rkafka'","Description":"The 'rkafkajars' package collects all the external jars required for the 'rkafka' package.","Published":"2017-06-20","License":"Apache License 2.0 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RKEA","Version":"0.0-6","Title":"R/KEA Interface","Description":"An R interface to KEA (Version 5.0).\n KEA (for Keyphrase Extraction Algorithm) allows for extracting\n keyphrases from text documents. It can be either used for free\n indexing or for indexing with a controlled vocabulary. For more\n information see .","Published":"2015-04-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RKEAjars","Version":"5.0-1","Title":"R/KEA Interface Jars","Description":"External jars required for package RKEA.","Published":"2015-04-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RKEEL","Version":"1.1.21","Title":"Using Keel in R Code","Description":"KEEL is a popular Java software for a large number of different knowledge data discovery tasks.\n This package takes advantage of both KEEL and R, allowing the use of KEEL algorithms in simple R code.\n The implemented R code layer between R and KEEL makes it easy both to use KEEL algorithms in R and to implement new algorithms for 'RKEEL' in a very simple way.\n It includes more than 100 algorithms for classification, regression, association rules and preprocessing, which allows a more complete experimentation process.\n For more information about KEEL, see .","Published":"2017-02-02","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"RKEELdata","Version":"1.0.3","Title":"Datasets from KEEL for its Use in 
RKEEL","Description":"KEEL is a popular Java software for a large number of different knowledge data discovery tasks. Furthermore, RKEEL is a package with an R code layer between R and KEEL, for using KEEL in R code. This package includes the datasets from KEEL in .dat format for use in the RKEEL package. For more information about KEEL, see .","Published":"2017-01-26","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"RKEELjars","Version":"1.0.15","Title":"Java Executable .jar Files for 'RKEEL'","Description":"KEEL is a popular Java software for a large number of different knowledge data discovery tasks. Furthermore, 'RKEEL' is a package with an R code layer between R and KEEL, for using KEEL in R code. This package downloads and installs the .jar files necessary for executing 'RKEEL' algorithms. For more information about KEEL, see .","Published":"2017-01-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rKIN","Version":"0.1","Title":"(Kernel) Isotope Niche Estimation","Description":"Applies methods used to estimate animal home range, but\n instead of geospatial coordinates, we use isotopic coordinates. The estimation\n methods include: 1) 2-dimensional bivariate normal kernel utilization density\n estimator, 2) bivariate normal ellipse estimator, and 3) minimum convex polygon\n estimator, all applied to stable isotope data. Additionally, provides functions to\n determine niche area and polygon overlap between groups and levels (confidence\n contours), as well as plotting capabilities.","Published":"2017-01-24","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"RKlout","Version":"1.0","Title":"Fetch Klout Scores for Twitter Users","Description":"An R interface to the Klout API v2. It fetches the Klout Score for a Twitter username/handle in real time. Klout is a website and mobile app that uses social media analytics to rank its users according to online social influence via the \"Klout Score\", which is a numerical value between 1 and 100. 
In determining the user score, Klout measures the size of a user's social media network and correlates the content created to measure how other users interact with that content.","Published":"2015-07-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rknn","Version":"1.2-1","Title":"Random KNN Classification and Regression","Description":"Random knn classification and regression are implemented. Random knn based feature selection methods are also included. The approaches are mainly developed for high-dimensional data with small sample size.","Published":"2015-06-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Rknots","Version":"1.3.2","Title":"Topological Analysis of Knotted Proteins, Biopolymers and 3D\nStructures","Description":"Contains functions for the topological analysis of polymers, with a focus on protein structures.","Published":"2016-10-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rkt","Version":"1.5","Title":"Mann-Kendall Test, Seasonal and Regional Kendall Tests","Description":"Contains function rkt which computes the Mann-Kendall test (MK) and the Seasonal and the Regional Kendall Tests for trend (SKT and RKT) and Theil-Sen's slope estimator. 
","Published":"2017-03-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rkvo","Version":"0.1","Title":"Read Key/Value Pair Observations","Description":"This package provides functionality to read files\n containing observations which consist of arbitrary key/value\n pairs.","Published":"2014-07-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"Rlab","Version":"2.15.1","Title":"Functions and Datasets Required for ST370 class","Description":"Functions and Datasets Required for ST370 class","Published":"2012-08-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Rlabkey","Version":"2.1.135","Title":"Data Exchange Between R and LabKey Server","Description":"The LabKey client library for R makes it easy for R users to\n load live data from a LabKey Server, ,\n into the R environment for analysis, provided users have permissions\n to read the data. It also enables R users to insert, update, and\n delete records stored on a LabKey Server, provided they have appropriate\n permissions to do so.","Published":"2017-06-19","License":"Apache License 2.0","snapshot_date":"2017-06-23"} {"Package":"rLakeAnalyzer","Version":"1.8.3","Title":"Lake Physics Tools","Description":"Standardized methods for calculating common important derived\n physical features of lakes including water density based on\n temperature, thermal layers, thermocline depth, lake number, Wedderburn\n number, Schmidt stability and others.","Published":"2016-05-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rlang","Version":"0.1.1","Title":"Functions for Base Types and Core R and 'Tidyverse' Features","Description":"A toolbox for working with base types, core R features\n like the condition system, and core 'Tidyverse' features like tidy\n evaluation.","Published":"2017-05-18","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rlas","Version":"1.1.3","Title":"Read and Write 'las' and 'laz' Binary File Formats Used for\nRemote 
Sensing Data","Description":"Read and write 'las' and 'laz' binary file formats. The LAS file format is a public file format for the interchange of 3-dimensional point cloud data between data users. The LAS specifications are approved by the American Society for Photogrammetry and Remote Sensing. The LAZ file format is an open and lossless compression scheme for binary LAS format versions 1.0 to 1.3.","Published":"2017-06-09","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rld","Version":"1.0","Title":"Analyze and Design Repeated Low-Dose Challenge Experiments","Description":"Analyzes data from repeated low-dose challenge experiments and provides vaccine efficacy estimates. In addition, this package can provide guidance for designing repeated low-dose challenge studies.","Published":"2017-01-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Rlda","Version":"0.2.0","Title":"Bayesian LDA for Mixed-Membership Clustering Analysis","Description":"Estimates the Bayesian LDA model for mixed-membership clustering based on different types of data (i.e., Multinomial, Bernoulli, and Binomial entries).","Published":"2017-06-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rLDCP","Version":"1.0.1","Title":"Text Generation from Data","Description":"Linguistic Descriptions of Complex Phenomena (LDCP) is an architecture and methodology that allows us to model complex phenomena, interpreting input data, and generating automatic text reports customized to the user's needs (see and ). The proposed package contains a set of methods that facilitates the development of LDCP systems. 
Its main goal is to increase the visibility and practical use of this research line.","Published":"2017-02-09","License":"GPL (>= 2) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RLeafAngle","Version":"1.0","Title":"Estimates, Plots and Evaluates Leaf Angle Distribution\nFunctions, Calculates Extinction Coefficients","Description":"Leaf angle distribution is described by a number of functions\n (e.g. ellipsoidal, Beta and rotated ellipsoidal). The parameters of leaf angle\n distribution functions are estimated through different empirical relationships.\n This package includes estimation of the parameters of different leaf angle\n distribution functions, plots and evaluates leaf angle distribution functions, and\n calculates extinction coefficients given a leaf angle distribution.\n Reference: Wang (2007). ","Published":"2017-06-20","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"rleafmap","Version":"0.2","Title":"Interactive Maps with R and Leaflet","Description":"Display spatial data with interactive maps powered by the \n open-source JavaScript library 'Leaflet' (see ). Maps can be rendered in a web browser or\n displayed in the HTML viewer pane of 'RStudio'. This package is designed to be easy to use and\n can create complex maps with vector and raster data, web served map tiles and interface elements.","Published":"2015-09-28","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rlecuyer","Version":"0.3-4","Title":"R Interface to RNG with Multiple Streams","Description":"Provides an interface to the C implementation of the\n random number generator with multiple independent streams\n developed by L'Ecuyer et al (2002). 
The main purpose of this\n package is to enable the use of this random number generator in\n parallel R applications.","Published":"2015-09-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Rlibeemd","Version":"1.3.7","Title":"Ensemble Empirical Mode Decomposition (EEMD) and Its Complete\nVariant (CEEMDAN)","Description":"An R interface to the C library libeemd for performing the ensemble\n empirical mode decomposition (EEMD), its complete variant (CEEMDAN) or the\n regular empirical mode decomposition (EMD).","Published":"2016-03-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rLiDAR","Version":"0.1","Title":"LiDAR Data Processing and Visualization","Description":"Set of tools for reading, processing and visualizing a small set \n\tof LiDAR (Light Detection and Ranging) data for forest inventory applications. ","Published":"2015-04-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rLindo","Version":"8.0.1","Title":"R Interface to LINDO API","Description":"An interface to the LINDO API. Supports Linear, Integer, Quadratic, Conic, General Nonlinear, Global, and Stochastic Programming models. To download the trial version of the LINDO API, please visit www.lindo.com/rlindo.","Published":"2013-08-12","License":"LGPL (>= 2.1)","snapshot_date":"2017-06-23"} {"Package":"Rlinkedin","Version":"0.2","Title":"Access to the LinkedIn API via R","Description":"A series of functions that allow users\n to access the 'LinkedIn' API to get information about connections,\n search for people and jobs, share updates with their network,\n and create group discussions. 
For more information about using\n the API please visit .","Published":"2016-10-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rlist","Version":"0.4.6.1","Title":"A Toolbox for Non-Tabular Data Manipulation","Description":"Provides a set of functions for data manipulation with\n list objects, including mapping, filtering, grouping, sorting,\n updating, searching, and other useful functions. Most functions\n are designed to be pipeline friendly so that data processing with\n lists can be chained.","Published":"2016-04-04","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rlm","Version":"1.2","Title":"Robust Fitting of Linear Model","Description":"Robust fitting of a linear model which can take a response in matrix form.","Published":"2016-01-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rlo","Version":"0.3.2","Title":"Utilities for Writing to 'LibreOffice Writer' Documents","Description":"Utilities for writing to 'LibreOffice Writer'\n (see for more information)\n documents using the 'Python-UNO' bridge.","Published":"2016-11-25","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"Rlof","Version":"1.1.1","Title":"R Parallel Implementation of Local Outlier Factor (LOF)","Description":"R parallel implementation of Local Outlier Factor (LOF) which uses multiple CPUs to significantly speed up the LOF computation for large datasets. (Note: the overall performance depends on the computer, especially the number of cores.) It also supports calculating multiple k values in parallel, as well as various distance measures in addition to the default Euclidean distance. ","Published":"2015-09-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RLogicalOps","Version":"0.1","Title":"Process Logical Operations","Description":"Processing logical operations such as AND/OR/NOT operations\n dynamically. 
It also handles nesting in the operations.","Published":"2016-02-07","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"RLRsim","Version":"3.1-3","Title":"Exact (Restricted) Likelihood Ratio Tests for Mixed and Additive\nModels","Description":"Rapid, simulation-based exact (restricted) likelihood ratio tests\n for testing the presence of variance components/nonparametric terms for\n models fit with nlme::lme(), lme4::lmer(), lmerTest::lmer(), gamm4::gamm4(),\n mgcv::gamm() and SemiPar::spm().","Published":"2016-11-04","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"RLT","Version":"3.1.0","Title":"Reinforcement Learning Trees","Description":"Random forest with a variety of additional features for regression, classification and survival analysis. The features include: parallel computing with OpenMP, embedded model for selecting the splitting variable (based on Zhu, Zeng & Kosorok, 2015), subject weight, variable weight, tracking subjects used in each tree, etc.","Published":"2017-04-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rLTP","Version":"0.1.4","Title":"R Interface to the 'LTP'-Cloud Service","Description":"R interface to the 'LTP'-Cloud service for Natural Language Processing\n in Chinese (http://www.ltp-cloud.com/).","Published":"2017-05-29","License":"LGPL","snapshot_date":"2017-06-23"} {"Package":"RLumModel","Version":"0.2.1","Title":"Solving Ordinary Differential Equations to Understand\nLuminescence","Description":"A collection of functions to simulate luminescence signals in the\n mineral quartz based on published models.","Published":"2017-04-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"RLumShiny","Version":"0.1.1","Title":"'Shiny' Applications for the R Package 'Luminescence'","Description":"A collection of 'shiny' applications for the R package\n 'Luminescence'. These mainly, but not exclusively, include applications for\n plotting chronometric data from e.g. luminescence or radiocarbon dating. 
It\n further provides access to Bootstrap's tooltip and popover functionality and\n contains the 'jscolor.js' library with a custom 'shiny' output binding.","Published":"2016-07-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rly","Version":"1.4.2","Title":"'Lex' and 'Yacc'","Description":"R implementation of the common parsing tools 'lex' and 'yacc'.","Published":"2017-01-08","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RM.weights","Version":"1.0","Title":"Weighted Rasch Modeling and Extensions using Conditional Maximum\nLikelihood","Description":"Rasch model and extensions for survey data, using Conditional Maximum Likelihood (CML). ","Published":"2016-07-15","License":"GPL (>= 3.0.0)","snapshot_date":"2017-06-23"} {"Package":"RM2","Version":"0.0","Title":"Revenue Management and Pricing Package","Description":"RM2 is a simple package that implements functions used in\n revenue management and pricing environments.","Published":"2008-08-13","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"rmaf","Version":"3.0.1","Title":"Refined Moving Average Filter","Description":"Uses a refined moving average filter, based on the optimal and data-driven moving average lag q or a smoothing spline, to estimate trend and seasonal components, as well as irregularity (residuals), for univariate time series or data. 
","Published":"2015-04-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RMallow","Version":"1.0","Title":"Fit Multi-Modal Mallows' Models to ranking data","Description":"An EM algorithm to fit Mallows' Models to full or partial\n rankings, with or without ties.","Published":"2012-02-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Rmalschains","Version":"0.2-3","Title":"Continuous Optimization using Memetic Algorithms with Local\nSearch Chains (MA-LS-Chains) in R","Description":"An implementation of an algorithm family for continuous\n optimization called memetic algorithms with local search chains\n (MA-LS-Chains). Memetic algorithms are hybridizations of genetic\n algorithms with local search methods. They are especially suited\n for continuous optimization.","Published":"2016-11-29","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rmapshaper","Version":"0.2.0","Title":"Edit 'GeoJSON' and Spatial Objects","Description":"Edit and simplify 'geojson' and 'Spatial' objects.\n This is a wrapper around the 'mapshaper' 'javascript' library\n to perform topologically-aware\n polygon simplification, as well as other operations such as clipping,\n erasing, dissolving, and converting 'multi-part' to 'single-part' geometries.\n It relies on the 'geojsonio' package for working with 'geojson' objects,\n and the 'sp' and 'rgdal' packages for working with 'Spatial' objects.","Published":"2017-02-16","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RmarineHeatWaves","Version":"0.15.7","Title":"Detect Marine Heat Waves and Marine Cold Spells","Description":"Given a time series of daily temperatures, the package provides tools\n to detect extreme thermal events, including marine heat waves, and to\n calculate the exceedances above or below specified threshold values.\n It outputs the properties of all detected events and exceedances.","Published":"2017-06-18","License":"MIT + file 
LICENSE","snapshot_date":"2017-06-23"} {"Package":"RMark","Version":"2.2.2","Title":"R Code for Mark Analysis","Description":"An interface to the software package MARK that constructs input\n files for MARK and extracts the output. MARK was developed by Gary White\n and is freely available at \n but is not open source.","Published":"2016-12-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rmarkdown","Version":"1.6","Title":"Dynamic Documents for R","Description":"Convert R Markdown documents into a variety of formats.","Published":"2017-06-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rmatio","Version":"0.11.0","Title":"Read and Write Matlab Files","Description":"rmatio is a package for reading and writing Matlab MAT\n files from R. The rmatio package supports reading MAT version 4,\n MAT version 5 and MAT compressed version 5. The rmatio package can\n write version 5 MAT files and version 5 files with variable\n compression.","Published":"2014-12-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"RMAWGEN","Version":"1.3.3","Title":"Multi-Site Auto-Regressive Weather GENerator","Description":"S3 and S4 functions are implemented for spatial multi-site\n stochastic generation of daily time series of temperature and\n precipitation. These tools make use of Vector AutoRegressive models (VARs).\n The weather generator model is then saved as an object and is calibrated by\n daily instrumental \"Gaussianized\" time series through the 'vars' package\n tools. 
Once this model is obtained, it can be used for weather\n generation and be adapted to work with several climatic monthly time\n series.","Published":"2017-02-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RMC","Version":"0.2","Title":"Functions for fitting Markov models","Description":"Functions for fitting, diagnosing and predicting from a\n class of Markov models.","Published":"2010-11-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rmcfs","Version":"1.2.5","Title":"The MCFS-ID Algorithm for Feature Selection and Interdependency\nDiscovery","Description":"MCFS-ID (Monte Carlo Feature Selection and Interdependency Discovery) is a \n Monte Carlo method-based tool for feature selection. It also allows for the discovery of interdependencies between the relevant features. MCFS-ID is particularly suitable for the analysis of high-dimensional, 'small n large p' transactional and biological data.","Published":"2017-04-30","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rmcorr","Version":"0.1.0","Title":"Repeated Measures Correlation","Description":"Compute the repeated measures correlation, a statistical technique\n for determining the overall within-individual relationship among paired measures\n assessed on two or more occasions, first introduced by Bland and Altman (1995).\n Includes functions for diagnostics, p-value, effect size with confidence\n interval including optional bootstrapping, as well as graphing. 
Also includes\n several example datasets.","Published":"2016-08-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rmdformats","Version":"0.3.3","Title":"HTML Output Formats and Templates for 'rmarkdown' Documents","Description":"HTML formats and templates for 'rmarkdown' documents, with some extra\n features such as automatic table of contents, lightboxed figures, dynamic\n crosstab helper.","Published":"2017-05-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rmdHelpers","Version":"1.2","Title":"Helper Functions for Rmd Documents","Description":"A series of functions to aid in repeated tasks for Rmd documents. All details are to my personal preference, though I am happy to add flexibility if there are use cases I am missing. I will continue updating with new functions as I add utility functions for myself.","Published":"2016-07-11","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"rmdshower","Version":"2.0.0","Title":"'R' 'Markdown' Format for 'shower' Presentations","Description":"'R' 'Markdown' format for 'shower' presentations, see\n .","Published":"2016-06-06","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RMediation","Version":"1.1.4","Title":"Mediation Analysis Confidence Intervals","Description":"We provide functions to compute confidence\n intervals (CIs) for a well-defined nonlinear function of the model\n parameters (e.g., product of k coefficients) in single--level and\n multilevel structural equation models.","Published":"2016-03-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rmeta","Version":"2.16","Title":"Meta-analysis","Description":"Functions for simple fixed and random effects\n meta-analysis for two-sample comparisons and cumulative\n meta-analyses. 
Draws standard summary plots, funnel plots, and\n computes summaries and tests for association and heterogeneity.","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rmetasim","Version":"3.0.5","Title":"An Individual-Based Population Genetic Simulation Environment","Description":"An interface between R and the metasim simulation engine.\n The simulation environment is documented in: \"Strand, A. (2002) Metasim 1.0: an individual-based environment for simulating population genetics of \n complex population dynamics. Mol. Ecol. Notes.\" \n Please see the vignettes CreatingLandscapes and Simulating to get some ideas on how to use the package. \n See the rmetasim vignette to get an overview and to see important changes to the \n code in the most recent version.","Published":"2016-04-09","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"rmgarch","Version":"1.3-0","Title":"Multivariate GARCH Models","Description":"Feasible multivariate GARCH models including DCC, GO-GARCH and Copula-GARCH.","Published":"2015-12-28","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rminer","Version":"1.4.2","Title":"Data Mining Classification and Regression Methods","Description":"Facilitates the use of data mining algorithms in classification and regression (including time series forecasting) tasks by presenting a short and coherent set of functions. 
Versions: 1.4.2 new NMAE metric, \"xgboost\" and \"cv.glmnet\" models (16 classification and 18 regression models); 1.4.1 new tutorial and more robust version; 1.4 - new classification and regression models/algorithms, with a total of 14 classification and 15 regression methods, including: Decision Trees, Neural Networks, Support Vector Machines, Random Forests, Bagging and Boosting; 1.3 and 1.3.1 - new classification and regression metrics (improved mmetric function); 1.2 - new input importance methods (improved Importance function); 1.0 - first version.","Published":"2016-09-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Rmisc","Version":"1.5","Title":"Rmisc: Ryan Miscellaneous","Description":"The Rmisc library contains many functions useful for data analysis\n and utility operations.","Published":"2013-10-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"Rmixmod","Version":"2.1.1","Title":"Supervised, Unsupervised, Semi-Supervised Classification with\nMIXture MODelling (Interface of MIXMOD Software)","Description":"Interface of MIXMOD software for supervised, unsupervised and semi-Supervised classification with MIXture MODelling.","Published":"2016-08-25","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"RmixmodCombi","Version":"1.0","Title":"Combining Mixture Components for Clustering","Description":"The Rmixmod package provides model-based clustering by fitting a mixture model (e.g. Gaussian components for quantitative continuous data) to the data and identifying each cluster with one of its components. The number of components can be determined from the data, typically using the BIC criterion. In practice, however, individual clusters can be poorly fitted by Gaussian distributions, and in that case model-based clustering tends to represent one non-Gaussian cluster by a mixture of two or more Gaussian components. 
If the number of mixture components is interpreted as the number of clusters, this can lead to overestimation of the number of clusters. This is because BIC selects the number of mixture components needed to provide a good approximation to the density. This package, RmixmodCombi, according to 'Combining Mixture Components for Clustering' by J.P. Baudry, A.E. Raftery, G. Celeux, K. Lo, R. Gottardo, combines the components of the EM/BIC solution (provided by Rmixmod) hierarchically according to an entropy criterion. This yields a clustering for each number of clusters less than or equal to K. These clusterings can be compared on substantive grounds, and we also provide a way of selecting the number of clusters via a piecewise linear regression fit to the (possibly rescaled) entropy plot. ","Published":"2014-07-08","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"RMixpanel","Version":"0.6-2","Title":"API for Mixpanel","Description":"Provides an interface to many endpoints of Mixpanel's Data Export, Engage and JQL API. The R functions allow for event and profile data export as well as for segmentation, retention, funnel and addiction analysis. Results are always parsed into convenient R objects. Furthermore, it is possible to load and update profiles. ","Published":"2017-02-22","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RMKdiscrete","Version":"0.1","Title":"Sundry Discrete Probability Distributions","Description":"Sundry discrete probability distributions and helper functions.","Published":"2014-10-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rmngb","Version":"0.6-1","Title":"Miscellaneous Collection of Functions for Medical Data Analysis","Description":"A collection of miscellaneous functions for medical data analysis. 
Visualization\n of multidimensional data, diagnostic test calibration, pairwise tests for qualitative variables and outcome simulation.","Published":"2014-12-28","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"RMOA","Version":"1.0","Title":"Connect R with MOA for Massive Online Analysis","Description":"Connect R with MOA (Massive Online Analysis -\n http://moa.cms.waikato.ac.nz) to build classification models and\n regression models on streaming data or out-of-RAM data","Published":"2014-09-19","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"RMOAjars","Version":"1.0","Title":"External jars required for package RMOA","Description":"External jars required for package RMOA. RMOA is a framework to\n build data stream models on top of MOA (Massive Online Analysis -\n http://moa.cms.waikato.ac.nz)","Published":"2014-09-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"RMongo","Version":"0.0.25","Title":"MongoDB Client for R","Description":"MongoDB Database interface for R. The interface is provided via Java calls to the mongo-java-driver.","Published":"2013-09-08","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"Rmosek","Version":"1.2.5.1","Title":"The R-to-MOSEK Optimization Interface","Description":"An interface to the MOSEK optimization library designed to\n solve large-scale mathematical optimization problems. Supports\n linear, quadratic and second order cone optimization\n with/without integer variables, in addition to the more general\n separable convex problems. 
Trial and free academic licenses\n available at http://www.mosek.com.","Published":"2014-12-13","License":"LGPL (>= 2.1)","snapshot_date":"2017-06-23"} {"Package":"rMouse","Version":"0.1","Title":"Automate Mouse Clicks and Send Keyboard Input","Description":"Provides wrapper functions to the Java Robot class to automate user input, like mouse movements, clicks and keyboard input.","Published":"2017-06-22","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"rmp","Version":"2.0","Title":"Rounded Mixture Package. Performs Probability Mass Function\nEstimation with Nonparametric Mixtures of Rounded Kernels","Description":"Performs probability mass function estimation with nonparametric mixtures of rounded kernels.","Published":"2016-02-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Rmpfr","Version":"0.6-1","Title":"R MPFR - Multiple Precision Floating-Point Reliable","Description":"Arithmetic (via S4 classes and methods) for\n arbitrary precision floating point numbers, including transcendental\n (\"special\") functions. To this end, Rmpfr interfaces to\n the LGPL'ed MPFR (Multiple Precision Floating-Point Reliable) Library\n which itself is based on the GMP (GNU Multiple Precision) Library.","Published":"2016-11-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Rmpi","Version":"0.6-6","Title":"Interface (Wrapper) to MPI (Message-Passing Interface)","Description":"An interface (wrapper) to MPI APIs. It also \n\t provides interactive R manager and worker environment.","Published":"2016-06-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rmpw","Version":"0.0.1","Title":"Causal Mediation Analysis Using Weighting Approach","Description":"We implement causal mediation analysis using the methods proposed by Hong (2010) and Hong, Deutsch & Hill (2015) . It allows the estimation and hypothesis testing of causal mediation effects through ratio of mediator probability weights (RMPW). 
This strategy conveniently relaxes the assumption of no treatment-by-mediator interaction while greatly simplifying the outcome model specification without invoking strong distributional assumptions. ","Published":"2017-02-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rMR","Version":"1.0.4","Title":"Importing Data from Loligo Systems Software, Calculating\nMetabolic Rates and Critical Tensions","Description":"Analysis of oxygen consumption data generated by Loligo (R) Systems respirometry equipment. The package includes a function for loading data output by Loligo's 'AutoResp' software (get.witrox.data()), functions for calculating metabolic rates over user-specified time intervals, extracting critical points from data using broken stick regressions based on Yeager and Ultsch (), and easy functions for converting between different units of barometric pressure.","Published":"2017-01-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"RMRAINGEN","Version":"1.0","Title":"RMRAINGEN (R Multi-site RAINfall GENerator): a package to\ngenerate daily time series of rainfall from monthly mean values","Description":"This package contains functions and S3 methods for spatial\n multi-site stochastic generation of daily precipitation. It generates\n precipitation occurrence in several sites using Wilks' Approach (1998).\n Bugs/comments/questions/collaboration of any kind are warmly welcomed.","Published":"2014-07-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rms","Version":"5.1-1","Title":"Regression Modeling Strategies","Description":"Regression modeling, testing, estimation, validation,\n\tgraphics, prediction, and typesetting by storing enhanced model design\n\tattributes in the fit. 'rms' is a collection of functions that\n\tassist with and streamline modeling. 
It also contains functions for\n\tbinary and ordinal logistic regression models, ordinal models for\n continuous Y with a variety of distribution families, and the Buckley-James\n\tmultiple regression model for right-censored responses, and implements\n\tpenalized maximum likelihood estimation for logistic and ordinary\n\tlinear models. 'rms' works with almost any regression model, but it\n\twas especially written to work with binary or ordinal regression\n\tmodels, Cox regression, accelerated failure time models,\n\tordinary linear models,\tthe Buckley-James model, generalized least\n\tsquares for serially or spatially correlated observations, generalized\n\tlinear models, and quantile regression.","Published":"2017-05-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rms.gof","Version":"1.0","Title":"Root-mean-square goodness-of-fit test for simple null hypothesis","Description":"This package can be used to test any simple null\n hypothesis using the root-mean-square goodness of fit test.\n Monte Carlo estimation is used to calculate the associated\n P-value.","Published":"2013-01-16","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rmsfact","Version":"0.0.3","Title":"Amazing Random Facts About the World's Greatest Hacker","Description":"Display a randomly selected quote about Richard M. Stallman\n based on the collection in the 'GNU Octave' function 'fact()' which was\n aggregated by Jordi Gutiérrez Hermoso based on the (now defunct) site\n stallmanfacts.com (which is accessible only via ).","Published":"2016-08-04","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"RMThreshold","Version":"1.1","Title":"Signal-Noise Separation in Random Matrices by using Eigenvalue\nSpectrum Analysis","Description":"An algorithm which can be used to determine an objective threshold for signal-noise separation in large random matrices (correlation matrices, mutual information matrices, network adjacency matrices) is provided. 
The package makes use of the results of Random Matrix Theory (RMT). The algorithm increments a suppositional threshold monotonically, thereby recording the eigenvalue spacing distribution of the matrix. According to RMT, that distribution undergoes a characteristic change when the threshold properly separates signal from noise. By using the algorithm, the modular structure of a matrix - or of the corresponding network - can be unraveled. ","Published":"2016-06-23","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"RMTstat","Version":"0.3","Title":"Distributions, Statistics and Tests derived from Random Matrix\nTheory","Description":"\n Functions for working with the Tracy-Widom laws and other distributions \n related to the eigenvalues of large Wishart matrices.\n The tables for computing the Tracy-Widom densities and distribution\n functions were computed by Momar Dieng's MATLAB package \"RMLab\"\n (formerly available on his homepage at \n http://math.arizona.edu/~momar/research.htm ).\n This package is part of a collaboration between Iain Johnstone, \n Zongming Ma, Patrick Perry, and Morteza Shahram. 
It will soon be\n replaced by a package with more accuracy and built-in support for\n relevant statistical tests.","Published":"2014-11-01","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rmumps","Version":"5.1.1-1","Title":"Wrapper for MUMPS Library","Description":"Some basic features of MUMPS (Multifrontal Massively Parallel\n\tsparse direct Solver) are wrapped in a class whose methods can be used\n\tfor sequentially solving a sparse linear system (symmetric or not)\n\twith one or many right hand sides (dense or sparse).\n\tThere is a possibility to do separately symbolic analysis,\n\tLU (or LDL^t) factorization and system solving.","Published":"2017-04-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rmutil","Version":"1.1.0","Title":"Utilities for Nonlinear Regression and Repeated Measurements\nModels","Description":"A toolkit of functions for nonlinear regression and repeated\n measurements not to be used by itself but called by other Lindsey packages such\n as 'gnlm', 'stable', 'growth', 'repeated', and 'event' \n (available at ).","Published":"2017-02-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RMySQL","Version":"0.10.11","Title":"Database Interface and 'MySQL' Driver for R","Description":"A 'DBI' interface to 'MySQL' / 'MariaDB'. The CRAN version of this package\n contains an old branch based on legacy code from S-PLUS, which is being phased out. A\n modern rewrite based on 'Rcpp' can be obtained from the 'Github' repository.","Published":"2017-03-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RNAseqNet","Version":"0.1.1","Title":"Log-Linear Poisson Graphical Model with Hot-Deck Multiple\nImputation","Description":"Infer log-linear Poisson Graphical Model with an auxiliary data\n set. A hot-deck multiple imputation method is used to improve the reliability\n of the inference with an auxiliary dataset. 
A standard log-linear Poisson \n graphical model can also be used for the inference, and the Stability \n Approach for Regularization Selection (StARS) is implemented to drive the \n selection of the regularization parameter.","Published":"2017-05-16","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"rnaseqWrapper","Version":"1.0-1","Title":"Wrapper for several R packages and scripts to automate RNA-seq\nanalysis","Description":"This package is designed to streamline several of the common steps for RNA-seq\n analysis, including differential expression and variant discovery.\n For the development build, or to contribute changes to this package,\n please see our repository at https://bitbucket.org/petersmp/rnaseqwrapper/","Published":"2014-07-23","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"RNAstructureModuleMiner","Version":"0.1.0","Title":"RNA Secondary Structure Comparison and Module Mining","Description":"Functions in this program are designed for RNA secondary structure plotting, comparison and module mining. Given an RNA secondary structure, you can obtain stem regions, hairpin loops, internal loops, bulge loops and multibranch loops of this RNA structure using this program. They are the basic modules of RNA secondary structure. For each module you get, you can use this program to label the RNA structure with a specific color. 
You can also use this program to compare two RNA secondary structures to get a score that represents similarity.","Published":"2017-05-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rnaturalearth","Version":"0.1.0","Title":"World Map Data from Natural Earth","Description":"Facilitates mapping by making Natural Earth map data more easily available to R users.","Published":"2017-03-21","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rnaturalearthdata","Version":"0.1.0","Title":"World Vector Map Data from Natural Earth Used in 'rnaturalearth'","Description":"Vector map data from Natural Earth. Access functions are provided in the accompanying package 'rnaturalearth'.","Published":"2017-02-21","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"RnavGraph","Version":"0.1.8","Title":"Using Graphs as a Navigational Infrastructure","Description":"GUI to explore high dimensional data (including image data) using graphs as navigational infrastructure.","Published":"2014-12-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RnavGraphImageData","Version":"0.0.3","Title":"Some image data used in the RnavGraph package demos","Description":"Image data used as examples in the RnavGraph R package.\n See the demos in the RnavGraph package.","Published":"2013-05-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RNaviCell","Version":"0.2","Title":"Visualization of High-Throughput Data on Large-Scale Biological\nNetworks","Description":"Provides a set of functions to access a data visualization web service. For more information and a tutorial on how to use it, see https://navicell.curie.fr/pages/nav_web_service.html and https://github.com/sysbio-curie/RNaviCell. 
","Published":"2015-10-29","License":"LGPL-2.1 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RNCBIEUtilsLibs","Version":"0.9","Title":"EUtils libraries for use in the R environment","Description":"Provides the libraries of the EUtils operations for the\n RNCBI package.","Published":"2010-06-25","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"RNCEP","Version":"1.0.8","Title":"Obtain, Organize, and Visualize NCEP Weather Data","Description":"Contains functions to retrieve, organize, and visualize weather data from the NCEP/NCAR Reanalysis (http://www.esrl.noaa.gov/psd/data/gridded/data.ncep.reanalysis.html) and NCEP/DOE Reanalysis II (http://www.esrl.noaa.gov/psd/data/gridded/data.ncep.reanalysis2.html) datasets. Data are queried via the Internet and may be obtained for a specified spatial and temporal extent or interpolated to a point in space and time. We also provide functions to visualize these weather data on a map. There are also functions to simulate flight trajectories according to specified behavior using either NCEP wind data or data specified by the user.","Published":"2017-01-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rncl","Version":"0.8.2","Title":"An Interface to the Nexus Class Library","Description":"An interface to the Nexus Class Library which allows parsing\n of NEXUS, Newick and other phylogenetic tree file formats. It provides\n elements of the file that can be used to build phylogenetic objects\n such as ape's 'phylo' or phylobase's 'phylo4(d)'. 
This functionality\n is demonstrated with 'read_newick_phylo()' and 'read_nexus_phylo()'.","Published":"2016-12-16","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RND","Version":"1.2","Title":"Risk Neutral Density Extraction Package","Description":"Extract the implied risk neutral density from options using various methods.","Published":"2017-01-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RndTexExams","Version":"1.4","Title":"Build and Grade Multiple Choice Exams with Randomized Content","Description":"Using as input a 'LaTeX' file with a multiple choice exam, this package will produce several versions with randomized contents of the same exam. Functions for grading and testing for cheating are also available.","Published":"2016-06-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RNeo4j","Version":"1.6.4","Title":"Neo4j Driver for R","Description":"Neo4j, a graph database, allows users to store their data as a property graph. A graph consists of nodes that are connected by relationships; both nodes and relationships can have properties, or key-value pairs. RNeo4j is Neo4j's R driver. It allows users to read and write data from and to Neo4j directly from their R environment by exposing an interface for interacting with nodes, relationships, paths, and more. Most notably, it allows users to retrieve Cypher query results as R data frames, where Cypher is Neo4j's graph query language. Visit the Neo4j website to learn more about Neo4j.","Published":"2016-04-06","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rneos","Version":"0.3-2","Title":"XML-RPC Interface to NEOS","Description":"Within this package the XML-RPC API to NEOS is implemented. 
This enables the user to pass optimization problems to NEOS and retrieve results within R.","Published":"2017-01-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rnetcarto","Version":"0.2.4","Title":"Fast Network Modularity and Roles Computation by Simulated\nAnnealing (Rgraph C Library Wrapper for R)","Description":"It provides functions to compute the modularity and modularity-related roles in networks. It is a wrapper around the rgraph library (Guimera & Amaral, 2005, doi:10.1038/nature03288). ","Published":"2015-11-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RNetCDF","Version":"1.8-2","Title":"Interface to NetCDF Datasets","Description":"An interface to the NetCDF file format designed by Unidata\n for efficient storage of array-oriented scientific data and descriptions.\n The R interface is closely based on the C API of the NetCDF library,\n and it includes calendar conversions from the Unidata UDUNITS library.\n The current implementation supports all operations on NetCDF datasets\n in classic and 64-bit offset file formats, and NetCDF4-classic format\n is supported for reading and modification of existing files.","Published":"2016-02-21","License":"GPL (>= 2) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RNetLogo","Version":"1.0-4","Title":"Provides an Interface to the Agent-Based Modelling Platform\n'NetLogo'","Description":"Interface to use and access Wilensky's 'NetLogo' (Wilensky 1999) from R using either headless (no GUI) or interactive GUI mode. Provides functions to load models, execute commands, and get values from reporters. 
Mostly analogous to the 'NetLogo' 'Mathematica' Link.","Published":"2017-06-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RNewsflow","Version":"1.0.1","Title":"Tools for Analyzing Content Homogeneity and News Diffusion using\nComputational Text Analysis","Description":"A collection of tools for measuring the similarity of news content and tracing the flow of (news) messages over\n time and across media. ","Published":"2016-03-29","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RNeXML","Version":"2.0.7","Title":"Semantically Rich I/O for the 'NeXML' Format","Description":"Provides access to phyloinformatic data in 'NeXML' format. The\n package should add new functionality to R such as the possibility to\n manipulate 'NeXML' objects in more varied and refined ways, as well as compatibility\n with 'ape' objects.","Published":"2016-06-28","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rngSetSeed","Version":"0.3-2","Title":"Seeding the Default RNG with a Numeric Vector","Description":"A function setVectorSeed() is provided. Its argument\n is a numeric vector of an arbitrary nonzero length, whose\n components have integer values from [0, 2^32-1]. The input\n vector is transformed using the AES (Advanced Encryption Standard)\n algorithm into an initial state of the Mersenne-Twister random\n number generator. The function provides a better alternative\n to the R base function set.seed(), if the input vector is\n a single integer. Initializing a stream of random numbers\n with a vector is a convenient way to obtain several streams,\n each of which is identified by several integer indices.","Published":"2014-12-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rngtools","Version":"1.2.4","Title":"Utility functions for working with Random Number Generators","Description":"This package contains a set of functions for working with\n Random Number Generators (RNGs). 
In particular, it defines a generic\n S4 framework for getting/setting the current RNG, or RNG data\n that are embedded into objects for reproducibility.\n Notably, convenient default methods greatly facilitate the way current\n RNG settings can be changed.","Published":"2014-03-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rngWELL","Version":"0.10-5","Title":"Toolbox for WELL Random Number Generators","Description":"It is a package dedicated to WELL pseudo-random number generators, which were introduced in Panneton et al. (2006), ``Improved Long-Period Generators Based on Linear Recurrences Modulo 2'', ACM Transactions on Mathematical Software. This package is not intended to be used directly; you are strongly __encouraged__ to use the 'randtoolbox' package, which depends on this package. ","Published":"2017-05-21","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rngwell19937","Version":"0.6-0","Title":"Random number generator WELL19937a with 53 or 32 bit output","Description":"Long period linear random number generator WELL19937a by\n F. Panneton, P. L'Ecuyer and M. Matsumoto. The initialization\n algorithm allows seeding the generator with a\n numeric vector of arbitrary length and uses MRG32k5a by\n P. L'Ecuyer to achieve good quality of the initialization. The\n output function may be set to provide numbers from the interval\n (0,1) with 53 (the default) or 32 random bits. WELL19937a is of\n similar type as Mersenne Twister and has the same period.\n WELL19937a is slightly slower than Mersenne Twister, but has\n better equidistribution and \"bit-mixing\" properties and faster\n recovery from states with prevailing zeros than Mersenne\n Twister. 
All WELL generators with orders 512, 1024, 19937 and\n 44497 can be found in randtoolbox package.","Published":"2014-11-30","License":"file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RNHANES","Version":"1.1.0","Title":"Facilitates Analysis of CDC NHANES Data","Description":"Tools for downloading and analyzing CDC NHANES data, with a focus\n on analytical laboratory data.","Published":"2016-11-29","License":"Apache License 2.0 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RNifti","Version":"0.7.0","Title":"Fast R and C++ Access to NIfTI Images","Description":"Provides very fast access to images stored in the NIfTI-1 file\n format , with seamless\n synchronisation between compiled C and interpreted R code. Not to be\n confused with 'RNiftyReg', which provides tools for image registration.","Published":"2017-06-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RNiftyReg","Version":"2.5.0","Title":"Image Registration Using the 'NiftyReg' Library","Description":"Provides an 'R' interface to the 'NiftyReg' image registration tools\n . Linear and nonlinear registration\n are supported, in two and three dimensions.","Published":"2017-02-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rNMF","Version":"0.5.0","Title":"Robust Nonnegative Matrix Factorization","Description":"An implementation of robust nonnegative matrix factorization (rNMF). The rNMF algorithm decomposes a nonnegative high dimension data matrix into the product of two low rank nonnegative matrices, while detecting and trimming outliers. The main function is rnmf(). 
The package also includes a visualization tool, see(), that arranges and prints vectorized images.","Published":"2015-02-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rnn","Version":"0.8.0","Title":"Recurrent Neural Network","Description":"Implementation of a Recurrent Neural Network in R.","Published":"2016-09-11","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rnoaa","Version":"0.7.0","Title":"'NOAA' Weather Data from R","Description":"Client for many 'NOAA' data sources including the 'NCDC' climate\n 'API' at , with functions for\n each of the 'API' 'endpoints': data, data categories, data sets, data types,\n locations, location categories, and stations. In addition, we have an interface\n for 'NOAA' sea ice data, the 'NOAA' severe weather inventory, 'NOAA' Historical\n Observing 'Metadata' Repository ('HOMR') data, 'NOAA' storm data via 'IBTrACS',\n tornado data via the 'NOAA' storm prediction center, and more.","Published":"2017-05-06","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rNOMADS","Version":"2.3.6","Title":"An Interface to the NOAA Operational Model Archive and\nDistribution System","Description":"An interface to the National Oceanic and Atmospheric Administration's Operational Model Archive and Distribution System (NOMADS, see for more information) that allows R users to quickly and efficiently download global and regional weather model data for processing. rNOMADS currently supports a variety of models ranging from global weather data to an altitude of 40 km, to high resolution regional weather models, to wave and sea ice models. It can also retrieve archived NOMADS models. 
rNOMADS can retrieve binary data in GRIB format as well as import ASCII data directly into R by interfacing with the GrADS-DODS system.","Published":"2017-05-02","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"rnpn","Version":"0.1.0","Title":"Interface to the National 'Phenology' Network 'API'","Description":"Programmatic interface to the\n Web Service methods provided by the National 'Phenology' Network\n (), which includes data on various life history\n events that occur at specific times.","Published":"2016-04-20","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RNRCS","Version":"0.1.1","Title":"Download NRCS Data","Description":"Downloads Natural Resources Conservation Service (NRCS) data for sites in the Soil Climate Analysis Network (SCAN), and Snow Telemetry (SNOTEL and SNOLITE) networks. Metadata can be returned for all sites in the NRCS' Air and Water Data Base (AWDB).","Published":"2017-04-07","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rnrfa","Version":"1.3.0","Title":"UK National River Flow Archive Data from R","Description":"Utility functions to retrieve data from the UK National River Flow Archive (http://nrfa.ceh.ac.uk/). The package contains R wrappers to the UK NRFA data temporary-API. There are functions to retrieve stations falling in a bounding box, to generate a map, and to extract time series and general information.","Published":"2016-12-24","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"roadoi","Version":"0.2","Title":"Find Free Versions of Scholarly Publications via the oaDOI\nService","Description":"This web client interfaces with oaDOI, a service finding \n free full texts of academic papers by linking DOIs with open access journals and\n repositories. It provides unified access to various data sources for open access\n full-text links including Crossref, Bielefeld Academic Search Engine (BASE) and \n the Directory of Open Access Journals (DOAJ). 
API usage is free and no \n registration is required.","Published":"2017-05-17","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"roahd","Version":"1.3","Title":"Robust Analysis of High Dimensional Data","Description":"A collection of methods for the robust analysis of univariate and\n multivariate functional data, possibly in high-dimensional cases, and hence\n with attention to computational efficiency and simplicity of use.","Published":"2017-04-28","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ROAuth","Version":"0.9.6","Title":"R Interface For OAuth","Description":"Provides an interface to the OAuth 1.0 specification\n allowing users to authenticate via OAuth to the\n server of their choice.","Published":"2015-02-13","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"RobAStBase","Version":"1.0.1","Title":"Robust Asymptotic Statistics","Description":"Base S4-classes and functions for robust asymptotic statistics.","Published":"2017-04-23","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"RobAStRDA","Version":"1.0.2","Title":"Interpolation Grids for Packages of the 'RobASt' - Family of\nPackages","Description":"Includes 'sysdata.rda' file for packages of the 'RobASt' - family of packages; is\n currently used by package 'RobExtremes' only.","Published":"2016-04-25","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"robCompositions","Version":"2.0.3","Title":"Robust Estimation for Compositional Data","Description":"Methods for analysis of compositional data including robust\n methods, imputation, methods to replace rounded zeros, (robust) outlier\n detection for compositional data, (robust) principal component analysis for\n compositional data, (robust) factor analysis for compositional data, (robust)\n discriminant analysis for compositional data (Fisher rule), robust regression\n with compositional predictors and (robust) Anderson-Darling normality tests for\n compositional data as well as 
popular log-ratio transformations (addLR, cenLR,\n isomLR, and their inverse transformations). In addition, visualisation and\n diagnostic tools are implemented as well as high- and low-level plot functions\n for the ternary diagram.","Published":"2017-01-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"robcor","Version":"0.1-6","Title":"Robust Correlations","Description":"Robust pairwise correlations based on estimates of scale,\n particularly on the \"FastQn\" one-step M-estimate.","Published":"2014-01-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"robeth","Version":"2.7","Title":"R functions for robust statistics","Description":"Location problems, M-estimates of coefficients and scale\n in linear regression, weights for bounded influence regression,\n covariance matrix of the coefficient estimates, asymptotic\n relative efficiency of regression M-estimates, robust testing\n in linear models, high breakdown point regression, M-estimates\n of covariance matrices, M-estimates for discrete generalized\n linear models.","Published":"2013-07-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"robets","Version":"1.1","Title":"Forecasting Time Series with Robust Exponential Smoothing","Description":"We provide an outlier-robust alternative to the function ets() in the 'forecast' package of Hyndman and Khandakar (2008). For each method in a class of exponential smoothing variants we developed a robust alternative. The class includes methods with a damped trend and/or seasonal components. The robust method is developed by robustifying every aspect of the original exponential smoothing variant. We provide robust forecasting equations, robust initial values, robust smoothing parameter estimation and a robust information criterion. 
The method is described in more detail in Crevits and Croux (2016).","Published":"2016-11-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"robfilter","Version":"4.1","Title":"Robust Time Series Filters","Description":"A set of functions to filter time series based on concepts\n from robust statistics.","Published":"2014-12-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RobLox","Version":"1.0","Title":"Optimally Robust Influence Curves and Estimators for Location\nand Scale","Description":"Functions for the determination of optimally robust influence curves and\n estimators in case of normal location and/or scale.","Published":"2016-09-05","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"RobLoxBioC","Version":"0.9","Title":"Infinitesimally robust estimators for preprocessing omics data","Description":"Functions for the determination of optimally robust influence curves and\n estimators for preprocessing omics data, in particular gene expression data.","Published":"2013-09-13","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"robmed","Version":"0.1.1","Title":"(Robust) Mediation Analysis","Description":"Perform mediation analysis via a bootstrap test.","Published":"2016-09-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"robotstxt","Version":"0.3.2","Title":"A 'robots.txt' Parser and 'Webbot'/'Spider'/'Crawler'\nPermissions Checker","Description":"Provides functions to download and parse 'robots.txt' files.\n Ultimately the package makes it easy to check if bots\n (spiders, scrapers, ...) 
are allowed to access specific\n resources on a domain.","Published":"2016-12-05","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RobPer","Version":"1.2.2","Title":"Robust Periodogram and Periodicity Detection Methods","Description":"Calculates periodograms based on (robustly) fitting periodic functions to light curves (irregularly observed time series, possibly with measurement accuracies, occurring in astroparticle physics). Three main functions are included: RobPer() calculates the periodogram. Outlying periodogram bars (indicating a period) can be detected with betaCvMfit(). Artificial light curves can be generated using the function tsgen(). For more details see the corresponding article: Thieler, Fried and Rathjens (2016), Journal of Statistical Software 69(9), 1-36, .","Published":"2016-03-27","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"robreg3S","Version":"0.3","Title":"Three-Step Regression and Inference for Cellwise and Casewise\nContamination","Description":"Three-step regression and inference for cellwise and casewise contamination.","Published":"2015-12-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RobRex","Version":"0.9","Title":"Optimally robust influence curves for regression and scale","Description":"Functions for the determination of optimally robust influence curves in case of\n linear regression with unknown scale and standard normally distributed errors where the\n regressor is random.","Published":"2013-09-14","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"RobRSVD","Version":"1.0","Title":"Robust Regularized Singular Value Decomposition","Description":"This package provides functions to calculate the SVD, regularized SVD, robust SVD and robust regularized SVD. The robust SVD methods use alternating iteratively reweighted least squares methods. The regularized SVD uses generalized cross validation to choose the optimal smoothing parameters. 
","Published":"2013-12-16","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"RObsDat","Version":"16.03","Title":"Data Management for Hydrology and Beyond Using the Observations\nData Model","Description":"Data management in hydrology and other fields is facilitated with functions to enter and modify data in a database according to the Observations Data Model (ODM) standard by CUAHSI (Consortium of Universities for the Advancement of Hydrologic Science). While this data model has been developed in hydrology, it is also useful for other fields. RObsDat helps in the setup of the database within one of the free database systems MariaDB, PostgreSQL or SQLite. It imports the controlled water vocabulary from the CUAHSI web service and provides a smart interface between the analyst and the database: Already existing data entries are detected and duplicates avoided. The data import function converts different data table designs to make import simple. Cleaning and modifications of data are handled with a simple version control system. Variable and location names are treated in a user friendly way, accepting and processing multiple versions. When querying data from the database, it is stored in spacetime objects within R for subsequent processing.","Published":"2016-03-31","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"robumeta","Version":"2.0","Title":"Robust Variance Meta-Regression","Description":"Functions for conducting robust variance estimation (RVE) meta-regression using both large and small sample RVE estimators under various weighting schemes. These methods are distribution free and provide valid point estimates, standard errors and hypothesis tests even when the degree and structure of dependence between effect sizes is unknown. Also included are functions for conducting sensitivity analyses under correlated effects weighting and producing RVE-based forest plots. 
","Published":"2017-05-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"robust","Version":"0.4-18","Title":"Port of the S+ \"Robust Library\"","Description":"Methods for robust statistics, a state of the art in the early\n 2000s, notably for robust regression and robust multivariate analysis.","Published":"2017-04-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RobustAFT","Version":"1.4-1","Title":"Truncated Maximum Likelihood Fit and Robust Accelerated Failure\nTime Regression for Gaussian and Log-Weibull Case","Description":"R functions for the computation of the truncated maximum\n\t likelihood and the robust accelerated failure time regression \n\t for gaussian and log-Weibull case.","Published":"2015-12-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"robustarima","Version":"0.2.5","Title":"Robust ARIMA Modeling","Description":"Functions for fitting a linear regression model with ARIMA\n errors using a filtered tau-estimate.","Published":"2017-02-23","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"robustbase","Version":"0.92-7","Title":"Basic Robust Statistics","Description":"\"Essential\" Robust Statistics.\n Tools allowing to analyze data with robust methods. This includes\n regression methodology including model selections and multivariate\n statistics where we strive to cover the book \"Robust Statistics,\n Theory and Methods\" by 'Maronna, Martin and Yohai'; Wiley 2006.","Published":"2016-12-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"robustBLME","Version":"0.1.2","Title":"Robust Bayesian Linear Mixed-Effects Models using ABC","Description":"Bayesian robust fitting of linear mixed effects models through weighted likelihood equations and approximate Bayesian computation as proposed by Ruli et al. 
(2017) .","Published":"2017-06-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"robustDA","Version":"1.1","Title":"Robust Mixture Discriminant Analysis","Description":"Robust mixture discriminant analysis (RMDA, Bouveyron & Girard, 2009) allows building a robust supervised classifier from learning data with label noise. The idea of the proposed method is to confront an unsupervised modeling of the data with the supervised information carried by the labels of the learning data in order to detect inconsistencies. Afterward, the method is able to build a robust classifier that takes the detected inconsistencies in the labels into account.","Published":"2015-01-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"robustETM","Version":"1.0","Title":"Robust Methods using Exponential Tilt Model","Description":"Testing homogeneity for the generalized exponential tilt model. This package includes a collection of functions for (1) implementing methods for testing homogeneity for the generalized exponential tilt model; and (2) implementing existing methods under comparison.","Published":"2016-03-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"robustfa","Version":"1.0-5","Title":"An Object Oriented Solution for Robust Factor Analysis","Description":"An object oriented solution for robust factor analysis. In the solution, new S4 classes \"Fa\", \"FaClassic\", \"FaRobust\", \"FaCov\", \"SummaryFa\" are created.","Published":"2013-11-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"robustgam","Version":"0.1.7","Title":"Robust Estimation for Generalized Additive Models","Description":"This package provides robust estimation for generalized\n additive models. It implements a fast and stable algorithm in\n Wong, Yao and Lee (2013). The implementation also contains\n three automatic selection methods for the smoothing parameter. They\n are designed to be robust to outliers. 
For more details, see\n Wong, Yao and Lee (2013).","Published":"2014-01-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RobustGaSP","Version":"0.5.3","Title":"Robust Gaussian Stochastic Process Emulation","Description":"Robust parameter estimation and prediction of Gaussian stochastic process emulators.\n Important functions : rgasp(), predict.rgasp().","Published":"2017-01-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"robustHD","Version":"0.5.1","Title":"Robust Methods for High-Dimensional Data","Description":"Robust methods for high-dimensional data, in particular linear\n model selection techniques based on least angle regression and sparse\n regression.","Published":"2016-01-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"robustlmm","Version":"2.1-3","Title":"Robust Linear Mixed Effects Models","Description":"A method to fit linear mixed effects models robustly.\n Robustness is achieved by modification of the scoring equations\n combined with the Design Adaptive Scale approach.","Published":"2017-04-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"robustloggamma","Version":"1.0-2","Title":"Robust Estimation of the Generalized log Gamma Model","Description":"Robust estimation of the generalized log gamma model is provided using Quantile Tau estimator, Weighted Likelihood estimator and Truncated Maximum Likelihood estimator. Functions for regression and censored data are also available.","Published":"2016-05-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"robustrank","Version":"2016.11-9","Title":"Robust Rank-Based Tests","Description":"Implements several rank-based tests, including the modified Wilcoxon-Mann-Whitney two sample location test, also known as the Fligner-Policello test. 
","Published":"2016-11-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RobustRankAggreg","Version":"1.1","Title":"Methods for robust rank aggregation","Description":"Methods for aggregating ranked lists, especially lists of\n genes. It implements the Robust Rank Aggregation (Kolde et. al\n in preparation) and some other simple algorithms for the task.\n RRA method uses a probabilistic model for aggregation that is\n robust to noise and also facilitates the calculation of\n significance probabilities for all the elements in the final\n ranking.","Published":"2013-06-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"robustrao","Version":"1.0-1","Title":"An Extended Rao-Stirling Diversity Index to Handle Missing Data","Description":"A collection of functions to compute the Rao-Stirling diversity index\n\t(Porter and Rafols, 2009) and its extension to\n\tacknowledge missing data (i.e.,\tuncategorized references) by calculating its\n\tinterval of uncertainty using\tmathematical optimization as proposed in Calatrava\n\tet al. (2016) .\n\tThe Rao-Stirling diversity index is a well-established bibliometric indicator\n\tto measure the interdisciplinarity of scientific publications. 
Apart from the\n\tobligatory dataset of publications with their respective references and\ta\n\ttaxonomy of disciplines that categorizes references as well as a measure of\n\tsimilarity between the disciplines, the Rao-Stirling diversity index requires\n\ta complete categorization of all references of a publication into disciplines.\n\tThus, it fails for an incomplete categorization; in this case, the robust\n\textension has to be used, which encodes the uncertainty caused by missing\n\tbibliographic data as an uncertainty interval.\n\tClassification / ACM - 2012: Information systems ~ Similarity measures,\n\tTheory of computation ~ Quadratic\tprogramming, Applied computing ~ Digital\n\tlibraries and archives.","Published":"2016-06-05","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"robustreg","Version":"0.1-10","Title":"Robust Regression Functions","Description":"Linear regression functions using Huber and bisquare psi functions. Optimal weights are calculated using the IRLS algorithm.","Published":"2017-04-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"robustsae","Version":"0.1.0","Title":"Robust Bayesian Small Area Estimation","Description":"Functions for Robust Bayesian Small Area Estimation.","Published":"2016-12-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"robustvarComp","Version":"0.1-2","Title":"Robust Estimation of Variance Component Models","Description":"Robust Estimation of Variance Component Models by classic and composite robust procedures. The composite procedures are robust against outliers generated by the Independent Contamination Model. ","Published":"2014-07-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"robustX","Version":"1.2-2","Title":"'eXtra' / 'eXperimental' Functionality for Robust Statistics","Description":"Robustness -- 'eXperimental', 'eXtraneous', or 'eXtraordinary'\n Functionality for Robust Statistics. 
In other words, methods which are not\n yet well established, often related to methods in package 'robustbase'.","Published":"2017-02-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ROC632","Version":"0.6","Title":"Construction of diagnostic or prognostic scoring system and\ninternal validation of its discriminative capacities based on\nROC curve and 0.632+ bootstrap resampling","Description":"This package computes traditional ROC curves and time-dependent ROC curves using the cross-validation, the 0.632 and the 0.632+ estimators. The 0.632+ estimator of the time-dependent ROC curve is useful to estimate the predictive accuracy of a prognostic signature based on high-dimensional data. For instance, it outperforms the other approaches, especially the cross-validation solution which is often used. The 0.632+ estimators correct the area under the curve in order to adequately estimate the prognostic capacities regardless of the overfitting level. This package also allows for the construction of diagnostic or prognostic scoring systems (penalized regressions). The methodology is adapted to complete data (penalized logistic regression associated with ROC curve) or incomplete time-to-event data (penalized Cox model associated with time-dependent ROC curve).","Published":"2013-12-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rocc","Version":"1.2","Title":"ROC based classification","Description":"Functions for a classification method based on receiver\n operating characteristics (ROC). Briefly, features are selected\n according to their ranked AUC value in the training set. The\n selected features are merged by the mean value to form a\n metagene. The samples are ranked by their metagene value and\n the metagene threshold that has the highest accuracy in\n splitting the training samples is determined. A new sample is\n classified by its metagene value relative to the threshold. 
In\n the first place, the package is aimed at two class problems in\n gene expression data, but might also apply to other problems.","Published":"2010-10-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"roccv","Version":"1.0","Title":"ROC for Cross Validation Results","Description":"Cross validate large genetic data while specifying clinical variables that should always be in the model using the function cv(). An ROC plot from the cross validation data with AUC can be obtained using rocplot(), which also can be used to compare different models. ","Published":"2016-06-29","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rockchalk","Version":"1.8.101","Title":"Regression Estimation and Presentation","Description":"A collection of functions for interpretation and presentation\n of regression analysis. These functions are used\n to produce the statistics lectures in\n http://pj.freefaculty.org/guides. Includes regression\n diagnostics, regression tables, and plots of interactions and\n \"moderator\" variables. The emphasis is on \"mean-centered\" and\n \"residual-centered\" predictors. The vignette 'rockchalk' offers a\n fairly comprehensive overview. The vignette 'Rstyle' has advice\n about coding in R. The package title 'rockchalk' refers to our\n school motto, 'Rock Chalk Jayhawk, Go K.U.'.","Published":"2016-02-25","License":"GPL (>= 3.0)","snapshot_date":"2017-06-23"} {"Package":"RockFab","Version":"1.2","Title":"Rock fabric and strain analysis tools","Description":"Provides functions to complete three-dimensional rock fabric and strain analyses following the Rf Phi, Fry, and normalized Fry methods. 
Also allows for plotting of results and interactive 3D visualization functionality.","Published":"2014-01-14","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"rocNIT","Version":"1.0","Title":"Non-Inferiority Test for Paired ROC Curves","Description":"\n Non-inferiority tests and diagnostic tests are very important in clinical trials.\n This package is used to get a p value from the non-inferiority test for ROC curves from a diagnostic test. ","Published":"2016-12-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rococo","Version":"1.1.4","Title":"Robust Rank Correlation Coefficient and Test","Description":"The 'rococo' package provides a robust gamma rank correlation\n\t coefficient along with a permutation-based rank correlation test.\n\t The rank correlation coefficient and the test are explicitly\n\t designed for dealing with noisy numerical data.","Published":"2016-10-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ROCR","Version":"1.0-7","Title":"Visualizing the Performance of Scoring Classifiers","Description":"ROC graphs, sensitivity/specificity curves, lift charts,\n and precision/recall plots are popular examples of trade-off\n visualizations for specific pairs of performance measures. ROCR is a\n flexible tool for creating cutoff-parameterized 2D performance curves\n by freely combining two from over 25 performance measures (new\n performance measures can be added using a standard interface).\n Curves from different cross-validation or bootstrapping runs can be\n averaged by different methods, and standard deviations, standard\n errors or box plots can be used to visualize the variability across\n the runs. The parameterization can be visualized by printing cutoff\n values at the corresponding curve positions, or by coloring the\n curve according to cutoff. All components of a performance plot can\n be quickly adjusted using a flexible parameter dispatching\n mechanism. 
Despite its flexibility, ROCR is easy to use, with only\n three commands and reasonable default values for all optional\n parameters.","Published":"2015-03-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ROCS","Version":"1.3","Title":"Receiver Operating Characteristics Surface","Description":"Plots the Receiver Operating Characteristics Surface for high-throughput class-skewed data, calculates the Volume under the Surface (VUS) and the FDR-Controlled Area Under the Curve (FCAUC), and conducts tests to compare two ROC surfaces. Computes eROC curve and the corresponding AUC for imperfect reference standard. ","Published":"2016-09-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ROCt","Version":"0.9.5","Title":"Time-Dependent ROC Curve Estimators and Expected Utility\nFunctions","Description":"Contains functions in order to estimate diagnostic and prognostic capacities of continuous markers. More precisely, one function concerns the estimation of time-dependent ROC (ROCt) curve, as proposed by Heagerty et al. (2000) . One function concerns the adaptation of the ROCt theory for studying the capacity of a marker to predict the excess of mortality of a specific population compared to the general population. This last part is based on additive relative survival models and the work of Pohar-Perme et al. (2012) . We also propose two functions for cut-off estimation in medical decision making by maximizing time-dependent expected utility function. Finally, we propose confounder-adjusted estimators of ROC and ROCt curves by using the Inverse Probability Weighting (IPW) approach. 
For the confounder-adjusted ROC curve (without censoring), we also propose the implementation of the estimator based on placement values proposed by Pepe and Cai (2004) .","Published":"2017-02-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ROCwoGS","Version":"1.0","Title":"Non-parametric estimation of ROC curves without Gold Standard\nTest","Description":"Function to estimate the ROC Curve of a continuous-scaled\n diagnostic test with the help of a second imperfect diagnostic\n test with binary responses.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Rodam","Version":"0.1.2","Title":"Wrapper Functions for 'ODAM' (Open Data for Access and Mining)\nWeb Services","Description":"'ODAM' (Open Data for Access and Mining) is a framework that implements a simple way to make research data broadly accessible and fully available for reuse, including by a script language such as R. The main purpose is to make a data set accessible online with a minimal effort from the data provider, and to allow any scientists or bioinformaticians to be able to explore the data set and then extract a subpart or the totality of the data according to their needs. The Rodam package has only one class, 'odamws', that provides methods to allow you to retrieve online data using 'ODAM' Web Services. This obviously requires that data are implemented according to the 'ODAM' approach, namely that the data subsets were deposited in the suitable data repository in the form of TSV files associated with their metadata also described in TSV files. 
See .","Published":"2016-10-05","License":"LGPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"RODBC","Version":"1.3-15","Title":"ODBC Database Access","Description":"An ODBC database interface.","Published":"2017-05-05","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"RODBCDBI","Version":"0.1.1","Title":"Provides Access to Databases Through the ODBC Interface","Description":"An implementation of R's DBI interface using ODBC package as a\n back-end. This allows R to connect to any DBMS that has a ODBC driver.","Published":"2016-03-14","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RODBCext","Version":"0.3.0","Title":"Parameterized Queries Extension for RODBC","Description":"An extension for RODBC package adding support for parameterized\n queries.","Published":"2017-04-10","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"rodd","Version":"0.2-1","Title":"Optimal Discriminating Designs","Description":"A collection of functions for numerical construction of optimal discriminating designs. At the current moment T-optimal designs (which maximize the lower bound for the power of F-test for regression model discrimination), KL-optimal designs (for lognormal errors) and their robust analogues can be calculated with the package. ","Published":"2016-01-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rODE","Version":"0.99.4","Title":"Ordinary Differential Equation (ODE) Solvers Written in R Using\nS4 Classes","Description":"Show Physics and engineering students how an ODE solver\n is made and how effective classes can be for the construction of\n the equations that describe how effective classes can be for the \n construction of equations that describe the natural phenomena. Most \n of the ideas come from the book on \"Computer Simulations in Physics\" \n by Harvey Gould, Jan Tobochnik, and Wolfgang Christian. 
\n Book link: .","Published":"2017-05-28","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rodeo","Version":"0.7.1","Title":"A Code Generator for ODE-Based Models","Description":"Provides an R6 class and several utility methods to\n facilitate the implementation of models based on ordinary\n differential equations. The heart of the package is a code generator\n that creates compiled 'Fortran' (or 'R') code which can be passed to\n a numerical solver. There is direct support for solvers contained\n in packages 'deSolve' and 'rootSolve'.","Published":"2017-01-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rodham","Version":"0.0.3","Title":"Fetch Hillary Rodham Clinton's Emails","Description":"Fetch and process Hillary Rodham Clinton's \"personal\" emails.","Published":"2017-01-20","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RODM","Version":"1.1","Title":"R interface to Oracle Data Mining","Description":"This package implements an interface to Oracle Data Mining\n (ODM). It provides an ideal environment for rapid development\n of demos and proof of concept data mining studies. It\n facilitates the prototyping of vertical applications and makes\n ODM and the RDBMS environment easily accessible to\n statisticians and data analysts familiar with R but not fluent\n in SQL or familiar with the database environment. It also\n facilitates the benchmarking and testing of ODM functionality\n including the production of summary statistics, performance\n metrics and graphics. It enables the scripting and control of\n production data mining methodologies from a high-level\n environment. Oracle Data Mining (ODM) is an option of Oracle\n Relational Database Management System (RDBMS) Enterprise\n Edition (EE). 
It contains several data mining and data analysis\n algorithms for classification, prediction, regression,\n clustering, associations, feature selection, anomaly detection,\n feature extraction, and specialized analytics. It provides\n means for the creation, management and operational deployment\n of data mining models inside the database environment. For more\n information consult the entry for \"Oracle Data Mining\" in\n Wikipedia (en.wikipedia.org).","Published":"2012-10-29","License":"LGPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ROI","Version":"0.2-6","Title":"R Optimization Infrastructure","Description":"The R Optimization Infrastructure ('ROI') is a\n sophisticated framework for handling optimization problems in R.","Published":"2017-05-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ROI.models.miplib","Version":"0.0-1","Title":"R Optimization Infrastructure: 'MIPLIB' 2010 Benchmark Instances","Description":"The mixed integer programming library 'MIPLIB' (see ) \n\tis commonly used to compare the performance of mixed integer optimization solvers.\n\tThis package provides functions to access 'MIPLIB' from the \n\t'R' Optimization Infrastructure ('ROI'). More information about 'MIPLIB'\n\tcan be found in the paper by Koch et al. available at\n\t.\n\tThe 'README.md' file illustrates how to use this package.","Published":"2017-05-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ROI.models.netlib","Version":"1.0","Title":"'ROI' Optimization Problems Based on 'NETLIB-LP'","Description":"A collection of 'ROI' optimization problems based on the 'NETLIB-LP' collection.\n 'Netlib' is a software repository, which amongst many other software for scientific computing contains a collection of linear programming problems. 
The purpose of this package is to make \n these problems easily accessible from 'R' as 'ROI' optimization problems.","Published":"2016-12-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ROI.plugin.alabama","Version":"0.2-5","Title":"'alabama' Plugin for the 'R' Optimization Infrastructure","Description":"Enhances the R Optimization Infrastructure ('ROI') package\n with the 'alabama' solver for solving nonlinear optimization problems.","Published":"2017-05-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ROI.plugin.clp","Version":"0.3","Title":"'Clp (Coin-or linear programming)' Plugin for the 'R'\nOptimization Interface","Description":"Enhances the R Optimization Infrastructure (ROI) package by registering\n\t the COIN-OR Clp open-source solver from the COIN-OR suite .\n\t It allows for solving linear programs with continuous objective variables \n\t while keeping sparse constraint definitions.","Published":"2017-05-21","License":"EPL","snapshot_date":"2017-06-23"} {"Package":"ROI.plugin.cplex","Version":"0.2-5","Title":"ROI Plug-in CPLEX","Description":"Enhances the R Optimization Infrastructure (ROI) package by registering\n\t the 'CPLEX' commercial solver. It allows for solving mixed integer quadratically\n\t constrained programming (MIQPQC) problems as well as all\n\t variants/combinations of LP, QP, QCP, IP.","Published":"2017-05-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ROI.plugin.ecos","Version":"0.2-5","Title":"'ECOS' Plugin for the 'R' Optimization Infrastructure","Description":"Enhances the 'R' Optimization Infrastructure ('ROI') package\n\t with the Embedded Conic Solver ('ECOS') for solving conic optimization problems.","Published":"2017-05-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ROI.plugin.glpk","Version":"0.2-5","Title":"ROI Plug-in GLPK","Description":"Enhances the R Optimization Infrastructure ('ROI') package by registering\n\t the free 'GLPK' solver. 
It allows for solving mixed integer linear programming (MILP)\n\t problems as well as all variants/combinations of LP, IP.","Published":"2017-05-18","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ROI.plugin.ipop","Version":"0.2-5","Title":"ROI Plug-in {ipop}","Description":"Enhances the R Optimization Infrastructure ('ROI') package \n\t by registering the ipop solver from package 'kernlab'.","Published":"2017-05-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ROI.plugin.lpsolve","Version":"0.2-5","Title":"'lp_solve' Plugin for the 'R' Optimization Interface","Description":"Enhances the 'R' Optimization Infrastructure ('ROI') package\n with the 'lp_solve' solver.","Published":"2017-05-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ROI.plugin.msbinlp","Version":"0.2-5","Title":"'Multi-Solution' Binary Linear Problem Plugin for the 'R'\nOptimization Interface","Description":"Enhances the 'R' Optimization Infrastructure ('ROI') package\n with the possibility to obtain multiple solutions for linear \n problems with binary variables. 
The main function is copied \n (with small modifications) from the relations package.","Published":"2017-05-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ROI.plugin.nloptr","Version":"0.2-5","Title":"'ROI'-Plugin 'NLOPTR'","Description":"Enhances the R Optimization Infrastructure ('ROI') package\n with the 'NLopt' solver for solving nonlinear optimization problems.","Published":"2017-05-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ROI.plugin.optimx","Version":"0.2-5","Title":"'ROI'-Plugin 'optimx'","Description":"Enhances the R Optimization Infrastructure ('ROI') package\n with the 'optimx' package.","Published":"2017-05-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ROI.plugin.quadprog","Version":"0.2-5","Title":"ROI Plug-in {quadprog}","Description":"Enhances the R Optimization Infrastructure ('ROI') package\n\t by registering the 'quadprog' solver. It allows for solving quadratic programming (QP)\n\t problems.","Published":"2017-05-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ROI.plugin.scs","Version":"0.2-5","Title":"'SCS' Plugin for the 'R' Optimization Infrastructure","Description":"Enhances the 'R' Optimization Infrastructure ('ROI') package\n with the 'SCS' solver for solving convex cone problems.","Published":"2017-05-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ROI.plugin.symphony","Version":"0.2-5","Title":"ROI Plug-in SYMPHONY","Description":"Enhances the R Optimization Infrastructure ('ROI') package by registering\n\t the 'SYMPHONY' open-source solver from the COIN-OR suite. 
It allows for\n\t solving mixed integer linear programming (MILP) problems as well as all\n\t variants/combinations of LP, IP.","Published":"2017-05-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"roll","Version":"1.0.7","Title":"Rolling Statistics","Description":"Parallel functions for computing rolling statistics of time-series\n data.","Published":"2017-05-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rollbar","Version":"0.1.0","Title":"Error Tracking and Logging","Description":"Reports errors and messages to Rollbar, the error tracking platform .","Published":"2016-05-14","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rollply","Version":"0.5.0","Title":"Moving-Window Add-on for 'plyr'","Description":"Apply a function in a moving window, then\n combine the results in a data frame.","Published":"2016-03-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rolr","Version":"1.0.0","Title":"Finding Optimal Three-Group Splits Based on a Survival Outcome","Description":"Provides fast procedures for exploring all pairs of\n cutpoints of a single covariate with respect to survival and determining\n optimal cutpoints using a hierarchical method and various ordered logrank tests.","Published":"2017-03-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rolypoly","Version":"0.1.0","Title":"Identifying Trait-Relevant Functional Annotations","Description":"Using enrichment of genome-wide association summary statistics to\n identify trait-relevant cellular functional annotations.","Published":"2017-03-16","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ROMIplot","Version":"1.0","Title":"Plots Surfaces of Rates of Mortality Improvement","Description":"Provides the possibility to plot Lexis surface maps (heat maps) of rates of mortality improvement. Raw data to be plotted can be read from the Human Mortality Database using code originally written by Tim Riffe. 
The European Research Council has provided financial support under the European Community's Seventh Framework Programme (FP7/2007-2013) / ERC grant agreement no. 263744.","Published":"2015-07-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Rook","Version":"1.1-1","Title":"Rook - a web server interface for R","Description":"This package contains the Rook specification and\n convenience software for building and running Rook applications. To\n get started, be sure to read the 'Rook' help file first.","Published":"2014-10-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RootsExtremaInflections","Version":"1.1","Title":"Finds Roots, Extrema and Inflection Points of a Curve","Description":"Implementation of the Taylor Regression Estimator method which is described\n in Christopoulos (2014) for finding\n the root, extreme or inflection point of a curve, when we only have a set of probably noisy\n xy points for it. The method uses a suitable polynomial regression in order to find the\n coefficients of the relevant Taylor polynomial for the function that has generated our data.\n Optional use of parallel computing upon request.","Published":"2017-05-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rootSolve","Version":"1.7","Title":"Nonlinear Root Finding, Equilibrium and Steady-State Analysis of\nOrdinary Differential Equations","Description":"Routines to find the root of nonlinear functions, and to perform steady-state and equilibrium analysis of ordinary differential equations (ODE). 
\n Includes routines that: (1) generate gradient and jacobian matrices (full and banded),\n (2) find roots of non-linear equations by the 'Newton-Raphson' method, \n (3) estimate steady-state conditions of a system of (differential) equations in full, banded or sparse form, using the 'Newton-Raphson' method, or by dynamically running,\n (4) solve the steady-state conditions for uni-and multicomponent 1-D, 2-D, and 3-D partial differential equations, that have been converted to ordinary differential equations\n by numerical differencing (using the method-of-lines approach).\n Includes fortran code.","Published":"2016-12-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rootWishart","Version":"0.4.0","Title":"Distribution of Largest Root for Single and Double Wishart\nSettings","Description":"Functions for hypothesis testing in single and double Wishart\n settings, based on Roy's largest root. This test statistic is especially\n useful in multivariate analysis. The computations are based on results by\n Chiani (2014) and Chiani (2016)\n . They use the fact that the CDF is related\n to the Pfaffian of a matrix that can be computed in a finite number of\n iterations. This package takes advantage of the Boost and Eigen C++ libraries\n to perform multi-precision linear algebra.","Published":"2017-04-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rope","Version":"1.0","Title":"Model Selection with FDR Control of Selected Variables","Description":"Selects one model with variable selection FDR controlled at a\n specified level. A q-value for each potential variable is also returned. 
The\n input, variable selection counts over many bootstraps for several levels of\n penalization, is modeled as coming from a beta-binomial mixture\n distribution.","Published":"2017-02-16","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ropenaq","Version":"0.2.1","Title":"Accesses Air Quality Data from the Open Data Platform OpenAQ","Description":"Allows access to air quality data from the API of the OpenAQ\n platform , with the different services the API offers\n (getting measurements for a given query, getting latest measurements, getting\n lists of available countries/cities/locations).","Published":"2017-06-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ROpenDota","Version":"0.1.1","Title":"Access OpenDota Services in R","Description":"Provides a client for the API of OpenDota. OpenDota is a web service which provides DOTA2 real-time data. Data is collected through the Steam WebAPI. With ROpenDota you can easily grab the latest DOTA2 statistics in R, such as the latest matches of official international competitions, or analyze your or your enemy's performance to learn their strategies, etc. Please see for more information.","Published":"2017-05-16","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ROpenFIGI","Version":"0.2.8","Title":"R Interface to OpenFIGI","Description":"Provides a simple interface to Bloomberg's OpenFIGI API. Please\n see for API details and registration. 
You may be\n eligible to have an API key to accelerate your loading process.","Published":"2016-06-04","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"ropensecretsapi","Version":"1.0.1","Title":"R Package for the OpenSecrets.org API","Description":"An R package for the OpenSecrets.org web services API.","Published":"2014-10-27","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"ROpenWeatherMap","Version":"1.1","Title":"R Interface to OpenWeatherMap API","Description":"OpenWeatherMap (OWM) is a service providing weather related data.\n This package can be used to access current weather data for one location or several locations.\n It can also be used to forecast weather for 5 days with data for every 3 hours.","Published":"2016-03-16","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ropercenter","Version":"0.1.0","Title":"Reproducible Data Retrieval from the Roper Center Data Archive","Description":"Reproducible, programmatic retrieval of datasets from the\n Roper Center data archive. The Roper Center for Public Opinion\n Research maintains the largest \n archive of public opinion data in existence, but researchers using\n these datasets are caught in a bind. The Center's terms and conditions\n bar redistribution of downloaded datasets, but to ensure that one's \n work can be reproduced, assessed, and built upon by others, one must\n provide access to the raw data one employed. 
The `ropercenter`\n package cuts this knot by providing registered users with programmatic,\n reproducible access to Roper Center datasets from within R.","Published":"2017-03-17","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ROptEst","Version":"1.0.1","Title":"Optimally Robust Estimation","Description":"Optimally robust estimation in general smoothly parameterized models using S4 classes and methods.","Published":"2017-04-23","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"ROptEstOld","Version":"0.9.2","Title":"Optimally robust estimation - old version","Description":"Optimally robust estimation using S4 classes and methods. Old version still needed\n for current versions of ROptRegTS and RobRex.","Published":"2013-09-13","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"ROptimizely","Version":"0.2.0","Title":"R Optimizely API","Description":"Extracts Optimizely test results and test information using the Optimizely REST API. Only read functionality is supported, for analysis and reporting.","Published":"2015-07-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ROptRegTS","Version":"0.9.1","Title":"Optimally robust estimation for regression-type models","Description":"Optimally robust estimation for regression-type models using S4 classes and\n methods.","Published":"2013-09-14","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"ROracle","Version":"1.3-1","Title":"OCI Based Oracle Database Interface for R","Description":"Oracle Database interface (DBI) driver for R.\n This is a DBI-compliant Oracle driver based on the OCI.","Published":"2016-10-26","License":"LGPL","snapshot_date":"2017-06-23"} {"Package":"rorcid","Version":"0.3.0","Title":"Interface to the 'Orcid.org' 'API'","Description":"Client for the 'Orcid.org' 'API' (http://orcid.org/).\n Functions included for searching for people, searching by 'DOI',\n and searching by 'Orcid' 'ID'.","Published":"2016-09-22","License":"MIT + 
file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rorutadis","Version":"0.4.2","Title":"Robust Ordinal Regression UTADIS","Description":"Implementation of Robust Ordinal Regression for multiple criteria value-based sorting with preference information provided in the form of possibly imprecise assignment examples, assignment-based pairwise comparisons, and desired class cardinalities [Kadzinski et al. 2015, ].","Published":"2017-01-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ROSE","Version":"0.0-3","Title":"ROSE: Random Over-Sampling Examples","Description":"The package provides functions to deal with binary classification\n problems in the presence of imbalanced classes. Synthetic balanced samples are \n generated according to ROSE (Menardi and Torelli, 2013). \n Functions that implement more traditional remedies to the class imbalance\n are also provided, as well as different metrics to evaluate a learner's accuracy.\n These are estimated by holdout, bootstrap or cross-validation methods. ","Published":"2014-07-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rosetteApi","Version":"1.7.0","Title":"Rosette API","Description":"Rosette is an API for multilingual text analysis and information\n extraction. More information can be found at .","Published":"2017-06-14","License":"file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rosm","Version":"0.2.2","Title":"Plot Raster Map Tiles from Open Street Map and Other Sources","Description":"Download and plot Open Street Map ,\n Bing Maps and other tiled map sources in a way \n that works seamlessly with plotting from the 'sp' package. Use to create \n high-resolution basemaps and add hillshade to vector-based maps.","Published":"2017-04-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rospca","Version":"1.0.2","Title":"Robust Sparse PCA using the ROSPCA Algorithm","Description":"Implementation of robust sparse PCA using the ROSPCA algorithm \n of Hubert et al. 
(2016) .","Published":"2017-04-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rotationForest","Version":"0.1.3","Title":"Fit and Deploy Rotation Forest Models","Description":"Fit and deploy rotation forest models (\"Rodriguez, J.J., Kuncheva,\n L.I., 2006. Rotation forest: A new classifier ensemble method. IEEE Trans.\n Pattern Anal. Mach. Intell. 28, 1619-1630 \") for binary classification.\n Rotation forest is an ensemble method where each base classifier (tree) is\n fit on the principal components of the variables of random partitions of\n the feature set.","Published":"2017-04-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rotations","Version":"1.5","Title":"Tools for Working with Rotation Data","Description":"Tools for working with rotational data, including\n simulation from the most commonly used distributions on SO(3),\n methods for different Bayes, mean and median type estimators for\n the central orientation of a sample, confidence/credible\n regions for the central orientation based on those estimators and\n a novel visualization technique for rotation data. 
Most recently,\n functions to identify potentially discordant (outlying) values\n have been added.","Published":"2016-01-06","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"Rothermel","Version":"1.2","Title":"Rothermel fire spread model for R","Description":"R build of Rothermel's (1972) model for surface fire rate of spread with some additional utilities (uncertainty propagation, standard fuel model selection, fuel model optimization by genetic algorithm) and sample datasets.","Published":"2014-11-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rotl","Version":"3.0.3","Title":"Interface to the 'Open Tree of Life' API","Description":"An interface to the 'Open Tree of Life' API to retrieve\n phylogenetic trees, information about studies used to assemble the synthetic\n tree, and utilities to match taxonomic names to 'Open Tree identifiers'. The\n 'Open Tree of Life' aims at assembling a comprehensive phylogenetic tree for all\n named species.","Published":"2017-03-04","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"roughrf","Version":"1.0","Title":"Roughened Random Forests for Binary Classification","Description":"A set of functions to support Xiong K, 'Roughened Random Forests for Binary Classification' (2014). The functions include RRFA, RRFB, RRFC1-RRFC7, RRFD and RRFE. RRFB and RRFC6 are usually recommended. RRFB is much faster than RRFC6.","Published":"2015-01-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RoughSetKnowledgeReduction","Version":"0.1","Title":"Simplification of Decision Tables using Rough Sets","Description":"Rough Sets were introduced by Zdzislaw Pawlak in his book \"Rough Sets: Theoretical Aspects of Reasoning About Data\". Rough Sets provide a formal method to approximate crisp sets when the set-element belonging relationship is either known or undetermined. 
This enables the use of Rough Sets for reasoning about incomplete or contradictory knowledge. A decision table is a prescription of the decisions to make given some conditions. Such decision tables can be reduced without losing prescription ability. This package provides the classes and methods for knowledge reduction from decision tables as presented in chapter 7 of the aforementioned book. This package provides functions for calculating both the discernibility matrix and the essential parts of decision tables.","Published":"2014-12-18","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RoughSets","Version":"1.3-0","Title":"Data Analysis Using Rough Set and Fuzzy Rough Set Theories","Description":"Implementations of algorithms for data analysis\n based on the rough set theory (RST) and the fuzzy rough set theory (FRST). We\n not only provide implementations for the basic concepts of RST and FRST but also\n popular algorithms that derive from those theories. The methods included in the\n package can be divided into several categories based on their functionality:\n discretization, feature selection, instance selection, rule induction and classification\n based on nearest neighbors. RST was introduced by Zdzisław Pawlak in 1982\n as a sophisticated mathematical tool to\n model and process imprecise or incomplete information. By using\n the indiscernibility relation for objects/instances, RST does not require\n additional parameters to analyze the data. FRST is an extension of RST. The\n FRST combines concepts of vagueness and indiscernibility that are expressed\n with fuzzy sets (as proposed by Zadeh, in 1965) and RST.","Published":"2015-09-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rowr","Version":"1.1.3","Title":"Row-Based Functions for R Objects","Description":"Provides utilities which interact with all R objects as\n if they were arranged in rows. 
It allows more consistent and predictable \n output to common functions, and generalizes a number of utility functions to\n be failsafe with any number and type of input objects.","Published":"2016-12-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"roxygen2","Version":"6.0.1","Title":"In-Line Documentation for R","Description":"Generate your Rd documentation, 'NAMESPACE' file, and collation \n field using specially formatted comments. Writing documentation in-line\n with code makes it easier to keep your documentation up-to-date as your\n requirements change. 'Roxygen2' is inspired by the 'Doxygen' system for C++.","Published":"2017-02-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"royston","Version":"1.2","Title":"Royston's H Test: Multivariate Normality Test","Description":"Performs a multivariate normality test based on Royston's H test.","Published":"2015-02-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RPANDA","Version":"1.3","Title":"Phylogenetic ANalyses of DiversificAtion","Description":"Implements macroevolutionary analyses on phylogenetic trees. See\n Morlon et al. (2010) , Morlon et al. (2011)\n , Condamine et al. (2013) ,\n Morlon et al. (2014) , Manceau et al. (2015) , Lewitus & Morlon (2016) , Drury\n et al. (2016) , Manceau et al. (2016) ,\n and Clavel & Morlon (2017) .","Published":"2017-04-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rpanel","Version":"1.1-3","Title":"Simple interactive controls for R using the tcltk library","Description":"rpanel provides a set of functions to build simple \n GUI controls for R functions. These are built on the tcltk package. 
\n Uses could include changing a parameter on a graph by animating it \n with a slider or a \"doublebutton\", up to more sophisticated control \n panels.\n Some functions for specific graphical tasks, referred to as 'cartoons',\n are provided.","Published":"2014-02-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rpart","Version":"4.1-11","Title":"Recursive Partitioning and Regression Trees","Description":"Recursive partitioning for classification, \n regression and survival trees. An implementation of most of the \n functionality of the 1984 book by Breiman, Friedman, Olshen and Stone.","Published":"2017-04-21","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"rpart.plot","Version":"2.1.2","Title":"Plot 'rpart' Models: An Enhanced Version of 'plot.rpart'","Description":"Plot 'rpart' models. Extends plot.rpart() and text.rpart()\n in the 'rpart' package.","Published":"2017-04-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rpart.utils","Version":"0.5","Title":"Tools for parsing and manipulating rpart objects, including\ngenerating machine readable rules","Description":"This package contains additional tools for working with rpart\n objects. Most importantly, it includes methods for converting rpart rules\n into a series of structured tables sufficient for executing the decision\n tree completely in SQL.","Published":"2014-05-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rpartitions","Version":"0.1","Title":"Code for integer partitioning","Description":"Provides algorithms for randomly sampling a feasible set defined\n by a given total and number of elements using integer partitioning.","Published":"2013-12-11","License":"MIT","snapshot_date":"2017-06-23"} {"Package":"rpartScore","Version":"1.0-1","Title":"Classification trees for ordinal responses","Description":"This package contains functions that allow one to build\n classification trees for ordinal responses within the CART\n framework. 
The trees are grown using the Generalized Gini\n impurity function, where the misclassification costs are given\n by the absolute or squared differences in scores assigned to\n the categories of the response. Pruning is based on the total\n misclassification rate or on the total misclassification cost.","Published":"2012-12-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rpatrec","Version":"1.0.1","Title":"Recognising Visual Charting Patterns in Time Series Data","Description":"Generating visual charting patterns and noise,\n smoothing to find a signal in noisy time series and enabling\n users to apply their findings to real life data.","Published":"2017-05-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rpca","Version":"0.2.3","Title":"RobustPCA: Decompose a Matrix into Low-Rank and Sparse\nComponents","Description":"Suppose we have a data matrix, which is the superposition of a low-rank component and a sparse component. Candes, E. J., Li, X., Ma, Y., & Wright, J. (2011). Robust principal component analysis?. Journal of the ACM (JACM), 58(3), 11. prove that we can recover each component individually under some suitable assumptions. It is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the L1 norm. This package implements this decomposition algorithm, resulting in the Robust PCA approach.","Published":"2015-07-31","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"rpcdsearch","Version":"1.0","Title":"Tools for the Construction of Clinical Code Lists for Primary\nCare Database Studies","Description":"Allows users to identify relevant clinical codes and\n automate the construction of clinical code lists for primary care database\n studies. 
This package is analogous to the Stata command pcdsearch.","Published":"2016-01-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RPCLR","Version":"1.0","Title":"RPCLR (Random-Penalized Conditional Logistic Regression)","Description":"This package implements the R-PCLR (Random-Penalized\n Conditional Logistic Regression) algorithm for obtaining\n variable importance. The algorithm is applicable for the\n analysis of high dimensional data from matched case-control\n studies.","Published":"2012-08-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Rpdb","Version":"2.2","Title":"Read, write, visualize and manipulate PDB files","Description":"Provides tools to read, write, visualize PDB files and perform some structural manipulations.","Published":"2014-04-26","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"rpdo","Version":"0.2.2","Title":"Pacific Decadal Oscillation Index Data","Description":"Monthly Pacific Decadal Oscillation (PDO) index\n values from January 1900 to February 2017.\n Includes download_pdo() to scrape the latest values from \n .","Published":"2017-04-12","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"RpeakChrom","Version":"1.1.0","Title":"Tools for Chromatographic Column Characterization and Modelling\nChromatographic Peak","Description":"The quantitative measurement and detection of molecules in HPLC should be carried out by an accurate description of chromatographic peaks. In this package non-linear fitting using a modified Gaussian model with a parabolic variance (PVMG) has been implemented to obtain the retention time and height at the peak maximum. 
This package also includes the traditional Van Deemter approach and two alternative approaches to characterize chromatographic columns.","Published":"2017-04-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RPEnsemble","Version":"0.3","Title":"Random Projection Ensemble Classification","Description":"Implements the methodology of \"Cannings, T. I. and Samworth, R. J. (2015) Random projection ensemble classification. http://arxiv.org/abs/1504.04595\". The random projection ensemble classifier is a general method for classification of high-dimensional data, based on careful combination of the results of applying an arbitrary base classifier to random projections of the feature vectors into a lower-dimensional space. The random projections are divided into non-overlapping blocks, and within each block the projection yielding the smallest estimate of the test error is selected. The random projection ensemble classifier then aggregates the results of applying the base classifier on the selected projections, with a data-driven voting threshold to determine the final assignment. ","Published":"2016-09-01","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"RPEXE.RPEXT","Version":"0.0.1","Title":"Reduced Piecewise Exponential Estimate/Test Software","Description":"This reduced piecewise exponential survival software implements the likelihood ratio test and backward elimination procedure in Han, Schell, and Kim (2012 , 2014 ), and Han et al. (2016 ). Inputs to the program can be either times when events/censoring occur or the vectors of total time on test and the number of events. Outputs of the programs are times and the corresponding p-values in the backward elimination. Details about the model and implementation are given in Han et al. 2014. 
This program can run in R version 3.2.2 and above.","Published":"2017-05-08","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rpf","Version":"0.53","Title":"Response Probability Functions","Description":"The purpose of this package is to factor out logic and math common\n to Item Factor Analysis fitting, diagnostics, and analysis. It is\n envisioned as core support code suitable for more specialized IRT packages\n to build upon. Complete access to optimized C functions is made available\n with R_RegisterCCallable().","Published":"2016-06-28","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"rpg","Version":"1.5","Title":"Easy Interface to Advanced PostgreSQL Features","Description":"Allows ad hoc queries and reading and\n writing data frames to and from a database.","Published":"2017-02-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rpgm","Version":"1.0.0","Title":"Fast Simulation of Normal/Exponential Random Variables and\nStochastic Differential Equations","Description":"Faster simulation of some random variables than the usual native functions, including rnorm() and rexp(), using the Ziggurat method, reference: MARSAGLIA, George, TSANG, Wai Wan, et al. (2000) , and fast simulation of stochastic differential equations.","Published":"2017-03-16","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rphast","Version":"1.6.5","Title":"Interface to PHAST Software for Comparative Genomics","Description":"Provides an R interface to the PHAST software\n (Phylogenetic Analysis with Space/Time Models). It can be used for\n many types of analysis in comparative and evolutionary genomics,\n such as estimating models of evolution from sequence data, scoring\n alignments for conservation or acceleration, and predicting\n elements based on conservation or custom phylogenetic hidden Markov\n models. 
It can also perform many basic operations on multiple\n sequence alignments and phylogenetic trees.","Published":"2016-08-16","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"Rphylip","Version":"0.1-23","Title":"An R interface for PHYLIP","Description":"Rphylip provides an R interface for the PHYLIP package. All users\n of Rphylip will thus first have to install the PHYLIP phylogeny methods\n program package (Felsenstein 2013). See http://www.phylip.com for more \n\tinformation about installing PHYLIP.","Published":"2014-03-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Rphylopars","Version":"0.2.9","Title":"Phylogenetic Comparative Tools for Missing Data and\nWithin-Species Variation","Description":"Tools for performing phylogenetic comparative methods for datasets with multiple observations per species (intraspecific variation or measurement error) and/or missing data. Performs ancestral state reconstruction and missing data imputation on the estimated evolutionary model, which can be specified as Brownian Motion, Ornstein-Uhlenbeck, Early-Burst, Pagel's lambda, kappa, or delta, or a star phylogeny.","Published":"2016-06-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rpinterest","Version":"0.3.1","Title":"Access Pinterest API","Description":"Get information (boards, pins and\n users) from the Pinterest \n API.","Published":"2016-08-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rpivotTable","Version":"0.2.0","Title":"Build Powerful Pivot Tables and Dynamically Slice & Dice your\nData","Description":"Build powerful pivot tables (aka Pivot Grid, Pivot Chart, Cross-Tab) \n and dynamically slice & dice / drag 'n' drop your data. 'rpivotTable' is a\n wrapper of 'pivottable', a powerful open-source Pivot Table library implemented\n in 'JavaScript' by Nicolas Kruchten. 
Aligned to 'pivottable' v2.11.0.","Published":"2017-04-16","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rplos","Version":"0.6.4","Title":"Interface to the Search 'API' for 'PLoS' Journals","Description":"A programmatic interface to the 'SOLR' based\n search 'API' () provided by the Public\n Library of Science journals to search their articles.\n Functions are included for searching for articles, retrieving\n articles, making plots, doing 'faceted' searches,\n 'highlight' searches, and viewing results of 'highlighted'\n searches in a browser.","Published":"2016-11-24","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rplotengine","Version":"1.0-6","Title":"R as a Plotting Engine","Description":"Generate basic charts either by custom applications, or from a small script launched from the system console, or within the R console. Two ASCII text files are necessary:\n (1) The graph parameters file, whose name is passed to the function 'rplotengine()'.\n The user can specify the titles, choose the type of the graph, graph output formats\n (e.g. png, eps), proportion of the X-axis and Y-axis, position of the legend,\n whether or not to show a grid in the background, etc.\n (2) The data to be plotted, whose name is specified as a parameter ('data_filename')\n in the previous file. This data file has a tabulated format, with a single character\n (e.g. 
tab) between each column, and a header line located in the first row.\n Optionally, the file could include data columns for showing confidence intervals.","Published":"2016-11-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RPMG","Version":"2.2-1","Title":"Graphical User Interface (GUI) for Interactive R Analysis\nSessions","Description":"Really Poor Man's Graphical User Interface, used to create interactive R analysis sessions with simple R commands.","Published":"2015-11-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RPMM","Version":"1.25","Title":"Recursively Partitioned Mixture Model","Description":"\n Recursively Partitioned Mixture Model for Beta and Gaussian Mixtures. \n This is a model-based clustering algorithm that returns a hierarchy\n of classes, similar to hierarchical clustering, but also similar to\n finite mixture models.","Published":"2017-02-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rpms","Version":"0.2.1","Title":"Recursive Partitioning for Modeling Survey Data","Description":"Fits a linear model to survey data in each node obtained by \n recursively partitioning the data. The splitting variables and splits\n selected are obtained using a procedure which adjusts for complex sample\n design features used to obtain the data. Likewise the model fitting\n algorithm produces design-consistent coefficients for the least squares\n linear model between the dependent and independent variables.\n The first stage of the design is accounted for in the provided variance \n estimates. The main function returns the resulting binary tree with the \n linear model fit at every end-node. 
The package provides a number of \n functions and methods for these trees.","Published":"2017-06-02","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"rpn","Version":"1.0","Title":"Converter and Interpreter for Reverse Polish Notation\nExpressions","Description":"Pure R implementation of a simple (Reverse) Polish Notation (RPN) interpreter\n and converter.","Published":"2016-06-07","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rpnf","Version":"1.0.5","Title":"Point and Figure Package","Description":"A set of functions to analyze and print the development of a\n commodity using the Point and Figure (P&F) approach. A P&F processor can be used\n to calculate daily statistics for the time series. These statistics can be used\n for deeper investigations as well as to create plots. Plots can be generated as\n well known X/O Plots in plain text format, and additionally in a more graphical\n format.","Published":"2016-08-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"Rpoppler","Version":"0.1-0","Title":"PDF Tools Based on Poppler","Description":"PDF tools based on the Poppler PDF rendering library.\n See for more information on Poppler.","Published":"2017-04-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rportfolios","Version":"1.0-1","Title":"Random Portfolio Generation","Description":"A collection of tools used to generate\n various types of random portfolios. The weights of these\n portfolios are random variables derived from truncated\n continuous random variables.","Published":"2016-08-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rpostgis","Version":"1.2.1","Title":"R Interface to a 'PostGIS' Database","Description":"Provides an interface between R and\n 'PostGIS'-enabled 'PostgreSQL' databases to transparently transfer\n spatial data. Both vector (points, lines, polygons) and raster\n data are supported in read and write modes. 
Also provides\n convenience functions to execute common procedures in\n 'PostgreSQL/PostGIS'.","Published":"2017-05-07","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"rpostgisLT","Version":"0.5.0","Title":"Managing Animal Movement Data with 'PostGIS' and R","Description":"Integrates R and the 'PostgreSQL/PostGIS' database \n system to build and manage animal trajectory (movement) data sets. \n The package relies on 'ltraj' objects from the R package 'adehabitatLT',\n building the analogous 'pgtraj' data structure in 'PostGIS'. Functions\n allow users to seamlessly transfer between 'ltraj' and 'pgtraj', as\n well as build new 'pgtraj' directly from location data stored in the \n database.","Published":"2017-06-20","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"RPostgreSQL","Version":"0.4-1","Title":"R interface to the PostgreSQL database system","Description":"Database interface and PostgreSQL driver for R. This\n package provides a Database Interface (DBI) compliant driver\n for R to access PostgreSQL database systems.\n\n In order to build and install this package from source, PostgreSQL\n itself must be present on your system to provide PostgreSQL\n functionality via its libraries and header files. These files\n are provided as the postgresql-devel package under some Linux\n distributions.\n\n On Microsoft Windows systems the attached libpq library source will be\n used.\n\n A wiki and issue tracking system for the package are available at\n Google Code at https://code.google.com/p/rpostgresql/ .","Published":"2016-05-08","License":"GPL-2 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rPowerSampleSize","Version":"1.0.1","Title":"Sample Size Computations Controlling the Type-II Generalized\nFamily-Wise Error Rate","Description":"The significance of mean difference tests in clinical trials is established if at least r null hypotheses are rejected among m that are simultaneously tested. 
This package enables one to compute necessary sample sizes for single-step (Bonferroni) and step-wise procedures (Holm and Hochberg). These three procedures control the q-generalized family-wise error rate (probability of making at least q false rejections). Sample size is computed (for these single-step and step-wise procedures) in such a way that the r-power (probability of rejecting at least r false null hypotheses, i.e. at least r significant endpoints among m) is above some given threshold, in the context of tests of difference of means for two groups of continuous endpoints (variables). Various types of structure of correlation are considered. It is also possible to analyse data (i.e., actually test difference in means) when these are available. The case r equals 1 is treated in separate functions that were used in Lafaye de Micheaux et al. (2014) .","Published":"2016-01-13","License":"GPL (> 2)","snapshot_date":"2017-06-23"} {"Package":"RPPairwiseDesign","Version":"1.0","Title":"Resolvable partially pairwise balanced design and Space-filling\ndesign via association scheme","Description":"Using some association schemes to obtain a new series of resolvable partially pairwise balanced designs (RPPBD) and space-filling designs.","Published":"2015-02-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"RPPanalyzer","Version":"1.4.3","Title":"Reads, Annotates, and Normalizes Reverse Phase Protein Array\nData","Description":"Reads in sample description and slide description files and\n annotates the expression values taken from GenePix results files\n\t(text file format used by many microarray scanner and software providers). 
\n\tAfter normalization, data can be visualized as a boxplot, heatmap or dotplot.","Published":"2016-02-11","License":"LGPL","snapshot_date":"2017-06-23"} {"Package":"rpql","Version":"0.5","Title":"Regularized PQL for Joint Selection in GLMMs","Description":"Performs joint selection in Generalized Linear Mixed Models (GLMMs) using penalized likelihood methods. Specifically, the Penalized Quasi-Likelihood (PQL) is used as a loss function, and penalties are then \"added on\" to perform simultaneous fixed and random effects selection. Regularized PQL avoids the need for integration (or approximations such as Laplace's method) during the estimation process, and so the full solution path for model selection can be constructed relatively quickly. ","Published":"2016-10-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rPref","Version":"1.2","Title":"Database Preferences and Skyline Computation","Description":"Routines to select and visualize the maxima for a given strict\n partial order. This especially includes the computation of the Pareto\n frontier, also known as (Top-k) Skyline operator, and some\n generalizations (database preferences).","Published":"2016-12-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RPresto","Version":"1.2.1","Title":"DBI Connector to Presto","Description":"Implements a 'DBI' compliant interface to Presto. 
Presto is\n an open source distributed SQL query engine for running interactive\n analytic queries against data sources of all sizes ranging from\n gigabytes to petabytes: .","Published":"2016-04-06","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rprev","Version":"0.2.3","Title":"Estimating Disease Prevalence from Registry Data","Description":"Estimates disease prevalence for a given index date, using existing\n registry data extended with Monte Carlo simulations.","Published":"2017-02-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rprime","Version":"0.1.0","Title":"Functions for Working with 'Eprime' Text Files","Description":"'Eprime' is a set of programs for administering psychological\n experiments by computer. This package provides functions for loading,\n parsing, filtering and exporting data in the text files produced by\n 'Eprime' experiments.","Published":"2015-05-29","License":"GPL-2 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rprintf","Version":"0.2.1","Title":"Adaptive Builder for Formatted Strings","Description":"Provides a set of functions to facilitate building formatted strings\n under various replacement rules: C-style formatting, variable-based formatting,\n and number-based formatting. C-style formatting is basically identical to built-in\n function 'sprintf'. Variable-based formatting allows users to put variable names\n in a formatted string which will be replaced by variable values. Number-based\n formatting allows users to use index numbers to represent the corresponding\n argument value to appear in the string.","Published":"2015-09-09","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rprojroot","Version":"1.2","Title":"Finding Files in Project Subdirectories","Description":"Robust, reliable and flexible paths to files below a\n project root. 
The 'root' of a project is defined as a directory\n that matches a certain criterion, e.g., it contains a certain\n regular file.","Published":"2017-01-16","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"RProtoBuf","Version":"0.4.9","Title":"R Interface to the 'Protocol Buffers' 'API' (Version 2 or 3)","Description":"Protocol Buffers are a way of encoding structured data in an\n efficient yet extensible format. Google uses Protocol Buffers for almost all\n of its internal 'RPC' protocols and file formats. Additional documentation\n is available in two included vignettes, one of which corresponds to our paper\n in the Journal of Statistical Software (2016, v71i02). Either version 2 or 3\n of the 'Protocol Buffers' 'API' is supported.","Published":"2017-03-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rpsftm","Version":"1.1.0","Title":"Rank Preserving Structural Failure Time Models","Description":"Implements methods described in the paper by Robins and Tsiatis (1991) . These use g-estimation to estimate the causal effect of a treatment in a two-armed randomised control trial where non-compliance exists and is measured, under an assumption of an accelerated failure time model and no unmeasured confounders.","Published":"2017-04-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rpst","Version":"1.0.0","Title":"Recursive Partitioning Survival Trees","Description":"An implementation of Recursive Partitioning Survival Trees via a node-splitting rule that builds decision tree models that reflect within-node and within-treatment responses. The algorithm aims to find the maximal difference in survival time among different treatments.","Published":"2017-06-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rpsychi","Version":"0.8","Title":"Statistics for psychiatric research","Description":"The rpsychi package offers a number of functions for psychiatry, psychiatric nursing, and clinical psychology. 
Functions are primarily for statistical significance testing using published work. For example, you can conduct a factorial analysis of variance (ANOVA), which requires only the mean, standard deviation, and sample size for each cell, rather than the individual data. This package covers fundamental statistical tests such as t-test, chi-square test, analysis of variance, and multiple regression analysis. With some exceptions, you can obtain effect size and its confidence interval. These functions help you to obtain effect size from published work, and then to conduct a priori power analysis or meta-analysis, even if a researcher does not report effect size in a published work.","Published":"2012-01-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RPtests","Version":"0.1.4","Title":"Goodness of Fit Tests for High-Dimensional Linear Regression\nModels","Description":"Performs goodness-of-fit tests for both high- and low-dimensional linear models.\n It can test for a variety of model misspecifications including nonlinearity and heteroscedasticity.\n In addition, one can test the significance of potentially large groups of variables, and also\n produce p-values for the significance of individual variables in high-dimensional linear\n regression.","Published":"2017-04-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rptR","Version":"0.9.2","Title":"Repeatability Estimation for Gaussian and Non-Gaussian Data","Description":"Estimating repeatability (intra-class\n correlation) from Gaussian, binary, proportion and count data.","Published":"2017-04-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rpubchem","Version":"1.5.10","Title":"An Interface to the PubChem Collection","Description":"Access PubChem data (compounds, substances, assays) using R.\n Structural information is provided in the form of SMILES strings. \n It currently only provides access to a subset of the \n precalculated data stored by PubChem. 
Bio-assay data can be accessed to \n obtain descriptions as well as the actual data. It is also possible to search for assay IDs by keyword. ","Published":"2016-12-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RPublica","Version":"0.1.3","Title":"ProPublica API Client","Description":"Client for accessing data journalism APIs from ProPublica .","Published":"2015-12-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RPushbullet","Version":"0.3.1","Title":"R Interface to the Pushbullet Messaging Service","Description":"An R interface to the Pushbullet messaging service which\n provides fast and efficient notifications (and file transfer) between\n computers, phones and tablets. An account has to be registered at the\n http://www.pushbullet.com site to obtain a (free) API key.","Published":"2017-02-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RPyGeo","Version":"0.9-3","Title":"ArcGIS Geoprocessing in R via Python","Description":"Provides access to (virtually any) ArcGIS Geoprocessing\n tool from within R by running Python geoprocessing scripts\n without writing Python code or touching ArcGIS. Requires ArcGIS\n >=9.2, a suitable version of Python (for ArcGIS 9.2: Python\n 2.4; for ArcGIS 10.0: 2.6), and Windows.","Published":"2012-10-29","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"rPython","Version":"0.0-6","Title":"Package Allowing R to Call Python","Description":"Run Python code, make function calls, assign and retrieve variables, etc. 
from R.","Published":"2015-11-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RQDA","Version":"0.2-8","Title":"R-Based Qualitative Data Analysis","Description":"The current version only supports plain text, but it can import PDF highlights if package 'rjpod' () is installed.","Published":"2016-12-12","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RQGIS","Version":"1.0.0","Title":"Integrating R with QGIS","Description":"Establishes an interface between R and 'QGIS', i.e. it allows\n the user to access 'QGIS' functionalities from the R console. It achieves this\n by using the 'QGIS' Python API via the command line. Hence, RQGIS extends R's\n statistical power with the incredibly vast geo-functionality of 'QGIS' (including\n 'GDAL', 'SAGA' and 'GRASS' GIS, among other third-party providers).\n This in turn creates a powerful environment for advanced and innovative\n (geo-)statistical geocomputing. 'QGIS' is licensed under GPL version 2 or\n greater and is available from .","Published":"2017-05-25","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rqPen","Version":"2.0","Title":"Penalized Quantile Regression","Description":"Performs penalized quantile regression with LASSO, SCAD and MCP penalty functions, including group penalties. Provides a function that automatically generates lambdas and evaluates different models with cross validation or BIC, including a large p version of BIC. ","Published":"2017-05-16","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"Rquake","Version":"2.4-0","Title":"Seismic Hypocenter Determination","Description":"Hypocenter estimation and analysis of seismic data collected continuously, or in trigger mode. 
The functions organize other functions from RSEIS and GEOmap to help researchers pick, locate, and store hypocenters for detailed seismic investigation.","Published":"2016-08-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RQuantLib","Version":"0.4.3","Title":"R Interface to the 'QuantLib' Library","Description":"The 'RQuantLib' package makes parts of 'QuantLib' accessible from R.\n The 'QuantLib' project aims to provide a comprehensive software framework\n for quantitative finance. The goal is to provide a standard open source library\n for quantitative analysis, modeling, trading, and risk management of financial\n assets.","Published":"2016-08-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rr","Version":"1.4","Title":"Statistical Methods for the Randomized Response Technique","Description":"Enables researchers to conduct multivariate statistical analyses\n of survey data with randomized response technique items from several designs,\n including mirrored question, forced question, and unrelated question. This\n includes regression with the randomized response as the outcome and logistic\n regression with the randomized response item as a predictor. In addition,\n tools for conducting power analysis for designing randomized response items\n are included. 
The package implements methods described in Blair, Imai, and Zhou\n (2015) ''Design and Analysis of the Randomized Response Technique,'' Journal\n of the American Statistical Association \n .","Published":"2016-08-17","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"Rramas","Version":"0.1-4","Title":"Matrix population models","Description":"Analyzes and predicts from matrix population models in the manner of the Ramas (c) software.","Published":"2014-01-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rRAP","Version":"1.1","Title":"Real-Time Adaptive Penalization for Streaming Lasso Models","Description":"An implementation of the Real-time Adaptive Penalization (RAP) algorithm through which to iteratively update a regularization parameter in a streaming context. ","Published":"2016-10-31","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RRate","Version":"1.0","Title":"Estimating Replication Rate for Genome-Wide Association Studies","Description":"Replication Rate (RR) is the probability of replicating a statistically significant association in genome-wide association studies. This R package provides an estimation method for the replication rate which makes use of the summary statistics from the primary study. We can use the estimated RR to determine the sample size of the replication study, and to check the consistency between the results of the primary study and those of the replication study.","Published":"2016-08-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rrBLUP","Version":"4.5","Title":"Ridge Regression and Other Kernels for Genomic Selection","Description":"Software for genomic prediction with the RR-BLUP mixed model. 
One application is to estimate marker effects by ridge regression; alternatively, BLUPs can be calculated based on an additive relationship matrix or a Gaussian kernel.","Published":"2017-06-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rrBlupMethod6","Version":"1.3","Title":"Re-parametrization of RR-BLUP to allow for a fixed residual\nvariance","Description":"rrBlupMethod6 -- Re-parametrization of the mixed model\n formulation to allow for a fixed residual variance when using\n RR-BLUP for genome-wide estimation of marker effects and linear\n transformation of the adjusted means proposed by Piepho et\n al. (2011).","Published":"2012-10-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rrcov","Version":"1.4-3","Title":"Scalable Robust Estimators with High Breakdown Point","Description":"Robust Location and Scatter Estimation and Robust\n Multivariate Analysis with High Breakdown Point.","Published":"2016-09-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rrcov3way","Version":"0.1-10","Title":"Robust Methods for Multiway Data Analysis, Applicable also for\nCompositional Data","Description":"Provides methods for multiway data analysis by means of Parafac\n and Tucker 3 models. Robust versions (Engelen and Hubert (2011) ) and versions\n for compositional data are also provided (Gallo (2015) , Di Palma et al. 
(in press)).","Published":"2017-06-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rrcovHD","Version":"0.2-5","Title":"Robust Multivariate Methods for High Dimensional Data","Description":"Robust multivariate methods for high dimensional data including\n outlier detection, PCA, PLS and classification.","Published":"2016-11-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rrcovNA","Version":"0.4-9","Title":"Scalable Robust Estimators with High Breakdown Point for\nIncomplete Data","Description":"Robust Location and Scatter Estimation and Robust\n Multivariate Analysis with High Breakdown Point for Incomplete\n Data.","Published":"2016-09-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Rrdrand","Version":"0.1-14","Title":"'DRNG' on Intel CPUs with the 'RdRand' Instruction for R","Description":"Make use of the hardware random number accessed by the 'RdRand'\n instruction in recent Intel CPUs (Ivy Bridge and later).\n 'DRNG' is \"Digital Random Number Generator\".","Published":"2015-05-28","License":"AGPL-3","snapshot_date":"2017-06-23"} {"Package":"rrecsys","Version":"0.9.5.4","Title":"Environment for Assessing Recommender Systems","Description":"Provides implementations of several popular recommendation systems. They can process standard recommendation datasets (user/item matrix) as input and generate rating predictions and recommendation lists. Standard algorithm implementations included in this package are: Global/Item/User-Average baselines, Item-Based KNN, FunkSVD, BPR and weighted ALS. They can be assessed according to the standard offline evaluation methodology for recommender systems using measures such as MAE, RMSE, Precision, Recall, AUC, NDCG, RankScore and coverage measures. The package is intended for rapid prototyping of recommendation algorithms and education purposes. 
","Published":"2016-06-27","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rredis","Version":"1.7.0","Title":"\"Redis\" Key/Value Database Client","Description":"R client interface to the \"Redis\" key-value database.","Published":"2015-07-05","License":"Apache License (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"rredlist","Version":"0.3.0","Title":"'IUCN' Red List Client","Description":"'IUCN' Red List () client.\n The 'IUCN' Red List is a global list of threatened and endangered species.\n Functions cover all of the Red List 'API' routes. An 'API' key is required.","Published":"2017-01-12","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RRedshiftSQL","Version":"0.1.2","Title":"R Interface to the 'Redshift' Database","Description":"Superclasses 'PostgreSQL' connection to help enable full 'dplyr' functionality on 'Redshift'.","Published":"2016-09-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rrefine","Version":"1.0","Title":"R Client for OpenRefine API","Description":"'OpenRefine' (formerly 'Google Refine') is a popular, open source data cleaning software. This package enables users to programmatically trigger data transfer between R and 'OpenRefine'. Available functionality includes project import, export and deletion.","Published":"2016-04-09","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rrepast","Version":"0.6.0","Title":"Invoke 'Repast Simphony' Simulation Models","Description":"An R and Repast integration tool for running individual-based\n (IbM) simulation models developed using 'Repast Simphony' Agent-Based framework\n directly from R code. 
This package integrates 'Repast Simphony' models within\n the R environment, making it easier to run and analyze model output\n data for automated parameter calibration and for carrying out uncertainty and\n sensitivity analysis using the power of the R environment.","Published":"2017-05-15","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RRF","Version":"1.7","Title":"Regularized Random Forest","Description":"Feature Selection with Regularized Random Forest. This\n package is based on the 'randomForest' package by Andy Liaw.\n The key difference is the RRF() function that builds a\n regularized random forest.","Published":"2017-01-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rriskDistributions","Version":"2.1.2","Title":"Fitting Distributions to Given Data or Known Quantiles","Description":"Collection of functions for fitting distributions to given data or\n by known quantiles. Two main functions, fit.perc() and fit.cont(), provide\n users with a GUI that allows them to choose the most appropriate distribution without\n any knowledge of R syntax. 
Note, this package is a part of the 'rrisk'\n project.","Published":"2017-03-24","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"rrlda","Version":"1.1","Title":"Robust Regularized Linear Discriminant Analysis","Description":"This package offers methods to perform robust regularized\n linear discriminant analysis.","Published":"2012-06-09","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"RRNA","Version":"1.0","Title":"Secondary Structure Plotting for RNA","Description":"Functions for creating and manipulating RNA secondary structure plots.","Published":"2015-07-29","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rrpack","Version":"0.1-5","Title":"Reduced-Rank Regression","Description":"Multivariate regression methodologies including reduced-rank\n regression (RRR), reduced-rank ridge regression (RRS), robust reduced-rank\n regression (R4), generalized/mixed-response reduced-rank regression (mRRR),\n row-sparse reduced-rank regression (SRRR), reduced-rank regression with a\n sparse singular value decomposition (RSSVD), and sparse and orthogonal\n factor regression (SOFAR).","Published":"2017-06-22","License":"GPL (>= 3.0)","snapshot_date":"2017-06-23"} {"Package":"rrr","Version":"1.0.0","Title":"Reduced-Rank Regression","Description":"Reduced-rank regression, diagnostics and graphics.","Published":"2016-12-09","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"RRreg","Version":"0.6.2","Title":"Correlation and Regression Analyses for Randomized Response Data","Description":"Univariate and multivariate methods to analyze randomized response (RR) survey designs (e.g., Warner, S. L. (1965). Randomized response: A survey technique for eliminating evasive answer bias. Journal of the American Statistical Association, 60, 63–69). 
Besides univariate estimates of true proportions, RR variables can be used for correlations, as dependent variable in a logistic regression (with or without random effects), as predictors in a linear regression, or as dependent variable in a beta-binomial ANOVA. For simulation and bootstrap purposes, RR data can be generated according to several models.","Published":"2017-03-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RRTCS","Version":"0.0.3","Title":"Randomized Response Techniques for Complex Surveys","Description":"Point and interval estimation of linear parameters with data\n obtained from complex surveys (including stratified and clustered samples)\n when randomization techniques are used. The randomized response technique\n was developed to obtain estimates that are more valid when studying\n sensitive topics. Estimators and variances for 14 randomized response\n methods for qualitative variables and 7 randomized response methods for\n quantitative variables are also implemented. In addition, some data sets\n from surveys with these randomization methods are included in the package.","Published":"2015-11-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RSA","Version":"0.9.10","Title":"Response Surface Analysis","Description":"Advanced response surface analysis. The main function RSA computes\n and compares several nested polynomial regression models (full polynomial,\n shifted and rotated squared differences, rising ridge surfaces, basic\n squared differences). The package provides plotting functions for 3d\n wireframe surfaces, interactive 3d plots, and contour plots. 
Calculates\n many surface parameters (a1 to a4, principal axes, stationary point,\n eigenvalues) and provides standard, robust, or bootstrapped standard errors\n and confidence intervals for them.","Published":"2016-02-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RSADBE","Version":"1.0","Title":"Data related to the book \"R Statistical Application Development\nby Example\"","Description":"The package contains all the data sets related to the book\n written by the maintainer of the package.","Published":"2013-06-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rsae","Version":"0.1-5","Title":"Robust Small Area Estimation","Description":"Robust Small Area Estimation. Robust Basic Unit- and Area-Level Models","Published":"2014-02-13","License":"GPL (>= 2) | FreeBSD","snapshot_date":"2017-06-23"} {"Package":"RSAGA","Version":"0.94-5","Title":"SAGA Geoprocessing and Terrain Analysis in R","Description":"Provides access to geocomputing and terrain analysis\n functions of the geographical information system (GIS) 'SAGA' (System for\n Automated Geoscientific Analyses) from within R by running the command \n line version of SAGA. This package furthermore provides several R functions\n for handling ASCII grids, including a flexible framework for applying local\n functions (including predict methods of fitted models) and focal functions to\n multiple grids. 
SAGA GIS is available under GPLv2 / LGPLv2 licence from\n http://sourceforge.net/projects/saga-gis/.","Published":"2016-01-05","License":"GPL-2 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RSAgeo","Version":"1.2","Title":"Resampling-Based Analysis of Geostatistical Data","Description":"Performs parameter estimation for geostatistical data using a resampling-based stochastic approximation (RSA) method.","Published":"2016-05-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"Rsampletrees","Version":"1.0","Title":"MCMC Sampling of Gene Genealogies Conditional on Genetic Data","Description":"Sample ancestral trees conditional on phased or unphased SNP genotype data. The actual tree sampling is done using a C++ program that is launched within R. The package also contains functions for specifying the tree-sampling settings (pre-processing) and for storing and manipulating the sampled trees (post-processing). More information about 'sampletrees' can be found at . ","Published":"2017-04-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Rsampling","Version":"0.1.1","Title":"Ports the Workflow of \"Resampling Stats\" Add-in to R","Description":"Resampling Stats (http://www.resample.com) is an add-in for\n running randomization tests in Excel worksheets. The workflow is (1) to define\n a statistic of interest that can be calculated from a data table, (2) to\n randomize rows and/or columns of a data table to simulate a null hypothesis,\n and (3) to score the value of the statistic from many randomizations. The\n relative frequency distribution of the statistic in the simulations is then\n used to infer the probability of the observed value being generated by the null\n process (probability of Type I error). This package intends to translate this\n logic to R for teaching purposes. 
Keeping the original workflow is favored over\n performance.","Published":"2016-06-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RSAP","Version":"0.9","Title":"SAP Netweaver RFC connector for R","Description":"The SAP Netweaver RFC connector for R","Published":"2013-02-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rSARP","Version":"1.0.0","Title":"Functions to Create and Evaluate Search and Rescue Plans","Description":"Tools to create, evaluate, critique,\n revise, track progress, and communicate a detailed wilderness or urban\n search plan to management. This package uses and creates csv files in the R\n working directory to document inputs and results. It also creates a series\n of PDF and PNG files to accomplish communication of the plan. The program\n creates and revises search plans using Bayesian models. The package\n includes functions bestsearch(), searchstatus() and searchme() to model the\n number of searchers and hours required to search an area, calculate the\n probability of detection, probability of success, and project the best plan\n given limited resources.","Published":"2016-05-25","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"RSarules","Version":"1.0","Title":"Random Sampling Association Rules from a Transaction Dataset","Description":"Implements the Gibbs sampling algorithm to randomly sample association rules with one pre-chosen item as the consequent from a transaction dataset. The Gibbs sampling algorithm was proposed in G. Qian, C.R. Rao, X. Sun and Y. Wu (2016) . 
","Published":"2016-10-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rsatscan","Version":"0.3.9200","Title":"Tools, Classes, and Methods for Interfacing with SaTScan\nStand-Alone Software","Description":"SaTScan(TM) (http://www.satscan.org) is software for finding regions in \n Time, Space, or Time-Space that have excess risk, based on scan statistics, and \n\tuses Monte Carlo hypothesis testing to generate P-values for these regions. The \n\trsatscan package provides functions for writing R data frames in \n\tSaTScan-readable formats, for setting SaTScan parameters, for running SaTScan in \n\tthe OS, and for reading the files that SaTScan creates. ","Published":"2015-03-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"RSauceLabs","Version":"0.1.6","Title":"R Wrapper for 'SauceLabs' REST API","Description":"Retrieve, update, delete job information from . Poll the 'SauceLabs' service's\n current status and access supported platforms. Send and retrieve files from 'SauceLabs' and manage tunnels associated\n with 'SauceConnect'.","Published":"2016-09-27","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rSCA","Version":"2.1","Title":"An R Package for Stepwise Cluster Analysis","Description":"This package implements a statistical tool for modeling multivariate relationships using a stepwise cluster analysis (SCA) method.","Published":"2014-05-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RSCABS","Version":"0.9.2","Title":"Rao-Scott Cochran-Armitage by Slices Trend Test","Description":"Performs the Rao-Scott Cochran-Armitage by Slices trend test (RSCABS) used \n\tin analysis of histopathological endpoints, built to be used with either a GUI or from the\n\tcommand line. The RSCABS method is detailed in \"Statistical analysis of \n\thistopathological endpoints\" by John Green et al. (2014) . 
","Published":"2017-04-26","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"rscala","Version":"2.2.2","Title":"Bi-Directional Interface Between R and Scala with Callbacks","Description":"The Scala interpreter is embedded in R and callbacks to R from the embedded interpreter are supported. Conversely, the R interpreter is embedded in Scala. Scala versions in the 2.10.x, 2.11.x, and 2.12.x series are supported.","Published":"2017-05-25","License":"GPL (>= 2) | BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rscimark","Version":"1.0","Title":"SciMark 2.0 Benchmark for Scientific and Numerical Computing","Description":"The SciMark 2.0 benchmark was originally developed in Java as a benchmark for numerical and scientific computational performance. It measures the performance of several computational kernels which occur frequently in scientific applications. This package is a simple wrapper around the ANSI C implementation of the benchmark.","Published":"2016-03-17","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RSclient","Version":"0.7-3","Title":"Client for Rserve","Description":"Client for Rserve, allowing one to connect to Rserve instances and issue commands.","Published":"2015-07-28","License":"GPL-2 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rsconnect","Version":"0.8","Title":"Deployment Interface for R Markdown Documents and Shiny\nApplications","Description":"Programmatic deployment interface for 'RPubs', 'shinyapps.io', and\n 'RStudio Connect'. 
Supported content types include R Markdown documents,\n Shiny applications, plots, and static web content.","Published":"2017-05-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rscopus","Version":"0.4.6","Title":"Scopus Database 'API' Interface","Description":"Uses Elsevier 'Scopus' 'API'\n to download \n information about authors and their citations.","Published":"2017-02-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rscorecard","Version":"0.3.5","Title":"A Method to Download Department of Education College Scorecard\nData","Description":"A method to download Department of Education College\n Scorecard data using the public API\n . It is based on\n the 'dplyr' model of piped commands to select and filter data in a\n single chained function call. An API key from the U.S. Department of\n Education is required.","Published":"2017-01-27","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RSDA","Version":"2.0","Title":"R to Symbolic Data Analysis","Description":"Symbolic Data Analysis (SDA) was proposed by Professor Edwin Diday in 1987; the main purpose of SDA is to substitute the set of rows (cases) in the data table with a concept (second-order statistical unit). This package implements, for the symbolic case, certain techniques of automatic classification, as well as some linear models.","Published":"2017-06-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rsdepth","Version":"0.1-5","Title":"Ray Shooting Depth (i.e. RS Depth) functions for bivariate\nanalysis","Description":"Ray Shooting Depth functions are provided for bivariate analysis. 
","Published":"2014-06-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rsdmx","Version":"0.5-8","Title":"Tools for Reading SDMX Data and Metadata","Description":"Set of classes and methods to read data and metadata documents\n exchanged through the Statistical Data and Metadata Exchange (SDMX) framework,\n currently focusing on the SDMX XML standard format (SDMX-ML).","Published":"2017-02-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RSeed","Version":"0.1.60","Title":"Borenstein Analysis","Description":"An implementation of the analysis of seed components from Borenstein et al. (2008).","Published":"2016-10-07","License":"GNU General Public License","snapshot_date":"2017-06-23"} {"Package":"rseedcalc","Version":"1.3","Title":"Estimating the Proportion of Genetically Modified Seeds in\nSeedlots via Multinomial Group Testing","Description":"Estimate the percentage of seeds in a seedlot that contain stacks\n of genetically modified traits. Estimates are calculated using a\n multinomial group testing model with maximum likelihood estimation of the\n parameters.","Published":"2015-07-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RSEIS","Version":"3.7-4","Title":"Seismic Time Series Analysis Tools","Description":"Multiple interactive codes to view and analyze seismic data, via spectrum analysis, wavelet transforms, particle motion, hodograms. Includes general time-series tools, plotting, filtering, interactive display.","Published":"2017-03-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RSelenium","Version":"1.7.1","Title":"R Bindings for 'Selenium WebDriver'","Description":"Provides a set of R bindings for the 'Selenium 2.0 WebDriver'\n (see \n for more information) using the 'JsonWireProtocol' (see\n for more\n information). 
'Selenium 2.0 WebDriver' allows driving a web browser\n natively, as a user would, either locally or on a remote machine using\n the Selenium server; it marks a leap forward in terms of web browser\n automation. Using RSelenium you can automate browsers locally or\n remotely.","Published":"2017-01-24","License":"AGPL-3","snapshot_date":"2017-06-23"} {"Package":"rsem","Version":"0.4.6","Title":"Robust Structural Equation Modeling with Missing Data and\nAuxiliary Variables","Description":"A robust procedure is implemented to estimate the means and covariance matrix of multiple variables with missing data using Huber weights, and then to estimate a structural equation model.","Published":"2015-05-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RSentiment","Version":"2.1.4","Title":"Analyse Sentiment of English Sentences","Description":"Analyses sentiment of a sentence in English and assigns a score to it. It can classify sentences into the following categories of sentiment: Positive, Negative, Very Positive, Very Negative, and\n Neutral. For a vector of sentences, it counts the number of sentences in each\n category of sentiment. In calculating the score, negation and various degrees\n of adjectives are taken into consideration. It deals only with English sentences.","Published":"2017-06-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Rserve","Version":"1.7-3","Title":"Binary R server","Description":"Rserve acts as a socket server (TCP/IP or local sockets) \n\t which allows binary requests to be sent to R. Every\n\t connection has a separate workspace and working\n\t directory. Client-side implementations are available\n\t for popular languages such as C/C++ and Java, allowing\n\t any application to use facilities of R without the need of\n\t linking to R code. Rserve supports remote connection,\n\t user authentication and file transfer. 
A simple R client\n\t is included in this package as well.","Published":"2013-08-21","License":"GPL-2 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rSFA","Version":"1.04","Title":"Slow Feature Analysis in R","Description":"Slow Feature Analysis in R, ported from the Matlab versions:\n SFA Toolkit 1.0 by Pietro Berkes and SFA Toolkit\n 2.8 by Wolfgang Konen.","Published":"2014-12-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rsgcc","Version":"1.0.6","Title":"Gini methodology-based correlation and clustering analysis of\nmicroarray and RNA-Seq gene expression data","Description":"This package provides functions for calculating\n associations between two genes with five correlation\n methods (the Gini correlation coefficient [GCC], the\n Pearson product-moment correlation coefficient [PCC], the\n Kendall tau rank correlation coefficient [KCC], the Spearman\n rank correlation coefficient [SCC], and the Tukey biweight\n correlation coefficient [BiWt]) and three non-correlation\n methods (mutual information [MI], the maximal\n information-based nonparametric exploration [MINE], and the\n Euclidean distance [ED]). It can also be used to\n perform correlation and clustering analysis of\n transcriptomic data profiled by microarray and RNA-Seq\n technologies. 
Additionally, this package can be further applied\n to construct gene co-expression networks (GCNs).","Published":"2013-06-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rsggm","Version":"0.3","Title":"Robust Sparse Gaussian Graphical Modeling via the\nGamma-Divergence","Description":"Robust estimation of a sparse inverse covariance matrix via the gamma-divergence.","Published":"2015-08-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RSGHB","Version":"1.1.2","Title":"Functions for Hierarchical Bayesian Estimation: A Flexible\nApproach","Description":"Functions for estimating models using a Hierarchical Bayesian (HB) framework. The flexibility comes in allowing the user to specify the likelihood function directly instead of assuming predetermined model structures. Types of models that can be estimated with this code include the family of discrete choice models (Multinomial Logit, Mixed Logit, Nested Logit, Error Components Logit and Latent Class) as well as ordered response models like ordered probit and ordered logit. In addition, the package allows for flexibility in specifying parameters as either fixed (non-varying across individuals) or random with continuous distributions. Parameter distributions supported include normal, positive/negative log-normal, positive/negative censored normal, and the Johnson SB distribution. Kenneth Train's Matlab and Gauss code for doing Hierarchical Bayesian estimation has served as the basis for a few of the functions included in this package. These Matlab/Gauss functions have been rewritten to be optimized within R. Considerable code has been added to increase the flexibility and usability of the code base. 
Train's original Gauss and Matlab code can be found here: http://elsa.berkeley.edu/Software/abstracts/train1006mxlhb.html See Train's chapter on HB in Discrete Choice Methods with Simulation here: http://elsa.berkeley.edu/books/choice2.html; and his paper on using HB with non-normal distributions here: http://eml.berkeley.edu//~train/trainsonnier.pdf.","Published":"2015-12-16","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"RSiena","Version":"1.1-232","Title":"Siena - Simulation Investigation for Empirical Network Analysis","Description":"Fits models to longitudinal network data.","Published":"2013-06-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rsig","Version":"1.0","Title":"Robust Signature Selection for Survival Outcomes","Description":"Robust and efficient feature selection algorithm to\n identify important features for predicting survival risk.\n The method is based on subsampling and averaging linear models\n obtained from the (preconditioned) Lasso algorithm, with an extra \n shrinking procedure to reduce the size of signatures. An \n evaluation procedure using subsampling is also provided.","Published":"2013-10-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RsimMosaic","Version":"1.0.3","Title":"R Simple Image Mosaic Creation Library","Description":"Provides a way to transform an image into a mosaic composed from a set of smaller images (tiles). It also contains a simple function for creating the tiles from a folder of images directly through R, without the need for any external code. 
At this moment, only the JPEG format is supported, both as input (image and tiles) and as output (the mosaic image).","Published":"2017-04-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RSIP","Version":"1.0.0","Title":"Remote Sensing and Image Processing","Description":"Performs operations on raster images, such as viewing maps\n as a time series and exporting time series of values for specific locations, the whole image,\n or locations within a polygon. Also processes remotely sensed climatic variables distributed in space (2D maps) and time (time series).","Published":"2016-11-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RSiteCatalyst","Version":"1.4.12","Title":"R Client for Adobe Analytics API V1.4","Description":"Functions for interacting with the Adobe Analytics API V1.4\n ().","Published":"2017-04-07","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RSKC","Version":"2.4.2","Title":"Robust Sparse K-Means","Description":"Contains the function RSKC, which runs the robust sparse K-means clustering algorithm.","Published":"2016-08-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rslp","Version":"0.1.0","Title":"A Stemming Algorithm for the Portuguese Language","Description":"Implements the \"Stemming Algorithm for the Portuguese Language\" .","Published":"2016-08-31","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rslurm","Version":"0.3.3","Title":"Submit R Calculations to a 'SLURM' Cluster","Description":"Functions that simplify submitting R scripts to a 'SLURM' cluster\n workload manager, in part by automating the division of embarrassingly parallel\n calculations across cluster nodes.","Published":"2017-04-09","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rsm","Version":"2.8","Title":"Response-Surface Analysis","Description":"Provides functions to generate response-surface designs, \n fit first- and second-order 
response-surface models, \n make surface plots, obtain the path of steepest ascent, \n and do canonical analysis.","Published":"2016-10-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RSmartlyIO","Version":"0.1.2","Title":"Loading Facebook and Instagram Advertising Data from\n'Smartly.io'","Description":"Aims at loading Facebook and Instagram advertising data from\n 'Smartly.io' into R. 'Smartly.io' is an online advertising service that enables\n advertisers to display commercial ads on social media networks (see for more information).\n The package offers an interface to query the 'Smartly.io' API and loads data directly into R for further data processing and data analysis.","Published":"2017-06-08","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RSMET","Version":"1.2.9","Title":"Get Real-Time Meteorological Data in SMET Format","Description":"Manages local snow and weather time series as provided by MeteoIO (, , ). MeteoIO is a C/C++ Open Source library which \"has been designed to accomodate both the needs of carefully crafted simulations for a specific purpose/study and for the needs of operational simulations that run automatically and unattended\". It is integrated into physical spatially-distributed models and tackles several issues with weather input/output data. Here a SMET S4 class object is defined and can be imported from / exported to SMET ini files of MeteoIO, allowing interoperability from R to MeteoIO and other SMET-compliant software. ","Published":"2016-03-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rsml","Version":"1.3","Title":"Plant Root System Markup Language (RSML) File Processing","Description":"Read and analyse Root System Markup Language (RSML) files, used to\n store plant root system architecture data. 
More information can be found\n at the address .","Published":"2016-01-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RSNNS","Version":"0.4-9","Title":"Neural Networks in R using the Stuttgart Neural Network\nSimulator (SNNS)","Description":"The Stuttgart Neural Network Simulator (SNNS) is a library\n containing many standard implementations of neural networks. This\n package wraps the SNNS functionality to make it available from\n within R. Using the 'RSNNS' low-level interface, all of the\n algorithmic functionality and flexibility of SNNS can be accessed.\n Furthermore, the package contains a convenient high-level\n interface, so that the most common neural network topologies and\n learning algorithms integrate seamlessly into R.","Published":"2016-12-16","License":"LGPL (>= 2) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rsnps","Version":"0.2.0","Title":"Get 'SNP' ('Single-Nucleotide' 'Polymorphism') Data on the Web","Description":"A programmatic interface to various 'SNP' 'datasets'\n on the web: 'OpenSNP' (), 'NCBI's' 'dbSNP' database \n (), and Broad Institute 'SNP'\n Annotation and Proxy Search \n (). Functions \n are included for searching for 'SNPs' for the Broad Institute and 'NCBI'. \n For 'OpenSNP', functions are included for getting 'SNPs', and data for \n 'genotypes', 'phenotypes', annotations, and bulk downloads of data by user.","Published":"2016-11-20","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RSNPset","Version":"0.5.2","Title":"Efficient Score Statistics for Genome-Wide SNP Set Analysis","Description":"An implementation of the use of efficient score statistics\n in genome-wide SNP set analysis with complex traits. Three standard score statistics\n (Cox, binomial, and Gaussian) are provided, but the package is easily extensible to\n include others. Code implementing the inferential procedure is primarily written in C++ and\n utilizes parallelization of the analysis to reduce runtime. 
A supporting function offers\n simple computation of observed, permutation, and FWER and FDR adjusted p-values.","Published":"2017-02-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"RSocrata","Version":"1.7.3-2","Title":"Download or Upload 'Socrata' Data Sets","Description":"Provides easier interaction with\n Socrata open data portals .\n Users can provide a 'Socrata' data set resource URL,\n a 'Socrata' Open Data API (SoDA) web query,\n or a 'Socrata' \"human-friendly\" URL,\n and an R data frame is returned. Converts dates to 'POSIX'\n format and manages throttling by 'Socrata'.\n Users can upload data to Socrata portals directly\n from R.","Published":"2017-06-23","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rsoi","Version":"0.2.3","Title":"El Nino/Southern Oscillation (ENSO) Index","Description":"Downloads Southern Oscillation Index and Oceanic Nino Index data.","Published":"2017-06-08","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"Rsolnp","Version":"1.16","Title":"General Non-Linear Optimization","Description":"General non-linear optimization using the Augmented Lagrange Multiplier method.","Published":"2015-12-28","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"rsolr","Version":"0.0.4","Title":"R to Solr Interface","Description":"A comprehensive R API for querying Apache Solr databases.\n A Solr core is represented as a data frame or list that\n supports Solr-side filtering, sorting,\n transformation and aggregation, all through the familiar\n base R API. Queries are processed\n lazily, i.e., a query is only sent to the database when\n the data are required. ","Published":"2017-04-10","License":"Apache License (== 2.0)","snapshot_date":"2017-06-23"} {"Package":"Rsomoclu","Version":"1.7.4","Title":"Somoclu","Description":"Somoclu is a massively parallel implementation of self-organizing maps. It exploits multicore CPUs and it can be accelerated by CUDA. 
The topology of the map can be planar or toroidal, and the grid of neurons can be rectangular or hexagonal.","Published":"2017-06-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rspa","Version":"0.2.1","Title":"Adapt Numerical Records to Fit (in)Equality Restrictions","Description":"Minimally adjust the values of numerical records in a data.frame, such\n that each record satisfies a predefined set of equality and/or inequality\n constraints. The constraints can be defined using the 'validate' package. \n The core algorithms have recently been moved to the 'lintools' package,\n refer to 'lintools' for a more basic interface and access to a sparse version\n of the algorithm.","Published":"2017-06-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rSPACE","Version":"1.2.0","Title":"Spatially-Explicit Power Analysis for Conservation and Ecology","Description":"Conducts a spatially-explicit, simulation-based power analysis for detecting trends in population abundance through occupancy-based modeling. Applicable for evaluating monitoring designs in conservation and ecological settings.","Published":"2015-11-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rsparkling","Version":"0.2.0","Title":"R Interface for H2O Sparkling Water","Description":"An extension package for 'sparklyr' that provides an R interface to\n the H2O Sparkling Water machine learning library (see for more information).","Published":"2017-03-17","License":"Apache License 2.0 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RSpectra","Version":"0.12-0","Title":"Solvers for Large Scale Eigenvalue and SVD Problems","Description":"R interface to the 'Spectra' library\n for large scale eigenvalue and SVD\n problems. It is typically used to compute a few\n eigenvalues/vectors of an n by n matrix, e.g., the k largest eigenvalues,\n which is usually more efficient than eigen() if k << n. 
This package\n provides the 'eigs()' function, which does a similar job to the corresponding functions in 'Matlab',\n 'Octave', 'Python SciPy' and 'Julia'. It also provides the 'svds()' function\n to calculate the largest k singular values and corresponding\n singular vectors of a real matrix. Matrices can be given in either dense\n or sparse form.","Published":"2016-06-12","License":"MPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RSpincalc","Version":"1.0.2","Title":"Conversion Between Attitude Representations of DCM, Euler\nAngles, Quaternions, and Euler Vectors","Description":"Conversion between attitude representations: DCM, Euler angles, Quaternions, and Euler vectors.\n Plus conversion between 2 Euler angle set types (xyx, yzy, zxz, xzx, yxy, zyz, xyz, yzx, zxy, xzy, yxz, zyx).\n Fully vectorized code, with warnings/errors for Euler angles (singularity, out of range, invalid angle order), \n DCM (orthogonality, not proper, exceeded tolerance to unity determinant) and Euler vectors (not unity).\n Also includes quaternion and other useful functions.\n Based on SpinCalc by John Fuller and SpinConv by Paolo de Leva.","Published":"2015-07-17","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"RSPS","Version":"1.0","Title":"RNA-Seq Power Simulation","Description":"Provides functions for estimating power or sample size for RNA-Seq studies. An empirical approach is used, and the data are assumed to be counts. The underlying distribution of the data is assumed to be Poisson or negative binomial. 
The package contains six functions: four provide estimates of sample size or power for the Poisson and negative binomial distributions, and two provide plots of power for a given sample size, or of sample size for a given power.","Published":"2015-05-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rsq","Version":"1.0","Title":"R-Squared and Related Measures","Description":"Calculate generalized R-squared, partial R-squared, and partial correlation coefficients for generalized linear models (including quasi models with well defined variance functions).","Published":"2017-03-28","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RSQLite","Version":"2.0","Title":"'SQLite' Interface for R","Description":"Embeds the 'SQLite' database engine in R and\n provides an interface compliant with the 'DBI' package. The\n source for the 'SQLite' engine (version 3.8.8.2) is included.","Published":"2017-06-19","License":"LGPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RSQLServer","Version":"0.3.0","Title":"SQL Server R Database Interface (DBI) and 'dplyr' SQL Backend","Description":"Utilises the 'jTDS' project's 'JDBC' 3.0 'SQL Server'\n driver to extend 'DBI' classes and methods. The package also\n implements a 'SQL' backend to the 'dplyr' package.","Published":"2017-06-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Rssa","Version":"0.14","Title":"A Collection of Methods for Singular Spectrum Analysis","Description":"Methods and tools for Singular Spectrum Analysis including decomposition, forecasting and gap-filling for univariate and multivariate time series.","Published":"2016-10-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RSSL","Version":"0.6.1","Title":"Implementations of Semi-Supervised Learning Approaches for\nClassification","Description":"A collection of implementations of semi-supervised classifiers and methods to evaluate their performance. 
The package includes implementations of, among others, Implicitly Constrained Learning, Moment Constrained Learning, the Transductive SVM, Manifold regularization, Maximum Contrastive Pessimistic Likelihood estimation, S4VM and WellSVM.","Published":"2016-10-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RSSOP","Version":"1.1","Title":"Simulation of Supply Reservoir Systems using Standard Operation\nPolicy","Description":"Reservoir Systems Standard Operation Policy. A system for simulating supply reservoirs, offering functionality for plotting and evaluating supply reservoir systems.","Published":"2016-08-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rstack","Version":"1.0.0","Title":"Stack Data Type as an 'R6' Class","Description":"An extremely simple stack data type, implemented with 'R6'\n classes. The size of the stack increases as needed, and the amortized\n time complexity is O(1). The stack may contain arbitrary objects.","Published":"2016-08-02","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rstackdeque","Version":"1.1.1","Title":"Persistent Fast Amortized Stack and Queue Data Structures","Description":"Provides fast, persistent (side-effect-free) stack, queue and\n deque (double-ended-queue) data structures. While deques include a superset\n of functionality provided by queues, in these implementations queues are\n more efficient in some specialized situations. See the documentation for\n rstack, rdeque, and rpqueue for details.","Published":"2015-04-13","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rstan","Version":"2.15.1","Title":"R Interface to Stan","Description":"User-facing R functions are provided to parse, compile, test, \n estimate, and analyze Stan models by accessing the header-only Stan library \n provided by the 'StanHeaders' package. 
The Stan project develops a\n probabilistic programming language that implements full Bayesian statistical \n inference via Markov Chain Monte Carlo, rough Bayesian inference via 'variational'\n approximation, and (optionally penalized) maximum likelihood estimation via \n optimization. In all three cases, automatic differentiation is used to quickly \n and accurately evaluate gradients without burdening the user with the need \n to derive the partial derivatives.","Published":"2017-04-19","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"rstanarm","Version":"2.15.3","Title":"Bayesian Applied Regression Modeling via Stan","Description":"Estimates previously compiled regression models using the 'rstan' package,\n which provides the R interface to the Stan C++ library for Bayesian estimation.\n Users specify models via the customary R syntax with a formula and data.frame\n plus some additional arguments for priors.","Published":"2017-04-29","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"rstantools","Version":"1.2.0","Title":"Tools for Developing R Packages Interfacing with 'Stan'","Description":"Provides various tools for developers of R packages interfacing\n with 'Stan' , including functions to set up the required \n package structure, S3 generics and default methods to unify function naming \n across 'Stan'-based R packages, and a vignette with recommendations for \n developers.","Published":"2017-03-17","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"RStata","Version":"1.1.1","Title":"A Bit of Glue Between R and Stata","Description":"A simple R -> Stata interface allowing the user to\n execute Stata commands (both inline and from a .do file)\n from R.","Published":"2016-10-27","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rstatscn","Version":"1.1.1","Title":"R Interface for China National Data","Description":"R interface for China national data (http://data.stats.gov.cn/);\n some convenient functions 
for accessing the national data are provided.","Published":"2016-07-20","License":"Apache License 2.0","snapshot_date":"2017-06-23"} {"Package":"rstiefel","Version":"0.10","Title":"Random orthonormal matrix generation on the Stiefel manifold","Description":"This package simulates random orthonormal matrices from linear and quadratic exponential family distributions on the Stiefel manifold. The most general type of distribution covered is the matrix-variate Bingham-von Mises-Fisher distribution. Most of the simulation methods are presented in Hoff(2009) \"Simulation of the Matrix Bingham-von Mises-Fisher Distribution, With Applications to Multivariate and Relational Data.\"","Published":"2016-12-05","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"RStoolbox","Version":"0.1.8","Title":"Tools for Remote Sensing Data Analysis","Description":"Toolbox for remote sensing image processing and analysis such as\n calculating spectral indices, principal component transformation, unsupervised\n and supervised classification or fractional cover analyses.","Published":"2017-04-15","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"RStorm","Version":"0.902","Title":"Simulate and Develop Streaming Processing in [R]","Description":"While streaming processing provides opportunities to deal with extremely large and ever growing data sets in (near) real time, the development of streaming algorithms for complex models is often cumbersome: the software packages that facilitate streaming processing in production environments do not provide statisticians with the simulation, estimation, and plotting tools they are used to. Developers of streaming algorithms would thus benefit from the flexibility of [R] to create, plot and compute data while developing streaming algorithms. Package RStorm implements a streaming architecture modeled on Storm for easy development and testing of streaming algorithms in [R]. 
RStorm is not intended as a production package, but rather a development tool for streaming algorithms. ","Published":"2013-08-28","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rstpm2","Version":"1.3.4","Title":"Generalized Survival Models","Description":"R implementation of generalized survival models, where g(S(t|x))=eta(t,x) for a link function g, survival S at time t with covariates x and a linear predictor eta(t,x). The main assumption is that the time effect(s) are smooth. For fully parametric models, this re-implements Stata's 'stpm2' function, which provides the flexible parametric survival models developed by Royston and colleagues. We have extended the parametric models to include any smooth parametric smoothers for time. We have also extended the model to include any smooth penalized smoothers from the 'mgcv' package, using penalized likelihood. These models include left truncation, right censoring, interval censoring, gamma frailties and normal random effects. ","Published":"2016-10-09","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"rstream","Version":"1.3.5","Title":"Streams of Random Numbers","Description":"Unified object oriented interface for multiple independent streams of random numbers from different sources.","Published":"2017-06-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RStripe","Version":"0.1","Title":"A Convenience Interface for the Stripe Payment API","Description":"A convenience interface for communicating with the Stripe payment processor to accept payments online. 
See for more information.","Published":"2016-07-15","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rstudioapi","Version":"0.6","Title":"Safely Access the RStudio API","Description":"Access the RStudio API (if available) and provide informative error\n messages when it's not.","Published":"2016-06-27","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rsubgroup","Version":"0.6","Title":"Subgroup Discovery and Analytics","Description":"A collection of efficient and effective tools and\n\talgorithms for subgroup discovery and analytics. The package\n\tintegrates an R interface to the org.vikamine.kernel library\n\tof the VIKAMINE system (http://www.vikamine.org) implementing\n\tsubgroup discovery, pattern mining and analytics in Java.","Published":"2014-09-11","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"rsunlight","Version":"0.4.2","Title":"Interface to 'Sunlight' Foundation 'APIs'","Description":"Interface to three 'Sunlight' Foundation 'APIs' (http://\n sunlightfoundation.com/api/) for government data, including the\n Congress 'API' 'v3', the Capitol Words 'API', and the Open States\n 'API'. 'Sunlight' Foundation is a 'nonpartisan' 'nonprofit' that collects and\n provides data on government activities, and those that influence government.\n Functions are provided to interact with each of the three 'APIs'.","Published":"2015-12-31","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rsurfer","Version":"0.1","Title":"Manipulating 'Freesurfer' Generated Data","Description":"'Freesurfer' is an open-source software suite for the segmentation of brain MRIs (see for more information). This package provides functionality to import the data generated by 'Freesurfer', functions to easily manipulate the data, and brain-specific normalisation commonly used when studying structural brain MRIs. 
This package has been designed using an installation of, and data generated from, 'Freesurfer' version 5.3.","Published":"2017-05-03","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"Rsurrogate","Version":"2.0","Title":"Robust Estimation of the Proportion of Treatment Effect\nExplained by Surrogate Marker Information","Description":"Provides functions to estimate the proportion of treatment effect on the primary outcome that is explained by the treatment effect on the surrogate marker. ","Published":"2016-10-20","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"RSurveillance","Version":"0.2.0","Title":"Design and Analysis of Disease Surveillance Activities","Description":"A range of functions for the design and\n analysis of disease surveillance activities. These functions were\n originally developed for animal health surveillance activities but can be\n equally applied to aquatic animal, wildlife, plant and human health\n surveillance activities. Utilities are included for sample size calculation\n and analysis of representative surveys for disease freedom, risk-based\n studies for disease freedom and for prevalence estimation.","Published":"2016-10-04","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"RSurvey","Version":"0.9.1","Title":"Geographic Information System Application","Description":"A geographic information system (GIS) graphical user interface (GUI) that\n provides data viewing, management, and analysis tools.","Published":"2017-02-24","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"rsvd","Version":"0.6","Title":"Randomized Singular Value Decomposition","Description":"Randomized singular value decomposition (rsvd) is a very fast probabilistic algorithm that can be used to compute the near-optimal low-rank singular value decomposition of massive data sets with high accuracy. SVD plays a central role in data analysis and scientific computing. 
SVD is also widely used for computing (randomized) principal component analysis (PCA), a linear dimensionality reduction technique. Randomized PCA (rpca) uses the approximated singular value decomposition to compute the most significant principal components. This package also includes a function to compute (randomized) robust principal component analysis (RPCA). In addition several plot functions are provided.","Published":"2016-07-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rsvg","Version":"1.1","Title":"Render SVG Images into PDF, PNG, PostScript, or Bitmap Arrays","Description":"Renders vector-based svg images into high-quality custom-size bitmap\n arrays using 'librsvg2'. The resulting bitmap can be written to e.g. png, jpeg\n or webp format. In addition, the package can convert images directly to various\n formats such as pdf or postscript.","Published":"2017-03-21","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RSvgDevice","Version":"0.6.4.4","Title":"An R SVG graphics device","Description":"A graphics device for R that uses the w3.org xml standard\n for Scalable Vector Graphics.","Published":"2014-04-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RSVGTipsDevice","Version":"1.0-7","Title":"An R SVG Graphics Device with Dynamic Tips and Hyperlinks","Description":"A graphics device for R that uses the w3.org xml standard\n for Scalable Vector Graphics. 
This version supports\n tooltips with 1 to 3 lines, hyperlinks, and line styles.","Published":"2016-07-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Rsymphony","Version":"0.1-26","Title":"SYMPHONY in R","Description":"An R interface to the SYMPHONY solver for mixed-integer linear programs.","Published":"2017-02-20","License":"EPL","snapshot_date":"2017-06-23"} {"Package":"rSymPy","Version":"0.2-1.1","Title":"R interface to SymPy computer algebra system","Description":"Access SymPy computer algebra system from R via Jython","Published":"2012-10-29","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"rt3","Version":"0.1.2","Title":"Tic-Tac-Toe Package for R","Description":"Play the classic game of tic-tac-toe (naughts and crosses).","Published":"2016-12-05","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rtable","Version":"0.1.5","Title":"Tabular Reporting Functions","Description":"Provides tabular reporting functionalities to work with 'ReporteRs'\n package: 'as.FlexTable' methods are available for 'ftable' and 'xtable' objects,\n function 'FlexPivot' is producing a pivot table and 'freqtable' a percentage table,\n a 'knitr' print method and a 'shiny' render function are provided for 'FlexTable' objects.","Published":"2015-11-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rTableICC","Version":"1.0.4","Title":"Random Generation of Contingency Tables","Description":"Contains functions for random generation of R x C and 2 x 2 x K contingency tables. In addition to the generation of contingency tables over predetermined intraclass-correlated clusters, it is possible to generate contingency tables without intraclass correlations under product multinomial, multinomial, and Poisson sampling plans. 
It also includes a function for generating random data from a given discrete probability distribution function.","Published":"2017-03-28","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rtape","Version":"2.2","Title":"Manage and manipulate large collections of R objects stored as\ntape-like files","Description":"Storing huge data in RData format causes problems because\n of the necessity to load the whole file to the memory in order\n to access and manipulate objects inside such file; rtape is a\n simple solution to this problem. The package contains several\n wrappers of R built-in serialize/unserialize mechanism allowing\n the user to quickly append objects to a tape-like file and later\n iterate over them requiring only one copy of each stored object\n to reside in memory at a time.","Published":"2012-07-31","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Rtauchen","Version":"1.0","Title":"Discretization of AR(1) Processes","Description":"Discretize an AR(1) process following Tauchen (1986) . A discrete Markov chain that approximates in the sense of weak convergence a continuous-valued univariate Autoregressive process of first order is generated. It is a popular method used in economics and in finance. ","Published":"2016-08-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RTaxometrics","Version":"2.0","Title":"Taxometric Analysis","Description":"We provide functions to perform taxometric analyses. This package contains 52 functions, but only 5 should be called directly by users. CheckData() should be run prior to any taxometric analysis to ensure that the data are appropriate for taxometric analysis. RunTaxometrics() performs taxometric analyses for a sample of data. RunCCFIProfile() performs a series of taxometric analyses to generate a CCFI profile. CreateData() generates a sample of categorical or dimensional data. 
ClassifyCases() assigns cases to groups using the base-rate classification method.","Published":"2017-05-30","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RTConnect","Version":"0.1.4","Title":"Tools for analyzing sales report files of iTunes Connect","Description":"Tools for analyzing sales report files of iTunes Connect.","Published":"2013-10-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"RTDE","Version":"0.2-0","Title":"Robust Tail Dependence Estimation","Description":"Robust tail dependence estimation for bivariate models. This package is based on two papers by the authors: 'Robust and bias-corrected estimation of the coefficient of tail dependence' and 'Robust and bias-corrected estimation of probabilities of extreme failure sets'. This work was supported by a research grant (VKR023480) from VILLUM FONDEN and an international project for scientific cooperation (PICS-6416).","Published":"2015-07-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rtdists","Version":"0.7-3","Title":"Response Time Distributions","Description":"Provides response time distributions (density/PDF, distribution\n function/CDF, quantile function, and random generation): (a) Ratcliff\n diffusion model (Ratcliff & McKoon, 2008,\n ) based on C code by Andreas and Jochen\n Voss and (b) linear ballistic accumulator (LBA; Brown & Heathcote, 2008,\n ) with different distributions\n underlying the drift rate.","Published":"2017-05-25","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"rtematres","Version":"0.2","Title":"The rtematres API package","Description":"Exploit controlled vocabularies organized on tematres servers.","Published":"2013-08-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rTensor","Version":"1.3","Title":"Tools for Tensor Analysis and Decomposition","Description":"A set of tools for creation, manipulation, and modeling\n of tensors with an arbitrary number of modes. 
A tensor in the context of data\n analysis is a multidimensional array. rTensor provides an S4\n class 'Tensor' that wraps around the base 'array' class. rTensor\n offers common tensor operations as methods, including matrix unfolding,\n summing/averaging across modes, calculating the Frobenius norm, and taking\n the inner product between two tensors. Familiar array operations are\n overloaded, such as index subsetting via '[' and element-wise operations.\n rTensor also implements various tensor decompositions, including CP, GLRAM,\n MPCA, PVD, and Tucker. For tensors with 3 modes, rTensor also implements\n transpose, t-product, and t-SVD, as defined in Kilmer et al. (2013). Some\n auxiliary functions include the Khatri-Rao product, Kronecker product, and\n the Hadamard product for a list of matrices.","Published":"2015-12-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rtext","Version":"0.1.20","Title":"R6 Objects for Text and Data","Description":"For natural language processing and the analysis of qualitative text,\n coding structures which provide a way to bind together text and text data\n are fundamental. The package provides such a structure and accompanying\n methods in the form of R6 objects. 
The 'rtext' class allows for text handling\n and text coding (character or regex based) including data updates on\n text transformations as well as aggregation on various levels.\n Furthermore, the usage of R6 enables inheritance and passing by reference\n which should enable 'rtext' instances to be used as back-end for R based\n graphical text editors or text coding GUIs.","Published":"2016-11-01","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"Rtextrankr","Version":"1.0.0","Title":"TextRank for Korean","Description":"Reorder sentences for Korean text using TextRank algorithm.","Published":"2016-08-27","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"RTextTools","Version":"1.4.2","Title":"Automatic Text Classification via Supervised Learning","Description":"RTextTools is a machine learning package for automatic\n text classification that makes it simple for novice users to\n get started with machine learning, while allowing experienced\n users to easily experiment with different settings and\n algorithm combinations. 
The package includes nine algorithms\n for ensemble classification (svm, slda, boosting, bagging,\n random forests, glmnet, decision trees, neural networks,\n maximum entropy), comprehensive analytics, and thorough\n documentation.","Published":"2014-01-19","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"RTextureMetrics","Version":"1.1","Title":"Functions for calculation of texture metrics for Grey Level\nCo-occurrence Matrices","Description":"This package contains several functions for calculation of texture metrics for Grey Level Co-occurrence matrices","Published":"2014-04-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rtf","Version":"0.4-11","Title":"Rich Text Format (RTF) Output","Description":"A set of R functions to output Rich Text Format (RTF) files with high resolution tables and graphics that may be edited with a standard word processor such as Microsoft Word.","Published":"2013-11-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rtfbs","Version":"0.3.5","Title":"Transcription Factor Binding Site Identification Tool","Description":"Identifies and scores possible Transcription Factor\n Binding Sites and allows for FDR analysis and pruning. It supports\n splitting of sequences based on size or a specified GFF, grouping\n by G+C content, and specification of Markov model order. 
The heavy\n lifting is done in C while all results are made available via R.","Published":"2016-08-16","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rticles","Version":"0.4.1","Title":"Article Formats for R Markdown","Description":"A suite of custom R Markdown formats and templates for\n authoring journal articles and conference submissions.","Published":"2017-05-16","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rtide","Version":"0.0.4","Title":"Tide Heights","Description":"Calculates tide heights based on tide station harmonics.\n It includes the harmonics data for 637 US stations.\n The harmonics data was converted from ,\n which is NOAA web site data processed by David Flater for XTide.\n The code to calculate tide heights from the harmonics is based on XTide.","Published":"2017-05-09","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rtiff","Version":"1.4.5","Title":"Read and Write TIFF Files","Description":"Reads and writes TIFF format images and\n returns them as a pixmap object. Because the resulting object\n can be very large for even modestly sized TIFF images, images\n can be reduced as they are read for improved performance. This\n package is a wrapper around libtiff (www.libtiff.org), on which\n it depends (i.e. the libtiff shared library must be on your\n PATH for the binary to work, and tiffio.h must be on your\n system to build the package from source). 
By using libtiff's\n highlevel TIFFReadRGBAImage function, this package inherently\n supports a wide range of image formats and compression schemes.\n This package also provides an implementation of the Ridler\n Autothresholding algorithm for easy generation of binary masks.","Published":"2015-07-16","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rtimes","Version":"0.4.0","Title":"Client for New York Times 'APIs'","Description":"Interface to Congress, Campaign Finance, Article Search, and\n Geographic 'APIs' from the New York Times. Documentation for New York\n Times 'APIs' (http://developer.nytimes.com/docs/). This client covers\n a subset of the New York Times 'APIs'.","Published":"2017-05-10","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rtip","Version":"1.0.0","Title":"Inequality, Welfare and Poverty Indices and Curves using the\nEU-SILC Data","Description":"R tools to measure and compare inequality, welfare and poverty using the EU statistics on income and living conditions surveys.","Published":"2016-06-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rtk","Version":"0.2.5.1","Title":"Rarefaction Tool Kit","Description":"Rarefy data, calculate diversity and plot the results.","Published":"2017-04-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rtkore","Version":"1.3.1","Title":"'STK++' Core Library Integration to 'R' using 'Rcpp'","Description":"'STK++' is a collection of\n C++ classes for statistics, clustering, linear algebra, arrays (with an\n 'Eigen'-like API), regression, dimension reduction, etc. The integration of\n the library to 'R' is using 'Rcpp'.\n\n The 'rtkore' package includes the header files from the 'STK++' core library.\n All files contain only template classes and/or inline functions.\n\n 'STK++' is licensed under the GNU LGPL version 2 or later. 'rtkore'\n (the 'stkpp' integration into 'R') is licensed under the\n GNU GPL version 2 or later. 
See file LICENSE.note for details.","Published":"2017-05-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Rtnmin","Version":"2016-7.7","Title":"Truncated Newton Function Minimization with Bounds Constraints","Description":"Truncated Newton function minimization with bounds constraints\n\tbased on the 'Matlab'/'Octave' codes of Stephen Nash.","Published":"2016-07-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RTOMO","Version":"1.1-5","Title":"Visualization for Seismic Tomography","Description":"Aimed at seismic tomography, the package\n plots tomographic images, and allows one to interact and query\n three-dimensional tomographic models.\n Vertical cross-sectional cuts can be extracted by mouse click.\n Geographic information can be added easily.","Published":"2017-03-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rtop","Version":"0.5-10","Title":"Interpolation of Data with Variable Spatial Support","Description":"Geostatistical interpolation of data with irregular spatial support such as runoff related data or data from administrative units.","Published":"2016-09-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RTransProb","Version":"0.1.0","Title":"Analyze and Forecast Credit Migrations","Description":"A set of functions used to automate commonly used methods in credit risk. This includes multiple methods for bootstrapping default rates and forecasting/stress testing credit exposures migrations, via Econometrics and Machine Learning algorithms.","Published":"2017-04-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rtrends","Version":"0.1.0","Title":"Analyze Download Logs from the CRAN RStudio Mirror","Description":"Analyze download logs from the CRAN RStudio mirror \n (). 
\n This CRAN mirror is the default one used in RStudio.\n The available data is the result of parsed and anonymised raw log data from\n that CRAN mirror.","Published":"2016-12-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RTriangle","Version":"1.6-0.8","Title":"Triangle - A 2D Quality Mesh Generator and Delaunay Triangulator","Description":"This is a port of Jonathan Shewchuk's Triangle library to\n R. From his description: \"Triangle generates exact Delaunay\n triangulations, constrained Delaunay triangulations, conforming\n Delaunay triangulations, Voronoi diagrams, and high-quality\n triangular meshes. The latter can be generated with no small or\n large angles, and are thus suitable for finite element analysis.\"","Published":"2016-07-01","License":"CC BY-NC-SA 4.0","snapshot_date":"2017-06-23"} {"Package":"rtrie","Version":"0.1.1","Title":"A Simple R-Based Implementation of a Trie (A.k.a. Digital\nTree/Radix Tree/Prefix Tree)","Description":"A simple R-based implementation of a Trie (a.k.a. digital tree/radix tree/prefix tree)\n A trie, also called digital tree and sometimes radix tree or prefix tree is a kind of search tree.\n This ordered tree data structure is used to store a dynamic set or associative array where the keys are usually strings.","Published":"2017-01-06","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rtrim","Version":"1.0.1","Title":"Trends and Indices for Monitoring Data","Description":"The TRIM model is widely used for estimating growth and decline of\n animal populations based on (possibly sparsely available) count data. The\n current package is a reimplementation of the original TRIM software developed\n at Statistics Netherlands by Jeroen Pannekoek. 
See\n \n for more information about TRIM.","Published":"2016-11-28","License":"EUPL","snapshot_date":"2017-06-23"} {"Package":"rts","Version":"1.0-27","Title":"Raster Time Series Analysis","Description":"This framework aims to provide classes and methods for manipulating and processing of raster time series data (e.g. a time series of satellite images).","Published":"2017-06-09","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"Rtsne","Version":"0.13","Title":"T-Distributed Stochastic Neighbor Embedding using a Barnes-Hut\nImplementation","Description":"An R wrapper around the fast T-distributed Stochastic\n Neighbor Embedding implementation by Van der Maaten (see for more information on the original implementation).","Published":"2017-04-14","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rtson","Version":"1.3","Title":"Typed JSON","Description":"TSON, short for Typed JSON, is a binary-encoded serialization of\n JSON like document that support JavaScript typed data (https://github.com/tercen/TSON).","Published":"2016-08-26","License":"Apache License Version 2.0","snapshot_date":"2017-06-23"} {"Package":"Rttf2pt1","Version":"1.3.4","Title":"'ttf2pt1' Program","Description":"Contains the program 'ttf2pt1', for use with the\n 'extrafont' package. This product includes software developed by the 'TTF2PT1'\n Project and its contributors.","Published":"2016-05-19","License":"file LICENSE","snapshot_date":"2017-06-23"} {"Package":"Rtts","Version":"0.3.3","Title":"Convert Text into Speech","Description":"Convert text into speech (voice file in 'wav' format) with API offered by ITRI TTS (Text-To-Speech service, Industrial Technology Research Institute, Taiwan. http://tts.itri.org.tw/). One main function is given, tts_ITRI(). English and Chinese (both traditional and simplified) are supported, and user can specify the speaker accent, speed and volume. 
Using this package requires internet connection.","Published":"2015-11-07","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"RtutoR","Version":"0.3","Title":"Tutorial App for Learning R","Description":"Contains functions for launching R Tutorial & Plotting Apps. The R Tutorial app\n provides a set of most commonly performed data manipulation tasks in R. The\n app structures the contents into different topics and provides an interactive & dynamic interface\n for navigation.\n The plotting app provides an automated interface for generating plots using the 'ggplot2' package.\n Current version of this app supports 10 different plot types along with options to manipulate specific\n aesthetics and controls related to each plot type.","Published":"2016-05-10","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"Rtwalk","Version":"1.8.0","Title":"The R Implementation of the 't-walk' MCMC Algorithm","Description":"The 't-walk' is a general-purpose MCMC sampler for\n arbitrary continuous distributions that requires no tuning.","Published":"2015-09-18","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rtweet","Version":"0.4.0","Title":"Collecting Twitter Data","Description":"An implementation of calls designed to extract and organize Twitter data via\n Twitter's REST and stream APIs. 
Functions formulate and send API requests, convert\n response objects to more user friendly data structures---e.g., data frames---and\n provide some aesthetically pleasing visualizations for exploring the data.","Published":"2017-01-18","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rtype","Version":"0.1-1","Title":"A strong type system for R","Description":"A strong type system for R which supports\n symbol declaration and assignment with type checking\n and condition checking.","Published":"2014-08-15","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rtypeform","Version":"0.3.0","Title":"Interface to 'typeform' Results","Description":"An R interface to the 'typeform' application program interface.\n Also provides functions for downloading your results.","Published":"2017-05-03","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"Ruchardet","Version":"0.0-3","Title":"R package to detect character encoding","Description":"R port of 'universalchardet', that is the encoding detector\n library of Mozilla.","Published":"2014-02-07","License":"MPL","snapshot_date":"2017-06-23"} {"Package":"rucm","Version":"0.6","Title":"Implementation of Unobserved Components Model (UCM)","Description":"Unobserved Components Models (introduced in Harvey, A. (1989),\n Forecasting, structural time series models and the Kalman filter, Cambridge\n New York: Cambridge University Press) decomposes a time series into\n components such as trend, seasonal, cycle, and the regression effects due\n to predictor series which captures the salient features of the series to\n predict its behavior.","Published":"2015-11-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rucrdtw","Version":"0.1.2","Title":"R Bindings for the UCR Suite","Description":"R bindings for functions from the UCR Suite by Rakthanmanon et al. 
(2012) , which enables ultrafast subsequence\n search for a best match under Dynamic Time Warping and Euclidean Distance.","Published":"2017-05-07","License":"Apache License","snapshot_date":"2017-06-23"} {"Package":"rugarch","Version":"1.3-6","Title":"Univariate GARCH Models","Description":"ARFIMA, in-mean, external regressors and various GARCH flavors, with methods for fit, forecast, simulation, inference and plotting.","Published":"2015-08-16","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rUnemploymentData","Version":"1.1.0","Title":"Data and Functions for USA State and County Unemployment Data","Description":"Contains data and visualization functions for USA unemployment\n data. Data comes from the US Bureau of Labor Statistics (BLS). State data\n is in ?df_state_unemployment and covers 2000-2013. County data is in\n ?df_county_unemployment and covers 1990-2013. Choropleth maps of the data\n can be generated with ?state_unemployment_choropleth() and\n ?county_unemployment_choropleth() respectively. ","Published":"2017-01-19","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RUnit","Version":"0.4.31","Title":"R Unit Test Framework","Description":"R functions implementing a standard Unit Testing\n framework, with additional code inspection and report\n generation tools.","Published":"2015-11-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"runittotestthat","Version":"0.0-2","Title":"Convert 'RUnit' Test Functions into 'testthat' Tests","Description":"Automatically convert a file or package worth of 'RUnit' test\n functions into 'testthat' tests.","Published":"2015-06-24","License":"Unlimited","snapshot_date":"2017-06-23"} {"Package":"Runiversal","Version":"1.0.2","Title":"Runiversal - Package for converting R objects to Java variables\nand XML","Description":"This package contains some functions for converting R\n objects to Java style variables and XML. 
Generated Java code is\n interpretable by dynamic Java libraries such as Beanshell.\n Calling R externally and handling the Java or XML output is an\n other way to call R from other languages without native\n interfaces. For a Java implementation of this approach visit\n http://www.mhsatman.com/rcaller.php and\n http://stdioe.blogspot.com/search/label/rcaller","Published":"2012-08-01","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"runjags","Version":"2.0.4-2","Title":"Interface Utilities, Model Templates, Parallel Computing Methods\nand Additional Distributions for MCMC Models in JAGS","Description":"User-friendly interface utilities for MCMC models via\n Just Another Gibbs Sampler (JAGS), facilitating the use of parallel\n (or distributed) processors for multiple chains, automated control\n of convergence and sample length diagnostics, and evaluation of the\n performance of a model using drop-k validation or against simulated\n data. Template model specifications can be generated using a standard\n lme4-style formula interface to assist users less familiar with the\n BUGS syntax. A JAGS extension module provides additional distributions\n including the Pareto family of distributions, the DuMouchel prior and\n the half-Cauchy prior.","Published":"2016-07-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Runuran","Version":"0.23.0","Title":"R Interface to the UNU.RAN Random Variate Generators","Description":"Interface to the UNU.RAN library for Universal Non-Uniform RANdom variate generators. \n\t Thus it allows to build non-uniform random number generators from quite arbitrary\n\t distributions. In particular, it provides an algorithm for fast numerical inversion\n\t for distribution with given density function.\n\t In addition, the package contains densities, distribution functions and quantiles\n\t from a couple of distributions. 
","Published":"2015-08-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RunuranGUI","Version":"0.1","Title":"A GUI for the UNU.RAN random variate generators","Description":"This package provides a GUI (Graphical User Interface) for\n the UNU.RAN random variate generators. Thus it allows one to build\n non-uniform random number generators interactively for quite\n arbitrary distributions. In addition, R code for the required\n calls from package Runuran can be displayed and stored for\n later use. Some basic analyses like goodness-of-fit tests can\n be performed.","Published":"2010-10-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rusda","Version":"1.0.8","Title":"Interface to USDA Databases","Description":"An interface to the web service methods provided by the United States Department of Agriculture (USDA). The Agricultural Research Service (ARS) provides a large set of databases. The current version of the package holds interfaces to the Systematic Mycology and Microbiology Laboratory (SMML), which consists of four databases: Fungus-Host Distributions, Specimens, Literature and the Nomenclature database. It provides functions for querying these databases. The main function is \code{associations}, which allows searching for fungus-host combinations. ","Published":"2016-04-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rust","Version":"1.2.2","Title":"Ratio-of-Uniforms Simulation with Transformation","Description":"Uses the generalised ratio-of-uniforms (RU) method to simulate\n from univariate and (low-dimensional) multivariate continuous distributions.\n The user specifies the log-density, up to an additive constant. The RU\n algorithm is applied after relocation of the mode of the density to zero, and\n the user can choose a tuning parameter r. 
For details see Wakefield, Gelfand\n and Smith (1991) , Efficient generation of random\n variates via the ratio-of-uniforms method, Statistics and Computing (1991)\n 1, 129-133. A Box-Cox variable transformation can be used to make the input\n density suitable for the RU method and to improve efficiency. In the\n multivariate case rotation of axes can also be used to improve efficiency.\n From version 1.2.0 the 'Rcpp' package \n can be used to improve efficiency.\n See the rust website for more information, documentation and examples.","Published":"2017-06-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ruv","Version":"0.9.6","Title":"Detect and Remove Unwanted Variation using Negative Controls","Description":"Implements the 'RUV' (Remove Unwanted Variation) algorithms. These algorithms attempt to adjust for systematic errors of unknown origin in high-dimensional data. The algorithms were originally developed for use with genomic data, especially microarray data, but may be useful with other types of high-dimensional data as well. These algorithms were proposed by Gagnon-Bartsch and Speed (2012), and by Gagnon-Bartsch, Jacob and Speed (2013). The algorithms require the user to specify a set of negative control variables, as described in the references. The algorithms included in this package are 'RUV-2', 'RUV-4', 'RUV-inv', and 'RUV-rinv', along with various supporting algorithms. 
","Published":"2015-07-18","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"rv","Version":"2.3.2","Title":"Simulation-Based Random Variable Objects","Description":"Implements simulation-based random variable class and a suite of methods.","Published":"2017-04-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RVAideMemoire","Version":"0.9-65","Title":"Diverse Basic Statistical and Graphical Functions","Description":"Contains diverse more or less complicated functions, written to simplify user's life: simplifications of existing functions, basic but not implemented tests, easy-to-use tools, bridges between functions of different packages... All functions are presented in the French book 'Aide-memoire de statistique appliquee a la biologie', written by the same author and available on CRAN.","Published":"2017-05-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rvalues","Version":"0.6","Title":"R-Values for Ranking in High-Dimensional Settings","Description":"A collection of functions for computing \"r-values\" from various\n kinds of user input such as MCMC output or a list of effect size estimates\n and associated standard errors. Given a large collection of measurement units,\n the r-value, r, of a particular unit is a reported percentile that may be\n interpreted as the smallest percentile at which the unit should be placed in the\n top r-fraction of units.","Published":"2015-12-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Rvcg","Version":"0.16","Title":"Manipulations of Triangular Meshes Based on the 'VCGLIB' API","Description":"Operations on triangular meshes based on 'VCGLIB'. This package\n integrates nicely with the R-package 'rgl' to render the meshes processed by\n 'Rvcg'. The Visualization and Computer Graphics Library (VCG for short) is\n an open source portable C++ templated library for manipulation, processing\n and displaying with OpenGL of triangle and tetrahedral meshes. 
The library,\n composed by more than 100k lines of code, is released under the GPL license,\n and it is the base of most of the software tools of the Visual Computing Lab of\n the Italian National Research Council Institute ISTI ,\n like 'metro' and 'MeshLab'. The 'VCGLIB' source is pulled from trunk\n and patched to work with options\n determined by the configure script as well as to work with the header files\n included by 'RcppEigen'.","Published":"2017-04-06","License":"GPL (>= 2) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rvcheck","Version":"0.0.8","Title":"R/Package Version Check","Description":"Check latest release version of R and R package (both in 'CRAN', 'Bioconductor' or 'Github').","Published":"2017-03-08","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"rversions","Version":"1.0.3","Title":"Query 'R' Versions, Including 'r-release' and 'r-oldrel'","Description":"Query the main 'R' 'SVN' repository to find the\n versions 'r-release' and 'r-oldrel' refer to, and also all\n previous 'R' versions and their release dates.","Published":"2016-08-02","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rvertnet","Version":"0.5.0","Title":"Search 'Vertnet', a 'Database' of Vertebrate Specimen Records","Description":"Retrieve, map and summarize data from the 'VertNet.org' archives.\n Functions allow searching by many parameters, including 'taxonomic' names,\n places, and dates. 
In addition, there is an interface for conducting spatially\n delimited searches, and another for requesting large 'datasets' via email.","Published":"2016-09-23","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rvest","Version":"0.3.2","Title":"Easily Harvest (Scrape) Web Pages","Description":"Wrappers around the 'xml2' and 'httr' packages to make it easy to\n download, then manipulate, HTML and XML.","Published":"2016-06-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"RVFam","Version":"1.1","Title":"Rare Variants Association Analyses with Family Data","Description":"The RVFam package provides functions to perform single SNP association analyses and gene-based tests for continuous, binary and survival traits against sequencing data (e.g. exome chip) using family data. ","Published":"2015-03-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rvg","Version":"0.1.4","Title":"R Graphics Devices for Vector Graphics Output","Description":"Vector Graphics devices for 'SVG', 'DrawingML' for Microsoft Word, \n PowerPoint and Excel. Functions extending package 'officer' are provided to \n embed 'DrawingML' graphics into 'Microsoft PowerPoint' documents.","Published":"2017-06-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"rvgtest","Version":"0.7.4","Title":"Tools for Analyzing Non-Uniform Pseudo-Random Variate Generators","Description":"Test suite for non-uniform pseudo-random number generators.","Published":"2014-02-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rvHPDT","Version":"3.0","Title":"Calling haplotype-based and variant-based pedigree\ndisequilibrium test for rare variants in pedigrees","Description":"To detect rare variants for binary traits using general pedigrees, the pedigree disequilibrium tests are proposed by collapsing rare haplotypes/variants with/without weights. 
To run the test, MERLIN is needed in Linux for haplotyping.","Published":"2014-05-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RVideoPoker","Version":"0.3","Title":"Play Video Poker with R","Description":"Play Video Poker with R, complete with a graphical user\n interface. So far, only \"Jacks or Better\" is implemented.","Published":"2012-06-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RViennaCL","Version":"1.7.1.4","Title":"'ViennaCL' C++ Header Files","Description":"'ViennaCL' is a free open-source linear algebra library \n for computations on many-core architectures (GPUs, MIC) and \n multi-core CPUs. The library is written in C++ and supports 'CUDA', \n 'OpenCL', and 'OpenMP' (including switches at runtime). \n I have placed these libraries in this package as a more efficient \n distribution system for CRAN. The idea is that you can write a package \n that depends on the 'ViennaCL' library and yet you do not need to \n distribute a copy of this code with your package.","Published":"2016-11-24","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"Rvmmin","Version":"2013-11.12","Title":"Variable Metric Nonlinear Function Minimization","Description":"Variable metric nonlinear function minimization with bounds constraints.","Published":"2014-12-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Rvoterdistance","Version":"1.1","Title":"Calculates the Distance Between Voter and Multiple Polling\nLocations","Description":"Designed to calculate the distance between each voter in a voter file -- given lat/long coordinates -- and many potential (early) polling or vote by mail drop box locations, then return the minimum distance.","Published":"2017-03-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RVowpalWabbit","Version":"0.0.9","Title":"R Interface to the Vowpal Wabbit","Description":"The 'Vowpal Wabbit' project is a fast out-of-core learning\n system sponsored by Microsoft 
Research (having started at Yahoo! Research)\n and written by John Langford along with a number of contributors. This R\n package does not include the distributed computing implementation of the\n cluster/ directory of the upstream sources. Use of the software as a network\n service is also not directly supported as the aim is a simpler direct call\n from R for validation and comparison. ","Published":"2017-03-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RVPedigree","Version":"0.0.3","Title":"Methods for Family-Based Rare-Variant Genetic Association Tests","Description":"This is a collection of the five region-based\n rare-variant genetic association tests. The following tests are\n currently implemented: ASKAT, ASKAT-Normalized, VC-C1, VC-C2 and\n VC-C3.","Published":"2016-01-20","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"RVsharing","Version":"1.7.0","Title":"Probability of Sharing Rare Variants among Relatives","Description":"Computes estimates of the probability of related individuals sharing a rare variant.","Published":"2017-02-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rvTDT","Version":"1.0","Title":"population control weighted rare-variants TDT","Description":"Used to compute population controls weighted rare variants transmission distortion test","Published":"2014-04-11","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"RVtests","Version":"1.2","Title":"Rare Variant Tests","Description":"Use multiple regression methods to test rare variants\n association with disease traits.","Published":"2013-05-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rwars","Version":"1.0.0","Title":"R Client for the Star Wars API","Description":"Provides functions to retrieve and reformat data from the 'Star Wars' API (SWAPI) .","Published":"2017-01-14","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"Rwave","Version":"2.4-5","Title":"Time-Frequency Analysis 
of 1-D Signals","Description":"A set of R functions which provide an\n environment for the Time-Frequency analysis of 1-D signals (and\n especially for the wavelet and Gabor transforms of noisy\n signals). It was originally written for Splus by Rene Carmona,\n Bruno Torresani, and Wen L. Hwang, first at the University of\n California at Irvine and then at Princeton University. Credit\n should also be given to Andrea Wang whose functions on the\n dyadic wavelet transform are included. Rwave is based on the\n book: \"Practical Time-Frequency Analysis: Gabor and Wavelet\n Transforms with an Implementation in S\", by Rene Carmona, Wen\n L. Hwang and Bruno Torresani, Academic Press, 1998.","Published":"2017-03-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rWBclimate","Version":"0.1.3","Title":"A package for accessing World Bank climate data","Description":"This package will download model predictions from 15 different global circulation models in 20 year intervals from the World Bank. Users can also access historical data, and create maps at 2 different spatial scales.","Published":"2014-04-19","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RWBP","Version":"1.0","Title":"Detects spatial outliers using a Random Walk on Bipartite Graph","Description":"Firstly, a bipartite graph is constructed based on the spatial and/or non-spatial attributes of the spatial objects in the dataset. Secondly, random walk (RW) techniques are utilized on the graphs to compute the outlierness for each point (the differences between spatial objects and their spatial neighbours). The top k objects with the highest outlierness are recognized as outliers.","Published":"2014-06-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RWebLogo","Version":"1.0.3","Title":"plotting custom sequence logos","Description":"RWebLogo is a wrapper for the WebLogo python package\n that allows generation of customised sequence logos. 
Sequence logos are\n graphical representations of the sequence conservation of nucleotides (in a\n strand of DNA/RNA) or amino acids (in protein sequences). Each logo\n consists of stacks of symbols, one stack for each position in the sequence.\n The overall height of the stack indicates the sequence conservation at that\n position, while the height of symbols within the stack indicates the\n relative frequency of each amino or nucleic acid at that position. In\n general, a sequence logo provides a richer and more precise description of,\n for example, a binding site, than would a consensus sequence.","Published":"2014-08-10","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RWeka","Version":"0.4-34","Title":"R/Weka Interface","Description":"An R interface to Weka (Version 3.9.1).\n Weka is a collection of machine learning algorithms for data mining\n tasks written in Java, containing tools for data pre-processing,\n classification, regression, clustering, association rules, and\n visualization. Package 'RWeka' contains the interface code, the\n Weka jar is in a separate package 'RWekajars'. For more information\n on Weka see .","Published":"2017-05-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RWekajars","Version":"3.9.1-3","Title":"R/Weka Interface Jars","Description":"External jars required for package 'RWeka'.","Published":"2017-04-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rwfec","Version":"0.2","Title":"R Wireless, Forward Error Correction","Description":"Communications simulation package supporting forward error correction.","Published":"2015-12-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RWiener","Version":"1.3-1","Title":"Wiener Process Distribution Functions","Description":"Provides Wiener process distribution functions,\n namely the Wiener first passage time density, CDF, quantile and random\n functions. 
Additionally supplies a modelling function (wdm) and further\n methods for the resulting object.","Published":"2017-02-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RWildbook","Version":"0.9.2","Title":"Interface for the 'Wildbook' Wildlife Data Management Framework","Description":"Provides an interface with the 'Wildbook' mark-recapture ecological database framework. It \n helps users to pull data from the 'Wildbook' framework and format data for further analysis\n with mark-recapture applications like 'Program MARK' (which can be accessed via the 'RMark' package in 'R').\n Further information on the 'Wildbook' framework is available at: . ","Published":"2017-02-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rWind","Version":"0.2.0","Title":"Download, Edit and Transform Wind Data from GFS","Description":"Tools for downloading, editing and transforming wind data from Global Forecast System (GFS, see ) of the USA's National Weather Service (NWS, see ).","Published":"2017-06-12","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"RWinEdt","Version":"2.0-6","Title":"R Interface to 'WinEdt'","Description":"A plug in for using 'WinEdt' as an editor for R.","Published":"2017-01-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Rwinsteps","Version":"1.0-1","Title":"Running Winsteps in R","Description":"The Rwinsteps package facilitates communication between R\n and the Rasch modeling software Winsteps. 
The package currently\n includes functions for reading and writing command files,\n sending them to Winsteps, reading and writing data according to\n command file specifications, reading output into R, and\n plotting various results.","Published":"2012-01-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rworldmap","Version":"1.3-6","Title":"Mapping Global Data","Description":"Enables mapping of country level and gridded user datasets.","Published":"2016-02-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rworldxtra","Version":"1.01","Title":"Country boundaries at high resolution","Description":"High resolution vector country boundaries derived from\n Natural Earth data, can be plotted in rworldmap.","Published":"2012-10-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rwt","Version":"1.0.0","Title":"Rice Wavelet Toolbox wrapper","Description":"Provides a set of functions for performing digital signal\n processing.","Published":"2014-06-24","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rwty","Version":"1.0.1","Title":"R We There Yet? Visualizing MCMC Convergence in Phylogenetics","Description":"Implements various tests, visualizations, and metrics\n for diagnosing convergence of MCMC chains in phylogenetics. It implements\n and automates many of the functions of the AWTY package in the R\n environment.","Published":"2016-06-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rwunderground","Version":"0.1.6","Title":"R Interface to Weather Underground API","Description":"Tools for getting historical weather information and forecasts \n from wunderground.com. Historical weather and forecast data includes, but \n is not limited to, temperature, humidity, windchill, wind speed, dew point, \n heat index. 
Additionally, the Weather Underground API also includes \n information on sunrise/sunset, tidal conditions, satellite/webcam imagery, \n weather alerts, hurricane alerts and historical high/low temperatures.","Published":"2017-05-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RxCEcolInf","Version":"0.1-3","Title":"R x C Ecological Inference With Optional Incorporation of Survey\nInformation","Description":"Fits the R x C inference model described in Greiner and\n Quinn (2009). Allows incorporation of survey results.","Published":"2013-07-24","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"RXKCD","Version":"1.8-2","Title":"Get XKCD Comic from R","Description":"Visualize your favorite XKCD comic strip directly from\n R. XKCD web comic content is provided under the Creative\n Commons Attribution-NonCommercial 2.5 License.","Published":"2017-03-31","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RXMCDA","Version":"1.5.5","Title":"Functions to Parse and Create XMCDA Files","Description":"Functions to read many XMCDA tags and transform them into R variables that are then usable in MCDA algorithms written in R. It also allows writing certain R variables into XML files respecting the XMCDA standard.","Published":"2015-12-11","License":"CeCILL-2","snapshot_date":"2017-06-23"} {"Package":"RxnSim","Version":"1.0.2","Title":"Functions to Compute Chemical Reaction Similarity","Description":"Methods to compute chemical similarity between two or more reactions and molecules. Allows masking of chemical substructures for weighted similarity computations. 
Uses packages 'rCDK' and 'fingerprint' for cheminformatics functionality.","Published":"2017-06-20","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"RxODE","Version":"0.5-6","Title":"Facilities for Simulating from ODE-Based Models","Description":"Facilities for running simulations from ordinary \n differential equation (ODE) models, such as pharmacometrics and other \n compartmental models. A compilation manager translates the ODE model \n into C, compiles it, and dynamically loads the object code into R for \n improved computational efficiency. An event table object facilitates \n the specification of complex dosing regimens (optional) and sampling \n schedules.","Published":"2017-03-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rxSeq","Version":"0.99.3","Title":"Combined Total and Allele Specific Reads Sequencing Study","Description":"Analysis of combined total and allele specific reads from the reciprocal cross study with RNA-seq data. ","Published":"2016-12-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"RXshrink","Version":"1.0-8","Title":"Maximum Likelihood Shrinkage via Generalized Ridge or Least\nAngle Regression","Description":"Identify and display TRACEs for a specified shrinkage path and determine\n the extent of shrinkage most likely, under normal distribution theory, to produce an\n optimal reduction in MSE Risk in estimates of regression (beta) coefficients.","Published":"2014-01-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Ryacas","Version":"0.3-1","Title":"R Interface to the Yacas Computer Algebra System","Description":"An interface to the yacas computer algebra system.","Published":"2016-05-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"RYandexTranslate","Version":"1.0","Title":"R Interface to Yandex Translate API","Description":"'Yandex Translate' (https://translate.yandex.com/) is a statistical machine translation system.\n\tThe system translates 
separate words, complete texts, and webpages.\n\tThis package can be used to detect language from text and to translate it to a supported target language.\n\tFor more info: https://tech.yandex.com/translate/doc/dg/concepts/About-docpage/ .","Published":"2016-02-29","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RYoudaoTranslate","Version":"1.0","Title":"R package providing functions to translate English words into\nChinese","Description":"You can use this package to translate thousands of words. The Youdao translation open API is applied in this package. However, it translates fewer than 1000 English words into Chinese.","Published":"2014-02-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ryouready","Version":"0.4","Title":"Companion to the Forthcoming Book - R you Ready?","Description":"Package contains some data and functions that \n are used in my forthcoming \"R you ready?\" book.","Published":"2015-12-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"rYoutheria","Version":"1.0.3","Title":"Access to the YouTheria Mammal Trait Database","Description":"A programmatic interface to web-services of YouTheria. YouTheria is\n an online database of mammalian trait data .","Published":"2016-04-12","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"rysgran","Version":"2.1.0","Title":"Grain size analysis, textural classifications and distribution\nof unconsolidated sediments","Description":"This package is a port to R of the SysGran program, written in Delphi by Camargo (2006). It contains functions for the analysis of grain size samples (in logarithmic (phi) and geometric (micrometers) scale) based on various methods, like Folk & Ward (1957) and Methods of Moments (Tanner, 1995), among others; textural classifications and distribution of unconsolidated sediments are shown in histograms, bivariate plots and ternary diagrams of Shepard (1954) and Pejrup (1988). 
English and Portuguese languages are supported in outputs.","Published":"2014-07-10","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"Rz","Version":"0.9-1","Title":"GUI Tool for Data Management like SPSS or Stata","Description":"R is very powerful but it lacks some of the functionalities found in\n Stata or SPSS to manage survey data. The 'memisc' package provides these\n (variable labels, value labels, definable missing values and so on), but to\n work efficiently these functions need a graphical interface to allow the user\n to get an overview of the data. This package provides such a graphical interface,\n similar in fashion to SPSS's Variable View and data managing system. It uses the\n 'memisc' package as its backend. Additionally, 'Rz' has a powerful plot assistant\n interface based on 'ggplot2'.","Published":"2013-07-28","License":"GPL (>= 3) + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"RZabbix","Version":"0.1.0","Title":"R Module for Working with the 'Zabbix API'","Description":"R interface to the 'Zabbix API' data . Enables easy and direct communication with 'Zabbix API' from 'R'.","Published":"2016-04-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"rzmq","Version":"0.9.1","Title":"R Bindings for ZeroMQ","Description":"Interface to the 'ZeroMQ' lightweight messaging kernel (see for more information).","Published":"2017-03-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"s2","Version":"0.1-1","Title":"Google's S2 Library for Geometry on the Sphere","Description":"R bindings for Google's s2 library for geometric calculations on\n the sphere.","Published":"2016-11-04","License":"Apache License (== 2.0)","snapshot_date":"2017-06-23"} {"Package":"s20x","Version":"3.1-22","Title":"Functions for University of Auckland Course STATS 201/208 Data\nAnalysis","Description":"A set of functions used in teaching STATS 201/208 Data Analysis at\n the University of Auckland. 
The functions are designed to make parts of R more\n accessible to a large undergraduate population who are mostly not statistics\n majors.","Published":"2017-05-29","License":"GPL-2 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"s2dverification","Version":"2.8.0","Title":"Set of Common Tools for Forecast Verification","Description":"Set of tools to verify forecasts through the computation of typical prediction scores against one or more observational datasets or reanalyses (a reanalysis being a physical extrapolation of observations that relies on the equations from a model, not a pure observational dataset). Intended for seasonal to decadal climate forecasts although it can be useful to verify other kinds of forecasts. The package can be helpful in climate sciences for other purposes than forecasting. ","Published":"2017-02-13","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"S2sls","Version":"0.1","Title":"Spatial Two Stage Least Squares Estimation","Description":"Fit a spatial instrumental-variable regression by two-stage least squares.","Published":"2016-01-08","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"s4vd","Version":"1.1-1","Title":"Biclustering via Sparse Singular Value Decomposition\nIncorporating Stability Selection","Description":"The main function s4vd() performs a biclustering via sparse\n singular value decomposition with a nested stability selection.\n The result is a biclust object and thus all methods of the\n biclust package can be applied.","Published":"2015-11-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"saasCNV","Version":"0.3.4","Title":"Somatic Copy Number Alteration Analysis Using Sequencing and SNP\nArray Data","Description":"Perform joint segmentation on two signal dimensions derived from \n total read depth (intensity) and allele specific read depth (intensity) for \n whole genome sequencing (WGS), whole exome sequencing (WES) and SNP array 
data.","Published":"2016-05-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sac","Version":"1.0.1","Title":"Semiparametric Analysis of Changepoint","Description":"Semiparametric empirical likelihood ratio\n based test of changepoint with one-change or epidemic alternatives,\n with data-based model diagnostics.","Published":"2009-02-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"saccades","Version":"0.1-1","Title":"Detection of Fixations in Eye-Tracking Data","Description":"Functions for detecting eye fixations in raw eye-tracking\n data. The detection is done using a velocity-based algorithm for\n saccade detection proposed by Ralf Engbert and Reinhold Kliegl in\n 2003. The algorithm labels segments as saccades when the velocity of\n the eye movement exceeds a certain threshold. Anything between two\n saccades is considered a fixation. Thus the algorithm is not\n appropriate for data containing episodes of smooth pursuit eye\n movements.","Published":"2015-03-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SACCR","Version":"2.1","Title":"SA Counterparty Credit Risk under Basel III","Description":"Computes the Exposure-At-Default based on the standardized approach\n of the Basel III Regulatory framework (SA-CCR). Currently, trade types of all\n the five major asset classes have been created and, given the inheritance-based\n structure of the application, the addition of further trade types\n is straightforward. The application returns a list of trees (one per CSA) after\n automatically separating the trades based on the CSAs, the hedging sets, the\n netting sets and the risk factors. The basis and volatility transactions are\n also identified and treated in specific hedging sets whereby the corresponding \n penalty factors are applied. 
All the examples appearing in the\n regulatory paper (including the margined and the un-margined workflow) have been\n implemented.","Published":"2016-11-28","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"SACOBRA","Version":"0.7","Title":"Self-Adjusting COBRA","Description":"Performs constrained optimization for expensive black-box problems.","Published":"2015-08-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SADEG","Version":"1.0.0","Title":"Stability Analysis in Differentially Expressed Genes","Description":"We analyzed the nucleotide composition of genes with a special emphasis on stability of DNA sequences. Besides, in a variety of organisms, unequal use of synonymous codons, or codon usage bias, occurs, which also shows variation among genes in the same genome. Seemingly, codon usage bias is affected by both selective constraints and mutation bias, which enables us to examine and detect changes in these two evolutionary forces between genomes or along one genome. Therefore, we determined the codon adaptation index (CAI), effective number of codons (ENC) and codon usage analysis with calculation of the relative synonymous codon usage (RSCU), and subsequently predicted the translation efficiency and accuracy through GC-rich codon usages. Furthermore, we estimated the relative stability of the DNA sequence following calculation of the average free energy (Delta G) and Dimer base-stacking energy level.","Published":"2017-01-13","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"SADISA","Version":"1.0","Title":"Species Abundance Distributions with Independent-Species\nAssumption","Description":"Computes the probability of a set of species abundances of a single or multiple samples of individuals under a mainland-island model. One must specify the mainland (metacommunity) model and the island (local) community model. It assumes that species fluctuate independently. See Haegeman, B. & R.S. Etienne (2017). 
A general sampling formula for community structure data. Methods in Ecology & Evolution. In press.","Published":"2017-04-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"sadists","Version":"0.2.3","Title":"Some Additional Distributions","Description":"Provides the density, distribution, quantile and generation\n functions of some obscure probability distributions, including the doubly non-\n central t, F, Beta, and Eta distributions; the lambda-prime and K-prime; the\n upsilon distribution; the (weighted) sum of non-central chi-squares to a power;\n the (weighted) sum of log non-central chi-squares; the product of non-central\n chi-squares to powers; the product of doubly non-central F variables; the\n product of independent normals.","Published":"2017-03-20","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"sads","Version":"0.4.0","Title":"Maximum Likelihood Models for Species Abundance Distributions","Description":"Maximum likelihood tools to fit and compare models of species\n abundance distributions and of species rank-abundance distributions.","Published":"2017-06-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"sae","Version":"1.1","Title":"Small Area Estimation","Description":"Functions for small area estimation.","Published":"2015-07-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"sae2","Version":"0.1-1","Title":"Small Area Estimation: Time-series Models","Description":"Time series models for small area estimation based on area-level models.","Published":"2015-01-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"saemix","Version":"1.2","Title":"Stochastic Approximation Expectation Maximization (SAEM)\nalgorithm","Description":"The SAEMIX package implements the Stochastic Approximation EM algorithm for parameter estimation in (non)linear mixed effects models. 
The SAEM algorithm: - computes the maximum likelihood estimator of the population parameters, without any approximation of the model (linearisation, quadrature approximation,...), using the Stochastic Approximation Expectation Maximization (SAEM) algorithm, - provides standard errors for the maximum likelihood estimator - estimates the conditional modes, the conditional means and the conditional standard deviations of the individual parameters, using the Hastings-Metropolis algorithm. Several applications of SAEM in agronomy, animal breeding and PKPD analysis have been published by members of the Monolix group (http://group.monolix.org/).","Published":"2014-02-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SAENET","Version":"1.1","Title":"A Stacked Autoencoder Implementation with Interface to\n'neuralnet'","Description":"An implementation of a stacked sparse autoencoder for dimension reduction of features and pre-training of feed-forward neural networks\n\t\twith the 'neuralnet' package is contained within this package. The package also includes a predict function for the stacked autoencoder object to generate the compressed\n\t\trepresentation of new data if required. For the purposes of this package, 'stacked' is defined in line with http://ufldl.stanford.edu/wiki/index.php/Stacked_Autoencoders .\n\t\tThe underlying sparse autoencoder is defined in the documentation of 'autoencoder'.","Published":"2015-06-04","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"saeRobust","Version":"0.1.0","Title":"Robust Small Area Estimation","Description":"Methods to fit robust alternatives to commonly used models used in\n Small Area Estimation. The methods here used are based on best linear\n unbiased predictions and linear mixed models. 
At this time available models\n include area level models incorporating spatial and temporal correlation in\n the random effects.","Published":"2016-05-16","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"saery","Version":"1.0","Title":"Small Area Estimation for Rao and Yu Model","Description":"A complete set of functions to calculate several EBLUP (Empirical Best Linear Unbiased Predictor) estimators and their mean squared errors. All estimators are based on an area-level linear mixed model introduced by Rao and Yu in 1994 (see documentation). The REML method is used for fitting this model.","Published":"2014-09-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"saeSim","Version":"0.9.0","Title":"Simulation Tools for Small Area Estimation","Description":"Tools for the simulation of data in the context of small area\n estimation. Combine all steps of your simulation - from data generation\n over drawing samples to model fitting - in one object. This enables easy\n modification and combination of different scenarios. You can store your\n results in a folder or start the simulation in parallel.","Published":"2017-05-08","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"SAFD","Version":"1.0-1","Title":"Statistical Analysis of Fuzzy Data","Description":"The aim of the package is to provide some basic functions\n for doing statistics with one dimensional Fuzzy Data (in the\n form of polygonal fuzzy numbers). In particular, the package\n contains functions for the basic operations on the class of\n fuzzy numbers (sum, scalar product, mean, median, Hukuhara difference) \n as well as for calculating (Bertoluzza) distance,\n sample variance, sample covariance, and the\n Dempster-Shafer (levelwise) histogram. Moreover a function to\n simulate fuzzy random variables, bootstrap tests for the\n equality of means, and a function to do linear regression given\n trapezoidal fuzzy data is included. 
Version 1.0 fixes some bugs\n of previous versions.","Published":"2015-12-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SafeBayes","Version":"1.1","Title":"Generalized and Safe-Bayesian Ridge and Lasso Regression","Description":"Functions for Generalized and Safe-Bayesian Ridge and Lasso Regression models with both fixed and varying variance.","Published":"2016-10-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"safeBinaryRegression","Version":"0.1-3","Title":"Safe Binary Regression","Description":"Overloads the glm function in the stats package so that\n a test for the existence of the maximum likelihood estimate is included\n in the fitting procedure for binary regression models.","Published":"2013-12-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SafeQuant","Version":"2.3.1","Title":"A Toolbox for the Analysis of Proteomics Data","Description":"Tools for the statistical analysis and visualization of (relative\n and absolute) quantitative (LFQ,TMT,HRM) Proteomics data.","Published":"2016-12-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"safer","Version":"0.1.0","Title":"Encrypt and Decrypt Strings, R Objects and Files","Description":"A consistent interface to encrypt and decrypt strings, R objects and files using symmetric key encryption.","Published":"2017-04-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"safi","Version":"1.0","Title":"Sensitivity Analysis for Functional Input","Description":"Design and sensitivity analysis for computer experiments with scalar-valued output and functional input, e.g. over time or space. 
The aim is to explore the behavior of the sensitivity over the functional domain.","Published":"2014-10-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SAGA","Version":"2.0.0","Title":"Software for the Analysis of Genetic Architecture","Description":"Implements an information theory approach to the analysis of line\n cross data, providing model averaged results of parameter estimates and\n unconditional standard errors. Also includes functions to provide a\n visualization of model space, custom plots of multi-model inference results, and traditional\n line cross analysis plots.","Published":"2015-12-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sAIC","Version":"1.0","Title":"Akaike Information Criterion for Sparse Estimation","Description":"Computes the Akaike information criterion for the generalized linear models (logistic regression, Poisson regression, and Gaussian graphical models) estimated by the lasso. ","Published":"2016-10-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SALES","Version":"1.0.0","Title":"Elastic Net and (Adaptive) Lasso Penalized Sparse Asymmetric\nLeast Squares (SALES) and Coupled Sparse Asymmetric Least\nSquares (COSALES) using Coordinate Descent and Proximal\nGradient Algorithms","Description":"A coordinate descent algorithm for computing the solution path of the sparse and coupled sparse asymmetric least squares, including the elastic net and (adaptive) Lasso penalized SALES and COSALES regressions.","Published":"2016-01-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SALTSampler","Version":"0.1","Title":"Efficient Sampling on the Simplex","Description":"The SALTSampler package facilitates Markov chain Monte Carlo (MCMC)\n sampling of random variables on a simplex. 
A Self-Adjusting Logit Transform\n (SALT) proposal is used so that sampling is still efficient even in difficult\n cases, such as those in high dimensions or with parameters that differ by orders\n of magnitude. Special care is also taken to maintain accuracy even when some\n coordinates approach 0 or 1 numerically. Diagnostic and graphic functions are\n included in the package, enabling easy assessment of the convergence and mixing\n of the chain within the constrained space.","Published":"2015-11-03","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"SAM","Version":"1.0.5","Title":"Sparse Additive Modelling","Description":"The package SAM targets high-dimensional predictive\n modeling (regression and classification) for complex data\n analysis. SAM is short for sparse additive modeling, and adopts\n the computationally efficient basis spline technique. We solve\n the optimization problems by various computational algorithms\n including the block coordinate descent algorithm, the fast\n iterative soft-thresholding algorithm, and the Newton method. The\n computation is further accelerated by warm-start and active-set\n tricks.","Published":"2014-02-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SAMM","Version":"0.0.1","Title":"Some Algorithms for Mixed Models","Description":"Programs for fitting Gaussian linear mixed models (LMM).","Published":"2016-07-10","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"Sample.Size","Version":"1.0","Title":"Sample size calculation","Description":"Computes the required sample size using the optimal designs with multiple constraints proposed in Mayo et al. (2010). 
This optimal method is designed for two-arm, randomized phase II clinical trials, and the required sample size can be optimized either using fixed or flexible randomization allocation ratios.","Published":"2013-12-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SamplerCompare","Version":"1.2.7","Title":"A Framework for Comparing the Performance of MCMC Samplers","Description":"A framework for running sets of MCMC samplers on sets of\n distributions with a variety of tuning parameters, along with plotting\n functions to visualize the results of those simulations. See sc-intro.pdf\n for an introduction.","Published":"2015-07-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"sampleSelection","Version":"1.0-4","Title":"Sample Selection Models","Description":"Two-step estimation\n and maximum likelihood estimation\n of Heckman-type sample selection models:\n standard sample selection models (Tobit-2)\n and endogenous switching regression models (Tobit-5).","Published":"2015-11-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"samplesize","Version":"0.2-4","Title":"Sample Size Calculation for Various t-Tests and Wilcoxon-Test","Description":"Computes sample size for Student's t-test and for the Wilcoxon-Mann-Whitney test for categorical data. The t-test function allows paired and unpaired (balanced / unbalanced) designs as well as homogeneous and heterogeneous variances. 
The Wilcoxon function allows for ties.","Published":"2016-12-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"samplesize4surveys","Version":"3.1.2.400","Title":"Sample Size Calculations for Complex Surveys","Description":"Computes the required sample size for estimation of totals, means\n and proportions under complex sampling designs.","Published":"2016-09-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"samplesizelogisticcasecontrol","Version":"0.0.6","Title":"Sample Size Calculations for Case-Control Studies","Description":"To determine sample size for case-control studies to be analyzed using logistic regression.","Published":"2017-02-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SampleSizeMeans","Version":"1.1","Title":"Sample size calculations for normal means","Description":"A set of R functions for calculating sample size\n requirements using three different Bayesian criteria in the\n context of designing an experiment to estimate a normal mean or\n the difference between two normal means. Functions for\n calculation of required sample sizes for the Average Length\n Criterion, the Average Coverage Criterion and the Worst Outcome\n Criterion in the context of normal means are provided.\n Functions for both the fully Bayesian and the mixed\n Bayesian/likelihood approaches are provided.","Published":"2012-12-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SampleSizeProportions","Version":"1.0","Title":"Calculating sample size requirements when estimating the\ndifference between two binomial proportions","Description":"A set of R functions for calculating sample size\n requirements using three different Bayesian criteria in the\n context of designing an experiment to estimate the difference\n between two binomial proportions. 
Functions for calculation of\n required sample sizes for the Average Length Criterion, the\n Average Coverage Criterion and the Worst Outcome Criterion in\n the context of binomial observations are provided. In all\n cases, estimation of the difference between two binomial\n proportions is considered. Functions for both the fully\n Bayesian and the mixed Bayesian/likelihood approaches are\n provided.","Published":"2009-12-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sampling","Version":"2.8","Title":"Survey Sampling","Description":"Functions for drawing and calibrating samples.","Published":"2016-12-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"samplingbook","Version":"1.2.2","Title":"Survey Sampling Procedures","Description":"Sampling procedures from the book 'Stichproben - Methoden\n und praktische Umsetzung mit R' by Goeran Kauermann and Helmut\n Kuechenhoff (2010).","Published":"2017-05-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"samplingDataCRT","Version":"1.0","Title":"Sampling Data Within Different Study Designs for Cluster\nRandomized Trials","Description":"The package provides the possibility of sampling complete datasets \n from a normal distribution to simulate cluster randomized trials for different study designs. ","Published":"2017-02-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"samplingEstimates","Version":"0.1-3","Title":"Sampling Estimates","Description":"Functions to compute estimates from survey data. This package is a user-friendly wrapper of the samplingVarEst package. It considers that the user is more familiar with practical survey data than with research on survey sampling (variance estimation). 
More functionalities are on the way.","Published":"2014-05-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SamplingStrata","Version":"1.1","Title":"Optimal Stratification of Sampling Frames for Multipurpose\nSampling Surveys","Description":"In the field of stratified sampling design, this package\n offers an approach for the determination of the best\n stratification of a sampling frame, the one that ensures the\n minimum sample cost under the condition of satisfying precision\n constraints in a multivariate and multidomain case. This\n approach is based on the use of the genetic algorithm: each\n solution (i.e. a particular partition in strata of the sampling\n frame) is considered as an individual in a population; the\n fitness of all individuals is evaluated applying the\n Bethel-Chromy algorithm to calculate the sampling size\n satisfying precision constraints on the target estimates.\n Functions in the package allow the user to: (a) analyse the obtained\n results of the optimisation step; (b) assign the new strata\n labels to the sampling frame; (c) select a sample from the new\n frame according to the best allocation. \n Functions for the execution of the genetic algorithm are a modified \n version of the functions in the 'genalg' package. ","Published":"2016-01-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"samplingVarEst","Version":"1.0-2","Title":"Sampling Variance Estimation","Description":"Functions to calculate some point estimators and to estimate their variance under unequal probability sampling without replacement. Single and two stage sampling designs are considered. Some approximations for the second order inclusion probabilities are also available (sample and population based). 
A variety of Jackknife variance estimators are implemented.","Published":"2016-06-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sampSurf","Version":"0.7-3","Title":"Sampling Surface Simulation for Areal Sampling Methods","Description":"Sampling surface simulation is useful in the comparison of different areal sampling methods\n in forestry, ecology and natural resources. The sampSurf package allows the simulation \n\t\t\t of numerous sampling methods for standing trees and downed woody debris in a spatial context.\n\t\t\t It also provides an S4 class and method structure that facilitates the addition of new sampling\n\t\t\t methods.","Published":"2015-05-12","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"samr","Version":"2.0","Title":"SAM: Significance Analysis of Microarrays","Description":"Significance Analysis of Microarrays","Published":"2011-06-29","License":"LGPL","snapshot_date":"2017-06-23"} {"Package":"SAMUR","Version":"0.6","Title":"Stochastic Augmentation of Matched Data Using Restriction\nMethods","Description":"Augmenting a matched data set by generating multiple stochastic, matched samples from the data using a\n multi-dimensional histogram constructed from dropping the input matched data into a multi-dimensional grid built on\n the full data set. The resulting stochastic, matched sets will likely provide a collectively higher coverage of the full\n data set compared to the single matched set. 
Each stochastic match is without duplication, thus allowing downstream\n validation techniques such as cross-validation to be applied to each set without concern for overfitting.","Published":"2015-03-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SAMURAI","Version":"1.2.1","Title":"Sensitivity Analysis of a Meta-analysis with Unpublished but\nRegistered Analytical Investigations","Description":"This package contains R functions to gauge the impact of unpublished studies upon the meta-analytic summary effect of a set of published studies. (Credits: The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 282574.)","Published":"2013-09-23","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"sand","Version":"1.0.3","Title":"Statistical Analysis of Network Data with R","Description":"Data sets for the book 'Statistical Analysis of \n Network Data with R'.","Published":"2017-03-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"sandwich","Version":"2.3-4","Title":"Robust Covariance Matrix Estimators","Description":"Model-robust standard error estimators for cross-sectional, time series, and longitudinal data.","Published":"2015-09-24","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"SanFranBeachWater","Version":"0.1.0","Title":"Downloads and Tidies the San Francisco Public Utilities\nCommission Beach Water Quality Monitoring Program Data","Description":"\n Downloads and tidies the San Francisco Public Utilities Commission Beach Water Quality Monitoring Program data. Data sets can be downloaded per beach, or the raw data can be downloaded. 
See .","Published":"2017-06-12","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"sanitizers","Version":"0.1.0","Title":"C/C++ source code to trigger Address and Undefined Behaviour\nSanitizers","Description":"Recent gcc and clang compiler versions provide functionality to detect\n memory violations and other undefined behaviour; this is often referred to as\n \"Address Sanitizer\" (or SAN) and \"Undefined Behaviour Sanitizer\" (UBSAN).\n The Writing R Extensions manual describes this in some detail in Section 4.9.\n\n This feature has to be enabled in the corresponding binary, e.g. in R, which\n is somewhat involved as it also requires a current compiler toolchain which \n is not yet widely available, or in the case of Windows, not available at all\n (via the common Rtools mechanism).\n\n As an alternative, the pre-built Docker containers available via the Docker Hub\n at https://registry.hub.docker.com/u/eddelbuettel/docker-debian-r/ can be used\n on Linux, and via boot2docker on Windows and OS X.\n\n This package then provides a means of testing the compiler setup, as the\n known code failures provided in the sample code here should be detected\n correctly, whereas a default build of R will let the package pass.\n\n The code samples are based on the examples from the Address Sanitizer\n Wiki at https://code.google.com/p/address-sanitizer/wiki/AddressSanitizer.","Published":"2014-08-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sankey","Version":"1.0.0","Title":"Sankey Diagrams","Description":"Sankey plots illustrate the flow of information or material.","Published":"2015-12-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sanon","Version":"1.5","Title":"Stratified Analysis with Nonparametric Covariable Adjustment","Description":"There are several functions to implement the method for analysis in a randomized clinical trial with strata, with the following key features. 
A stratified Mann-Whitney estimator addresses the comparison between two randomized groups for a strictly ordinal response variable. The multivariate vector of such stratified Mann-Whitney estimators for multivariate response variables can be considered for one or more response variables such as in repeated measurements and these can have missing completely at random (MCAR) data. Non-parametric covariance adjustment is also considered with the minimal assumption of randomization. The p-value for hypothesis test and confidence interval are provided.","Published":"2015-10-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sapa","Version":"2.0-2","Title":"Spectral Analysis for Physical Applications","Description":"Software for the book Spectral Analysis for Physical\n Applications, Donald B. Percival and Andrew T. Walden,\n Cambridge University Press, 1993.","Published":"2016-05-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SAPP","Version":"1.0.7","Title":"Statistical Analysis of Point Processes","Description":"Functions for statistical analysis of point processes.","Published":"2016-10-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sarima","Version":"0.4-5","Title":"Simulation and Prediction with Seasonal ARIMA Models","Description":"\n Functions, classes and methods for time series modelling with ARIMA\n and related models. The aim of the package is to provide consistent\n interface for the user. For example, a single function\n autocorrelations() computes various kinds of\n theoretical and sample autocorrelations. 
This is work in progress,\n see the documentation and vignettes for the current functionality.","Published":"2017-05-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SARP.moodle","Version":"0.3.8","Title":"XML Output Functions for Easy Creation of Moodle Questions","Description":"Provides a set of basic functions for creating Moodle XML\n output files suited for importing questions in Moodle (a learning\n management system, see for more information).","Published":"2017-04-01","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"sas7bdat","Version":"0.5","Title":"SAS Database Reader (experimental)","Description":"Read SAS files in the sas7bdat data format.","Published":"2014-06-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SAScii","Version":"1.0","Title":"Import ASCII files directly into R using only a SAS input script","Description":"Using any importation code designed for SAS users to read\n ASCII files into sas7bdat files, the SAScii package parses\n through the INPUT block of a (.sas) syntax file to design the\n parameters needed for a read.fwf function call. 
This allows\n the user to specify the location of the ASCII (often a .dat)\n file and the location of the .sas syntax file, and then load\n the data frame directly into R in just one step.","Published":"2012-08-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SASmixed","Version":"1.0-4","Title":"Data sets from \"SAS System for Mixed Models\"","Description":"Data sets and sample lmer analyses corresponding\n to the examples in Littell, Milliken, Stroup and Wolfinger\n (1996), \"SAS System for Mixed Models\", SAS Institute.","Published":"2014-03-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SASPECT","Version":"0.1-1","Title":"Significant AnalysiS of PEptide CounTs","Description":"A statistical method for significant analysis of\n comparative proteomics based on LC-MS/MS Experiments","Published":"2008-06-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SASxport","Version":"1.5.3","Title":"Read and Write 'SAS' 'XPORT' Files","Description":"Functions for reading, listing\n the contents of, and writing 'SAS' 'xport' format files.\n The functions support reading and writing of either\n individual data frames or sets of data frames. 
Further,\n a mechanism has been provided for customizing how\n variables of different data types are stored.","Published":"2016-03-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"satellite","Version":"0.2.0","Title":"Various Functions for Handling and Manipulating Remote Sensing\nData","Description":"This smorgasbord provides a variety of functions which are useful \n for handling, manipulating and visualizing remote sensing data.","Published":"2015-09-10","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"satscanMapper","Version":"1.0.0","Title":"'SaTScan' (TM) Results Mapper","Description":"Supports the generation of maps based on the results from \n 'SaTScan' (TM) cluster analysis.\n The package handles mapping of Spatial and Spatial-Time analysis using\n the discrete Poisson, Bernoulli, and exponential models of case data generating\n cluster and location ('GIS') records containing observed, expected and observed/expected\n ratio for U. S. states (and DC), counties or census tracts of individual \n states based on the U. S. 'FIPS' codes for state, county and census tracts \n (locations) using 2000 or 2010 Census areas, 'FIPS' codes, and boundary data.\n 'satscanMapper' uses the 'SeerMapper' package for the boundary data and \n mapping of locations. Not all of the 'SaTScan' (TM) analysis and models generate\n the observed, expected and observed/expected ratio values for the clusters and \n locations.\n The user can map the observed/expected ratios for locations \n (states, counties, or census tracts) for each cluster with a p-value less than 0.05 \n or a user specified p-value. \n The locations are categorized and colored based on either the cluster's Observed/Expected \n ratio or the locations' Observed/Expected ratio. 
\n The place names are provided for each census tract using data from 'NCI', the 'HUD' crossover \n tables (Tract to Zip code) as of December, 2013, the USPS Zip code 5 database for 1999, \n and manual look ups on the USPS.gov web site.","Published":"2017-06-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"saturnin","Version":"1.1.1","Title":"Spanning Trees Used for Network Inference","Description":"Bayesian inference of graphical model structures using spanning trees.","Published":"2015-07-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SAVE","Version":"1.0","Title":"Bayesian Emulation, Calibration and Validation of Computer\nModels","Description":"Implements Bayesian statistical methodology for the \n\t\tanalysis of complex computer models.\n\t\tIt allows for the emulation, calibration, and validation of computer models, \n\t\tfollowing methodology described in Bayarri et al. 2007, Technometrics.","Published":"2017-01-10","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"saves","Version":"0.5","Title":"Fast load variables","Description":"The purpose of this package is to be able to save and load only\n the needed variables/columns of a dataframe in special binary files (tar\n archives) - which seems to be a much faster method than loading the whole\n binary object (RData files) via the load() function, or than loading columns\n from SQLite/MySQL databases via SQL commands (see vignettes). Performance\n gain on SSD drives is much more noticeable compared to the basic load()\n function. The performance improvement gained by loading only the chosen\n variables in binary format can be useful in some special cases (e.g. 
where\n merging data tables is not an option and very different datasets are needed\n for reporting), but be sure if using this package that you really need\n this, as non-standard file formats are used!","Published":"2013-12-27","License":"AGPL-3","snapshot_date":"2017-06-23"} {"Package":"saws","Version":"0.9-6.1","Title":"Small-Sample Adjustments for Wald tests Using Sandwich\nEstimators","Description":"Tests coefficients with sandwich estimator of variance and with small samples. Regression types supported are gee, linear regression, and conditional logistic regression.","Published":"2014-01-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sbart","Version":"0.1.0","Title":"Sequential BART for Imputation of Missing Covariates","Description":"Implements the sequential BART (Bayesian Additive Regression Trees) approach to impute the missing covariates. The\n algorithm applies a Bayesian nonparametric approach on factored sets of sequential conditionals of the joint \n distribution of the covariates and the missingness, applying Bayesian additive regression trees to model \n each of these univariate conditionals. Each conditional distribution is then sampled using an MCMC algorithm. The published \n article can be found at \n The package provides a function, seqBART(), which computes and returns the imputed values.","Published":"2017-03-23","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"sBF","Version":"1.1.1","Title":"Smooth Backfitting","Description":"Smooth Backfitting for additive models using the\n Nadaraya-Watson estimator.","Published":"2014-12-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sbfc","Version":"1.0.1","Title":"Selective Bayesian Forest Classifier","Description":"An MCMC algorithm for simultaneous feature selection and classification, \n and visualization of the selected features and feature interactions. 
\n An implementation of SBFC by Krakovna, Du and Liu (2015), .","Published":"2016-07-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sbgcop","Version":"0.975","Title":"Semiparametric Bayesian Gaussian copula estimation and\nimputation","Description":"This package estimates parameters of a Gaussian copula,\n treating the univariate marginal distributions as nuisance\n parameters as described in Hoff (2007). It also provides a\n semiparametric imputation procedure for missing multivariate\n data.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sBIC","Version":"0.2.0","Title":"Computing the Singular BIC for Multiple Models","Description":"Computes the sBIC for various singular model collections including:\n binomial mixtures, factor analysis models, Gaussian mixtures,\n latent forests, latent class analyses, and reduced rank regressions.","Published":"2016-10-01","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"sbioPN","Version":"1.1.0","Title":"sbioPN: Simulation of deterministic and stochastic spatial\nbiochemical reaction networks using Petri Nets","Description":"\n sbioPN is a package suited to perform simulation of deterministic and stochastic systems of biochemical reaction\n networks with spatial effects.\n Models are defined using a subset of Petri Nets, in a way that is close to how chemical reactions are defined.\n For deterministic solutions, sbioPN creates the associated system of differential equations \"on the fly\", and\n solves it with a Runge Kutta Dormand Prince 45 explicit algorithm.\n For stochastic solutions, sbioPN offers two variants of the Gillespie algorithm, or SSA.\n For hybrid deterministic/stochastic solutions,\n it employs the Haseltine and Rawlings algorithm, which partitions the system into fast and slow reactions.\n sbioPN algorithms are developed in C to achieve adequate performance.","Published":"2014-03-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} 
{"Package":"sbmSDP","Version":"0.2","Title":"Semidefinite Programming for Fitting Block Models of Equal Block\nSizes","Description":"An ADMM implementation of SDP-1, a semidefinite programming relaxation of the maximum likelihood estimator for fitting a block model. SDP-1 has a tendency to produce equal-sized blocks and is ideal for producing a form of network histogram approximating a nonparametric graphon model. Alternatively, it can be used for community detection. (This is experimental code, proceed with caution.)","Published":"2015-06-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"SBRect","Version":"0.26","Title":"Detecting structural breaks using rectangle covering\n(non-parametric method)","Description":"The package fits axes-aligned rectangles to a time series in order to find structural breaks. The algorithm encloses the time series in a number of axes-aligned rectangles and tries to minimize their area and number. As these are conflicting aims, the user has to specify a parameter alpha in [0.0,1.0]. Values close to 0 result in more breakpoints, values close to 1 in fewer. The left edges of the rectangles are the breakpoints. The package supplies two methods, computeBreakPoints(series,alpha) which returns the indices of the break points and computeRectangles(series,alpha) which returns the rectangles. The algorithm is randomised; it uses a genetic algorithm. Therefore, the break point sequence found can be different in different executions of the method on the same data, especially when used on longer series of some thousand observations. The algorithm uses a range-tree as background data structure, which makes it very fast and suited to analysing series with millions of observations. 
A detailed description can be found in Paul Fischer, Astrid Hilbert, Fast detection of structural breaks, Proceedings of Compstat 2014.","Published":"2014-07-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"sbrl","Version":"1.2","Title":"Scalable Bayesian Rule Lists Model","Description":"An implementation of Scalable Bayesian Rule Lists Algorithm.","Published":"2016-07-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SBSA","Version":"0.2.3","Title":"Simplified Bayesian Sensitivity Analysis","Description":"Simplified Bayesian Sensitivity Analysis","Published":"2014-01-31","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"sbtools","Version":"1.1.6","Title":"USGS ScienceBase Tools","Description":"Tools for interacting with U.S. Geological Survey ScienceBase \n interfaces. ScienceBase is a data cataloging and\n collaborative data management platform. Functions included for querying\n ScienceBase, and creating and fetching datasets.","Published":"2016-09-27","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"sca","Version":"0.9-0","Title":"Simple Component Analysis","Description":"Simple Component Analysis (SCA) often provides much more\n interpretable components than Principal Components (PCA) while still\n representing much of the variability in the data.","Published":"2015-09-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"scagnostics","Version":"0.2-4","Title":"Compute scagnostics - scatterplot diagnostics","Description":"Calculates graph theoretic scagnostics. Scagnostics\n describe various measures of interest for pairs of variables,\n based on their appearance on a scatterplot. 
They are a useful\n tool for discovering interesting or unusual scatterplots from a\n scatterplot matrix, without having to look at every individual\n plot.","Published":"2012-11-05","License":"file LICENSE","snapshot_date":"2017-06-23"} {"Package":"Scale","Version":"1.0.4","Title":"Likert Type Questionnaire Item Analysis","Description":"Provides the Scale class and corresponding functions, in order to facilitate data input for scale construction. Reverse items and alternative orders of administration are dealt with by the program. Computes reliability statistics and confirmatory single factor loadings. It suggests item deletions and produces basic text output in English, for incorporation in reports. Returns list objects of all relevant functions from other packages (see Depends).","Published":"2015-05-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"scales","Version":"0.4.1","Title":"Scale Functions for Visualization","Description":"Graphical scales map data to aesthetics, and provide\n methods for automatically determining breaks and labels\n for axes and legends.","Published":"2016-11-09","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"scalpel","Version":"1.0.0","Title":"Processes Calcium Imaging Data","Description":"Identifies the locations of neurons, and estimates their calcium concentrations over time using the SCALPEL method proposed in Petersen, A., Simon, N., and Witten, D. 
SCALPEL: Extracting Neurons from Calcium Imaging Data .","Published":"2017-03-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"scalreg","Version":"1.0","Title":"Scaled sparse linear regression","Description":"Algorithms for fitting scaled sparse linear regression and estimating precision matrices","Published":"2013-12-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"scam","Version":"1.2-1","Title":"Shape Constrained Additive Models","Description":"Routines for generalized additive modelling under shape\n constraints on the component functions of the linear predictor\n (Pya and Wood, 2015) .\n Models can contain multiple shape constrained (univariate\n and/or bivariate) and unconstrained terms. The routines of gam() \n in package 'mgcv' are used for setting up the model matrix, \n printing and plotting the results. Penalized likelihood\n maximization based on Newton-Raphson method is used to fit a\n model with multiple smoothing parameter selection by GCV or\n UBRE/AIC.","Published":"2017-02-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"scan","Version":"0.20","Title":"Single-Case Data Analyses for Single and Multiple AB Designs","Description":"A collection of procedures for analysing single-case data of an AB-design. Some procedures support multiple-baseline designs.","Published":"2016-10-14","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"scanstatistics","Version":"0.1.0","Title":"Space-Time Anomaly Detection using Scan Statistics","Description":"Detection of anomalous space-time clusters using the scan \n statistics methodology. Focuses on prospective surveillance of data streams, \n scanning for clusters with ongoing anomalies. 
Hypothesis testing is made \n possible by the generation of Monte Carlo p-values.","Published":"2016-09-29","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"scape","Version":"2.3-1","Title":"Statistical Catch-at-Age Plotting Environment","Description":"Import, plot, and diagnose results from statistical\n catch-at-age models, used in fisheries stock assessment.","Published":"2017-06-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"scar","Version":"0.2-1","Title":"Shape-Constrained Additive Regression: a Maximum Likelihood\nApproach","Description":"This package computes the maximum likelihood estimator of the generalised additive and index regression with shape constraints. Each additive component function is assumed to obey one of the nine possible shape restrictions: linear, increasing, decreasing, convex, convex increasing, convex decreasing, concave, concave increasing, or concave decreasing.","Published":"2014-04-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"scaRabee","Version":"1.1-3","Title":"Optimization Toolkit for Pharmacokinetic-Pharmacodynamic Models","Description":"scaRabee is a port of the Scarabee toolkit originally\n written as a Matlab-based application. It provides a framework\n for simulation and optimization of pharmacokinetic-pharmacodynamic \n models at the individual and population level. 
It is built on top of the\n neldermead package, which provides the direct search algorithm proposed \n by Nelder and Mead for model optimization.","Published":"2014-08-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"SCAT","Version":"0.3.0","Title":"Summary based Conditional Association Test","Description":"Conditional association test based on summary data from genome-wide association study (GWAS) adjusting for heterogeneity in SNP coverage.","Published":"2017-06-23","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"scatterD3","Version":"0.8.1","Title":"D3 JavaScript Scatterplot from R","Description":"Creates 'D3' 'JavaScript' scatterplots from 'R' with interactive\n features: panning, zooming, tooltips, etc.","Published":"2016-12-13","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"scatterpie","Version":"0.0.7","Title":"Scatter Pie Plot","Description":"Creates scatterpie plots, especially useful for plotting pies on a\n map.","Published":"2017-03-22","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"scatterplot3d","Version":"0.3-40","Title":"3D Scatter Plot","Description":"Plots a three dimensional (3D) point cloud.","Published":"2017-04-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SCBmeanfd","Version":"1.2.2","Title":"Simultaneous Confidence Bands for the Mean of Functional Data","Description":"Statistical methods for estimating and inferring the mean of functional data. The methods include simultaneous confidence bands, local polynomial fitting, bandwidth selection by plug-in and cross-validation, goodness-of-fit tests for parametric models, equality tests for two-sample problems, and plotting functions. 
","Published":"2016-12-27","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"scclust","Version":"0.1.1","Title":"Size-Constrained Clustering","Description":"\n Provides wrappers for 'scclust', a C library for computationally efficient\n size-constrained clustering with near-optimal performance.\n See for more information.","Published":"2017-05-22","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"scdhlm","Version":"0.3.1","Title":"Estimating Hierarchical Linear Models for Single-Case Designs","Description":"Provides a set of tools for estimating hierarchical linear\n models and effect sizes based on data from single-case designs. \n Functions are provided for calculating standardized mean difference effect sizes that \n are directly comparable to standardized mean differences estimated from between-subjects randomized experiments,\n as described in Hedges, Pustejovsky, and Shadish (2012) ; \n Hedges, Pustejovsky, and Shadish (2013) ; and \n Pustejovsky, Hedges, and Shadish (2014) . \n Includes an interactive web interface.","Published":"2016-12-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"scenario","Version":"1.0","Title":"Construct Reduced Trees with Predefined Nodal Structures","Description":"Uses the neural gas algorithm to construct\n a scenario tree for use in multi-stage stochastic programming.\n The primary input is a set of initial scenarios or realizations\n of a disturbance. The scenario tree nodal structure must be\n predefined using a scenario tree nodal partition matrix.","Published":"2016-02-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SCEPtER","Version":"0.2-1","Title":"Stellar CharactEristics Pisa Estimation gRid","Description":"SCEPtER pipeline for estimating the stellar age, mass, and radius\n given observational \n effective temperature, [Fe/H], and astroseismic\n\tparameters. 
The results are obtained adopting a maximum likelihood\n\ttechnique over a grid of pre-computed stellar models.","Published":"2015-01-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SCEPtERbinary","Version":"0.1-1","Title":"Stellar CharactEristics Pisa Estimation gRid for Binary Systems","Description":"SCEPtER pipeline for estimating the stellar age for double-lined detached binary systems. The observational constraints adopted in the recovery are the effective temperature, the metallicity [Fe/H], the mass, and the radius of the two stars. The results are obtained adopting a maximum likelihood technique over a grid of pre-computed stellar models.","Published":"2014-12-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SCGLR","Version":"2.0.3","Title":"Supervised Component Generalized Linear Regression","Description":"The Fisher Scoring Algorithm is extended so as to combine Partial\n Least Squares regression with Generalized Linear Model estimation in the\n multivariate context.","Published":"2016-03-16","License":"CeCILL-2 | GPL-2","snapshot_date":"2017-06-23"} {"Package":"SchemaOnRead","Version":"1.0.2","Title":"Automated Schema on Read","Description":"Provides schema-on-read tools including a single function call (e.g., schemaOnRead('filename')) that reads text ('TXT'), comma separated value ('CSV'), raster image ('BMP', 'PNG', 'GIF', 'TIFF', and 'JPG'), R data ('RDS'), HDF5 ('H5'), NetCDF ('CS'), spreadsheet ('XLS', 'XLSX', 'ODS', and 'DIF'), Weka Attribute-Relation File Format ('ARFF'), Epi Info ('REC'), SPSS ('SAV'), Systat ('SYS'), and Stata ('DTA') files. It also recursively reads folders (e.g., schemaOnRead('folder')), returning a nested list of the contained elements.","Published":"2015-12-24","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"scholar","Version":"0.1.4","Title":"Analyse Citation Data from Google Scholar","Description":"Provides functions to extract citation data from Google\n Scholar. 
Convenience functions are also provided for comparing\n multiple scholars and predicting future h-index values.","Published":"2015-11-21","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"schoolmath","Version":"0.4","Title":"Functions and datasets for math used in school","Description":"This package contains functions and datasets for math\n taught in school. A main focus is on prime calculation.","Published":"2012-07-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"schoRsch","Version":"1.4","Title":"Tools for Analyzing Factorial Experiments","Description":"Offers a helping hand to psychologists and other behavioral scientists who routinely deal with experimental data from factorial experiments. It includes several functions to format output from other R functions according to the style guidelines of the APA (American Psychological Association). This formatted output can be copied directly into manuscripts to facilitate data reporting. These features are backed up by a toolkit of several small helper functions, e.g., offering out-of-the-box outlier removal. The package lends its name to Georg \"Schorsch\" Schuessler, ingenious technician at the Department of Psychology III, University of Wuerzburg.","Published":"2017-02-14","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"schumaker","Version":"1.0","Title":"Schumaker Shape-Preserving Spline","Description":"This is a shape preserving spline which is guaranteed\n to be monotonic and concave or convex if the data is monotonic\n and concave or convex. 
It does not use any optimisation and is\n therefore quick and smoothly converges to a fixed point in\n economic dynamics problems including value function iteration.\n It also automatically gives the first two derivatives of the\n spline and options for determining behaviour when evaluated\n outside the interpolation domain.","Published":"2017-05-20","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"schwartz97","Version":"0.0.6","Title":"A package on the Schwartz two-factor commodity model","Description":"This package provides detailed functionality for working with the Schwartz 1997 two-factor commodity model. Essentially, it contains pricing formulas for futures and European options and the standard d/p/q/r functions for the distribution of the state variables and futures prices. In addition, a parameter estimation procedure is contained together with many utilities as filtering and plotting functionality. This package is accompanied by futures data of ten commodities.","Published":"2014-02-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SCI","Version":"1.0-2","Title":"Standardized Climate Indices Such as SPI, SRI or SPEI","Description":"Functions for generating Standardized Climate Indices (SCI). \n\t\tSCI is a transformation of (smoothed) climate (or environmental)\n\t\ttime series that removes seasonality and forces the data to\n\t\ttake values of the standard normal distribution. SCI was \n\t\toriginally developed for precipitation. 
In this case it is \n\t\tknown as the Standardized Precipitation Index (SPI).","Published":"2016-05-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"scidb","Version":"2.0.0","Title":"An R Interface to SciDB","Description":"An R interface to the 'SciDB' array database .","Published":"2017-04-14","License":"AGPL-3","snapshot_date":"2017-06-23"} {"Package":"SciencesPo","Version":"1.4.1","Title":"A Tool Set for Analyzing Political Behavior Data","Description":"Provides functions for analyzing elections and political behavior\n data, including measures of political fragmentation, seat apportionment, and\n small data visualization graphs.","Published":"2016-08-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"scientoText","Version":"0.1","Title":"Text & Scientometric Analytics","Description":"It calculates bibliometric indicators from bibliometric data. It also performs pattern analysis using the text part of bibliometric data. The bibliometric data are obtained mainly from Web of Science and Scopus.","Published":"2016-07-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"scio","Version":"0.6.1","Title":"Sparse Column-wise Inverse Operator","Description":"Sparse Column-wise Inverse Operator for estimating the inverse covariance matrix. Note that this is a preliminary version accompanying the arXiv paper (arXiv:1203.3896) in 2012. 
This version contains only the minimal set of functions for estimation and cross validation.","Published":"2014-04-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"sciplot","Version":"1.1-1","Title":"Scientific Graphing Functions for Factorial Designs","Description":"A collection of functions that creates graphs with error\n bars for data collected from one-way or higher factorial\n designs.","Published":"2017-06-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SciViews","Version":"0.9-5","Title":"SciViews GUI API - Main package","Description":"Functions to install SciViews additions to R, and more (various) tools","Published":"2014-03-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"sclero","Version":"0.2","Title":"Measure Growth Patterns and Align Sampling Spots in Photographs","Description":"Provides functions to measure growth patterns and align\n sampling spots in chronologically deposited materials. The package is\n intended for the fields of sclerochronology, dendrochronology and geology.","Published":"2016-01-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"SCMA","Version":"1.2","Title":"Single-Case Meta-Analysis","Description":"Perform meta-analysis of single-case experiments, including calculating various effect size measures (SMD, PND and PEM) and probability combining (additive and multiplicative method).","Published":"2017-03-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"scmamp","Version":"0.2.55","Title":"Statistical Comparison of Multiple Algorithms in Multiple\nProblems","Description":"Given a matrix with results of different algorithms for different\n problems, the package uses statistical tests and corrections to assess the\n differences between algorithms.","Published":"2016-10-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"score","Version":"1.0.2","Title":"A Package to Score Behavioral Questionnaires","Description":"Provides routines for 
scoring behavioral questionnaires. Includes scoring procedures for the 'International Physical Activity Questionnaire (IPAQ)' . Compares physical functional performance to the age- and gender-specific normal ranges. ","Published":"2015-06-03","License":"GNU General Public License (>= 3)","snapshot_date":"2017-06-23"} {"Package":"ScoreGGUM","Version":"1.0","Title":"Score Persons Using the Generalized Graded Unfolding Model","Description":"Estimate GGUM Person Parameters Using Pre-Calibrated Item Parameters and Binary or Graded Disagree-Agree Responses","Published":"2014-11-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"scorer","Version":"0.2.0","Title":"Quickly Score Models in Data Science and Machine Learning","Description":"A set of tools for quickly scoring models in data science and\n machine learning. This toolset is written in C++ for blazing fast performance.","Published":"2016-02-01","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"SCORER2","Version":"0.99.0","Title":"SCORER 2.0: an algorithm for distinguishing parallel dimeric and\ntrimeric coiled-coil sequences","Description":"This package contains the functions necessary to run the SCORER 2.0 algorithm. SCORER 2.0 can be used to differentiate between parallel dimeric and trimeric coiled-coil sequences, which are the two most frequent coiled-coil structures observed naturally. As such, SCORER 2.0 is particularly useful for researchers looking to characterize novel coiled-coil sequences. It may also be used to assist in the structural characterization of synthetic coiled-coil sequences. Also included in this package are functions that allow the user to retrain the SCORER 2.0 algorithm using user-defined training data.","Published":"2014-01-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"scoring","Version":"0.5-1","Title":"Proper scoring rules","Description":"Evaluating probabilistic forecasts via proper scoring rules. 
scoring implements the beta, power, and pseudospherical families of proper scoring rules, along with ordered versions of the latter two families. Included among these families are popular rules like the Brier (quadratic) score, logarithmic score, and spherical score. For two-alternative forecasts, also includes functionality for plotting scores that one would obtain under specific scoring rules.","Published":"2014-07-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"scoringRules","Version":"0.9.2","Title":"Scoring Rules for Parametric and Simulated Distribution\nForecasts","Description":"Dictionary-like reference for computing scoring rules in a wide\n range of situations. Covers both parametric forecast distributions (such as\n mixtures of Gaussians) and distributions generated via simulation.","Published":"2017-01-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ScottKnott","Version":"1.2-5","Title":"The ScottKnott Clustering Algorithm","Description":"Division of an ANOVA experiment treatment means into\n homogeneous distinct groups using the clustering method of\n Scott & Knott","Published":"2014-09-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ScottKnottESD","Version":"1.2.2","Title":"The Scott-Knott Effect Size Difference (ESD) Test","Description":"An enhancement of the Scott-Knott test (which clusters distributions\n into statistically distinct ranks) that takes effect size into consideration \n [Tantithamthavorn et al., (2017) ].","Published":"2017-05-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"scout","Version":"1.0.4","Title":"Implements the Scout Method for Covariance-Regularized\nRegression","Description":"Implements the Scout method for regression, described in \"Covariance-regularized regression and classification for high-dimensional problems\", by Witten and Tibshirani (2008), Journal of the Royal Statistical Society, Series B 71(3): 615-636. 
","Published":"2015-07-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SCperf","Version":"1.0","Title":"Supply Chain Perform","Description":"The package implements different inventory models, the\n bullwhip effect and other supply chain performance variables.","Published":"2012-02-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"scphaser","Version":"1.0.0","Title":"Phase Variants Within Genes Using Allele Counts","Description":"Phase variants within genes using allele counts from single-cell\n RNA-seq data.","Published":"2016-08-31","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ScrabbleScore","Version":"1.0","Title":"Calculates Scrabble score for strings","Description":"Given a word, produces that word's Scrabble score. Unlike many naive implementations, this package takes into consideration the distribution of letters in Scrabble. So a word like 'zzz' will be scored '10' rather than '30'.","Published":"2013-10-09","License":"MIT License","snapshot_date":"2017-06-23"} {"Package":"scrapeR","Version":"0.1.6","Title":"Tools for Scraping Data from HTML and XML Documents","Description":"Tools for Scraping Data from Web-Based Documents","Published":"2010-02-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ScreenClean","Version":"1.0.1","Title":"Screen and clean variable selection procedures","Description":"Routines for a collection of screen-and-clean type\n variable selection procedures, including UPS and GS.","Published":"2012-10-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"scrime","Version":"1.3.3","Title":"Analysis of High-Dimensional Categorical Data such as SNP Data","Description":"Tools for the analysis of high-dimensional data developed/implemented\n at the group \"Statistical Complexity Reduction In Molecular Epidemiology\" (SCRIME).\n Main focus is on SNP data. 
But most of the functions can also be applied to other\n types of categorical data.","Published":"2013-09-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"scriptests","Version":"1.0-16","Title":"Transcript-Based Unit Tests that are Easy to Create and Maintain","Description":"Support for using .Rt (transcript) tests\n in the tests directory of a package. Provides more\n convenience and features than the standard .R/.Rout.save\n tests. Tests can be run under R CMD check and also\n interactively. Provides source.pkg() for quickly loading\n code, DLLs, and data from a package for use in an\n edit/compile/test development cycle.","Published":"2016-07-22","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"scrm","Version":"1.7.2-0","Title":"Simulating the Evolution of Biological Sequences","Description":"A coalescent simulator that allows the rapid simulation of\n biological sequences under neutral models of evolution. Different to other\n coalescent based simulations, it has an optional approximation parameter that\n allows for high accuracy while maintaining a linear run time cost for long\n sequences. It is optimized for simulating massive data sets as produced by Next-\n Generation Sequencing technologies for up to several thousand sequences.","Published":"2016-12-11","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"SCRSELECT","Version":"1.1-3","Title":"Performs Bayesian Variable Selection on the Covariates in a\nSemi-Competing Risks Model","Description":"Contains four functions used in the DIC-tau_g procedure. SCRSELECT() and SCRSELECTRUN() uses Stochastic Search Variable Selection to select important\n covariates in the three hazard functions of a semi-competing risks model. These functions perform the Gibbs sampler for variable selection and a Metropolis-Hastings-Green sampler for the number of split points and parameters for the\n three baseline hazard function. 
The function SCRSELECT() returns the posterior sample of all quantities sampled in the Gibbs sampler after a burn-in period to a desired\n file location, while the function SCRSELECTRUN() returns posterior values of important quantities to the DIC-Tau_g procedure in a list.\n The function DICTAUG() returns a list containing the DIC values for the unique models visited by the DIC-Tau_g grid search.\n The function ReturnModel() uses SCRSELECTRUN() and DICTAUG() to return a summary of the posterior coefficient vectors for the optimal model along with saving this posterior sample to a desired path location.","Published":"2017-05-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SCRT","Version":"1.2.1","Title":"Single-Case Randomization Tests","Description":"Design single-case phase, alternation and multiple-baseline experiments, and conduct randomization tests on data gathered by means of such designs.","Published":"2017-06-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"scrubr","Version":"0.1.1","Title":"Clean Biological Occurrence Records","Description":"Clean biological occurrence records. Includes functionality\n for cleaning based on various aspects of spatial coordinates,\n unlikely values due to political 'centroids', coordinates based on\n where collections of specimens are held, and more.","Published":"2016-03-08","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"scrypt","Version":"0.1.1","Title":"scrypt key derivation functions for R","Description":"scrypt is an R package for working with scrypt. Scrypt is a\n password-based key derivation function created by Colin Percival. 
The\n algorithm was specifically designed to make it costly to perform\n large-scale custom hardware attacks by requiring large amounts of memory.","Published":"2016-10-25","License":"FreeBSD | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"scs","Version":"1.1-1","Title":"Splitting Conic Solver","Description":"Solves convex cone programs via operator splitting. Can solve:\n linear programs (LPs), second-order cone programs (SOCPs), semidefinite programs\n (SDPs), exponential cone programs (ECPs), and power cone programs (PCPs), or\n problems with any combination of those cones. SCS uses AMD (a set of routines for permuting sparse matrices prior to factorization) and LDL (a sparse LDL' factorization and solve package) from 'SuiteSparse' ().","Published":"2016-02-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"scuba","Version":"1.9-1","Title":"Diving Calculations and Decompression Models","Description":"Code for describing and manipulating scuba diving profiles \n\t(depth-time curves) and decompression models, \n for calculating the predictions of decompression models,\n\tfor calculating maximum no-decompression time and decompression tables,\n\tand for performing mixed gas calculations. ","Published":"2017-04-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SCVA","Version":"1.2.1","Title":"Single-Case Visual Analysis","Description":"Make graphical representations of single case data and transform graphical displays back to raw data. 
The package also includes tools for visually analyzing single-case data, by displaying central location, variability and trend.","Published":"2017-06-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"scvxclustr","Version":"0.1","Title":"Sparse Convex Clustering","Description":"Alternating Minimization Algorithm (AMA) and Alternating Direction Method of Multipliers (ADMM) splitting methods for sparse convex clustering.","Published":"2016-10-13","License":"CC BY-NC-SA 4.0","snapshot_date":"2017-06-23"} {"Package":"sda","Version":"1.3.7","Title":"Shrinkage Discriminant Analysis and CAT Score Variable Selection","Description":"Provides an efficient framework for \n high-dimensional linear and diagonal discriminant analysis with \n variable selection. The classifier is trained using James-Stein-type \n shrinkage estimators and predictor variables are ranked using \n correlation-adjusted t-scores (CAT scores). Variable selection error \n is controlled using false non-discovery rates or higher criticism.","Published":"2015-07-08","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"SDaA","Version":"0.1-3","Title":"Sampling: Design and Analysis","Description":"Functions and Datasets from Lohr, S. (1999), Sampling:\n Design and Analysis, Duxbury.","Published":"2014-09-04","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"sdat","Version":"1.0","Title":"Signal Detection via Adaptive Test","Description":"Test the global null in linear models using marginal approach.","Published":"2016-04-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sdcMicro","Version":"5.0.2","Title":"Statistical Disclosure Control Methods for Anonymization of\nMicrodata and Risk Estimation","Description":"Data from statistical agencies and other institutions are mostly\n confidential. This package can be used for the generation of anonymized\n (micro)data, i.e. for the creation of public- and scientific-use files. 
In\n addition, various risk estimation methods are included. Note that the package\n includes a graphical user interface that allows the use of various methods of this\n package.","Published":"2017-05-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"sdcTable","Version":"0.22.6","Title":"Methods for Statistical Disclosure Control in Tabular Data","Description":"Methods for statistical disclosure control in\n tabular data such as primary and secondary cell suppression are covered in\n this package.","Published":"2017-05-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sdcTarget","Version":"0.9-11","Title":"Statistical Disclosure Control Substitution Matrix Calculator","Description":"Classes and methods to calculate and evaluate target matrices for\n statistical disclosure control.","Published":"2014-11-03","License":"CC BY-NC 4.0","snapshot_date":"2017-06-23"} {"Package":"SDD","Version":"1.2","Title":"Serial Dependence Diagrams","Description":"Allows for computing (and by default plotting) different types of serial dependence diagrams.","Published":"2015-02-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SDDE","Version":"1.0.1","Title":"Shortcuts, Detours and Dead Ends (SDDE) Path Types in Genome\nSimilarity Networks","Description":"Compares the evolution of an original network X to an augmented network Y by counting the number of Shortcuts, Detours, Dead Ends (SDDE), equal paths and disconnected nodes. 
","Published":"2015-08-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"sddpack","Version":"0.9","Title":"Semidiscrete Decomposition","Description":"The semidiscrete decomposition (SDD) approximates a matrix\n as a weighted sum of outer products formed by vectors with\n entries constrained to be in the set {-1, 0, 1}.","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"sde","Version":"2.0.15","Title":"Simulation and Inference for Stochastic Differential Equations","Description":"Companion package to the book Simulation and Inference for\n Stochastic Differential Equations With R Examples, ISBN\n 978-0-387-75838-1, Springer, NY.","Published":"2016-04-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sdef","Version":"1.6","Title":"Synthesizing List of Differentially Expressed Features","Description":"Performs two tests to evaluate if the\n experiments are associated and returns a list of interesting\n features common to all the experiments.","Published":"2015-07-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SDEFSR","Version":"0.7.1.0","Title":"Subgroup Discovery with Evolutionary Fuzzy Systems in R","Description":"Implementation of evolutionary fuzzy systems for the data mining task called\n \"subgroup discovery\". It also provides a Shiny App\n to make the analysis easier. The algorithms work with data sets provided in\n KEEL, ARFF and CSV format and also with data.frame objects. 
","Published":"2016-07-13","License":"LGPL (>= 3) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"sdm","Version":"1.0-32","Title":"Species Distribution Modelling","Description":"An extensible R framework for developing species distribution\n models using individual and community-based approaches, generating ensembles of\n models, evaluating the models, and predicting species potential distributions in\n space and time.","Published":"2016-12-02","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"SDMPlay","Version":"1.0","Title":"Species Distribution Modelling Playground","Description":"Functions provided by this pedagogic package allow users to compute models with two popular machine learning approaches, BRT (Boosted Regression Trees) and MaxEnt (Maximum Entropy), applied to sets of marine biological and environmental data. They include the possibility of managing the main parameters for the construction of the models. Classic tools to evaluate model performance are provided (Area Under the Curve, omission rate and confusion matrix, map standard deviation) and are completed with tools to perform null models. The biological dataset includes original occurrences of two species of the class Echinoidea (sea urchins) present on the Kerguelen Plateau that show contrasting ecological niches. The environmental dataset includes the corresponding statistics for 15 abiotic and biotic descriptors summarized for the Kerguelen Plateau and for different periods in a raster format. The package can be used for practicals to teach and learn the basics of species distribution modelling. Maps of potential distribution can be produced based on the example data included in the package, which provides prior observations of the influence of spatial and temporal heterogeneities on modelling performance. 
The user can also provide their own datasets to use the modelling functions.","Published":"2016-08-08","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"sdmpredictors","Version":"0.2.5","Title":"Species Distribution Modelling Predictor Datasets","Description":"Terrestrial and marine predictors for species distribution modelling\n from multiple sources, including WorldClim,\n ENVIREM, Bio-ORACLE\n and MARSPEC.","Published":"2017-03-20","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"SDMTools","Version":"1.1-221","Title":"Species Distribution Modelling Tools: Tools for processing data\nassociated with species distribution modelling exercises","Description":"This package provides a set of tools for post processing the\n outcomes of species distribution modeling exercises. It includes novel\n methods for comparing models and tracking changes in distributions through\n time. It further includes methods for visualizing outcomes, selecting\n thresholds, calculating measures of accuracy and landscape fragmentation\n statistics, etc. This package was made possible in part by financial\n support from the Australian Research Council & ARC Research Network for\n Earth System Science.","Published":"2014-08-05","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"sdmvspecies","Version":"0.3.2","Title":"Create Virtual Species for Species Distribution Modelling","Description":"A software package that helps users create virtual species for species distribution modelling. It includes\n several methods to help users create virtual species distribution maps.\n Those maps can be used for Species Distribution Modelling (SDM) studies. 
SDM uses\n environmental data for sites of occurrence of a species to predict all the sites\n where the environmental conditions are suitable for the species to persist, and\n may be expected to occur.","Published":"2015-12-30","License":"AGPL-3","snapshot_date":"2017-06-23"} {"Package":"sdnet","Version":"2.3.8","Title":"Soft-Discretization-Based Bayesian Network Inference","Description":"Fitting discrete Bayesian networks using soft-discretized data. Soft-discretization is based on a mixture of normal distributions. Also implemented is supervised Bayesian network learning employing Kullback-Leibler divergence. ","Published":"2016-05-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sdPrior","Version":"0.3","Title":"Scale-Dependent Hyperpriors in Structured Additive\nDistributional Regression","Description":"Utility functions for scale-dependent and alternative hyperpriors.","Published":"2015-07-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"sdprisk","Version":"1.1-5","Title":"Measures of Risk for the Compound Poisson Risk Process with\nDiffusion","Description":"Based on the compound Poisson risk process that is perturbed by\n a Brownian motion, saddlepoint approximations to some measures of risk are\n provided. Various approximation methods for the probability of ruin are\n also included. Furthermore, exact values of both the risk measures as well\n as the probability of ruin are available if the individual claims follow\n a hypo-exponential distribution (i.e., if it can be represented as a sum\n of independent exponentially distributed random variables with different\n rate parameters). For more details see Gatto and Baumgartner (2014)\n .","Published":"2016-12-31","License":"AGPL-3","snapshot_date":"2017-06-23"} {"Package":"SDraw","Version":"2.1.3","Title":"Spatially Balanced Sample Draws for Spatial Objects","Description":"Routines for drawing samples, focusing on spatially balanced algorithms. 
Draws Halton Lattice (HAL), Balanced Acceptance Samples (BAS), Generalized Random Tessellation Stratified (GRTS), Simple Systematic Samples (SSS) and Simple Random Samples (SRS) from point, line, and polygon resources. Frames are 'SpatialPoints', 'SpatialLines', or 'SpatialPolygons' objects from package 'sp'. ","Published":"2016-06-11","License":"GNU General Public License","snapshot_date":"2017-06-23"} {"Package":"sdtoolkit","Version":"2.33-1","Title":"Scenario Discovery Tools to Support Robust Decision Making","Description":"Implements algorithms to help with scenario discovery - currently only a modified version of the Patient Rule Induction Method. ","Published":"2014-02-16","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"sdwd","Version":"1.0.2","Title":"Sparse Distance Weighted Discrimination","Description":"Formulates a sparse distance weighted discrimination (SDWD) for high-dimensional classification and implements a very fast algorithm for computing its solution path with the L1, the elastic-net, and the adaptive elastic-net penalties.","Published":"2015-08-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"seacarb","Version":"3.2","Title":"Seawater Carbonate Chemistry","Description":"Calculates parameters of the seawater carbonate system and assists the design of ocean acidification perturbation experiments.","Published":"2017-06-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sealasso","Version":"0.1-2","Title":"Standard Error Adjusted Adaptive Lasso","Description":"Standard error adjusted adaptive lasso (SEA-lasso) is a version of the adaptive lasso, which incorporates OLS standard error into the L1 penalty weight. This method is intended for variable selection under linear regression settings (n > p). This new weight assignment strategy is especially useful when the collinearity of the design matrix is a concern. 
","Published":"2013-12-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"searchable","Version":"0.3.3.1","Title":"Tools for Custom Searches / Subsets / Slices of Named R Objects","Description":"Provides functionality for searching / subsetting and slicing named\n objects using 'stringr/i'-style modifiers by case (in)sensitivity,\n regular expressions or fixed expressions; searches use the standard '['\n operator and allow specification of default search behavior for either the\n search target (named object) and/or the search pattern.","Published":"2015-04-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"searchConsoleR","Version":"0.2.1","Title":"Google Search Console R Client","Description":"Provides an interface with the Google Search Console,\n formerly called Google Webmaster Tools.","Published":"2016-06-24","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"SearchTrees","Version":"0.5.2","Title":"Spatial Search Trees","Description":"This package provides an implementation of the QuadTree\n data structure. It uses this to implement fast k-Nearest\n Neighbor and rectangular range lookups in 2 dimensions. The\n primary target is high performance interactive graphics.","Published":"2012-08-24","License":"LGPL","snapshot_date":"2017-06-23"} {"Package":"seas","Version":"0.4-3","Title":"Seasonal analysis and graphics, especially for climatology","Description":"Capable of deriving seasonal statistics, such as \"normals\", and\n analysis of seasonal data, such as departures. This package also has\n graphics capabilities for representing seasonal data, including boxplots for\n seasonal parameters, and bars for summed normals. There are many specific\n functions related to climatology, including precipitation normals,\n temperature normals, cumulative precipitation departures and precipitation\n interarrivals. 
However, this package is designed to represent any\n time-varying parameter with a discernible seasonal signal, such as found\n in hydrology and ecology.","Published":"2014-02-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SEAsic","Version":"0.1","Title":"Score Equity Assessment- summary index computation","Description":"This package conducts Score Equity Assessment (SEA; Dorans, 2004) by calculating and plotting multiple SEA indices as introduced by a variety of authors and summarized by Huggins and Penfield (2012).","Published":"2014-11-19","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"season","Version":"0.3-5","Title":"Seasonal analysis of health data","Description":"Routines for the seasonal analysis of health data,\n including regression models, time-stratified case-crossover,\n plotting functions and residual checks. Thanks to Yuming Guo\n for checking the case-crossover code.","Published":"2014-12-16","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"seasonal","Version":"1.6.1","Title":"R Interface to X-13-ARIMA-SEATS","Description":"Easy-to-use interface to X-13-ARIMA-SEATS, the seasonal adjustment\n software by the US Census Bureau. It offers full access to almost all\n options and outputs of X-13, including X-11 and SEATS, automatic ARIMA model\n search, outlier detection and support for user defined holiday variables,\n such as Chinese New Year or Indian Diwali. A graphical user interface can be\n used through the 'seasonalview' package. Uses the X-13-binaries from the\n 'x13binary' package.","Published":"2017-05-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"seasonalview","Version":"0.3","Title":"Graphical User Interface for Seasonal Adjustment","Description":"A graphical user interface to the 'seasonal' package and\n 'X-13ARIMA-SEATS', the U.S. Census Bureau's seasonal adjustment software. 
\n Unifies the code base of and the GUI in the\n 'seasonal' package.","Published":"2017-05-04","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"seawaveQ","Version":"1.0.0","Title":"U.S. Geological Survey seawaveQ model","Description":"A model and utilities for analyzing trends in chemical concentrations in streams with a seasonal wave (seawave) and adjustment for streamflow (Q) and other ancillary variables","Published":"2013-12-30","License":"Unlimited","snapshot_date":"2017-06-23"} {"Package":"SEchart","Version":"0.1","Title":"SEchart","Description":"Displays state-event charts, for graphical presentation of longitudinal data.","Published":"2013-08-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SecKW","Version":"0.2","Title":"The SecKW Distribution","Description":"Density, distribution function, quantile function, random\n generation and survival function for the Secant Kumaraswamy Weibull Distribution\n as defined by SOUZA, L. New Trigonometric Class of Probabilistic Distributions.\n 219 p. Thesis (Doctorate in Biometry and Applied Statistics) - Department of\n Statistics and Information, Federal Rural University of Pernambuco, Recife,\n Pernambuco, 2015 (available at ) and BRITO, C. C. R. Method Distributions generator and\n Probability Distributions Classes. 241 p. 
Thesis (Doctorate in Biometry and\n Applied Statistics) - Department of Statistics and Information, Federal Rural\n University of Pernambuco, Recife, Pernambuco, 2014 (available upon request).","Published":"2016-07-18","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"SECP","Version":"0.1-4","Title":"Statistical Estimation of Cluster Parameters (SECP)","Description":"SECP package provides functionality for estimating\n parameters of site clusters on 2D & 3D square lattice with\n various lattice sizes, relative fractions of accessible sites\n (occupation probability), iso- & anisotropy, von Neumann &\n Moore (1,d)-neighborhoods","Published":"2012-07-09","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"secr","Version":"3.0.1","Title":"Spatially Explicit Capture-Recapture","Description":"Functions to estimate the density and size of a spatially distributed animal population sampled with an array of passive detectors, such as traps, or by searching polygons or transects. Models incorporating distance-dependent detection are fitted by maximizing the likelihood. Tools are included for data manipulation and model selection.","Published":"2017-05-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"secrdesign","Version":"2.4.0","Title":"Sampling Design for Spatially Explicit Capture-Recapture","Description":"Tools are provided for designing spatially explicit capture-recapture studies of animal populations. 
This is primarily a simulation manager for package 'secr'.","Published":"2016-04-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"secret","Version":"1.0.0","Title":"Share Sensitive Information in R Packages","Description":"Allow sharing sensitive information, for example passwords,\n 'API' keys, etc., in R packages, using public key cryptography.","Published":"2017-06-17","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"secrlinear","Version":"1.1.0","Title":"Spatially Explicit Capture-Recapture for Linear Habitats","Description":"Tools for spatially explicit capture-recapture analysis of animal populations in linear habitats, extending package 'secr'.","Published":"2017-05-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"secure","Version":"0.5","Title":"Sequential Co-Sparse Factor Regression","Description":"Sequential factor extraction via co-sparse unit-rank estimation (SeCURE).","Published":"2017-04-07","License":"GPL (>= 3.0)","snapshot_date":"2017-06-23"} {"Package":"seeclickfixr","Version":"1.1.0","Title":"Access Data from the SeeClickFix Web API","Description":"Provides a wrapper to access data from the SeeClickFix\n web API for R. SeeClickFix is a central platform employed by many cities\n that allows citizens to request their city's services. This package\n creates several functions to work with all the built-in calls to the\n SeeClickFix API. 
Allows users to download service request data from\n numerous locations in an easy-to-use dataframe format manipulable with\n standard R functions.","Published":"2016-12-07","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"seedy","Version":"1.3","Title":"Simulation of Evolutionary and Epidemiological Dynamics","Description":"Suite of functions for the simulation, visualisation and analysis of bacterial evolution within- and between-host.","Published":"2015-11-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"seeg","Version":"1.0","Title":"Statistics for Environmental Sciences, Engineering, and\nGeography","Description":"Supports the text book \"Data Analysis and Statistics for\n Geography, Environmental Science, and Engineering\".","Published":"2013-01-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SEER2R","Version":"1.0","Title":"reading and writing SEER*STAT data files","Description":"Read and write SEER*STAT data files.","Published":"2012-01-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SEERaBomb","Version":"2017.1","Title":"SEER and Atomic Bomb Survivor Data Analysis Tools","Description":"Creates SEER (Surveillance, Epidemiology and End Results) and A-bomb data binaries \n from ASCII sources and provides tools for estimating SEER second cancer risks. ","Published":"2017-04-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SeerMapper","Version":"1.2.0","Title":"A Quick Way to Map U.S. Rates and Data of U. S. States,\nCounties, Census Tracts, or Seer Registries using 2000 and 2010\nU. S. Census Boundaries","Description":"Provides an easy way to map seer registry area rate data on a U. S. map. \n The U. S. data may be mapped at the state, U. S. NCI Seer Registry, state/county \n or census tract level. The function can categorize the data into \"n\" quantiles, where \"n\" is 3 to 11, or\n the caller can specify a cut point list for the categories. 
\n The caller can also provide the data and the comparison operation to request\n hatching over any areas. The default operation and value are > 0.05 (p-values).\n The location id provided in the data determines the geographic level of the mapping.\n If states, state/counties or census tracts are being mapped, the location ids \n used must be the U.S. FIPS codes for states (2 digits), state/counties (5 digits)\n or state/county/census tracts (11 digits). If the location id references U.S. Seer Registry \n areas, the identifier used to link the data to the geographical \n areas is the Seer Registry name or abbreviation.\n Additional parameters are used to provide control over the drawing of the boundaries\n at the data's boundary level and higher levels.\n The package uses modified boundary data from the 2000 and 2010 U. S. Census to reduce the \n storage requirements and improve drawing speed. \n The 'SeerMapper' package contains the U. S. Census 2000 and 2010 boundary data\n for the regional, state, Seer Registry, and county levels. Six supplement packages \n contain the census tract boundary data (see the manual for more details).","Published":"2017-05-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SeerMapper2010East","Version":"1.2.0","Title":"Supplemental U. S. 2010 Census Tract Boundaries for 23 Eastern\nStates without Registries for 'SeerMapper'","Description":"Provides a supplemental 2010 census tract boundary package for 23 states\n without Seer Registries that are east of the Mississippi river \n for use with the 'SeerMapper' package. \n The data contained in this \n package is derived from U. S. Census data and is in the public domain. ","Published":"2017-04-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SeerMapper2010Regs","Version":"1.2.0","Title":"Supplemental U. S. 
2010 Census Tract Boundaries for 15 States\nwith Seer Registries for 'SeerMapper'","Description":"Provides supplemental 2010 census tract boundaries of the 15 states \n containing Seer Registries for use with the 'SeerMapper' package.\n The data contained in this \n package is derived from U. S. 2010 Census data and is in public domain. ","Published":"2017-04-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SeerMapper2010West","Version":"1.2.0","Title":"Supplemental U.S. 2010 Census Tract Boundaries for 14 Western\nStates without Seer Registries for 'SeerMapper'","Description":"Provides supplemental 2010 census tract boundaries for the 14 states\n without Seer Registries that are west of the Mississippi river \n for use with the 'SeerMapper' package.\n The data contained in this \n package is derived from U. S. 2010 Census data and is in public domain.","Published":"2017-04-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SeerMapperEast","Version":"1.2.0","Title":"Supplemental U. S. 2000 Census Tract Boundaries for 23 Eastern\nStates without Seer Registries","Description":"Provides supplemental 2000 census tract boundaries for the 23 states\n without Seer Registries that are east of the Mississippi river \n for use with the 'SeerMapper' package. \n The data contained in this \n package is derived from U. S. Census data and is in the public domain. ","Published":"2017-04-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SeerMapperRegs","Version":"1.2.0","Title":"Supplemental U. S. 2000 Census Tract Boundary for 15 States with\nSeer Registries for 'SeerMapper'","Description":"Provides supplemental 2000 census tract boundaries for the 15 states\n containing Seer Registries for use with the 'SeerMapper' package. \n The data contained in this \n package is derived from U. S. Census data and is in the public domain. 
","Published":"2017-04-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SeerMapperWest","Version":"1.2.0","Title":"Supplemental U.S. 2000 Census Tract Boundaries for 14 Western\nStates without Seer Registries for 'SeerMapper'","Description":"Provides supplemental 2000 census tract boundaries for the 14 states\n without Seer Registries that are west of the Mississippi river\n for use with the 'SeerMapper' package. \n The data contained in this \n package is derived from U. S. Census data and is in the public domain. ","Published":"2017-04-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"seewave","Version":"2.0.5","Title":"Sound Analysis and Synthesis","Description":"Functions for analysing, manipulating, displaying, editing and synthesizing time waves (particularly sound). This package processes time analysis (oscillograms and envelopes), spectral content, resonance quality factor, entropy, cross correlation and autocorrelation, zero-crossing, dominant frequency, analytic signal, frequency coherence, 2D and 3D spectrograms and many other analyses.","Published":"2016-10-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"seg","Version":"0.5-1","Title":"A set of tools for measuring spatial segregation","Description":"A package that provides functions for measuring spatial \n segregation. 
The methods implemented in this package include \n White's P index (1983), Morrill's D(adj) (1991), Wong's D(w)\n and D(s) (1993), and Reardon and O'Sullivan's set of spatial \n segregation measures (2004).","Published":"2014-05-28","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"SegCorr","Version":"1.1","Title":"Detecting Correlated Genomic Regions","Description":"Performs correlation matrix segmentation and applies a test\n procedure to detect highly correlated regions in gene expression.","Published":"2015-11-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"segmag","Version":"1.2.4","Title":"Determine Event Boundaries in Event Segmentation Experiments","Description":"Contains functions that help to determine event\n boundaries in event segmentation experiments by bootstrapping a critical\n segmentation magnitude under the null hypothesis that all key presses were\n randomly distributed across the experiment. Segmentation magnitude is\n defined as the sum of Gaussians centered at the times of the segmentation\n key presses performed by the participants. Within a participant, the maximum\n of the overlaid Gaussians is used to prevent an excessive influence of a\n single participant on the overall outcome (e.g. if a participant is pressing\n the key multiple times in succession). Further functions are included, such\n as plotting the results.","Published":"2016-08-11","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"segmented","Version":"0.5-2.1","Title":"Regression Models with Break-Points / Change-Points Estimation","Description":"Given a regression model, segmented `updates' the model by adding one or more segmented (i.e., piece-wise linear) relationships. 
Several variables with multiple breakpoints are allowed.","Published":"2017-06-14","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"Segmentor3IsBack","Version":"2.0","Title":"A Fast Segmentation Algorithm","Description":"Performs a fast exact segmentation on data and allows for use of various cost functions.","Published":"2016-06-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"seismic","Version":"1.0","Title":"Predict Information Cascade by Self-Exciting Point Process","Description":"An implementation of self-exciting point process model for information cascades, which occurs when many people engage in the same acts after observing the actions of others (e.g. post resharings on Facebook or Twitter). It provides functions to estimate the infectiousness of an information cascade and predict its popularity given the observed history. See http://snap.stanford.edu/seismic/ for more information and datasets.","Published":"2015-06-05","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"seismicRoll","Version":"1.1.2","Title":"Fast Rolling Functions for Seismology using Rcpp","Description":"Fast versions of seismic analysis functions that 'roll' over a\n vector of values. See the RcppRoll package for alternative\n versions of basic statistical functions such as rolling mean,\n median, etc.","Published":"2016-10-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"sejmRP","Version":"1.3.4","Title":"An Information About Deputies and Votings in Polish Diet from\nSeventh to Eighth Term of Office","Description":"Set of functions that access information about deputies and votings\n in Polish diet from webpage . 
The package was developed\n as a result of an internship in MI2 Group - , Faculty\n of Mathematics and Information Science, Warsaw University of Technology.","Published":"2017-03-28","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Sejong","Version":"0.01","Title":"KoNLP static dictionaries and Sejong project resources","Description":"Sejong(http://www.sejong.or.kr/) corpus and\n Hannanum(http://semanticweb.kaist.ac.kr/home/index.php/HanNanum)\n dictionaries for KoNLP","Published":"2015-06-19","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"SEL","Version":"1.0-2","Title":"Semiparametric elicitation","Description":"This package implements a novel method for fitting a\n bounded probability distribution to quantiles (for example\n stated by an expert), see Bornkamp and Ickstadt (2009) for\n details. For this purpose B-splines are used, and the density\n is obtained by penalized least squares based on a Brier entropy\n penalty. The package provides methods for fitting the\n distribution as well as methods for evaluating the underlying\n density and cdf. In addition methods for plotting the\n distribution, drawing random numbers and calculating quantiles\n of the obtained distribution are provided.","Published":"2010-05-23","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"Select","Version":"1.1","Title":"Determines Species Probabilities Based on Functional Traits","Description":"For determining species probabilities that satisfy a given\n functional trait profile. Restoring resilient ecosystems requires a\n flexible framework for selecting assemblages that are based on the functional\n traits of species. However, current trait-based models have been biased toward\n algorithms that can only select species by optimising specific trait values,\n and could not elegantly accommodate the common desire among restoration ecologists\n to produce functionally diverse assemblages. 
We have solved this problem by\n applying a non-linear optimisation algorithm that optimises Rao’s Q, a closed-form\n functional diversity index that incorporates species abundances, subject to other\n linear constraints. This framework generalises previous models that could only\n optimise the entropy of the community, and can optimise both functional diversity\n and entropy simultaneously.","Published":"2017-02-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"selectapref","Version":"0.1.0","Title":"Analysis of Field and Laboratory Foraging","Description":"Provides indices such as Manly's alpha, foraging ratio, and Ivlev's selectivity to allow for analysis of dietary selectivity and preference. Can accommodate multiple experimental designs such as constant prey number or prey depletion.","Published":"2017-04-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"selection","Version":"1.0","Title":"Correcting Biased Estimates Under Selection","Description":"A collection of functions for correcting biased estimates under\n selection (range restriction).","Published":"2016-03-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"selectiongain","Version":"2.0.591","Title":"A Tool for Calculation and Optimization of the Expected Gain\nfrom Multi-Stage Selection","Description":"Multi-stage selection is practiced in numerous fields of life and social sciences and particularly in breeding. A special characteristic of multi-stage selection is that candidates are evaluated in successive stages with increasing intensity and effort, and only a fraction of the superior candidates is selected and promoted to the next stage. For the optimum design of such selection programs, the selection gain plays a crucial role. It can be calculated by integration of a truncated multivariate normal (MVN) distribution. 
While mathematical formulas for calculating the selection gain and the variance among selected candidates were developed a long time ago, solutions for numerical calculation were not available. This package can also be used for optimizing multi-stage selection programs for a given total budget and different costs of evaluating the candidates in each stage.","Published":"2016-10-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"selectiveInference","Version":"1.2.2","Title":"Tools for Post-Selection Inference","Description":"New tools for post-selection inference, for use\n with forward stepwise regression, least angle regression, the\n lasso, and the many means problem. The lasso function implements Gaussian, logistic and Cox survival models.","Published":"2017-01-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"selectMeta","Version":"1.0.8","Title":"Estimation of Weight Functions in Meta Analysis","Description":"Publication bias, the fact that studies identified for inclusion in a meta analysis do not represent all studies on the topic of interest, is commonly recognized as a threat to the validity of the results of a meta analysis. One way to explicitly model publication bias is via selection models or weighted probability distributions. In this package we provide implementations of several parametric and nonparametric weight functions. The novelty in Rufibach (2011) is the proposal of a non-increasing variant of the nonparametric weight function of Dear & Begg (1992). The new approach potentially offers more insight into the selection process than other methods, but is more flexible than parametric approaches. To maximize the log-likelihood function proposed by Dear & Begg (1992) under a monotonicity constraint, we use a differential evolution algorithm proposed by Ardia et al (2010a, b) and implemented in Mullen et al (2009). 
In addition, we offer a method to compute a confidence interval for the overall effect size theta, adjusted for selection bias, as well as a function that computes the simulation-based p-value to assess the null hypothesis of no selection as described in Rufibach (2011, Section 6).","Published":"2015-07-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"selectr","Version":"0.3-1","Title":"Translate CSS Selectors to XPath Expressions","Description":"Translates a CSS3 selector into an equivalent XPath\n expression. This allows us to use CSS selectors when working with\n the XML package as it can only evaluate XPath expressions. Also\n provided are convenience functions useful for using CSS selectors on\n XML nodes. This package is a port of the Python package\n 'cssselect'.","Published":"2016-12-19","License":"BSD_3_clause + file LICENCE","snapshot_date":"2017-06-23"} {"Package":"selectspm","Version":"0.2","Title":"Select Point Pattern Models Based on Minimum Contrast, AIC and\nGoodness of Fit","Description":"Fits and selects point pattern models based on minimum contrast, AIC and goodness of fit.","Published":"2015-07-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SeleMix","Version":"1.0.1","Title":"Selective Editing via Mixture Models","Description":"Detection of outliers and influential errors using a latent variable model. ","Published":"2016-11-22","License":"EUPL","snapshot_date":"2017-06-23"} {"Package":"seleniumPipes","Version":"0.3.7","Title":"R Client Implementing the W3C WebDriver Specification","Description":"The W3C WebDriver specification defines a way for out-of-process\n programs to remotely instruct the behaviour of web browsers. It is detailed\n at . 
This package provides\n an R client implementing the W3C specification.","Published":"2016-10-01","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"selfea","Version":"1.0.1","Title":"Select Features Reliably with Cohen's Effect Sizes","Description":"Functions using Cohen's effect sizes (Cohen, Jacob. Statistical power analysis for the behavioral sciences. Academic press, 2013) are provided for reliable feature selection in biology data analysis. In addition to Cohen's effect sizes, p-values are calculated and adjusted from quasi-Poisson GLM, negative binomial GLM and Normal distribution ANOVA. Significant features (genes, RNAs or proteins) are selected by adjusted p-value and minimum Cohen's effect sizes, calculated to keep a certain level of statistical power of the analysis given the p-value threshold and sample size.","Published":"2015-06-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"selfingTree","Version":"0.2","Title":"Genotype Probabilities in Intermediate Generations of Inbreeding\nThrough Selfing","Description":"A probability tree allows one to compute probabilities of\n\t complex events, such as genotype probabilities in intermediate generations of inbreeding\n\t through recurrent self-fertilization (selfing). This package implements functionality to compute\n\t probability trees for two- and three-marker genotypes in the F2 to F7 selfing\n\t generations. The conditional probabilities are derived automatically\n\t and in symbolic form. 
The package also provides functionality to\n\t extract and evaluate the relevant probabilities.","Published":"2014-12-18","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"SelvarMix","Version":"1.2","Title":"Regularization for Variable Selection in Model-Based Clustering\nand Discriminant Analysis","Description":"Performs a regularization approach to variable selection in the\n model-based clustering and classification frameworks.\n First, the variables are arranged in order with a lasso-like procedure. \n Second, the method of Maugis, Celeux, and Martin-Magniette (2009, 2011)\n , \n is adapted to define the role of variables in the two frameworks. ","Published":"2016-11-07","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"sem","Version":"3.1-9","Title":"Structural Equation Models","Description":"Functions for fitting general linear structural\n equation models (with observed and latent variables) using the RAM approach, \n and for fitting structural equations in observed-variable models by two-stage least squares.","Published":"2017-04-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"semdiag","Version":"0.1.2","Title":"Structural equation modeling diagnostics","Description":"Outlier and leverage diagnostics for SEM.","Published":"2012-01-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"semds","Version":"0.9-2","Title":"Structural Equation Multidimensional Scaling","Description":"Fits a multidimensional scaling (MDS) model for three-way data. It integrates concepts from structural equation models (SEM) by assuming an underlying, latent dissimilarity matrix. The method uses an alternating estimation procedure in which the unknown symmetric dissimilarity matrix is estimated in an SEM framework while the objects are represented in a low-dimensional space. 
As a special case it can also handle asymmetric input dissimilarities.","Published":"2016-06-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"semGOF","Version":"0.2-0","Title":"Goodness-of-fit indexes for structural equation models","Description":"This is an add-on package which provides fourteen\n goodness-of-fit indexes for structural equation models using the\n 'sem' package.","Published":"2012-08-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"semiArtificial","Version":"2.2.5","Title":"Generator of Semi-Artificial Data","Description":"Contains methods to generate and evaluate semi-artificial data sets. \n Based on a given data set different methods learn data properties using machine learning algorithms and\n generate new data with the same properties.\n The package currently includes the following data generators:\n i) an RBF network based generator using rbfDDA() from package 'RSNNS',\n ii) a Random Forest based generator for both classification and regression problems,\n iii) a density forest based generator for unsupervised data.\n Data evaluation support tools include:\n a) single attribute based statistical evaluation: mean, median, standard deviation, skewness, kurtosis, medcouple, L/RMC, KS test, Hellinger distance\n b) evaluation based on clustering using Adjusted Rand Index (ARI) and FM\n c) evaluation based on classification performance with various learning models, e.g., random forests.","Published":"2017-03-31","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"SemiCompRisks","Version":"2.6","Title":"Hierarchical Models for Parametric and Semi-Parametric Analyses\nof Semi-Competing Risks Data","Description":"Parametric and semi-parametric analyses of semi-competing risks/univariate survival data. 
For semi-competing risks data, the package contains implementations of hierarchical models for independent data (Lee et al., 2015; ) and cluster-correlated data (Lee et al., 2016; ).","Published":"2016-12-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SEMID","Version":"0.2","Title":"Identifiability of Linear Structural Equation Models","Description":"Provides routines to check identifiability or non-identifiability\n of linear structural equation models as described in Drton, Foygel &\n Sullivant (Ann. Statist., 2011) and Foygel, Draisma & Drton (Ann. Statist.,\n 2012). The routines are based on the graphical representation of\n structural equation models by a path diagram/mixed graph.","Published":"2015-12-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SemiMarkov","Version":"1.4.3","Title":"Multi-State Semi-Markov Models","Description":"Functions for fitting multi-state semi-Markov models to longitudinal data. A parametric maximum likelihood estimation method adapted to deal with Exponential, Weibull and Exponentiated Weibull distributions is considered. Right-censoring can be taken into account and both constant and time-varying covariates can be included using a Cox proportional model.","Published":"2016-01-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SemiPar","Version":"1.0-4.1","Title":"Semiparametric Regression","Description":"Functions for semiparametric regression analysis, to\n complement the book: Ruppert, D., Wand, M.P. and Carroll, R.J.\n (2003). Semiparametric Regression. 
Cambridge University Press.","Published":"2014-09-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SemiParBIVProbit","Version":"3.8-2","Title":"Semiparametric Copula Regression Models","Description":"Routines for fitting various copula regression models, with several types of covariate effects, in the presence of associated error equations, endogeneity, non-random sample selection or partial observability.","Published":"2017-06-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SemiParSampleSel","Version":"1.5","Title":"Semi-Parametric Sample Selection Modelling with Continuous or\nDiscrete Response","Description":"Routine for fitting continuous or discrete response copula sample selection models with semi-parametric predictors, including linear and nonlinear effects. ","Published":"2017-05-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"semisupKernelPCA","Version":"0.1.5","Title":"Kernel PCA projection and semi-supervised variant","Description":"Functions to compute Gaussian and p-Gaussian kernels,\n include supervision in these kernels, and perform kernel PCA\n projections.","Published":"2013-03-11","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"SEMModComp","Version":"1.0","Title":"Model Comparisons for SEM","Description":"Conducts tests of differences in fit for mean and covariance\n structure models, as in structural equation modeling (SEM).","Published":"2009-05-05","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"semPlot","Version":"1.1","Title":"Path Diagrams and Visual Analysis of Various SEM Packages'\nOutput","Description":"Path diagrams and visual analysis of various SEM packages' output.","Published":"2017-03-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"semPLS","Version":"1.0-10","Title":"Structural Equation Modeling Using Partial Least Squares","Description":"Fits structural equation models using partial least\n squares (PLS). 
The PLS approach is referred to as\n a 'soft-modeling' technique, requiring no distributional\n assumptions on the observed data.","Published":"2013-01-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"semsfa","Version":"1.0","Title":"Semiparametric Estimation of Stochastic Frontier Models","Description":"Semiparametric estimation of stochastic frontier models following a two-step procedure: in the first step semiparametric or nonparametric regression techniques are used to relax parametric restrictions of the functional form representing technology, and in the second step variance parameters are obtained by pseudolikelihood estimators or by the method of moments.","Published":"2015-02-18","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"semTools","Version":"0.4-14","Title":"Useful Tools for Structural Equation Modeling","Description":"Provides useful tools for structural equation modeling packages. ","Published":"2016-10-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"semtree","Version":"0.9.10","Title":"Recursive Partitioning for Structural Equation Models","Description":"SEM Trees and SEM Forests -- an extension of model-based decision\n trees and forests to Structural Equation Models (SEM). SEM trees hierarchically\n split empirical data into homogeneous groups sharing similar data patterns\n with respect to a SEM by recursively selecting optimal predictors of these\n differences. SEM forests are an extension of SEM trees. They are ensembles of \n SEM trees each built on a random sample of the original data. By aggregating over \n a forest, we obtain measures of variable importance that are more robust than \n measures from single trees.","Published":"2017-04-05","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"semver","Version":"0.2.0","Title":"'Semantic Versioning V2.0.0' Parser","Description":"Tools and functions for parsing, rendering and operating on\n semantic version strings. 
Semantic versioning is a simple set of rules\n and requirements that dictate how version numbers are assigned and\n incremented as outlined at .","Published":"2017-01-06","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"sendmailR","Version":"1.2-1","Title":"Send Email Using R","Description":"Contains a simple SMTP client which provides a\n portable solution for sending email, including attachments, from\n within R.","Published":"2014-09-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"sendplot","Version":"4.0.0","Title":"Tool for sending interactive plots with tool-tip content","Description":"A tool for visualizing data","Published":"2013-04-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sensiPhy","Version":"0.6.0","Title":"Sensitivity Analysis for Comparative Methods","Description":"An implementation of sensitivity analysis in phylogenetic regression models,\n for both linear and logistic phylogenetic regressions. The package is an umbrella\n of statistical and graphical methods that estimate and report different types of\n uncertainty in PGLS models:\n (i) Species Sampling uncertainty (sample size; influential species and clades).\n (ii) Phylogenetic uncertainty (different topologies and/or branch lengths).\n (iii) Data uncertainty (intraspecific variation and measurement error).","Published":"2017-01-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"sensitivity","Version":"1.14.0","Title":"Global Sensitivity Analysis of Model Outputs","Description":"A collection of functions for factor screening, global sensitivity analysis and reliability sensitivity analysis. 
Most of the functions have to be applied to models with scalar output, but several functions support multi-dimensional outputs.","Published":"2017-02-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"sensitivity2x2xk","Version":"1.01","Title":"Sensitivity Analysis for 2x2xk Tables in Observational Studies","Description":"Performs exact or approximate adaptive or nonadaptive Cochran-Mantel-Haenszel-Birch tests and sensitivity analyses for one or two 2x2xk tables in observational studies.","Published":"2015-12-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SensitivityCaseControl","Version":"2.1","Title":"Sensitivity Analysis for Case-Control Studies","Description":"This package performs sensitivity analysis for case-control studies in which some cases may meet a more narrow definition of being a case compared to other cases which only meet a broad definition. The sensitivity analyses are described in Small, Cheng, Halloran and Rosenbaum (2013, \"Case Definition and Sensitivity Analysis\", Journal of the American Statistical Association, 1457-1468). The functions sens.analysis.mh and sens.analysis.aberrant.rank provide sensitivity analyses based on the Mantel-Haenszel test statistic and aberrant rank test statistic as described in Rosenbaum (1991, \"Sensitivity Analysis for Matched Case Control Studies\", Biometrics); see also Section 1 of Small et al. The function adaptive.case.test provides adaptive inferences as described in Section 5 of Small et al. The function adaptive.noether.brown provides a sensitivity analysis for a matched cohort study based on an adaptive test. The other functions in the package are internal functions. ","Published":"2014-05-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sensitivityfull","Version":"1.5.6","Title":"Sensitivity Analysis for Full Matching in Observational Studies","Description":"Sensitivity to unmeasured biases in an observational study that is a full match. 
Function senfm() performs tests and function senfmCI() creates confidence intervals. The method uses Huber's M-statistics, including least squares, and is described in Rosenbaum (2007, Biometrics) .","Published":"2017-04-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"sensitivitymult","Version":"1.0.1","Title":"Sensitivity Analysis for Observational Studies with Multiple\nOutcomes","Description":"Sensitivity analysis for multiple outcomes in observational studies. For instance, all linear combinations of several outcomes may be explored using Scheffe projections in the comparison() function; see Rosenbaum (2016, Annals of Applied Statistics) . Alternatively, attention may focus on a few principal components in the principal() function. The package includes parallel methods for individual outcomes, including tests in the senm() function and confidence intervals in the senmCI() function.","Published":"2017-05-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"sensitivitymv","Version":"1.3","Title":"Sensitivity Analysis in Observational Studies","Description":"Sensitivity analysis in observational studies, including evidence factors and amplification, using the permutation distribution of Huber-Maritz M-statistics, including the permutational t-test.","Published":"2015-08-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"sensitivitymw","Version":"1.1","Title":"Sensitivity analysis using weighted M-statistics","Description":"Sensitivity analysis in matched observational studies with multiple controls using weighted M-statistics to increase design sensitivity.","Published":"2014-07-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"sensitivityPStrat","Version":"1.0-6","Title":"Principal Stratification Sensitivity Analysis Functions","Description":"This package provides functions to perform principal stratification sensitivity analyses on 
datasets.","Published":"2014-12-04","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"SensMixed","Version":"2.0-10","Title":"Analysis of Sensory and Consumer Data in a Mixed Model Framework","Description":"Functions that facilitate analysis of \n Sensory as well as Consumer data within a mixed effects model \n framework. The so-called mixed assessor models, \n that correct for the scaling effect, are implemented.\n The generation of the d-tilde plots forms part of the package.\n The shiny application provides a GUI for the functionalities.","Published":"2016-07-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SensoMineR","Version":"1.20","Title":"Sensory data analysis with R","Description":"An R package for analysing sensory data","Published":"2014-04-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sensors4plumes","Version":"0.9","Title":"Test and Optimise Sampling Designs Based on Plume Simulations","Description":"Test sampling designs by several flexible cost functions, usually based on the simulations, and optimise sampling designs using different optimisation algorithms; load plume simulations (on lattice or points) even if they do not fit into memory.","Published":"2017-04-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"sensory","Version":"1.1","Title":"Simultaneous Model-Based Clustering and Imputation via a\nProgressive Expectation-Maximization Algorithm","Description":"Contains the function CUUimpute() which performs model-based clustering and imputation simultaneously.","Published":"2016-02-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sensR","Version":"1.4-7","Title":"Thurstonian Models for Sensory Discrimination","Description":"Provides methods for sensory discrimination methods;\n duotrio, tetrad, triangle, 2-AFC, 3-AFC, A-not A, same-different,\n 2-AC and degree-of-difference.\n This enables the calculation of d-primes, standard errors of\n d-primes, sample size and 
power computations, and\n comparisons of different d-primes. Methods for profile likelihood\n confidence intervals and plotting are included.","Published":"2016-04-05","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"SenSrivastava","Version":"2015.6.25","Title":"Datasets from Sen & Srivastava","Description":"Collection of datasets from Sen & Srivastava: \"Regression\n Analysis, Theory, Methods and Applications\", Springer. Sources\n for individual data files are more fully documented in the\n book.","Published":"2015-06-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SensusR","Version":"2.0.0","Title":"Sensus Analytics","Description":"Provides access and analytic functions for Sensus data.","Published":"2016-06-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"SentimentAnalysis","Version":"1.2-0","Title":"Dictionary-Based Sentiment Analysis","Description":"Performs a sentiment analysis of textual contents in R. This implementation\n utilizes various existing dictionaries, such as Harvard IV, or finance-specific \n dictionaries. Furthermore, it can also create customized dictionaries. The latter \n uses LASSO regularization as a statistical approach to select relevant terms based on \n an exogenous response variable. 
","Published":"2017-06-02","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"sentimentr","Version":"1.0.0","Title":"Calculate Text Polarity Sentiment","Description":"Calculate text polarity sentiment at the sentence level\n and optionally aggregate by rows or grouping variable(s).","Published":"2017-03-11","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"sEparaTe","Version":"0.2.1","Title":"Maximum Likelihood Estimation and Likelihood Ratio Test\nFunctions for Separable Variance-Covariance Structures","Description":"It combines maximum likelihood estimation of the parameters\n of matrix and 3rd-order tensor normal distributions with unstructured\n factor variance-covariance matrices (two procedures), and unbiased\n modified likelihood ratio testing of simple and double separability\n for variance-covariance structures (two procedures).","Published":"2016-07-27","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"separationplot","Version":"1.1","Title":"Separation Plots","Description":"Functions to generate separation plots for evaluation of\n model fit.","Published":"2015-03-15","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"SeqAlloc","Version":"1.0","Title":"Sequential Allocation for Prospective Experiments","Description":"Potential randomization schemes are prospectively evaluated when\n units are assigned to treatment arms upon entry into the experiment. The schemes\n are evaluated for balance on covariates and on predictability (i.e., how well\n could a site worker guess the treatment of the next unit enrolled).","Published":"2016-08-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"seqCBS","Version":"1.2","Title":"CN Profiling using Sequencing and CBS","Description":"This is a method for DNA Copy Number Profiling using\n Next-Generation Sequencing. It has a new model and test\n statistics based on non-homogeneous Poisson Processes with\n change point models. 
It uses an adaptation of Circular Binary\n Segmentation. Also included are methods for point-wise Bayesian\n confidence intervals and a model selection method for the\n change-point model. Reads from a case and a control sample (normal\n and tumor) are required.","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"seqDesign","Version":"1.1","Title":"Simulation and Group Sequential Monitoring of Randomized\nTwo-Stage Treatment Efficacy Trials with Time-to-Event\nEndpoints","Description":"A modification of the preventive vaccine efficacy trial design of Gilbert, Grove et al. (2011, Statistical Communications in Infectious Diseases) is implemented, with application generally to individual-randomized clinical trials with multiple active treatment groups and a shared control group, and a study endpoint that is a time-to-event endpoint subject to right-censoring. The design accounts for the issues that the efficacy of the treatment/vaccine groups may take time to accrue while the multiple treatment administrations/vaccinations are given; there is interest in assessing the durability of treatment efficacy over time; and group sequential monitoring of each treatment group for potential harm, non-efficacy/efficacy futility, and high efficacy is warranted. The design divides the trial into two stages of time periods, where each treatment is first evaluated for efficacy in the first stage of follow-up, and, if and only if it shows significant treatment efficacy in stage one, it is evaluated for longer-term durability of efficacy in stage two. 
The package produces plots and tables describing operating characteristics of a specified design, including an unconditional power for intention-to-treat and per-protocol/as-treated analyses; trial duration; probabilities of the different possible trial monitoring outcomes (e.g., stopping early for non-efficacy); unconditional power for comparing treatment efficacies; and distributions of numbers of endpoint events occurring after the treatments/vaccinations are given, useful as input parameters for the design of studies of the association of biomarkers with a clinical outcome (surrogate endpoint problem). The code can be used for a single active treatment versus control design and for a single-stage design.","Published":"2015-04-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SeqFeatR","Version":"0.2.4","Title":"A Tool to Associate FASTA Sequences and Features","Description":"Provides user-friendly methods for the identification of sequence patterns that are statistically significantly associated with a property of the sequence. For instance, SeqFeatR allows the identification of viral immune escape mutations for hosts of given HLA types. The underlying statistical method is Fisher's exact test, with appropriate corrections for multiple testing, or a Bayesian approach. Patterns may be point mutations or n-tuples of mutations. SeqFeatR offers several ways to visualize the results of the statistical analyses.","Published":"2016-10-17","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"SeqGrapheR","Version":"0.4.8.5","Title":"Simple GUI for Graph Based Visualization of Cluster of DNA\nSequence Reads","Description":"The SeqGrapheR package provides an interactive GUI for visualization of DNA sequence clusters. Details and principles of usage are described in the user manual and in (2010 BMC Bioinformatics 11:378). 
For full functionality, an installed NCBI BLAST is required.","Published":"2015-07-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"seqHMM","Version":"1.0.7","Title":"Hidden Markov Models for Life Sequences and Other Multivariate,\nMultichannel Categorical Time Series","Description":"Designed for fitting hidden (latent) Markov models and mixture\n hidden Markov models for social sequence data and other categorical time series.\n Some more restricted versions of these types of models are also available: Markov\n models, mixture Markov models, and latent class models. The package supports\n models for one or multiple subjects with one or multiple parallel sequences\n (channels). External covariates can be added to explain cluster membership in\n mixture models. The package provides functions for evaluating and comparing\n models, as well as functions for easy plotting of multichannel sequence data and\n hidden Markov models. Models are estimated using maximum likelihood via the EM\n algorithm and/or direct numerical maximization with analytical gradients. All\n main algorithms are written in C++ with support for parallel computation.","Published":"2017-04-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"seqinr","Version":"3.3-6","Title":"Biological Sequences Retrieval and Analysis","Description":"Exploratory data analysis and data visualization\n for biological sequence (DNA and protein) data. 
Also includes\n utilities for sequence data management under the ACNUC system.","Published":"2017-04-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SeqMADE","Version":"1.0","Title":"Network Module-Based Model in the Differential Expression\nAnalysis for RNA-Seq","Description":"A network module-based generalized linear model for differential expression analysis with the count-based sequence data from RNA-Seq.","Published":"2016-06-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"seqMeta","Version":"1.6.7","Title":"Meta-Analysis of Region-Based Tests of Rare DNA Variants","Description":"Computes necessary information to meta analyze region-based\n tests for rare genetic variants (e.g. SKAT, T1) in individual studies, and\n performs meta analysis.","Published":"2017-02-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"seqminer","Version":"6.0","Title":"Efficiently Read Sequence Data (VCF Format, BCF Format and METAL\nFormat) into R","Description":"Integrate sequencing data (Variant call format, e.g. VCF or BCF) or meta-analysis results in R. This package can help you (1) read VCF/BCF files by chromosomal ranges (e.g. 1:100-200); (2) read RareMETAL summary statistics files; (3) read tables from tabix-indexed files; (4) annotate VCF/BCF files; (5) create customized workflows based on Makefile.","Published":"2017-05-05","License":"GPL | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"seqmon","Version":"2.1","Title":"Group Sequential Design Class for Clinical Trials","Description":"S4 class object for creating and managing group sequential designs. It calculates the efficacy and futility boundaries at each look. 
It allows modifying the design and tracking the design update history.","Published":"2016-10-12","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"seqPERM","Version":"1.0","Title":"Generates a permutation matrix based upon a sequence","Description":"User inputs a range of values r1 and r2, as well as a\n number of columns v, in the function sq.pe(r1,r2,v). The\n return value is a permutation matrix.","Published":"2013-01-28","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"seqRFLP","Version":"1.0.1","Title":"Simulation and visualization of restriction enzyme cutting\npattern from DNA sequences","Description":"This package includes functions for handling DNA\n sequences, especially simulated RFLP and TRFLP patterns based on\n a selected restriction enzyme and DNA sequences.","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"seqtest","Version":"0.1-0","Title":"Sequential Triangular Test","Description":"Sequential triangular test for the arithmetic mean in one and two\n samples, proportions in one and two samples, and Pearson's correlation\n coefficient.","Published":"2016-03-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"SequenceAnalysis","Version":"1.3","Title":"Sequence Analysis","Description":"Provides: 1) Given a UniProtKB ID, the protein sequence is returned from the UniProt database. 2) Given a UniProtKB ID, the nucleotide sequence is returned from the EBI database. 3) Amino acid composition is calculated by four different methods: a) Twenty-two independent categories are considered, with one amino acid for each category. b) Five categories (Nonpolar Aliphatic, Nonpolar Aromatic, Polar Uncharged, Polar Positively Charged, Polar Negatively Charged) are considered according to the standard chemical structures of amino acids. 
c) Six categories (Nonpolar Aliphatic, Nonpolar Aromatic, Polar Uncharged, Polar Positively Charged, Polar Negatively Charged, Special cases) are considered, in which Cysteine, Selenocysteine, Glycine and Proline are placed in the Special cases group. d) Eight categories are clustered via the k-means algorithm on physicochemical indices of amino acids. 4) GC content: percentage of nucleotides G and C in the sequence. 5) Codon usage: frequency of occurrence of synonymous codons. 6) Stacking energy: the NN model for nucleic acids assumes that the stability of a given base pair depends on the identity and orientation of neighboring base pairs. Stacking Energy = DeltaG(total) = Sigma (n(i)*DeltaG(i)) + DeltaG(init) + DeltaG(end) + DeltaG(sym), where DeltaG for i, init and end is obtained from the unified NN free energy parameters. Symmetry of self-complementary duplexes is also included via DeltaG(sym), which equals +0.43 (kcal/mol) if the duplex is self-complementary and zero if it is non-self-complementary. 7) Complement of a desired nucleotide sequence. 8) Reverse of a desired nucleotide sequence. 9) Reverse-complement of a desired nucleotide sequence. 10) Protein, gene and organism of a desired UniProt ID. 11) Converting a nucleotide sequence to a protein sequence. 12) Getting localization from the Gene Ontology cellular component inside UniProt. 13) All related codons of a desired amino acid.","Published":"2016-08-15","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"sequences","Version":"0.5.9","Title":"Generic and Biological Sequences","Description":"Educational package used in R courses to illustrate\n \t object-oriented programming and package\n \t development. 
Using biological sequences (DNA and RNA) as\n \t a working example.","Published":"2014-12-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"Sequential","Version":"2.3.1","Title":"Exact Sequential Analysis for Poisson and Binomial Data","Description":"Functions to calculate exact critical values, statistical power, expected time to signal and required sample sizes for performing exact sequential analysis. All these calculations can be done for either Poisson or binomial data, for continuous or group sequential analyses, and for different types of rejection boundaries. In case of group sequential analyses, the group sizes do not have to be specified in advance and the alpha spending can be arbitrarily chosen.","Published":"2017-03-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"sequenza","Version":"2.1.2","Title":"Copy Number Estimation from Tumor Genome Sequencing Data","Description":"Tools to analyze genomic sequencing data from\n paired normal-tumor samples, including cellularity and ploidy estimation; mutation\n and copy number (allele-specific and total copy number) detection, quantification \n and visualization.","Published":"2015-10-10","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"sequoia","Version":"0.8.1","Title":"Pedigree Inference from SNPs","Description":"Fast multi-generational pedigree inference from incomplete data on\n hundreds of SNPs, including parentage assignment and sibship clustering.\n See article \"Pedigree reconstruction from SNP data: Parentage assignment,\n sibship clustering, and beyond\" (Mol Ecol Res, accepted manuscript) for \n more information.","Published":"2017-03-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"serial","Version":"1.2","Title":"The Serial Interface Package","Description":"Provides functionality for the use of the internal hardware for\n RS232/RS422/RS485 and any other virtual serial interfaces of the\n 
computer.","Published":"2016-04-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"seriation","Version":"1.2-2","Title":"Infrastructure for Ordering Objects Using Seriation","Description":"Infrastructure for seriation with an implementation of several\n seriation/sequencing techniques to reorder matrices, dissimilarity\n matrices, and dendrograms. Also provides (optimally) reordered heatmaps,\n color images and clustering visualizations like dissimilarity plots, and\n visual assessment of cluster tendency plots (VAT and iVAT).","Published":"2017-05-09","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"seroincidence","Version":"1.0.5","Title":"Estimating Infection Rates from Serological Data","Description":"Translates antibody levels measured in a (cross-sectional)\n population sample into an estimate of the frequency with which\n seroconversions (infections) occur in the sampled population.","Published":"2015-12-29","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"servr","Version":"0.6","Title":"A Simple HTTP Server to Serve Static Files or Dynamic Documents","Description":"Start an HTTP server in R to serve static files, or dynamic\n documents that can be converted to HTML files (e.g., R Markdown) under a\n given directory.","Published":"2017-05-12","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"sesem","Version":"1.0.2","Title":"Spatially Explicit Structural Equation Modeling","Description":"Structural equation modeling is a powerful statistical approach for the testing of networks of direct and indirect theoretical causal relationships in complex data sets with inter-correlated dependent and independent variables. Here we implement a simple method for spatially explicit structural equation modeling based on the analysis of variance co-variance matrices calculated across a range of lag distances. 
This method provides readily interpreted plots of the change in path coefficients across scale.","Published":"2016-06-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"session","Version":"1.0.3","Title":"Functions for interacting with, saving and restoring R sessions","Description":"Utility functions for interacting with R processes from\n external programs. This package includes functions to save and\n restore session information (including loaded packages, and\n attached data objects), as well as functions to evaluate\n strings containing R commands and return the printed results or\n an execution transcript.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sessioninfo","Version":"1.0.0","Title":"R Session Information","Description":"Query and print information about the current R session.\n It is similar to 'utils::sessionInfo()', but includes more information\n about packages, and where they were installed from.","Published":"2017-06-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SetMethods","Version":"2.1","Title":"Functions for Set-Theoretic Multi-Method Research and Advanced\nQCA","Description":"This package started as a companion to the book by C. Q.\n Schneider and C. Wagemann \"Set-Theoretic Methods for the Social\n Sciences\", Cambridge University Press. It grew to include functions \n\tfor performing set-theoretic multi-method research, QCA for clustered \n\tdata, theory evaluation, and Enhanced Standard Analysis. Additionally \n\tit includes data to replicate the examples in the book and in the online \n\tappendix.","Published":"2017-03-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SETPath","Version":"1.0","Title":"Spiked Eigenvalue Test for Pathway data","Description":"Tests gene expression data from a biological pathway for biologically meaningful differences in the eigenstructure between two classes. 
Specifically, it tests the null hypothesis that the two classes' leading eigenvalues and sums of eigenvalues are equal. A pathway's leading eigenvalue arguably represents the total variability due to variability in pathway activity, while the sum of all its eigenvalues represents the variability due to pathway activity and to other, unregulated causes. Implementation of the method described in Danaher (2015), \"Covariance-based analyses of biological pathways\".","Published":"2015-02-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SetRank","Version":"1.1.0","Title":"Advanced Gene Set Enrichment Analysis","Description":"Implements an algorithm to conduct advanced\n gene set enrichment analysis on the results of genomics experiments.","Published":"2016-05-11","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"setRNG","Version":"2013.9-1","Title":"Set (Normal) Random Number Generator and Seed","Description":"SetRNG provides utilities to help set and record the setting of\n\tthe seed and the uniform and normal generators used when a random\n\texperiment is run. The utilities can be used in other functions \n\tthat do random experiments to simplify recording and/or setting all the \n\tnecessary information for reproducibility. 
\n\tSee the vignette and reference manual for examples.","Published":"2014-11-25","License":"GPL-2 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"sets","Version":"1.0-17","Title":"Sets, Generalized Sets, Customizable Sets and Intervals","Description":"Data structures and basic operations for ordinary sets,\n generalizations such as fuzzy sets, multisets, and\n fuzzy multisets, customizable sets, and intervals.","Published":"2017-03-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"setter","Version":"0.0-1","Title":"Mutators that Work with Pipes","Description":"Mutators to set attributes of variables, that work well in a pipe\n (much like stats::setNames()).","Published":"2016-03-30","License":"Unlimited","snapshot_date":"2017-06-23"} {"Package":"SetTest","Version":"0.1.0","Title":"Group Testing Procedures for Signal Detection and\nGoodness-of-Fit","Description":"It provides cumulative distribution function (CDF),\n quantile, p-value, statistical power calculator and random number generator\n for a collection of group-testing procedures, including the Higher Criticism\n tests, the one-sided Kolmogorov-Smirnov tests, the one-sided Berk-Jones tests,\n the one-sided phi-divergence tests, etc. The input is a group of p-values.\n The null hypothesis is that they are i.i.d. Uniform(0,1). In the context of\n signal detection, the null hypothesis means no signals. In the context of\n goodness-of-fit testing, which contrasts a group of i.i.d. random variables to\n a given continuous distribution, the input p-values can be obtained by the CDF\n transformation. The null hypothesis means that these random variables follow the\n given distribution. For reference, see Hong Zhang, Jiashun Jin and Zheyang Wu. 
\"Distributions and\n Statistical Power of Optimal Signal Detection Methods in Finite Samples\",\n submitted.","Published":"2017-02-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"settings","Version":"0.2.4","Title":"Software Option Settings Manager for R","Description":"Provides option settings management that goes\n beyond R's default 'options' function. With this package, users can define\n their own option settings manager holding option names, default values and \n (if so desired) ranges or sets of allowed option values that will be \n automatically checked. Settings can then be retrieved, altered and reset \n to defaults with ease. For R programmers and package developers it offers \n cloning and merging functionality which allows for conveniently defining \n global and local options, possibly in a multilevel options hierarchy. See \n the package vignette for some examples concerning functions, S4 classes, \n and reference classes. There are convenience functions to reset par() \n and options() to their 'factory defaults'.","Published":"2015-10-27","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"severity","Version":"2.0","Title":"Mayo's Post-data Severity Evaluation","Description":"This package contains functions for calculating severity\n and generating severity curves. Specifically, the simple case\n of the one-parameter Normal distribution (i.e., with known\n variance) is considered.","Published":"2013-03-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sExtinct","Version":"1.1","Title":"Calculates the historic date of extinction given a series of\nsighting events","Description":"This package combines several sighting based estimators of historical extinction, allowing them to be run simultaneously or individually. Code for this package was contributed by Ben Collen, Gene Hunt and Tracy Rout. 
Additional code was taken from McPherson & Myers (2009).","Published":"2013-10-31","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"sf","Version":"0.5-1","Title":"Simple Features for R","Description":"Support for simple features, a standardized way to encode spatial vector data. Binds \n to GDAL for reading and writing data, to GEOS for geometrical operations, and to Proj.4 for projection\n\tconversions and datum transformations.","Published":"2017-06-23","License":"GPL-2 | MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"sfa","Version":"1.0-1","Title":"Stochastic Frontier Analysis","Description":"Stochastic Frontier Analysis\n introduced by Aigner, Lovell and Schmidt (1976)\n and Battese and Coelli (1992, 1995).","Published":"2014-01-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"sfadv","Version":"1.0.1","Title":"Advanced Methods for Stochastic Frontier Analysis","Description":"\n Stochastic frontier analysis with advanced methods.\n In particular, it applies the approach proposed by Latruffe et al. (2017) \n to estimate a stochastic frontier with technical \n inefficiency effects when one input is endogenous.","Published":"2017-06-19","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"sfc","Version":"0.1.0","Title":"Substance Flow Computation","Description":"Provides a function sfc() to compute the substance flow\n with the input files --- \"data\" and \"model\". 
If sample.size is\n set to more than 1, uncertainty analysis will be executed, with\n the distributions and parameters supplied in the file \"data\".","Published":"2016-08-25","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"sfdct","Version":"0.0.3","Title":"Constrained Triangulation for Simple Features","Description":"Build a constrained 'Delaunay' triangulation from simple features\n objects, applying constraints based on input line segments, and triangle\n properties including maximum area, minimum internal angle.","Published":"2017-05-02","License":"CC BY-NC-SA 4.0","snapshot_date":"2017-06-23"} {"Package":"sFFLHD","Version":"0.1.1","Title":"Sequential Full Factorial-Based Latin Hypercube Design","Description":"Gives design points from a sequential full factorial-based\n Latin hypercube design, as described in Duan, Ankenman, Sanchez,\n and Sanchez (2015, Technometrics,\n ).","Published":"2016-09-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"sfinx","Version":"1.7.9","Title":"Straightforward Filtering Index for AP-MS Data Analysis (SFINX)","Description":"The straightforward filtering index (SFINX) identifies true positive\n protein interactions in a fast, user-friendly, and highly accurate way.\n It is not only useful for the filtering of affinity purification -\n mass spectrometry (AP-MS) data, but also for similar types of data\n resulting from other co-complex interactomics technologies, such as TAP-MS,\n Virotrap and BioID. SFINX can also be used via the website interface at\n .","Published":"2016-12-23","License":"Apache License 2.0","snapshot_date":"2017-06-23"} {"Package":"SFS","Version":"0.1.1","Title":"Similarity-First Search Seriation Algorithm","Description":"An implementation of the Similarity-First Search algorithm (SFS), a combinatorial algorithm which can be used to solve the seriation problem and to recognize some structured weighted graphs. 
The SFS algorithm represents a generalization to weighted graphs of the graph search algorithm Lexicographic Breadth-First Search (Lex-BFS), a variant of Breadth-First Search. The SFS algorithm reduces to Lex-BFS when applied to binary matrices (or, equivalently, unweighted graphs). Hence this library can be also considered for Lex-BFS applications such as recognition of graph classes like chordal or unit interval graphs. In fact, the SFS seriation algorithm implemented in this package is a multisweep algorithm, which consists in repeating a finite number of SFS iterations (at most \\eqn{n} sweeps for a matrix of size \\eqn{n}). If the data matrix has a Robinsonian structure, then the ranking returned by the multistep SFS algorithm is a Robinson ordering of the input matrix. Otherwise the algorithm can be used as a heuristic to return a ranking partially satisfying the Robinson property. ","Published":"2017-06-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"sfsmisc","Version":"1.1-1","Title":"Utilities from 'Seminar fuer Statistik' ETH Zurich","Description":"Useful utilities ['goodies'] from Seminar fuer Statistik ETH\n Zurich, quite a few related to graphics; some were ported from S-plus.","Published":"2017-06-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sft","Version":"2.0-7","Title":"Functions for Systems Factorial Technology Analysis of Data","Description":"This package contains a series of tools for analyzing Systems Factorial Technology data. 
This includes functions for plotting and statistically testing capacity coefficient functions and survivor interaction contrast functions.","Published":"2014-10-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SGCS","Version":"2.6","Title":"Spatial Graph Based Clustering Summaries for Spatial Point\nPatterns","Description":"Graph based clustering summaries for spatial point patterns.\n Includes Connectivity function, Cumulative connectivity function and clustering\n function, plus the triangle/triplet intensity function T.","Published":"2017-04-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sgd","Version":"1.1","Title":"Stochastic Gradient Descent for Scalable Estimation","Description":"A fast and flexible set of tools for large scale estimation. It\n features many stochastic gradient methods, built-in models, visualization\n tools, automated hyperparameter tuning, model checking, interval estimation,\n and convergence diagnostics.","Published":"2016-01-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"sgee","Version":"0.2-0","Title":"Stagewise Generalized Estimating Equations","Description":"Stagewise techniques implemented with Generalized Estimating Equations to handle individual, group, and bi-level selection.","Published":"2016-10-17","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"sgeostat","Version":"1.0-27","Title":"An Object-Oriented Framework for Geostatistical Modeling in S+","Description":"An Object-oriented Framework for Geostatistical Modeling in S+ \n containing functions for variogram estimation, variogram fitting and kriging\n as well as some plot functions. 
Written entirely in S, therefore works only\n for small data sets in acceptable computing time.","Published":"2016-02-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SGL","Version":"1.1","Title":"Fit a GLM (or cox model) with a combination of lasso and group\nlasso regularization","Description":"Fit a regularized generalized linear model via penalized\n maximum likelihood. The model is fit for a path of values of\n the penalty parameter. Fits linear, logistic and Cox models.","Published":"2013-04-02","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"sglasso","Version":"1.2.2","Title":"Lasso Method for RCON(V,E) Models","Description":"RCON(V, E) models (Højsgaard, et al., 2008) are a kind of restriction of the Gaussian Graphical Models defined by a set of equality constraints on the entries of the concentration matrix. 'sglasso' package implements the structured graphical lasso (sglasso) estimator proposed in Abbruzzo et al. (2014) for the weighted l1-penalized RCON(V, E) model. Two cyclic coordinate algorithms are implemented to compute the sglasso estimator, i.e. a cyclic coordinate minimization (CCM) and a cyclic coordinate descent (CCD) algorithm.","Published":"2015-12-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sglOptim","Version":"1.3.6","Title":"Generic Sparse Group Lasso Solver","Description":"Fast generic solver for sparse group lasso optimization\n problems. The loss (objective) function must be defined in a\n C++ module. The optimization problem is solved using a\n coordinate gradient descent algorithm. Convergence of the\n algorithm is established (see reference) and the algorithm is\n applicable to a broad class of loss functions. Use of parallel\n computing for cross validation and subsampling is supported\n through the 'foreach' and 'doParallel' packages. 
Development\n version is on GitHub, please report package issues on GitHub.","Published":"2017-04-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sglr","Version":"0.7","Title":"An R package for power and boundary calculations in\npre-licensure vaccine trials using a sequential generalized\nlikelihood ratio test","Description":"Functions for computing power and boundaries for pre-licensure vaccine trials using the Generalized Likelihood Ratio tests proposed by Shih, Lai, Heyse and Chen","Published":"2014-05-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sgof","Version":"2.3","Title":"Multiple Hypothesis Testing","Description":"Seven different methods for multiple testing problems. The SGoF-type methods and the BH and BY false discovery rate controlling procedures.","Published":"2016-04-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SGP","Version":"1.7-0.0","Title":"Student Growth Percentiles & Percentile Growth Trajectories","Description":"Functions to calculate student growth percentiles and percentile growth projections/trajectories for students using large scale,\n longitudinal assessment data. Functions use quantile regression to estimate the conditional density associated\n with each student's achievement history. 
Percentile growth projections/trajectories are calculated using the coefficient matrices derived from\n\tthe quantile regression analyses and specify what percentile growth is required for students to reach future achievement targets.","Published":"2017-04-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"sGPCA","Version":"1.0","Title":"Sparse Generalized Principal Component Analysis","Description":"Functions for computing sparse generalized principal components, including functions for modeling structured correlation","Published":"2013-07-06","License":"BSD","snapshot_date":"2017-06-23"} {"Package":"SGPdata","Version":"17.0-0.0","Title":"Exemplar Data Sets for SGP Analyses","Description":"Data sets utilized by the SGP Package as exemplars for users to conduct their own SGP analyses.","Published":"2017-04-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"sgPLS","Version":"1.4","Title":"Sparse Group Partial Least Square Methods","Description":"The Sparse Group Partial Least Square package (sgPLS) provides sparse, group, and sparse group versions of partial least square regression models.","Published":"2015-11-25","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"sgr","Version":"1.3","Title":"Sample Generation by Replacement","Description":"The package for Sample Generation by Replacement simulations (SGR; Lombardi & Pastore, 2014; Pastore & Lombardi, 2014). The package can be used to perform fake data analysis according to the sample generation by replacement approach. It includes functions for making simple inferences about discrete/ordinal fake data. 
The package allows the user to study the implications of fake data for empirical results.","Published":"2014-11-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sgRSEA","Version":"0.1","Title":"Enrichment Analysis of CRISPR/Cas9 Knockout Screen Data","Description":"Provides functions to implement sgRSEA (single-guide RNA Set Enrichment Analysis), which is a robust test for identification of essential genes from genetic screening data using the CRISPR (clustered regularly interspaced short palindromic repeats) and Cas9 (CRISPR-associated nuclease 9) system.","Published":"2015-06-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"sgt","Version":"2.0","Title":"Skewed Generalized T Distribution Tree","Description":"Density, distribution function, quantile function and random generation for the skewed generalized t distribution. This package also provides a function that can fit data to the skewed generalized t distribution using maximum likelihood estimation.","Published":"2015-09-04","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"shades","Version":"0.2.0","Title":"Simple Colour Manipulation","Description":"Functions for easily manipulating colours, creating colour scales and calculating colour distances.","Published":"2016-09-24","License":"BSD_3_clause + file LICENCE","snapshot_date":"2017-06-23"} {"Package":"shadow","Version":"0.3.3","Title":"Geometric Shadow Calculations","Description":"Functions for calculating (1) shadow heights; (2) shadow footprint on ground polygons; and (3) Sky View Factor values. Inputs include a polygonal layer of obstacle outlines along with their heights, sun azimuth and sun elevation. 
The package also provides functions for related preliminary calculations: breaking polygons into line segments, finding segment azimuth, shifting segments by azimuth and distance, and constructing the footprint of a line of sight between an observer and the sun.","Published":"2017-06-14","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"shallot","Version":"0.3.2","Title":"Random Partition Distribution Indexed by Pairwise Information","Description":"Implementations are provided for the models described in the paper D. B. Dahl, R. Day, J. Tsai (2017) . The Ewens, Ewens-Pitman, Ewens attraction, Ewens-Pitman attraction, and ddCRP distributions are available for prior simulation. We hope in the future to add posterior simulation with a user-supplied likelihood. Supporting functions for partition estimation and plotting are also planned.","Published":"2017-05-25","License":"Apache License 2.0 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"shape","Version":"1.4.2","Title":"Functions for plotting graphical shapes, colors","Description":"Functions for plotting graphical shapes\n such as ellipses, circles, cylinders, arrows, ...","Published":"2014-11-05","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"ShapeChange","Version":"1.4","Title":"Change-Point Estimation using Shape-Restricted Splines","Description":"In a scatterplot where the response variable is Gaussian, Poisson or binomial, we consider the case in which the mean function is smooth with a change-point, which is a mode, an inflection point or a jump point. The main routine estimates the mean curve and the change-point as well using shape-restricted B-splines. An optional subroutine delivering a bootstrap confidence interval for the change-point is incorporated in the main routine. 
","Published":"2016-12-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"shapefiles","Version":"0.7","Title":"Read and Write ESRI Shapefiles","Description":"Functions to read and write ESRI shapefiles","Published":"2013-01-26","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"ShapePattern","Version":"1.0.1","Title":"Tools for Analyzing Planar Shape and Associated Patterns","Description":"An evolving and growing collection of tools for the quantification, assessment, and comparison of planar shape and pattern. The current flagship functionality is in the spatial decomposition of planar shapes using 'ShrinkShape' to incrementally shrink shapes to extinction while computing area, perimeter, and number of parts at each iteration of shrinking. The spectra of results are returned in graphic and tabular formats. Additional utility tools for handling data are provided and this package will be added to as more tools are created, cleaned-up, and documented.","Published":"2016-10-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"shapeR","Version":"0.1-5","Title":"Collection and Analysis of Otolith Shape Data","Description":"Studies otolith shape variation among fish populations. \n Otoliths are calcified structures found in the inner ear of teleost fish and their shape has \n been known to vary among several fish populations and stocks, making them very useful in taxonomy, \n species identification and to study geographic variations. The package extends previously described \n software used for otolith shape analysis by allowing the user to automatically extract closed \n contour outlines from a large number of images, perform smoothing to eliminate pixel noise, \n choose from conducting either a Fourier or wavelet transform to the outlines and visualize \n the mean shape. The output of the package are independent Fourier or wavelet coefficients \n which can be directly imported into a wide range of statistical packages in R. 
The package \n might prove useful in studies of any two dimensional objects.","Published":"2015-05-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"shapes","Version":"1.2.0","Title":"Statistical Shape Analysis","Description":"Routines for the statistical analysis of landmark \n shapes, including Procrustes analysis, graphical displays, principal\n components analysis, permutation and bootstrap tests, thin-plate \n spline transformation grids and comparing covariance matrices. \n see Dryden, I.L. and Mardia, K.V. (2016). Statistical shape analysis, \n with Applications in R (2nd Edition), Wiley. ","Published":"2017-02-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ShapeSelectForest","Version":"1.3","Title":"Shape Selection for Landsat Time Series of Forest Dynamics","Description":"Landsat satellites collect important data about global forest conditions. Documentation about Landsat's role in forest disturbance estimation is available at the site . By constrained quadratic B-splines, this package delivers an optimal shape-restricted trajectory to a time series of Landsat imagery for the purpose of modeling annual forest disturbance dynamics to behave in an ecologically sensible manner assuming one of seven possible \"shapes\", namely, flat, decreasing, one-jump (decreasing, jump up, decreasing), inverted vee (increasing then decreasing), vee (decreasing then increasing), linear increasing, and double-jump (decreasing, jump up, decreasing, jump up, decreasing). The main routine selects the best shape according to the minimum Bayes information criterion (BIC) or the cone information criterion (CIC), which is defined as the log of the estimated predictive squared error. The package also provides parameters summarizing the temporal pattern including year(s) of inflection, magnitude of change, pre- and post-inflection rates of growth or recovery. 
In addition, it contains routines for converting a flat map of disturbance agents to time-series disturbance maps and a graphical routine displaying the fitted trajectory of Landsat imagery. ","Published":"2016-10-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SharpeR","Version":"1.1.0","Title":"Statistical Significance of the Sharpe Ratio","Description":"A collection of tools for analyzing significance of trading\n strategies, based on the Sharpe ratio and overfit of the same.","Published":"2016-03-14","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"sharpeRratio","Version":"1.1","Title":"Moment-Free Estimation of Sharpe Ratios","Description":"An efficient moment-free estimator of the Sharpe ratio, or signal-to-noise ratio, for heavy-tailed data (see ).","Published":"2016-10-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sharpshootR","Version":"1.0","Title":"A Soil Survey Toolkit","Description":"Miscellaneous soil data management, summary, visualization, and conversion utilities to support soil survey.","Published":"2016-08-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sharx","Version":"1.0-5","Title":"Models and Data Sets for the Study of Species-Area Relationships","Description":"Hierarchical models for the analysis of species-area \n relationships (SARs) by combining several data sets and covariates; \n with a global data set combining individual SAR studies; \n as described in Solymos and Lele \n (2012, Global Ecology and Biogeography 21, 109-120).","Published":"2016-01-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"shazam","Version":"0.1.7","Title":"Immunoglobulin Somatic Hypermutation Analysis","Description":"Provides a computational framework for Bayesian estimation of\n antigen-driven selection in immunoglobulin (Ig) sequences, providing an\n intuitive means of analyzing selection by quantifying the degree of\n selective pressure. 
Also provides tools to profile mutations in Ig\n sequences, build models of somatic hypermutation (SHM) in Ig sequences,\n and make model-dependent distance comparisons of Ig repertoires.","Published":"2017-05-14","License":"CC BY-SA 4.0","snapshot_date":"2017-06-23"} {"Package":"SHELF","Version":"1.2.3","Title":"Tools to Support the Sheffield Elicitation Framework","Description":"Implements various methods for eliciting a probability distribution\n for a single parameter from an expert or a group of experts. The expert\n provides a small number of probability judgements, corresponding\n to points on his or her cumulative distribution function. A range of parametric\n distributions can then be fitted and displayed, with feedback provided in the\n form of fitted probabilities and percentiles. A graphical interface for the roulette elicitation\n method is also provided. For multiple experts, a weighted linear pool can be\n calculated. Also includes functions for eliciting beliefs about population distributions.","Published":"2017-02-11","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"shiny","Version":"1.0.3","Title":"Web Application Framework for R","Description":"Makes it incredibly easy to build interactive web\n applications with R. Automatic \"reactive\" binding between inputs and\n outputs and extensive prebuilt widgets make it possible to build\n beautiful, responsive, and powerful applications with minimal effort.","Published":"2017-04-26","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"shiny.semantic","Version":"0.1.1","Title":"Semantic UI Support for Shiny","Description":"Creating a great user interface for your Shiny apps\n can be a hassle, especially if you want to work purely in R\n and don't want to use, for instance HTML templates. This\n package adds support for a powerful UI library Semantic UI -\n . 
It also supports universal UI input \n binding that works with various DOM elements.","Published":"2017-05-29","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"shinyAce","Version":"0.2.1","Title":"Ace Editor Bindings for Shiny","Description":"Ace editor bindings to enable a rich text editing environment\n within Shiny.","Published":"2016-03-14","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"shinybootstrap2","Version":"0.2.1","Title":"Bootstrap 2 Web Components for Use with Shiny","Description":"Provides Bootstrap 2 web components for use with the Shiny\n package. With versions of Shiny prior to 0.11, these Bootstrap 2 components\n were included as part of the package. Later versions of Shiny include\n Bootstrap 3, so the Bootstrap 2 components have been moved into this\n package for those users who rely on features specific to Bootstrap 2.","Published":"2015-02-11","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"shinyBS","Version":"0.61","Title":"Twitter Bootstrap Components for Shiny","Description":"Adds additional Twitter Bootstrap components to Shiny. ","Published":"2015-03-31","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"shinycssloaders","Version":"0.2.0","Title":"Add CSS Loading Animations to 'shiny' Outputs","Description":"Create a lightweight Shiny wrapper for the css-loaders created by Luke Hass . Wrapping a Shiny output will automatically show a loader when the output is (re)calculating.","Published":"2017-05-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"shinydashboard","Version":"0.6.1","Title":"Create Dashboards with 'Shiny'","Description":"Create dashboards with 'Shiny'. 
This package provides\n a theme on top of 'Shiny', making it easy to create attractive dashboards.","Published":"2017-06-14","License":"GPL-2 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"shinyDND","Version":"0.1.0","Title":"Shiny Drag-n-Drop","Description":"Add functionality to create drag and drop div elements in shiny.","Published":"2016-06-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"shinyFeedback","Version":"0.0.3","Title":"Displays User Feedback Next to Shiny Inputs","Description":"Easily display user feedback next to Shiny inputs. The feedback message is displayed when the feedback condition evaluates to TRUE.","Published":"2017-04-09","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"shinyFiles","Version":"0.6.2","Title":"A Server-Side File System Viewer for Shiny","Description":"Provides functionality for client-side navigation of\n the server side file system in shiny apps. In case the app is running\n locally this gives the user direct access to the file system without the\n need to \"download\" files to a temporary location. 
Both file and folder\n selection as well as file saving are available.","Published":"2016-08-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"shinyHeatmaply","Version":"0.1.0","Title":"Deploy 'heatmaply' using 'shiny'","Description":"Access functionality of the 'heatmaply' package through 'Shiny UI'.","Published":"2017-03-28","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"ShinyItemAnalysis","Version":"1.2.0","Title":"Test and Item Analysis via Shiny","Description":"Interactive shiny application for analysis of educational tests and\n their items.","Published":"2017-06-16","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"shinyjqui","Version":"0.1.0","Title":"'jQuery UI' Interactions and Effects for Shiny","Description":"An extension to shiny that brings interactions and animation effects from\n the 'jQuery UI' library.","Published":"2017-03-04","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"shinyjs","Version":"0.9","Title":"Easily Improve the User Experience of Your Shiny Apps in Seconds","Description":"Perform common useful JavaScript operations in Shiny apps that will\n greatly improve your apps without having to know any JavaScript. Examples\n include: hiding an element, disabling an input, resetting an input back to\n its original value, delaying code execution by a few seconds, and many more\n useful functions for both the end user and the developer. 'shinyjs' can also\n be used to easily call your own custom JavaScript functions from R.","Published":"2016-12-26","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"shinyLP","Version":"1.1.0","Title":"Bootstrap Landing Home Pages for Shiny Applications","Description":"Provides functions that wrap HTML Bootstrap\n components code to enable the design and layout of informative landing home\n pages for Shiny applications. 
This can lead to a better user experience for\n users and less HTML to write for the developer.","Published":"2016-11-26","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"shinymaterial","Version":"0.2.1","Title":"Implement Material Design in Shiny Applications","Description":"Allows shiny developers to incorporate UI elements based on Google's Material design. See for more information.","Published":"2017-04-29","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"shinyRGL","Version":"0.1.0","Title":"Shiny Wrappers for RGL","Description":"Shiny wrappers for the RGL package. This package exposes RGL's\n ability to export WebGL visualization in a shiny-friendly format.","Published":"2013-10-13","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"shinyShortcut","Version":"0.1.0","Title":"Creates an Executable Shortcut for Shiny Applications","Description":"Provides function shinyShortcut() that, \n when given the base directory of a shiny application, will produce an\n executable file that runs the shiny app directly in the user's\n default browser. Tested on both Windows and Unix machines. Inspired\n by and borrowing from \n .","Published":"2017-03-19","License":"GPL (>= 3) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"shinystan","Version":"2.3.0","Title":"Interactive Visual and Numerical Diagnostics and Posterior\nAnalysis for Bayesian Models","Description":"A graphical user interface for interactive Markov chain Monte\n Carlo (MCMC) diagnostics and plots and tables helpful for analyzing a\n posterior sample. 
The interface is powered by RStudio's Shiny web\n application framework and works with the output of MCMC programs written\n in any programming language (and has extended functionality for Stan models\n fit using the rstan and rstanarm packages).","Published":"2017-02-01","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"ShinyTester","Version":"0.1.0","Title":"Functions to Minimize Bonehead Moves While Working with 'shiny'","Description":"It's my experience that working with 'shiny' is intuitive once you're\n into it, but can be quite daunting at first. Several common mistakes are fairly\n predictable, and therefore we can control for these. The functions in this\n package help match up the assets listed in the UI and the SERVER files, and\n visualize the ad hoc structure of the 'shiny' app.","Published":"2017-02-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"shinythemes","Version":"1.1.1","Title":"Themes for Shiny","Description":"Themes for use with Shiny. Includes several Bootstrap themes\n from , which are packaged for use with Shiny\n applications.","Published":"2016-10-12","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"shinyTime","Version":"0.2.1","Title":"A Time Input Widget for Shiny","Description":"Provides a time input widget for Shiny. This widget allows intuitive time input in the\n '[hh]:[mm]:[ss]' or '[hh]:[mm]' (24H) format by using a separate numeric input for each time\n component. The interface with R uses 'DateTimeClasses' objects. 
See the project page for more\n information and examples.","Published":"2016-10-07","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"shinytoastr","Version":"2.1.1","Title":"Notifications from 'Shiny'","Description":"Browser notifications in 'Shiny' apps, using\n 'toastr': .","Published":"2016-06-06","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"shinyTree","Version":"0.2.2","Title":"jsTree Bindings for Shiny","Description":"Exposes bindings to jsTree -- a JavaScript library\n that supports interactive trees -- to enable rich, editable trees in\n Shiny.","Published":"2015-02-13","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"shinyWidgets","Version":"0.3.0","Title":"Custom Inputs Widgets for Shiny","Description":"Some custom input widgets to use in Shiny applications, like a toggle switch to replace checkboxes. And other components to pimp your apps.","Published":"2017-06-11","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"SHIP","Version":"1.0.2","Title":"SHrinkage covariance Incorporating Prior knowledge","Description":"The SHIP-package allows the estimation of various types of\n shrinkage covariance matrices. These types differ in terms of\n the so-called covariance target (to be chosen by the user), the\n highly structured matrix which the standard unbiased sample\n covariance matrix is shrunken towards and which optionally\n incorporates prior biological knowledge extracted from the\n database KEGG. 
The shrinkage intensity is obtained via an\n analytical procedure.","Published":"2013-12-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SHLR","Version":"1.0","Title":"Shared Haplotype Length Regression","Description":"A statistical method designed to take advantage of population genetics and microevolutionary theory, specifically by testing the association between haplotype sharing length and trait of interest.","Published":"2016-04-04","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"shock","Version":"1.0","Title":"Slope Heuristic for Block-Diagonal Covariance Selection in High\nDimensional Gaussian Graphical Models","Description":"Block-diagonal covariance selection for high dimensional Gaussian\n graphical models. The selection procedure is based on the slope heuristics.","Published":"2015-12-24","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"shopifyr","Version":"0.28","Title":"An R Interface to the Shopify API","Description":"An interface to the API of the E-commerce service Shopify\n (http://docs.shopify.com/api)","Published":"2014-08-27","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"shotGroups","Version":"0.7.3","Title":"Analyze Shot Group Data","Description":"Analyzes shooting data with respect to group shape,\n precision, and accuracy. This includes graphical methods,\n descriptive statistics, and inference tests using standard,\n but also non-parametric and robust statistical methods.\n Implements distributions for radial error in bivariate normal\n variables. 
Works with files exported by OnTarget PC/TDS or\n Taran, as well as with custom data files in text format.\n Supports inference from range statistics like extreme spread.\n Includes a set of web-based graphical user interfaces.","Published":"2017-01-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"showtext","Version":"0.4-6","Title":"Using Fonts More Easily in R Graphs","Description":"Making it easy to use various types of fonts ('TrueType',\n 'OpenType', Type 1, web fonts, etc.) in R graphs, and supporting most output\n formats of R graphics including PNG, PDF and SVG. Text glyphs will be converted\n into polygons or raster images, hence after the plot has been created, it no\n longer relies on the font files. No external software such as 'Ghostscript' is\n needed to use this package.","Published":"2017-01-05","License":"Apache License (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"showtextdb","Version":"1.0","Title":"Font Files for the 'showtext' Package","Description":"Providing font files that are needed by the 'showtext' package.","Published":"2015-03-10","License":"Apache License (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"shp2graph","Version":"0-2","Title":"Convert a SpatialLinesDataFrame object to a \"igraph-class\"\nobject","Description":"Functions for converting network data from a\n SpatialLinesDataFrame object to a \"igraph-class\" object.","Published":"2014-05-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"shrink","Version":"1.2.1","Title":"Global, Parameterwise and Joint Shrinkage Factor Estimation","Description":"The predictive value of a statistical model can often be improved\n by applying shrinkage methods. This can be achieved, e.g., by regularized\n regression or empirical Bayes approaches. Various types of shrinkage factors can\n also be estimated after a maximum likelihood fit. 
While global shrinkage modifies\n all regression coefficients by the same factor, parameterwise shrinkage factors\n differ between regression coefficients. With variables which are either highly\n correlated or associated with regard to contents, such as several columns of a\n design matrix describing a nonlinear effect, parameterwise shrinkage factors are\n not interpretable and a compromise between global and parameterwise shrinkage,\n termed 'joint shrinkage', is a useful extension. A computational shortcut to\n resampling-based shrinkage factor estimation based on DFBETA residuals can be\n applied. Global, parameterwise and joint shrinkage for models fitted by lm(),\n glm(), coxph(), or mfp() is available.","Published":"2016-03-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ShrinkCovMat","Version":"1.1.2","Title":"Shrinkage Covariance Matrix Estimators","Description":"Provides nonparametric Steinian shrinkage estimators of the covariance matrix that are suitable in high dimensional settings, that is when the number of variables is larger than the sample size.","Published":"2016-05-22","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"shuffle","Version":"1.0.1","Title":"The Shuffle Estimator for Explainable Variance","Description":"Implementation of the shuffle estimator, a non-parametric estimator for signal and noise variance under mild noise correlations. ","Published":"2016-05-02","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"siar","Version":"4.2","Title":"Stable Isotope Analysis in R","Description":"This package takes data on organism isotopes and fits a\n Bayesian model to their dietary habits based upon a Gaussian\n likelihood with a mixture Dirichlet-distributed prior on the\n mean. It also includes SiBER metrics. See siardemo() for an\n example. Version 4.1.2 contains bug fixes to allow\n isotope numbers other than 2. 
Version 4.2 fixes a bug that\n stopped siar working on 64-bit systems.","Published":"2013-04-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SIBER","Version":"2.1.3","Title":"Stable Isotope Bayesian Ellipses in R","Description":"Fits bi-variate ellipses to stable isotope data using Bayesian\n inference with the aim being to describe and compare their isotopic\n niche.","Published":"2017-04-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sicegar","Version":"0.1","Title":"Analysis of Single-Cell Viral Growth Curves","Description":"Classifies time course fluorescence data of viral growth. The package categorizes time course data into one of four categories, \"ambiguous\", \"no signal\", \"sigmoidal\", and \"double sigmoidal\" by fitting a series of mathematical models to the data. The origin of the package name came from \"SIngle CEll Growth Analysis in R\".","Published":"2017-03-19","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"SID","Version":"1.0","Title":"Structural Intervention Distance","Description":"The code computes the structural intervention distance (SID) between a true directed acyclic graph (DAG) and an estimated DAG. Definition and details about the implementation can be found in J. Peters and P. 
Bühlmann: \"Structural intervention distance (SID) for evaluating causal graphs\", Neural Computation 27, pages 771-799, 2015.","Published":"2015-03-07","License":"FreeBSD","snapshot_date":"2017-06-23"} {"Package":"sideChannelAttack","Version":"1.0-6","Title":"Side Channel Attack","Description":"This package has two purposes: first, it gives the\n community an R implementation of known side channel attacks\n and countermeasures, as well as data to test them; second, it\n allows a side channel attack to be implemented quickly and easily.","Published":"2013-04-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"SIDES","Version":"1.11","Title":"Subgroup Identification Based on Differential Effect Search","Description":"Provides a function to apply the \"Subgroup Identification based on Differential Effect Search\" (SIDES) method proposed by Lipkovich et al. (2011) .","Published":"2017-05-30","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"sidier","Version":"4.0.2","Title":"Substitution and Indel Distances to Infer Evolutionary\nRelationships","Description":"Evolutionary reconstruction based on substitutions and insertion-deletion (indels) analyses in a distance-based framework.","Published":"2017-06-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"sidrar","Version":"0.2.0","Title":"An Interface to IBGE's SIDRA API","Description":"Allows the user to connect with IBGE's (Instituto Brasileiro de \n Geografia e Estatistica, see for more information)\n SIDRA API in a flexible way. SIDRA is the acronym for \"Sistema IBGE de \n Recuperacao Automatica\" and is the system through which IBGE makes \n aggregate data from its research available.","Published":"2017-06-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"sievetest","Version":"1.2.2","Title":"Sieve test reporting functions","Description":"Functions for making sieve test reports. 
Sieve test is widely used to obtain particle-size distribution of powders or granular materials.","Published":"2014-07-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sig","Version":"0.0-5","Title":"Print Function Signatures","Description":"Print function signatures and find overly complicated code.","Published":"2015-01-22","License":"Unlimited","snapshot_date":"2017-06-23"} {"Package":"sigclust","Version":"1.1.0","Title":"Statistical Significance of Clustering","Description":"SigClust is a statistical method for testing the\n significance of clustering results. SigClust can be applied to\n assess the statistical significance of splitting a data set\n into two clusters. For more than two clusters, SigClust can be\n used iteratively.","Published":"2014-01-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SightabilityModel","Version":"1.3","Title":"Wildlife Sightability Modeling","Description":"Uses logistic regression to model the probability of detection as a function of covariates. \n This model is then used with observational survey data to estimate population size, while\n accounting for uncertain detection. 
See Steinhorst and Samuel (1989).","Published":"2014-10-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"sigloc","Version":"0.0.4","Title":"Signal Location Estimation","Description":"A collection of tools for estimating the location of a transmitter signal from radio telemetry studies using the maximum likelihood estimation (MLE) approach described in Lenth (1981).","Published":"2014-10-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sigmoid","Version":"0.2.1","Title":"Sigmoid Functions for Machine Learning","Description":"Several different sigmoid functions are implemented, including a wrapper function, SoftMax preprocessing and inverse functions.","Published":"2017-03-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"signal","Version":"0.7-6","Title":"Signal Processing","Description":"A set of signal processing functions originally written for 'Matlab' and 'Octave'.\n Includes filter generation utilities, filtering functions,\n resampling routines, and visualization of filter models. It also\n includes interpolation functions.","Published":"2015-07-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"signalHsmm","Version":"1.4","Title":"Predict Presence of Signal Peptides","Description":"Predicts the presence of signal peptides in eukaryotic proteins\n using hidden semi-Markov models. The implemented algorithm can be accessed from\n both the command line and GUI.","Published":"2016-03-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"SignifReg","Version":"1.0","Title":"Significant Variable Selection in Linear Regression","Description":"Provides a significant variable selection procedure with different directions (forward, backward, stepwise) based on diverse criteria (Mallows' Cp, AIC, BIC, adjusted r-square, p-value). 
The algorithm selects a final model with only significant variables based on a chosen correction: False Discovery Rate, Bonferroni, or no correction.","Published":"2017-02-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"signmedian.test","Version":"1.5.1","Title":"Perform Exact Sign Test and Asymptotic Sign Test in Large\nSamples","Description":"Perform sign test on one-sample data, which is one of the oldest non-parametric statistical methods. Assume that X comes from a continuous distribution with unknown median v. Test the null hypothesis H0: v = mu (where mu is the location parameter given in the test) vs. the alternative hypothesis H1: v > mu (or v < mu, or v != mu) and calculate the p-value. When the sample size is large, perform the asymptotic sign test. In both cases, calculate the R-estimate of location of X and the distribution free confidence interval for mu.","Published":"2015-05-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SigOptR","Version":"0.0.1","Title":"R API Wrapper for SigOpt","Description":"Interfaces with the 'SigOpt' API. More info at .","Published":"2017-03-10","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"sigora","Version":"2.0.1","Title":"Signature Overrepresentation Analysis","Description":"\n Pathway Analysis is the process of statistically linking observations on the molecular level to biological processes or pathways on the systems (organism, organ, tissue, cell) level. \n Traditionally, pathway analysis methods regard pathways as collections of single genes and treat all genes in a pathway as equally informative. This can lead to identification of spurious (misleading) pathways as statistically significant, since components are often shared amongst pathways. \n SIGORA seeks to avoid this pitfall by focusing on genes or gene-pairs that are (as a combination) specific to a single pathway. 
In relying on such pathway gene-pair signatures (Pathway-GPS), SIGORA inherently uses the status of other genes in the experimental context to identify the most relevant pathways. \n The current version allows for pathway analysis of human and mouse data sets and contains pre-computed Pathway-GPS data for pathways in the KEGG and Reactome pathway repositories as well as mechanisms for extracting GPS for user supplied repositories.","Published":"2016-05-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"sigr","Version":"0.1.6","Title":"Format Significance Summaries for Reports","Description":"Succinctly format significance summaries of\n various models and tests (F-test, Chi-Sq-test, Fisher-test, T-test, and rank-significance). The main purpose is unified reporting and planning\n of experimental results, working around issues such as the difficulty of\n extracting model summary facts (such as with 'lm'/'glm'). This package also\n includes empirical tests, such as bootstrap estimates.","Published":"2017-05-05","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"SigTree","Version":"1.10.5","Title":"Identify and Visualize Significantly Responsive Branches in a\nPhylogenetic Tree","Description":"Provides tools to identify and visualize branches in a phylogenetic tree that are significantly responsive to some intervention, taking as primary inputs a phylogenetic tree (of class phylo) and a data frame (or matrix) of corresponding tip (OTU) labels and p-values.","Published":"2017-02-08","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"SII","Version":"1.0.3","Title":"Calculate ANSI S3.5-1997 Speech Intelligibility Index","Description":"\n This package calculates ANSI S3.5-1997 Speech Intelligibility Index\n (SII), a standard method for computing the intelligibility of\n speech from acoustical measurements of speech, noise, and hearing\n thresholds. 
This package includes data frames corresponding to\n Tables 1 - 4 in the ANSI standard as well as a function utilizing\n these tables and user-provided hearing threshold and noise level\n measurements to compute the SII score. The methods implemented\n here extend the standard computations to allow calculation of SII\n when the measured frequencies do not match those required by the\n standard by applying interpolation to obtain values for the\n required frequencies\n -- \n Development of this package was funded by the Center for Bioscience\n Education and Technology (CBET) of the Rochester Institute of\n Technology (RIT).","Published":"2013-12-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Sim.DiffProc","Version":"3.7","Title":"Simulation of Diffusion Processes","Description":"Provides functions for simulating and modeling Ito and Stratonovich stochastic differential equations (SDEs). Statistical analysis and Monte-Carlo simulation of the solution of SDEs has enabled many researchers in different domains to use these equations to model practical problems, in financial and actuarial modeling and other areas of application. For example, modeling and simulation of dispersion in shallow water using the attractive center (Boukhetala K, 1996). 
","Published":"2017-03-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"simba","Version":"0.3-5","Title":"A Collection of functions for similarity analysis of vegetation\ndata","Description":"Besides functions for the calculation of similarity and\n multiple plot similarity measures with binary data (for\n instance presence/absence species data) the package contains\n some simple wrapper functions for reshaping species lists into\n matrices and vice versa and some other functions for further\n processing of similarity data (Mantel-like permutation\n procedures) as well as some other useful stuff for vegetation\n analysis.","Published":"2012-12-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"simboot","Version":"0.2-6","Title":"Simultaneous Inference for Diversity Indices","Description":"Provides estimation of simultaneous bootstrap and asymptotic confidence intervals for diversity indices, namely the Shannon and the Simpson index. Several pre-specified multiple comparison types are available to choose from. Further user-defined contrast matrices are applicable. In addition, simboot estimates adjusted as well as unadjusted p-values for two of the three proposed bootstrap methods. Further simboot allows for comparing biological diversities of two or more groups while simultaneously testing a user-defined selection of Hill numbers of orders q, which are considered as appropriate and useful indices for measuring diversity.","Published":"2017-03-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"simcausal","Version":"0.5.3","Title":"Simulating Longitudinal Data with Causal Inference Applications","Description":"A flexible tool for simulating complex longitudinal data using\n structural equations, with emphasis on problems in causal inference.\n Specify interventions and simulate from intervened data generating\n distributions. 
Define and evaluate treatment-specific means, the average\n treatment effects and coefficients from working marginal structural models.\n The user interface is designed to facilitate the conduct of transparent and\n reproducible simulation studies, and allows concise expression of complex\n functional dependencies for a large number of time-varying nodes. See the\n package vignette for more information, documentation and examples.","Published":"2016-12-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SimComp","Version":"2.2","Title":"Simultaneous Comparisons for Multiple Endpoints","Description":"Simultaneous tests and confidence intervals are provided for one-way experimental designs with one or many normally distributed, primary response variables (endpoints). Differences (Hasler and Hothorn, 2011) or ratios (Hasler and Hothorn, 2012) of means can be considered. Various contrasts can be chosen, unbalanced sample sizes are allowed as well as heterogeneous variances (Hasler and Hothorn, 2008) or covariance matrices (Hasler, 2014).","Published":"2014-09-12","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"SimCorMultRes","Version":"1.4.2","Title":"Simulates Correlated Multinomial Responses","Description":"Simulates correlated multinomial responses conditional on a marginal model specification.","Published":"2016-09-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"simctest","Version":"2.5","Title":"Safe Implementation of Monte Carlo Tests","Description":"Algorithms for the implementation and evaluation of Monte Carlo tests, as well as for their use in multiple testing procedures.","Published":"2017-03-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SimDesign","Version":"1.6","Title":"Structure for Organizing Monte Carlo Simulation Designs","Description":"Provides tools to help safely and efficiently organize Monte Carlo simulations in R.\n The package controls the structure and back-end of Monte Carlo simulations\n 
by utilizing a general generate-analyse-summarise strategy. The functions provided control\n common simulation issues such as re-simulating non-convergent results, support parallel\n back-end and MPI distributed computations, save and restore temporary files,\n aggregate results across independent nodes, and provide native support for debugging.","Published":"2017-04-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"simecol","Version":"0.8-9","Title":"Simulation of Ecological (and Other) Dynamic Systems","Description":"An object oriented framework to simulate\n ecological (and other) dynamic systems. It can be used for\n differential equations, individual-based (or agent-based) and other\n models as well. The package helps to organize scenarios (to avoid copy\n and paste) and aims to improve readability and usability of code.","Published":"2017-06-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"simEd","Version":"1.0.1","Title":"Simulation Education","Description":"Contains various functions to be used for simulation education, \n including queueing simulation functions, variate generation functions\n capable of producing independent streams and antithetic variates, functions\n for illustrating random variate generation for various discrete and\n continuous distributions, and functions to compute time-persistent\n statistics. 
Also contains two queueing data sets (one fabricated, one\n real-world) to facilitate input modeling.","Published":"2017-05-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"simest","Version":"0.4","Title":"Constrained Single Index Model Estimation","Description":"Estimation of function and index vector in single index model with and without shape constraints including different smoothness conditions.","Published":"2017-04-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"simex","Version":"1.5","Title":"SIMEX- and MCSIMEX-Algorithm for measurement error models","Description":"Implementation of the SIMEX-Algorithm by Cook & Stefanski\n and MCSIMEX by Küchenhoff, Mwalili & Lesaffre.","Published":"2013-05-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"simexaft","Version":"1.0.7","Title":"simexaft","Description":"Implementation of the Simulation-Extrapolation (SIMEX) algorithm for the accelerated failure time (AFT) model with covariates subject to measurement error.","Published":"2014-01-19","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"simFrame","Version":"0.5.3","Title":"Simulation framework","Description":"A general framework for statistical simulation.","Published":"2014-03-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"simglm","Version":"0.5.0","Title":"Simulate Models Based on the Generalized Linear Model","Description":"Easily simulates regression models,\n including both simple regression and generalized linear mixed\n models with up to three levels of nesting. 
Flexible power simulations, allowing the specification of missing data,\n unbalanced designs, and different random error distributions, are built\n into the package.","Published":"2017-05-25","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"SimHaz","Version":"0.1","Title":"Simulated Survival and Hazard Analysis for Time-Dependent\nExposure","Description":"Generate power for the Cox proportional hazards model by simulating survival events data with time dependent exposure status for subjects. A dichotomous exposure variable is considered with a single transition from unexposed to exposed status during the subject's time on study.","Published":"2015-10-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SimilarityMeasures","Version":"1.4","Title":"Trajectory Similarity Measures","Description":"Functions to run and assist four\n different similarity measures. The similarity\n measures included are: longest common\n subsequence (LCSS), Frechet distance, edit distance\n and dynamic time warping (DTW). Each of these\n similarity measures can be calculated from two\n n-dimensional trajectories, both in matrix form.","Published":"2015-02-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"Simile","Version":"1.3.3","Title":"Interact with Simile Models","Description":"Allows a Simile model saved as a compiled binary to be\n loaded, parameterized, executed and interrogated. This version works \n with Simile v5.97 on.","Published":"2015-02-19","License":"Unlimited","snapshot_date":"2017-06-23"} {"Package":"SimInf","Version":"5.0.0","Title":"A Framework for Data-Driven Stochastic Disease Spread\nSimulations","Description":"Livestock movements are important for the spread of many\n infectious diseases between herds. 
The package provides an\n efficient and flexible framework for stochastic disease spread\n modelling that integrates within-herd disease dynamics as\n continuous-time Markov chains and livestock movements between\n herds as scheduled events. The core simulation solver is\n implemented in C and uses 'OpenMP' (if available) to divide work\n over multiple processors. The package contains template models and\n can be extended with user defined models.","Published":"2017-06-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"simLife","Version":"0.3","Title":"Simulation of Fatigue Lifetimes","Description":"Provides methods for simulation and analysis of a very general fatigue\n\t\t\t lifetime model for (metal matrix) composite materials.","Published":"2016-10-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"simmer","Version":"3.6.2","Title":"Discrete-Event Simulation for R","Description":"A process-oriented and trajectory-based Discrete-Event Simulation\n (DES) package for R. It is designed as a generic yet powerful framework. The\n architecture encloses a robust and fast simulation core written in C++ with\n automatic monitoring capabilities. It provides a rich and flexible R API that\n revolves around the concept of trajectory, a common path in the simulation\n model for entities of the same type.","Published":"2017-05-30","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"simmer.plot","Version":"0.1.9","Title":"Plotting Methods for 'simmer'","Description":"A set of plotting methods for 'simmer' trajectories and\n simulations.","Published":"2017-03-16","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"simMP","Version":"0.17.3","Title":"Simulate Somatic Mutations in Cancer Genomes from Mutational\nProcesses","Description":"Simulates somatic single base substitutions carried in cancer genomes. 
By only providing a human reference genome, substitutions that result from mutational processes operative in every cancer genome can be generated.","Published":"2017-04-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"simmr","Version":"0.3","Title":"A Stable Isotope Mixing Model","Description":"Fits a stable isotope mixing model via JAGS in R. The package allows for any number of isotopes or sources, as well as concentration dependencies.","Published":"2016-01-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SIMMS","Version":"1.0.2","Title":"Subnetwork Integration for Multi-Modal Signatures","Description":"Algorithms to create prognostic biomarkers using biological networks.","Published":"2015-09-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"simMSM","Version":"1.1.41","Title":"Simulation of Event Histories for Multi-State Models","Description":"Simulation of event histories with possibly non-linear baseline hazard rate functions, non-linear (time-varying) covariate effect functions, and dependencies on the past of the history. Random generation of event histories is performed using inversion sampling on the cumulative all-cause hazard rate functions. ","Published":"2015-03-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"simone","Version":"1.0-3","Title":"Statistical Inference for MOdular NEtworks (SIMoNe)","Description":"Implements the inference of\n co-expression networks based on partial correlation\n coefficients from either steady-state or time-course\n transcriptomic data. Note that with both type of data this\n package can deal with samples collected in different\n experimental conditions and therefore not identically\n distributed. 
In this particular case, multiple but related\n networks are inferred in a single simone run.","Published":"2016-01-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"simPATHy","Version":"0.2","Title":"A Method for Simulating Data from Perturbed Biological Pathways","Description":"Simulate data from a Gaussian graphical model or a Gaussian Bayesian network in two conditions. Given a covariance matrix of a reference condition, it simulates plausible dysregulations.","Published":"2016-09-21","License":"AGPL-3","snapshot_date":"2017-06-23"} {"Package":"simPH","Version":"1.3.10","Title":"Tools for Simulating and Plotting Quantities of Interest\nEstimated from Cox Proportional Hazards Models","Description":"Simulates and plots quantities of interest (relative\n hazards, first differences, and hazard ratios) for linear coefficients,\n multiplicative interactions, polynomials, penalised splines, and\n non-proportional hazards, as well as stratified survival curves from Cox\n Proportional Hazard models. It also simulates and plots marginal effects\n for multiplicative interactions.","Published":"2017-05-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"SimPhe","Version":"0.1.1","Title":"Tools to Simulate Phenotype(s) with Epistatic Interaction","Description":"Provides functions to simulate single or multiple, independent or correlated phenotype(s) with additive, dominance effects and their interactions. Also includes functions to generate phenotype(s) with specific heritability. 
Flexible and user-friendly options for simulation.","Published":"2017-01-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"simpleboot","Version":"1.1-3","Title":"Simple Bootstrap Routines","Description":"Simple bootstrap routines","Published":"2008-05-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"simplegraph","Version":"1.0.0","Title":"Simple Graph Data Types and Basic Algorithms","Description":"Simple classic graph algorithms for simple graph classes.\n Graphs may possess vertex and edge attributes. 'simplegraph' has\n no dependencies and is written entirely in R, so it is easy to\n install.","Published":"2015-12-10","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"simpleNeural","Version":"0.1.1","Title":"An Easy to Use Multilayer Perceptron","Description":"Trains neural networks (multilayer perceptrons with one hidden layer) for bi- or multi-class classification.","Published":"2015-04-28","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"simpleRCache","Version":"0.3.2","Title":"Simple R Cache","Description":"Simple result caching in R based on R.cache. The global environment is not \n considered when caching results, simplifying moving files between multiple instances \n of R. Relies on more base functions than R.cache (e.g. cached results are saved using \n saveRDS() and readRDS()).","Published":"2017-04-09","License":"LGPL","snapshot_date":"2017-06-23"} {"Package":"simpleSetup","Version":"0.1.0","Title":"Set Up R Source Code Files for Use on Multiple Machines","Description":"When working across multiple machines, and similarly for\n reproducible research, it can be time-consuming to ensure that you have\n all of the needed packages installed and loaded and that the correct working\n directory is set. 
'simpleSetup' provides simple functions for making these\n tasks more straightforward.","Published":"2017-01-24","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"SimpleTable","Version":"0.1-2","Title":"Bayesian Inference and Sensitivity Analysis for Causal Effects\nfrom 2 x 2 and 2 x 2 x K Tables in the Presence of Unmeasured\nConfounding","Description":"SimpleTable provides a series of methods to conduct\n Bayesian inference and sensitivity analysis for causal effects\n from 2 x 2 and 2 x 2 x K tables when unmeasured confounding is\n present or suspected.","Published":"2012-06-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"simplexreg","Version":"1.3","Title":"Regression Analysis of Proportional Data Using Simplex\nDistribution","Description":"Simplex density, distribution, quantile functions as well as random variable\n \tgeneration of the simplex distribution are given. Regression analysis of proportional data\n \tusing various kinds of simplex models is available. In addition, GEE method can be applied \n\tto longitudinal data to model the correlation. Residual analysis is also involved. Some \n\tsubroutines are written in C with GNU Scientific Library (GSL) so as to facilitate the \n\tcomputation. ","Published":"2016-08-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SimplicialCubature","Version":"1.2","Title":"Integration of Functions Over Simplices","Description":"Provides methods to integrate functions over m-dimensional simplices\n in n-dimensional Euclidean space. There are exact methods for polynomials and\n adaptive methods for integrating an arbitrary function. Dirichlet probabilities\n are calculated in certain cases.","Published":"2016-05-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"simplr","Version":"0.1-1","Title":"Basic Symbolic Expression Simplification","Description":"Basic tools for symbolic expression simplification, e.g. 
simplify(x*1) => x, or simplify(sin(x)^2+cos(x)^2) => 1. Based on the \"Expression v3\" (Ev3) 1.0 system by Leo Liberti.","Published":"2015-08-20","License":"CPL","snapshot_date":"2017-06-23"} {"Package":"simPop","Version":"0.6.0","Title":"Simulation of Synthetic Populations for Survey Data Considering\nAuxiliary Information","Description":"Tools and methods to simulate populations for surveys based\n on auxiliary data. The tools include model-based methods, calibration and\n combinatorial optimization algorithms. The package was developed with support of\n the International Household Survey Network, DFID Trust Fund TF011722 and funds from the World bank.","Published":"2017-01-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Simpsons","Version":"0.1.0","Title":"Detecting Simpson's Paradox","Description":"This package detects instances of Simpson's Paradox in\n datasets. It examines subpopulations in the data, either\n user-defined or by means of cluster analysis, to test whether a\n regression at the level of the group is in the opposite\n direction at the level of subpopulations.","Published":"2012-08-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"simputation","Version":"0.2.2","Title":"Simple Imputation","Description":"Easy to use interfaces to a number of imputation methods\n that fit in the not-a-pipe operator of the 'magrittr' package.","Published":"2017-05-19","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"simr","Version":"1.0.2","Title":"Power Analysis for Generalised Linear Mixed Models by Simulation","Description":"Calculate power for generalised linear mixed models, using\n simulation. 
Designed to work with models fit using the 'lme4' package.","Published":"2016-11-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SimRAD","Version":"0.96","Title":"Simulations to Predict the Number of RAD and GBS Loci","Description":"Provides a number of functions to simulate restriction enzyme digestion, library construction and fragments size selection to predict the number of loci expected from most of the Restriction site Associated DNA (RAD) and Genotyping By Sequencing (GBS) approaches. SimRAD estimates the number of loci expected from a particular genome depending on the protocol type and parameters allowing to assess feasibility, multiplexing capacity and the amount of sequencing required.","Published":"2016-01-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SimReg","Version":"3.0","Title":"Similarity Regression","Description":"Functions for performing Bayesian similarity regression,\n and evaluating the probability of association between sets of ontological terms\n and binary response vector. A random model is compared with one in which\n the log odds of a true response is linked to the semantic similarity\n between terms and a latent characteristic ontological profile. ","Published":"2016-10-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"simrel","Version":"1.0-1","Title":"Linear Model Data Simulation and Design of Computer Experiments","Description":"Facilitates data simulation from a random regression model where the data properties can be controlled by a few input parameters. The data simulation is based on the concept of relevant latent components and relevant predictors, and was developed for the purpose of testing methods for variable selection for prediction. Included are also functions for designing computer experiments in order to investigate the effects of the data properties on the performance of the tested methods. 
The design is constructed using the Multi-level Binary Replacement (MBR) design approach which makes it possible to set up fractional designs for multi-factor problems with potentially many levels for each factor. ","Published":"2014-11-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SiMRiv","Version":"0.9.1","Title":"Individual-Based, Spatially-Explicit Simulation and Analysis of\nMulti-State Movements in River Networks and Heterogeneous\nLandscapes","Description":"Provides functions to generate and analyze individual-based spatially-explicit\n simulations of multi-state movements in heterogeneous landscapes, based on \"resistance\"\n rasters. Although originally conceived and designed to fill the gap in software simulating\n spatially-explicit trajectories of species constrained to linear, dendritic habitats\n (e.g., river networks), the simulation algorithm is built to be highly flexible and can be\n applied to any (aquatic, semi-aquatic or terrestrial) organism. Thus, the user will be able\n to use the package to simulate movements either in homogeneous landscapes, heterogeneous\n landscapes (e.g. semi-aquatic animal in a riverscape), or even in highly contrasted\n landscapes (e.g. fish in a river network). The algorithm and its input parameters are\n the same for all cases, so that results are comparable. Simulated trajectories can then\n be used as null models to test e.g. for species site fidelity and other movement ecology\n hypotheses, or to build predictive, mechanistic movement models, among other things. The\n package should thus be relevant to explore a broad spectrum of ecological phenomena, such\n as those at the interface of animal behaviour, landscape, spatial and movement ecology,\n disease and invasive species spread, and population dynamics. 
This is the first released\n experimental version; do test before using in production.","Published":"2016-07-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"simsalapar","Version":"1.0-9","Title":"Tools for Simulation Studies in Parallel","Description":"Tools for setting up (\"design\"), conducting, and evaluating\n large-scale simulation studies with graphics and tables, including\n parallel computations.","Published":"2016-04-19","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"SimSCRPiecewise","Version":"0.1.1","Title":"'Simulates Univariate and Semi-Competing Risks Data Given\nCovariates and Piecewise Exponential Baseline Hazards'","Description":"Contains two functions for simulating survival data from piecewise exponential hazards with a proportional hazards adjustment for covariates. The first function SimUNIVPiecewise simulates univariate survival data based on a piecewise exponential hazard, covariate matrix and true regression vector. The second function SimSCRPiecewise simulates semi-competing risks data based on three piecewise exponential hazards, three true regression vectors and three matrices of patient covariates (which can be different or the same). This simulates from the Semi-Markov model of Lee et al (2015) given patient covariates, regression parameters, patient frailties and baseline hazard functions.","Published":"2016-07-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"simsem","Version":"0.5-13","Title":"SIMulated Structural Equation Modeling","Description":"Provides an easy framework for Monte Carlo simulation in structural equation modeling, which can be used for various purposes, such as model fit evaluation, power analysis, or missing data handling and planning. 
","Published":"2016-06-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SimSeq","Version":"1.4.0","Title":"Nonparametric Simulation of RNA-Seq Data","Description":"RNA sequencing analysis methods are often derived by relying on hypothetical parametric models for read counts that are not likely to be precisely satisfied in practice. Methods are often tested by analyzing data that have been simulated according to the assumed model. This testing strategy can result in an overly optimistic view of the performance of an RNA-seq analysis method. We develop a data-based simulation algorithm for RNA-seq data. The vector of read counts simulated for a given experimental unit has a joint distribution that closely matches the distribution of a source RNA-seq dataset provided by the user. Users control the proportion of genes simulated to be differentially expressed (DE) and can provide a vector of weights to control the distribution of effect sizes. The algorithm requires a matrix of RNA-seq read counts with large sample sizes in at least two treatment groups. Many datasets are available that fit this standard.","Published":"2015-11-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"simstudy","Version":"0.1.2","Title":"Simulation of Study Data","Description":"Simulates data sets in order to explore modeling techniques or\n better understand data generating processes. The user specifies a set of\n relationships between covariates, and generates data based on these\n specifications. The final data sets can represent data from randomized\n control trials, repeated measure (longitudinal) designs, and cluster\n randomized trials. 
Missingness can be generated using various\n mechanisms (MCAR, MAR, NMAR).","Published":"2016-12-08","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"simSummary","Version":"0.1.0","Title":"Simulation summary","Description":"simSummary is a small utility package which eases the\n process of summarizing simulation results. Simulations often\n produce intermediate results - some focal statistics that need\n to be summarized over several scenarios and many replications.\n This step is in principle easy, but tedious. The package\n simSummary fills this niche by providing a generic way of\n summarizing the focal statistics of simulations. The useR must\n provide properly structured input, holding focal statistics,\n and then the summary step can be performed with one line of\n code, calling the simSummary function.","Published":"2012-05-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"simTool","Version":"1.0.3","Title":"Conduct Simulation Studies with a Minimal Amount of Source Code","Description":"The simTool package is designed for statistical simulations that\n have two components. One component generates the data and the other one\n analyzes the data. The main aims of the simTool package are the reduction\n of the administrative source code (mainly loops and management code for the\n results) and a simple applicability of the package that allows the user to\n quickly learn how to work with the simTool package. Parallel computing is\n also supported. 
Finally, convenient functions are provided to summarize the\n simulation results.","Published":"2014-10-25","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"SimuChemPC","Version":"1.3","Title":"Simulation process of 4 selection methods in predicting chemical\npotent compounds","Description":"Provides the simulation process of four selection methods for predicting potent compounds.","Published":"2014-02-25","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"simulator","Version":"0.2.0","Title":"An Engine for Running Simulations","Description":"A framework for performing simulations such as those common in\n methodological statistics papers. The design principles of this package\n are described in greater depth in Bien, J. (2016) \"The simulator: An Engine\n to Streamline Simulations,\" which is available at\n .","Published":"2016-07-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"simule","Version":"1.1.0","Title":"A Constrained L1 Minimization Approach for Estimating Multiple\nSparse Gaussian or Nonparanormal Graphical Models","Description":"The SIMULE (Shared and Individual parts of MULtiple graphs Explicitly) is a generalized method for estimating multiple related graphs with shared and individual pattern among graphs. For more details, please see .","Published":"2017-05-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SimultAnR","Version":"1.1","Title":"Correspondence and Simultaneous Analysis","Description":"This package performs classical correspondence analysis (CA) and\n simultaneous analysis (SA). Simultaneous analysis is a factorial\n methodology developed for the joint treatment of a set of several\n contingency tables. 
In SA, tables having the same rows are concatenated \n row-wise.\n In this version of the package, a multiple option has been included for \n the simultaneous analysis\n of tables having the same columns, concatenated column-wise.\n In this way, MSA allows the analysis of an indicator matrix \n where the rows represent individuals.\n In this package, functions for computation, summaries\n and graphical visualization in two dimensions are provided, including\n options to display partial and supplementary points.","Published":"2013-07-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SIN","Version":"0.6","Title":"A SINful Approach to Selection of Gaussian Graphical Markov\nModels","Description":"This package provides routines to perform SIN model selection\n as described in Drton & Perlman (2004, 2008). The selected models are\n represented in the format of the 'ggm' package, which allows in\n particular parameter estimation in the selected model.","Published":"2013-12-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sinaplot","Version":"1.1.0","Title":"An Enhanced Chart for Simple and Truthful Representation of\nSingle Observations over Multiple Classes","Description":"The sinaplot is a data visualization chart suitable for plotting\n any single variable in a multiclass data set. It is an enhanced jitter strip\n chart, where the width of the jitter is controlled by the density\n distribution of the data within each class.","Published":"2017-04-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sinew","Version":"0.2.1","Title":"Create 'roxygen2' Skeleton with Information from Function Script","Description":"Create 'roxygen2' skeleton populated with information scraped from within the function script. 
\n Also creates field entries for imports in the 'DESCRIPTION' file and imports in the 'NAMESPACE' file.\n Can be run from the R console or through the 'RStudio' 'addin' menu.","Published":"2017-06-13","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"sinib","Version":"1.0.0","Title":"Sum of Independent Non-Identical Binomial Random Variables","Description":"Density, distribution function, quantile function \n\tand random generation for the sum of independent non-identical\n\tbinomial distribution with parameters \\code{size} and \\code{prob}.","Published":"2017-05-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"SinIW","Version":"0.2","Title":"The SinIW Distribution","Description":"Density, distribution function, quantile function, random\n generation and survival function for the Sine Inverse Weibull Distribution as\n defined by SOUZA, L. New Trigonometric Class of Probabilistic Distributions.\n 219 p. Thesis (Doctorate in Biometry and Applied Statistics) - Department of\n Statistics and Information, Federal Rural University of Pernambuco, Recife,\n Pernambuco, 2015 (available at ) and BRITO, C. C. R. Method Distributions generator and\n Probability Distributions Classes. 241 p. 
Thesis (Doctorate in Biometry and\n Applied Statistics) - Department of Statistics and Information, Federal Rural\n University of Pernambuco, Recife, Pernambuco, 2014 (available upon request).","Published":"2016-07-18","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"siplab","Version":"1.2","Title":"Spatial Individual-Plant Modelling","Description":"A platform for experimenting with spatially explicit individual-based vegetation models.","Published":"2016-03-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sirad","Version":"2.3-3","Title":"Functions for Calculating Daily Solar Radiation and\nEvapotranspiration","Description":"Calculating daily global solar radiation at a horizontal surface using several well-known models (i.e. Angstrom-Prescott, Supit-Van Kappel, Hargreaves, Bristow and Campbell, and Mahmood-Hubbard), model calibration based on ground-truth data, and model auto-calibration. The FAO Penmann-Monteith equation to calculate evapotranspiration is also included.","Published":"2016-10-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"siRSM","Version":"1.1","Title":"Single-Index Response Surface Models","Description":"This package fits single-index (quadratic) response surface models.","Published":"2014-07-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"sirt","Version":"2.0-25","Title":"Supplementary Item Response Theory Models","Description":"\n Supplementary item response theory models to complement existing \n functions in R, including multidimensional compensatory and\n noncompensatory IRT models, MCMC for hierarchical IRT models and \n testlet models, NOHARM, Rasch copula model, faceted and \n hierarchical rater models, ordinal IRT model (ISOP), \n DETECT statistic, local structural equation modeling (LSEM).","Published":"2017-05-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SIS","Version":"0.8-4","Title":"Sure Independence 
Screening","Description":"Variable selection techniques are essential tools for model\n selection and estimation in high-dimensional statistical models. Through this\n publicly available package, we provide a unified environment to carry out\n variable selection using iterative sure independence screening (SIS) and all\n of its variants in generalized linear models and the Cox proportional hazards\n model.","Published":"2017-04-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"sisal","Version":"0.46","Title":"Sequential Input Selection Algorithm","Description":"Implements the SISAL algorithm by Tikka and Hollmén. It is\n a sequential backward selection algorithm which uses a linear\n model in a cross-validation setting. Starting from the full\n model, one variable at a time is removed based on the\n regression coefficients. From this set of models, a\n parsimonious (sparse) model is found by choosing the model with\n the smallest number of variables among those models where the\n validation error is smaller than a threshold. Also implements\n extensions which explore larger parts of the search space\n and/or use ridge regression instead of ordinary least squares.","Published":"2015-10-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SISIR","Version":"0.1","Title":"Sparse Interval Sliced Inverse Regression","Description":"An interval fusion procedure for functional data in the\n semiparametric framework of SIR. 
Standard ridge and sparse SIR are \n also included in the package.","Published":"2016-12-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sisus","Version":"3.9-13","Title":"SISUS: Stable Isotope Sourcing using Sampling","Description":"SISUS for source partitioning using stable isotopes.\n SISUS reads in a specific Excel-like workbook and performs an IsoSource-type analysis by\n returning a sample of feasible solutions to p in the non-over constrained linear systems, b=Ap.\n Edit \\file{sisus_*_template.xls} and input data values and run parameters.\n Run \\code{\\link{sisus.run}(filename)}. See output in current working directory.","Published":"2014-11-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sisVIVE","Version":"1.4","Title":"Some Invalid Some Valid Instrumental Variables Estimator","Description":"Selects invalid instruments amongst a set of candidate instruments. The algorithm selects potentially invalid instruments and provides an estimate of the causal effect between exposure and outcome.","Published":"2017-05-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"sitar","Version":"1.0.9","Title":"Super Imposition by Translation and Rotation Growth Curve\nAnalysis","Description":"Functions for fitting and plotting SITAR (Super Imposition by\n Translation And Rotation) growth curve models. SITAR is a shape-invariant model\n with a regression B-spline mean curve and subject-specific random effects on\n both the measurement and age scales.","Published":"2017-01-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sitmo","Version":"1.1.0","Title":"Parallel Pseudo Random Number Generator (PPRNG) 'sitmo' Header\nFiles","Description":"Provided within is a high quality and fast PPRNG that is able to be used in an 'OpenMP' parallel\n environment compiled under either C++98 or C++11. 
The objective of this package release is to consolidate\n the distribution of the 'sitmo' library on CRAN by enabling others to link to the 'sitmo' header file instead \n of including a copy of 'sitmo' within their individual package. Lastly, the package contains example \n implementations using 'sitmo' and three accompanying vignettes that provide additional information.","Published":"2017-04-28","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"sitools","Version":"1.4","Title":"Format a number to a string with SI prefix","Description":"Format a number (or a list of numbers) to a string (or a\n list of strings) with SI prefix. Use SI prefixes as constants\n like (4 * milli)^2","Published":"2012-08-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"sitree","Version":"0.1-1","Title":"Single Tree Simulator","Description":"Forecasts plots at tree level.","Published":"2017-03-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sivipm","Version":"1.1-3","Title":"Sensitivity Indices with Dependent Inputs","Description":"Sensitivity indices with dependent correlated inputs, using a\n method based on PLS regression.","Published":"2016-01-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SixSigma","Version":"0.9-4","Title":"Six Sigma Tools for Quality Control and Improvement","Description":"Functions and utilities to perform\n Statistical Analyses in the Six Sigma way.\n Through the DMAIC cycle (Define, Measure, Analyze, Improve, Control),\n you can manage several Quality Management studies:\n Gage R&R, Capability Analysis, Control Charts, Loss Function Analysis,\n etc. 
Data frames used in the books \"Six Sigma with R\" (Springer, 2012)\n and \"Quality Control with R\" (Springer, 2015)\n are also included in the package.","Published":"2017-03-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SizeEstimation","Version":"1.1.1","Title":"Estimating the Sizes of Populations at Risk of HIV Infection\nfrom Multiple Data Sources Using a Bayesian Hierarchical Model","Description":"This function develops an algorithm for presenting a Bayesian hierarchical model for estimating the sizes of local and national drug injected populations in Bangladesh. The model incorporates multiple commonly used data sources including mapping data, surveys, interventions, capture-recapture data, estimates or guesstimates from organizations, and expert opinion.","Published":"2016-07-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"sizeMat","Version":"0.3.0","Title":"Estimate Size at Sexual Maturity","Description":"Contains functions to estimate morphometric and gonadal size at sexual maturity for organisms, usually fish and invertebrates. 
It includes methods for classification based on relative growth (using principal components analysis, hierarchical clustering, discriminant analysis), logistic regression (frequentist or Bayesian), parameter estimation and some basic plots.","Published":"2017-04-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SiZer","Version":"0.1-4","Title":"SiZer: Significant Zero Crossings","Description":"Calculates and plots the SiZer map for scatterplot data.\n A SiZer map is a way of examining when the p-th derivative of a\n scatterplot-smoother is significantly negative, possibly zero\n or significantly positive across a range of smoothing\n bandwidths.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sjdbc","Version":"1.6.0","Title":"JDBC Driver Interface","Description":"Provides a database-independent JDBC interface.","Published":"2016-12-16","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"sjlabelled","Version":"1.0.0","Title":"Labelled Data Utility Functions","Description":"Collection of functions to work with labelled data, including reading and \n writing data between R and other statistical software packages like 'SPSS',\n 'SAS' or 'Stata'. This includes easy ways \n to get, set or change value and variable label attributes, to convert \n labelled vectors into factors or numeric (and vice versa), or to deal with \n multiple declared missing values.","Published":"2017-06-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"sjmisc","Version":"2.5.0","Title":"Data and Variable Transformation Functions","Description":"Collection of miscellaneous utility functions, supporting data \n transformation tasks like recoding, dichotomizing or grouping variables, \n setting and replacing missing values.
The data transformation functions \n also support labelled data, and all integrate seamlessly into a \n 'tidyverse'-workflow.","Published":"2017-06-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"sjPlot","Version":"2.3.1","Title":"Data Visualization for Statistics in Social Science","Description":"Collection of plotting and table output functions for data\n visualization. Results of various statistical analyses (that are commonly used\n in social sciences) can be visualized using this package, including simple and\n cross tabulated frequencies, histograms, box plots, (generalized) linear models,\n mixed effects models, PCA and correlation matrices, cluster analyses, scatter\n plots, Likert scales, effects plots of regression models (including interaction\n terms) and much more. This package supports labelled data.","Published":"2017-03-05","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"sjstats","Version":"0.10.1","Title":"Collection of Convenient Functions for Common Statistical\nComputations","Description":"Collection of convenient functions for common statistical computations,\n which are not directly provided by R's base or stats packages.\n This package aims at providing, first, shortcuts for statistical\n measures, which otherwise could only be calculated with additional\n effort (like standard errors or root mean squared errors). Second,\n these shortcut functions are generic (if appropriate), and can be\n applied not only to vectors, but also to other objects as well\n (e.g., the Coefficient of Variation can be computed for vectors,\n linear models, or linear mixed models; the r2()-function returns\n the r-squared value for 'lm', 'glm', 'merMod' or 'lme' objects).\n The focus of most functions lies on summary statistics or fit\n measures for regression models, including generalized linear\n models and mixed effects models. 
However, some of the functions\n also deal with other statistical measures, like Cronbach's Alpha,\n Cramer's V, Phi, etc.","Published":"2017-06-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"SKAT","Version":"1.3.0","Title":"SNP-Set (Sequence) Kernel Association Test","Description":"Functions for kernel-regression-based association tests including Burden test, SKAT and SKAT-O. These methods aggregate individual SNP score statistics in a SNP set and efficiently compute SNP-set level p-values.","Published":"2017-05-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"skda","Version":"0.1","Title":"Sparse (Multicategory) Kernel Discriminant Analysis","Description":"Performs variable selection for nonparametric classification via sparse (multicategory) kernel discriminant analysis.","Published":"2013-12-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"skeleSim","Version":"0.9.5","Title":"Genetic Simulation Engine","Description":"A shiny interface and supporting tools to guide users in choosing\n appropriate simulations, setting parameters, calculating summary genetic\n statistics, and organizing data output, all within the R environment. In\n addition to supporting existing forward and reverse-time simulators, new\n simulators can be integrated into the environment relatively easily.","Published":"2016-09-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"skeletor","Version":"1.0.4","Title":"An R Package Skeleton Generator","Description":"A tool for bootstrapping new packages with useful defaults,\n including a test suite outline that passes checks and helpers for running\n tests, checking test coverage, building vignettes, and more.
Package\n skeletons it creates are set up for pushing your package to\n 'GitHub' and using other hosted services for building and test automation.","Published":"2017-04-09","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"skellam","Version":"0.2.0","Title":"Densities and Sampling for the Skellam Distribution","Description":"Functions for the Skellam distribution, including: density\n (pmf), cdf, quantiles and regression.","Published":"2016-12-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SkewHyperbolic","Version":"0.3-2","Title":"The Skew Hyperbolic Student t-Distribution","Description":"Functions are provided for the density function,\n distribution function, quantiles and random number generation\n for the skew hyperbolic t-distribution. There are also\n functions that fit the distribution to data. There are\n functions for the mean, variance, skewness, kurtosis and mode\n of a given distribution and to calculate moments of any order\n about any centre. To assess goodness of fit, there are\n functions to generate a Q-Q plot, a P-P plot and a tail plot.","Published":"2015-01-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"skewt","Version":"0.1","Title":"The Skewed Student-t Distribution","Description":"Density, distribution function, quantile function and\n random generation for the skewed t distribution of Fernandez\n and Steel.","Published":"2012-10-29","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"Skillings.Mack","Version":"1.10","Title":"The Skillings-Mack Test Statistic for Block Designs with Missing\nObservations","Description":"A generalization of the statistic used in Friedman's ANOVA method and in Durbin's rank test. This nonparametric statistical test is useful for the data obtained from block designs with missing observations occurring randomly. 
Resulting p-values are based on the chi-squared distribution and the Monte Carlo method.","Published":"2015-09-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"skm","Version":"0.1.5.4","Title":"Selective k-Means","Description":"Algorithms for solving the selective k-means problem,\n which is defined as finding k rows in an m x n matrix such that \n the sum of the column minima is minimized. \n In the scenario where m == n and each cell value in the matrix is a \n valid distance metric, this is equivalent to a k-means problem. \n Selective k-means extends the k-means problem in the sense \n that it is possible to have m != n, often the case m < n, which \n implies the search is limited to a small subset of rows. \n Also, selective k-means extends the k-means problem in the \n sense that the instances in the row set can be instances not seen in \n the column set, e.g., select 2 of 3 internet service providers\n (rows) for 5 houses (columns) so as to minimize the overall cost \n (cell value) - the overall cost is the sum of the column minima over\n the selected 2 service providers.","Published":"2017-01-23","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"skmeans","Version":"0.2-10","Title":"Spherical k-Means Clustering","Description":"Algorithms to compute spherical k-means partitions.\n Features several methods, including a genetic and a fixed-point\n algorithm and an interface to the CLUTO vcluster program.","Published":"2017-02-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Sky","Version":"1.0","Title":"Canopy Openness Analyzer Package","Description":"Provides an alternative to manually processing hemispherical pictures. The algorithm processes each picture one by one to determine the proportion of sky pixels.
The algorithm uses the Ridler and Calvard method (Ridler and Calvard 1978).","Published":"2016-02-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SkyWatchr","Version":"0.5-1","Title":"Wrapper for the SkyWatch API","Description":"Query and download satellite imagery and climate/atmospheric datasets using the SkyWatch API. \n Search datasets by wavelength (band), cloud cover, resolution, location, date, etc.\n Get the query results as data frame and as HTML. To learn more about the SkyWatch API, see .","Published":"2017-01-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sla","Version":"0.1","Title":"Two-Group Straight Line ANCOVA","Description":"Provides directly interpretable estimated coefficients\n for four models in connection with the two-group straight line\n ANCOVA problem: (A) the full model, which requires the fitting of\n two intercepts and two slopes; (B) a reduced model, which requires\n the fitting of a single intercept and single slope; (C) a reduced\n model, which requires the fitting of two separate intercepts and a\n single, common slope; and (D) a reduced model, which requires the\n fitting of a single, common intercept and two separate slopes. The\n summary function provides tests of fit for the (null) hypotheses of:\n (1) equivalent data sets, (2) equivalent slopes, and\n (3) equivalent intercepts.","Published":"2016-12-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"slackr","Version":"1.4.2","Title":"Send Messages, Images, R Objects and Files to 'Slack'\nChannels/Users","Description":"'Slack' provides a service for teams to\n collaborate by sharing messages, images, links, files and more. Functions are provided\n that make it possible to interact with the 'Slack' platform 'API'. 
When\n you need to share information or data from R, rather than resort to copy/\n paste in e-mails or other services like 'Skype' , you\n can use this package to send well-formatted output from multiple R objects and\n expressions to all teammates at the same time with little effort. You can also\n send images from the current graphics device, R objects, and upload files.","Published":"2016-07-20","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"slam","Version":"0.1-40","Title":"Sparse Lightweight Arrays and Matrices","Description":"Data structures and algorithms for sparse arrays and matrices,\n based on index arrays and simple triplet representations, respectively.","Published":"2016-12-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SLC","Version":"0.3","Title":"Slope and level change","Description":"Estimates the slope and level change present in data after\n removing phase A trend. Represents graphically the original and\n the detrended data.","Published":"2013-01-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sld","Version":"0.3.2","Title":"Estimation and Use of the Quantile-Based Skew Logistic\nDistribution","Description":"The skew logistic distribution is a quantile-defined generalisation\n of the logistic distribution (van Staden and King 2015). Provides random \n numbers, quantiles, probabilities, densities and density quantiles for the distribution.\n It provides Quantile-Quantile plots and method of L-Moments estimation \n (including asymptotic standard errors) for the distribution.","Published":"2016-05-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SLDAssay","Version":"1.6","Title":"Software for Analyzing Limiting Dilution Assays","Description":"Calculates maximum likelihood estimate, exact and asymptotic confidence intervals, and exact and asymptotic goodness of fit p-values for infectious units per million (IUPM) from serial limiting dilution assays. 
This package uses the likelihood equation, exact PGOF, and exact confidence intervals described in Meyers et al. (1994) . This software is also implemented as a web application through the Shiny R package .","Published":"2017-03-08","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"sleekts","Version":"1.0.2","Title":"4253H, Twice Smoothing","Description":"Compute Time series Resistant Smooth 4253H, twice smoothing method.","Published":"2015-12-05","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"Sleuth2","Version":"2.0-4","Title":"Data Sets from Ramsey and Schafer's \"Statistical Sleuth (2nd\nEd)\"","Description":"Data sets from Ramsey, F.L. and Schafer, D.W. (2002), \"The\n Statistical Sleuth: A Course in Methods of Data Analysis (2nd\n ed)\", Duxbury. ","Published":"2016-06-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Sleuth3","Version":"1.0-2","Title":"Data Sets from Ramsey and Schafer's \"Statistical Sleuth (3rd\nEd)\"","Description":"Data sets from Ramsey, F.L. and Schafer, D.W. (2013), \"The\n Statistical Sleuth: A Course in Methods of Data Analysis (3rd\n ed)\", Cengage Learning. ","Published":"2016-06-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"slfm","Version":"0.2.2","Title":"Tools for Fitting Sparse Latent Factor Model","Description":"Set of tools to find coherent patterns in microarray data\n using a Bayesian sparse latent factor model (Duarte and Mayrink 2015 -\n http://link.springer.com/chapter/10.1007%2F978-3-319-12454-4_15).\n Considerable effort has been put into making slfm fast and memory efficient,\n making it an interesting alternative to simpler methods in terms\n of execution time. It implements versions of the SLFM using both types\n of mixtures: using a degenerate distribution or a very concentrated\n normal distribution for the spike part of the mixture.
It also implements\n additional functions to help pre-process the data and fit the model\n for a large number of arrays.","Published":"2015-08-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SLHD","Version":"2.1-1","Title":"Maximin-Distance (Sliced) Latin Hypercube Designs","Description":"Generates optimal Latin Hypercube Designs (LHDs) for computer experiments with quantitative factors and optimal Sliced Latin Hypercube Designs (SLHDs) for computer experiments with both quantitative and qualitative factors. Details of the algorithm can be found in Ba, S., Brenneman, W. A. and Myers, W. R. (2015), \"Optimal Sliced Latin Hypercube Designs,\" Technometrics. The most important function in this package is \"maximinSLHD\". ","Published":"2015-01-28","License":"LGPL-2.1","snapshot_date":"2017-06-23"} {"Package":"slickR","Version":"0.1.0","Title":"Create Interactive Carousels with the JavaScript 'Slick' Library","Description":"Create and customize interactive carousels using the 'Slick'\n JavaScript library and the 'htmlwidgets' package. The carousels can contain plots\n produced in R, images, 'iframes', videos and other 'htmlwidgets'.\n These carousels can be used directly from the R console, from 'RStudio', \n in Shiny apps and R Markdown documents.","Published":"2017-04-01","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"slim","Version":"0.1.1","Title":"Singular Linear Models for Longitudinal Data","Description":"Fits singular linear models to longitudinal data. Singular linear\n models are useful when the number, or timing, of longitudinal observations\n may be informative about the observations themselves. They are described\n in Farewell (2010) , and are extensions of the\n linear increments model to general\n longitudinal data.
","Published":"2017-05-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"slimrec","Version":"0.1.0","Title":"Sparse Linear Method to Predict Ratings and Top-N\nRecommendations","Description":"Sparse Linear Method (SLIM) predicts ratings and top-n\n recommendations suited for sparse implicit positive feedback systems. SLIM\n is decomposed into multiple elasticnet optimization problems which are solved\n in parallel over multiple cores. The package is based on \"SLIM: Sparse Linear\n Methods for Top-N Recommender Systems\" by Xia Ning and George Karypis .","Published":"2017-03-25","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"SLOPE","Version":"0.1.3","Title":"Sorted L1 Penalized Estimation (SLOPE)","Description":"Efficient procedures for Sorted L1 Penalized Estimation (SLOPE).\n The sorted L1 norm is useful for statistical estimation and testing,\n particularly for variable selection in the linear model.","Published":"2015-11-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"slp","Version":"1.0-5","Title":"Discrete Prolate Spheroidal (Slepian) Sequence Regression\nSmoothers","Description":"Interface for creation of 'slp' class smoother objects for \n use in Generalized Additive Models (as implemented by packages \n 'gam' and 'mgcv').
","Published":"2016-08-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sm","Version":"2.2-5.4","Title":"Smoothing methods for nonparametric regression and density\nestimation","Description":"This is software linked to the book\n 'Applied Smoothing Techniques for Data Analysis -\n The Kernel Approach with S-Plus Illustrations' Oxford University Press.","Published":"2014-01-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"smaa","Version":"0.2-5","Title":"Stochastic Multi-Criteria Acceptability Analysis","Description":"Implementation of the Stochastic Multi-Criteria Acceptability Analysis (SMAA) family of Multiple Criteria Decision Analysis (MCDA) methods.","Published":"2016-12-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"smac","Version":"1.0","Title":"Sparse Multi-category Angle-Based Large-Margin Classifiers","Description":"This package provides a solution path for L1-penalized angle-based classification. Three loss functions are implemented in smac, including the deviance loss in logistic regression, the exponential loss in boosting, and the proximal support vector machine loss.","Published":"2014-11-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"smacof","Version":"1.9-6","Title":"Multidimensional Scaling","Description":"Provides the following approaches for multidimensional scaling (MDS) based on stress minimization using majorization (smacof): basic MDS on symmetric dissimilarity matrices,\n MDS with external constraints on the configuration, individual differences scaling (idioscal, indscal, and friends), MDS with spherical restrictions, and unfolding. The MDS type can be ratio, interval, ordinal, and monotone splines. \n Various tools and extensions like jackknife MDS, bootstrap MDS, permutation tests, MDS biplots, gravity models, inverse MDS, unidimensional scaling, drift vectors (asymmetric MDS), classical scaling, and Procrustes are implemented as well. 
","Published":"2017-05-11","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"smacpod","Version":"1.4.1","Title":"Statistical Methods for the Analysis of Case-Control Point Data","Description":"Various statistical methods for analyzing case-control point data.\n The methods available closely follow those in chapter 6 of Applied Spatial\n Statistics for Public Health Data by Waller and Gotway (2004).","Published":"2015-09-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"smallarea","Version":"0.1","Title":"Fits a Fay Herriot Model","Description":"Inference techniques for the Fay-Herriot model.","Published":"2015-10-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"smam","Version":"0.3-0","Title":"Statistical Modeling of Animal Movements","Description":"Animal movement models including moving-resting process\n with embedded Brownian motion, Brownian motion with measurement error.","Published":"2016-10-01","License":"GPL (>= 3.0)","snapshot_date":"2017-06-23"} {"Package":"smapr","Version":"0.1.0","Title":"Acquisition and Processing of NASA Soil Moisture Active-Passive\n(SMAP) Data","Description":"\n Facilitates programmatic access to NASA Soil Moisture Active\n Passive (SMAP) data with R. It includes functions to search for, acquire,\n and extract SMAP data.","Published":"2017-02-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"smart","Version":"1.0.1","Title":"Sparse Multivariate Analysis via Rank Transformation","Description":"The package \"smart\" provides a general framework for\n analyzing (including estimation, feature selection and\n prediction) and visualizing big data. It integrates several\n novel, efficient and robust data analysis tools, including\n Transelliptical Component Analysis (TCA), Transelliptical\n Correlation Estimation (TCE) and Group Nearest Shrunken\n Centroids (gnsc). We target high-dimensional data\n analysis (usually d >> n), and exploit computationally\n efficient approaches.
Results are organized to be visualized\n properly for users.","Published":"2013-10-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SmarterPoland","Version":"1.7","Title":"Tools for Accessing Various Datasets Developed by the Foundation\nSmarterPoland.pl","Description":"Tools for accessing and processing datasets prepared by the Foundation SmarterPoland.pl. Among others: access to the APIs of Google Maps, the Central Statistical Office of Poland, MojePanstwo, Eurostat, WHO and other sources.","Published":"2016-03-28","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"SmartSifter","Version":"0.1.0","Title":"Online Unsupervised Outlier Detection Using Finite Mixtures with\nDiscounting Learning Algorithms","Description":"Addresses the problem of outlier detection from the viewpoint of statistical learning theory. The method was proposed by Yamanishi, K., Takeuchi, J., Williams, G. et al. (2004) . It learns a probabilistic model (using a finite mixture model) through an on-line unsupervised process. After each datum is input, a score is given, with a high score indicating a high possibility of being a statistical outlier. ","Published":"2016-09-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SmartSVA","Version":"0.1.3","Title":"Fast and Robust Surrogate Variable Analysis","Description":"Introduces a fast and efficient Surrogate Variable Analysis algorithm that captures variation of unknown sources (batch effects) for high-dimensional data sets.
The algorithm is built on the 'irwsva.build' function of the 'sva' package and proposes a revision of it that achieves an order of magnitude faster running time with no loss of accuracy.","Published":"2017-05-28","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"smatr","Version":"3.4-3","Title":"(Standardised) Major Axis Estimation and Testing Routines","Description":"This package provides methods of fitting bivariate lines in allometry using the major axis (MA) or standardised major axis (SMA), and for making inferences about such lines. The available methods of inference include confidence intervals and one-sample tests for slope and elevation, testing for a common slope or elevation amongst several allometric lines, constructing a confidence interval for a common slope or elevation, and testing for no shift along a common axis, amongst several samples.","Published":"2014-01-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"smbinning","Version":"0.3","Title":"Optimal Binning for Scoring Modeling","Description":"The main purpose of the package is to categorize a numeric variable into bins\n mapped to a binary target variable for its subsequent use in scoring modeling.
\n This functionality dramatically reduces the time-consuming process of finding the optimal \n cut points for a given numeric variable; quickly calculates the Information Value, either for\n one variable at a time or all at once in one line of code; and also outputs 'SQL' code, \n tables, and plots used throughout the development stage.\n The package also allows the user to understand the data via exploratory data analysis \n in one step, establish customized cut points for numeric characteristics, and run the analysis \n for categorical variables.","Published":"2016-06-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SMC","Version":"1.1","Title":"Sequential Monte Carlo (SMC) Algorithm","Description":"Particle filtering, auxiliary particle filtering and sequential Monte Carlo algorithms.","Published":"2011-12-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"smcfcs","Version":"1.3.0","Title":"Multiple Imputation of Covariates by Substantive Model\nCompatible Fully Conditional Specification","Description":"Implements multiple imputation of missing covariates by\n Substantive Model Compatible Fully Conditional Specification.\n This is a modification of the popular FCS/chained equations\n multiple imputation approach, and allows imputation of missing\n covariate values from models which are compatible with the user\n specified substantive model.","Published":"2017-06-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"smco","Version":"0.1","Title":"A simple Monte Carlo optimizer using adaptive coordinate\nsampling","Description":"This package is for optimizing non-linear complex\n functions based on Monte Carlo random sampling.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SMCP","Version":"1.1.3","Title":"Smoothed minimax concave penalization (SMCP) method for\ngenome-wide association studies","Description":"A package containing functions for genome-wide association\n 
studies.","Published":"2010-09-02","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"SMCRM","Version":"0.0-3","Title":"Data Sets for Statistical Methods in Customer Relationship\nManagement by Kumar and Petersen (2012)","Description":"Data Sets for Kumar and Petersen (2012).\n Statistical Methods in Customer Relationship Management,\n Wiley: New York.","Published":"2013-09-16","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"smcure","Version":"2.0","Title":"Fit Semiparametric Mixture Cure Models","Description":"An R package for estimating semiparametric PH and AFT\n mixture cure models.","Published":"2012-09-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"smcUtils","Version":"0.2.2","Title":"Utility functions for sequential Monte Carlo","Description":"Provides resampling functions (stratified, residual,\n multinomial, systematic, and branching), measures of weight\n uniformity (coefficient of variation, effective sample size,\n and entropy), and a weight renormalizing function.","Published":"2013-02-25","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"smdata","Version":"1.1","Title":"Data to accompany Smithson & Merkle, 2013","Description":"Contains data files to accompany Smithson & Merkle (2013), Generalized Linear Models for Categorical and Continuous Limited Dependent Variables.","Published":"2013-09-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"smdc","Version":"0.0.2","Title":"Document Similarity","Description":"This package computes similarity measures among documents.","Published":"2013-02-15","License":"BSD","snapshot_date":"2017-06-23"} {"Package":"smds","Version":"1.0","Title":"Symbolic Multidimensional Scaling","Description":"Symbolic multidimensional scaling for interval-valued dissimilarities. The hypersphere model and the hyperbox model are available.
","Published":"2015-04-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sme","Version":"0.8","Title":"Smoothing-splines Mixed-effects Models","Description":"A package for fitting smoothing-splines mixed-effects models to replicated functional\n data sets.","Published":"2013-08-02","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"smerc","Version":"0.2.2","Title":"Statistical Methods for Regional Counts","Description":"Provides statistical methods for the analysis of areal data, with a focus on cluster detection.","Published":"2015-11-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SMFI5","Version":"1.0","Title":"R functions and data from Chapter 5 of 'Statistical Methods for\nFinancial Engineering'","Description":"R functions and data from Chapter 5 of 'Statistical\n Methods for Financial Engineering', by Bruno Remillard, CRC\n Press, (2013).","Published":"2013-05-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"smfsb","Version":"1.1","Title":"SMfSB 2e: Stochastic Modelling for Systems Biology, second\nedition","Description":"This package contains code and data for modelling and simulation of stochastic kinetic biochemical network models. It contains the code and data associated with the second edition of the book Stochastic Modelling for Systems Biology, published by Chapman & Hall/CRC Press, November 2011.","Published":"2013-12-11","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"SMIR","Version":"0.02","Title":"Companion to Statistical Modelling in R","Description":"This package accompanies Aitkin et al, Statistical\n Modelling in R, OUP, 2009. The package contains some functions\n and datasets used in the text.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"smirnov","Version":"1.0-1","Title":"Provides two taxonomic coefficients from E. S. 
Smirnov\n\"Taxonomic analysis\" (1969) book","Description":"This tiny package contains one function smirnov() which\n calculates two scaled taxonomic coefficients, Txy (coefficient\n of similarity) and Txx (coefficient of originality). These two\n characteristics may be used for the analysis of similarities\n between any number of taxonomic groups, and also for assessing\n uniqueness of a given taxon. It is possible to use smirnov()\n output as a distance measure: convert it to distance by\n \"as.dist(1 - smirnov(x))\".","Published":"2012-11-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Smisc","Version":"0.3.6","Title":"Sego Miscellaneous","Description":"A collection of functions for statistical computing and data manipulation in R. \n Includes routines for data ingestion, operating on dataframes and matrices, conversion to and \n from lists, converting factors, filename manipulation, programming utilities, parallelization, plotting, \n statistical and mathematical operations, and time series.","Published":"2016-06-23","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"SmithWilsonYieldCurve","Version":"1.0.1","Title":"Smith-Wilson Yield Curve Construction","Description":"Constructs a yield curve by the Smith-Wilson method from a\n table of LIBOR and SWAP rates.","Published":"2013-06-19","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"SMLoutliers","Version":"0.1","Title":"Outlier Detection Using Statistical and Machine Learning Methods","Description":"Local Correlation Integral (LOCI) method for outlier identification is implemented here. The LOCI method implemented here was introduced in Breunig et al. 
(2000), see .","Published":"2017-02-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SMMA","Version":"1.0.1","Title":"Soft Maximin Estimation for Large Scale Array-Tensor Models","Description":"Efficient design matrix free procedure for solving a soft maximin problem for large scale array-tensor structured models. Currently Lasso and SCAD penalized estimation is implemented.","Published":"2017-03-30","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"SMNCensReg","Version":"3.0","Title":"Fitting Univariate Censored Regression Model Under the Family of\nScale Mixture of Normal Distributions","Description":"Fits a univariate right-, left- or interval-censored regression model under the family of scale mixtures of normal distributions.","Published":"2015-01-28","License":"GPL (>= 3.0)","snapshot_date":"2017-06-23"} {"Package":"smnet","Version":"2.1.1","Title":"Smoothing for Stream Network Data","Description":"Fits flexible additive models to data on stream networks, taking account of flow-connectivity of the network. Models are fitted using penalised least squares.","Published":"2017-03-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"smoof","Version":"1.5","Title":"Single and Multi-Objective Optimization Test Functions","Description":"Provides generators for a large number of both single- and multi-\n objective test functions which are frequently used for the benchmarking of\n (numerical) optimization algorithms. Moreover, it offers a set of convenient\n functions to generate, plot and work with objective functions.","Published":"2017-04-26","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"smooth","Version":"1.9.9","Title":"Forecasting Using Smoothing Functions","Description":"A set of smoothing functions used for time series analysis and\n forecasting.
Currently the package includes exponential smoothing models and\n SARIMA in state-space form + several simulation functions.","Published":"2017-05-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"smoothAPC","Version":"0.1","Title":"Smoothing of Two-Dimensional Demographic Data, Optionally Taking\ninto Account Period and Cohort Effects","Description":"The implemented method uses bivariate thin plate splines and bivariate lasso-type regularization for smoothing, and allows for both period and cohort effects. Thus the mortality rates are modelled as the sum of four components: a smooth bivariate function of age and time, smooth one-dimensional cohort effects, smooth one-dimensional period effects and random errors.","Published":"2016-09-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"smoother","Version":"1.1","Title":"Functions Relating to the Smoothing of Numerical Data","Description":"A collection of methods for smoothing numerical data, commencing with a port of the Matlab Gaussian window smoothing function. In addition, several functions typically used in smoothing of financial data are included.","Published":"2015-04-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SmoothHazard","Version":"1.4.0","Title":"Estimation of Smooth Hazard Models for Interval-Censored Data\nwith Applications to Survival and Illness-Death Models","Description":"Estimation of two-state (survival) models and irreversible illness-\n death models with possibly interval-censored, left-truncated and right-censored\n data. Proportional intensities regression models can be specified to allow for\n covariate effects separately for each transition. We use either a parametric\n approach with Weibull baseline intensities or a semi-parametric approach with\n M-splines approximation of baseline intensities in order to obtain smooth\n estimates of the hazard functions.
Parameter estimates are obtained by maximum\n likelihood in the parametric approach and by penalized maximum likelihood in the\n semi-parametric approach.","Published":"2017-03-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"smoothHR","Version":"1.0.2","Title":"Smooth Hazard Ratio Curves Taking a Reference Value","Description":"Provides flexible hazard ratio curves allowing non-linear\n relationships between continuous predictors and survival. To\n better understand the effects that each continuous covariate\n has on the outcome, results are expressed in terms of hazard\n ratio curves, taking a specific covariate value as reference.\n Confidence bands for these curves are also derived.","Published":"2015-11-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"smoothie","Version":"1.0-1","Title":"Two-dimensional Field Smoothing","Description":"Functions to smooth two-dimensional fields using FFT and the convolution theorem.","Published":"2013-07-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"smoothmest","Version":"0.1-2","Title":"Smoothed M-estimators for 1-dimensional location","Description":"Some M-estimators for 1-dimensional location (Bisquare, ML\n for the Cauchy distribution, and the estimators from\n application of the smoothing principle introduced in Hampel,\n Hennig and Ronchetti (2011) to the above, the Huber\n M-estimator, and the median; the main function is smoothm), and\n the Pitman estimator.","Published":"2012-08-23","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"smoothSurv","Version":"1.6","Title":"Survival Regression with Smoothed Error Distribution","Description":"Contains, as a main contribution, a function to fit\n a regression model with possibly right, left or interval\n censored observations and with the error distribution\n expressed as a mixture of G-splines.
Core part\n of the computation is done in compiled C++ written\n using the Scythe Statistical Library Version 0.3.","Published":"2015-07-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"smoothtail","Version":"2.0.5","Title":"Smooth Estimation of GPD Shape Parameter","Description":"Given independent and identically distributed observations X(1), ..., X(n) from a Generalized Pareto distribution with shape parameter gamma in [-1,0], offers several methods to compute estimates of gamma. The estimates are based on the principle of replacing the order statistics by quantiles of a distribution function based on a log-concave density function. This procedure is justified by the fact that the GPD density is log-concave for gamma in [-1,0].","Published":"2016-07-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"smotefamily","Version":"1.0","Title":"A Collection of Oversampling Techniques for Class Imbalance\nProblem Based on SMOTE","Description":"A collection of various oversampling techniques developed from SMOTE is provided. SMOTE is an oversampling technique which synthesizes a new minority instance between a minority instance and one of its K nearest neighbors.
(see for more information) Other techniques adopt this concept with other criteria in order to generate a balanced dataset for the class imbalance problem.","Published":"2016-09-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SMPracticals","Version":"1.4-2","Title":"Practicals for use with Davison (2003) Statistical Models","Description":"This package contains the datasets and a few functions for\n use with the practicals outlined in Appendix A of the book\n Statistical Models (Davison, 2003, Cambridge University Press).\n The practicals themselves can be found at\n http://statwww.epfl.ch/davison/SM/","Published":"2013-01-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SMR","Version":"2.0.1","Title":"Externally Studentized Midrange Distribution","Description":"Computes the studentized midrange distribution (pdf, cdf and quantile) and generates random numbers.","Published":"2014-11-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sms","Version":"2.3.1","Title":"Spatial Microsimulation","Description":"Produce small area population estimates by fitting census data to\n survey data.","Published":"2015-11-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"smss","Version":"1.0-2","Title":"Datasets for Agresti and Finlay's \"Statistical Methods for the\nSocial Sciences\"","Description":"Datasets used in \"Statistical Methods for the Social Sciences\"\n (SMSS) by Alan Agresti and Barbara Finlay.","Published":"2015-10-31","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"SMVar","Version":"1.3.3","Title":"Structural Model for variances","Description":"Implements the structural model for variances in order to\n detect differentially expressed genes from gene expression data.","Published":"2012-10-29","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"sn","Version":"1.5-0","Title":"The Skew-Normal and Related Distributions, such as the Skew-t","Description":"Build and manipulate probability
distributions of the skew-normal \n family and some related ones, notably the skew-t family, and provide related\n statistical methods for data fitting and diagnostics, in the univariate and \n the multivariate case.","Published":"2017-02-10","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"sna","Version":"2.4","Title":"Tools for Social Network Analysis","Description":"A range of tools for social network analysis, including node and graph-level indices, structural distance and covariance methods, structural equivalence detection, network regression, random graph generation, and 2D/3D network visualization.","Published":"2016-08-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"snakecase","Version":"0.4.0","Title":"Convert Strings into any Case","Description":"A consistent, flexible and easy to use tool to parse and convert strings into cases like snake or camel among others.","Published":"2017-05-18","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"SnakeCharmR","Version":"1.0.6","Title":"R and Python Integration","Description":"Run 'Python' code, make function calls, assign and retrieve variables, etc. from R.\n A fork from 'rPython' which uses 'jsonlite', 'Rcpp' and has several fixes and improvements.","Published":"2017-03-31","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"snapshot","Version":"0.1.2","Title":"Gadget N-body cosmological simulation code snapshot I/O\nutilities","Description":"Functions for reading and writing Gadget N-body snapshots. The Gadget code is popular in astronomy for running N-body / hydrodynamical cosmological and merger simulations. 
To find out more about Gadget see the main distribution page at www.mpa-garching.mpg.de/gadget/","Published":"2013-10-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SNFtool","Version":"2.2","Title":"Similarity Network Fusion","Description":"Similarity Network Fusion takes multiple views of a network and fuses them together to construct an overall status matrix. The input to our algorithm can be feature vectors, pairwise distances, or pairwise similarities. The learned status matrix can then be used for retrieval, clustering, and classification.","Published":"2014-09-08","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"snht","Version":"1.0.4","Title":"Standard Normal Homogeneity Test","Description":"Implementation of robust and non-robust Standard Normal Homogeneity\n Test (SNHT) for changepoint detection. This test statistic is equal sided,\n as proposed in \"Homogenization of Radiosonde Temperature Time Series Using\n Innovation Statistics\" by Haimberger, L., 2007. However, the statistic contains\n an estimate of sigma^2 in the denominator instead of sigma, which seems to\n be a more appropriate value (based on the paper \"Homogenization of Swedish\n temperature data. Part I: Homogeneity test for linear trends.\" by Alexandersson,\n H., and A. Moberg, 1997).","Published":"2016-04-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"snipEM","Version":"1.0","Title":"Snipping methods for robust estimation and clustering","Description":"Snipping methods","Published":"2014-10-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"snn","Version":"1.1","Title":"Stabilized Nearest Neighbor Classifier","Description":"Implement K-nearest neighbor classifier, weighted nearest neighbor classifier, bagged nearest neighbor classifier, optimal weighted nearest neighbor classifier and stabilized nearest neighbor classifier, and perform model selection via 5 fold cross-validation for them. 
This package also provides functions for computing the classification error and classification instability of a classification procedure.","Published":"2015-08-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"snow","Version":"0.4-2","Title":"Simple Network of Workstations","Description":"Support for simple parallel computing in R.","Published":"2016-10-14","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"SnowballC","Version":"0.5.1","Title":"Snowball stemmers based on the C libstemmer UTF-8 library","Description":"An R interface to the C libstemmer library that implements\n Porter's word stemming algorithm for collapsing words to a common\n root to aid comparison of vocabulary. Currently supported languages are\n Danish, Dutch, English, Finnish, French, German, Hungarian, Italian,\n Norwegian, Portuguese, Romanian, Russian, Spanish, Swedish\n and Turkish.","Published":"2014-08-09","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"snowboot","Version":"0.5.2","Title":"Bootstrap Methods for Network Inference","Description":"Functions for analysis of network objects, which are imported or\n simulated by the package. The non-parametric methods of analysis center\n around snowball and bootstrap sampling.","Published":"2016-12-08","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"snowfall","Version":"1.84-6.1","Title":"Easier cluster computing (based on snow)","Description":"Usability wrapper around snow for easier development of\n parallel R programs. This package offers e.g. extended error\n checks, and additional functions. 
All functions work in\n sequential mode, too, if no cluster is present or wished.\n Package is also designed as a connector to the cluster management\n tool sfCluster, but can also be used without it.","Published":"2015-10-14","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"snowFT","Version":"1.6-0","Title":"Fault Tolerant Simple Network of Workstations","Description":"Extension of the snow package supporting fault tolerant and reproducible applications, as well as supporting easy-to-use parallel programming - only one function is needed. Dynamic cluster size is also available.","Published":"2017-06-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"snp.plotter","Version":"0.5.1","Title":"snp.plotter","Description":"Creates plots of p-values using single SNP and/or haplotype data.\n Main features of the package include options to display a linkage\n disequilibrium (LD) plot and the ability to plot multiple datasets\n simultaneously. Plots can be created using global and/or individual\n haplotype p-values along with single SNP p-values. Images are created as\n either PDF/EPS files.","Published":"2014-02-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"snpar","Version":"1.0","Title":"Supplementary Non-parametric Statistics Methods","Description":"Contains several supplementary non-parametric statistics methods including quantile test, Cox-Stuart trend test, runs test, normal score test, kernel PDF and CDF estimation, kernel regression estimation and kernel Kolmogorov-Smirnov test.","Published":"2014-08-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SNPassoc","Version":"1.9-2","Title":"SNPs-based whole genome association studies","Description":"This package carries out the most common analyses when performing whole genome association studies.
These analyses include descriptive statistics and exploratory analysis of missing values, calculation of Hardy-Weinberg equilibrium, analysis of association based on generalized linear models (either for quantitative or binary traits), and analysis of multiple SNPs (haplotype and epistasis analysis). Permutation test and related tests (sum statistic and truncated product) are also implemented. Exact distributions of the max-statistic and genetic risk-allele score can also be estimated.","Published":"2014-04-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"snpEnrichment","Version":"1.7.0","Title":"SNPs Enrichment Analysis","Description":"Implements classes and methods for large scale SNP enrichment analysis (e.g. SNPs associated with genes expression in a GWAS signal).","Published":"2015-10-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"snplist","Version":"0.17","Title":"Tools to Create Gene Sets","Description":"A set of functions to create SQL tables of gene and SNP information and compose them into a SNP Set, for example for use with the RSNPset package, or to export to a PLINK set.","Published":"2016-12-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"sNPLS","Version":"0.1.8","Title":"NPLS Regression with L1 Penalization","Description":"Tools for performing variable selection in three-way data using N-PLS \n in combination with L1 penalization. The N-PLS model (Rasmus Bro, 1996 \n ) is the \n natural extension of PLS (Partial Least Squares) to N-way structures, and tries \n to maximize the covariance between X and Y data arrays.
The package also adds\n variable selection through L1 penalization.","Published":"2017-05-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SNPmaxsel","Version":"1.0-3","Title":"Maximally selected statistics for SNP data","Description":"This package implements asymptotic methods related to\n maximally selected statistics, with applications to SNP data.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SNPMClust","Version":"1.3","Title":"Bivariate Gaussian Genotype Clustering and Calling for Illumina\nMicroarrays","Description":"Bivariate Gaussian genotype clustering and calling for Illumina\n microarrays, building on the package 'mclust'. Pronounced snip-em-clust.","Published":"2016-07-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"snpRF","Version":"0.4","Title":"Random Forest for SNPs to Prevent X-chromosome SNP Importance\nBias","Description":"A modification of Breiman and Cutler's classification random forests modified for SNP (Single Nucleotide Polymorphism) data (based on randomForest v4.6-7) to prevent X-chromosome SNP variable importance bias compared to autosomal SNPs by simulating the process of X chromosome inactivation. Classification is based on a forest of trees using random subsets of SNPs and other variables as inputs.","Published":"2015-01-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"snpStatsWriter","Version":"1.5-6","Title":"Flexible writing of snpStats objects to flat files","Description":"Write snpStats objects to disk in formats suitable for reading by\n snphap, phase, mach, IMPUTE, beagle, and (almost) anything else that\n expects a rectangular format.","Published":"2013-12-05","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"SNPtools","Version":"1.1","Title":"Accessing, subsetting and plotting mouse SNPs","Description":"This package queries the SNPs data sets and makes plots of\n genes and SNPs. 
This package allows users to access these data\n sets once they have been compiled into one file and indexed\n with Tabix (http://samtools.sourceforge.net/tabix.shtml). There\n are functions to retrieve SNPs, subset them for specific\n strains, obtain only SNPs that are polymorphic for a subset of\n strains, plot SNPs and look for certain allele patterns in\n SNPs. We have also added functions to access indels and\n structural variants.","Published":"2013-07-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"sns","Version":"1.1.2","Title":"Stochastic Newton Sampler (SNS)","Description":"Stochastic Newton Sampler (SNS) is a Metropolis-Hastings-based, Markov Chain Monte Carlo sampler for twice differentiable, log-concave probability density functions (PDFs) where the proposal density function is a multivariate Gaussian resulting from a second-order Taylor-series expansion of log-density around the current point. The mean of the Gaussian proposal is the full Newton-Raphson step from the current point. A Boolean flag allows for switching from SNS to Newton-Raphson optimization (by choosing the mean of proposal function as next point). This can be used during burn-in to get close to the mode of the PDF (which is unique due to concavity). For high-dimensional densities, mixing can be improved via 'state space partitioning' strategy, in which SNS is applied to disjoint subsets of state space, wrapped in a Gibbs cycle. Numerical differentiation is available when analytical expressions for gradient and Hessian are not available. Facilities for validation and numerical differentiation of log-density are provided. 
","Published":"2016-10-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SNscan","Version":"1.0","Title":"Scan Statistics in Social Networks","Description":"Scan statistics applied to social network data can be used to test for cluster characteristics within a social network.","Published":"2016-01-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SNSequate","Version":"1.3.0","Title":"Standard and Nonstandard Statistical Models and Methods for Test\nEquating","Description":"Contains functions implementing various models and\n methods for test equating. It currently implements the traditional\n mean, linear and equipercentile equating methods, as well as the\n mean-mean, mean-sigma, Haebara and Stocking-Lord IRT linking methods.\n It also supports newer methods such as local equating, kernel\n equating (using Gaussian, logistic and uniform kernels) with presmoothing,\n and IRT parameter linking methods based on asymmetric item characteristic\n functions. Functions to obtain both the standard error of equating (SEE)\n and the standard error of the equating difference between two equating\n functions (SEED) are also implemented for the kernel method of\n equating.","Published":"2017-03-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SOAR","Version":"0.99-11","Title":"Memory management in R by delayed assignments","Description":"Allows objects to be stored on disc and automatically\n \t\trecalled into memory, as required, by delayed assignment.","Published":"2013-12-11","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"soc.ca","Version":"0.7.3","Title":"Specific Correspondence Analysis for the Social Sciences","Description":"Specific and class specific multiple correspondence analysis on\n survey-like data.
Soc.ca is optimized for the needs of the social scientist and\n presents easily interpretable results in near publication-ready quality.","Published":"2016-02-09","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"SocialMediaLab","Version":"0.23.2","Title":"Tools for Collecting Social Media Data and Generating Networks\nfor Analysis","Description":"A suite of tools for collecting and constructing networks from\n social media data. Provides easy-to-use functions for collecting data across\n popular platforms (Instagram, Facebook, Twitter, and YouTube) and generating\n different types of networks for analysis.","Published":"2017-05-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SocialMediaMineR","Version":"0.4","Title":"A Social Media Search and Analytic Tool","Description":"Social media search and analytic tool that takes one or multiple URL(s) and returns the information about the popularity and reach of the URL(s) on social media. The function get_socialmedia() retrieves the number of shares, likes, pins, and hits on Facebook (), Pinterest (), StumbleUpon (), LinkedIn (), and Reddit (). The package also includes dedicated functions for each social network platform and a function to resolve shortened URLs.","Published":"2016-12-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SocialNetworks","Version":"1.1","Title":"Generates social networks based on distance","Description":"Generates social networks using either of two\n approaches: pairwise distances or territorial area intersections.","Published":"2014-08-21","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"SocialPosition","Version":"1.0.1","Title":"Social Position Indicators Construction Toolbox","Description":"Provides to sociologists (and related scientists) a toolbox to facilitate the construction of social position indicators from survey data. Social position indicators refer to what is commonly known as social class and social status.
There exist in the sociological literature many theoretical conceptualisations and empirical operationalizations of social class and social status. This first version of the package offers tools to construct the International Socio-Economic Index of Occupational Status (ISEI) and the Oesch social class schema. It also provides tools to convert several occupational classifications (PCS82, PCS03, and ISCO08) into a common one (ISCO88) to facilitate data harmonisation work, and tools to collapse (i.e. group) modalities of social position indicators.","Published":"2015-07-07","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"SOD","Version":"1.0","Title":"SOD for multidimensional scaling","Description":"SOD (Self-Organising-Deltoids) provides multidimensional scaling by gradually reducing the dimensionality of an initial space and using the resulting stress in the configuration to re-arrange nodes. Stress is calculated from the errors in the inter-node distances, and the sum of the stresses at each node is combined to create N-dimensional force vectors that direct the movement of nodes as the dimensionality is iteratively reduced.","Published":"2014-07-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SoDA","Version":"1.0-6","Title":"Functions and Examples for \"Software for Data Analysis\"","Description":"Functions, examples and other software related to the book\n \"Software for Data Analysis: Programming with R\". See\n package?SoDA for an overview.","Published":"2013-06-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sodavis","Version":"1.0","Title":"SODA: Main and Interaction Effects Selection for Logistic\nRegression, Quadratic Discriminant and General Index Models","Description":"Variable and interaction selection are essential to classification in high-dimensional settings.
In this package, we provide an implementation of the SODA procedure, which is a forward-backward algorithm that selects both main and interaction effects under logistic regression and quadratic discriminant analysis. We also provide an extension, S-SODA, for dealing with the variable selection problem for semi-parametric models with continuous responses.","Published":"2017-05-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SODC","Version":"1.0","Title":"Optimal Discriminant Clustering(ODC) and Sparse Optimal\nDiscriminant Clustering(SODC)","Description":"Implements two clustering methods, ODC and SODC, for\n clustering datasets using optimal scoring; they can also be used as\n a dimension reduction tool.","Published":"2013-05-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"sodium","Version":"1.1","Title":"A Modern and Easy-to-Use Crypto Library","Description":"Bindings to 'libsodium': a modern, easy-to-use software library for\n encryption, decryption, signatures, password hashing and more. Sodium uses\n curve25519, a state-of-the-art Diffie-Hellman function by Daniel Bernstein,\n which has become very popular after it was discovered that the NSA had\n backdoored Dual EC DRBG.","Published":"2017-03-30","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"sofa","Version":"0.2.0","Title":"Connector to 'CouchDB'","Description":"Provides an interface to the 'NoSQL' database 'CouchDB'\n (). Methods are provided for managing\n databases within 'CouchDB', including creating/deleting/updating/transferring,\n and managing documents within databases. One can connect with a local\n 'CouchDB' instance, or a remote 'CouchDB' database such as 'Cloudant'\n (). Documents can be inserted directly from\n vectors, lists, data.frames, and 'JSON'.
Targeted at 'CouchDB' v2 or\n greater.","Published":"2016-10-13","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"Sofi","Version":"0.16.4.8","Title":"Interactive Interface for Teaching Purposes","Description":"This package is intended to help people learn in an interactive way, providing examples together with the possibility of solving new ones at the same time. Interactive class notes.","Published":"2016-04-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SOFIA","Version":"1.0","Title":"Making Sophisticated and Aesthetical Figures in R","Description":"Software that leverages the capabilities of Circos by manipulating data, preparing configuration files, and running the Perl-native Circos directly from the R environment with minimal user intervention. Circos is novel software that addresses the challenges in visualizing genetic data by creating circular ideograms composed of tracks of heatmaps, scatter plots, line plots, histograms, links between common markers, glyphs, text, etc. Please see . ","Published":"2017-01-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"softclassval","Version":"1.0-20160527","Title":"Soft Classification Performance Measures","Description":"An extension of sensitivity, specificity, positive and negative\n predictive value to continuous predicted and reference memberships in\n [0, 1].","Published":"2016-05-28","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"SoftClustering","Version":"1.1502","Title":"Soft Clustering Algorithms","Description":"It contains soft clustering algorithms, in particular approaches derived from rough set theory: Lingras & West original rough k-means, Peters' refined rough k-means, and PI rough k-means.
It also contains classic k-means and a corresponding illustrative demo.","Published":"2015-02-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"softImpute","Version":"1.4","Title":"Matrix Completion via Iterative Soft-Thresholded SVD","Description":"Iterative methods for matrix completion that use nuclear-norm regularization. There are two main approaches. The first uses iterative soft-thresholded SVDs to impute the missing values. The second uses alternating least squares. Both have an \"EM\" flavor, in that at each iteration the matrix is completed with the current estimate. For large matrices there is a special sparse-matrix class named \"Incomplete\" that efficiently handles all computations. The package includes procedures for centering and scaling rows, columns or both, and for computing low-rank SVDs on large sparse centered matrices (i.e. principal components).","Published":"2015-04-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"softmaxreg","Version":"1.2","Title":"Training Multi-Layer Neural Network for Softmax Regression and\nClassification","Description":"Implementation of 'softmax' regression and classification models with multiple layer neural network. It can be used for many tasks like word embedding based document classification, 'MNIST' dataset handwritten digit recognition and so on.
Multiple optimization algorithms including 'SGD', 'Adagrad', 'RMSprop', 'Moment', 'NAG', etc. are also provided.","Published":"2016-09-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SOIL","Version":"1.0","Title":"Sparsity Oriented Importance Learning","Description":"Sparsity Oriented Importance Learning (SOIL) provides an objective and informative profile of variable importances for high dimensional regression and classification models.","Published":"2016-07-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"soil.spec","Version":"2.1.4","Title":"Soil Spectroscopy Tools and Reference Models","Description":"Methods and classes for processing and analyzing soil and plant infrared (MIR, alpha-MIR and VISNIR) spectroscopy readings based on the Africa Soil Information Services (AfSIS) project data.","Published":"2014-12-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"soilcarbon","Version":"1.0.0","Title":"Tools to Analyze Soil Carbon Database Created by Powell Center\nWorking Group","Description":"A tool for importing, visualizing, and analyzing the soil carbon database created by the Powell Center working group.","Published":"2017-05-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"soilDB","Version":"1.8-7","Title":"Soil Database Interface","Description":"A collection of functions for reading data from USDA-NCSS soil databases.","Published":"2016-11-10","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"soilphysics","Version":"3.1","Title":"Soil Physical Analysis","Description":"Basic and model-based soil physical analyses.","Published":"2017-05-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"soilprofile","Version":"1.0","Title":"A package to consistently represent soil properties along a soil\nprofile","Description":"This package provides functions to graphically represent \n\t soil properties.
Morphological data gathered in the field \n\t such as horizon boundaries, root abundance and dimensions,\n\t skeletal shape, abundance and dimension as well as\t\n\t meaningful soil color may be represented via the plot\n\t function. A lattice-based plot.element function has been \n\t designed to represent the depth function of a given variable.","Published":"2013-08-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SoilR","Version":"1.1-23","Title":"Models of Soil Organic Matter Decomposition","Description":"This package contains functions for modeling Soil Organic\n Matter decomposition in terrestrial ecosystems.","Published":"2014-04-30","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"soiltexture","Version":"1.4.1","Title":"Functions for Soil Texture Plot, Classification and\nTransformation","Description":"\"The Soil Texture Wizard\" is a set of R functions designed to produce texture triangles (also called texture plots, texture diagrams, texture ternary plots), and to classify and transform soil texture data. These functions allow plotting virtually any soil texture triangle (classification) in any triangle geometry (isosceles, right-angled triangles, etc.). This set of functions is expected to be useful to people using soil texture data from different soil texture classifications or different particle size systems. Many (> 15) texture triangles from all around the world are predefined in the package. A simple text-based graphical user interface is provided: soiltexture_gui().","Published":"2016-06-07","License":"AGPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"soilwater","Version":"1.0.2","Title":"Implementation of Parametric Formulas for Soil Water Retention\nor Conductivity Curve","Description":"A set of R implementations of parametric formulas for the soil water\n retention or conductivity curve. At the moment, only Van Genuchten (for\n soil water retention curve) and Mualem (for hydraulic conductivity) were\n implemented.
See reference\n \url{http://en.wikipedia.org/wiki/Water_retention_curve}. ","Published":"2015-07-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"solaR","Version":"0.44","Title":"Radiation and Photovoltaic Systems","Description":"Calculation methods of solar radiation and performance of photovoltaic systems from daily and intradaily irradiation data sources.","Published":"2016-04-16","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"solarius","Version":"0.3.0.2","Title":"An R Interface to SOLAR","Description":"SOLAR is the standard software program to perform linkage and\n association mappings of the quantitative trait loci (QTLs) in pedigrees of\n arbitrary size and complexity. This package allows the user to exploit the\n variance component methods implemented in SOLAR. It automates such routine\n operations as formatting pedigree and phenotype data. It also parses the model\n output and contains summary and plotting functions for exploration of the\n results. In addition, solarius enables parallel computing of the linkage and\n association analyses, which makes the calculation of genome-wide scans more\n efficient. See for more information about\n SOLAR.","Published":"2015-12-13","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"solarPos","Version":"1.0","Title":"Solar Position Algorithm for Solar Radiation Applications","Description":"Calculation of solar zenith and azimuth angles.","Published":"2016-01-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"solidearthtide","Version":"1.0.2","Title":"Solid Earth Tide Computation","Description":"Predicted solid earth tide displacements in the meridional,\n zonal and vertical directions. Based on \"Solid\" from Dennis Milbert, \n modified from \"dehanttideinelMJD\" by V. Dehant, S. Mathews, J. Gipson, \n and C. 
Bruyninx.","Published":"2015-09-06","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"SOLOMON","Version":"1.0-1","Title":"Parentage analysis","Description":"Parentage analysis using Bayes' theorem.","Published":"2013-08-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"solr","Version":"0.1.6","Title":"General Purpose R Interface to Solr","Description":"Provides a set of functions for querying and parsing\n data from Solr endpoints (local and remote), including search, faceting,\n highlighting, stats, and 'more like this'.","Published":"2015-09-17","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"solrium","Version":"0.4.0","Title":"General Purpose R Interface to 'Solr'","Description":"Provides a set of functions for querying and parsing data\n from 'Solr' () 'endpoints' (local and \n remote), including search, 'faceting', 'highlighting', 'stats', and \n 'more like this'. In addition, some functionality is included for \n creating, deleting, and updating documents in a 'Solr' 'database'.","Published":"2016-10-06","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"solvebio","Version":"0.4.1","Title":"The Official SolveBio API Client for R","Description":"R language bindings for SolveBio's API.\n SolveBio is a biomedical knowledge hub that enables life science\n organizations to collect and harmonize the complex, disparate\n \"multi-omic\" data essential for today's R&D and BI needs.\n For more information, visit .","Published":"2017-06-22","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"som","Version":"0.3-5.1","Title":"Self-Organizing Map","Description":"Self-Organizing Map (with application in gene clustering).","Published":"2016-07-06","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"som.nn","Version":"1.1.0","Title":"Topological k-NN Classifier Based on Self-Organising Maps","Description":"A topological version of k-NN: An abstract model is built\n as 
2-dimensional self-organising map. Samples of unknown\n class are predicted by mapping them on the SOM and analysing\n class membership of neurons in the neighbourhood.","Published":"2017-02-24","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"soma","Version":"1.1.1","Title":"General-Purpose Optimisation With the Self-Organising Migrating\nAlgorithm","Description":"This package provides an R implementation of the Self-Organising Migrating Algorithm, a general-purpose, stochastic optimisation algorithm. The approach is similar to that of genetic algorithms, although it is based on the idea of a series of ``migrations'' by a fixed set of individuals, rather than the development of successive generations. It can be applied to any cost-minimisation problem with a bounded parameter space, and is robust to local minima.","Published":"2014-11-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SOMbrero","Version":"1.2","Title":"SOM Bound to Realize Euclidean and Relational Outputs","Description":"The stochastic (also called on-line) version of the Self-Organising\n Map (SOM) algorithm is provided. Different versions of the \n algorithm are implemented, for numeric and relational data and for\n contingency tables. 
The package also contains many plotting \n features (to help the user interpret the results) and a graphical\n user interface based on shiny.","Published":"2016-09-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"somebm","Version":"0.1","Title":"some Brownian motions simulation functions","Description":"some Brownian motions simulation functions","Published":"2013-11-19","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"someKfwer","Version":"1.2","Title":"Controlling the Generalized Familywise Error Rate","Description":"This package collects some procedures controlling the\n Generalized Familywise Error Rate.","Published":"2014-11-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"someMTP","Version":"1.4.1","Title":"Some Multiple Testing Procedures","Description":"It's a collection of functions for Multiplicity Correction and Multiple Testing.","Published":"2013-11-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sommer","Version":"2.8","Title":"Solving Mixed Model Equations in R","Description":"Multivariate linear mixed model solver for estimation of heterogeneous variances and specification of variance covariance structures. Maximum and Restricted Maximum Likelihood (ML/REML) estimates can be obtained using the Direct-Inversion Newton-Raphson (NR), Direct-Inversion Average Information (AI), MME-based Expectation-Maximization (EM), and Efficient Mixed Model Association (EMMA) algorithms. Designed for genomic prediction and genome wide association studies (GWAS) to include additive, dominance and epistatic relationship structures or other covariance structures in R, but also functional as a regular multivariate mixed model software. 
Multivariate models (multiple responses) can be fitted currently with NR, AI and EMMA algorithms.","Published":"2017-06-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"somplot","Version":"1.6.4","Title":"Visualisation of hexagonal Kohonen maps","Description":"The package provides the plot function som.plot() to\n create high quality visualisations of hexagonal Kohonen maps\n (self-organising maps).","Published":"2013-07-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"sonar","Version":"1.0.2","Title":"Fundamental Formulas for Sonar","Description":"Formulas for calculating sound velocity, water pressure, depth, \n density, absorption and sonar equations.","Published":"2016-09-15","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"sonicLength","Version":"1.4.4","Title":"Estimating Abundance of Clones from DNA fragmentation data","Description":"Estimate the abundance of cell clones from the\n\t distribution of lengths of DNA fragments (as created by\n\t sonication, whence `sonicLength'). The algorithm in\n\t \"Estimating abundances of retroviral insertion sites from\n\t DNA fragment length data\" by Berry CC, Gillet NA, Melamed\n\t A, Gormley N, Bangham CR, Bushman FD. Bioinformatics;\n\t 2012 Mar 15;28(6):755-62 is implemented. The\n\t experimental setting and estimation details are described\n\t in detail there. Briefly, integration of new DNA in a\n\t host genome (due to retroviral infection or gene therapy)\n\t can be tracked using DNA sequencing, potentially allowing\n\t characterization of the abundance of individual cell\n\t clones bearing distinct integration sites. The locations\n\t of integration sites can be determined by fragmenting the\n\t host DNA (via sonication or fragmentase), breaking the\n\t newly integrated DNA at a known sequence, amplifying the\n\t fragments containing both host and integrated DNA,\n\t sequencing those amplicons, then mapping the host\n\t sequences to positions on the reference genome. 
The\n\t relative number of fragments containing a given position\n\t in the host genome estimates the relative abundance of\n\t cells hosting the corresponding integration site, but\n\t that number is not available and the count of amplicons\n\t per fragment varies widely. However, the expected number\n\t of distinct fragment lengths is a function of the\n\t abundance of cells hosting an integration site at a given\n\t position and a certain nuisance parameter. The algorithm\n\t implicitly estimates that function to estimate the\n\t relative abundance.","Published":"2014-08-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sonify","Version":"0.0-1","Title":"Data Sonification - Turning Data into Sound","Description":"Sonification (or audification) is the process of representing data by sounds in the audible range. This package provides the R function sonify() that transforms univariate data, sampled at regular or irregular intervals, into a continuous sound with time-varying frequency. The ups and downs in frequency represent the ups and downs in the data. 
Sonify provides a substitute for R's plot function to simplify data analysis for the visually impaired.","Published":"2017-02-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"soobench","Version":"1.0-73","Title":"Single Objective Optimization Benchmark Functions","Description":"Collection of different single objective test functions\n useful for benchmarks and algorithm development.","Published":"2012-03-05","License":"BSD","snapshot_date":"2017-06-23"} {"Package":"sophisthse","Version":"0.7.0","Title":"Load Russian Economic Indicators from the Archive of Economic\nand Social Data","Description":"Load Russian economic indicators from the Archive of Economic and Social Data .","Published":"2016-07-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"SOPIE","Version":"1.5","Title":"Non-Parametric Estimation of the Off-Pulse Interval of a Pulsar","Description":"Provides functions to non-parametrically estimate the off-pulse interval of a source\n function originating from a pulsar. The technique is based on a sequential application of P-values\n obtained from goodness-of-fit tests for the uniform distribution, such as the Kolmogorov-Smirnov,\n Cramer-von Mises, Anderson-Darling and Rayleigh goodness-of-fit tests.","Published":"2015-09-28","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"soptdmaeA","Version":"1.0.0","Title":"Sequential Optimal Designs for Two-Colour cDNA Microarray\nExperiments","Description":"Computes sequential A-, MV-, D- and E-optimal or near-optimal block and row-column designs for two-colour cDNA microarray experiments using the linear fixed effects and mixed effects models where the interest is in a comparison of all possible elementary treatment contrasts. 
The package also provides an optional method of using the graphical user interface (GUI) R package 'tcltk' to ensure that it is user friendly.","Published":"2017-06-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"soql","Version":"0.1.1","Title":"Helps Make Socrata Open Data API Calls","Description":"Used to construct the URLs and parameters of 'Socrata Open Data API' calls, using the API's 'SoQL' parameter format. Has method-chained and sensible syntax. Plays well with pipes.","Published":"2016-04-01","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"SOR","Version":"0.23.0","Title":"Estimation using Sequential Offsetted Regression","Description":"Estimation for longitudinal data following outcome dependent sampling using the sequential offsetted regression technique. Includes support for binary, count, and continuous data. The first regression is a logistic regression, which uses a known ratio (the probability of being sampled given that the subject/observation was referred divided by the probability of being sampled given that the subject/observation was not referred) as an offset to estimate the probability of being referred given outcome and covariates. The second regression uses this estimated probability to calculate the mean population response given covariates.","Published":"2016-12-09","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"SortableHTMLTables","Version":"0.1-3","Title":"Turns a data frame into an HTML file containing a sortable\ntable","Description":"SortableHTMLTables writes a data frame to an HTML file\n that contains a sortable table. The sorting is done using the\n jQuery plugin Tablesorter. 
The appearance is controlled through\n a CSS file and several GIFs.","Published":"2012-05-13","License":"MIT","snapshot_date":"2017-06-23"} {"Package":"sortinghat","Version":"0.1","Title":"sortinghat","Description":"sortinghat is a classification framework to streamline the\n evaluation of classifiers (classification models and algorithms) and seeks\n to determine the best classifiers on a variety of simulated and benchmark\n data sets. Several error-rate estimators are included to evaluate the\n performance of a classifier. This package is intended to complement the\n well-known 'caret' package.","Published":"2013-12-07","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"sorvi","Version":"0.7.26","Title":"Finnish Open Government Data Toolkit","Description":"Algorithms for Finnish open government data.","Published":"2015-06-26","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"sos","Version":"1.4-1","Title":"Search Contributed R Packages, Sort by Package","Description":"Search contributed R packages, sort by package.","Published":"2017-01-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sos4R","Version":"0.2-11","Title":"An R client for the OGC Sensor Observation Service","Description":"sos4R is a client for Sensor Observation Services (SOS) as\n specified by the Open Geospatial Consortium (OGC). 
It allows\n users to retrieve metadata from SOS web services and to\n interactively create requests for near real-time observation\n data based on the available sensors, phenomena, observations et\n cetera using thematic, temporal and spatial filtering.","Published":"2013-05-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"sotkanet","Version":"0.9.48","Title":"Sotkanet Open Data Access and Analysis","Description":"Access data from the sotkanet open data portal\n .","Published":"2017-05-16","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"sotu","Version":"1.0.2","Title":"United States Presidential State of the Union Addresses","Description":"The President of the United States is constitutionally obligated to provide\n a report known as the 'State of the Union'. The report summarizes the current challenges\n facing the country and the president's upcoming legislative agenda. While historically\n the State of the Union was often a written document, in recent decades it has always\n taken the form of an oral address to a joint session of the United States Congress.\n This package provides the raw text from every such address with the intention of\n being used for meaningful examples of text analysis in R. The corpus is well suited\n to the task as it is historically important, includes material intended to be read\n and material intended to be spoken, and it falls in the public domain. As the corpus\n spans over two centuries it is also a good test of how well various methods hold up\n to the idiosyncrasies of historical texts. 
Associated data about each address, such\n as the year, president, party, and format, are also included.","Published":"2017-05-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"sound","Version":"1.4.4","Title":"A Sound Interface for R","Description":"Basic functions for dealing with wav files and sound samples.","Published":"2016-05-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"soundecology","Version":"1.3.2","Title":"Soundscape Ecology","Description":"Functions to calculate indices for soundscape ecology and other ecology research that uses audio recordings.","Published":"2016-07-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"SoundexBR","Version":"1.2","Title":"Phonetic-Coding for Portuguese","Description":"The SoundexBR package provides an algorithm for decoding names\n into phonetic codes, as pronounced in Portuguese. The goal is for\n homophones to be encoded to the same representation so that they can be\n matched despite minor differences in spelling. The algorithm mainly encodes\n consonants; a vowel will not be encoded unless it is the first letter. 
The\n resulting soundex code consists of a four-character string composed of\n one letter followed by three numerical digits: the letter is the first\n letter of the name, and the digits encode the remaining consonants.","Published":"2015-07-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SOUP","Version":"1.1","Title":"Stochastic Ordering Using Permutations (and Pairwise\nComparisons)","Description":"Construct a ranking of a set of treatments/groups by\n gathering together information coming from several response variables.\n It can be used with both balanced and unbalanced experiments\n (with almost all test statistics) as well as in the presence of either\n continuous covariates or a stratifying (categorical) variable.","Published":"2015-04-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"source.gist","Version":"1.0.0","Title":"Read R code from a GitHub Gist","Description":"Analogous to source(), but works when given a Gist URL or\n ID.","Published":"2012-12-04","License":"MIT","snapshot_date":"2017-06-23"} {"Package":"sourceR","Version":"1.0.1","Title":"Fits a Non-Parametric Bayesian Source Attribution Model","Description":"Implements a non-parametric source attribution model to attribute\n cases of disease to sources in a Bayesian framework with source and type effects.\n Type effects are clustered using a Dirichlet Process. Multiple times and\n locations are supported.","Published":"2017-05-09","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"sourcetools","Version":"0.1.6","Title":"Tools for Reading, Tokenizing and Parsing R Code","Description":"Tools for the reading and tokenization of R code. 
The\n 'sourcetools' package provides both an R and C++ interface for the tokenization\n of R code, and helpers for interacting with the tokenized representation of R\n code.","Published":"2017-04-05","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"SoyNAM","Version":"1.3","Title":"Soybean Nested Association Mapping Dataset","Description":"Genomic and multi-environmental soybean data. Soybean Nested\n Association Mapping (SoyNAM) project dataset funded by the United Soybean Board\n (USB), pre-formatted for general analysis and genome-wide association analysis\n using the NAM package.","Published":"2016-12-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"sp","Version":"1.2-4","Title":"Classes and Methods for Spatial Data","Description":"Classes and methods for spatial\n data; the classes document where the spatial location information\n resides, for 2D or 3D data. Utility functions are provided, e.g. for\n plotting data as maps, spatial selection, as well as methods for\n retrieving coordinates, for subsetting, print, summary, etc.","Published":"2016-12-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sp23design","Version":"0.9","Title":"Design and Simulation of seamless Phase II-III Clinical Trials","Description":"Provides methods for generating, exploring and executing seamless Phase II-III designs of Lai, Lavori and Shih using generalized likelihood ratio statistics. 
Includes pdf and source files that describe the entire R implementation with the relevant mathematical details.","Published":"2014-06-26","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"sp500SlidingWindow","Version":"0.1.0","Title":"Sliding Window Investment Analysis","Description":"Test the results of any given investment/expense combinations for a series of sliding-window periods of the S&P500 from 1950 to the present.","Published":"2016-05-10","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"spa","Version":"2.0","Title":"Implements The Sequential Predictions Algorithm","Description":"Implements the Sequential Predictions Algorithm","Published":"2012-07-23","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"SPA3G","Version":"1.0","Title":"SPA3G: R package for the method of Li and Cui (2012)","Description":"The package implements the model-based kernel machine\n method for detecting gene-centric gene-gene interactions of Li\n and Cui (2012).","Published":"2012-03-23","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"spaa","Version":"0.2.2","Title":"SPecies Association Analysis","Description":"Miscellaneous functions for analysing species association\n and niche overlap.","Published":"2016-06-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SpaCCr","Version":"0.1.0","Title":"Spatial Convex Clustering","Description":"Genomic Region Detection via Spatial Convex Clustering. 
See for details.","Published":"2016-11-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"space","Version":"0.1-1","Title":"Sparse PArtial Correlation Estimation","Description":"Partial correlation estimation with joint sparse\n regression model.","Published":"2010-04-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SPACECAP","Version":"1.1.0","Title":"A Program to Estimate Animal Abundance and Density using\nBayesian Spatially-Explicit Capture-Recapture Models","Description":"SPACECAP is a user-friendly software package for\n estimating animal densities using closed model\n capture-recapture sampling based on photographic captures using\n Bayesian spatially-explicit capture-recapture models. This\n approach offers advantages such as substantially dealing with\n problems posed by individual heterogeneity in capture\n probabilities in conventional capture-recapture analyses. It\n also offers non-asymptotic inferences which are more\n appropriate for small samples of capture data typical of\n photo-capture studies.","Published":"2014-07-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"spaceExt","Version":"1.0","Title":"Extension of SPACE","Description":"Undirected graph inference with missing data.","Published":"2011-12-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"spacejam","Version":"1.1","Title":"Sparse conditional graph estimation with joint additive models","Description":"This package provides an extension of conditional\n independence graph (CIG) and directed acyclic graph (DAG) estimation\n to the case where conditional relationships are (non-linear)\n additive models.","Published":"2013-04-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"spacetime","Version":"1.2-0","Title":"Classes and Methods for Spatio-Temporal Data","Description":"Classes and methods for spatio-temporal data, including space-time regular lattices, sparse lattices, irregular data, and trajectories; 
utility functions for plotting data as map sequences (lattice or animation) or multiple time series; methods for spatial and temporal selection and subsetting, as well as for spatial/temporal/spatio-temporal matching or aggregation, retrieving coordinates, print, summary, etc.","Published":"2016-09-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"spacodiR","Version":"0.13.0115","Title":"Spatial and Phylogenetic Analysis of Community Diversity","Description":"SPACoDi is primarily designed to characterise the\n structure and phylogenetic diversity of communities using\n abundance or presence-absence data of species among community\n plots.","Published":"2013-01-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"spacom","Version":"1.0-5","Title":"Spatially Weighted Context Data for Multilevel Modelling","Description":"Provides tools to construct and exploit spatially weighted context data. Spatial weights are derived by a Kernel function from a user-defined matrix of distances between contextual units. Spatial weights can then be applied either to precise contextual measures or to aggregate estimates based on micro-level survey data, to compute spatially weighted context data. Available aggregation functions include indicators of central tendency, dispersion, or inter-group variability, and take into account survey design weights. The package further allows combining the resulting spatially weighted context data with individual-level predictor and outcome variables, for the purposes of multilevel modelling. An ad hoc stratified bootstrap resampling procedure generates robust point estimates for multilevel regression coefficients and model fit indicators, and computes confidence intervals adjusted for measurement dependency and measurement error of aggregate estimates. 
As an additional feature, residual and explained spatial dependency can be estimated for the tested models.","Published":"2016-02-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"spacyr","Version":"0.9.0","Title":"R Wrapper to the spaCy NLP Library","Description":"An R wrapper to the 'Python' 'spaCy' 'NLP' library,\n from .","Published":"2017-05-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"SPADAR","Version":"1.0","Title":"Spherical Projections of Astronomical Data","Description":"Provides easy to use functions to create all-sky grid plots of widely used astronomical coordinate systems (equatorial, ecliptic, galactic) and scatter plots of data on any of these systems including on-the-fly system conversion. It supports any type of spherical projection to the plane defined by the 'mapproj' package.","Published":"2017-04-30","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"spAddins","Version":"0.1.7","Title":"A Set of RStudio Addins","Description":"A set of RStudio addins that are designed to be used in\n combination with user-defined RStudio keyboard shortcuts. These\n addins either:\n 1) insert text at a cursor position (e.g. 
insert\n operators %>%, <<-, %$%, etc.),\n 2) replace symbols in selected pieces of text (e.g., convert\n backslashes to forward slashes which results in strings like\n \"c:\\data\\\" converted into \"c:/data/\") or\n 3) enclose text with special symbols (e.g., converts \"bold\" into\n \"**bold**\") which is convenient for editing R Markdown files.","Published":"2017-01-03","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"SpadeR","Version":"0.1.1","Title":"Species-Richness Prediction and Diversity Estimation with R","Description":"Estimation of various biodiversity indices and related (dis)similarity measures based on individual-based (abundance) data or sampling-unit-based (incidence) data taken from one or multiple communities/assemblages.","Published":"2016-09-06","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"SpaDES","Version":"1.3.1","Title":"Develop and Run Spatially Explicit Discrete Event Simulation\nModels","Description":"Implement a variety of event-based models, with a focus on\n spatially explicit models. These include raster-based, event-based, and\n agent-based models. The core simulation components are built upon a discrete\n event simulation (DES) framework that facilitates modularity, and easily\n enables the user to include additional functionality by running user-built\n simulation modules. Included are numerous tools to visualize raster and\n other maps. The suggested package 'fastshp' can be installed with\n `install.packages(\"fastshp\", repos = \"http://rforge.net\", type = \"source\")`.","Published":"2016-10-07","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"spaero","Version":"0.2.0","Title":"Software for Project AERO","Description":"Implements methods for anticipating the emergence and eradication\n of infectious diseases from surveillance time series. 
Also provides support\n for computational experiments testing the performance of such methods.","Published":"2016-10-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"spam","Version":"1.4-0","Title":"SPArse Matrix","Description":"Set of functions for sparse matrix algebra.\n Differences with SparseM/Matrix are: \n (1) we only support (essentially) one sparse matrix format, \n (2) based on transparent and simple structure(s), \n (3) tailored for MCMC calculations within GMRF. \n (4) S3 and S4 like-\"compatible\" ... and it is fast.","Published":"2016-08-30","License":"LGPL-2","snapshot_date":"2017-06-23"} {"Package":"spaMM","Version":"2.1.0","Title":"Mixed-Effect Models, Particularly Spatial Models","Description":"Inference in mixed-effect models, including generalized linear mixed models with spatial\n correlations and models with non-Gaussian random effects (e.g., Beta Binomial,\n or negative-binomial mixed models). Variation in residual variance is handled and can be modelled\n as a linear model. 
The algorithms are currently various Laplace approximation\n methods for likelihood or restricted likelihood, in particular h-likelihood and penalized-likelihood\n methods.","Published":"2017-05-25","License":"CeCILL-2","snapshot_date":"2017-06-23"} {"Package":"spanel","Version":"0.1","Title":"Spatial Panel Data Models","Description":"Fit the spatial panel data models: the fixed effects, random\n effects and between models.","Published":"2015-06-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"spant","Version":"0.3.0","Title":"MR Spectroscopy Analysis Tools","Description":"Tools for reading, visualising and processing Magnetic Resonance\n Spectroscopy data.","Published":"2017-06-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"SPAr","Version":"0.1","Title":"Perform rare variants association analysis based on summation of\npartition approaches","Description":"This package performs robust nonparametric tests for rare variants\n association analysis using summation of partition approaches that incorporate\n gene-gene and gene-environmental interactions.","Published":"2014-02-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sparc","Version":"0.9.0","Title":"Semiparametric Generalized Linear Models","Description":"We provide an efficient solver for estimating semiparametric generalized linear models.","Published":"2013-09-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"sparcl","Version":"1.0.3","Title":"Perform sparse hierarchical clustering and sparse k-means\nclustering","Description":"Implements the sparse clustering methods of Witten and\n Tibshirani (2010): \"A framework for feature selection in\n clustering\"; published in Journal of the American Statistical\n Association 105(490): 713-726.","Published":"2013-01-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"spareserver","Version":"1.0.1","Title":"Client Side Load Balancing","Description":"Decide which server to connect to,\n based 
on previous response times, and configuration.","Published":"2015-07-13","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"spark","Version":"1.0.1","Title":"Sparklines in the 'R' Terminal","Description":"A sparkline is a line chart without axes and labels.\n Its goal is to show the general shape of changes over time, or\n another quantity. This package is an 'R' implementation\n of the original shell project: .","Published":"2015-07-13","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"spark.sas7bdat","Version":"1.2","Title":"Read in 'SAS' Data ('.sas7bdat' Files) into 'Apache Spark'","Description":"Read in 'SAS' Data ('.sas7bdat' Files) into 'Apache Spark' from R. 'Apache Spark' is an open source cluster computing framework available at . This R package uses the 'spark-sas7bdat' 'Spark' package () to import and process 'SAS' data in parallel using 'Spark'. This allows executing 'dplyr' statements in parallel on top of 'SAS' data.","Published":"2016-12-27","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"sparkline","Version":"2.0","Title":"'jQuery' Sparkline 'htmlwidget'","Description":"Include interactive sparkline charts\n in all R contexts with the convenience of 'htmlwidgets'. ","Published":"2016-11-12","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"sparklyr","Version":"0.5.6","Title":"R Interface to Apache Spark","Description":"R interface to Apache Spark, a fast and general engine for big data\n processing, see . 
This package supports connecting to\n local and remote Apache Spark clusters, provides a 'dplyr' compatible back-end,\n and provides an interface to Spark's built-in machine learning algorithms.","Published":"2017-06-10","License":"Apache License 2.0 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"sparkTable","Version":"1.3.0","Title":"Sparklines and Graphical Tables for TeX and HTML","Description":"Create sparklines and graphical tables for documents and websites.","Published":"2016-12-13","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"sparktex","Version":"0.1","Title":"Generate LaTeX sparklines in R","Description":"Generate syntax for use with the sparklines package for\n LaTeX.","Published":"2013-06-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"sparkwarc","Version":"0.1.1","Title":"Load WARC Files into Apache Spark","Description":"Load WARC (Web ARChive) files into Apache Spark using 'sparklyr'. This\n allows reading files from the Common Crawl project .","Published":"2017-01-13","License":"Apache License 2.0","snapshot_date":"2017-06-23"} {"Package":"SPARQL","Version":"1.16","Title":"SPARQL client","Description":"Use SPARQL to pose SELECT or UPDATE queries to an end-point. ","Published":"2013-10-25","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"sparr","Version":"0.3-8","Title":"SPAtial Relative Risk","Description":"Provides functions to estimate kernel-smoothed relative risk functions and perform subsequent inference.","Published":"2016-03-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sparseBC","Version":"1.1","Title":"Sparse Biclustering of Transposable Data","Description":"Implements the sparse biclustering proposal of Tan and Witten (2014), Sparse biclustering of transposable data. 
Journal of Computational and Graphical Statistics 23(4):985-1008.","Published":"2015-02-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sparsebn","Version":"0.0.4","Title":"Learning Sparse Bayesian Networks from High-Dimensional Data","Description":"Fast methods for learning sparse Bayesian networks from high-dimensional data using sparse regularization, as described in Aragam, Gu, and Zhou (2017) . Designed to handle mixed experimental and observational data with thousands of variables with either continuous or discrete observations.","Published":"2017-03-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sparsebnUtils","Version":"0.0.4","Title":"Utilities for Learning Sparse Bayesian Networks","Description":"A set of tools for representing and estimating sparse Bayesian networks from continuous and discrete data.","Published":"2017-03-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SparseDC","Version":"0.1.5","Title":"Implementation of SparseDC Algorithm","Description":"Implements the algorithm described in \n Barron, M., Zhang, S. and Li, J. \"A sparse differential\n clustering algorithm for tracing cell type changes via single-cell\n RNA-sequencing data\" (Unpublished). This algorithm clusters samples from two different\n populations, links the clusters across the conditions and identifies \n marker genes for these changes. The package was designed for scRNA-Seq\n data but is also applicable to many other data types; just replace cells\n with samples and genes with variables. 
The package also contains functions\n for estimating the parameters for SparseDC as outlined in the paper.","Published":"2017-05-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"SparseFactorAnalysis","Version":"1.0","Title":"Scaling Count and Binary Data with Sparse Factor Analysis","Description":"Multidimensional scaling provides a means of uncovering a latent structure underlying observed data, while estimating the number of latent dimensions. This package presents a means for scaling binary and count data, for example the votes and word counts for legislators. Future work will include an EM implementation and extend this work to ordinal and continuous data.","Published":"2015-07-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sparseFLMM","Version":"0.1.1","Title":"Functional Linear Mixed Models for Irregularly or Sparsely\nSampled Data","Description":"Estimation of functional linear mixed models for irregularly or\n sparsely sampled data based on functional principal component analysis.","Published":"2017-06-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SparseGrid","Version":"0.8.2","Title":"Sparse grid integration in R","Description":"SparseGrid is a package to create sparse grids for numerical integration, based on code from www.sparse-grids.de","Published":"2013-07-31","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"sparseHessianFD","Version":"0.3.3","Title":"Numerical Estimation of Sparse Hessians","Description":"Estimates Hessian of a scalar-valued function, and returns it\n in a sparse Matrix format. The sparsity pattern must be known in advance. 
The\n algorithm is especially efficient for hierarchical models with a large number of\n heterogeneous units.","Published":"2017-04-19","License":"MPL (== 2.0)","snapshot_date":"2017-06-23"} {"Package":"sparseLDA","Version":"0.1-9","Title":"Sparse Discriminant Analysis","Description":"Performs sparse linear discriminant analysis for Gaussians and mixture of Gaussian models.","Published":"2016-09-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SparseLearner","Version":"1.0-2","Title":"Sparse Learning Algorithms Using a LASSO-Type Penalty for\nCoefficient Estimation and Model Prediction","Description":"Coefficient estimation and model prediction based on the LASSO sparse \n learning algorithm and its improved versions such as Bolasso, bootstrap ranking \n LASSO, two-stage hybrid LASSO and others. These LASSO estimation procedures are \n applied in the fields of variable selection, graphical modeling and ensemble \n learning. The bagging LASSO model uses a Monte Carlo cross-entropy algorithm to \n determine the best base-level models and improve predictive performance. 
","Published":"2015-11-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"sparseLTSEigen","Version":"0.2.0","Title":"RcppEigen back end for sparse least trimmed squares regression","Description":"Use RcppEigen to fit least trimmed squares\n regression models with an L1 penalty in order to obtain\n sparse models.","Published":"2013-12-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SparseM","Version":"1.77","Title":"Sparse Linear Algebra","Description":"Some basic linear algebra functionality for sparse matrices is\n provided, including Cholesky decomposition and backsolving as well as \n standard R subsetting and Kronecker products.","Published":"2017-04-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sparseMVN","Version":"0.2.1","Title":"Multivariate Normal Functions for Sparse Covariance and\nPrecision Matrices","Description":"Computes multivariate normal (MVN) densities, and\n samples from MVN distributions, when the covariance or\n precision matrix is sparse.","Published":"2017-05-24","License":"MPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"sparsenet","Version":"1.2","Title":"Fit sparse linear regression models via nonconvex optimization","Description":"Sparsenet uses the MC+ penalty of Zhang. It computes the regularization surface over both the family parameter and the tuning parameter by coordinate descent.","Published":"2014-03-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"sparsepp","Version":"0.1.1","Title":"'Rcpp' Interface to 'sparsepp'","Description":"Provides an interface to 'sparsepp', a fast, memory-efficient hash map. \n It is derived from Google's excellent 'sparsehash' implementation.\n We believe 'sparsepp' provides an unparalleled combination of performance and memory usage, \n and will outperform your compiler's unordered_map on both counts. 
\n Only Google's 'dense_hash_map' is consistently faster, at the cost of much greater \n memory usage (especially when the final size of the map is not known in advance).","Published":"2017-01-23","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"sparsereg","Version":"1.2","Title":"Sparse Bayesian Models for Regression, Subgroup Analysis, and\nPanel Data","Description":"Sparse modeling provides a means of selecting a small number of non-zero effects from a large possible number of candidate effects. This package includes a suite of methods for sparse modeling: estimation via EM or MCMC, approximate confidence intervals with nominal coverage, and diagnostic and summary plots. The method can implement sparse linear regression and sparse probit regression. Beyond regression analyses, applications include subgroup analysis, particularly for conjoint experiments, and panel data. Future versions will include extensions to models with truncated outcomes, propensity score, and instrumental variable analysis.","Published":"2016-03-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sparseSEM","Version":"2.5","Title":"Sparse-aware Maximum Likelihood for Structural Equation Models","Description":"Sparse-aware maximum likelihood for structural equation models in inferring gene regulatory networks.","Published":"2014-09-04","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"sparsestep","Version":"1.0.0","Title":"SparseStep Regression","Description":"Implements the SparseStep model for solving regression\n problems with a sparsity constraint on the parameters. The SparseStep \n regression model was proposed in Van den Burg, Groenen, and Alfons (2017) \n . In the model, a regularization term is \n added to the regression problem which approximates the counting norm of \n the parameters. By iteratively improving the approximation, a sparse \n solution to the regression problem can be obtained. 
This package implements both \n the standard SparseStep algorithm and a path \n algorithm that uses golden section search to determine solutions with \n different values for the regularization parameter.","Published":"2017-01-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sparsesvd","Version":"0.1-1","Title":"Sparse Truncated Singular Value Decomposition (from 'SVDLIBC')","Description":"Wrapper around the 'SVDLIBC' library for (truncated) singular value decomposition of a sparse matrix.\n Currently, only sparse real matrices in Matrix package format are supported.","Published":"2016-04-24","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"sparseSVM","Version":"1.1-2","Title":"Solution Paths of Sparse Linear Support Vector Machine with\nLasso or Elastic-Net Regularization","Description":"Fast algorithm for fitting solution paths of sparse linear SVM with lasso or elastic-net regularization to generate sparse solutions.","Published":"2016-03-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"SparseTSCGM","Version":"2.5","Title":"Sparse Time Series Chain Graphical Models","Description":"Computes sparse vector autoregressive coefficients and precision \n matrices for time series chain graphical models.","Published":"2016-11-07","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"sparsevar","Version":"0.0.10","Title":"A Package for Sparse VAR/VECM Estimation","Description":"A wrapper for sparse VAR/VECM time series model estimation\n using penalties like ENET, SCAD and MCP.","Published":"2016-11-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"spartan","Version":"2.3","Title":"Simulation Parameter Analysis R Toolkit ApplicatioN: Spartan","Description":"Computer simulations are becoming a popular technique to use in attempts to further our understanding of complex systems. 
SPARTAN, described in our 2013 publication in PLoS Computational Biology, provides code for four techniques described in available literature which aid the analysis of simulation results, at both single and multiple timepoints in the simulation run. The first technique addresses aleatory uncertainty in the system caused by inherent stochasticity, and determines the number of replicate runs necessary to generate a representative result. The second examines how robust a simulation is to parameter perturbation, through the use of a one-at-a-time parameter analysis technique. Thirdly, a Latin hypercube based sensitivity analysis technique is included which can elucidate non-linear effects between parameters and indicate implications of epistemic uncertainty with reference to the system being modelled. Finally, a further sensitivity analysis technique, the extended Fourier Amplitude Sensitivity Test (eFAST), has been included to partition the variance in simulation results between input parameters, to determine the parameters which have a significant effect on simulation behaviour. Version 1.3 adds support for Netlogo simulations, helping simulation developers who use Netlogo to build their simulations to perform the same analyses. We have also added user support through the group spartan-group[AT]york[DOT]ac[DOT]uk. 
Version 2.0 added the ability to read all simulations in from a single CSV file in addition to the prescribed folder structure in previous versions.","Published":"2015-10-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"spass","Version":"1.0","Title":"Study Planning and Adaptation of Sample Size","Description":"Sample size estimation and blinded sample size reestimation in Adaptive Study Design.","Published":"2016-10-01","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"spatcounts","Version":"1.1","Title":"Spatial count regression","Description":"Fit spatial CAR count regression models using MCMC","Published":"2009-06-29","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"spate","Version":"1.5","Title":"Spatio-Temporal Modeling of Large Data Using a Spectral SPDE\nApproach","Description":"Functionality for spatio-temporal modeling of large data sets is provided. A Gaussian process in space and time is defined through a stochastic partial differential equation (SPDE). The SPDE is solved in the spectral space, and after discretizing in time and space, a linear Gaussian state space model is obtained. When doing inference, the main computational difficulty consists in evaluating the likelihood and in sampling from the full conditional of the spectral coefficients, or equivalently, the latent space-time process. In comparison to the traditional approach of using a spatio-temporal covariance function, the spectral SPDE approach is computationally advantageous. This package aims at providing tools for two different modeling approaches. First, the SPDE based spatio-temporal model can be used as a component in a customized hierarchical Bayesian model (HBM). The functions of the package then provide parameterizations of the process part of the model as well as computationally efficient algorithms needed for doing inference with the HBM. 
Alternatively, the adaptive MCMC algorithm implemented in the package can be used as an algorithm for doing inference without any additional modeling. The MCMC algorithm supports data that follow a Gaussian or a censored distribution with point mass at zero. Covariates can be included in the model through a regression term.","Published":"2016-08-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SPAtest","Version":"1.1.1","Title":"Score Test Based on Saddlepoint Approximation","Description":"Performs score test using saddlepoint approximation to estimate the null distribution. ","Published":"2017-06-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"spatgraphs","Version":"3.0","Title":"Graph Edge Computations for Spatial Point Patterns","Description":"Graphs (or networks) and graph component\n calculations for spatial locations in *D.","Published":"2015-10-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"spatial","Version":"7.3-11","Title":"Functions for Kriging and Point Pattern Analysis","Description":"Functions for kriging and point pattern analysis.","Published":"2015-08-30","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"spatial.gev.bma","Version":"1.0","Title":"Hierarchical spatial generalized extreme value (GEV) modeling\nwith Bayesian Model Averaging (BMA)","Description":"This package fits a hierarchical spatial model for the generalized extreme value distribution with the option of model averaging over the space of covariates.","Published":"2014-05-21","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"spatial.tools","Version":"1.4.8","Title":"R functions for working with spatial data","Description":"Spatial functions meant to enhance the core functionality of the\n package \"raster\", including a parallel processing engine for use with\n rasters.","Published":"2014-07-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} 
{"Package":"SpatialAcc","Version":"0.1","Title":"Spatial Accessibility Measures","Description":"Provides a set of spatial accessibility measures from a set of locations (demand) to another set of locations (supply). It aims, among others, to support research on spatial accessibility to health care facilities.","Published":"2017-05-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"spatialClust","Version":"1.1.1","Title":"Spatial Clustering using Fuzzy Geographically Weighted\nClustering","Description":"Perform Spatial Clustering Analysis using Fuzzy Geographically Weighted Clustering. Provide optimization using Gravitational Search Algorithm.","Published":"2016-09-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"spatialCovariance","Version":"0.6-9","Title":"Computation of Spatial Covariance Matrices for Data on\nRectangles","Description":"Functions that compute the spatial covariance matrix for the matern and power classes of spatial models, for data that arise on rectangular units. This code can also be used for the change of support problem and for spatial data that arise on irregularly shaped regions like counties or zipcodes by laying a fine grid of rectangles and aggregating the integrals in a form of Riemann integration.","Published":"2015-07-08","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"spatialEco","Version":"0.1-7","Title":"Spatial Analysis and Modelling","Description":"Utilities to support spatial data manipulation, query, sampling\n and modelling. 
Functions include models for species population density, download\n utilities for climate and global deforestation spatial products, spatial\n smoothing, multivariate separability, point process model for creating pseudo-\n absences and sub-sampling, polygon and point-distance landscape metrics,\n auto-logistic model, sampling models, cluster optimization and statistical\n exploratory tools.","Published":"2017-04-11","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"SpatialEpi","Version":"1.2.2","Title":"Methods and Data for Spatial Epidemiology","Description":"\n Methods and data for cluster detection and disease mapping.","Published":"2016-01-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SpatialEpiApp","Version":"0.2","Title":"A Shiny Web Application for the Analysis of Spatial and\nSpatio-Temporal Disease Data","Description":"Runs a Shiny web application that allows users to visualize spatial and spatio-temporal disease data, estimate disease risk and detect clusters. The application allows fitting Bayesian disease models to obtain risk estimates and their uncertainty by using the 'R-INLA' package, , and to detect clusters by using the scan statistics implemented in 'SaTScan', . The application allows user interaction and creates interactive visualizations such as maps supporting panning and zooming and tables that allow for filtering. 
It also enables the generation of reports containing the analyses performed.","Published":"2017-06-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"SpatialExtremes","Version":"2.0-2","Title":"Modelling Spatial Extremes","Description":"Tools for the statistical modelling of spatial extremes using max-stable processes, copula or Bayesian hierarchical models.","Published":"2015-06-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"spatialfil","Version":"0.15","Title":"Application of 2D Convolution Kernel Filters to Matrices or 3D\nArrays","Description":"Filter matrices or (three dimensional) array data using different convolution kernels.","Published":"2015-09-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"spatialnbda","Version":"1.0","Title":"Performs spatial NBDA in a Bayesian context","Description":"Network based diffusion analysis (NBDA) allows inference on\n the asocial and social transmission of information. This may involve\n the social transmission of a particular behaviour such as tool use, for example.\n For the NBDA, the key parameters estimated are the social effect and baseline rate \n parameters. The baseline rate parameter gives the rate at which the behaviour\n is first performed (or acquired) asocially amongst the individuals in a given population.\n The social effect parameter quantifies the effect of the social associations amongst \n the individuals on the rate at which each individual first performs or displays\n the behaviour. Spatial NBDA involves incorporating spatial information in the analysis. \n This is done by incorporating social networks derived from \n spatial point patterns (of the home bases of the individuals under study). 
In addition, \n a spatial covariate such as vegetation cover or slope may be included in the modelling\n process.","Published":"2014-09-19","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"SpatialNP","Version":"1.1-1","Title":"Multivariate nonparametric methods based on spatial signs and\nranks","Description":"This package contains tests and estimates of location, \n tests of independence, tests of sphericity and several \n estimates of shape, all based on spatial signs, symmetrized \n signs, ranks and signed ranks.","Published":"2013-10-31","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SpatialPack","Version":"0.2-3","Title":"Package for analysis of spatial data","Description":"This package provides tools to assess the association between two spatial processes.\n Currently, three methodologies are implemented: an adapted t-test to perform hypothesis\n testing about the independence between the processes, a suitable nonparametric correlation\n coefficient, and the codispersion coefficient. SpatialPack gives methods to complement\n methodologies that are available in geoR for one spatial process.","Published":"2014-08-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SpatialPosition","Version":"1.1.1","Title":"Spatial Position Models","Description":"Computes spatial position models: Stewart potentials, Reilly\n catchment areas, Huff catchment areas.","Published":"2016-06-07","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"spatialprobit","Version":"0.9-11","Title":"Spatial Probit Models","Description":"Bayesian Estimation of Spatial Probit and Tobit Models.","Published":"2015-09-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"spatialsegregation","Version":"2.44","Title":"Segregation Measures for Multitype Spatial Point Patterns","Description":"Summaries for measuring segregation/mingling in multitype spatial\n point patterns with graph based neighbourhood description. 
Included indices:\n Mingling, Shannon, Simpson (also the non-spatial). Included functionals:\n Mingling, Shannon, Simpson, ISAR, MCI. Included neighbourhoods: Geometric, k-\n nearest neighbours, Gabriel, Delaunay. Dixon's test.","Published":"2017-04-18","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"spatialTailDep","Version":"1.0.2","Title":"Estimation of spatial tail dependence models","Description":"Provides functions implementing the pairwise M-estimator for\n parametric models for stable tail dependence functions described in \"An\n M-estimator of spatial tail dependence\"\n (Einmahl, J.H.J., Kiriliouk, A., Krajina, A. and Segers, J., 2014).\n See http://arxiv.org/abs/1403.1975.","Published":"2014-11-18","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"SpatialTools","Version":"1.0.2","Title":"Tools for Spatial Data Analysis","Description":"Tools for spatial data analysis. Emphasis on kriging. Provides functions for prediction and simulation. Intended to be relatively straightforward, fast, and flexible.","Published":"2015-12-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SpatialVx","Version":"0.6-1","Title":"Spatial Forecast Verification","Description":"Spatial forecast verification arose from verifying high-resolution forecasts, where coarser-resolution models generally are favored even when a human forecaster finds the higher-resolution model to be considerably better. Most newly proposed methods, which largely come from image analysis, computer vision, and similar, are available, with more on the way.","Published":"2017-03-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SpaTimeClus","Version":"1.0","Title":"Model-Based Clustering of Spatio-Temporal Data","Description":"A mixture model is used to achieve the clustering goal. 
Each component is itself a mixture model of polynomial autoregressive regressions whose logistic weights incorporate the spatial and temporal information.","Published":"2016-12-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SpatioTemporal","Version":"1.1.7","Title":"Spatio-Temporal Model Estimation","Description":"Utilities that estimate, predict and cross-validate the\n spatio-temporal model developed for MESA Air.","Published":"2013-08-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SpatMCA","Version":"1.0.0.2","Title":"Regularized Spatial Maximum Covariance Analysis","Description":"Provides regularized maximum covariance analysis incorporating smoothness,\n sparseness and orthogonality of coupled patterns by using the alternating direction method\n of multipliers algorithm. The method can be applied to either regularly or irregularly\n spaced data (Wang and Huang, 2017).","Published":"2017-03-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SpatPCA","Version":"1.1.1.2","Title":"Regularized Principal Component Analysis for Spatial Data","Description":"Provides regularized principal component analysis incorporating smoothness,\n sparseness and orthogonality of eigenfunctions by using the alternating direction method of\n multipliers algorithm. The method can be applied to either regularly or irregularly spaced\n data (Wang and Huang, 2017).","Published":"2017-03-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SpATS","Version":"1.0-5","Title":"Spatial Analysis of Field Trials with Splines","Description":"Analysis of field trial experiments by modelling spatial trends using two-dimensional Penalised spline (P-spline) models.","Published":"2017-01-21","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"spatstat","Version":"1.51-0","Title":"Spatial Point Pattern Analysis, Model-Fitting, Simulation, Tests","Description":"Comprehensive open-source toolbox for analysing Spatial Point Patterns. 
Focused mainly on two-dimensional point patterns, including multitype/marked points, in any spatial region. Also supports three-dimensional point patterns, space-time point patterns in any number of dimensions, point patterns on a linear network, and patterns of other geometrical objects. Supports spatial covariate data such as pixel images. \n\tContains over 2000 functions for plotting spatial data, exploratory data analysis, model-fitting, simulation, spatial sampling, model diagnostics, and formal inference. \n\tData types include point patterns, line segment patterns, spatial windows, pixel images, tessellations, and linear networks. \n\tExploratory methods include quadrat counts, K-functions and their simulation envelopes, nearest neighbour distance and empty space statistics, Fry plots, pair correlation function, kernel smoothed intensity, relative risk estimation with cross-validated bandwidth selection, mark correlation functions, segregation indices, mark dependence diagnostics, and kernel estimates of covariate effects. Formal hypothesis tests of random pattern (chi-squared, Kolmogorov-Smirnov, Monte Carlo, Diggle-Cressie-Loosmore-Ford, Dao-Genton, two-stage Monte Carlo) and tests for covariate effects (Cox-Berman-Waller-Lawson, Kolmogorov-Smirnov, ANOVA) are also supported.\n\tParametric models can be fitted to point pattern data using the functions ppm(), kppm(), slrm(), dppm() similar to glm(). Types of models include Poisson, Gibbs and Cox point processes, Neyman-Scott cluster processes, and determinantal point processes. Models may involve dependence on covariates, inter-point interaction, cluster formation and dependence on marks. Models are fitted by maximum likelihood, logistic regression, minimum contrast, and composite likelihood methods. \n\tA model can be fitted to a list of point patterns (replicated point pattern data) using the function mppm(). 
The model can include random effects and fixed effects depending on the experimental design, in addition to all the features listed above.\n\tFitted point process models can be simulated, automatically. Formal hypothesis tests of a fitted model are supported (likelihood ratio test, analysis of deviance, Monte Carlo tests) along with basic tools for model selection (stepwise(), AIC()). Tools for validating the fitted model include simulation envelopes, residuals, residual plots and Q-Q plots, leverage and influence diagnostics, partial residuals, and added variable plots.","Published":"2017-05-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"spatstat.local","Version":"3.5-6","Title":"Extension to 'spatstat' for Local Composite Likelihood","Description":"Extension to the 'spatstat' package, enabling the user\n\t to fit point process models to point pattern data\n\t by local composite likelihood ('geographically weighted\n\t regression').","Published":"2017-03-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"spatstat.utils","Version":"1.6-0","Title":"Utility Functions for 'spatstat'","Description":"Contains utility functions for the 'spatstat' package\n which may also be useful for other purposes.","Published":"2017-05-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"spatsurv","Version":"1.1","Title":"Bayesian Spatial Survival Analysis with Parametric Proportional\nHazards Models","Description":"Bayesian inference for parametric proportional hazards spatial\n survival models; flexible spatial survival models.","Published":"2017-03-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"spbabel","Version":"0.4.7","Title":"Convert Spatial Data Using Tidy Tables","Description":"Tools to convert from specific formats to more general forms of \n spatial data. 
Using tables to store the actual entities present in spatial\n data provides flexibility, and the functions here deliberately \n minimize the level of interpretation applied, leaving that for specific \n applications. Includes support for simple features, round-trip for 'Spatial' classes and long-form \n tables, analogous to 'ggplot2::fortify'. There is also a more 'normal form' representation\n that decomposes simple features and their kin to tables of objects, parts, and unique coordinates. ","Published":"2017-05-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"spBayes","Version":"0.4-0","Title":"Univariate and Multivariate Spatial-Temporal Modeling","Description":"Fits univariate and multivariate spatio-temporal\n random effects models for point-referenced data using Markov chain Monte Carlo (MCMC). Details are given in Finley, Banerjee, and Gelfand (2015; ) and Finley, Banerjee, and Cook (2014; ).","Published":"2017-04-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"spBayesSurv","Version":"1.1.1","Title":"Bayesian Modeling and Analysis of Spatially Correlated Survival\nData","Description":"Provides several Bayesian survival models for spatial/non-spatial survival data: marginal Bayesian Nonparametric models, marginal Bayesian proportional hazards models, generalized accelerated failure time frailty models, and standard semiparametric frailty models within the context of proportional hazards, proportional odds and accelerated failure time.","Published":"2017-05-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"spc","Version":"0.5.3","Title":"Statistical Process Control -- Collection of Some Useful\nFunctions","Description":"Evaluation of control charts by means of\n the zero-state, steady-state ARL (Average Run Length) and RL quantiles.\n Setting up control charts for given in-control ARL. 
The control charts\n under consideration are one- and two-sided EWMA, CUSUM, and\n Shiryaev-Roberts schemes for monitoring the mean of normally\n distributed independent data. ARL calculation\n of the same set of schemes under drift is also provided. \n Other charts and parameters are in preparation.\n Further SPC areas will be covered as well\n (sampling plans, capability indices ...).","Published":"2016-02-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"spcadjust","Version":"1.1","Title":"Functions for Calibrating Control Charts","Description":"Calibration of thresholds of control charts such as\n CUSUM charts based on past data, taking estimation error into account.","Published":"2016-09-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SPCALDA","Version":"1.0","Title":"A New Reduced-Rank Linear Discriminant Analysis Method","Description":"A new reduced-rank LDA method which works for high dimensional multi-class data. ","Published":"2015-11-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"spcosa","Version":"0.3-6","Title":"Spatial Coverage Sampling and Random Sampling from Compact\nGeographical Strata","Description":"Spatial coverage sampling and random sampling from compact\n geographical strata created by k-means.","Published":"2015-12-22","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"spcov","Version":"1.01","Title":"Sparse Estimation of a Covariance Matrix","Description":"Provides a covariance estimator for multivariate normal\n data that is sparse and positive definite. Implements the\n majorize-minimize algorithm described in Bien, J., and\n Tibshirani, R. (2011), \"Sparse Estimation of a Covariance\n Matrix,\" Biometrika, 98(4), 807--820.","Published":"2012-09-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"spcr","Version":"2.0","Title":"Sparse Principal Component Regression","Description":"The sparse principal component regression is computed. 
The regularization parameters are optimized by cross-validation.","Published":"2016-10-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"spd","Version":"2.0-1","Title":"Semi Parametric Distribution","Description":"The Semi Parametric Piecewise Distribution blends the Generalized Pareto Distribution for the tails with a kernel based interior.","Published":"2015-07-03","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"spdep","Version":"0.6-13","Title":"Spatial Dependence: Weighting Schemes, Statistics and Models","Description":"A collection of functions to create spatial weights matrix\n objects from polygon contiguities, from point patterns by distance and\n tessellations, for summarizing these objects, and for permitting their\n use in spatial data analysis, including regional aggregation by minimum\n spanning tree; a collection of tests for spatial autocorrelation,\n including global Moran's I, APLE, Geary's C, Hubert/Mantel general cross\n product statistic, Empirical Bayes estimates and Assunção/Reis Index,\n Getis/Ord G and multicoloured join count statistics, local Moran's I\n and Getis/Ord G, saddlepoint approximations and exact tests for global\n and local Moran's I; and functions for estimating spatial simultaneous\n autoregressive (SAR) lag and error models, impact measures for lag\n models, weighted and unweighted SAR and CAR spatial regression models,\n semi-parametric and Moran eigenvector spatial filtering, GM SAR error\n models, and generalized spatial two stage least squares models.","Published":"2017-04-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"spdownscale","Version":"0.1.0","Title":"Spatial Downscaling Using Bias Correction Approach","Description":"Spatial downscaling of climate data (Global Circulation Models/Regional Climate Models) using quantile-quantile bias correction technique.","Published":"2017-02-16","License":"GPL-2","snapshot_date":"2017-06-23"} 
{"Package":"spdplyr","Version":"0.1.3","Title":"Data Manipulation Verbs for the Spatial Classes","Description":"Methods for 'dplyr' verbs for 'sp' 'Spatial' classes. The basic \n verbs that modify data attributes, remove or re-arrange rows are supported\n and provide complete 'Spatial' analogues of the input data. The group-by\n and summarize workflow returns a non-topological spatial union. There is \n limited support for joins, with left and inner joins to copy attributes from \n another table. ","Published":"2017-06-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"spduration","Version":"0.16.0","Title":"Split-Population Duration (Cure) Regression","Description":"An implementation of split-population duration regression models. \n Unlike regular duration models, split-population duration models are\n mixture models that accommodate the presence of a sub-population that is \n not at risk for failure, e.g. cancer patients who have been cured by \n treatment. This package implements Weibull and Loglogistic forms for the \n duration component, and focuses on data with time-varying covariates. \n These models were originally formulated in Boag (1949)\n and Berkson and Gage (1952),\n and extended in Schmidt and Witte\n (1989).","Published":"2017-04-11","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"spdynmod","Version":"1.1.3","Title":"Spatio-Dynamic Wetland Plant Communities Model","Description":"A spatio-dynamic modelling package that focuses on three\n characteristic wetland plant communities in a semiarid Mediterranean\n wetland in response to hydrological pressures from the catchment. The\n package includes the data on watershed hydrological pressure and the\n initial raster maps of plant communities but also allows for random initial\n distribution of plant communities. 
Ongoing developments of the package\n focus on offering easy-to-use tools for creating other spatio-dynamic\n models.","Published":"2015-12-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"spe","Version":"1.1.2","Title":"Stochastic Proximity Embedding","Description":"Implements stochastic proximity embedding as described by\n Agrafiotis et al. in PNAS, 2002, 99, pg 15869 and J. Comput. Chem., 2003, 24, pg 1215.","Published":"2009-02-24","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"speaq","Version":"2.0.0","Title":"Tools for Nuclear Magnetic Resonance (NMR) Spectra Alignment,\nPeak Based Processing, Quantitative Analysis and Visualizations","Description":"The speaq package is meant to make Nuclear Magnetic Resonance spectroscopy (NMR spectroscopy) data analysis as easy as possible by only requiring a small set of functions to perform an entire analysis. speaq offers the possibility of raw spectra alignment and quantitation but also an analysis based on features whereby the spectra are converted to peaks which are then grouped and turned into features. These features can be processed with any number of statistical tools either included in speaq or available elsewhere on CRAN. ","Published":"2017-05-16","License":"Apache License 2.0","snapshot_date":"2017-06-23"} {"Package":"spec","Version":"0.1.3","Title":"A Data Specification Format and Interface","Description":"Creates a data specification that describes the columns of a \n table (data.frame). Provides methods to read, write, and update the \n specification. Checks whether a table matches its specification. See\n specification.data.frame(),read.spec(), write.spec(), as.csv.spec(),\n respecify.character(), and %matches%.data.frame().","Published":"2017-04-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"speccalt","Version":"0.1.1","Title":"Alternative spectral clustering, with automatic estimation of k","Description":"Alternative to the kernlab::specc function. 
Includes a spectral clustering implementation, a locally adapted kernel function akin to what is already proposed in kernlab, and an optional procedure that automatically estimates the optimal number of clusters. Several sample data sets are also included.","Published":"2013-09-16","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"SpecHelpers","Version":"0.2.2","Title":"Spectroscopy Related Utilities","Description":"Utility functions for spectroscopy. 1. Functions to simulate\n spectra for use in teaching or testing. 2. Functions to process files created by\n 'LoggerPro' and 'SpectraSuite' software.","Published":"2016-01-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"SPECIES","Version":"1.0","Title":"Statistical package for species richness estimation","Description":"SPECIES is an R package for estimation of species richness\n or diversity.","Published":"2011-04-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"speciesgeocodeR","Version":"1.0-4","Title":"Prepare Species Distributions for the Use in Phylogenetic\nAnalyses","Description":"Preparation of species occurrences and distribution data for the use in phylogenetic analyses. SpeciesgeocodeR is built for data cleaning, data exploration and data analysis and especially suited for biogeographical and ecological questions on large datasets. 
The package includes the easy creation of summary tables, graphs, and geographical maps, the automatic cleaning of geographic occurrence data, and the calculation of coexistence matrices and species ranges (EOO), as well as mapping diversity in geographic areas.","Published":"2015-10-18","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"SpeciesMix","Version":"0.3.4","Title":"Fit Mixtures of Archetype Species","Description":"Fitting Mixtures to Species distributions using BFGS and analytical derivatives.","Published":"2016-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"specificity","Version":"0.1.1","Title":"Specificity of personality trait-outcome (or trait-trait)\nassociations","Description":"The package helps to test the specificity of personality trait-outcome (or trait-trait) associations by comparing the observed associations to those obtained using randomly created personality scales. ","Published":"2013-08-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"specmine","Version":"1.0","Title":"Metabolomics and Spectral Data Analysis and Mining","Description":"Provides a set of methods for metabolomics \n\tdata analysis, including data loading in different formats, \n\tpre-processing, metabolite identification, univariate and multivariate \n\tdata analysis, machine learning and feature selection. Case studies \n\tcan be found on the website: http://darwin.di.uminho.pt/metabolomics .","Published":"2015-11-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SpecsVerification","Version":"0.5-2","Title":"Forecast Verification Routines for Ensemble Forecasts of Weather\nand Climate","Description":"A collection of forecast verification routines developed for the SPECS\n FP7 project. 
The emphasis is on comparative verification of ensemble forecasts of weather and climate.","Published":"2017-03-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"spectral","Version":"1.0.1","Title":"Common Methods of Spectral Data Analysis","Description":"Fourier and Hilbert transforms are utilized to perform several types\n of spectral analysis on the supplied data. Also fragmented and\n irregularly spaced data can be processed. A user friendly interface\n helps to interpret the results.","Published":"2016-09-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"spectral.methods","Version":"0.7.2.133","Title":"Singular Spectrum Analysis (SSA) Tools for Time Series Analysis","Description":"Contains some implementations of Singular Spectrum Analysis (SSA) for the gapfilling and spectral decomposition of time series. It contains the code used by Buttlar et. al. (2014), Nonlinear Processes in Geophysics. In addition, the iterative SSA gapfilling method of Kondrashov and Ghil (2006) is implemented. All SSA calculations are done via the truncated and fast SSA algorithm of Korobeynikov (2010) (package 'Rssa'). ","Published":"2015-06-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"spectralGP","Version":"1.3.3","Title":"Approximate Gaussian Processes Using the Fourier Basis","Description":"Routines for creating, manipulating, and performing \n Bayesian inference about Gaussian processes in \n one and two dimensions using the Fourier basis approximation: \n simulation and plotting of processes, calculation of \n coefficient variances, calculation of process density, \n coefficient proposals (for use in MCMC). 
It uses R environments to\n store GP objects as references/pointers.","Published":"2015-07-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SpectralMap","Version":"1.0","Title":"Diffusion Map and Spectral Map","Description":"Implements the diffusion map method of dimensionality reduction and spectral method of combining multiple diffusion maps, including creation of the spectra and visualization of maps.","Published":"2016-07-07","License":"GNU General Public License version 2","snapshot_date":"2017-06-23"} {"Package":"spectrino","Version":"1.6.0","Title":"Spectra Visualization, Organizer and Data Preparation","Description":"Spectra visualization, organizer and data preparation\n from within R or stand-alone. Binary (application) part is \n installed separately by running spnInstallApp() \n immediately after installing the package.","Published":"2015-07-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SPEDInstabR","Version":"1.4","Title":"Estimation of the Relative Importance of Factors Affecting\nSpecies Distribution Based on Stability Concept","Description":"From output files obtained from the software 'ModestR', the relative contribution of factors to explain species distribution is depicted using several plots. A global geographic raster file for each environmental variable may be also obtained with the mean relative contribution, considering all species present in each raster cell, of the factor to explain species distribution. 
Finally, for each variable it is also possible to compare its frequencies in the cells where the species is present with its frequencies in the cells of the full extent.","Published":"2017-06-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"speedglm","Version":"0.3-2","Title":"Fitting Linear and Generalized Linear Models to Large Data Sets","Description":"Fitting linear models and generalized linear models to large data sets by updating algorithms.","Published":"2017-01-09","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"spef","Version":"1.0-3","Title":"Semiparametric Estimating Functions","Description":"Functions for fitting semiparametric regression models for\n panel count survival data. See Wang and Yan (2011)\n\t for more details.","Published":"2017-04-25","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"speff2trial","Version":"1.0.4","Title":"Semiparametric efficient estimation for a two-sample treatment\neffect","Description":"The package performs estimation and testing of the\n treatment effect in a 2-group randomized clinical trial with a\n quantitative, dichotomous, or right-censored time-to-event\n endpoint. The method improves efficiency by leveraging baseline\n predictors of the endpoint. 
The inverse probability weighting\n technique of Robins, Rotnitzky, and Zhao (JASA, 1994) is used\n to provide unbiased estimation when the endpoint is missing at\n random.","Published":"2012-10-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SPEI","Version":"1.7","Title":"Calculation of the Standardised Precipitation-Evapotranspiration\nIndex","Description":"A set of functions for computing potential evapotranspiration and several widely used drought indices including the Standardized Precipitation-Evapotranspiration Index (SPEI).","Published":"2017-06-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"spellcheckr","Version":"0.1.2","Title":"Correct the Spelling of a Given Word in the English Language","Description":"Corrects the spelling of a given word in English \n using a modification of Peter Norvig's spell correct \n algorithm (see ) \n which handles up to three edits. The algorithm tries to \n find the spelling with maximum probability of intended\n correction out of all possible candidate corrections from\n the original word.","Published":"2016-10-28","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"sperich","Version":"1.5-7","Title":"Auxiliary Functions to Estimate Centers of Biodiversity","Description":"Provides some easy-to-use functions to interpolate species range based on species occurrences and to estimate centers of biodiversity.","Published":"2015-07-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sperrorest","Version":"2.0.0","Title":"Perform Spatial Error Estimation and Variable Importance in\nParallel","Description":"Implements spatial error estimation and \n permutation-based variable importance measures for predictive models using \n spatial cross-validation and spatial block bootstrap.","Published":"2017-06-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"spex","Version":"0.3.0","Title":"Spatial Extent as Polygons with Projection","Description":"Functions to 
produce a fully fledged 'Spatial' object extent as a\n 'SpatialPolygonsDataFrame'. Also included are functions to generate polygons\n from raster using 'quadmesh' techniques, and a round number buffered extent for\n generating data structures. ","Published":"2017-05-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"spfrontier","Version":"0.2.3","Title":"Spatial Stochastic Frontier Models","Description":"A set of tools for estimation of various spatial specifications of\n stochastic frontier models.","Published":"2016-05-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"spftir","Version":"0.1.0","Title":"Pre-Processing and Analysis of Mid-Infrared Spectral Region","Description":"Functions to manipulate, pre-process and analyze spectra in the mid-infrared region. The pre-processing of mid-infrared spectra is a crucial step in spectral analysis. Preprocessing, which includes smoothing, offset, baseline correction, and normalization, is performed before the analysis of the spectra and is essential to obtaining conclusive results in subsequent quantitative or qualitative analysis. This package was supported by FONDECYT 3150630, and CIPA Conicyt-Regional R08C1002 is gratefully acknowledged.","Published":"2016-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"spgrass6","Version":"0.8-9","Title":"Interface Between GRASS 6+ Geographical Information System and R","Description":"Interpreted interface between GRASS 6+ geographical \n information system and R, based on starting R from within the GRASS \n environment, or running free-standing R in a temporary GRASS location;\n the package provides facilities for using all GRASS commands from the \n R command line. 
This package cannot be used with GRASS 7, for which\n rgrass7 should be used.","Published":"2016-02-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"spgs","Version":"1.0","Title":"Statistical Patterns in Genomic Sequences","Description":"A collection of statistical hypothesis tests and other \n\ttechniques for identifying certain spatial relationships/phenomena in \n\tDNA sequences. In particular, it provides tests and graphical methods for determining \n\twhether or not DNA sequences comply with Chargaff's second parity rule \n\tor exhibit purine-pyrimidine parity. In addition, there are functions for \n\tefficiently simulating discrete state space Markov chains and testing \n\tarbitrary symbolic sequences for the presence of first-order \n\tMarkovianness.\n\tAlso, it has functions for counting words/k-mers (and cylinder patterns) in \n\tarbitrary symbolic sequences. Functions which take a DNA sequence as input \n\tcan handle sequences stored as SeqFastadna objects from the 'seqinr' package.","Published":"2015-03-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"spgwr","Version":"0.6-31","Title":"Geographically Weighted Regression","Description":"Functions for computing geographically weighted\n regressions are provided, based on work by Chris\n Brunsdon, Martin Charlton and Stewart Fotheringham. ","Published":"2017-04-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sphereplot","Version":"1.5","Title":"Spherical plotting","Description":"Various functions for creating spherical coordinate system plots via extensions to rgl.","Published":"2013-10-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SphericalCubature","Version":"1.3","Title":"Numerical Integration over Spheres and Balls in n-Dimensions;\nMultivariate Polar Coordinates","Description":"Provides several methods to integrate functions over the unit\n sphere and ball in n-dimensional Euclidean space. 
Routines for converting to/from\n multivariate polar/spherical coordinates are also provided.","Published":"2016-10-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SphericalK","Version":"1.2","Title":"Spherical K-Function","Description":"Spherical K-function for point-pattern analysis on the sphere.","Published":"2015-10-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sphet","Version":"1.6","Title":"Estimation of spatial autoregressive models with and without\nheteroskedastic innovations","Description":"Generalized Method of Moments estimation of Cliff-Ord-type\n spatial autoregressive models with and without heteroskedastic\n innovations.","Published":"2015-01-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"spi","Version":"1.1","Title":"Compute SPI index","Description":"Compute the SPI index using R","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SPIAssay","Version":"1.1.0","Title":"A Genetic-Based Assay for the Identification of Cell Lines","Description":"The SNP Panel Identification Assay (SPIA) is a package that enables an accurate determination of cell line identity from the genotype of single nucleotide polymorphisms (SNPs). The SPIA test allows one to discern when two cell lines are close enough to be called similar and when they are not. Details about the method are available at \"Demichelis et al. (2008) SNP panel identification assay (SPIA): a genetic-based assay for the identification of cell lines. 
Nucleic Acids Res., 3, 2446-2456\".","Published":"2016-08-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"spiders","Version":"1.2","Title":"Fits Predator Preferences Model","Description":"Fits and simulates data from our predator preferences model.","Published":"2016-03-02","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"SPIGA","Version":"1.0.0","Title":"Compute SPI Index using the Methods Genetic Algorithm and\nMaximum Likelihood","Description":"Calculate the Standardized Precipitation Index (SPI) for monitoring drought, using Artificial Intelligence techniques (SPIGA) and the traditional numerical technique of Maximum Likelihood (SPIML). For more information see: http://drought.unl.edu/monitoringtools/downloadablespiprogram.aspx.","Published":"2016-06-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"spikes","Version":"1.1","Title":"Detecting Election Fraud from Irregularities in Vote-Share\nDistributions","Description":"Applies a re-sampled kernel density method to detect vote fraud. It estimates the proportion of coarse vote-shares in the observed data relative to the null hypothesis of no fraud.","Published":"2016-09-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"spikeslab","Version":"1.1.5","Title":"Prediction and variable selection using spike and slab\nregression","Description":"Spike and slab for prediction and variable selection in\n linear regression models. 
Uses a generalized elastic net for\n variable selection.","Published":"2013-04-18","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"spikeSlabGAM","Version":"1.1-11","Title":"Bayesian Variable Selection and Model Choice for Generalized\nAdditive Mixed Models","Description":"Bayesian variable selection, model choice, and regularized\n estimation for (spatial) generalized additive mixed regression models\n via stochastic search variable selection with spike-and-slab priors.","Published":"2016-02-29","License":"CC BY-NC-SA 4.0","snapshot_date":"2017-06-23"} {"Package":"SPIn","Version":"1.1","Title":"Simulation-efficient Shortest Probability Intervals","Description":"An optimal weighting strategy to compute\n simulation-efficient shortest probability intervals (spins).","Published":"2013-04-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"spind","Version":"2.0.1","Title":"Spatial Methods and Indices","Description":"Functions for spatial methods based on generalized estimating equations (GEE) and\n wavelet-revised methods (WRM), functions for scaling by wavelet multiresolution regression (WMRR),\n conducting multi-model inference, and stepwise model selection. Further, contains functions \n for spatially corrected model accuracy measures.","Published":"2017-04-07","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"spinyReg","Version":"0.1-0","Title":"Sparse Generative Model and Its EM Algorithm","Description":"Implements a generative model that uses a\n spike-and-slab like prior distribution obtained by multiplying a\n deterministic binary vector. 
Such a model allows an EM algorithm,\n optimizing a type-II log-likelihood.","Published":"2015-09-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"splancs","Version":"2.01-40","Title":"Spatial and Space-Time Point Pattern Analysis","Description":"The Splancs package was written as an enhancement to S-Plus for display and analysis of spatial point pattern data; it has been ported to R and is in \"maintenance mode\". ","Published":"2017-04-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"splines2","Version":"0.2.5","Title":"Regression Spline Functions and Classes Too","Description":"A complementary package on splines providing functions\n constructing B-splines, integral of B-splines, monotone splines\n (M-splines) and their integrals (I-splines), convex splines (C-splines),\n and their derivatives of given order. Piecewise constant basis is\n allowed for B-spline and M-spline bases.","Published":"2017-02-10","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"splitfngr","Version":"0.1.1","Title":"Combined Evaluation and Split Access of Functions","Description":"\n Some R functions, such as optim(), require a function and\n its gradient passed as separate arguments. When these are\n expensive to calculate it may be much faster to calculate\n the function (fn) and gradient (gr) together since they often share\n many calculations (chain rule). This package allows the user\n to pass in a single function that returns both the function\n and gradient, then splits (hence 'splitfngr') them\n so the results can be accessed\n separately. The functions provided allow this to be done with\n any number of functions/values, not just for functions and gradients.","Published":"2016-09-16","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"splithalf","Version":"0.2.0","Title":"Calculate Task Split Half Reliability Estimates","Description":"A series of functions to calculate the split \n half reliability of RT based tasks. 
The core function performs a Monte Carlo\n procedure to process a user-defined number of random splits in order to \n provide a better reliability estimate. The current functions target the dot-probe\n task but can be modified for other tasks.","Published":"2017-04-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"splitstackshape","Version":"1.4.2","Title":"Stack and Reshape Datasets After Splitting Concatenated Values","Description":"Online data collection tools like Google Forms often export\n multiple-response questions with data concatenated in cells. The\n concat.split (cSplit) family of functions splits such data into \n separate cells. The package also includes functions to stack groups \n of columns and to reshape wide data, even when the data are \n \"unbalanced\"---something which reshape (from base R) does not handle, \n and which melt and dcast from reshape2 do not easily handle.","Published":"2014-10-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"splm","Version":"1.4-6","Title":"Econometric Models for Spatial Panel Data","Description":"ML and GM estimation and diagnostic testing of econometric models for spatial panel data.","Published":"2016-11-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"spls","Version":"2.2-1","Title":"Sparse Partial Least Squares (SPLS) Regression and\nClassification","Description":"This package provides functions for fitting Sparse\n Partial Least Squares regression and classification models.","Published":"2013-08-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"splus2R","Version":"1.2-2","Title":"Supplemental S-PLUS Functionality in R","Description":"Currently there are many functions in S-PLUS that are\n missing in R. 
To facilitate the conversion of S-PLUS packages\n to R packages, this package provides some missing S-PLUS\n functionality in R.","Published":"2016-09-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"splusTimeDate","Version":"2.5.0-137","Title":"Times and Dates from S-PLUS","Description":"A collection of classes and methods for working with\n times and dates. The code was originally available in S-PLUS.","Published":"2016-05-10","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"splusTimeSeries","Version":"1.5.0-74","Title":"Time Series from S-PLUS","Description":"A collection of classes and methods for working with time series.\n The code was originally available in S-PLUS.","Published":"2016-05-03","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"spm12r","Version":"2.3.1","Title":"Wrapper Functions for 'SPM' (Statistical Parametric Mapping)\nVersion 12 from the 'Wellcome' Trust Centre for 'Neuroimaging'","Description":"Installs 'SPM12' to the R library directory and has associated\n functions for 'fMRI' and general imaging utilities, called through 'MATLAB'.","Published":"2017-03-07","License":"GPL-2 | file LICENCE","snapshot_date":"2017-06-23"} {"Package":"spMC","Version":"0.3.8","Title":"Continuous-Lag Spatial Markov Chains","Description":"A set of functions is provided for 1) the stratum lengths analysis along a chosen direction, 2) fast estimation of continuous lag spatial Markov chains model parameters and probability computing (also for large data sets), 3) transition probability maps and transiograms drawing, 4) simulation methods for categorical random fields.","Published":"2016-10-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SPmlficmcm","Version":"1.4","Title":"Semiparametric Maximum Likelihood Method for Interactions\nGene-Environment in Case-Mother Control-Mother Designs","Description":"Implements the method of general semiparametric maximum 
likelihood estimation for logistic models in case-mother control-mother designs.","Published":"2015-12-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"spmoran","Version":"0.1.1","Title":"Moran's Eigenvector-Based Spatial Regression Models","Description":"Functions for estimating fixed and random effects\n eigenvector spatial filtering models.","Published":"2017-06-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"spnet","Version":"0.9.1-0","Title":"Plotting (Social) Networks on Maps","Description":"Facilitates the rendering of networks for which nodes have a specific position on a map (cities, participants in a political debate, etc.). Map data and network data are stored together in a single object which handles the match between network nodes and their respective position on the map. The plot method renders both the map and the network data. Several networks can be plotted simultaneously. The graphic is highly customisable and the legend is automatically printed. Map data have to be supplied as 'SpatialPolygons' objects (from the 'sp' package) and network data as a 'named matrix'.","Published":"2016-02-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"spocc","Version":"0.7.0","Title":"Interface to Species Occurrence Data Sources","Description":"A programmatic interface to many species occurrence data sources,\n including Global Biodiversity Information Facility ('GBIF'), 'USGSs'\n Biodiversity Information Serving Our Nation ('BISON'), 'iNaturalist',\n Berkeley 'Ecoinformatics' Engine, 'eBird', 'AntWeb', Integrated Digitized\n 'Biocollections' ('iDigBio'), 'VertNet', Ocean 'Biogeographic' Information\n System ('OBIS'), and Atlas of Living Australia ('ALA'). 
Includes\n functionality for retrieving species occurrence data, and combining\n those data.","Published":"2017-04-21","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"SPODT","Version":"0.9-1","Title":"Spatial Oblique Decision Tree","Description":"SPODT is a spatial partitioning method based on oblique decision trees, designed to classify a study area into zones of different risk and to determine their boundaries.","Published":"2015-01-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"spongecake","Version":"0.1.1","Title":"Transform a Movie into a Synthetic Picture","Description":"Transform a Movie into a Synthetic Picture. A frame every 10 seconds is summarized into one colour, then all generated colours are stacked together.","Published":"2016-11-25","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"sporm","Version":"1.1","Title":"Semiparametric proportional odds rate model","Description":"R implementation of the methods described in \"A rank-based\n empirical likelihood approach to two-sample proportional odds\n model and its goodness-of-fit\" by Zhong Guan and Cheng Peng,\n Journal of Nonparametric Statistics, to appear.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SportsAnalytics","Version":"0.2","Title":"Infrastructure for Sports Analytics","Description":"The aim of this package is to provide infrastructure for\n sports analysis. Currently, it is a selection of data\n sets, functions to fetch sports data, examples, and demos --\n with the ambition to develop bit by bit a set of classes to\n represent general concepts of sports analysis.","Published":"2013-04-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SPOT","Version":"2.0.1","Title":"Sequential Parameter Optimization Toolbox","Description":"A set of tools for model based optimization and tuning of\n algorithms. 
It includes surrogate models, optimizers and design of experiment\n approaches. The main interface is spot, which uses sequentially updated\n surrogate models for the purpose of efficient optimization. The main goal is\n to ease the burden of objective function evaluations, when a single evaluation\n requires a significant amount of resources.","Published":"2017-03-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SPPcomb","Version":"0.1","Title":"Combining Different Spatial Datasets in Cancer Risk Estimation","Description":"We propose a novel two-step procedure to combine epidemiological\n data obtained from diverse sources with the aim of quantifying risk factors\n affecting the probability that an individual develops a certain disease such as\n cancer. See Hui Huang, Xiaomei Ma, Rasmus Waagepetersen, Theodore R. Holford,\n Rong Wang, Harvey Risch, Lloyd Mueller & Yongtao Guan (2014) A New Estimation Approach\n for Combining Epidemiological Data From Multiple Sources, Journal of the American Statistical\n Association, 109:505, 11-23, .","Published":"2016-12-20","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"sppmix","Version":"1.0.1","Title":"Modeling Spatial Poisson and Related Point Processes","Description":"Implements classes and methods for modeling spatial point patterns using inhomogeneous Poisson point processes, where the intensity surface is assumed to be analogous to a finite additive mixture of normal components and the number of components is a finite, fixed or random integer. Extensions to the marked inhomogeneous Poisson point processes case are also presented. 
We provide an extensive suite of R functions that can be used to simulate, visualize and model point patterns, estimate the parameters of the models, assess convergence of the algorithms and perform model selection and checking in the proposed modeling context.","Published":"2017-03-31","License":"CC BY-NC-SA 4.0","snapshot_date":"2017-06-23"} {"Package":"spray","Version":"1.0-3","Title":"Sparse Arrays and Multivariate Polynomials","Description":"Sparse arrays interpreted as multivariate polynomials.","Published":"2017-05-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SPREDA","Version":"1.0","Title":"Statistical Package for Reliability Data Analysis","Description":"The Statistical Package for REliability Data Analysis (SPREDA) implements recently-developed statistical methods for the analysis of reliability data. Modern technological developments, such as sensors and smart chips, allow us to dynamically track product/system usage as well as other environmental variables, such as temperature and humidity. We refer to these variables as dynamic covariates. The package contains functions for the analysis of time-to-event data with dynamic covariates and degradation data with dynamic covariates. The package also contains functions that can be used for analyzing time-to-event data with right censoring, and with left truncation and right censoring. Financial support from NSF and DuPont is acknowledged. ","Published":"2014-09-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SPreFuGED","Version":"1.0","Title":"Selecting a Predictive Function for a Given Gene Expression Data","Description":"The recent advancement of high-throughput technologies has led to frequent utilization of gene expression and other \"omics\" data for toxicological, diagnostic or prognostic studies and clinical applications. 
Unlike in classical predictions where the number of samples is greater than the number of variables (n>p), the challenge faced with prediction using \"omics\" data is that the number of parameters greatly exceeds the number of samples (p>>n). To solve this curse of dimensionality problem, several predictive functions have been proposed for direct and probabilistic classification and survival predictions. Nevertheless, these predictive functions have been shown to perform differently across datasets. Comparing predictive functions and choosing the best is computationally intensive and leads to selection bias. Thus, the question of which function to choose for a given dataset remains to be ascertained. This package implements the approach proposed by Jong et al. (2016) to address this question.","Published":"2016-07-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sprex","Version":"1.4.1","Title":"Calculate Species Richness and Extrapolation Metrics","Description":"Calculate species richness functions for rarefaction and\n extrapolation.","Published":"2016-04-16","License":"GNU General Public License","snapshot_date":"2017-06-23"} {"Package":"sprint","Version":"1.0.7","Title":"Simple Parallel R INTerface","Description":"SPRINT (Simple Parallel R INTerface) is a parallel\n framework for R. It provides a High Performance Computing (HPC)\n harness which allows R scripts to run on HPC clusters. SPRINT\n contains a library of selected R functions that have been\n parallelized. Functions are named after the original R function\n with the added prefix 'p', i.e. the parallel version of cor()\n in SPRINT is called pcor(). 
Calls to the parallel R functions\n are included directly in standard R scripts.\n\t\tSPRINT contains functions for correlation (pcor), partitioning around medoids (ppam), \n\t\tapply (papply), permutation testing (pmaxT), bootstrapping (pboot), random forest (prandomForest), \n\t\trank product (pRP) and Hamming distance (pstringdistmatrix). ","Published":"2014-09-29","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"sprinter","Version":"1.1.0","Title":"Framework for Screening Prognostic Interactions","Description":"The main function of this package builds prognostic models that consider interactions by combining available statistical components. Furthermore, it provides a function for evaluating the relevance of the selected interactions by resampling techniques.","Published":"2014-12-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"sprm","Version":"1.2.2","Title":"Sparse and Non-Sparse Partial Robust M Regression and\nClassification","Description":"Robust dimension reduction methods for regression and discriminant analysis are implemented that yield estimates with a partial least squares-like interpretability. Partial robust M regression (PRM) is robust to both vertical outliers and leverage points. Sparse partial robust M regression (SPRM) is a related robust method with sparse coefficient estimates, and therefore with intrinsic variable selection. For binary classification, the related discriminant methods are PRM-DA and SPRM-DA.","Published":"2016-02-22","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"sprsmdl","Version":"0.1-0","Title":"Sparse modeling toolkit","Description":"R functions to mine sparse models from data.","Published":"2013-03-28","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SPRT","Version":"1.0","Title":"Wald's Sequential Probability Ratio Test","Description":"Perform Wald's Sequential Probability Ratio Test on variables with Normal, Bernoulli, Exponential and Poisson distributions. 
Plot acceptance and continuation regions, or create your own with the help of closures.","Published":"2015-04-15","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"spsann","Version":"2.1-0","Title":"Optimization of Sample Configurations using Spatial Simulated\nAnnealing","Description":"Methods to optimize sample configurations using spatial simulated annealing. Multiple objective \n functions are implemented for various purposes, such as variogram estimation, spatial trend estimation \n and spatial interpolation. A general purpose spatial simulated annealing function enables the user to \n define his/her own objective function. Solutions for augmenting existing sample configurations and solving\n multi-objective optimization problems are available as well.","Published":"2017-06-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"spselect","Version":"0.0.1","Title":"Selecting Spatial Scale of Covariates in Regression Models","Description":"Fits spatial scale (SS) forward stepwise regression, SS incremental forward stagewise regression, SS least angle regression (LARS), and SS lasso models. All area-level covariates are considered at all available scales to enter a model, but the SS algorithms are constrained to select each area-level covariate at a single spatial scale.","Published":"2016-08-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"spsi","Version":"0.1","Title":"Shape-Preserving Uni-Variate and Bi-Variate Spline Interpolation","Description":"The program uses the method of polynomials of variable degree to interpolate gridded data, preserving monotonicity and/or convexity, or neither. The method is implemented for univariate and bivariate cases. If values of derivatives are provided, the spline will fix them; if not, the program will estimate them numerically. 
The package is written purely in R.","Published":"2015-08-19","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"SPSL","Version":"0.1-8","Title":"Site Percolation on Square Lattice (SPSL)","Description":"The SPSL package provides functionality for labeling\n percolation clusters on 2D & 3D square lattices with various\n lattice sizes, relative fractions of accessible sites (occupation\n probability), iso- & anisotropy, and von Neumann & Moore\n (1,d)-neighborhoods.","Published":"2012-12-30","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"spsurvey","Version":"3.3","Title":"Spatial Survey Design and Analysis","Description":"This group of functions implements algorithms for design and\n analysis of probability surveys. The functions are tailored for Generalized\n Random Tessellation Stratified survey designs.","Published":"2016-08-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"spt","Version":"1.13-8-8","Title":"Sierpinski Pedal Triangle","Description":"This package collects algorithms related to the Sierpinski pedal triangle (SPT).","Published":"2013-08-08","License":"Unlimited","snapshot_date":"2017-06-23"} {"Package":"spTDyn","Version":"1.0","Title":"Spatially Varying and Spatio-Temporal Dynamic Linear Models","Description":"Fits, spatially predicts, and temporally forecasts space-time data using Gaussian Process (GP): (1) spatially varying coefficient process models and (2) spatio-temporal dynamic linear models.","Published":"2015-06-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"spTest","Version":"0.2.4","Title":"Nonparametric Hypothesis Tests of Isotropy and Symmetry","Description":"Implements nonparametric hypothesis tests to check isotropy and\n symmetry properties for two-dimensional spatial data.","Published":"2016-05-12","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"spThin","Version":"0.1.0","Title":"Functions for Spatial Thinning of Species Occurrence Records for\nUse in Ecological 
Models","Description":"spThin is a set of functions that can be used to spatially thin\n species occurrence data. The resulting thinned data can be used in ecological\n modeling, such as ecological niche modeling.","Published":"2014-11-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"spTimer","Version":"2.0-1","Title":"Spatio-Temporal Bayesian Modelling Using R","Description":"Fits, spatially predicts and temporally forecasts large amounts of space-time data using [1] Bayesian Gaussian Process (GP) Models, [2] Bayesian Auto-Regressive (AR) Models, and [3] Bayesian Gaussian Predictive Processes (GPP) based AR Models for spatio-temporal big-n problems.","Published":"2015-08-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sptm","Version":"16.7-9","Title":"SemiParametric Transformation Model Methods","Description":"Implements semiparametric transformation model two-phase estimation using calibration weights.","Published":"2016-07-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"spup","Version":"0.1-0","Title":"Uncertainty Propagation Analysis","Description":"Uncertainty propagation analysis in spatial environmental modelling following methodology\n described in Heuvelink et al. (2017) \n and Brown and Heuvelink (2007) . The package provides functions\n for examining the uncertainty propagation starting from input data and model parameters,\n via the environmental model onto model outputs. The functions include uncertainty model specification,\n stochastic simulation and propagation of uncertainty using Monte Carlo (MC) techniques.\n Uncertain variables are described by probability distributions. 
Both numerical and categorical data types are handled.\n Spatial auto-correlation within an attribute and cross-correlation between attributes are accommodated for.\n The MC realizations may be used as input to the environmental models called from R, or externally.","Published":"2017-04-13","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"spuRs","Version":"2.0.0","Title":"Functions and Datasets for \"Introduction to Scientific\nProgramming and Simulation Using R\"","Description":"This package provides functions and datasets from the book \"Introduction to Scientific Programming and Simulation Using R\".","Published":"2014-07-24","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"SQDA","Version":"1.0","Title":"Sparse Quadratic Discriminant Analysis","Description":"Sparse Quadratic Discriminant Analysis (SQDA) can be performed. In SQDA, the covariance matrices are assumed to be block-diagonal. And, for each block, a sparsity assumption is imposed on the covariance matrix. It is useful in high-dimensional settings.","Published":"2014-10-01","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"sqldf","Version":"0.4-10","Title":"Perform SQL Selects on R Data Frames","Description":"Manipulate R data frames using SQL.","Published":"2014-11-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"sqliter","Version":"0.1.0","Title":"Connection wrapper to SQLite databases","Description":"sqliter helps users, mainly data munging practitioners, to organize\n their SQL calls in a clean structure. It simplifies the process of\n extracting and transforming data into useful formats.","Published":"2014-01-26","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"SqlRender","Version":"1.3.7","Title":"Rendering Parameterized SQL and Translation to Dialects","Description":"A rendering tool for parameterized SQL that also translates into\n different SQL dialects. 
These dialects include SQL Server, Oracle, \n PostgreSQL, Amazon Redshift, and Microsoft PDW.","Published":"2017-05-03","License":"Apache License 2.0","snapshot_date":"2017-06-23"} {"Package":"sqlscore","Version":"0.1.1","Title":"Utilities for Generating SQL Queries from Model Objects","Description":"Provides utilities for generating SQL queries (particularly CREATE\n TABLE statements) from R model objects. The most important use case is\n generating SQL to score a generalized linear model or related model\n represented as an R object, in which case the package handles parsing\n formula operators and including the model's response function.","Published":"2017-01-07","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"sqlutils","Version":"1.2","Title":"Utilities for working with SQL files","Description":"This package provides utilities for working with a library of SQL\n files.","Published":"2014-11-19","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"SQN","Version":"1.0.5","Title":"Subset Quantile Normalization","Description":"Normalization based on a subset of negative control probes as\n described in 'Subset quantile normalization using negative\n control features'. Wu Z, Aryee MJ, J Comput Biol. 2010\n Oct;17(10):1385-95 [PMID 20976876].","Published":"2012-08-13","License":"LGPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"SQUAREM","Version":"2016.8-2","Title":"Squared Extrapolation Methods for Accelerating EM-Like Monotone\nAlgorithms","Description":"Algorithms for accelerating the convergence of slow,\n monotone sequences from smooth contraction mappings such as the\n EM algorithm. It can be used to accelerate any smooth, linearly\n convergent acceleration scheme. 
A tutorial style introduction\n to this package is available in a vignette on the CRAN download\n page or, when the package is loaded in an R session, with\n vignette(\"SQUAREM\").","Published":"2016-08-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"squash","Version":"1.0.8","Title":"Color-Based Plots for Multivariate Visualization","Description":"Functions for color-based visualization of multivariate data, i.e. colorgrams or heatmaps. Lower-level functions map numeric values to colors, display a matrix as an array of colors, and draw color keys. Higher-level plotting functions generate a bivariate histogram, a dendrogram aligned with a color-coded matrix, a triangular distance matrix, and more.","Published":"2017-05-29","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"squid","Version":"0.1.1","Title":"Statistical Quantification of Individual Differences","Description":"A simulation-based tool made to help researchers to become familiar with\n multilevel variations, and to build up sampling designs for their study. \n This tool has two main objectives: First, it provides an educational tool useful for students, \n teachers and researchers who want to learn to use mixed-effects models. \n Users can experience how the mixed-effects model framework can be used to understand \n distinct biological phenomena by interactively exploring simulated multilevel data. 
\n Second, it offers research opportunities to those who are already familiar with \n mixed-effects models, as it enables the generation of data sets that users may download \n and use for a range of simulation-based statistical analyses such as power \n and sensitivity analysis of multilevel and multivariate data.","Published":"2016-06-20","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"sra","Version":"0.1.1","Title":"Selection Response Analysis","Description":"This package (sra) provides a set of tools to analyse artificial-selection response datasets. The data typically feature for several generations the average value of a trait in a population, the variance of the trait, the population size and the average value of the parents that were chosen to breed. Sra implements two families of models aiming at describing the dynamics of the genetic architecture of the trait during the selection response. The first family relies on purely descriptive (phenomenological) models, based on an autoregressive framework. The second family provides different mechanistic models, accounting e.g. for inbreeding, mutations, genetic and environmental canalization, or epistasis. The parameters underlying the dynamics of the time series are estimated by maximum likelihood. The sra package thus provides (i) a wrapper for the R functions mle() and optim() aiming at fitting in a convenient way a predetermined set of models, and (ii) some functions to plot and analyze the output of the models. ","Published":"2015-01-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SRCS","Version":"1.1","Title":"Statistical Ranking Color Scheme for Multiple Pairwise\nComparisons","Description":"Implementation of the SRCS method for a color-based visualization of the\n results of multiple pairwise tests on a large number of problem configurations, proposed in: \n I.G. del Amo, D.A. Pelta. 
SRCS: a technique for comparing multiple algorithms under several\n factors in dynamic optimization problems. In: E. Alba, A. Nakib, P. Siarry\n (Eds.), Metaheuristics for Dynamic Optimization. Series: Studies in\n Computational Intelligence 433, Springer, Berlin/Heidelberg, 2012.","Published":"2015-07-02","License":"LGPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"sROC","Version":"0.1-2","Title":"Nonparametric Smooth ROC Curves for Continuous Data","Description":"This package contains a collection of functions to perform\n nonparametric estimation of receiver operating characteristic\n (ROC) curves for continuous data.","Published":"2012-04-12","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"SRRS","Version":"0.1.1","Title":"The Stepwise Response Refinement Screener (SRRS)","Description":"This package implements the SRRS method introduced in Phoa (2013) into a graphical user interface (GUI) R program.","Published":"2014-05-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"srvyr","Version":"0.2.2","Title":"'dplyr'-Like Syntax for Summary Statistics of Survey Data","Description":"Use piping, verbs like 'group_by' and 'summarize', and other\n 'dplyr' inspired syntactic style when calculating summary statistics on survey\n data using functions from the 'survey' package.","Published":"2017-06-23","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"ss3sim","Version":"0.9.5","Title":"Fisheries Stock Assessment Simulation Testing with Stock\nSynthesis","Description":"Develops a framework for fisheries stock assessment simulation\n testing with Stock Synthesis 3 (SS3) as described in Anderson et al.\n (2014) .","Published":"2017-04-19","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ssa","Version":"1.2.1","Title":"Simultaneous Signal Analysis","Description":"Procedures for analyzing simultaneous signals, e.g., features that are simultaneously significant in two different studies. 
Includes methods for detecting simultaneous signals, for identifying them under false discovery rate control, and for leveraging them to improve prediction.","Published":"2016-07-24","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ssanv","Version":"1.1","Title":"Sample Size Adjusted for Nonadherence or Variability of Input\nParameters","Description":"A set of functions to calculate sample size for two-sample difference in means tests. Does adjustments for either nonadherence or variability that comes from using data to estimate parameters.","Published":"2015-06-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"SSBtools","Version":"0.2.1","Title":"Statistics Norway's Miscellaneous Small Tools","Description":"Small functions used by other packages from Statistics Norway are gathered. Both general data manipulation functions and some more special functions for statistical disclosure control are included. One reason for a separate package is possible reuse of the functions within a Renjin environment.","Published":"2017-02-07","License":"Apache License 2.0 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ssc","Version":"1.0","Title":"Semi-Supervised Classification Methods","Description":"Provides a collection of self-labeled techniques for semi-supervised classification. In semi-supervised classification, both labeled and unlabeled data are used to train a classifier. This learning paradigm has obtained promising results, specifically in the presence of a reduced set of labeled examples. This package implements a collection of self-labeled techniques to construct a distance-based classification model. This family of techniques enlarges the original labeled set using the most confident predictions to classify unlabeled data. The techniques implemented can be applied to classification problems in several domains by the specification of a suitable base classifier and distance measure. 
At low ratios of labeled data, it can be shown to perform better than classical supervised classifiers.","Published":"2016-10-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"sscor","Version":"0.2","Title":"Robust Correlation Estimation and Testing Based on Spatial Signs","Description":"Provides the spatial sign correlation and the two-stage spatial sign correlation as well as a one-sample test for the correlation coefficient.","Published":"2016-01-28","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"ssd","Version":"0.3","Title":"Sample Size Determination (SSD) for Unordered Categorical Data","Description":"ssd calculates the sample size needed to detect the differences between two sets of unordered categorical data.","Published":"2014-12-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SSDforR","Version":"1.4.15","Title":"Functions to Analyze Single System Data","Description":"Functions to visually and statistically analyze single system data.","Published":"2017-03-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SSDM","Version":"0.2.3","Title":"Stacked Species Distribution Modelling","Description":"Allows mapping of species richness and endemism based on stacked\n species distribution models (SSDM). Individual SDMs can be created using a\n single or multiple algorithms (ensemble SDMs). For each species, an SDM can\n yield a habitat suitability map, a binary map, a between-algorithm variance\n map, and can assess variable importance, algorithm accuracy, and between-\n algorithm correlation. Methods to stack individual SDMs include summing\n individual probabilities and thresholding then summing. Thresholding can be\n based on a specific evaluation metric or by drawing repeatedly from a Bernoulli\n distribution. 
The SSDM package also provides a user-friendly interface.","Published":"2017-05-09","License":"GPL (>= 3) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"sSDR","Version":"1.2.0","Title":"Tools Developed for Structured Sufficient Dimension Reduction\n(sSDR)","Description":"Performs structured OLS (sOLS) and structured SIR (sSIR).","Published":"2016-03-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ssfa","Version":"1.1","Title":"Spatial Stochastic Frontier Analysis","Description":"Spatial Stochastic Frontier Analysis (SSFA) is an original method for controlling the spatial heterogeneity in Stochastic Frontier Analysis (SFA) models, for cross-sectional data, by splitting the inefficiency term into three terms: the first one related to spatial peculiarities of the territory in which each single unit operates, the second one related to the specific production features and the third one representing the error term.","Published":"2015-06-10","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ssfit","Version":"1.1","Title":"Fitting of parametric models using summary statistics","Description":"Fits complex parametric models using the method proposed by Cox and Kartsonaki (2012) without likelihoods.","Published":"2013-08-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ssh.utils","Version":"1.0","Title":"Local and remote system commands with output and error capture","Description":"This package provides utility functions for system command\n execution, both locally and remotely using ssh/scp. The command\n output is captured and provided to the caller. This functionality is\n intended to streamline calling shell commands from R, retrieving and\n using their output, while instrumenting the calls with appropriate\n error handling. 
NOTE: this first version is limited to unix with local\n and remote systems running bash as the default shell.","Published":"2014-07-24","License":"Apache License (== 2.0)","snapshot_date":"2017-06-23"} {"Package":"ssize.fdr","Version":"1.2","Title":"Sample Size Calculations for Microarray Experiments","Description":"This package contains a set of functions that calculates \n appropriate sample sizes for one-sample t-tests, two-sample t-tests, \n and F-tests for microarray experiments based on desired power while \n controlling for false discovery rates. For all tests, the standard\n deviations (variances) among genes can be assumed fixed or random. \n This is also true for effect sizes among genes in one-sample and two\n sample experiments. Functions also output a chart of power versus sample\n size, a table of power at different sample sizes, and a table of critical\n test values at different sample sizes.","Published":"2015-02-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ssizeRNA","Version":"1.2.9","Title":"Sample Size Calculation for RNA-Seq Experimental Design","Description":"We propose a procedure for sample size calculation while\n controlling false discovery rate for RNA-seq experimental design. Our\n procedure depends on the Voom method proposed for RNA-seq data analysis\n by Law et al. (2014) and the sample size \n calculation method proposed for microarray experiments by Liu and Hwang \n (2007) . We develop a set of functions\n that calculates appropriate sample sizes for two-sample t-test for RNA-seq\n experiments with fixed or varied set of parameters. The outputs also contain a\n plot of power versus sample size, a table of power at different sample sizes,\n and a table of critical test values at different sample sizes. 
\n To install this package, please use \n 'source(\"http://bioconductor.org/biocLite.R\"); biocLite(\"ssizeRNA\")'.","Published":"2017-01-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sskm","Version":"1.0.0","Title":"Stable Sparse K-Means","Description":"Achieves feature selection by taking subsamples of the data and then running sparse k-means on each of the subsamples. Only features that receive positive weights a high proportion of times are maintained. Standard k-means is then run to cluster the data based on the subset of features selected.","Published":"2017-03-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SSL","Version":"0.1","Title":"Semi-Supervised Learning","Description":"Semi-supervised learning has attracted the attention of the machine learning community because of its high accuracy with less annotating effort compared with supervised learning. The question that semi-supervised learning addresses is: given a relatively small labeled dataset and a large unlabeled dataset, how can classification algorithms learn from both? 
This package is a collection of classical semi-supervised learning algorithms from the last few decades.","Published":"2016-05-14","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"ssmn","Version":"1.1","Title":"Skew Scale Mixtures of Normal Distributions","Description":"Performs the EM algorithm for regression models using Skew Scale Mixtures of Normal Distributions.","Published":"2016-08-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ssmrob","Version":"0.4","Title":"Robust estimation and inference in sample selection models","Description":"Provides a set of tools for robust estimation and inference in models with sample selectivity.","Published":"2014-03-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ssmsn","Version":"0.2.0","Title":"Scale-Shape Mixtures of Skew-Normal Distributions","Description":"It provides the density and random number generator for the Scale-Shape Mixtures of Skew-Normal Distributions proposed by Jamalizadeh and Lin (2016) .","Published":"2017-02-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SSN","Version":"1.1.10","Title":"Spatial Modeling on Stream Networks","Description":"Spatial statistical modeling and prediction for data on stream networks, including models based on in-stream distance. Models are created using moving average constructions. Spatial linear models, including explanatory variables, can be fit with (restricted) maximum likelihood. 
Mapping and other graphical functions are included.","Published":"2017-04-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"sspline","Version":"0.1-6","Title":"Smoothing Splines on the Sphere","Description":"R package for computing the spherical smoothing splines","Published":"2013-11-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"sspse","Version":"0.5-1","Title":"Estimating Hidden Population Size using Respondent Driven\nSampling Data","Description":"An integrated set of tools to estimate the size of a networked population based on respondent-driven sampling data. The package is part of the \"RDS Analyst\" suite of packages for the analysis of respondent-driven sampling data.","Published":"2015-04-22","License":"GPL-3 + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"SSRA","Version":"0.1-0","Title":"Sakai Sequential Relation Analysis","Description":"Takeya Semantic Structure Analysis (TSSA) and Sakai Sequential Relation Analysis (SSRA)\n for polytomous items for examining whether each pair of items has a sequential or equal\n relation. Package includes functions for generating a sequential relation table and a\n treegram to visualize sequential or equal relations between pairs of items.","Published":"2016-08-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"SSrat","Version":"1.0","Title":"Two-dimensional sociometric status determination with rating\nscales","Description":"SSRAT is a computer program for two-dimensional sociometric status\n determination with rating scales. 
For each person assessed, SSRAT computes\n probability distributions of the total scores for `sympathy' (S),\n `antipathy' (A), `social preference' (P) and `social impact' (I), and\n applies the criteria for sociometric status categorization.","Published":"2014-11-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SSRMST","Version":"0.1.1","Title":"Sample Size Calculation using Restricted Mean Survival Time","Description":"Calculates the power and sample size based on the difference in Restricted Mean Survival Time.","Published":"2017-05-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"sss","Version":"0.1-0","Title":"Tools for Importing Files in the Triple-s (Standard Survey\nStructure) Format","Description":"Tools to import survey files\n in the .sss (triple-s) format. The package provides the function\n read.sss() that reads the .asc (or .csv) and .sss files of a\n triple-s survey data file.","Published":"2017-04-01","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"SSsimple","Version":"0.6.4","Title":"State space models","Description":"Simulate, solve state space models","Published":"2014-03-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ssvd","Version":"1.0","Title":"Sparse SVD","Description":"Fast iterative thresholding sparse SVD, together with an initialization algorithm","Published":"2013-09-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ssym","Version":"1.5.7","Title":"Fitting Semi-Parametric log-Symmetric Regression Models","Description":"Set of tools to fit a semi-parametric regression model suitable for analysis of data sets in which the response variable is continuous, strictly positive, asymmetric and possibly, censored. Under this setup, both the median and the skewness of the response variable distribution are explicitly modeled by using semi-parametric functions, whose non-parametric components may be approximated by natural cubic splines or P-splines. 
Supported distributions for the model error include log-normal, log-Student-t, log-power-exponential, log-hyperbolic, log-contaminated-normal, log-slash, Birnbaum-Saunders and Birnbaum-Saunders-t distributions.","Published":"2016-10-16","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"st","Version":"1.2.5","Title":"Shrinkage t Statistic and Correlation-Adjusted t-Score","Description":"Implements the \"shrinkage t\" statistic \n introduced in Opgen-Rhein and Strimmer (2007) and a shrinkage estimate\n of the \"correlation-adjusted t-score\" (CAT score) described in\n Zuber and Strimmer (2009). It also offers a convenient interface \n to a number of other regularized t-statistics commonly \n employed in high-dimensional case-control studies. ","Published":"2015-07-08","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"stable","Version":"1.1.2","Title":"Probability Functions and Generalized Regression Models for\nStable Distributions","Description":"Density, distribution, quantile and hazard functions of a\n stable variate; generalized regression models for the parameters\n of a stable distribution. See the README for how to make equivalent calls\n to those of 'stabledist'. ","Published":"2017-02-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"stabledist","Version":"0.7-1","Title":"Stable Distribution Functions","Description":"Density, Probability and Quantile functions, and random number\n generation for (skew) stable distributions, using the parametrizations of\n Nolan.","Published":"2016-09-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"StableEstim","Version":"2.1","Title":"Estimate the Four Parameters of Stable Laws using Different\nMethods","Description":"Estimate the four parameters of stable laws using maximum\n likelihood method, generalised method of moments with\n finite and continuum number of points, iterative\n Koutrouvelis regression and Kogon-McCulloch method. 
The\n asymptotic properties of the estimators (covariance\n matrix, confidence intervals) are also provided.","Published":"2016-07-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"stablespec","Version":"0.3.0","Title":"Stable Specification Search in Structural Equation Models","Description":"An exploratory and heuristic approach for specification search in\n Structural Equation Modeling. The basic idea is to subsample the original data\n and then search for optimal models on each subset. Optimality is defined through\n two objectives: model fit and parsimony. As these objectives are conflicting,\n we apply a multi-objective optimization method, specifically NSGA-II, to obtain\n optimal models for the whole range of model complexities. From these optimal\n models, we consider only the relevant model specifications (structures), i.e.,\n those that are both stable (occur frequently) and parsimonious, and use those to\n infer a causal model.","Published":"2017-04-05","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"stabs","Version":"0.6-2","Title":"Stability Selection with Error Control","Description":"Resampling procedures to assess the stability of selected variables\n with additional finite sample error control for high-dimensional variable\n selection procedures such as Lasso or boosting. Both standard stability\n selection (Meinshausen & Buhlmann, 2010, ) \n and complementary pairs stability selection with improved error bounds \n (Shah & Samworth, 2013, ) are\n implemented. The package can be combined with arbitrary user-specified\n variable selection approaches.","Published":"2017-01-31","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Stack","Version":"2.0-1","Title":"Stylized concatenation of data.frames or ffdfs","Description":"Stacks rectangular datasets on top of each other, possibly\n performing several type coercions along the way. For large datasets,\n depends on the ff package. 
Provides an aggressive version of\n ffbase::compact for data that may appear to be real-typed but is in fact\n int/short/byte. For many purposes plyr::rbind.fill may be more appropriate,\n but for some kinds of survey data, the rules here work better.","Published":"2014-03-21","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"stackoverflow","Version":"0.1.2","Title":"Stack Overflow's Greatest Hits","Description":"Consists of helper functions collected from StackOverflow.com, a \n question and answer site for professional and enthusiast programmers.","Published":"2015-05-13","License":"CC BY-SA 4.0","snapshot_date":"2017-06-23"} {"Package":"stacomirtools","Version":"0.5.0","Title":"ODBC Connection Class for Package stacomiR","Description":"S4 class wrappers for ODBC connections.","Published":"2016-08-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"StagedChoiceSplineMix","Version":"1.0.0","Title":"Mixture of Two-Stage Logistic Regressions with Fixed Candidate\nKnots","Description":"Analyzes a mixture of two-stage logistic regressions with fixed candidate knots. See Bruch, E., F. Feinberg, K. 
Lee (in press).","Published":"2016-08-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"stagePop","Version":"1.1-1","Title":"Modelling the Population Dynamics of a Stage-Structured Species\nin Continuous Time","Description":"Provides facilities to implement and run population models of\n stage-structured species...","Published":"2015-05-12","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"stam","Version":"0.0-1","Title":"Spatio-Temporal Analysis and Modelling","Description":"stam is an evolving package that targets the various\n methods to conduct Spatio-Temporal Analysis and\n Modelling, including Exploratory Spatio-Temporal Analysis and\n Inferred Spatio-Temporal Modelling.","Published":"2010-02-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"StAMPP","Version":"1.4","Title":"Statistical Analysis of Mixed Ploidy Populations","Description":"Allows users to calculate pairwise Nei's Genetic Distances (Nei 1972), pairwise Fixation\n Indexes (Fst) (Weir & Cockerham 1984) and also Genomic Relationship matrices following Yang et al. (2010) in mixed and single\n ploidy populations. Bootstrapping across loci is implemented during Fst calculation to generate confidence intervals and p-values\n around pairwise Fst values. StAMPP utilises SNP genotype data of any ploidy level (with the ability to handle missing data) and is coded to \n utilise multithreading where available to allow efficient analysis of large datasets. StAMPP is able to handle genotype data from genlight objects \n allowing integration with other packages such as adegenet.\n Please refer to LW Pembleton, NOI Cogan & JW Forster, 2013, Molecular Ecology Resources, 13(5), 946-952. doi:10.1111/1755-0998.12129 for the appropriate citation and user manual. 
Thank you in advance.","Published":"2015-07-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"stampr","Version":"0.1","Title":"Spatial Temporal Analysis of Moving Polygons","Description":"Performs spatial temporal analysis of moving polygons, a longstanding analysis problem in Geographic Information Systems. Facilitates directional analysis, shape analysis, and some other simple functionality for examining spatial-temporal patterns of moving polygons.","Published":"2017-01-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"STAND","Version":"2.0","Title":"Statistical Analysis of Non-Detects","Description":"Provides functions for the analysis of\n occupational and environmental data with non-detects. Maximum\n likelihood (ML) methods for censored log-normal data and\n non-parametric methods based on the product limit estimate (PLE)\n for left censored data are used to calculate all of the\n statistics recommended by the American Industrial Hygiene\n Association (AIHA) for the complete data case. Functions for\n the analysis of complete samples using exact methods are also\n provided for the lognormal model. 
Revised from 2007-11-05\n 'survfit~1'.","Published":"2015-09-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"standardize","Version":"0.2.1","Title":"Tools for Standardizing Variables for Regression in R","Description":"Tools which allow regression variables to be placed on similar\n scales, offering computational benefits as well as easing interpretation of\n regression output.","Published":"2017-06-12","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"StandardizeText","Version":"1.0","Title":"Standardize Text","Description":"Standardizes text according to a template; particularly\n useful for country names.","Published":"2013-03-04","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"StanHeaders","Version":"2.15.0-1","Title":"C++ Header Files for Stan","Description":"The C++ header files of the Stan project are provided by this package, but it contains no R code, vignettes, or function documentation. There is a shared object containing part of the 'CVODES' library, but it is not accessible from R. 'StanHeaders' is only useful for developers who want to utilize the 'LinkingTo' directive of their package's DESCRIPTION file to build on the Stan library without incurring unnecessary dependencies. The Stan project develops a probabilistic programming language that implements full or approximate Bayesian statistical inference via Markov Chain Monte Carlo or 'variational' methods and implements (optionally penalized) maximum likelihood estimation via optimization. The Stan library includes an advanced automatic differentiation scheme, 'templated' statistical and linear algebra functions that can handle the automatically 'differentiable' scalar types (and doubles, 'ints', etc.), and a parser for the Stan language. 
The 'rstan' package provides user-facing R functions to parse, compile, test, estimate, and analyze Stan models.","Published":"2017-04-19","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"STAR","Version":"0.3-7","Title":"Spike Train Analysis with R","Description":"Functions to analyze neuronal spike trains from a single\n neuron or from several neurons recorded simultaneously.","Published":"2012-11-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"stargazer","Version":"5.2","Title":"Well-Formatted Regression and Summary Statistics Tables","Description":"Produces LaTeX code, HTML/CSS code and ASCII text for well-formatted tables that hold \n regression analysis results from several models side-by-side, as well as summary\n statistics.","Published":"2015-07-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"starma","Version":"1.3","Title":"Modelling Space Time AutoRegressive Moving Average (STARMA)\nProcesses","Description":"Statistical functions to identify, estimate and diagnose a Space-Time AutoRegressive Moving Average (STARMA) model.","Published":"2016-02-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"starmie","Version":"0.1.2","Title":"Population Structure Model Inference and Visualisation","Description":"Data structures and methods for manipulating output of genetic population structure clustering algorithms.\n 'starmie' can parse output from 'STRUCTURE' (see for details) or\n 'ADMIXTURE' (see for details). 
'starmie' performs model selection via\n information criterion, and provides functions for MCMC diagnostics, correcting label switching and visualisation of admixture coefficients.","Published":"2016-11-07","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"startR","Version":"0.0.1","Title":"Automatically Retrieve Multidimensional Distributed Data Sets","Description":"Tool to automatically fetch, transform and arrange subsets of multidimensional data sets (collections of files) stored in local and/or remote file systems or servers, using multicore capabilities where possible. The tool provides an interface to perceive a collection of data sets as a single large multidimensional data array, and enables the user to request for automatic retrieval, processing and arrangement of subsets of the large array. Wrapper functions to add support for custom file formats can be plugged in/out, making the tool suitable for any research field where large multidimensional data sets are involved.","Published":"2017-04-22","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"STARTS","Version":"0.0-9","Title":"Functions for the STARTS Model","Description":"\n Contains functions for estimating the STARTS model of\n Kenny and Zautra (1995, 2001) ,\n .","Published":"2017-05-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"startup","Version":"0.6.1","Title":"Friendly R Startup Configuration","Description":"Adds support for R startup configuration via '.Renviron.d' and '.Rprofile.d' directories in addition to '.Renviron' and '.Rprofile' files. This makes it possible to keep private / secret environment variables separate from other environment variables. 
It also makes it easier to share specific startup settings by simply copying a file to a directory.","Published":"2017-05-17","License":"LGPL (>= 2.1)","snapshot_date":"2017-06-23"} {"Package":"startupmsg","Version":"0.9.4","Title":"Utilities for Start-Up Messages","Description":"Provides utilities to create or suppress start-up messages.","Published":"2017-04-23","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"Stat2Data","Version":"1.6","Title":"Datasets for Stat2","Description":"Datasets for Stat2 textbook (by Cannon, et al., published\n by WH Freeman)","Published":"2013-01-05","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"statar","Version":"0.6.4","Title":"Tools Inspired by 'Stata' to Manipulate Tabular Data","Description":"A set of tools inspired by 'Stata' to explore data.frames ('summarize',\n 'tabulate', 'xtile', 'pctile', 'binscatter', elapsed quarters/month, lead/lag).","Published":"2017-04-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"StatCharrms","Version":"0.90.4","Title":"Statistical Analysis of Chemistry, Histopathology, and\nReproduction Endpoints Including Repeated Measures and\nMulti-Generation Studies","Description":"A front end for the statistical analyses involved in the tier II endocrine \n\tdisruptor screening program. The analyses available to this package are: \n\tRao-Scott adjusted Cochran-Armitage test for trend By Slices (RSCABS), \n\ta Standard Cochran-Armitage test for trend By Slices (SCABS), \n\tmixed effects Cox proportional model, Jonckheere-Terpstra step down trend test, \n\tDunn test, one way ANOVA, weighted ANOVA, mixed effects ANOVA, repeated \n\tmeasures ANOVA, and Dunnett test. 
\t\t","Published":"2017-06-20","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"statcheck","Version":"1.2.2","Title":"Extract Statistics from Articles and Recompute p Values","Description":"Extract statistics from articles and recompute p values.","Published":"2016-08-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"statcomp","Version":"0.0.1.1000","Title":"Statistical Complexity and Information Measures for Time Series\nAnalysis","Description":"An implementation of local and global statistical complexity measures (aka Information Theory Quantifiers, ITQ) for time series analysis based on ordinal statistics (Bandt and Pompe (2002) ). Several distance measures that operate on ordinal pattern distributions, auxiliary functions for ordinal pattern analysis, and generating functions for stochastic and deterministic-chaotic processes for ITQ testing are provided.","Published":"2016-09-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"StatDA","Version":"1.6.9","Title":"Statistical Analysis for Environmental Data","Description":"This package offers different possibilities to make statistical analysis for Environmental Data.","Published":"2015-04-27","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"StatDataML","Version":"1.0-26","Title":"Read and Write StatDataML Files","Description":"Support for reading and writing files in StatDataML---an XML-based data exchange format.","Published":"2015-07-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"statebins","Version":"1.2.2","Title":"U.S. State Cartogram Heatmaps in R; an Alternative to Choropleth\nMaps for USA States","Description":"Cartogram heatmaps are an alternative to choropleth maps for USA States\n and are based on work by the Washington Post graphics department in their report\n on \"The states most threatened by trade\". 
\"State bins\" preserve as much of the\n geographic placement of the states as possible but has the look and feel of a\n traditional heatmap. Functions are provided that allow for use of a binned,\n discrete scale, a continuous scale or manually specified colors depending on\n what is needed for the underlying data.","Published":"2015-12-21","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"statGraph","Version":"0.1.0","Title":"Statistical Methods for Graphs","Description":"Contains statistical methods to analyze graphs, such as\n graph parameter estimation, model selection based on the GIC\n (Graph Information Criterion), statistical tests to\n discriminate two or more populations of graphs (ANOGVA\n -Analysis of Graph Variability), correlation between graphs,\n and clustering of graphs.","Published":"2017-04-21","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"stationaRy","Version":"0.4.1","Title":"Get Hourly Meteorological Data from Global Stations","Description":"Selectively acquire hourly meteorological data from stations located all over the world.","Published":"2015-10-22","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"statip","Version":"0.1.4","Title":"Miscellaneous Basic Statistical Functions","Description":"A collection of miscellaneous statistical functions for \n probability distributions: dbern(), pbern(), qbern(), rbern() for \n the Bernoulli distribution, and distr2name(), name2distr() for \n distribution names; \n probability density estimation: densityfun(); \n most frequent value estimation: mfv(), mfv1(); \n calculation of the Hellinger distance: hellinger(); \n use of classical kernels: kernelfun(), kernel_properties().","Published":"2017-01-29","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"statisticalModeling","Version":"0.3.0","Title":"Functions for Teaching Statistical Modeling","Description":"Provides graphics and other functions that evaluate and display models 
across many different kinds of model architecture. For instance, you can evaluate the effect size of a model input in the same way, regardless of architecture, interaction terms, etc.","Published":"2016-11-12","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"StatMatch","Version":"1.2.5","Title":"Statistical Matching","Description":"Integration of two data sources referred to the same target population which share a number of common variables (aka data fusion). Some functions can also be used to impute missing values in data sets through hot deck imputation methods. Methods to perform statistical matching when dealing with data from complex sample surveys are available too.","Published":"2017-01-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"StatMeasures","Version":"1.0","Title":"Easy Data Manipulation, Data Quality and Statistical Checks","Description":"Offers useful functions to perform day-to-day data manipulation \n operations, data quality checks and post modelling statistical checks.\n One can effortlessly change class of a number of variables to factor, \n remove duplicate observations from the data, create deciles of a \n variable, perform data quality checks for continuous (integer or numeric), \n categorical (factor) and date variables, and compute goodness of fit \n measures such as auc for statistical models. The functions are consistent \n for objects of class 'data.frame' and 'data.table', which is an enhanced \n 'data.frame' implemented in the package 'data.table'.","Published":"2015-03-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"statmod","Version":"1.4.30","Title":"Statistical Modeling","Description":"A collection of algorithms and functions to aid statistical modeling. 
Includes growth curve comparisons, limiting dilution analysis (aka ELDA), mixed linear models, heteroscedastic regression, inverse-Gaussian probability calculations, Gauss quadrature and a secure convergence algorithm for nonlinear models. Includes advanced generalized linear model functions that implement secure convergence, dispersion modeling and Tweedie power-law families. ","Published":"2017-06-18","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"statnet","Version":"2016.9","Title":"Software Tools for the Statistical Analysis of Network Data","Description":"An integrated set of tools for the representation, visualization, analysis, and simulation of network data. For an introduction, type help(package='statnet').","Published":"2016-09-10","License":"GPL-3 + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"statnet.common","Version":"3.3.0","Title":"Common R Scripts and Utilities Used by the Statnet Project\nSoftware","Description":"Non-statistical utilities used by the software developed by the Statnet Project. 
They may also be of use to others.","Published":"2015-10-24","License":"GPL-3 + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"statnetWeb","Version":"0.4.0","Title":"A Graphical User Interface for Network Modeling with 'Statnet'","Description":"A graphical user interface for network modeling with the 'statnet'\n software.","Published":"2015-11-05","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"staTools","Version":"0.1.0","Title":"Statistical Tools for Social Network Analysis","Description":"A collection of statistical tools for social network analysis, with strong emphasis on the analysis of discrete powerlaw distributions and statistical hypothesis tests.","Published":"2015-03-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"StatPerMeCo","Version":"0.1.0","Title":"Statistical Performance Measures to Evaluate Covariance Matrix\nEstimates","Description":"Statistical performance measures used in the econometric literature to evaluate conditional covariance/correlation matrix estimates (MSE, MAE, Euclidean distance, Frobenius distance, Stein distance, asymmetric loss function, eigenvalue loss function and the loss function defined in Eq. (4.6) of Engle et al. (2016) ). Additionally, compute Eq. (3.1) and (4.2) of Li et al. (2016) to compare the factor loading matrix. The statistical performance measures implemented have been previously used in, for instance, Laurent et al. (2012) , Amendola et al. (2015) and Becker et al. 
(2015) .","Published":"2017-04-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"statprograms","Version":"0.1.0","Title":"Graduate Statistics Program Datasets","Description":"A small collection of data on graduate statistics programs from the United States.","Published":"2016-08-15","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"statquotes","Version":"0.2","Title":"Quotes on Statistics, Data Visualization and Science","Description":"Generates a random quotation from a data base of quotes on topics\n in statistics, data visualization and science.","Published":"2016-09-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"StatRank","Version":"0.0.6","Title":"Statistical Rank Aggregation: Inference, Evaluation, and\nVisualization","Description":"A set of methods to implement Generalized Method of Moments and Maximal\n Likelihood methods for Random Utility Models. These methods are meant to\n provide inference on rank comparison data. These methods accept full,\n partial, and pairwise rankings, and provides methods to break down full or\n partial rankings into their pairwise components. Please see Generalized\n Method-of-Moments for Rank Aggregation from NIPS 2013 for a description of\n some of our methods.","Published":"2015-09-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"statsgrokse","Version":"0.1.4","Title":"R 'API' Binding to Stats.grok.se Server","Description":"The server\n provides data and an 'API' for Wikipedia page view statistics from \n 2008 up to 2015. This package provides R bindings to the 'API'. 
","Published":"2017-03-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"STB","Version":"0.6.3.1","Title":"Simultaneous Tolerance Bounds","Description":"Provides an implementation of simultaneous tolerance bounds (STB), useful for checking whether a numeric vector fits to a hypothetical null-distribution or not.\n Furthermore, there are functions for computing STB (bands, intervals) for random variates of linear mixed models fitted with package 'VCA'. All kinds of, possibly transformed \n (studentized, standardized, Pearson-type transformed) random variates (residuals, random effects), can be assessed employing STB-methodology. ","Published":"2016-07-08","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"stcov","Version":"0.1.0","Title":"Stein's Covariance Estimator","Description":"Estimates a covariance matrix using Stein's isotonized covariance\n estimator, or a related estimator suggested by Haff.","Published":"2016-04-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"stddiff","Version":"2.0","Title":"Calculate the Standardized Difference for Numeric, Binary and\nCategory Variables","Description":"Contains three main functions including\n stddiff.numeric(), stddiff.binary() and stddiff.category().\n These are used to calculate the standardized difference between two groups.\n It is especially used to evaluate the balance between two groups\n before and after propensity score matching.","Published":"2017-04-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"stdReg","Version":"2.1","Title":"Regression Standardization","Description":"Contains functionality for regression standardization. 
Three general classes of models are allowed: generalized linear models, Cox proportional hazards models and shared gamma-Weibull frailty models.","Published":"2017-02-03","License":"LGPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"stdvectors","Version":"0.0.5","Title":"C++ Standard Library Vectors in R","Description":"Allows the creation and manipulation of C++ std::vector's in R.","Published":"2017-02-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"steadyICA","Version":"1.0","Title":"ICA and Tests of Independence via Multivariate Distance\nCovariance","Description":"Functions related to multivariate measures of independence and ICA:\n -estimate independent components by minimizing distance covariance;\n -conduct a test of mutual independence based on distance covariance; \n -estimate independent components via infomax (a popular method that generally performs worse than mdcovica, ProDenICA, and/or fastICA, but is useful for comparisons);\n -order independent components by skewness;\n -match independent components from multiple estimates;\n -other functions useful in ICA.","Published":"2015-11-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"steepness","Version":"0.2-2","Title":"Testing Steepness of Dominance Hierarchies","Description":"steepness is a package that computes steepness as a\n property of dominance hierarchies. Steepness is defined as the\n absolute slope of the straight line fitted to the normalized\n David's scores. The normalized David's scores can be obtained\n on the basis of dyadic dominance indices corrected for chance\n or by means of proportions of wins. 
Given an observed\n sociomatrix, it computes the hierarchy's steepness and estimates\n statistical significance by means of a randomization test.","Published":"2014-10-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SteinIV","Version":"0.1-1","Title":"Semi-Parametric Stein-Like Estimator with Instrumental Variables","Description":"Routines for computing different types of linear estimators, based on instrumental variables (IVs), including the semi-parametric Stein-like (SPS) estimator, originally introduced by Judge and Mittelhammer (2004) . ","Published":"2016-01-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"stellaR","Version":"0.3-3","Title":"stellar evolution tracks and isochrones","Description":"A package to manage and display stellar tracks and\n isochrones from the Pisa low-mass database. Includes tools for\n isochrone construction and track interpolation.","Published":"2013-01-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Stem","Version":"1.0","Title":"Spatio-temporal models in R","Description":"Estimation of the parameters of a spatio-temporal model\n using the EM algorithm, estimation of the parameter standard\n errors using a spatio-temporal parametric bootstrap, spatial\n mapping.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"STEPCAM","Version":"1.2","Title":"ABC-SMC Inference of STEPCAM","Description":"Collection of model estimation and model plotting functions \n related to the STEPCAM family of community assembly models. \n STEPCAM is a STEPwise Community Assembly Model that infers \n the relative contribution of Dispersal Assembly, Habitat Filtering \n and Limiting Similarity from a dataset consisting of the \n combination of trait and abundance data. 
See also for more information.","Published":"2016-09-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"stepp","Version":"3.0-11","Title":"Subpopulation Treatment Effect Pattern Plot (STEPP)","Description":"A method to explore the treatment-covariate interactions in survival or generalized \n\tlinear model (GLM) for continuous, binomial and count data arising from two treatment \n\tarms of a clinical trial. A permutation distribution approach to inference is implemented, \n\tbased on permuting the covariate values within each treatment group. ","Published":"2014-10-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"stepPenal","Version":"0.1","Title":"Stepwise Forward Variable Selection in Penalized Regression","Description":"Model Selection Based on Combined Penalties. This package implements a stepwise forward variable selection algorithm based on a penalized likelihood criterion that combines the L0 with L2 or L1 norms.","Published":"2016-12-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"stepPlr","Version":"0.92","Title":"L2 penalized logistic regression with a stepwise variable\nselection","Description":"L2 penalized logistic regression for both continuous and\n discrete predictors, with forward stagewise/forward stepwise\n variable selection procedure.","Published":"2010-07-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"stepR","Version":"2.0-1","Title":"Multiscale Change-Point Inference","Description":"Allows fitting of step-functions to univariate serial data where neither the number of jumps nor their positions is known by implementing the multiscale regression estimators SMUCE and HSMUCE. 
In addition, confidence intervals for the change-point locations and bands for the unknown signal can be obtained.","Published":"2017-05-19","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"stepwise","Version":"0.3","Title":"Stepwise detection of recombination breakpoints","Description":"A stepwise approach to identifying recombination\n breakpoints in a sequence alignment.","Published":"2012-11-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"StepwiseTest","Version":"1.0","Title":"Multiple Testing Method to Control Generalized Family-Wise Error\nRate and False Discovery Proportion","Description":"Collection of stepwise procedures to conduct multiple hypotheses testing. The details of the stepwise algorithm can be found in Romano and Wolf (2007) and Hsu, Kuan, and Yen (2014) .","Published":"2016-09-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"StereoMorph","Version":"1.6.1","Title":"Stereo Camera Calibration and Reconstruction","Description":"Functions for the collection of 3D points and curves using a stereo camera setup.","Published":"2017-03-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"stheoreme","Version":"1.2","Title":"Klimontovich's S-Theorem Algorithm Implementation and Data\nPreparation Tools","Description":"Functions implementing the procedure of entropy comparison between two data samples after the renormalization of respective probability distributions with the algorithm designed by Klimontovich (Zeitschrift fur Physik B Condensed Matter. 1987, Volume 66, Issue 1, pp 125-127) and extended by Anishchenko (Proc. SPIE 2098, Computer Simulation in Nonlinear Optics. 1994, pp.130-136). 
The package also includes data preparation tools which can be used separately for various applications.","Published":"2015-03-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"STI","Version":"0.1","Title":"Calculation of the Standardized Temperature Index","Description":"A set of functions for computing the Standardized Temperature Index (STI).","Published":"2015-08-19","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"Stickbreaker","Version":"1.0.0","Title":"Fits Stickbreaking, Multiplicative and Additive Models to Data","Description":"Genetically modified organisms are used to test the phenotypic\n effects of mutation. Often, multiple substitutions impact organismal fitness\n differently than single substitutions (epistatic interactions). This package\n fits three basic models (additive, multiplicative, and stickbreaking) to fitness\n data and suggests the best fitting model by multinomial regression. Stickbreaker\n can also be used to simulate fitness data.","Published":"2017-03-10","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"sticky","Version":"0.5.2","Title":"Persist Attributes Across Data Operations","Description":"In base R, object attributes are lost when objects are modified by\n common data operations such as subset, filter, slice, append, extract\n etc. This package allows objects to be marked as 'sticky' and have\n attributes persisted during these operations or when inserted\n into or extracted from recursive (i.e. list- or table-like) objects.","Published":"2017-03-20","License":"GPL-2 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"stilt","Version":"1.0.1","Title":"Separable Gaussian Process Interpolation (Emulation)","Description":"Functions to build and use an interpolator (\"emulator\") for time series or 1D regularly spaced data in multidimensional space. The standard usage is for interpolating time-resolved computer model output between model parameter settings. 
It can also be used for interpolating multivariate data (e.g., oceanographic time-series data, etc.) in space. There are functions to test the emulator using cross-validation, and to produce contour plots over 2D slices in model input parameter (or physical) space.","Published":"2014-05-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"stinepack","Version":"1.3","Title":"Stineman, a consistently well behaved method of interpolation","Description":"A consistently well behaved method of interpolation based\n on piecewise rational functions using Stineman's algorithm","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"stlplus","Version":"0.5.1","Title":"Enhanced Seasonal Decomposition of Time Series by Loess","Description":"Decompose a time series into seasonal, trend, and remainder\n components using an implementation of Seasonal Decomposition of Time\n Series by Loess (STL) that provides several enhancements over the STL\n method in the stats package. These enhancements include handling missing\n values, providing higher order (quadratic) loess smoothing with automated\n parameter choices, frequency component smoothing beyond the seasonal and\n trend components, and some basic plot methods for diagnostics.","Published":"2016-01-06","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"stm","Version":"1.2.2","Title":"Estimation of the Structural Topic Model","Description":"The Structural Topic Model (STM) allows researchers \n to estimate topic models with document-level covariates. 
\n The package also includes tools for model selection, visualization,\n and estimation of topic-covariate regressions.","Published":"2017-03-28","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"stmBrowser","Version":"1.0","Title":"Structural Topic Model Browser","Description":"This visualization allows users to interactively explore the relationships between topics and the covariates estimated from the stm package in R. ","Published":"2015-07-16","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"stmCorrViz","Version":"1.3","Title":"A Tool for Structural Topic Model Visualizations","Description":"Generates an interactive visualization of topic correlations/\n hierarchy in a Structural Topic Model (STM) of Roberts, Stewart, and Tingley.\n The package performs a hierarchical clustering of topics which are then exported\n to a JSON object and visualized using D3.","Published":"2016-07-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"STMedianPolish","Version":"0.2","Title":"Spatio-Temporal Median Polish","Description":"Analyses spatio-temporal data, decomposing data in n-dimensional arrays and using the median polish technique.","Published":"2017-03-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"stmgp","Version":"1.0","Title":"Rapid and Accurate Genetic Prediction Modeling for Genome-Wide\nAssociation or Whole-Genome Sequencing Study Data","Description":"Rapidly build accurate genetic prediction models for genome-wide association or whole-genome sequencing study data by smooth-threshold multivariate genetic prediction (STMGP) method. Variable selection is performed using marginal association test p-values with an optimal p-value cutoff selected by Cp-type criterion. Quantitative and binary traits are modeled respectively via linear and logistic regression models. A function that works through PLINK software (Purcell et al. 2007 , Chang et al. 2015 ) is provided. 
Covariates can be included in the regression model.","Published":"2017-03-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"stmgui","Version":"0.1.6","Title":"Shiny Application for Creating STM Models","Description":"Provides an application that acts as a GUI for the 'stm' text analysis package.","Published":"2016-12-20","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"StMoMo","Version":"0.4.0","Title":"Stochastic Mortality Modelling","Description":"Implementation of the family of generalised age-period-cohort\n stochastic mortality models. This family of models encompasses many models\n proposed in the actuarial and demographic literature including the \n Lee-Carter (1992) and\n the Cairns-Blake-Dowd (2006) models. \n It includes functions for fitting mortality models, analysing their \n goodness-of-fit and performing mortality projections and simulations.","Published":"2017-04-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"StMoSim","Version":"3.0","Title":"Plots a QQ-Norm Plot with several Gaussian simulations","Description":"Plots a QQ-Norm Plot with several Gaussian simulations.","Published":"2014-10-16","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"stocc","Version":"1.30","Title":"Fit a Spatial Occupancy Model via Gibbs Sampling","Description":"Fit a spatial-temporal occupancy model using\n a probit formulation instead of a traditional logit\n model.","Published":"2015-08-23","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"stochprofML","Version":"1.2","Title":"Stochastic Profiling using Maximum Likelihood Estimation","Description":"This is an R package accompanying the paper \"Parameterizing cell-to-cell regulatory heterogeneities via stochastic transcriptional profiles\" by Sameer S Bajikar, Christiane Fuchs, Andreas Roller, Fabian J Theis and Kevin A Janes (PNAS 2014, 111(5), E626-635). 
In this paper, we measure expression profiles from small heterogeneous populations of cells, where each cell is assumed to be from a mixture of lognormal distributions. We perform maximum likelihood estimation in order to infer the mixture ratio and the parameters of these lognormal distributions from the cumulated expression measurements.","Published":"2014-10-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"stochvol","Version":"1.3.2","Title":"Efficient Bayesian Inference for Stochastic Volatility (SV)\nModels","Description":"Efficient algorithms for fully Bayesian estimation of stochastic volatility (SV) models via Markov chain Monte Carlo (MCMC) methods.","Published":"2016-10-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"StockChina","Version":"0.3.1","Title":"Real-Time Stock Price & Volume in China Market","Description":"With this package, users can obtain the real-time price and volume information of stocks in China market, as well as the information of the stock index. This package adopted the API from Sina Finance (http://finance.sina.com.cn/).","Published":"2016-01-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"stockPortfolio","Version":"1.2","Title":"Build stock models and analyze stock portfolios","Description":"Download stock data, build single index, constant\n correlation, and multigroup models, and estimate optimal stock\n portfolios. Plotting functions for the portfolio possibilities\n curve and portfolio cloud are included. A function to test a\n portfolio on a data set is also provided.","Published":"2012-03-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"stocks","Version":"1.1.1","Title":"Fast Functions for Stock Market Analysis","Description":"Provides functions for analyzing historical performance of stocks or other investments. 
Functions are written in C++ to quickly calculate maximum draw-down, Sharpe ratio, Sterling ratio, and other commonly used metrics of stock performance.","Published":"2015-02-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"stoichcalc","Version":"1.1-3","Title":"R Functions for Solving Stoichiometric Equations","Description":"Given a list of substance compositions, a list of\n substances involved in a process, and a list of constraints in\n addition to mass conservation of elementary constituents, the\n package contains functions to build the substance composition\n matrix, to analyze the uniqueness of process stoichiometry, and\n to calculate stoichiometric coefficients if process\n stoichiometry is unique. (See Reichert, P. and Schuwirth, N.,\n A generic framework for deriving process stoichiometry in\n environmental models, Environmental Modelling and Software 25,\n 1241-1251, 2010 for more details.)","Published":"2013-02-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Storm","Version":"1.2","Title":"Write Storm Bolts in R using the Storm Multi-Language Protocol","Description":"Storm is a distributed real-time computation system. Similar to how\n Hadoop provides a set of general primitives for doing batch processing, Storm\n provides a set of general primitives for doing real-time computation.\n\n Storm includes a \"Multi-Language\" (or \"Multilang\") Protocol to allow\n implementation of Bolts and Spouts in languages other than Java. 
This R\n extension provides implementations of utility functions to allow an application\n developer to focus on application-specific functionality rather than Storm/R\n communications plumbing.","Published":"2015-01-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"stormwindmodel","Version":"0.1.0","Title":"Model Tropical Cyclone Wind Speeds","Description":"Allows users to input tracking data for a hurricane\n or other tropical storm, along with a data frame of grid points at which\n to model wind speeds. Functions in this package will then calculate wind\n speeds at each point based on wind model equations. This modeling framework\n is currently set up to model winds for North American locations with \n Atlantic basin storms. This work was supported \n in part by grants from the National Institute of Environmental Health \n Sciences (R00ES022631), the National Science Foundation (1331399), and the \n Department of Energy (DE-FG02-08ER64644).","Published":"2017-01-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"storr","Version":"1.1.1","Title":"Simple Key Value Stores","Description":"Creates and manages simple key-value stores. These can\n use a variety of approaches for storing the data. 
This package\n implements the base methods and support for file system and\n in-memory stores.","Published":"2017-06-02","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"stoRy","Version":"0.1.0","Title":"Theme Enrichment Analysis for Stories","Description":"An implementation of the hypergeometric test to check for over-represented themes in a storyset relative to a background set of stories.","Published":"2017-05-09","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"stosim","Version":"0.0.12","Title":"Stochastic Simulator for Reliability Modeling of Repairable\nSystems","Description":"A toolkit for Reliability, Availability and Maintainability (RAM) modeling of industrial process systems.","Published":"2014-06-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"STPGA","Version":"4.0","Title":"Selection of Training Populations by Genetic Algorithm","Description":"Selects a training population calibrated to the test data in high dimensional prediction problems, assuming that the explanatory variables are observed for all of the individuals. Once a \"good\" training set is identified, the response variable can be obtained only for this set to build a model for predicting the response in the test set. The algorithms in the package can be tweaked to solve some other subset selection problems. 
","Published":"2017-03-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"stplanr","Version":"0.1.8","Title":"Sustainable Transport Planning","Description":"Functionality and data access tools for transport planning,\n including origin-destination analysis, route allocation and modelling travel\n patterns.","Published":"2017-06-02","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"stpm","Version":"1.6.6","Title":"Stochastic Process Model for Analysis of Longitudinal and\nTime-to-Event Outcomes","Description":"Utilities to estimate parameters of the models with survival functions \n induced by stochastic covariates. Miscellaneous functions for data preparation \n and simulation are also provided. For more information, see: \n (i) \"Stochastic model for analysis of longitudinal data on aging and mortality\" \n by Yashin A. et al. (2007), \n Mathematical Biosciences, 208(2), 538-551, ;\n (ii) \"Health decline, aging and mortality: how are they related?\" \n by Yashin A. et al. (2007), \n Biogerontology 8(3), 291-302, .","Published":"2017-04-07","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"stpp","Version":"1.0-5","Title":"Space-Time Point Pattern simulation, visualisation and analysis","Description":"A package for analysing, simulating and displaying space-time point patterns.","Published":"2014-12-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"stppResid","Version":"1.1","Title":"Perform residual analysis on space-time point process models","Description":"Implement transformation-based and pixel-based residual\n analysis of spatial-temporal point process models.","Published":"2012-11-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"stR","Version":"0.3","Title":"STR Decomposition","Description":"Methods for decomposing seasonal data: STR (a Seasonal-Trend decomposition procedure based on Regression) and Robust STR. 
In some ways, STR is similar to Ridge Regression and Robust STR can be related to LASSO. They allow for multiple seasonal components, multiple linear covariates with constant, flexible and seasonal influence. Seasonal patterns (for both seasonal components and seasonal covariates) can be fractional and flexible over time; moreover they can be either strictly periodic or have a more complex topology. The methods provide confidence intervals for the estimated components. The methods can be used for forecasting.","Published":"2017-01-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"StrainRanking","Version":"1.1","Title":"Ranking of pathogen strains","Description":"Regression-based ranking of pathogen strains with respect to their contributions to natural epidemics, using demographic and genetic data sampled in the course of the epidemics.","Published":"2014-02-05","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"strap","Version":"1.4","Title":"Stratigraphic Tree Analysis for Palaeontology","Description":"Functions for the stratigraphic analysis of phylogenetic trees.","Published":"2014-11-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"strat","Version":"0.1","Title":"An Implementation of the Stratification Index","Description":"An implementation of the stratification index proposed by Zhou (2012) .\n The package provides two functions, srank, which returns stratum-specific\n information, including population share and average percentile rank; and strat,\n which returns the stratification index and its approximate standard error.\n When a grouping factor is specified, strat also provides a detailed decomposition\n of the overall stratification into between-group and within-group components.","Published":"2016-11-23","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"strataG","Version":"2.0.2","Title":"Summaries and Population Structure Analyses of Genetic Data","Description":"A toolkit for analyzing 
stratified population genetic data. \n Functions are provided for summarizing and checking loci \n (haploid, diploid, and polyploid), single stranded DNA sequences,\n calculating most population subdivision metrics, and running external programs \n such as structure and fastsimcoal. The package is further described in \n Archer et al (2016) .","Published":"2017-04-11","License":"GNU General Public License","snapshot_date":"2017-06-23"} {"Package":"stratbr","Version":"1.2","Title":"Optimal Stratification in Stratified Sampling","Description":"An optimization algorithm applied to\n the stratification problem. This function aims\n at constructing optimal strata with an optimization algorithm\n based on a global optimisation technique called Biased\n Random Key Genetic Algorithms.","Published":"2017-05-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"strategicplayers","Version":"1.0","Title":"Strategic Players","Description":"Identifies individuals in a social network who should be the intervention\n subjects for a network intervention in which you have a group of targets, a\n group of avoiders, and a group that is neither.","Published":"2016-09-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"Strategy","Version":"1.0.0","Title":"Generic Framework to Analyze Trading Strategies","Description":"Users can build and test customized quantitative trading strategies. Some quantitative trading strategies are already implemented, e.g. various moving-average filters with trend following approaches.\n The implemented class called \"Strategy\" allows users to access several methods to analyze performance figures, plots and backtest the strategies.\n Furthermore, custom strategies can be added; a generic template is available. 
The custom strategies require a certain input and output so they can be called from the Strategy-constructor.","Published":"2016-12-09","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"stratification","Version":"2.2-6","Title":"Univariate Stratification of Survey Populations","Description":"Univariate stratification of survey populations with a generalization of the \n Lavallee-Hidiroglou method of stratum construction. The generalized method takes into account \n a discrepancy between the stratification variable and the survey variable. The determination \n of the optimal boundaries also incorporate, if desired, an anticipated non-response, a take-all \n stratum for large units, a take-none stratum for small units, and a certainty stratum to ensure \n that some specific units are in the sample. The well known cumulative root frequency rule of \n Dalenius and Hodges and the geometric rule of Gunning and Horgan are also implemented. ","Published":"2017-03-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"StratifiedBalancing","Version":"0.2.0","Title":"Performs Stratified Covariate Balancing for Data with Discrete\nand Continuous Outcome Variables","Description":"Stratified covariate balancing through naturally occurring strata to adjust for confounding and interaction effects. Contains 4 primary functions which perform stratification, sensitivity analysis and return adjusted odds along with naturally occurring strata.","Published":"2016-07-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"StratifiedRF","Version":"0.1.1","Title":"Builds Trees by Sampling Variables from Groups","Description":"Random Forest that works with groups of predictor variables. When building a tree, a number of variables is taken randomly from each group separately, thus ensuring that it contains variables from each group. Useful when rows contain information about different things (e.g. 
user information and product information) and it's not sensible to make a prediction with information from only one group of variables, or when there are far more variables from one group than the other and it's desired to have groups appear evenly on trees.\n Trees are grown using the C5.0 algorithm. Currently works for classification only.","Published":"2017-06-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"stratigraph","Version":"0.66","Title":"Toolkit for the plotting and analysis of stratigraphic and\npalaeontological data","Description":"A collection of tools for plotting and analyzing paleontological and geological data distributed through time in stratigraphic cores or sections. Includes some miscellaneous functions for handling other kinds of palaeontological and paleoecological data.","Published":"2015-01-14","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"StratSel","Version":"1.2","Title":"Strategic Selection Estimator","Description":"Provides functions to estimate a strategic selection estimator. A strategic selection estimator is an agent error model in which the two random components are not assumed to be orthogonal. In addition this package provides generic functions to print and plot objects of its class as well as the necessary functions to create tables for LaTeX. There is also a function to create dyadic data sets.","Published":"2016-05-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"stratvns","Version":"1.0","Title":"Optimal Stratification in Stratified Sampling Optimization\nAlgorithm","Description":"An optimization algorithm applied\n to the stratification problem.\n It aims to delimit the population strata\n and define the allocation of the sample, considering\n the following objective: minimize the sample size given\n a fixed precision level. 
An exhaustive enumeration method\n is applied to small problems, while for problems of greater\n complexity the algorithm is based on the metaheuristic Variable\n Neighborhood Decomposition Search with Path Relink.","Published":"2017-05-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"straweib","Version":"1.0","Title":"Stratified Weibull Regression Model","Description":"The main function is icweib, which fits a stratified Weibull proportional hazards model for left censored, right censored, interval censored, and non-censored survival data. We parameterize the Weibull regression model so that it allows a stratum-specific baseline hazard function, but where the effects of other covariates are assumed to be constant across strata. ","Published":"2013-07-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"stream","Version":"1.2-4","Title":"Infrastructure for Data Stream Mining","Description":"A framework for data stream modeling and associated data mining tasks such as clustering and classification. The development of this package was supported in part by NSF IIS-0948893 and NIH R21HG005912.","Published":"2017-02-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"StreamMetabolism","Version":"1.1.2","Title":"Calculate Single Station Metabolism from Diurnal Oxygen Curves","Description":"Provides functions to calculate Gross Primary Productivity, Net Ecosystem Production, and Ecosystem Respiration from single station diurnal Oxygen curves. 
","Published":"2016-09-12","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"streamMOA","Version":"1.1-2","Title":"Interface for MOA Stream Clustering Algorithms","Description":"Interface for data stream clustering algorithms implemented in the MOA (Massive Online Analysis) framework.","Published":"2015-09-07","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"streamR","Version":"0.2.1","Title":"Access to Twitter Streaming API via R","Description":"This package provides a series of functions that allow R users\n to access Twitter's filter, sample, and user streams, and to\n parse the output into data frames.","Published":"2014-01-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"stremr","Version":"0.4","Title":"Streamlined Estimation of Survival for Static, Dynamic and\nStochastic Treatment and Monitoring Regimes","Description":"Analysis of longitudinal time-to-event or time-to-failure data. \n Estimates the counterfactual discrete survival curve under static, dynamic and \n stochastic interventions on treatment (exposure) and monitoring events over time. \n Estimators (IPW, MSM-IPW, GCOMP, longitudinal TMLE) adjust for measured time-varying \n confounding and informative right-censoring. Model fitting can be performed either \n with GLM or H2O-3 machine learning libraries.\n The exposure, monitoring and censoring variables can be coded as either binary, \n categorical or continuous. Each can be multivariate (e.g., can use more than one \n column of dummy indicators for different censoring events). \n The input data needs to be in long format.","Published":"2017-01-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"stressr","Version":"1.0.0","Title":"Fetch and plot financial stress index and component data","Description":"Forms queries to submit to the Cleveland Federal Reserve Bank web\n site's financial stress index data site. 
Provides query functions for both\n the composite stress index and the components data. By default the download\n includes daily time series data starting September 25, 1991. The functions\n return a class of either type easing or cfsi which contains a list of items\n related to the query and its graphical presentation. The list includes the\n time series data as an xts object. The package provides four lattice time\n series plots to render the time series data in a manner similar to the\n bank's own presentation.","Published":"2014-06-30","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"StressStrength","Version":"1.0.2","Title":"Computation and Estimation of Reliability of Stress-Strength\nModels","Description":"Computes reliability of (normal) stress-strength models and builds two-sided or one-sided confidence intervals according to different approximate procedures.","Published":"2016-05-01","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"stringb","Version":"0.1.13","Title":"Convenient Base R String Handling","Description":"Base R already ships with string handling capabilities 'out-\n of-the-box' but lacks streamlined function names and workflow. The\n 'stringi' ('stringr') package on the other hand has well named functions,\n extensive Unicode support and allows for a streamlined workflow. On the other\n hand it adds dependencies and regular expression interpretation between base R\n functions and 'stringi' functions might differ. This package aims at providing\n a solution to the use case of unwanted dependencies on the one hand but the need\n for streamlined text processing on the other. 
The package's functions are solely\n based on wrapping base R functions into 'stringr'/'stringi' like function names.\n Along the way it adds one or two extra functions and last but not least provides\n all functions as generics, therefore allowing for adding methods for other text\n structures besides plain character vectors.","Published":"2016-11-01","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"stringdist","Version":"0.9.4.4","Title":"Approximate String Matching and String Distance Functions","Description":"Implements an approximate string matching version of R's native\n 'match' function. Can calculate various string distances based on edits\n (Damerau-Levenshtein, Hamming, Levenshtein, optimal string alignment), qgrams (q-\n gram, cosine, jaccard distance) or heuristic metrics (Jaro, Jaro-Winkler). An\n implementation of soundex is provided as well. Distances can be computed between\n character vectors while taking proper care of encoding or between integer\n vectors representing generic sequences.","Published":"2016-12-16","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"stringformattr","Version":"0.1.1","Title":"Dynamic String Formatting","Description":"Pass named and unnamed character vectors into specified positions\n in strings. This represents an attempt to replicate some of python's string\n formatting.","Published":"2017-01-20","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"stringgaussnet","Version":"1.1","Title":"PPI and Gaussian Network Construction from Transcriptomic\nAnalysis Results Integrating a Multilevel Factor","Description":"A toolbox for the construction of protein-protein interaction networks through the 'STRING' application programming interface, and the inference of Gaussian networks through the 'SIMoNe' and 'WGCNA' approaches, from DE genes analysis results and expression data. 
Additional functions are provided to import automatically networks into an active 'Cytoscape' session.","Published":"2015-07-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"stringi","Version":"1.1.5","Title":"Character String Processing Facilities","Description":"Allows for fast, correct, consistent, portable,\n as well as convenient character string/text processing in every locale\n and any native encoding. Owing to the use of the ICU library,\n the package provides R users with platform-independent functions\n known to Java, Perl, Python, PHP, and Ruby programmers. Available\n features include: pattern searching (e.g., with ICU Java-like regular\n expressions or the Unicode Collation Algorithm), random string generation,\n case mapping, string transliteration, concatenation,\n Unicode normalization, date-time formatting and parsing, etc.","Published":"2017-04-07","License":"file LICENSE","snapshot_date":"2017-06-23"} {"Package":"stringr","Version":"1.2.0","Title":"Simple, Consistent Wrappers for Common String Operations","Description":"A consistent, simple and easy to use set of wrappers around the\n fantastic 'stringi' package. 
All function and argument names (and positions)\n are consistent, all functions deal with \"NA\"'s and zero length vectors\n in the same way, and the output from one function is easy to feed into\n the input of another.","Published":"2017-02-18","License":"GPL-2 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"strip","Version":"0.1.1","Title":"Lighten your R Model Outputs","Description":"The strip function deletes components of R model outputs that are useless for specific purposes, such as predict[ing], print[ing], summary[izing], etc.","Published":"2017-01-13","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"stripless","Version":"1.0-3","Title":"Structured Trellis Displays Without Strips for Lattice Graphics","Description":"For making Trellis-type conditioning plots without strip labels.\n This is useful for displaying the structure of results from factorial designs\n and other studies when many conditioning variables would clutter the display\n with layers of redundant strip labels. Settings of the variables are encoded by\n layout and spacing in the trellis array and decoded by a separate legend. The\n functionality is implemented by a single S3 generic strucplot() function that\n is a wrapper for the Lattice package's xyplot() function. This allows access to\n all Lattice graphics capabilities in the usual way.","Published":"2016-09-12","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"striprtf","Version":"0.4.5","Title":"Extract Text from RTF File","Description":"Extracts plain text from RTF (Rich Text Format) file.","Published":"2017-06-05","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"StroupGLMM","Version":"0.1.0","Title":"R Codes and Datasets for Generalized Linear Mixed Models: Modern\nConcepts, Methods and Applications by Walter W. Stroup","Description":"R Codes and Datasets for Stroup, W. W. (2012). 
Generalized Linear Mixed Models: Modern Concepts, Methods and Applications, CRC Press.","Published":"2016-04-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"strucchange","Version":"1.5-1","Title":"Testing, Monitoring, and Dating Structural Changes","Description":"Testing, monitoring and dating structural changes in (linear)\n regression models. strucchange features tests/methods from\n\t the generalized fluctuation test framework as well as from\n\t the F test (Chow test) framework. This includes methods to\n\t fit, plot and test fluctuation processes (e.g., CUSUM, MOSUM,\n\t recursive/moving estimates) and F statistics, respectively.\n It is possible to monitor incoming data online using\n fluctuation processes.\n Finally, the breakpoints in regression models with structural\n changes can be estimated together with confidence intervals.\n Emphasis is always given to methods for visualizing the data.","Published":"2015-06-06","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"StructFDR","Version":"1.2","Title":"False Discovery Control Procedure Integrating the Prior\nStructure Information","Description":"Performs more powerful false discovery rate (FDR) control for microbiome data, taking into account the prior phylogenetic relationship among bacterial species. 
As a general methodology, it is applicable to any type of (genomic) data with prior structure information.","Published":"2017-04-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"structree","Version":"1.1.3","Title":"Tree-Structured Clustering","Description":"Tree-structured modelling of categorical predictors or measurement\n units.","Published":"2016-12-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"structSSI","Version":"1.1.1","Title":"Multiple Testing for Hypotheses with Hierarchical or Group\nStructure","Description":"Performs multiple testing corrections that take specific structure of hypotheses into account.","Published":"2015-05-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"strum","Version":"0.6.2","Title":"STRUctural Modeling of Latent Variables for General Pedigree","Description":"Implements a broad class of latent variable and structural equation models for general pedigree data.","Published":"2015-06-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"strvalidator","Version":"1.9.0","Title":"Process Control and Internal Validation of Forensic STR Kits","Description":"An open source platform for validation and process control.\n Tools to analyse data from internal validation of forensic short tandem\n repeat (STR) kits are provided. The tools are developed to provide\n the necessary data to conform with guidelines for internal validation\n issued by the European Network of Forensic Science Institutes (ENFSI)\n DNA Working Group, and the Scientific Working Group on DNA Analysis Methods\n (SWGDAM). 
A front-end graphical user interface is provided.\n More information about each function can be found in the\n respective help documentation.","Published":"2017-03-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"stsm","Version":"1.9","Title":"Structural Time Series Models","Description":"Fit the basic structural time series model by maximum likelihood.","Published":"2016-10-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"stsm.class","Version":"1.3","Title":"Class and Methods for Structural Time Series Models","Description":"This package defines an S4 class for structural time series models \n and provides some basic methods to work with it.","Published":"2014-07-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"stubthat","Version":"1.2.0","Title":"Stubbing Framework for R","Description":"Create stubs of functions for use while testing.","Published":"2017-05-03","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"StVAR","Version":"1.1","Title":"Student's t Vector Autoregression (StVAR)","Description":"Estimation of\n multivariate Student's t dynamic regression models for given degrees of freedom and lag length. 
Users can also specify the trends and dummies of any kind in matrix form.","Published":"2017-02-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"stylo","Version":"0.6.4","Title":"Functions for a Variety of Stylometric Analyses","Description":"A number of functions, supplemented by GUI, to perform various analyses in the field of computational stylistics, authorship attribution, etc.","Published":"2016-10-23","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"subcopem2D","Version":"1.2","Title":"Bivariate Empirical Subcopula","Description":"Calculate empirical subcopula and dependence measures from a given bivariate sample.","Published":"2017-01-31","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"SubCultCon","Version":"1.0","Title":"Maximum-Likelihood Cultural Consensus Analysis with Sub-Cultures","Description":"The three functions in the package compute the maximum likelihood estimates of the informants' competence scores, tests for two answer keys with known groups, and finds \"best\" split of the informants into sub-culture groups.","Published":"2013-09-25","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"subdetect","Version":"1.1","Title":"Detect Subgroup with an Enhanced Treatment Effect","Description":"A test for the existence of a subgroup with enhanced treatment effect. And, a sample size calculation procedure for the subgroup detection test.","Published":"2016-05-10","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"subgroup","Version":"1.1","Title":"Methods for exploring treatment effect heterogeneity in subgroup\nanalysis of clinical trials","Description":"Produces various measures of expected treatment effect heterogeneity under an assumption of homogeneity across subgroups. 
Graphical presentations are created to compare these expected differences with the observed differences.","Published":"2014-08-01","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"SubgrpID","Version":"0.11","Title":"Patient Subgroup Identification for Clinical Drug Development","Description":"Function Wrapper contains four algorithms for developing threshold-based multivariate (prognostic/predictive) biomarker signatures via bootstrapping and aggregating of thresholds from trees, Monte-Carlo variations of the Adaptive Indexing method and Patient Rule Induction Method. Variable selection is automatically built-in to these algorithms. Final signatures are returned with interaction plots for predictive signatures. Cross-validation performance evaluation and testing dataset results are also output.","Published":"2017-03-04","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"SubLasso","Version":"1.0","Title":"Gene selection using Lasso for Microarray data with user-defined\ngenes fixed in model","Description":"The package implements a convenient procedure for microarray studies, which is to do gene selection and classification simultaneously for binary outcomes. Users need not tune the parameters and can fix any genes that they desire to keep in the model. 
The K-folds cross validation results are returned.","Published":"2014-03-26","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"sublime","Version":"1.3","Title":"Automatic Lesion Incidence Estimation and Detection using\nMulti-Modality Longitudinal Magnetic Resonance Images","Description":"Creates probability maps of incident and enlarging lesion voxels\n from a baseline and followup magnetic resonance imaging study in \n patients with multiple sclerosis.","Published":"2016-09-30","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"subniche","Version":"0.9.6","Title":"Within Outlying Mean Indexes: Refining the OMI Analysis","Description":"Complementary indexes calculation to the Outlying Mean Index analysis to explore niche shift of a community and biological constraint within an Euclidean space, with graphical displays.","Published":"2017-04-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SubpathwayGMir","Version":"1.0","Title":"Identify Metabolic Subpathways Mediated by MicroRNAs","Description":"Routines for identifying metabolic subpathways mediated by microRNAs (miRNAs) through topologically locating miRNAs and genes within reconstructed Kyoto Encyclopedia of Genes and Genomes (KEGG) metabolic pathway graphs embedded by miRNAs. (1) This package can obtain the reconstructed KEGG metabolic pathway graphs with genes and miRNAs as nodes, through converting KEGG metabolic pathways to graphs with genes as nodes and compounds as edges, and then integrating miRNA-target interactions verified by low-throughput experiments from four databases (TarBase, miRecords, mirTarBase and miR2Disease) into converted pathway graphs. 
(2) This package can locate metabolic subpathways mediated by miRNAs by topologically analyzing the \"lenient distance\" of miRNAs and genes within reconstructed KEGG metabolic pathway graphs. (3) This package can identify significantly enriched miRNA-mediated metabolic subpathways based on located subpathways by the hypergeometric test. (4) This package can support six species for metabolic subpathway identification, such as Caenorhabditis elegans, Drosophila melanogaster, Danio rerio, Homo sapiens, Mus musculus and Rattus norvegicus, and users only need to update the organism-specific environment variables of interest.","Published":"2015-05-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SubpathwayLNCE","Version":"1.0","Title":"Identify Signal Subpathways Competitively Regulated by LncRNAs\nBased on ceRNA Theory","Description":"Identify dysfunctional subpathways competitively regulated by lncRNAs through integrating lncRNA-mRNA expression profile and pathway topologies. ","Published":"2016-01-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"subplex","Version":"1.2-2","Title":"Unconstrained Optimization using the Subplex Algorithm","Description":"The subplex algorithm for unconstrained optimization, developed by Tom Rowan .","Published":"2016-11-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"subprocess","Version":"0.8.0","Title":"Manage Sub-Processes in R","Description":"Create and handle multiple sub-processes in R, exchange\n data over standard input and output streams, control their life cycle.","Published":"2017-01-10","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"subrank","Version":"0.9.7","Title":"Computes Copula using Ranks and Subsampling","Description":"Estimation of copula using ranks and subsampling. The main feature of this method is that simulation studies show a low sensitivity to dimension, on realistic cases. 
Vignette provides some theoretical documentation.","Published":"2016-04-01","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"subscore","Version":"2.0","Title":"Computing Subscores in Classical Test Theory and Item Response\nTheory","Description":"Functions for computing subscores for a test using different\n methods in both classical test theory (CTT) and item response theory (IRT). This\n package enables three sets of subscoring methods within the framework of CTT\n and IRT: Wainer's augmentation method, Haberman's three subscoring methods, and\n Yen's objective performance index (OPI). The package also includes the function\n to compute Proportional Reduction of Mean Squared Errors (PRMSEs) in Haberman's\n methods which are used to examine whether test subscores are of added value.","Published":"2016-12-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"subselect","Version":"0.12-6","Title":"Selecting Variable Subsets","Description":"A collection of functions which (i) assess the quality of variable subsets as surrogates for a full data set, in either an exploratory data analysis or in the context of a multivariate linear model, and (ii) search for subsets which are optimal under various criteria.","Published":"2016-07-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"subsemble","Version":"0.0.9","Title":"An Ensemble Method for Combining Subset-Specific Algorithm Fits","Description":"Subsemble is a general subset ensemble prediction method, which can be used for small, moderate, or large datasets. Subsemble partitions the full dataset into subsets of observations, fits a specified underlying algorithm on each subset, and uses a unique form of V-fold cross-validation to output a prediction function that combines the subset-specific fits. An oracle result provides a theoretical performance guarantee for Subsemble. 
","Published":"2014-07-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"subspace","Version":"1.0.4","Title":"Interface to OpenSubspace","Description":"An interface to 'OpenSubspace', an open source framework for\n evaluation and exploration of subspace clustering algorithms in WEKA \n (see for more\n information). Also performs visualization.","Published":"2015-10-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"subspaceMOA","Version":"0.6.0","Title":"Interface to 'subspaceMOA'","Description":"An interface to 'subspaceMOA', a Framework for the Evaluation of subspace stream clustering algorithms. (see for more information.)","Published":"2017-04-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"subtype","Version":"1.0","Title":"Cluster analysis to find molecular subtypes and their assessment","Description":"subtype performs a biclustering procedure on an input\n dataset and assesses whether the resulting clusters are promising\n subtypes. Note that the R-package rsmooth should be installed\n before implementing subtype. rsmooth can be downloaded from\n http://www.meb.ki.se/~yudpaw.","Published":"2013-01-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SubVis","Version":"2.0.2","Title":"Visual Exploration of Protein Alignments Resulting from Multiple\nSubstitution Matrices","Description":"Substitution matrices are important parameters in protein alignment algorithms. These matrices represent the likelihood that an amino acid will be substituted for another during mutation. This tool allows users to apply predefined and custom matrices and then explore the resulting alignments with interactive visualizations. 'SubVis' requires the availability of a web browser.","Published":"2017-05-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"sudoku","Version":"2.6","Title":"Sudoku Puzzle Generator and Solver","Description":"Generates, plays, and solves Sudoku puzzles. 
The GUI\n playSudoku() needs package \"tkrplot\" if you are not on Windows.","Published":"2014-07-01","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"sudokuAlt","Version":"0.1-11","Title":"Tools for Making and Spoiling Sudoku Games","Description":"Tools for making, retrieving, displaying and solving sudoku games.\n This package is an alternative to the earlier sudoku-solver package,\n 'sudoku'. The present package uses a slightly different algorithm, has a\n simpler coding and presents a few more sugar tools, such as plot and print\n methods. Solved sudoku games are of some interest in Experimental Design\n as examples of Latin Square designs with additional balance constraints.","Published":"2017-05-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SUE","Version":"1.0","Title":"Subsampling method","Description":"This is a package for the subsampling method of robust\n estimation of linear regression models","Published":"2013-01-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"summariser","Version":"0.1.0","Title":"Easy Calculation and Visualisation of Confidence Intervals","Description":"Functions to speed up the exploratory analysis of simple\n datasets using 'dplyr' and 'ggplot2'. Functions are provided to do the \n common tasks of calculating confidence intervals and visualising the \n results. ","Published":"2017-03-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"summarytools","Version":"0.6.5","Title":"Dataframe Summaries, Frequency Tables and Descriptive Stats with\nVarious Output Formats","Description":"Built around three key functions: 1) freq() generates\n frequency tables reporting counts and proportions (including cumulative) for factors\n and other discrete data; 2) descr() gives all common central tendency statistics and \n measures of dispersion for numerical data; 3) dfSummary() gives as much information\n as possible on a dataframe's columns in a legible table. 
freq() and\n descr() support weights, and all three functions support 'Hmisc' or 'pander' labels. \n A variety of output formats are available (plain text, 'rmarkdown' and HTML).\n An additional misc function, what.is(), displays all common properties of an object\n (its class, type, mode, attributes, etc.) and extends the base is() function, \n checking the object against most is.() functions.","Published":"2016-12-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"sunburstR","Version":"1.0.0","Title":"'Htmlwidget' for 'Kerry Rodden' 'd3.js' Sequence Sunburst","Description":"Make interactive 'd3.js' sequence sunburst diagrams in R with the\n convenience and infrastructure of an 'htmlwidget'.","Published":"2017-06-13","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"suncalc","Version":"0.1","Title":"Compute Sun Position, Sunlight Phases, Moon Position and Lunar\nPhase","Description":"R interface to 'suncalc.js' library, part of the 'SunCalc.net' project , \n for calculating sun position, sunlight phases (times for sunrise, sunset, dusk, etc.), \n moon position and lunar phase for the given location and time.","Published":"2017-05-15","License":"GPL-2 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"Sunder","Version":"0.0.4","Title":"Quantification of the effect of geographic versus environmental\nisolation on genetic differentiation","Description":"Quantification of the effect of geographic versus environmental isolation on genetic differentiation","Published":"2015-01-13","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"SunterSampling","Version":"1.0.1","Title":"Sunter's sampling design","Description":"Functions for drawing samples according to Sunter's\n sampling design, and for computing first and second order\n inclusion probabilities","Published":"2014-11-26","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"supc","Version":"0.1","Title":"The Self-Updating Process Clustering 
Algorithms","Description":"Implements the self-updating process clustering algorithms proposed\n in Shiu and Chen (2016) .","Published":"2017-03-19","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"supclust","Version":"1.0-7","Title":"Supervised Clustering of Predictor Variables such as Genes","Description":"Methodology for supervised grouping aka \"clustering\" of\n potentially many predictor variables, such as genes etc.","Published":"2011-12-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"supcluster","Version":"1.0","Title":"Supervised Cluster Analysis","Description":"Clusters features under the assumption that each cluster has a\n random effect and there is an outcome variable that is related to the random \n effects by a linear regression. In this way the cluster analysis is \n ``supervised'' by the outcome variable. An alternate specification is that \n features in each cluster have the same compound symmetric normal distribution, \n and the conditional distribution of the outcome given the features\n has the same coefficient for each feature in a cluster. ","Published":"2015-05-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"superbiclust","Version":"1.1","Title":"Generating Robust Biclusters from a Bicluster Set (Ensemble\nBiclustering)","Description":"Biclusters are submatrices in the data matrix which\n satisfy certain conditions of homogeneity. Package contains\n functions for generating robust biclusters with respect to the\n initialization parameters for a given bicluster solution\n contained in a bicluster set in data, the procedure is also\n known as ensemble biclustering. 
The set of biclusters is\n evaluated based on the similarity of its elements (the\n overlap), and afterwards the hierarchical tree is constructed\n to obtain cut-off points for the classes of robust biclusters.\n The result is a number of robust (or super) biclusters with\n none or low overlap.","Published":"2014-11-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"superdiag","Version":"1.1","Title":"R Code for Testing Markov Chain Nonconvergence","Description":"A Comprehensive Test Suite for Markov Chain\n Nonconvergence.","Published":"2012-04-25","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"SuperExactTest","Version":"0.99.4","Title":"Exact Test and Visualization of Multi-Set Intersections","Description":"Identification of sets of objects with shared features is a common operation in all disciplines. Analysis of intersections among multiple sets is fundamental for in-depth understanding of their complex relationships. This package implements a theoretical framework for efficient computation of statistical distributions of multi-set intersections based upon combinatorial theory, and provides multiple scalable techniques for visualizing the intersection statistics. The statistical algorithm behind this package was published in Wang et al. 
(2015) .","Published":"2017-06-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"superheat","Version":"0.1.0","Title":"A Graphical Tool for Exploring Complex Datasets Using Heatmaps","Description":"A system for generating extendable and customizable heatmaps for exploring complex datasets, including big data and data with multiple data types.","Published":"2017-02-04","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"SuperLearner","Version":"2.0-21","Title":"Super Learner Prediction","Description":"Implements the super learner prediction method and contains a\n library of prediction algorithms to be used in the super learner.","Published":"2016-12-01","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"superMDS","Version":"1.0.2","Title":"Implements the supervised multidimensional scaling (superMDS)\nproposal of Witten and Tibshirani (2011)","Description":"Witten and Tibshirani (2011) Supervised multidimensional scaling for visualization, classification, and bipartite ranking. Computational Statistics and Data Analysis 55(1): 789-801.","Published":"2013-12-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"superpc","Version":"1.09","Title":"Supervised principal components","Description":"Supervised principal components for regression and\n survival analysis. Especially useful for high-dimensional\n data, including microarray data.","Published":"2012-02-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SuperRanker","Version":"1.0.1","Title":"Sequential Rank Agreement","Description":"Tools for analysing the agreement of two or more rankings of the same items. Examples are importance rankings of predictor variables and risk predictions of subjects. 
Benchmarks for agreement are computed based on random permutation and bootstrap.","Published":"2016-07-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"supervisedPRIM","Version":"2.0.0","Title":"Supervised Classification Learning and Prediction using Patient\nRule Induction Method (PRIM)","Description":"The Patient Rule Induction Method (PRIM) is typically\n used for \"bump hunting\" data mining to identify regions with abnormally\n high concentrations of data with large or small values. This package\n extends this methodology so that it can be applied to binary classification\n problems and used for prediction.","Published":"2016-10-01","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"SuppDists","Version":"1.1-9.4","Title":"Supplementary Distributions","Description":"Ten distributions supplementing those built into R.\n Inverse Gauss, Kruskal-Wallis, Kendall's Tau, Friedman's chi\n squared, Spearman's rho, maximum F ratio, the Pearson product\n moment correlation coefficient, Johnson distributions, normal\n scores and generalized hypergeometric distributions. 
In\n addition two random number generators of George Marsaglia are\n included.","Published":"2016-09-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"support.BWS","Version":"0.1-4","Title":"Basic Functions for Supporting an Implementation of Best-Worst\nScaling","Description":"Provides three basic functions that support an implementation of object case (Case 1) best-worst scaling: one for converting a two-level orthogonal main-effect design/balanced incomplete block design into questions; one for creating a data set suitable for analysis; and one for calculating count-based scores.","Published":"2017-04-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"support.BWS2","Version":"0.1-1","Title":"Basic Functions for Supporting an Implementation of Case 2\nBest-Worst Scaling","Description":"Provides three basic functions that support an implementation of Case 2 (profile case) best-worst scaling. The first is to convert an orthogonal main-effect design into questions, the second is to create a dataset suitable for analysis, and the third is to calculate count-based scores. ","Published":"2017-03-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"support.CEs","Version":"0.4-1","Title":"Basic Functions for Supporting an Implementation of Choice\nExperiments","Description":"Provides seven basic functions that support an implementation of choice experiments.","Published":"2015-08-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"supportInt","Version":"1.1","Title":"Calculates Likelihood Support Intervals for Common Data Types","Description":"Calculates likelihood based support intervals for\n several common data types including binomial, Poisson, normal, lm(), and\n glm(). For the binomial, Poisson, and normal data likelihood intervals are\n calculated via root finding algorithm. 
Additional parameters allow the\n user to specify whether they would like to receive a parametric bootstrap\n estimate of the confidence level of said support interval. For lm() and glm(),\n the function returns profile likelihoods for each coefficient in the model.","Published":"2017-02-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"surface","Version":"0.4-1","Title":"Fitting Hansen Models to Investigate Convergent Evolution","Description":"SURFACE is a data-driven phylogenetic comparative method for fitting stabilizing selection models to continuous trait data, building on the ouch package. The main functions fit a series of Hansen models using stepwise AIC, then identify cases of convergent evolution where multiple lineages have shifted to the same adaptive peak. ","Published":"2014-02-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Surrogate","Version":"0.2","Title":"Evaluation of Surrogate Endpoints in Clinical Trials","Description":"In a clinical trial, it frequently occurs that the most credible outcome to evaluate the effectiveness of a new therapy (the true endpoint) is difficult to measure. In such a situation, it can be an effective strategy to replace the true endpoint by a (bio)marker that is easier to measure and that allows for a prediction of the treatment effect on the true endpoint (a surrogate endpoint). The package 'Surrogate' allows for an evaluation of the appropriateness of a candidate surrogate endpoint based on the meta-analytic, information-theoretic, and causal-inference frameworks. 
Part of this software has been developed using funding provided by the European Union's Seventh Framework Programme for research, technological development and demonstration under Grant Agreement no 602552.","Published":"2017-06-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"surrosurv","Version":"1.1.15","Title":"Evaluation of Failure Time Surrogate Endpoints in Individual\nPatient Data Meta-Analyses","Description":"Provides functions for the evaluation of\n surrogate endpoints when both the surrogate and the true endpoint are failure\n time variables. The approaches implemented are: (1) the two-step approach\n (Burzykowski et al, 2001) with a copula model (Clayton, Plackett, Hougaard) at\n the first step and a linear regression of log-hazard ratios at the second\n step (either adjusted or not for measurement error); (2) mixed proportional\n hazard models estimated via mixed Poisson GLM.","Published":"2017-05-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"suRtex","Version":"0.9","Title":"LaTeX descriptive statistic reporting for survey data","Description":"suRtex was designed for easy descriptive statistic reporting of categorical survey data (e.g., Likert scales) in LaTeX. suRtex takes a matrix or data frame and produces the LaTeX code necessary for sideways table creation. Mean, median, standard deviation, and sample size are optional.","Published":"2013-07-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"surv2sampleComp","Version":"1.0-5","Title":"Inference for Model-Free Between-Group Parameters for Censored\nSurvival Data","Description":"Performs inference of several model-free group contrast measures, which include difference/ratio of cumulative incidence rates at given time points, quantiles, and restricted mean survival times (RMST). 
Two kinds of covariate adjustment procedures (i.e., regression and augmentation) for inference of the metrics based on RMST are also included.","Published":"2017-06-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"survAccuracyMeasures","Version":"1.2","Title":"Estimate accuracy measures for risk prediction markers from\nsurvival data","Description":"This package provides a function to estimate the AUC, TPR(c),\n FPR(c), PPV(c), and NPV(c) for a specific timepoint and marker cutoff\n value c using non-parametric and semi-parametric estimators. Standard errors \n and confidence intervals are also computed. Either analytic or bootstrap \n standard errors can be computed.","Published":"2014-08-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"survAUC","Version":"1.0-5","Title":"Estimators of prediction accuracy for time-to-event data","Description":"The package provides a variety of functions to estimate\n time-dependent true/false positive rates and AUC curves from a\n set of censored survival data.","Published":"2012-09-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"survAWKMT2","Version":"1.0.0","Title":"Two-Sample Tests Based on Differences of Kaplan-Meier Curves","Description":"Tests for equality of two survival functions based on integrated weighted differences of two Kaplan-Meier curves.","Published":"2016-11-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"survC1","Version":"1.0-2","Title":"C-statistics for risk prediction models with censored survival\ndata","Description":"Performs inference for C of risk prediction models with\n censored survival data, using the method proposed by Uno et al.\n (2011). 
Inference for the difference in C between two competing\n prediction models is also implemented.","Published":"2013-02-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SurvCorr","Version":"1.0","Title":"Correlation of Bivariate Survival Times","Description":"Estimates correlation coefficients with associated\n confidence limits \n for bivariate, partially censored survival times. Uses\n the iterative multiple imputation approach proposed\n by Schemper, Kaider, Wakounig and Heinze, Statistics\n in Medicine 2013. Provides a scatterplot function to visualize the bivariate \n distribution, either on the original time scale or as a copula.","Published":"2015-02-26","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"SurvDisc","Version":"0.1.0","Title":"Discrete Time Survival and Longitudinal Data Analysis","Description":"Various functions for discrete time survival analysis and longitudinal analysis. SIMEX method for correcting bias from errors-in-variables\n in a mixed effects model. Asymptotic mean and variance of different proportional hazards test statistics using different ties methods given two\n survival curves and censoring distributions. Score test and Wald test for regression analysis of grouped survival data. 
Calculation of survival\n curves for events defined by the response variable in a mixed effects model crossing a threshold with or without confirmation.","Published":"2016-10-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"surveillance","Version":"1.13.1","Title":"Temporal and Spatio-Temporal Modeling and Monitoring of Epidemic\nPhenomena","Description":"Statistical methods for the modeling and monitoring of time series\n of counts, proportions and categorical data, as well as for the modeling\n of continuous-time point processes of epidemic phenomena.\n The monitoring methods focus on aberration detection in count data time\n series from public health surveillance of communicable diseases, but\n applications could just as well originate from environmetrics,\n reliability engineering, econometrics, or social sciences. The package\n implements many typical outbreak detection procedures such as the\n (improved) Farrington algorithm, or the negative binomial GLR-CUSUM\n method of Höhle and Paul (2008) .\n A novel CUSUM approach combining logistic and multinomial logistic\n modeling is also included. The package contains several real-world data\n sets, the ability to simulate outbreak data, and to visualize the\n results of the monitoring in a temporal, spatial or spatio-temporal\n fashion. A recent overview of the available monitoring procedures is\n given by Salmon et al. (2016) .\n For the retrospective analysis of epidemic spread, the package provides\n three endemic-epidemic modeling frameworks with tools for visualization,\n likelihood inference, and simulation. 'hhh4' estimates models for\n (multivariate) count time series following Paul and Held (2011)\n and Meyer and Held (2014) .\n 'twinSIR' models the susceptible-infectious-recovered (SIR) event\n history of a fixed population, e.g., epidemics across farms or networks,\n as a multivariate point process as proposed by Höhle (2009)\n . 
'twinstim' estimates self-exciting point\n process models for a spatio-temporal point pattern of infective events,\n e.g., time-stamped geo-referenced surveillance data, as proposed by\n Meyer et al. (2012) .\n A recent overview of the implemented space-time modeling frameworks\n for epidemic phenomena is given by Meyer et al. (2017)\n .","Published":"2017-04-28","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"survexp.fr","Version":"1.0","Title":"Relative survival, AER and SMR based on French death rates","Description":"Relative survival, AER and SMR based on French death rates","Published":"2013-03-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"survey","Version":"3.32-1","Title":"Analysis of Complex Survey Samples","Description":"Summary statistics, two-sample tests, rank tests, generalised linear models, cumulative link models, Cox models, loglinear models, and general maximum pseudolikelihood estimation for multistage stratified, cluster-sampled, unequally weighted survey samples. Variances by Taylor series linearisation or replicate weights. Post-stratification, calibration, and raking. Two-phase subsampling designs. Graphics. PPS sampling without replacement. Principal components, factor analysis.","Published":"2017-06-22","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"surveybootstrap","Version":"0.0.1","Title":"Tools for the Bootstrap with Survey Data","Description":"Tools for using different kinds of bootstrap\n for estimating sampling variation using complex survey\n data. ","Published":"2016-05-04","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"surveydata","Version":"0.1-14","Title":"Tools to manipulate survey data","Description":"Data obtained from surveys contains information not only about the\n survey responses, but also the survey metadata, e.g. the original survey\n questions and the answer options. 
The surveydata package makes it easy to\n keep track of this metadata, and to extract columns with\n specific questions.","Published":"2013-10-22","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"surveyeditor","Version":"1.0","Title":"Generate a Survey that can be Completed by Survey Respondents","Description":"Helps generate slides for surveys or experiments.\n The resulting slides allow the subject to respond with the use of the mouse (usual keyboard input is \n replaced with clicking on a virtual keyboard on the slide). Subjects' responses are saved to the user-\n specified location in the form of an R-readable text file. To allow flexibility, each function in \n this package generates a particular type of slide; thus, general R function-writing skills are \n required to compile these edited slides. ","Published":"2015-06-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"surveyoutliers","Version":"0.1","Title":"Helps Manage Outliers in Sample Surveys","Description":"At present, the only functionality is the calculation of optimal one-sided winsorizing cutoffs. The main function is optimal.onesided.cutoff.bygroup. It calculates the optimal tuning parameter for one-sided winsorisation, and so calculates winsorised values for a variable of interest. 
See the help file for this function for more details and an example.","Published":"2016-01-25","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"surveyplanning","Version":"2.8","Title":"Survey Planning Tools","Description":"Tools for sample survey planning, including sample size calculation, estimation of expected precision for the estimates of totals, and calculation of optimal sample size allocation.","Published":"2017-03-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Survgini","Version":"1.0","Title":"The Gini concentration test for survival data","Description":"The Gini concentration test for survival data is a nonparametric test based on the Gini index for testing the equality of two survival distributions from the point of view of concentration. The package compares different nonparametric tests (asymptotic Gini test, permutation Gini test, log-rank test, Gray-Tsiatis test and Wilcoxon test) and computes their p-values.","Published":"2011-11-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"survIDINRI","Version":"1.1-1","Title":"IDI and NRI for comparing competing risk prediction models with\ncensored survival data","Description":"Performs inference for a class of measures to compare\n competing risk prediction models with censored survival data.\n The class includes the integrated discrimination improvement\n index (IDI) and category-less net reclassification index (NRI).","Published":"2013-04-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"survidm","Version":"0.1.0","Title":"Inference and Prediction in an Illness-Death Model","Description":"Newly developed methods for the estimation of several probabilities\n in an illness-death model. The package can be used to obtain nonparametric and \n semiparametric estimates for: transition probabilities, occupation probabilities, \n cumulative incidence function and the sojourn time distributions. 
\n Several auxiliary functions are also provided which can be used for marginal \n estimation of the survival functions.","Published":"2017-05-27","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"survival","Version":"2.41-3","Title":"Survival Analysis","Description":"Contains the core survival analysis routines, including\n\t definition of Surv objects, \n\t Kaplan-Meier and Aalen-Johansen (multi-state) curves, Cox models,\n\t and parametric accelerated failure time models.","Published":"2017-04-04","License":"LGPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"survivalMPL","Version":"0.1.1","Title":"Penalised Maximum Likelihood for Survival Analysis Models","Description":"Estimate the regression coefficients and the baseline hazard \n of proportional hazard Cox models using maximum penalised likelihood. \n A 'non-parametric' smooth estimate of the baseline hazard function \n is provided.","Published":"2014-08-30","License":"LGPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"survivalROC","Version":"1.0.3","Title":"Time-dependent ROC curve estimation from censored survival data","Description":"Compute time-dependent ROC curve from censored survival\n data using Kaplan-Meier (KM) or Nearest Neighbor Estimation\n (NNE) method of Heagerty, Lumley & Pepe (Biometrics, Vol 56 No\n 2, 2000, PP 337-344)","Published":"2013-01-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"survivalsvm","Version":"0.0.2","Title":"Survival Support Vector Analysis","Description":"Performs support vectors analysis for data sets with survival\n outcome. Three approaches are available in the package: The regression approach\n takes censoring into account when formulating the inequality constraints of\n the support vector problem. In the ranking approach, the inequality constraints\n set the objective to maximize the concordance index for comparable pairs\n of observations. 
The hybrid approach combines the regression and ranking\n constraints in the same model.","Published":"2017-06-14","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"survJamda","Version":"1.1.4","Title":"Survival Prediction by Joint Analysis of Microarray Gene\nExpression Data","Description":"Microarray gene expression data can be analyzed individually or jointly using merging methods or meta-analysis to predict patients' survival and assess risk. ","Published":"2015-11-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"survJamda.data","Version":"1.0.2","Title":"Data for Package 'survJamda'","Description":"Three breast cancer gene expression data sets that can be used for package 'survJamda'. This package contains the gene expression and phenotype data of GSE1992, GSE3143 and GSE4335. ","Published":"2015-05-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SurvLong","Version":"1.0","Title":"Analysis of Proportional Hazards Model with Sparse Longitudinal\nCovariates","Description":"Kernel weighting methods for estimation of proportional hazards models with intermittently observed longitudinal covariates.","Published":"2015-02-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"survminer","Version":"0.4.0","Title":"Drawing Survival Curves using 'ggplot2'","Description":"Contains the function 'ggsurvplot()' for easily drawing beautiful\n and 'ready-to-publish' survival curves with the 'number at risk' table\n and 'censoring count plot'. Other functions are also available to plot \n adjusted curves for the 'Cox' model and to visually examine 'Cox' model assumptions.","Published":"2017-06-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"survMisc","Version":"0.5.4","Title":"Miscellaneous Functions for Survival Data","Description":"A collection of functions to help in the analysis of\n right-censored survival data. 
These extend the methods available in\n package:survival.","Published":"2016-11-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"survPresmooth","Version":"1.1-9","Title":"Presmoothed Estimation in Survival Analysis","Description":"Presmoothed estimators of survival, density, cumulative and non-cumulative hazard functions with right-censored survival data.","Published":"2016-03-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SurvRank","Version":"0.1","Title":"Rank Based Survival Modelling","Description":"Estimation of the prediction accuracy in a unified survival AUC\n approach. Model selection and prediction estimation based on a survival AUC.\n Stepwise model selection, based on several ranking approaches.","Published":"2015-08-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"survrec","Version":"1.2-2","Title":"Survival analysis for recurrent event data","Description":"Estimation of survival function for recurrent event data\n using Peña-Strawderman-Hollander, Whang-Chang estimators and\n MLE estimation under a Gamma Frailty model.","Published":"2013-12-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SurvRegCensCov","Version":"1.4","Title":"Weibull Regression for a Right-Censored Endpoint with\nInterval-Censored Covariate","Description":"The main function of this package allows estimation of a Weibull Regression for a right-censored endpoint, one interval-censored covariate, and an arbitrary number of non-censored covariates. 
Additional functions allow switching between different parametrizations of Weibull regression used by different R functions, performing inference for the mean difference of two arbitrarily censored Normal samples, and estimating canonical parameters from censored samples under several distributional assumptions.","Published":"2015-10-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"survRM2","Version":"1.0-2","Title":"Comparing Restricted Mean Survival Time","Description":"Performs two-sample comparisons using the restricted mean survival time (RMST) as a summary measure of the survival time distribution. Three kinds of between-group contrast metrics (i.e., the difference in RMST, the ratio of RMST and the ratio of the restricted mean time lost (RMTL)) are computed. It performs an ANCOVA-type covariate adjustment as well as unadjusted analyses for those measures. ","Published":"2017-02-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"survsim","Version":"1.1.4","Title":"Simulation of Simple and Complex Survival Data","Description":"Simulation of simple and complex survival data including recurrent and multiple events and competing risks.","Published":"2015-10-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"survSNP","Version":"0.24","Title":"Power Calculations for SNP Studies with Censored Outcomes","Description":"Conduct asymptotic and empirical power and sample size calculations for Single-Nucleotide Polymorphism (SNP) association studies with right censored time to event outcomes.","Published":"2016-06-29","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"survutils","Version":"1.0.0","Title":"Utility Functions for Survival Analysis","Description":"Functional programming principles to iteratively run Cox \n regression and plot its results. The results are reported in tidy data \n frames. 
Additional utility functions are available for working with \n other aspects of survival analysis such as survival curves, C-statistics, \n etc.","Published":"2017-03-22","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"sValues","Version":"0.1.4","Title":"Measures of the Sturdiness of Regression Coefficients","Description":"The sValues package implements the s-values proposed by Ed. Leamer.\n It provides a context-minimal approach for sensitivity analysis using extreme\n bounds to assess the sturdiness of regression coefficients.","Published":"2015-12-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"svapls","Version":"1.4","Title":"Surrogate variable analysis using partial least squares in a\ngene expression study","Description":"Accurate identification of genes that are truly differentially expressed over two sample varieties, after adjusting for hidden subject-specific effects of residual heterogeneity.","Published":"2013-09-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"svcm","Version":"0.1.2","Title":"2d and 3d Space-Varying Coefficient Models","Description":"2d and 3d space-varying coefficient models are fitted to\n regular grid data using either a full B-spline tensor product\n approach or a sequential approximation. The latter one is\n computationally more efficient. 
Resolution increment is\n enabled.","Published":"2009-06-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"svd","Version":"0.4","Title":"Interfaces to Various State-of-Art SVD and Eigensolvers","Description":"R bindings to SVD and eigensolvers (PROPACK, nuTRLan).","Published":"2016-02-11","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"svDialogs","Version":"0.9-57","Title":"SciViews GUI API - Dialog boxes","Description":"Rapidly construct dialog boxes for your GUI, including an automatic\n function assistant","Published":"2014-12-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"svDialogstcltk","Version":"0.9-4","Title":"SciViews GUI API - Dialog boxes using Tcl/Tk","Description":"Reimplementation of the svDialogs dialog boxes in Tcl/Tk","Published":"2014-03-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"svdvis","Version":"0.1","Title":"Singular Value Decomposition Visualization","Description":"Visualize singular value decompositions (SVD), principal component analysis (PCA), factor analysis (FA) and related methods.","Published":"2015-12-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"svdvisual","Version":"1.1","Title":"SVD visualization tools","Description":"Some visualization tools based on Singular Value Decomposition","Published":"2013-12-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"svglite","Version":"1.2.0","Title":"An 'SVG' Graphics Device","Description":"A graphics device for R that produces 'Scalable Vector Graphics'.\n 'svglite' is a fork of the older 'RSvgDevice' package.","Published":"2016-11-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"svgPanZoom","Version":"0.3.3","Title":"R 'Htmlwidget' to Add Pan and Zoom to Almost any R Graphic","Description":"This 'htmlwidget' provides pan and zoom interactivity to R\n graphics, including 'base', 'lattice', and 'ggplot2'. 
The interactivity is\n provided through the 'svg-pan-zoom.js' library. Various options to the widget\n can tailor the pan and zoom experience to nearly any user desire.","Published":"2016-09-26","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"svGUI","Version":"0.9-55","Title":"SciViews GUI API - Functions to manage GUIs","Description":"Functions to manage GUIs from R","Published":"2014-03-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"svgViewR","Version":"1.2","Title":"3D Animated Interactive Visualizations Using SVG","Description":"Creates 3D animated, interactive visualizations in Scalable Vector Graphics (SVG) format that can be viewed in a web browser.","Published":"2016-09-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"svHttp","Version":"0.9-55","Title":"SciViews GUI API - R HTTP server","Description":"Implements a simple HTTP server for connecting GUI clients to R","Published":"2014-03-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"svIDE","Version":"0.9-52","Title":"SciViews GUI API - IDE and code editor functions","Description":"Functions for the GUI API to interact with external IDE/code editors","Published":"2014-03-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"svKomodo","Version":"0.9-63","Title":"SciViews GUI API - Functions to interface with Komodo Edit/IDE","Description":"Functions to manage the GUI client, like Komodo with the\n SciViews-K extension","Published":"2015-02-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"svmadmm","Version":"0.3","Title":"Linear/Nonlinear SVM Classification Solver Based on ADMM and\nIADMM Algorithms","Description":"\n Solve large-scale regularised linear/kernel classification by using ADMM and IADMM algorithms. 
This package provides linear L2-regularised primal classification (both ADMM and IADMM are available), kernel L2-regularised dual classification (IADMM) as well as L1-regularised primal classification (both ADMM and IADMM are available).","Published":"2016-03-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"svMisc","Version":"0.9-70","Title":"SciViews GUI API - Miscellaneous functions","Description":"Supporting functions for the GUI API (various utility functions)","Published":"2014-03-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SVMMaj","Version":"0.2-2","Title":"SVMMaj algorithm","Description":"Implements the SVM-Maj algorithm to train data with a\n Support Vector Machine; the algorithm uses two efficient\n updates, one for the linear kernel and one for nonlinear\n kernels.","Published":"2011-01-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SVMMatch","Version":"1.1","Title":"Causal Effect Estimation and Diagnostics with Support Vector\nMachines","Description":"Causal effect estimation in observational data often requires identifying a set of untreated observations that are comparable to some treated group of interest. This package provides a suite of functions for identifying such a set of observations and for implementing standard and new diagnostic tools. The primary function, svmmatch(), uses support vector machines to identify a region of common support between treatment and control groups. A sensitivity analysis, balance checking, and assessment of the region of overlap between treated and control groups are included. 
The Bayesian implementation allows for recovery of uncertainty estimates for the treatment effect and all other parameters.","Published":"2015-02-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"svmpath","Version":"0.955","Title":"The SVM Path Algorithm","Description":"Computes the entire regularization path for the two-class svm classifier\n\t\twith essentially the same cost as a single SVM fit.","Published":"2016-08-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"svs","Version":"1.1.0","Title":"Tools for Semantic Vector Spaces","Description":"Various tools for semantic vector spaces, such as\n correspondence analysis (simple, multiple and discriminant), latent\n semantic analysis, probabilistic latent semantic analysis, non-negative\n matrix factorization, latent class analysis and EM clustering. Furthermore,\n there are specialized distance measures, plotting functions and some helper\n functions.","Published":"2016-01-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"svSocket","Version":"0.9-57","Title":"SciViews GUI API - R Socket Server","Description":"Implements a simple socket server for connecting GUI clients to R","Published":"2014-03-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"svSweave","Version":"0.9-8","Title":"SciViews GUI API - Sweave functions","Description":"Supporting functions for the GUI API (Sweave functions)","Published":"2013-01-28","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"svTools","Version":"0.9-4","Title":"SciViews GUI API - Tools (wrapper for packages tools and\ncodetools)","Description":"Set of tools aimed at wrapping some of the functionalities\n of the packages tools, utils and codetools into a nicer format so\n that an IDE can use them","Published":"2014-03-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"svUnit","Version":"0.7-12","Title":"SciViews GUI API - Unit testing","Description":"A complete unit test system and functions 
to implement its GUI part","Published":"2014-03-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"svWidgets","Version":"0.9-44","Title":"SciViews GUI API - Widgets & Windows","Description":"High level management of widgets, windows and other graphical resources.","Published":"2014-03-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SvyNom","Version":"1.1","Title":"Nomograms for Right-Censored Outcomes from Survey Designs","Description":"Builds, evaluates and validates a nomogram with survey data and right-censored outcomes.","Published":"2015-02-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"svyPVpack","Version":"0.1-1","Title":"A package for complex surveys including plausible values","Description":"This package deals with data which stem from survey designs including plausible values. This package has been created to handle data from Large Scale Assessments like PISA, PIAAC etc. which use complex survey designs to draw the sample and plausible values to report person related estimates. Various functions/statistics (mean, quantile, GLM etc.) are provided to handle this kind of data.","Published":"2014-03-07","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"swamp","Version":"1.2.3","Title":"Visualization, analysis and adjustment of high-dimensional data\nin respect to sample annotations","Description":"The package contains functions to connect the structure of\n the data with the information on the samples. Three types of\n associations are covered: 1. linear model of principal\n components. 2. hierarchical clustering analysis. 3.\n distribution of features-sample annotation associations.\n Additionally, the inter-relation between sample annotations can\n be analyzed. 
Simple methods are provided for the correction of\n batch effects and removal of principal components.","Published":"2013-03-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"swapClass","Version":"1.0.1","Title":"A Null Model Adapted to Abundance Class Data in Ecology","Description":"A null model randomizing semi-quantitative multi-classes (or ordinal) data by swapping sub-matrices while both the row and the column marginal sums are held constant.","Published":"2017-06-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"SwarmSVM","Version":"0.1-2","Title":"Ensemble Learning Algorithms Based on Support Vector Machines","Description":"Three ensemble learning algorithms based on support vector machines. \n They all train support vector machines on subset of data and combine the result.","Published":"2016-08-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SWATmodel","Version":"0.5.9","Title":"A multi-OS implementation of the TAMU SWAT model","Description":"The Soil and Water Assessment Tool is a river basin or\n watershed scale model developed by Dr. Jeff Arnold for the\n USDA-ARS.","Published":"2014-01-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"swCRTdesign","Version":"2.1","Title":"Stepped Wedge Cluster Randomized Trial (SW CRT) Design","Description":"A set of tools for examining the design and analysis aspects of stepped wedge cluster randomized trials (SW CRT) based on a repeated cross-sectional sampling scheme (Hussey MA and Hughes JP (2007) Contemporary Clinical Trials 28:182-191. 
).","Published":"2016-12-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SweaveListingUtils","Version":"0.7.7","Title":"Utilities for Sweave Together with TeX 'listings' Package","Description":"Provides utilities for defining R / Rd as \"language\" for TeX-package \"listings\" and for including R / Rd source file\n (snippets) copied from R-forge in its most recent version (or another URL) thereby avoiding inconsistencies between\n vignette and documented source code.","Published":"2017-04-22","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"sweidnumbr","Version":"1.4.1","Title":"Handling of Swedish Identity Numbers","Description":"Structural handling of identity numbers used in the Swedish\n administration such as personal identity numbers ('personnummer') and\n organizational identity numbers ('organisationsnummer').","Published":"2016-09-14","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"swfscMisc","Version":"1.2","Title":"Miscellaneous Functions for Southwest Fisheries Science Center","Description":"Collection of conversion, analytical, geodesic, mapping, and\n plotting functions. Used to support packages and code written by\n researchers at the Southwest Fisheries Science Center of the National\n Oceanic and Atmospheric Administration.","Published":"2016-08-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"swirl","Version":"2.4.3","Title":"Learn R, in R","Description":"Use the R console as an interactive learning\n environment. 
Users receive immediate feedback as they are guided through\n self-paced lessons in data science and R programming.","Published":"2017-03-02","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"swirlify","Version":"0.5.0","Title":"A Toolbox for Writing 'swirl' Courses","Description":"A set of tools for writing and sharing interactive courses\n to be used with swirl.","Published":"2016-07-14","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"SwissAir","Version":"1.1.4","Title":"Air Quality Data of Switzerland for one year in 30 min\nResolution","Description":"Ozone, NOx (= Sum of Nitrogenmonoxide and\n Nitrogendioxide), Nitrogenmonoxide, ambient temperature, dew\n point, wind speed and wind direction at 3 sites around lake of\n Lucerne in Central Switzerland in 30 min time resolution for\n year 2004.","Published":"2012-11-01","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"switchnpreg","Version":"0.8-0","Title":"Switching nonparametric regression models for a single curve and\nfunctional data","Description":"Functions for estimating the parameters from the latent\n state process and the functions corresponding to the J states as\n proposed by De Souza and Heckman (2013).","Published":"2013-07-11","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"switchr","Version":"0.11.2","Title":"Installing, Managing, and Switching Between Distinct Sets of\nInstalled Packages","Description":"Provides an abstraction for managing, installing,\n and switching between sets of installed R packages. This allows users to\n maintain multiple package libraries simultaneously, e.g. to maintain\n strict, package-version-specific reproducibility of many analyses, or\n work within a development/production release paradigm. 
Introduces a\n generalized package installation process which supports multiple repository\n and non-repository sources and tracks package provenance.","Published":"2017-01-12","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"switchrGist","Version":"0.2.1","Title":"Publish Package Manifests to GitHub Gists","Description":"Provides a simple plugin to the switchr\n\t framework which allows users to publish manifests of packages - or of specific versions thereof - as single-file GitHub repositories (Gists). These manifest files can then be used as remote seeds (see switchr documentation) when creating new package libraries.","Published":"2015-06-10","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"SWMPr","Version":"2.2.0","Title":"Retrieving, Organizing, and Analyzing Estuary Monitoring Data","Description":"Tools for retrieving, organizing, and analyzing environmental\n data from the System Wide Monitoring Program of the National Estuarine\n Research Reserve System . These tools \n address common challenges associated with continuous time series data \n for environmental decision making.","Published":"2016-11-08","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"sybil","Version":"2.0.0","Title":"Efficient Constrained Based Modelling in R","Description":"This Systems Biology Library for R implements algorithms for constraint based analyses of metabolic networks (e.g. flux-balance analysis (FBA), minimization of metabolic adjustment (MOMA), regulatory on/off minimization (ROOM), robustness analysis and flux variability analysis). Most of the current LP/MILP solvers are supported via additional packages.","Published":"2016-06-06","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"sybilccFBA","Version":"2.0.0","Title":"Cost Constrained FLux Balance Analysis: MetabOlic Modeling with\nENzyme kineTics (MOMENT)","Description":"An implementation of a cost constrained flux balance analysis technique (i.e. 
MetabOlic Modeling with ENzyme kineTics (MOMENT)).\n MOMENT uses enzyme kinetic data and enzyme molecular weights to constrain flux balance analysis (FBA) and it is described in \n\t\t\t Adadi, R., Volkmer, B., Milo, R., Heinemann, M., & Shlomi, T. (2012). Prediction of Microbial Growth Rate versus Biomass \n\t\t\t Yield by a Metabolic Network with Kinetic Parameters, 8(7). doi:10.1371/journal.pcbi.1002575. \n This package also implements an improvement of MOMENT that considers multi-functional enzymes. \n\t\t\t FBA is a mathematical technique to find fluxes in metabolic models at steady state. \n\t\t\t It is described in Orth, J.D., Thiele, I. and Palsson, B.O. What is flux balance analysis? Nat. Biotech. 28, 245-248 (2010).","Published":"2015-04-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"sybilcycleFreeFlux","Version":"2.0.0","Title":"Cycle-Free Flux Balance Analysis","Description":"Implements cycle-free flux balance analysis, flux variability analysis, and random sampling of the solution space. Flux balance analysis is a technique to find fluxes in metabolic models at steady state. It is described in Orth, J.D., Thiele, I. and Palsson, B.O. What is flux balance analysis? Nat. Biotech.
28, 245-248 (2010).","Published":"2016-06-01","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"sybilDynFBA","Version":"1.0.1","Title":"Dynamic FBA : Dynamic Flux Balance Analysis","Description":"Implements dynamic FBA technique proposed by Varma et al 1994.","Published":"2016-07-28","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"sybilEFBA","Version":"1.0.2","Title":"Using Gene Expression Data to Improve Flux Balance Analysis\nPredictions","Description":"Three different approaches to use gene expression data (or protein measurements) for improving FBA predictions.","Published":"2015-02-16","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"sybilSBML","Version":"3.0.1","Title":"SBML Integration in Package 'Sybil'","Description":"'SBML' (Systems Biology Markup Language) with FBC (Flux Balance Constraints) integration in 'sybil'. Many constraint based metabolic models are published in 'SBML' format ('*.xml'). Herewith is the ability to read, write, and check 'SBML' files in 'sybil' provided.","Published":"2017-01-12","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"sylcount","Version":"0.1-0","Title":"Syllable Counting and Readability Measurements","Description":"An English language syllable counter, plus readability score\n measure-er. The package has been carefully optimized and should be very\n efficient, both in terms of run time performance and memory consumption.\n The main methods are 'vectorized' by document, and scores for multiple\n documents are computed in parallel via 'OpenMP'.","Published":"2017-04-07","License":"BSD 2-clause License + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"syllable","Version":"0.1.3","Title":"A Small Collection of Syllable Counting Functions","Description":"Tools for counting syllables and polysyllables. 
The tools\n rely primarily on a 'data.table' hash table lookup, resulting\n in fast syllable counting.","Published":"2017-02-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"symbolicDA","Version":"0.4-2","Title":"Analysis of Symbolic Data","Description":"Symbolic data analysis methods: importing/ exporting data from ASSO XML Files, distance calculation for symbolic data (Ichino-Yaguchi, de Carvalho measure), zoom star plot, 3d interval plot, multidimensional scaling for symbolic interval data, dynamic clustering based on distance matrix, HINoV method for symbolic data, Ichino's feature selection method, principal component analysis for symbolic interval data, decision trees for symbolic data based on optimal split with bagging, boosting and random forest approach (+visualization), kernel discriminant analysis for symbolic data, Kohonen's self-organizing maps for symbolic, replication and profiling, artificial symbolic data generation.","Published":"2015-03-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"symbols","Version":"1.1","Title":"Symbol plots","Description":"Package that implements various symbol plots (bars,\n profiles, stars, Chernoff faces, color icons, stick figures).","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"symDMatrix","Version":"1.0.0","Title":"Partitioned Symmetric Matrices","Description":"A class that partitions a symmetric matrix into matrix-like\n objects (blocks) while behaving similarly to a base R matrix. Very large\n symmetric matrices are supported if the blocks are memory-mapped objects.","Published":"2017-05-08","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"symmoments","Version":"1.2","Title":"Symbolic central and noncentral moments of the multivariate\nnormal distribution","Description":"Symbolic central and non-central moments of the multivariate normal distribution. 
Computes a standard representation, LaTeX code, and values at specified mean and covariance matrices.","Published":"2014-08-04","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"SymTS","Version":"1.0","Title":"Symmetric Tempered Stable Distributions","Description":"Contains methods for simulation and for evaluating the pdf, cdf, and quantile functions for symmetric stable, symmetric classical tempered stable, and symmetric power tempered stable distributions. ","Published":"2017-03-09","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"synbreed","Version":"0.12-6","Title":"Framework for the Analysis of Genomic Prediction Data using R","Description":"A collection of functions required for genomic prediction which were developed within the Synbreed project for synergistic plant and animal breeding (). This covers data processing, data visualization, and analysis. All functions are embedded within the framework of a single, unified data object. The implementation is flexible with respect to a wide range of data formats in plant and animal breeding. This research was funded by the German Federal Ministry of Education and Research (BMBF) within the AgroClustEr Synbreed - Synergistic plant and animal breeding (FKZ 0315528A).","Published":"2017-04-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"synbreedData","Version":"1.5","Title":"Data for the Synbreed Package","Description":"Data sets for the 'synbreed' package with three data sets from cattle, maize and mice to illustrate the functions in the 'synbreed' R package. All data sets are stored in the gpData format introduced in the 'synbreed'\n package.
This research was funded by the German Federal Ministry of Education and Research (BMBF) within the AgroClustEr Synbreed - Synergistic plant and animal breeding (FKZ 0315528A).","Published":"2015-04-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"synchronicity","Version":"1.1.9.1","Title":"Boost Mutex Functionality in R","Description":"Boost mutex functionality in R.","Published":"2016-02-17","License":"LGPL-3 | Apache License 2.0","snapshot_date":"2017-06-23"} {"Package":"synchrony","Version":"0.2.3","Title":"Methods for computing spatial, temporal, and spatiotemporal\nstatistics","Description":"Methods for computing spatial, temporal, and spatiotemporal\n statistics including: empirical univariate, bivariate and multivariate\n variograms; fitting variogram models; phase locking and synchrony analysis;\n generating autocorrelated and cross-correlated matrices.","Published":"2014-07-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SynchWave","Version":"1.1.1","Title":"Synchrosqueezed Wavelet Transform","Description":"This package carries out synchrosqueezed wavelet transform. The package is a translation of MATLAB Synchrosqueezing Toolbox, version 1.1 originally developed by Eugene Brevdo (2012). The C code for curve_ext was authored by Jianfeng Lu, and translated to Fortran by Dongik Jang. Synchrosqueezing is based on the papers: [1] Daubechies, I., Lu, J. and Wu, H. T. (2011) Synchrosqueezed wavelet transforms: An empirical mode decomposition-like tool. Applied and Computational Harmonic Analysis, 30. 243-261. [2] Thakur, G., Brevdo, E., Fukar, N. S. and Wu, H-T. (2013) The Synchrosqueezing algorithm for time-varying spectral analysis: Robustness properties and new paleoclimate applications. 
Signal Processing, 93, 1079-1094.","Published":"2013-08-19","License":"LGPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"SyncMove","Version":"0.1-0","Title":"Subsample Temporal Data to Synchronal Events and Compute the MCI","Description":"The function 'syncSubsample' subsamples temporal data of different entities so that the result only contains synchronal events. The function 'mci' calculates the Movement Coordination Index (MCI, see reference on help page for function 'mci') of a data set created with the function 'syncSubsample'.","Published":"2015-10-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SyncRNG","Version":"1.2.1","Title":"A Synchronized Tausworthe RNG for R and Python","Description":"Random number generation designed for cross-language usage.","Published":"2016-10-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SYNCSA","Version":"1.3.2","Title":"SYNCSA - Analysis of functional and phylogenetic patterns in\nmetacommunities","Description":"Analysis of metacommunities based on functional traits and phylogeny of the community components. 
The functions that are offered here implement for the R environment methods that have been available in the SYNCSA application written in C++ (by Valerio Pillar, available at http://ecoqua.ecologia.ufrgs.br/ecoqua/SYNCSA.html).","Published":"2014-02-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SynergizeR","Version":"0.2","Title":"Interface to The Synergizer service for translating between sets\nof biological identifiers","Description":"This package provides programmatic access to\n The Synergizer service for translating between sets of\n biological identifiers.","Published":"2011-11-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"SyNet","Version":"2.0","Title":"Inference and Analysis of Sympatry Networks","Description":"Infers sympatry matrices from distributional data and analyzes them in order to identify groups of species cohesively connected.","Published":"2011-11-16","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"synlik","Version":"0.1.1","Title":"Synthetic Likelihood methods for intractable likelihoods","Description":"Framework to perform synthetic likelihood inference\n for models where the likelihood function is unavailable or\n intractable.","Published":"2014-06-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"synRNASeqNet","Version":"1.0","Title":"Synthetic RNA-Seq Network Generation and Mutual Information\nEstimates","Description":"It implements various estimators of mutual information, such as\n\tthe maximum likelihood and the Miller-Madow estimator, various Bayesian\n\testimators, the shrinkage estimator, and the Chao-Shen estimator. It also\n\toffers wrappers to the kNN and kernel density estimators. Furthermore, it\n\tprovides various indices of performance evaluation such as precision, recall,\n\tFPR, F-Score, ROC-PR Curves and so on.
Lastly, it provides a brand new way\n\tof generating synthetic RNA-Seq Network with known dependence structure.","Published":"2015-04-20","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"Synth","Version":"1.1-5","Title":"Synthetic Control Group Method for Comparative Case Studies","Description":"Implements the synthetic control group method for comparative case studies as described in Abadie and Gardeazabal (2003) and Abadie, Diamond, and Hainmueller (2010, 2011, 2014). The synthetic control method allows for effect estimation in settings where a single unit (a state, country, firm, etc.) is exposed to an event or intervention. It provides a data-driven procedure to construct synthetic control units based on a weighted combination of comparison units that approximates the characteristics of the unit that is exposed to the intervention. A combination of comparison units often provides a better comparison for the unit exposed to the intervention than any comparison unit alone.","Published":"2014-01-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"synthACS","Version":"1.0.1","Title":"Synthetic Microdata and Spatial MicroSimulation Modeling for ACS\nData","Description":"Firstly provides a wrapper to library(acs) to access curated set\n of American Community Survey (ACS) base tables which may be of interest\n to many researchers. Secondly, it builds synthetic micro-datasets of ACS data\n at any specified geographic level with 10 default individual attributes. Thirdly,\n provides functionality for data-extensibility of micro-datasets; allowing users\n to both add data attributes and marginalize undesired attributes. And\n finally, the package also conducts spatial microsimulation modeling (SMSM)\n via simulated annealing. 
SMSM is conducted in parallel by default.","Published":"2017-02-12","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"synthpop","Version":"1.3-1","Title":"Generating Synthetic Versions of Sensitive Microdata for\nStatistical Disclosure Control","Description":"A tool for producing synthetic versions of microdata containing confidential information so that they are safe to be released to users for exploratory analysis. The key objective of generating synthetic data is to replace sensitive original values with synthetic ones causing minimal distortion of the statistical information contained in the data set. Variables, which can be categorical or continuous, are synthesised one-by-one using sequential modelling. Replacements are generated by drawing from conditional distributions fitted to the original data using parametric or classification and regression trees models. Data are synthesised via the function syn() which can be largely automated, if default settings are used, or with methods defined by the user. Optional parameters can be used to influence the disclosure risk and the analytical quality of the synthesised data. For a description of the implemented method see Nowok, Raab and Dibben (2016) .","Published":"2016-11-23","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"sys","Version":"1.3","Title":"Portable System Utilities","Description":"Powerful replacements for base system2 with consistent behavior\n across platforms. Supports interruption, background tasks, and full control over\n STDOUT / STDERR binary or text streams. On Unix systems the package also has\n functions for evaluating expressions inside a temporary fork. Such evaluations\n have no side effects on the main R process, and support reliable interrupts and\n timeouts. 
This provides the basis for a sandboxing mechanism.","Published":"2017-04-20","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"sysfonts","Version":"0.5","Title":"Loading System Fonts into R","Description":"Using FreeType to load system fonts\n and Google Fonts (https://www.google.com/fonts) into R.\n It is supposed to support other packages such as 'R2SWF'\n and 'showtext'.","Published":"2015-04-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"sysid","Version":"1.0.4","Title":"System Identification in R","Description":"Provides functions for constructing mathematical models of dynamical systems from measured input-output data. ","Published":"2017-01-07","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"systemfit","Version":"1.1-20","Title":"Estimating Systems of Simultaneous Equations","Description":"Fitting simultaneous\n systems of linear and nonlinear equations using Ordinary Least\n Squares (OLS), Weighted Least Squares (WLS), Seemingly Unrelated\n Regressions (SUR), Two-Stage Least Squares (2SLS), Weighted\n Two-Stage Least Squares (W2SLS), and Three-Stage Least Squares (3SLS).","Published":"2017-03-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"systemicrisk","Version":"0.4","Title":"A Toolbox for Systemic Risk","Description":"A toolbox for systemic risk based on liabilities matrices. Contains\n a Gibbs sampler for liabilities matrices where only row and column sums of the\n liabilities matrix as well as some other fixed entries are observed. Includes models \n for power law distribution on the degree distribution.","Published":"2017-01-08","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"syuzhet","Version":"1.0.1","Title":"Extracts Sentiment and Sentiment-Derived Plot Arcs from Text","Description":"Extracts sentiment and sentiment-derived plot arcs\n from text using three sentiment dictionaries conveniently\n packaged for consumption by R users. 
Implemented dictionaries include\n \"syuzhet\" (default) developed in the Nebraska Literary Lab,\n \"afinn\" developed by Finn Årup Nielsen, \"bing\" developed by Minqing Hu\n and Bing Liu, and \"nrc\" developed by Mohammad, Saif M. and Turney, Peter D.\n Applicable references are available in README.md and in the documentation\n for the \"get_sentiment\" function. The package also provides a hack for\n implementing Stanford's coreNLP sentiment parser. The package provides\n several methods for plot arc normalization.","Published":"2017-03-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"T2EQ","Version":"1.1","Title":"Functions for Applying the T^2-Test for Equivalence","Description":"Contains functions for applying the T^2-test for equivalence.\n The T^2-test for equivalence is a multivariate two-sample equivalence test. \n The distance measure of the test is the Mahalanobis distance.\n For multivariate normally distributed data the T^2-test for equivalence \n is exact and UMPI.\n The function T2EQ() implements the T^2-test for equivalence \n according to Wellek (2010) .\n The function T2EQ.dissolution.profiles.hoffelder() implements a variant \n of the T^2-test for equivalence according to Hoffelder (2016) \n for the \n equivalence comparison of highly variable dissolution profiles.","Published":"2016-08-31","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"tab","Version":"3.1.2","Title":"Functions for Creating Summary Tables for Statistical Reports","Description":"Contains functions for generating tables for statistical reports written in Microsoft Word or LaTeX. There are functions for I-by-J frequency tables, comparison of means or medians across levels of a categorical variable, and summarizing fitted generalized linear models, generalized estimating equations, and Cox proportional hazards regression. Functions are available to handle data from simple random samples or survey data.
The package is intended to make it easier for researchers to translate results from statistical analyses in R to their reports or manuscripts.","Published":"2016-09-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"taber","Version":"0.1.0","Title":"Split and Recombine Your Data","Description":"Sometimes you need to split your data and work on the two chunks independently before bringing them back together. 'Taber' allows you to do that with its two functions.","Published":"2015-08-03","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"tablaxlsx","Version":"1.2.2","Title":"Write Formatted Tables in Excel Workbooks","Description":"Some functions are included in this package for writing tables in Excel format suitable for distribution.","Published":"2017-05-27","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"Table1Heatmap","Version":"1.1","Title":"Table 1 Heatmap","Description":"Table 1 is the classical way to describe the patients in a\n clinical study. The amount of splits in the data in such a table is\n limited. Table1Heatmap draws a heatmap of all crosstables that can be\n generated with the data. Users can choose between showing the actual\n crosstables or direction of effect of associations, and highlight\n associations by number of patients or p-values.","Published":"2014-03-04","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"table1xls","Version":"0.3.2","Title":"Produces Summary Tables and Exports Them to Multi-Tab\nSpreadsheet Format (.xls or .xlsx)","Description":"A collection of time-saving wrappers for producing and exporting\n summary tables commonly used in scientific articles, to .xls/.xlsx multi-tab spreadsheets, while controlling spreadsheet layout. 
Powered by 'XLConnect'/'rJava' utilities.","Published":"2016-07-19","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"tableHTML","Version":"1.0.1","Title":"A Tool to Create HTML Tables","Description":"A tool to create and style HTML tables with CSS. These can be exported and used in any application that accepts HTML\n (e.g. 'shiny', 'rmarkdown', 'PowerPoint'). It also provides functions to create CSS files (which also work with shiny). ","Published":"2017-06-06","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"tableMatrix","Version":"0.8","Title":"Combines 'data.table' and 'matrix' Classes","Description":"Provides two classes extending 'data.table' class. Simple 'tableList' class wraps 'data.table' and any additional structures together. More complex 'tableMatrix' class combines strengths of 'data.table' and 'matrix'. See for more information and examples.","Published":"2016-09-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"TableMonster","Version":"1.2","Title":"Table Monster","Description":"Provides a user-friendly interface to generation of booktabs-style\n tables using xtable. ","Published":"2015-02-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"tableone","Version":"0.8.1","Title":"Create 'Table 1' to Describe Baseline Characteristics","Description":"Creates 'Table 1', i.e., description of baseline patient\n characteristics, which is essential in medical research.\n Supports both continuous and categorical variables, as well as\n p-values and standardized mean differences. Weighted data are\n supported via the 'survey' package. See 'github' for a screen cast.\n 'tableone' was inspired by descriptive statistics functions in\n 'Deducer' , a Java-based GUI package by Ian Fellows.
This package\n does not require a GUI or Java, and is intended for command-line users.","Published":"2017-06-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"tableplot","Version":"0.3-5","Title":"Represents tables as semi-graphic displays","Description":"Description:","Published":"2012-11-03","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"tables","Version":"0.8","Title":"Formula-Driven Table Generation","Description":"Computes and displays complex tables of summary statistics.\n Output may be in LaTeX, HTML, plain text, or an R\n matrix for further processing.","Published":"2017-01-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"TableToLongForm","Version":"1.3.1","Title":"TableToLongForm","Description":"TableToLongForm automatically converts hierarchical Tables intended for a human reader into a simple LongForm Dataframe that is machine readable.","Published":"2014-08-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"tabplot","Version":"1.3-1","Title":"Tableplot, a Visualization of Large Datasets","Description":"A tableplot is a visualisation of a (large) dataset with a dozen variables, both numeric and categorical. Each column represents a variable and each row bin is an aggregate of a certain number of records. Numeric variables are visualized as bar charts, and categorical variables as stacked bar charts. Missing values are taken into account. Also supports large 'ffdf' datasets from the 'ff' package.","Published":"2017-01-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"tabplotd3","Version":"0.3.3","Title":"Tabplotd3, interactive inspection of large data","Description":"A tableplot is a visualisation of a (large)\n dataset with a dozen variables, both numeric and\n categorical.
This package contains an interactive version of\n tableplot working in your browser.","Published":"2013-09-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"tabuSearch","Version":"1.1","Title":"R based tabu search algorithm","Description":"R based tabu search algorithm for binary configurations","Published":"2012-03-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"tadaatoolbox","Version":"0.12.0","Title":"Helpers for Data Analysis and Presentation Focused on Undergrad\nPsychology","Description":"Contains functions for the easy display of statistical tests as well as\n some convenience functions for data cleanup. It is meant to ease existing workflows\n with packages like 'sjPlot', 'dplyr', and 'ggplot2'. The primary components are the functions\n prefixed with 'tadaa_', which are built to work in an interactive environment, but also print\n tidy markdown tables powered by 'pixiedust' for the creation of 'RMarkdown' reports.","Published":"2017-06-02","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"tagcloud","Version":"0.6","Title":"Tag Clouds","Description":"Generating Tag and Word Clouds.","Published":"2015-07-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"tailDepFun","Version":"1.0.0","Title":"Minimum Distance Estimation of Tail Dependence Models","Description":"Provides functions implementing minimal distance estimation methods for parametric tail dependence models.","Published":"2016-03-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"tailloss","Version":"1.0","Title":"Estimate the Probability in the Upper Tail of the Aggregate Loss\nDistribution","Description":"Set of tools to estimate the probability in the upper tail of the aggregate loss distribution using different methods: Panjer recursion, Monte Carlo simulations, Markov bound, Cantelli bound, Moment bound, and Chernoff bound.","Published":"2015-07-08","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} 
{"Package":"TAM","Version":"2.2-49","Title":"Test Analysis Modules","Description":"\n Includes marginal maximum likelihood estimation of uni- and \n multidimensional item response models (Rasch, 2PL, 3PL, \n Generalized Partial Credit, Multi Facets,\n Nominal Item Response, Structured Latent Class Analysis,\n Mixture Distribution IRT Models, Located Latent Class Models)\n and joint maximum likelihood estimation for models\n from the Rasch family. \n Latent regression models and plausible value imputation are \n also supported.","Published":"2017-06-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"TanB","Version":"0.1","Title":"The TanB Distribution","Description":"Density, distribution function, quantile function, random\n generation and survival function for the Tangent Burr Type XII Distribution as\n defined by SOUZA, L. New Trigonometric Class of Probabilistic Distributions.\n 219 p. Thesis (Doctorate in Biometry and Applied Statistics) - Department of\n Statistics and Information, Federal Rural University of Pernambuco, Recife,\n Pernambuco, 2015 (available at ) and BRITO, C. C. R. Method Distributions generator and\n Probability Distributions Classes. 241 p. Thesis (Doctorate in Biometry and\n Applied Statistics) - Department of Statistics \n University of Pernambuco, Recife, Pernambuco, 2014 (available upon request).","Published":"2016-07-17","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"TANDEM","Version":"1.0.2","Title":"A Two-Stage Approach to Maximize Interpretability of Drug\nResponse Models Based on Multiple Molecular Data Types","Description":"A two-stage regression method that can be used when various input data types are correlated, for example gene expression and methylation in drug response prediction. 
In the first stage it uses the upstream features (such as methylation) to predict the response variable (such as drug response), and in the second stage it uses the downstream features (such as gene expression) to predict the residuals of the first stage. In our manuscript (Aben et al., 2016, ), we show that using TANDEM prevents the model from being dominated by gene expression and that the features selected by TANDEM are more interpretable.","Published":"2017-06-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"tangram","Version":"0.2.6","Title":"The Grammar of Tables","Description":"Provides an extensible formula system to quickly and easily create\n production quality tables. The steps of the process are formula parsing,\n statistical content generation from data, and rendering. Each step of the process\n is separate and user definable thus creating a set of building blocks for\n highly extensible table generation. A user is not limited by any of the \n choices of the package creator other than the formula grammar. For example,\n one could choose to add a different S3 rendering function and output a format\n not provided in the default package. Or possibly one would rather have Gini\n coefficients for their statistical content. Routines to achieve New England\n Journal of Medicine style, Lancet style and Hmisc::summaryM() statistics are\n provided. 
The package provides rendering for HTML5, Rmarkdown and an indexing\n format for use in tracing and tracking.","Published":"2017-05-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"TANOVA","Version":"1.0.0","Title":"Time Course Analysis of Variance for Microarray","Description":"Functions for performing analysis of variance on time\n course microarray data","Published":"2012-10-29","License":"LGPL","snapshot_date":"2017-06-23"} {"Package":"TaoTeProgramming","Version":"1.0","Title":"Illustrations from Tao Te Programming","Description":"Art-like behavior based on randomness","Published":"2014-06-22","License":"Unlimited","snapshot_date":"2017-06-23"} {"Package":"TapeR","Version":"0.3.3","Title":"Flexible Tree Taper Curves Based on Semiparametric Mixed Models","Description":"Implementation of functions for fitting taper curves (a semiparametric linear mixed effects taper model) to diameter measurements along stems. Further functions are provided to estimate the uncertainty around the predicted curves, to calculate timber volume (also by sections) and marginal (e.g., upper) diameters. For cases where tree\n heights are not measured, methods for estimating\n additional variance in volume predictions resulting from uncertainties in\n tree height models (tariffs) are provided. The example data include the taper curve parameters for Norway spruce used in the 3rd German NFI fitted to 380 trees and a subset of section-wise diameter measurements of these trees. The functions implemented here are detailed in the following publication: Kublin, E., Breidenbach, J., Kaendler, G. 
(2013) A flexible stem taper and volume prediction method based on mixed-effects B-spline regression, Eur J For Res, 132:983-997.","Published":"2015-08-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"TAQMNGR","Version":"2016.12-1","Title":"Manage Tick-by-Tick Transaction Data","Description":"Manager of tick-by-tick transaction data that performs 'cleaning', 'aggregation' and 'import' in an efficient and fast way. The package engine, written in C++, exploits the 'zlib' and 'gzstream' libraries to handle gzipped data without the need to uncompress them. 'Cleaning' and 'aggregation' are performed according to Brownlees and Gallo (2006) . Currently, TAQMNGR processes raw data from WRDS (Wharton Research Data Service, ).","Published":"2017-01-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"TAR","Version":"1.0","Title":"Bayesian Modeling of Autoregressive Threshold Time Series Models","Description":"Identification and estimation of the autoregressive threshold models with Gaussian noise, as well as positive-valued time series. The package provides the identification of the number of regimes, the thresholds and the autoregressive orders, as well as the estimation of the remaining parameters. The package implements the methodology from the 2005 paper: Modeling Bivariate Threshold Autoregressive Processes in the Presence of Missing Data .","Published":"2017-02-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Tariff","Version":"1.0.2","Title":"Replicate Tariff Method for Verbal Autopsy","Description":"Implement the Tariff algorithm for coding cause-of-death from verbal autopsies. The Tariff method was originally proposed in James et al (2011) and later refined as Tariff 2.0 in Serina, et al. (2015) . 
Note that this package was not developed by authors affiliated with the Institute for Health Metrics and Evaluation and thus unintentional discrepancies may exist between this implementation and the implementation available from IHME.","Published":"2016-03-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"taRifx","Version":"1.0.6","Title":"Collection of utility and convenience functions","Description":"A collection of various utility and convenience functions.","Published":"2014-05-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"taskscheduleR","Version":"1.0","Title":"Schedule R Scripts and Processes with the Windows Task Scheduler","Description":"Schedule R scripts/processes with the Windows task scheduler. This\n allows R users to automate R processes at specific time points from R itself.","Published":"2017-03-03","License":"AGPL-3","snapshot_date":"2017-06-23"} {"Package":"tatoo","Version":"1.0.6","Title":"Combine and Export Data Frames","Description":"\n Functions to combine data.frames in ways that require additional effort in \n base R, and to add metadata (id, title, ...) that can be used for printing and \n xlsx export. The 'Tatoo_report' class is provided as a \n convenient helper to write several such tables to a workbook, one table per \n worksheet.","Published":"2017-06-05","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"tau","Version":"0.0-19","Title":"Text Analysis Utilities","Description":"Utilities for text analysis.","Published":"2017-02-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"TauP.R","Version":"1.1","Title":"Earthquake Traveltime Calculations for 1-D Earth Models","Description":"Evaluates traveltimes and ray paths using predefined Earth\n (or other planet) models. 
Includes phase plotting routines.\n The IASP91 and AK135 Earth models are included, and most\n important arrival phases can be evaluated.","Published":"2012-08-13","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"TauStar","Version":"1.1.3","Title":"Efficient Computation and Testing of the Bergsma-Dassios Sign\nCovariance","Description":"Computes the t* statistic corresponding to the tau* population\n coefficient introduced by Bergsma and Dassios (2014) \n and does so in O(n^2) time following the algorithm of Heller and\n Heller (2016) building off of the work of Weihs,\n Drton, and Leung (2016) . Also allows for\n independence testing using the asymptotic distribution of t* as described by\n Nandy, Weihs, and Drton (2016) .","Published":"2017-03-21","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"tawny","Version":"2.1.6","Title":"Clean Covariance Matrices Using Random Matrix Theory and\nShrinkage Estimators for Portfolio Optimization","Description":"Portfolio optimization typically requires an estimate of a covariance matrix of asset returns. There are many approaches for constructing such a covariance matrix, some using the sample covariance matrix as a starting point. This package provides implementations for two such methods: random matrix theory and shrinkage estimation. 
Each method attempts to clean or remove noise related to the sampling process from the sample covariance matrix.","Published":"2016-07-10","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"tawny.types","Version":"1.1.3","Title":"Common types for tawny","Description":"Base library of types for tawny and related packages","Published":"2014-05-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"taxize","Version":"0.8.4","Title":"Taxonomic Information from Around the Web","Description":"Interacts with a suite of web 'APIs' for taxonomic tasks,\n such as getting database specific taxonomic identifiers, verifying \n species names, getting taxonomic hierarchies, fetching downstream and \n upstream taxonomic names, getting taxonomic synonyms, converting \n scientific to common names and vice versa, and more.","Published":"2017-01-19","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"taxizedb","Version":"0.1.4","Title":"Tools for Working with 'Taxonomic' Databases","Description":"Tools for working with 'taxonomic' databases, including\n utilities for downloading databases, loading them into various\n 'SQL' databases, cleaning up files, and providing a 'SQL' connection\n that can be used to do 'SQL' queries directly or used in 'dplyr'.","Published":"2017-06-20","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"taxlist","Version":"0.1.0","Title":"Handling Taxonomic Lists","Description":"Handling taxonomic lists through objects of class 'taxlist'.\n This package provides functions to import species lists from 'Turboveg'\n () and the possibility to create\n backups from resulting R-objects.\n Also quick displays are implemented in the summary-methods.","Published":"2017-06-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"taxonomizr","Version":"0.2.2","Title":"Functions to Work with NCBI Accessions and Taxonomy","Description":"Functions for assigning taxonomy to NCBI accession numbers and 
taxon IDs based on NCBI's accession2taxid and taxdump files. This package allows the user to download NCBI data dumps and create a local database for fast and local taxonomic assignment.","Published":"2017-03-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Taxonstand","Version":"2.0","Title":"Taxonomic Standardization of Plant Species Names","Description":"Automated standardization of taxonomic names and removal of orthographic errors in plant species names using 'The Plant List' website (www.theplantlist.org).","Published":"2017-05-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"tbart","Version":"1.0","Title":"Teitz and Bart's p-Median Algorithm","Description":"Solves Teitz and Bart's p-median problem - given a set of\n points, attempts to find a subset of size p such that the summed distance of any\n point in the set to the nearest point in p is minimised. Although\n generally effective, this algorithm does not guarantee that a globally\n optimal subset is found.","Published":"2015-02-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"tbdiag","Version":"0.1","Title":"Functions for tuberculosis diagnostics research","Description":"This package provides functions to assist researchers\n working in the field of tuberculosis diagnostics. Functions\n for the interpretation of two popular interferon-gamma release\n assays are provided, and additional functionality is planned.","Published":"2013-06-11","License":"MIT","snapshot_date":"2017-06-23"} {"Package":"TBEST","Version":"5.0","Title":"Tree Branches Evaluated Statistically for Tightness","Description":"Our method introduces mathematically well-defined measures for tightness of branches in a hierarchical tree. 
Statistical significance of the findings is determined, for all branches of the tree, by performing permutation tests, optionally with generalized Pareto p-value estimation.","Published":"2014-12-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"tbl2xts","Version":"0.1.0","Title":"Convert Tibbles or Data Frames to Xts Easily","Description":"Facilitate the movement from data frames to 'xts'. Particularly\n useful when moving from 'tidyverse' to the widely used 'xts' package, which is\n the input format of choice to various other packages. It also allows the user \n to use a 'spread_by' argument for a character column 'xts' conversion.","Published":"2017-06-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"TBSSurvival","Version":"1.3","Title":"Survival Analysis using a Transform-Both-Sides Model","Description":"Functions to perform the reliability/survival\n analysis using a parametric Transform-both-sides (TBS) model.","Published":"2017-01-05","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"TCGA2STAT","Version":"1.2","Title":"Simple TCGA Data Access for Integrated Statistical Analysis in R","Description":"Automatically downloads and processes TCGA genomics and clinical data into a format convenient for statistical analyses in the R environment.","Published":"2015-10-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"TCGAretriever","Version":"1.3","Title":"Retrieve Genomic and Clinical Data from TCGA","Description":"The Cancer Genome Atlas (TCGA) is a program aimed at improving our understanding of Cancer Biology. Several TCGA Datasets are available online. 'TCGAretriever' helps with accessing and downloading TCGA data hosted on 'cBioPortal' via its Web Interface (see for more information). 
Features of 'TCGAretriever' include: 1) it is very simple to use (get all the TCGA data you need with a few lines of code); 2) performance (smooth and reliable data download via 'httr'); 3) it is tailored for downloading large volumes of data. ","Published":"2016-07-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"TcGSA","Version":"0.10.5","Title":"Time-Course Gene Set Analysis","Description":"Implementation of Time-course Gene Set Analysis (TcGSA), a method for analyzing longitudinal gene-expression data at the gene set level.","Published":"2017-05-03","License":"GPL-2 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"tcgsaseq","Version":"1.4.1","Title":"Time-Course Gene Set Analysis for RNA-Seq Data","Description":"Gene set analysis of longitudinal RNA-seq data with variance component\n score test accounting for data heteroscedasticity through precision weights. \n Method is detailed in: Agniel D, Hejblum BP (2016) Variance component score test \n for time-course gene set analysis of longitudinal RNA-seq data .","Published":"2017-02-21","License":"GPL-2 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"tcltk2","Version":"1.2-11","Title":"Tcl/Tk Additions","Description":"A series of additional Tcl commands and Tk widgets with style\n and various functions (under Windows: DDE exchange, access to the\n registry and icon manipulation) to supplement the tcltk package.","Published":"2014-12-20","License":"LGPL-3 + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"tclust","Version":"1.2-3","Title":"Robust Trimmed Clustering","Description":"Robust Trimmed Clustering","Published":"2014-10-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"Tcomp","Version":"1.0.0","Title":"Data from the 2010 Tourism Forecasting Competition","Description":"The 1311 time series from the tourism forecasting competition conducted in 2010 and described in Athanasopoulos et al. 
(2011) .","Published":"2016-10-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"tcpl","Version":"1.2.2","Title":"ToxCast Data Analysis Pipeline","Description":"A set of tools for processing and modeling high-throughput and\n high-content chemical screening data. The package was developed for the\n chemical screening data generated by the US EPA ToxCast program, but\n can be used for diverse chemical screening efforts.","Published":"2016-05-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"tcR","Version":"2.2.1.11","Title":"Advanced Data Analysis of Immune Receptor Repertoires","Description":"Platform for the advanced analysis of T cell receptor and\n Immunoglobulin repertoires data and visualisation of the analysis results.","Published":"2016-04-22","License":"Apache License 2.0","snapshot_date":"2017-06-23"} {"Package":"TDA","Version":"1.5.1","Title":"Statistical Tools for Topological Data Analysis","Description":"Tools for the statistical analysis of persistent homology and for density clustering. For that, this package provides an R interface for the efficient algorithms of the C++ libraries GUDHI, Dionysus, and PHAT.","Published":"2017-04-27","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"TDAmapper","Version":"1.0","Title":"Analyze High-Dimensional Data Using Discrete Morse Theory","Description":"Topological Data Analysis using Mapper (discrete Morse theory).\n Generate a 1-dimensional simplicial complex from a filter \n function defined on the data: 1. Define a filter function (lens) on the \n data. 2. Perform clustering within each level set and generate \n one node (vertex) for each cluster. 3. For each pair of clusters in \n adjacent level sets with a nonempty intersection, generate one edge \n between vertices. 
The function mapper1D uses a filter function with\n codomain R, while the function mapper2D uses a filter function with\n codomain R^2.","Published":"2015-05-31","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"TDboost","Version":"1.2","Title":"A Boosted Tweedie Compound Poisson Model","Description":"A boosted Tweedie compound Poisson model using gradient boosting. It is capable of fitting a flexible nonlinear Tweedie compound Poisson model (or a gamma model) and capturing interactions among predictors. ","Published":"2016-03-30","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"TDCor","Version":"0.1-2","Title":"Gene Network Inference from Time-Series Transcriptomic Data","Description":"The Time-Delay Correlation algorithm (TDCor) reconstructs the topology of a gene regulatory network (GRN) from time-series transcriptomic data. The algorithm is described in detail in Lavenus et al., Plant Cell, 2015. It was initially developed to infer the topology of the GRN controlling lateral root formation in Arabidopsis thaliana. The time-series transcriptomic dataset which was used in this study is included in the package to illustrate how to use it.","Published":"2015-10-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"TDD","Version":"0.4","Title":"Time-Domain Deconvolution of Seismometer Response","Description":"Deconvolution of instrument responses from seismic traces and seismogram lists from RSEIS. Includes pre-calculated instrument responses for several common instruments.","Published":"2013-10-04","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"TDPanalysis","Version":"0.99","Title":"Granier's Sap Flow Sensors (TDP) Analysis","Description":"Set of functions designed to help in the\n analysis of TDP sensors. 
Features include date and time conversion, weather\n data interpolation, daily maximum of tension analysis and calculations required\n to convert sap flow density data to sap flow rates at the tree and plot scale (For more information see: Granier (1985) & Granier (1987) ).","Published":"2016-10-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"tdr","Version":"0.11","Title":"Target Diagram","Description":"Implementation of target diagrams using 'lattice' and 'ggplot2' graphics. Target diagrams provide a graphical overview of the respective contributions of the unbiased RMSE and MBE to the total RMSE (Jolliff, J. et al., 2009. \"Summary Diagrams for Coupled Hydrodynamic-Ecosystem Model Skill Assessment.\" Journal of Marine Systems 76: 64–82.)","Published":"2015-04-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"tdROC","Version":"1.0","Title":"Nonparametric Estimation of Time-Dependent ROC Curve from Right\nCensored Survival Data","Description":"Compute time-dependent ROC curve from censored survival data using\n nonparametric weight adjustments.","Published":"2016-03-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"tdthap","Version":"1.1-7","Title":"TDT tests for extended haplotypes","Description":"Transmission/disequilibrium tests for extended marker haplotypes","Published":"2013-12-10","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"tea","Version":"1.0","Title":"Threshold Estimation Approaches","Description":"Different approaches for selecting the threshold in generalized Pareto distributions. Most of them are based on minimizing the AMSE-criterion or at least on reducing the bias of the assumed GPD-model. Others are heuristically motivated by searching for stable sample paths, i.e. a nearly constant region of the tail index estimator with respect to k, which is the number of data in the tail. The third class is motivated by graphical inspection. 
In addition to the very helpful eva package, which includes many goodness of fit tests for the generalized Pareto distribution, the sequential testing procedure provided in Thompson et al. (2009) is also implemented here.","Published":"2017-01-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"TeachBayes","Version":"1.0","Title":"Teaching Bayesian Inference","Description":"Several functions for communicating Bayesian thinking including Bayes rule for deciding among spinners, visualizations for Bayesian inference for one proportion and for one mean, and comparison of two proportions using a discrete prior.","Published":"2017-03-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"TeachingDemos","Version":"2.10","Title":"Demonstrations for Teaching and Learning","Description":"Demonstration functions that can be used in a classroom to demonstrate statistical concepts, or on your own to better understand the concepts or the programming.","Published":"2016-02-12","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"TeachingSampling","Version":"3.2.2","Title":"Selection of Samples and Parameter Estimation in Finite\nPopulation","Description":"Allows the user to draw probabilistic samples and make inferences from a finite population based on several sampling designs.","Published":"2015-04-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"TeachNet","Version":"0.7","Title":"Fits neural networks to learn about back propagation","Description":"Can fit neural networks with up to two hidden layers and two different error functions. Also able to handle weight decay. However, it is limited to a single output neuron and is very slow. 
","Published":"2014-01-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"TED","Version":"1.1.1","Title":"Turbulence Time Series Event Detection and Classification","Description":"TED performs Turbulence time series Event Detection and classification.","Published":"2014-10-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"teda","Version":"0.1.1","Title":"An Implementation of the Typicality and Eccentricity Data\nAnalysis Framework","Description":"The typicality and eccentricity data analysis (TEDA) framework was\n put forward by Angelov (2013) . It has been further developed into multiple\n different techniques since, and provides a non-parametric way of determining how\n similar an observation, from a process that is not purely random, is to other\n observations generated by the process. This package provides code to use the\n batch and recursive TEDA methods that have been published.","Published":"2017-01-22","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"TEEReg","Version":"1.1","Title":"Trimmed Elemental Estimation for Linear Models","Description":"For fitting multiple linear regressions, the ordinary least squares approach is sensitive to outliers and/or violations of model assumptions. The trimmed elemental estimators are more robust to such situations. This package contains functions for computing the trimmed elemental estimates, as well as for creating the bias-corrected and accelerated bootstrap confidence intervals based on elemental regressions. 
","Published":"2016-06-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"teigen","Version":"2.2.0","Title":"Model-Based Clustering and Classification with the Multivariate\nt Distribution","Description":"Fits mixtures of multivariate t-distributions (with eigen-decomposed covariance structure) via the expectation conditional-maximization algorithm under a clustering or classification paradigm.","Published":"2016-12-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"telegram","Version":"0.6.0","Title":"R Wrapper Around the Telegram Bot API","Description":"R wrapper around the Telegram Bot API (http://core.telegram.org/bots/api) to access Telegram's messaging facilities with ease (e.g. you send messages, images, files from R to your smartphone).","Published":"2016-09-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"TELP","Version":"1.0","Title":"Social Representation Theory Application: The Free Evocation of\nWords Technique","Description":"Using the Free Evocation of Words Technique, this package performs social\n representation and other analyses. The Free Evocation of Words Technique consists of collecting a number of words evoked by a subject facing exposure to an inducer term. The purpose of this technique is to understand the relationships created between words evoked by the individual and the inducer term. 
This technique belongs to the theory of social representations and, based on the information transmitted by an individual, seeks to create a profile that defines a social group.","Published":"2016-04-24","License":"GPL (> 2)","snapshot_date":"2017-06-23"} {"Package":"tempcyclesdata","Version":"1.0.1","Title":"Climate Data from Wang and Dillon","Description":"This is the data companion package to the package tempcycles.\n This package includes the metadata, linear, and cycling parameters from\n \"Recent geographic convergence in diurnal and annual temperature cycling\n flattens global thermal profiles\", Wang & Dillon, Nature Climate Change,\n 4, 988-992 (2014). doi:10.1038/nclimate2378.","Published":"2016-01-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"tempdisagg","Version":"0.25.0","Title":"Methods for Temporal Disaggregation and Interpolation of Time\nSeries","Description":"Temporal disaggregation methods are used to disaggregate and\n interpolate a low frequency time series to a higher frequency series, where\n either the sum, the average, the first or the last value of the resulting\n high frequency series is consistent with the low frequency series. Temporal\n disaggregation can be performed with or without one or more high frequency\n indicator series. Contains the methods of Chow-Lin, Santos-Silva-Cardoso, \n Fernandez, Litterman, Denton and Denton-Cholette.","Published":"2016-07-10","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"templates","Version":"0.2.0","Title":"A System for Working with Templates","Description":"Provides tools to work with template code and text in R. It aims to\n provide a simple substitution mechanism for R-expressions inside these\n templates. 
Templates can be written in other languages like 'SQL', can\n simply be represented by characters in R, or can themselves be R-expressions\n or functions.","Published":"2017-06-22","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"tempR","Version":"0.9.9.10","Title":"Temporal Sensory Data Analysis","Description":"Analysis and visualization of data from temporal sensory methods, including for temporal check-all-that-apply (TCATA) and temporal dominance of sensations (TDS).","Published":"2017-03-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"tensor","Version":"1.5","Title":"Tensor product of arrays","Description":"The tensor product of two arrays is notionally an outer\n product of the arrays collapsed in specific extents by summing\n along the appropriate diagonals.","Published":"2012-05-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"tensorA","Version":"0.36","Title":"Advanced tensors arithmetic with named indices","Description":"The package provides convenience functions for advanced\n linear algebra with tensors and computation with datasets of\n tensors at a higher level of abstraction. It includes Einstein and\n Riemann summing conventions, dragging, co- and contravariant\n indices, parallel computations on sequences of tensors.","Published":"2010-12-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"tensorBF","Version":"1.0.1","Title":"Bayesian Tensor Factorization","Description":"Bayesian Tensor Factorization for decomposition of tensor data sets using the trilinear CANDECOMP/PARAFAC (CP) factorization, with automatic component selection. The complete data analysis pipeline is provided, including functions and recommendations for data normalization and model definition, as well as missing value prediction and model visualization. 
The method performs factorization for three-way tensor datasets and the inference is implemented with Gibbs sampling.","Published":"2017-01-09","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"tensorBSS","Version":"0.3","Title":"Blind Source Separation Methods for Tensor-Valued Observations","Description":"Contains several utility functions for manipulating tensor-valued data (centering, multiplication from a single mode etc.) and the implementations of the following blind source separation methods for tensor-valued data: 'tPCA', 'tFOBI', 'tJADE', 'tgFOBI', 'tgJADE', 'tSOBI', 'tNSS.SD', 'tNSS.JD' and 'tNSS.TD.JD'.","Published":"2017-05-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"tensorflow","Version":"0.8.1","Title":"R Interface to TensorFlow","Description":"Interface to 'TensorFlow' , \n an open source software library for numerical computation using data\n flow graphs. Nodes in the graph represent mathematical operations, \n while the graph edges represent the multidimensional data arrays \n (tensors) communicated between them. The flexible architecture allows\n you to deploy computation to one or more 'CPUs' or 'GPUs' in a desktop, \n server, or mobile device with a single 'API'. 'TensorFlow' was originally\n developed by researchers and engineers working on the Google Brain Team \n within Google's Machine Intelligence research organization for the \n purposes of conducting machine learning and deep neural networks research,\n but the system is general enough to be applicable in a wide variety\n of other domains as well.","Published":"2017-05-26","License":"Apache License 2.0","snapshot_date":"2017-06-23"} {"Package":"tensorr","Version":"0.1.0","Title":"Sparse Tensors in R","Description":"Provides methods to manipulate and store sparse tensors. 
Tensors \n are multidimensional generalizations of matrices (two dimensional) and \n vectors (one dimensional).","Published":"2017-04-01","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"tensr","Version":"1.0.0","Title":"Covariance Inference and Decompositions for Tensor Datasets","Description":"A collection of functions for Kronecker structured covariance\n estimation and testing under the array normal model. For estimation,\n maximum likelihood and Bayesian equivariant estimation procedures are\n implemented. For testing, a likelihood ratio testing procedure is\n available. This package also contains additional functions for manipulating\n and decomposing tensor data sets. This work was partially supported by NSF\n grant DMS-1505136.","Published":"2016-02-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"TEQR","Version":"6.0-0","Title":"Target Equivalence Range Design","Description":"The TEQR package contains software to calculate the operating characteristics for the TEQR and the ACT designs. The TEQR (toxicity equivalence range) design is a toxicity based cumulative cohort design with added safety rules. The ACT (Activity constrained for toxicity) design is also a cumulative cohort design with additional safety rules. 
The unique feature of this design is that dose is escalated based on lack of activity rather than on lack of toxicity and is de-escalated only if an unacceptable level of toxicity is experienced.","Published":"2016-02-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"TERAplusB","Version":"1.0","Title":"Test for A+B Traditional Escalation Rule","Description":"This package is for the comparison of various types of A+B\n escalation rules for dose finding trials.","Published":"2012-10-29","License":"LGPL-2.1","snapshot_date":"2017-06-23"} {"Package":"tergm","Version":"3.4.0","Title":"Fit, Simulate and Diagnose Models for Network Evolution Based on\nExponential-Family Random Graph Models","Description":"An integrated set of extensions to the 'ergm' package to analyze and simulate network evolution based on exponential-family random graph models (ERGM). 'tergm' is a part of the 'statnet' suite of packages for network analysis.","Published":"2016-03-28","License":"GPL-3 + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"termstrc","Version":"1.3.7","Title":"Zero-coupon Yield Curve Estimation","Description":"The package offers a wide range of functions for term\n structure estimation based on static and dynamic coupon bond\n and yield data sets. The implementation focuses on the cubic\n splines approach of McCulloch (1971, 1975) and the Nelson and\n Siegel (1987) method with extensions by Svensson (1994),\n Diebold and Li (2006) and De Pooter (2007). We propose a\n weighted constrained optimization procedure with analytical\n gradients and a globally optimal start parameter search\n algorithm. 
Extensive summary statistics and plots are provided\n to compare the results of the different estimation methods.\n Several demos are available using data from European government\n bonds and yields.","Published":"2013-11-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ternvis","Version":"1.1","Title":"Visualisation, verification and calibration of ternary\nprobabilistic forecasts","Description":"A suite of functions for visualising ternary probabilistic\n forecasts.","Published":"2013-02-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"TESS","Version":"2.1.0","Title":"Diversification Rate Estimation and Fast Simulation of\nReconstructed Phylogenetic Trees under Tree-Wide\nTime-Heterogeneous Birth-Death Processes Including\nMass-Extinction Events","Description":"Simulation of reconstructed phylogenetic trees under tree-wide time-heterogeneous birth-death processes and estimation of diversification parameters under the same model. Speciation and extinction rates can be any function of time and mass-extinction events at specific times can be provided. Trees can be simulated either conditioned on the number of species, the time of the process, or both. 
Additionally, the likelihood equations are implemented for convenience and can be used for Maximum Likelihood (ML) estimation and Bayesian inference.","Published":"2015-10-28","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"tesseract","Version":"1.4","Title":"Open Source OCR Engine","Description":"An OCR engine with unicode (UTF-8) support that can recognize\n over 100 languages out of the box.","Published":"2017-03-21","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"testassay","Version":"0.1.0","Title":"A Hypothesis Testing Framework for Validating an Assay for\nPrecision","Description":"A common way of validating a biological assay for precision is through a\n procedure, where m levels of an analyte are measured with n replicates at each\n level, and if all m estimates of the coefficient of variation (CV) are less\n than some prespecified level, then the assay is declared validated for precision\n within the range of the m analyte levels. Two limitations of this procedure are:\n there is no clear statistical statement of precision upon passing, and it is\n unclear how to modify the procedure for assays with constant standard deviation.\n We provide tools to convert such a procedure into a set of m hypothesis tests.\n This reframing motivates the m:n:q procedure, which upon completion delivers\n a 100q% upper confidence limit on the CV. Additionally, for a post-validation\n assay output of y, the method gives an ``effective standard deviation interval''\n of log(y) plus or minus r, which is a 68% confidence interval on log(mu), where\n mu is the expected value of the assay output for that sample. 
Further, the m:n:q\n procedure can be straightforwardly applied to constant standard deviation assays.\n We illustrate these tools by applying them to a growth inhibition assay.","Published":"2016-11-29","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"TestDataImputation","Version":"1.0","Title":"Missing Item Responses Imputation for Test and Assessment Data","Description":"Functions for imputing missing item responses for dichotomous and\n polytomous test and assessment data. This package enables missing imputation\n methods that are suitable for test and assessment data, including: listwise (LW)\n deletion, treating as incorrect (IN), person mean imputation (PM), item mean\n imputation (IM), two-way imputation (TW), logistic regression imputation (LR),\n and EM imputation.","Published":"2016-08-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"tester","Version":"0.1.7","Title":"Tests and checks characteristics of R objects","Description":"tester allows you to test characteristics of common R objects.","Published":"2013-11-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"testforDEP","Version":"0.2.0","Title":"Dependence Tests for Two Variables","Description":"Provides test statistics, p-value, and confidence intervals based on 9 hypothesis tests for dependence.","Published":"2017-01-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"TestFunctions","Version":"0.2.0","Title":"Test Functions for Simulation Experiments and Evaluating\nOptimization and Emulation Algorithms","Description":"Test functions are often used to test computer code.\n They are used in optimization to test algorithms and in\n metamodeling to evaluate model predictions. 
This package provides\n test functions that can be used for any purpose.\n Some functions are taken from , but\n their R code is not used.","Published":"2017-05-09","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"TestingSimilarity","Version":"1.0","Title":"Bootstrap Test for Similarity of Dose Response Curves Concerning\nthe Maximum Absolute Deviation","Description":"Provides a bootstrap test which decides whether two dose response curves can be assumed as equal concerning their maximum absolute deviation. A plenty of choices for the model types are available, which can be found in the 'DoseFinding' package, which is used for the fitting of the models.","Published":"2015-09-16","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"testit","Version":"0.7","Title":"A Simple Package for Testing R Packages","Description":"Provides two convenience functions assert() and test_pkg() to\n facilitate testing R packages.","Published":"2017-05-22","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"TestScorer","Version":"1.7.2","Title":"GUI for Entering Test Items and Obtaining Raw and Transformed\nScores","Description":"GUI for entering test items and obtaining raw\n and transformed scores. The results are shown on the\n console and can be saved to a tabular text file for further\n statistical analysis. 
The user can define his own tests and\n scoring procedures through a GUI.","Published":"2016-02-28","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"TestSurvRec","Version":"1.2.1","Title":"Statistical tests to compare two survival curves with recurrent\nevents","Description":"These are weighted logrank-type tests for recurrent events.","Published":"2013-10-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"testthat","Version":"1.0.2","Title":"Unit Testing for R","Description":"A unit testing system designed to be fun, flexible and easy to\n set up.","Published":"2016-04-23","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"TeXCheckR","Version":"0.2.0","Title":"Parses LaTeX Documents for Errors","Description":"Checks LaTeX documents and .bib files for typing errors, such as spelling errors and incorrect quotation marks. Also provides useful functions for parsing and linting bibliography files.","Published":"2017-06-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"texmex","Version":"2.3","Title":"Statistical Modelling of Extreme Values","Description":"Statistical extreme value modelling of threshold excesses,\n maxima and multivariate extremes. Univariate models for threshold\n excesses and maxima are the Generalised Pareto, and Generalised\n Extreme Value model respectively. These models may be fitted by\n using maximum (optionally penalised-)likelihood, or Bayesian\n estimation, and both classes of models may be fitted with covariates\n in any/all model parameters. Model diagnostics support the fitting\n process. Graphical output for visualising fitted models and return\n level estimates is provided. For serially dependent sequences, the\n intervals declustering algorithm of Ferro and Segers is provided,\n with diagnostic support to aid selection of threshold and declustering\n horizon. 
Multivariate modelling is performed via the conditional\n approach of Heffernan and Tawn, with graphical tools for threshold\n selection and to diagnose estimation convergence.","Published":"2016-11-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"texmexseq","Version":"0.3","Title":"Treatment Effect eXplorer for Microbial Ecology eXperiments\n(using Sequence Counts)","Description":"Analysis and visualization of community dynamics in microbial\n ecology experiments (that use sequence count data) using the\n truncated Poisson lognormal distribution.","Published":"2016-07-25","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"TExPosition","Version":"2.6.10","Title":"Two-table ExPosition","Description":"TExPosition is an extension of ExPosition for two table analyses, specifically, discriminant analyses.","Published":"2013-12-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"texPreview","Version":"1.0.0","Title":"Compile and Preview Snippets of 'LaTeX' in 'RStudio'","Description":"Compile and preview snippets of 'LaTeX'. Can be used directly from the R console, from 'RStudio', \n in Shiny apps and R Markdown documents. Must have 'pdflatex' or 'xelatex' or 'lualatex' in 'PATH'.","Published":"2017-04-03","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"texreg","Version":"1.36.23","Title":"Conversion of R Regression Output to LaTeX or HTML Tables","Description":"Converts coefficients, standard errors, significance stars, and goodness-of-fit statistics of statistical models into LaTeX tables or HTML tables/MS Word documents or to nicely formatted screen output for the R console for easy model comparison. A list of several models can be combined in a single table. The output is highly customizable. 
New model types can be easily implemented.","Published":"2017-03-03","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"text2vec","Version":"0.4.0","Title":"Modern Text Mining Framework for R","Description":"Fast and memory-friendly tools for text vectorization, \n topic modeling (LDA, LSA), word embeddings (GloVe), similarities. \n This package provides a source-agnostic streaming API, which allows researchers \n to perform analysis of collections of documents which are larger than available RAM. \n All core functions are parallelized to benefit from multicore machines.","Published":"2016-10-04","License":"GPL (>= 2) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"textcat","Version":"1.0-5","Title":"N-Gram Based Text Categorization","Description":"Text categorization based on n-grams.","Published":"2017-02-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"textclean","Version":"0.3.1","Title":"Text Cleaning Tools","Description":"Tools to clean and process text. Tools are geared at\n checking for substrings that are not optimal for analysis and\n replacing or removing them with more analysis friendly\n substrings. For example, emoticons are often used in text but\n not always easily handled by analysis algorithms. The\n 'replace_emoticon' function replaces emoticons with word\n equivalents.","Published":"2017-02-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"textgRid","Version":"1.0.1","Title":"Praat TextGrid Objects in R","Description":"The software application Praat can be used to annotate\n waveform data (e.g., to mark intervals of interest or to label events).\n (See for more information about Praat.)\n These annotations are stored in a Praat TextGrid object, which consists of\n a number of interval tiers and point tiers. An interval tier consists of\n sequential (i.e., not overlapping) labeled intervals. A point tier consists\n of labeled events that have no duration. 
The 'textgRid' package provides\n S4 classes, generics, and methods for accessing information that is stored\n in Praat TextGrid objects.","Published":"2016-09-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"textir","Version":"2.0-4","Title":"Inverse Regression for Text Analysis","Description":"Multinomial [inverse] regression inference for text documents and associated attributes. Provides fast sparse multinomial logistic regression for phrase counts. A minimalist partial least squares routine is also included. Note that the topic modeling capability of textir is now a separate package, maptpx.","Published":"2015-08-19","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"textmineR","Version":"2.0.5","Title":"Functions for Text Mining and Topic Modeling","Description":"An aid for text mining in R, with a syntax that\n should be familiar to experienced R users. Provides a wrapper for several \n topic models that take similarly-formatted input and give similarly-formatted\n output. Has additional functionality for analyzing and diagnostics for\n topic models.","Published":"2017-04-07","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"textmining","Version":"0.0.1","Title":"Integration of Text Mining and Topic Modeling Packages","Description":"A framework for text mining and topic modelling. It provides an easy interface for using different topic modeling methods within R, by integrating the already existing packages. 
Full functionality of the package requires a local installation of 'TreeTagger'.","Published":"2016-09-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"textometry","Version":"0.1.4","Title":"Textual Data Analysis Package used by the TXM Software","Description":"Statistical exploration of textual corpora using several methods\n from French 'Textometrie' (new name of 'Lexicometrie') and French 'Data Analysis' schools.\n It includes methods for exploring irregularity of distribution of lexicon features across\n text sets or parts of texts (Specificity analysis); multi-dimensional exploration (Factorial analysis), etc. \n Those methods are used in the TXM software.","Published":"2015-01-09","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"textreadr","Version":"0.5.1","Title":"Read Text Documents into R","Description":"A small collection of convenience tools for reading text documents\n into R.","Published":"2017-04-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"textreg","Version":"0.1.4","Title":"n-Gram Text Regression, aka Concise Comparative Summarization","Description":"Function for sparse regression on raw text, regressing a labeling\n vector onto a feature space consisting of all possible phrases.","Published":"2017-03-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"textreuse","Version":"0.1.4","Title":"Detect Text Reuse and Document Similarity","Description":"Tools for measuring similarity among documents and detecting\n passages which have been reused. 
Implements shingled n-gram, skip n-gram,\n and other tokenizers; similarity/dissimilarity functions; pairwise\n comparisons; minhash and locality sensitive hashing algorithms; and a\n version of the Smith-Waterman local alignment algorithm suitable for\n natural language.","Published":"2016-11-28","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"textshape","Version":"1.0.2","Title":"Tools for Reshaping Text","Description":"Tools that can be used to reshape and restructure text\n data.","Published":"2017-02-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"textstem","Version":"0.0.1","Title":"Tools for Stemming and Lemmatizing Text","Description":"Tools that stem and lemmatize text. Stemming is a process\n that removes endings such as affixes. Lemmatization is the\n process of grouping inflected forms together as a single base\n form.","Published":"2017-02-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"textTinyR","Version":"1.0.7","Title":"Text Processing for Small or Big Data Files","Description":"Processes big text data files in batches efficiently. For this purpose, it offers functions for splitting, parsing, tokenizing and creating a vocabulary. Moreover, it includes functions for building either a document-term matrix or a term-document matrix and extracting information from those (term-associations, most frequent terms). Lastly, it embodies functions for calculating token statistics (collocations, look-up tables, string dissimilarities) and functions to work with sparse matrices. The source code is based on 'C++11' and exported in R through the 'Rcpp', 'RcppArmadillo' and 'BH' packages.","Published":"2017-06-05","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"textutils","Version":"0.1-6","Title":"Utilities for Handling Strings and Text","Description":"Utilities for handling character vectors\n that store human-readable text (either plain or with\n markup, such as HTML or LaTeX). 
The package provides,\n in particular, functions that help with the\n preparation of plain-text reports (e.g. for expanding\n and aligning strings that form the lines of such\n reports); the package also provides generic functions for\n transforming R objects to HTML and to plain text.","Published":"2016-12-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"TFDEA","Version":"0.9.8.3","Title":"Technology Forecasting using DEA (Data Envelopment Analysis)","Description":"The TFDEA algorithm for technology forecasts when future products\n will be introduced based upon their features.\n It also includes DEA (Data Envelopment Analysis) functions including extensions dealing with\n infeasibility.\n In addition it includes some standard technology forecasting data sets.","Published":"2015-03-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"tfer","Version":"1.1","Title":"Forensic Glass Transfer Probabilities","Description":"Statistical interpretation of forensic glass transfer\n (Simulation of the probability distribution of recovered glass\n fragments).","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"TFMPvalue","Version":"0.0.6","Title":"Efficient and Accurate P-Value Computation for Position Weight\nMatrices","Description":"In putative Transcription Factor Binding Sites (TFBSs) \n identification from sequence/alignments,\n we are interested in the significance of certain match score.\n TFMPvalue provides the accurate calculation of P-value with \n score threshold for Position Weight Matrices, \n or the score with given P-value. 
\n This package is an interface to code originally made available by \n Helene Touzet and Jean-Stephane Varre, 2007, \n Algorithms Mol Biol:2, 15.","Published":"2015-11-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"tfplot","Version":"2015.12-1","Title":"Time Frame User Utilities","Description":"Utilities for simple manipulation and quick \t\n\tplotting of time series data. These utilities use the tframe package\n\twhich provides a programming kernel for time series. Extensions to\n\ttframe provided in tframePlus can also be used. See the Guide vignette\n\tfor examples.","Published":"2015-12-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"tframe","Version":"2015.12-1","Title":"Time Frame Coding Kernel","Description":"A kernel of functions for programming \n\ttime series methods in a way that is relatively independently of the \n\trepresentation of time. Also provides plotting, time windowing, \n\tand some\n\tother utility functions which are specifically intended for time series.\n\tSee the Guide distributed as a vignette, or ?tframe.Intro for more\n\tdetails. 
(User utilities are in package tfplot.)","Published":"2015-12-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"tframePlus","Version":"2016.7-1","Title":"Time Frame Coding Kernel Extensions","Description":"Extensions and additional 'tframe' utilities.","Published":"2016-08-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"TFX","Version":"0.1.0","Title":"R API to TrueFX(tm)","Description":"Connects R to TrueFX(tm) for free streaming real-time and\n historical tick-by-tick market data for dealable interbank\n foreign exchange rates with millisecond detail.","Published":"2012-11-27","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"tgcd","Version":"2.0","Title":"Thermoluminescence Glow Curve Deconvolution","Description":"Deconvolving thermoluminescence glow curves according to the general-order \n empirical expression or the semi-analytical expression derived from the one trap-\n one recombination (OTOR) model using a modified Levenberg-Marquardt algorithm. \n It provides the possibility of setting constraints or fixing any of the parameters. \n It offers an interactive way to initialize parameters by clicking with a mouse \n on a plot at positions where peak maxima should be located. The optimal estimate \n is obtained by \"trial-and-error\". 
It also provides routines for simulating \n first-order, second-order, and general-order glow peaks (curves).","Published":"2016-09-06","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"tggd","Version":"0.1.1","Title":"The Standard Distribution Functions for the Truncated\nGeneralised Gamma Distribution","Description":"Density, distribution function, quantile function and random generation for the Truncated Generalised Gamma Distribution (also in log10(x) and ln(x) space).","Published":"2015-12-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"tglm","Version":"1.0","Title":"Binary Regressions under Independent Student-t Priors","Description":"Use Gibbs sampler with Polya-Gamma data augmentation to fit logistic and probit regression under independent Student-t priors (including Cauchy priors and normal priors as special cases). ","Published":"2015-07-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"tgp","Version":"2.4-14","Title":"Bayesian Treed Gaussian Process Models","Description":"Bayesian nonstationary, semiparametric nonlinear regression \n and design by treed Gaussian processes (GPs) with jumps to the limiting \n linear model (LLM). Special cases also implemented include Bayesian \n linear models, CART, treed linear models, stationary separable and \n isotropic GPs, and GP single-index models. Provides 1-d and 2-d plotting functions \n (with projection and slice capabilities) and tree drawing, designed for \n visualization of tgp-class output. Sensitivity analysis and \n multi-resolution models are supported. Sequential experimental \n design and adaptive sampling functions are also provided, including ALM, \n ALC, and expected improvement. 
The latter supports derivative-free\n optimization of noisy black-box functions.","Published":"2016-02-07","License":"LGPL","snapshot_date":"2017-06-23"} {"Package":"tgram","Version":"0.2-2","Title":"Functions to compute and plot tracheidograms","Description":"Functions to compute and plot tracheidograms.","Published":"2013-07-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"TH.data","Version":"1.0-8","Title":"TH's Data Archive","Description":"Contains data sets used in other packages Torsten Hothorn\n maintains.","Published":"2017-01-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"thankr","Version":"1.0.0","Title":"Find Out Who Maintains the Packages you Use","Description":"Find out who maintains the packages you use in\n your current session or in your package library and\n maybe say 'thank you'.","Published":"2017-04-14","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"themetagenomics","Version":"0.1.0","Title":"Exploring Thematic Structure and Predicted Functionality of 16s\nrRNA Amplicon Data","Description":"A means to explore the structure of 16S rRNA surveys using a Structural \n Topic Model coupled with functional prediction. The user provides an abundance \n table, sample metadata, and taxonomy information, and themetagenomics infers \n associations between topics and sample features, as well as topics and predicted \n functional content. Functional prediction can be accomplished via Tax4Fun (for \n Silva references) and PICRUSt (for Greengenes references).","Published":"2017-06-06","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"Thermimage","Version":"3.0.0","Title":"Thermal Image Analysis","Description":"A collection of functions and routines for inputting thermal\n image video files, plotting and converting binary raw data into estimates of\n temperature. First published 2015-03-26. Written primarily for research purposes\n in biological applications of thermal images. 
v1 included the base calculations \n for converting thermal image binary values to temperatures. v2 included additional\n equations for providing heat transfer calculations and an import function for thermal\n image files (v2.2.3 fixed error importing thermal image to windows OS). v3. Added numerous\n functions for importing thermal image videos, rewriting and exporting.","Published":"2017-03-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"thermocouple","Version":"1.0.2","Title":"Temperature Measurement with Thermocouples, RTD and IC Sensors","Description":"Temperature measurement data, equations and methods for thermocouples,\n wire RTD, thermistors, IC thermometers, bimetallic strips and the ITS-90.","Published":"2015-07-19","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"thgenetics","Version":"0.3-4.1","Title":"Genetic Rare Variants Tests","Description":"Tests for genetic rare variants.","Published":"2016-07-10","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"thief","Version":"0.2","Title":"Temporal Hierarchical Forecasting","Description":"Methods and tools for generating forecasts at different temporal\n frequencies using a hierarchical time series approach.","Published":"2016-09-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Thinknum","Version":"1.3.0","Title":"Thinknum Data Connection","Description":"This package interacts directly with the Thinknum API to offer data\n in a number of formats usable in R","Published":"2014-07-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ThreeArmedTrials","Version":"1.0-0","Title":"Design and Analysis of Clinical Non-Inferiority or Superiority\nTrials with Active and Placebo Control","Description":"Design and analyze three-arm non-inferiority or superiority trials\n which follow a gold-standard design, i.e. 
trials with an experimental treatment,\n an active, and a placebo control.","Published":"2016-05-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"threeboost","Version":"1.1","Title":"Thresholded variable selection and prediction based on\nestimating equations","Description":"This package implements a thresholded version of the EEBoost\n algorithm described in [Wolfson (2011, JASA)]. EEBoost is a general-purpose\n method for variable selection which can be applied whenever inference would\n be based on an estimating equation. The package currently implements\n variable selection based on the Generalized Estimating Equations, but can\n also accommodate user-provided estimating functions. Thresholded EEBoost is\n a generalization which allows multiple variables to enter the model at each\n boosting step.","Published":"2014-08-11","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ThreeGroups","Version":"0.21","Title":"ML Estimator for Baseline-Placebo-Treatment (Three-Group)\nExperiments","Description":"Implements the Maximum Likelihood estimator for baseline, placebo, and treatment groups (three-group) experiments with non-compliance proposed by Gerber, Green, Kaplan, and Kern (2010).","Published":"2015-09-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"threejs","Version":"0.2.2","Title":"Interactive 3D Scatter Plots, Networks and Globes","Description":"Create interactive 3D scatter plots, network plots, and\n globes using the 'three.js' visualization library (\"http://threejs.org\").","Published":"2016-04-01","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ThreeWay","Version":"1.1.3","Title":"Three-Way Component Analysis","Description":"Component analysis for three-way data arrays by means of Candecomp/Parafac, Tucker3, Tucker2 and Tucker1 models.","Published":"2015-09-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"threewords","Version":"0.1.0","Title":"Represent Precise 
Coordinates in Three Words","Description":"A connector to the 'What3Words' (http://what3words.com/) service, which represents each 3m by 3m square on earth\n with a unique trio of English-language words.","Published":"2015-08-08","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"threg","Version":"1.0.3","Title":"Threshold Regression","Description":"Fit a threshold regression model based on the first-hitting-time of a boundary by the sample path of a Wiener diffusion process. The threshold regression methodology is well suited to applications involving survival and time-to-event data.","Published":"2015-08-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"thregI","Version":"1.0.3","Title":"Threshold Regression for Interval-Censored Data with a Cure Rate\nOption","Description":"Fit a threshold regression model for Interval Censored Data based on the first-hitting-time of a boundary by the sample path of a Wiener diffusion process. The threshold regression methodology is well suited to applications involving survival and time-to-event data.","Published":"2017-05-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ThresholdROC","Version":"2.3","Title":"Threshold Estimation","Description":"Point and interval estimations of optimum thresholds for continuous diagnostic tests (two- and three- state settings).","Published":"2015-12-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"thsls","Version":"0.1","Title":"Three-Stage Least Squares Estimation for Systems of Simultaneous\nEquations","Description":"Fit the Simultaneous Systems of Linear Equations using Three-stage Least Squares.","Published":"2015-04-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"tibble","Version":"1.3.3","Title":"Simple Data Frames","Description":"Provides a 'tbl_df' class (the 'tibble') that provides\n stricter checking and better formatting than the traditional data frame.","Published":"2017-05-28","License":"MIT + 
file LICENSE","snapshot_date":"2017-06-23"} {"Package":"tibbrConnector","Version":"1.5.1","Title":"R Interface to TIBCO 'tibbr'","Description":"Post messages to tibbr from within R.","Published":"2016-12-15","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"TickExec","Version":"1.1","Title":"Execution Functions for Tick Data Back Test","Description":"Functions to execute orders in backtesting using tick data. A testing platform was established by the four major execution functions, namely 'LimitBuy', 'LimitSell', 'MarketBuy' and 'MarketSell', which enclose all tedious aspects (such as queueing for order executions and calculating actual executed volumes) of order execution using tick data, so that one can focus on the logic of strategies rather than their execution.","Published":"2015-05-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"tictactoe","Version":"0.2.2","Title":"Tic-Tac-Toe Game","Description":"\n Implements a tic-tac-toe game to play on the console, either with human or AI players.\n Various levels of AI players are trained through the Q-learning algorithm.","Published":"2017-05-26","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"tictoc","Version":"1.0","Title":"Functions for timing R scripts, as well as implementations of\nStack and List structures","Description":"This package provides the timing functions 'tic' and 'toc' that\n can be nested. One can record all timings while a complex script is\n running, and examine the values later. It is also possible to instrument\n the timing calls with custom callbacks. 
In addition, this package provides\n class 'Stack', implemented as a vector, and class 'List', implemented as a\n list, both of which support operations 'push', 'pop', 'first', 'last' and\n 'clear'.","Published":"2014-06-17","License":"Apache License (== 2.0) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"TiddlyWikiR","Version":"1.0.1","Title":"Create dynamic reports using a TiddlyWiki template","Description":"Utilities to generate wiki reports in TiddlyWiki format.","Published":"2013-12-31","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"TideCurves","Version":"0.0.2","Title":"Analysis and Prediction of Tides","Description":"Tidal analysis of evenly spaced observed time series (time step 1 to 60 min) with or\n without shorter gaps.\n The analysis should preferably cover an observation period of at least 19 years.\n For shorter periods low frequency constituents are not taken into account, in accordance with the Rayleigh-Criterion.\n The main objective of this package is to synthesize or predict a tidal time series.","Published":"2017-06-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"TideHarmonics","Version":"0.1-1","Title":"Harmonic Analysis of Tides","Description":"Implements harmonic analysis of tidal and sea-level data.\n Over 400 harmonic tidal constituents can be estimated, all with \n daily nodal corrections. Time-varying mean sea-levels can also\n be used.","Published":"2017-05-04","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"Tides","Version":"1.2","Title":"Quasi-Periodic Time Series Characteristics","Description":"Calculate Characteristics of Quasi-Periodic Time Series, e.g. 
Estuarine Water Levels.","Published":"2016-09-16","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"TideTables","Version":"0.0.1","Title":"Tide Analysis and Prediction of Predominantly Semi-Diurnal Tides","Description":"Tide analysis and prediction of predominantly semi-diurnal tides\n with two high waters and two low waters during one lunar day (~24.842 hours,\n ~1.035 days). The analysis should preferably cover an observation period of at\n least 19 years. For shorter periods, for example, the nodal cycle can not be\n taken into account, which particularly affects the height calculation. The main\n objective of this package is to produce tide tables.","Published":"2015-12-01","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"tidycensus","Version":"0.1.2","Title":"Load US Census Boundary and Attribute Data as 'tidyverse' and\n'sf'-Ready Data Frames","Description":"An integrated R interface to the decennial US Census and American Community Survey APIs and\n the US Census Bureau's geographic boundary files. Allows R users to return Census and ACS data as\n tidyverse-ready data frames, and optionally returns a list-column with feature geometry for many \n geographies. ","Published":"2017-06-20","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"tidyjson","Version":"0.2.2","Title":"A Grammar for Turning 'JSON' into Tidy Tables","Description":"An easy and consistent way to turn 'JSON' into tidy data frames\n that are natural to work with in 'dplyr', 'ggplot2' and other tools.","Published":"2017-04-21","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"tidyquant","Version":"0.5.1","Title":"Tidy Quantitative Financial Analysis","Description":"Bringing financial analysis to the 'tidyverse'. The 'tidyquant' \n package provides a convenient wrapper to various 'xts', 'zoo', 'quantmod', 'TTR' \n and 'PerformanceAnalytics' package \n functions and returns the objects in the tidy 'tibble' format. 
The main \n advantage is being able to use quantitative functions with the 'tidyverse'\n functions including 'purrr', 'dplyr', 'tidyr', 'ggplot2', 'lubridate', etc. See \n the 'tidyquant' website for more information, documentation and examples.","Published":"2017-04-24","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"tidyr","Version":"0.6.3","Title":"Easily Tidy Data with 'spread()' and 'gather()' Functions","Description":"An evolution of 'reshape2'. It's designed specifically for data\n tidying (not general reshaping or aggregating) and works well with\n 'dplyr' data pipelines.","Published":"2017-05-15","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"tidyRSS","Version":"1.2.1","Title":"Tidy RSS for R","Description":"\n With the objective of including data from RSS feeds into your analysis, 'tidyRSS' parses RSS and Atom XML feeds and returns a tidy data frame.","Published":"2017-06-16","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"tidytext","Version":"0.1.3","Title":"Text Mining using 'dplyr', 'ggplot2', and Other Tidy Tools","Description":"Text mining for word processing and sentiment analysis using\n 'dplyr', 'ggplot2', and other tidy tools.","Published":"2017-06-19","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"tidyverse","Version":"1.1.1","Title":"Easily Install and Load 'Tidyverse' Packages","Description":"The 'tidyverse' is a set of packages that work in harmony\n because they share common data representations and 'API' design. This\n package is designed to make it easy to install and load multiple\n 'tidyverse' packages in a single step. Learn more about the 'tidyverse'\n at .","Published":"2017-01-27","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"tidyxl","Version":"0.2.1","Title":"Read Untidy Excel Files","Description":"Imports non-tabular data from Excel files into R. 
Exposes cell content,\n position and formatting in a tidy structure for further manipulation.\n Provides functions for selecting cells by position and relative position,\n and for associating data cells with header cells by proximity in given\n directions. Supports '.xlsx' and '.xlsm' via the embedded 'RapidXML' C++\n library . Does not support '.xlsb' or\n '.xls'.","Published":"2017-01-29","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"tiff","Version":"0.1-5","Title":"Read and write TIFF images","Description":"This package provides an easy and simple way to read, write and display bitmap images stored in the TIFF format. It can read and write both files and in-memory raw vectors.","Published":"2013-09-04","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"tiger","Version":"0.2.3.1","Title":"TIme series of Grouped ERrors","Description":"Temporally resolved groups of typical differences (errors) between two time series are determined and visualized ","Published":"2014-08-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"tigerhitteR","Version":"1.1.0","Title":"Pre-Process of Time Series Data Set in R","Description":"Pre-process for discrete time series data set which is not continuous at the column\n of 'date'. Refilling records of missing 'date' and other columns to the hollow data set so that\n final data set is able to be dealt with time series analysis.","Published":"2016-10-03","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"tigerstats","Version":"0.3","Title":"R Functions for Elementary Statistics","Description":"A collection of data sets and functions that are useful in the\n teaching of statistics at an elementary level to students who may have\n little or no previous experience with the command line. The functions for\n elementary inferential procedures follow a uniform interface for user\n input. 
Some of the functions are instructional applets that can only be\n run in the RStudio integrated development environment with package\n 'manipulate' installed. Other instructional applets are Shiny apps\n that may be run locally. In teaching, the package is used alongside the\n packages 'mosaic', 'mosaicData' and 'abd', which are therefore listed as\n dependencies.","Published":"2016-12-18","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"tigger","Version":"0.2.9.999","Title":"R Tools for Inferring New Immunoglobulin Alleles from Rep-Seq\nData","Description":"Infers the V genotype of an individual from immunoglobulin (Ig)\n repertoire-sequencing (Rep-Seq) data, including detection of any novel\n alleles. This information is then used to correct existing V allele calls\n from among the sample sequences.","Published":"2017-05-16","License":"CC BY-SA 4.0","snapshot_date":"2017-06-23"} {"Package":"tightClust","Version":"1.0","Title":"Tight Clustering","Description":"This package contains functions for the tight clustering\n algorithm.","Published":"2012-08-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"tigreBrowserWriter","Version":"0.1.4","Title":"'tigreBrowser' Database Writer","Description":"Write modelling results into a database for\n 'tigreBrowser', a web-based tool for browsing figures and summary\n data of independent model fits, such as Gaussian process models\n fitted for each gene or other genomic element. 
The browser is\n available at .","Published":"2016-12-12","License":"AGPL-3","snapshot_date":"2017-06-23"} {"Package":"tigris","Version":"0.5.3","Title":"Load Census TIGER/Line Shapefiles into R","Description":"Download TIGER/Line shapefiles from the United States Census Bureau\n and load into R as 'SpatialDataFrame' or 'sf' objects.","Published":"2017-05-26","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"tikzDevice","Version":"0.10-1","Title":"R Graphics Output in LaTeX Format","Description":"Provides a graphics output device for R that records plots\n in a LaTeX-friendly format. The device transforms plotting\n commands issued by R functions into LaTeX code blocks. When\n included in a LaTeX document, these blocks are interpreted with\n the help of 'TikZ'---a graphics package for TeX and friends\n written by Till Tantau. Using the 'tikzDevice', the text of R\n plots can contain LaTeX commands such as mathematical formula.\n The device also allows arbitrary LaTeX code to be inserted into\n the output stream.","Published":"2016-02-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"tilegramsR","Version":"0.2.0","Title":"R Spatial Data for Tilegrams","Description":"R spatial objects for Tilegrams.\n Tilegrams are tiled maps where the region size is proportional to\n the certain characteristics of the dataset.","Published":"2017-03-29","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"tileHMM","Version":"1.0-7","Title":"Hidden Markov Models for ChIP-on-Chip Analysis","Description":"Methods and classes to build HMMs\n that are suitable for the analysis of ChIP-chip data. 
The\n provided parameter estimation methods include the Baum-Welch\n algorithm and Viterbi training as well as a combination of\n both.","Published":"2015-07-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"TileManager","Version":"0.1.11","Title":"Tile Manager","Description":"Tools for creating and detecting tiling schemes for raster data sets.","Published":"2017-01-28","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"TilePlot","Version":"1.3.1","Title":"Characterization of functional genes in complex microbial\ncommunities using tiling DNA microarrays","Description":"This package is intended for processing the output from\n functional gene tiling DNA microarray experiments. It produces\n hybridization pattern plots for each gene on the array, and\n statistics for each gene including mean probe intensity, median\n probe intensity, bright probe fraction, bright segment length\n dependent score, bright probe mean intensity, and bright probe\n median intensity. Output is generated in order of bright\n segment length dependent score in both a latex/eps format and\n tab-delimited text file. The package works in two modes: single\n array, and comparison of two arrays. 
Array comparison includes\n array comparison statistics: median of logarithm of one array\n probe divided by its counterpart on the other array, median\n absolute deviation of that value, and the binomial test to see\n whether the genes are equally abundant in both arrays.","Published":"2013-12-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"tilting","Version":"1.1.1","Title":"Variable Selection via Tilted Correlation Screening Algorithm","Description":"Implements an algorithm for variable selection in high-dimensional linear regression using the \"tilted correlation\", a new way of measuring the contribution of each variable to the response which takes into account high correlations among the variables in a data-driven way.","Published":"2016-12-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"time2event","Version":"0.1.0","Title":"Survival and Competing Risk Analyses with Time-to-Event Data as\nCovariates","Description":"Cox proportional hazard and competing risk regression analyses can be performed with time-to-event data as covariates.","Published":"2016-07-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"timeDate","Version":"3012.100","Title":"Rmetrics - Chronological and Calendar Objects","Description":"Environment for teaching \n\t\"Financial Engineering and Computational Finance\".\n\tManaging chronological and calendar objects.","Published":"2015-01-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"timedelay","Version":"1.0.7","Title":"Time Delay Estimation for Stochastic Time Series of\nGravitationally Lensed Quasars","Description":"We provide a toolbox to estimate the time delay between the brightness time series of gravitationally lensed quasar images via Bayesian and profile likelihood approaches. The model is based on a state-space representation for irregularly observed time series data generated from a latent continuous-time Ornstein-Uhlenbeck process. 
Our Bayesian method adopts scientifically motivated hyper-prior distributions and a Metropolis-Hastings within Gibbs sampler, producing posterior samples of the model parameters that include the time delay. A profile likelihood of the time delay is a simple approximation to the marginal posterior distribution of the time delay. Both Bayesian and profile likelihood approaches complement each other, producing almost identical results; the Bayesian way is more principled but the profile likelihood is easier to implement.","Published":"2017-05-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"timekit","Version":"0.3.0","Title":"A Collection of Tools for Working with Time Series in R","Description":"\n Get the time series index, signature, and summary from time series objects and\n time-based tibbles. Create future time series based on properties of \n existing time series index. \n Coerce between time-based tibbles ('tbl') and 'xts', 'zoo', and 'ts'. ","Published":"2017-05-16","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"timeline","Version":"0.9","Title":"Timelines for a Grammar of Graphics","Description":"Create timeline plots.","Published":"2013-05-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"timelineR","Version":"0.1.0","Title":"Visualization for Time Series Data","Description":"Helps to visualize multi-variate time-series having numeric and factor variables.\n You can use the package for visual analysis of data by plotting the data for each variable in the desired order and study\n interaction between a factor and a numeric variable by creating overlapping plots.","Published":"2017-05-25","License":"AGPL-3","snapshot_date":"2017-06-23"} {"Package":"timelineS","Version":"0.1.1","Title":"Timeline and Time Duration-Related Tools","Description":"An easy tool for plotting annotated timelines, grouped timelines, and exploratory graphics (boxplot/histogram/density plot/scatter plot/line plot). 
Filter, summarize date data by duration and convert to calendar units.","Published":"2016-08-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"TimeMachine","Version":"1.2","Title":"Time Machine","Description":"Implements the Time Machine, a simulation approach for stochastic\n trees.","Published":"2014-01-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"timeordered","Version":"0.9.8","Title":"Time-ordered and time-aggregated network analyses","Description":"Methods for incorporating time into network analysis. Construction of time-ordered networks (temporal graphs). Shortest-time and shortest-path-length analyses. Resource spread calculations. Data resampling and rarefaction for null model construction. Reduction to time-aggregated networks with variable window sizes; application of common descriptive statistics to these networks. Vector clock latencies. Plotting functionalities.","Published":"2015-01-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"TimeProjection","Version":"0.2.0","Title":"Time Projections","Description":"Extract useful time components of a date object, such as\n day of week, weekend, holiday, day of month, etc, and put it in\n a data frame. This can be used to create many predictor\n variables out of a single time variable, which can then be used\n in a regression or decision tree. Also includes function\n plotCalendarHeatmap which draws a calendar and overlays a\n heatmap based on values.","Published":"2013-02-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"timereg","Version":"1.9.1","Title":"Flexible Regression Models for Survival Data","Description":"Programs for Martinussen and Scheike (2006), `Dynamic Regression\n Models for Survival Data', Springer Verlag. Plus more recent developments.\n Additive survival model, semiparametric proportional odds model, fast cumulative\n residuals, excess risk models and more. Flexible competing risks regression\n including GOF-tests. 
Two-stage frailty modelling. \n PLS for the additive risk model. Lasso in the 'ahaz' package.","Published":"2017-05-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"timeROC","Version":"0.3","Title":"Time-Dependent ROC Curve and AUC for Censored Survival Data","Description":"Estimation of time-dependent ROC curve and area under time dependent ROC curve (AUC) in the presence of censored data, with or without competing risks. Confidence intervals of AUCs and tests for comparing AUCs of two rival markers measured on the same subjects can be computed, using the iid-representation of the AUC estimator. Plot functions for time-dependent ROC curves and AUC curves are provided. Time-dependent Positive Predictive Values (PPV) and Negative Predictive Values (NPV) can also be computed.","Published":"2015-03-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"timesboot","Version":"1.0","Title":"Bootstrap computations for time series objects","Description":"Computes bootstrap CI for the sample ACF and periodogram","Published":"2013-08-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"timeSeq","Version":"1.0.2","Title":"Detecting Differentially Expressed Genes in Time Course RNA-Seq\nData","Description":"Uses a negative binomial mixed-effects (NBME) model to detect\n nonparallel differential expression(NPDE) genes and parallel differential\n expression(PDE) genes in the time course RNA-seq data.","Published":"2016-10-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"timeSeries","Version":"3022.101.2","Title":"Rmetrics - Financial Time Series Objects","Description":"Environment for teaching \n\t\"Financial Engineering and Computational Finance\".\n\tManaging financial time series objects.","Published":"2015-12-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"timeseriesdb","Version":"0.2.1","Title":"Manage Time Series with R and PostgreSQL","Description":"Store and organize a large amount of 
low frequency time series data. \n The package was designed to manage a large catalog of official statistics which are\n typically published on a monthly, quarterly or yearly basis. Thus timeseriesdb is\n optimized to handle a large number of lower frequency time series as opposed to a\n smaller amount of high frequency time series such as real time data from measuring devices.\n Hence timeseriesdb provides the opportunity to store extensive multi-lingual\n meta information. The package also provides a web GUI to explore the underlying\n PostgreSQL database interactively. ","Published":"2015-04-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"timetools","Version":"1.11.4","Title":"Seasonal/Sequential (Instants/Durations, Even or not) Time\nSeries","Description":"Objects to manipulate sequential and seasonal time series. Sequential time series based on time instants and time durations are handled. Both can be regularly or unevenly spaced (overlapping durations are allowed). Only POSIX* formats are used for dates and times. The following classes are provided: 'POSIXcti', 'POSIXctp', 'TimeIntervalDataFrame', 'TimeInstantDataFrame', 'SubtimeDataFrame'; methods to switch from one class to another and to modify the time support of series (hourly time series to daily time series for instance) are also defined. Tools provided can be used for instance to handle environmental monitoring data (not always produced on a regular time base).","Published":"2017-04-10","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"timetree","Version":"1.0","Title":"Interface to the TimeTree of Life Webpage","Description":"An interface to the TimeTree of Life Webpage (www.timetree.org). TimeTree is a public database for information on the evolutionary timescale of life. 
This package includes functions for searching divergence time for taxa or all nodes of a phylogeny.","Published":"2015-04-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"timevis","Version":"0.4","Title":"Create Interactive Timeline Visualizations in R","Description":"Create rich and fully interactive timeline visualizations.\n Timelines can be included in Shiny apps and R markdown documents, or viewed\n from the R console and RStudio Viewer. 'timevis' includes an extensive API\n to manipulate a timeline after creation, and supports getting data out of\n the visualization into R. Based on the 'vis.js' Timeline module and the\n 'htmlwidgets' R package.","Published":"2016-09-17","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"TimeWarp","Version":"1.0.15","Title":"Date Calculations and Manipulation","Description":"Date sequence, relative date calculations, and date manipulation with business days\n and holidays. Works with Date and POSIXt classes.","Published":"2016-07-22","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"timma","Version":"1.2.1","Title":"Target Inhibition Interaction using Maximization and\nMinimization Averaging","Description":"Prediction and ranking of drug combinations based on their drug-target interaction profiles and single-drug sensitivities in a given cancer cell line or patient-derived sample.","Published":"2015-02-28","License":"Artistic License 2.0","snapshot_date":"2017-06-23"} {"Package":"TIMP","Version":"1.13.0","Title":"Fitting Separable Nonlinear Models in Spectroscopy and\nMicroscopy","Description":"A problem-solving environment (PSE) for fitting\n separable nonlinear models to measurements arising in physics\n and chemistry experiments; has been extensively applied to\n time-resolved spectroscopy and FLIM-FRET data.","Published":"2015-10-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"timsac","Version":"1.3.5","Title":"Time Series Analysis and Control 
Package","Description":"Functions for statistical analysis, prediction and control of time series.","Published":"2016-09-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Tinflex","Version":"1.2","Title":"A Universal Non-Uniform Random Number Generator","Description":"A universal non-uniform random number generator\n for quite arbitrary distributions with piecewise twice\n differentiable densities.","Published":"2017-06-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"TInPosition","Version":"0.13.6","Title":"Inference tests for TExPosition","Description":"Non-parametric resampling-based inference tests for TExPosition.","Published":"2013-12-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"tinsel","Version":"0.0.1","Title":"Transform Functions using Decorators","Description":"Instead of nesting function calls, annotate and transform \n functions using \"#.\" comments.","Published":"2016-11-17","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"tint","Version":"0.0.3","Title":"Tint is not Tufte","Description":"A 'tufte'-alike style for 'rmarkdown'.","Published":"2016-10-16","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"TipDatingBeast","Version":"1.0-5","Title":"Using Tip Dates with Phylogenetic Trees in BEAST (Software for\nPhylogenetic Analysis)","Description":"Assist performing tip-dating of phylogenetic trees with BEAST. ","Published":"2016-09-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"tipom","Version":"1.0.2-1","Title":"Automated measure-based classification for flint tools","Description":"TIPOM is based on a methodology that was developed in the\n 1960s by Bernardino Bagolini. 
The basic idea is to use\n the three simple dimensions of length, width and thickness\n of each lithic artefact to classify them in discrete\n groups and infer their function.","Published":"2013-08-03","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"TippingPoint","Version":"1.1.0","Title":"Enhanced Tipping Point Displays the Results of Sensitivity\nAnalyses for Missing Data","Description":"Using the idea of \"tipping point\" (proposed in\n Gregory Campbell, Gene Pennello and Lilly Yue(2011)\n ) to visualize the results\n of sensitivity analysis for missing data, the package provides\n a set of functions to list out all the possible combinations\n of the values of missing data in two treatment arms, calculate\n corresponding estimated treatment effects and p values and draw\n a colored heat-map to visualize them. It could deal with randomized\n experiments with a binary outcome or a continuous outcome. In addition,\n the package provides a visualized method to compare various imputation\n methods by adding the rectangles or convex hulls on the basic plot.","Published":"2016-05-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"tis","Version":"1.32","Title":"Time Indexes and Time Indexed Series","Description":"Functions and S3 classes for time indexes and time indexed\n series, which are compatible with FAME frequencies.","Published":"2017-01-26","License":"Unlimited","snapshot_date":"2017-06-23"} {"Package":"titan","Version":"1.0-16","Title":"Titration analysis for mass spectrometry data","Description":"GUI to analyze mass spectrometric data on the relative\n abundance of two substances from a titration series.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"TITAN2","Version":"2.1","Title":"Threshold Indicator Taxa Analysis","Description":"Uses indicator species scores across binary partitions of\n a sample set to detect congruence in taxon-specific changes of abundance\n and occurrence 
frequency along an environmental gradient as evidence of\n an ecological community threshold. Relevant references include: Baker,\n ME and RS King. 2010. A new method for detecting and interpreting\n biodiversity and ecological community thresholds. Methods in Ecology and\n Evolution 1(1): 25-37. King, RS and ME Baker. 2010. Considerations for\n identifying and interpreting ecological community thresholds. Journal\n of the North American Benthological Society 29(3):998-1008. Baker ME\n and RS King. 2013. Of TITAN and straw men: an appeal for greater\n understanding of community data. Freshwater Science 32(2):489-506.","Published":"2015-12-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"titanic","Version":"0.1.0","Title":"Titanic Passenger Survival Data Set","Description":"This data set provides information on the fate of passengers on\n the fatal maiden voyage of the ocean liner \"Titanic\", summarized according\n to economic status (class), sex, age and survival. Whereas the base R\n Titanic data found by calling data(\"Titanic\") is an array resulting from\n cross-tabulating 2201 observations, these data sets are the individual\n non-aggregated observations and are formatted in a machine learning context\n with a training sample, a testing sample, and two additional data sets\n that can be used for deeper machine learning analysis. These data sets\n are also the data sets downloaded from the Kaggle competition and thus\n lower the barrier to entry for users new to R or machine learning.","Published":"2015-08-31","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"titrationCurves","Version":"0.1.0","Title":"Acid/Base, Complexation, Redox, and Precipitation Titration\nCurves","Description":"A collection of functions to plot acid/base titration \n curves (pH vs. volume of titrant), complexation titration curves \n (pMetal vs. 
volume of EDTA), redox titration curves (potential \n vs.volume of titrant), and precipitation titration curves (either \n pAnalyte or pTitrant vs. volume of titrant). Options include the \n titration of mixtures, the ability to overlay two or more \n titration curves, and the ability to show equivalence points.","Published":"2016-02-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"TKF","Version":"0.0.8","Title":"Pairwise Distance Estimation with TKF91 and TKF92 Model","Description":"Pairwise evolutionary distance estimation between protein sequences with the TKF91 and TKF92 model, which consider all the possible paths of transforming from one sequence to another.","Published":"2015-04-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"tkrgl","Version":"0.7","Title":"TK widget tools for rgl package","Description":"TK widget tools for rgl package ","Published":"2011-11-27","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"tkrplot","Version":"0.0-23","Title":"TK Rplot","Description":"simple mechanism for placing R graphics in a Tk widget","Published":"2011-11-19","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"Tlasso","Version":"1.0.1","Title":"Non-Convex Optimization and Statistical Inference for Sparse\nTensor Graphical Models","Description":"An optimal alternating optimization algorithm for estimation of precision matrices of sparse tensor graphical models, and an efficient inference procedure for support recovery of the precision matrices.","Published":"2016-09-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"TLBC","Version":"1.0","Title":"Two-Level Behavior Classification","Description":"Contains functions for training and applying two-level random forest and hidden Markov models for human behavior classification from raw tri-axial accelerometer and/or GPS data. 
Includes functions for training a two-level model, applying the model to data, and computing performance.","Published":"2015-10-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"TLdating","Version":"0.1.3","Title":"Tools for Thermoluminescences Dating","Description":"A series of function to make thermoluminescence dating using the MAAD or the SAR protocol.\n This package completes the R package \"Luminescence.\"","Published":"2016-08-31","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"tlemix","Version":"0.1.3","Title":"Trimmed Maximum Likelihood Estimation","Description":"TLE implements a general framework for robust fitting of\n finite mixture models. Parameter estimation is performed using\n the EM algorithm.","Published":"2013-08-23","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"tlm","Version":"0.1.5","Title":"Effects under Linear, Logistic and Poisson Regression Models\nwith Transformed Variables","Description":"Computation of effects under linear, logistic and Poisson regression models with transformed variables. Logarithm and power transformations are allowed. Effects can be displayed both numerically and graphically in both the original and the transformed space of the variables.","Published":"2017-04-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"tlmec","Version":"0.0-2","Title":"Linear Student-t Mixed-Effects Models with Censored Data","Description":"Fit a linear mixed effects model for censored data with\n Student-t or normal distributions. The errors are assumed\n independent and identically distributed.","Published":"2012-01-28","License":"GPL (>= 3.0)","snapshot_date":"2017-06-23"} {"Package":"TLMoments","Version":"0.7.2.1","Title":"Calculate TL-Moments and Convert Them to Distribution Parameters","Description":"Calculates empirical TL-moments (trimmed L-moments) of arbitrary \n order and trimming, and converts them to distribution parameters. 
","Published":"2017-04-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"tm","Version":"0.7-1","Title":"Text Mining Package","Description":"A framework for text mining applications within R.","Published":"2017-03-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"tm.plugin.alceste","Version":"1.1","Title":"Import texts from files in the Alceste format using the tm text\nmining framework","Description":"This package provides a tm Source to create corpora from\n a corpus prepared in the format used by the Alceste application (i.e.\n a single text file with inline meta-data). It is able to import both\n text contents and meta-data (starred) variables.","Published":"2014-06-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"tm.plugin.dc","Version":"0.2-8","Title":"Text Mining Distributed Corpus Plug-In","Description":"A plug-in for the text mining framework tm to support text mining \n in a distributed way. The package provides a convenient interface for\n handling distributed corpus objects based on distributed list objects.","Published":"2015-09-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"tm.plugin.europresse","Version":"1.4","Title":"Import Articles from 'Europresse' Using the 'tm' Text Mining\nFramework","Description":"Provides a 'tm' Source to create corpora from\n articles exported from the 'Europresse' content provider as\n HTML files. It is able to read both text content and meta-data\n information (including source, date, title, author and pages).","Published":"2016-08-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"tm.plugin.factiva","Version":"1.6","Title":"Import Articles from 'Factiva' Using the 'tm' Text Mining\nFramework","Description":"Provides a 'tm' Source to create corpora from\n articles exported from the Dow Jones 'Factiva' content provider as\n XML or HTML files. 
It is able to read both text content and meta-data\n information (including source, date, title, author, subject,\n geographical coverage, company, industry, and various\n provider-specific fields).","Published":"2017-02-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"tm.plugin.lexisnexis","Version":"1.3","Title":"Import Articles from 'LexisNexis' Using the 'tm' Text Mining\nFramework","Description":"Provides a 'tm' Source to create corpora from\n articles exported from the 'LexisNexis' content provider as\n HTML files. It is able to read both text content and meta-data\n information (including source, date, title, author and pages).\n Note that the file format is highly unstable: there is no warranty\n that this package will work for your corpus, and you may have\n to adjust the code to adapt it to your particular format.","Published":"2016-06-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"tm.plugin.mail","Version":"0.1","Title":"Text Mining E-Mail Plug-In","Description":"A plug-in for the tm text mining framework providing mail handling\n functionality.","Published":"2014-06-11","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"tm.plugin.webmining","Version":"1.3","Title":"Retrieve Structured, Textual Data from Various Web Sources","Description":"Facilitate text retrieval from feed\n formats like XML (RSS, ATOM) and JSON. Also direct retrieval from\n HTML is supported. As most (news) feeds only incorporate small\n fractions of the original text tm.plugin.webmining even retrieves\n and extracts the text of the original text source.","Published":"2015-05-11","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"tmap","Version":"1.10","Title":"Thematic Maps","Description":"Thematic maps are geographical maps in which spatial data distributions are visualized. 
This package offers a flexible, layer-based, and easy to use approach to create thematic maps, such as choropleths and bubble maps.","Published":"2017-05-11","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"tmaptools","Version":"1.2-1","Title":"Thematic Map Tools","Description":"Set of tools for reading and processing spatial data. The aim is to supply the workflow to create thematic maps. This package also facilitates 'tmap', the package for visualizing thematic maps.","Published":"2017-05-11","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"TMB","Version":"1.7.10","Title":"Template Model Builder: A General Random Effect Tool Inspired by\n'ADMB'","Description":"With this tool, a user should be able to quickly implement complex\n random effect models through simple C++ templates. The package combines\n 'CppAD' (C++ automatic differentiation), 'Eigen' (templated matrix-vector\n library) and 'CHOLMOD' (sparse matrix routines available from R) to obtain an\n efficient implementation of the applied Laplace approximation with exact\n derivatives. Key features are: Automatic sparseness detection, parallelism\n through 'BLAS' and parallel user templates.","Published":"2017-05-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"tmcn","Version":"0.2-8","Title":"A Text Mining Toolkit for Chinese","Description":"A Text mining toolkit for Chinese, which includes facilities for \n Chinese string processing, Chinese NLP supporting, encoding detecting and \n converting. Moreover, it provides some functions to support 'tm' package \n in Chinese.","Published":"2017-06-12","License":"LGPL","snapshot_date":"2017-06-23"} {"Package":"TMDb","Version":"1.0","Title":"Access to TMDb API - Apiary","Description":"Provides an R-interface to the TMDb API (see TMDb API on ). 
The Movie Database (TMDb) is a popular user editable database for movies and TV shows (see ).","Published":"2015-06-16","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"tmg","Version":"0.3","Title":"Truncated Multivariate Gaussian Sampling","Description":"Random number generation of truncated multivariate Gaussian distributions using Hamiltonian Monte Carlo. The truncation is defined using linear and/or quadratic polynomials.","Published":"2015-02-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Tmisc","Version":"0.1.17","Title":"Turner Miscellaneous","Description":"Miscellaneous utility functions for data manipulation,\n data tidying, and working with gene expression data.","Published":"2017-06-01","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"tmle","Version":"1.2.0-5","Title":"Targeted Maximum Likelihood Estimation","Description":"Targeted maximum likelihood estimation of point treatment effects (Targeted Maximum Likelihood Learning, The International Journal of Biostatistics, 2(1), 2006). This version adds the tmleMSM() function to the package, for estimating the parameters of a marginal structural model for a binary point treatment effect. The tmle() function calculates the adjusted marginal difference in mean outcome associated with a binary point treatment, for continuous or binary outcomes. Relative risk and odds ratio estimates are also reported for binary outcomes. Missingness in the outcome is allowed, but not in treatment assignment or baseline covariate values. Effect estimation stratified by a binary mediating variable is also available. The population mean is calculated when there is missingness, and no variation in the treatment assignment. An ID argument can be used to identify repeated measures. Default settings call 'SuperLearner' to estimate the Q and g portions of the likelihood, unless values or a user-supplied regression function are passed in as arguments. 
","Published":"2017-01-07","License":"BSD_3_clause + file LICENSE | GPL-2","snapshot_date":"2017-06-23"} {"Package":"tmle.npvi","Version":"0.10.0","Title":"Targeted Learning of a NP Importance of a Continuous Exposure","Description":"Targeted minimum loss estimation (TMLE) of a non-parametric variable importance measure of a continuous exposure 'X' on an outcome 'Y', taking baseline covariates 'W' into account. ","Published":"2015-05-22","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"tmlenet","Version":"0.1.0","Title":"Targeted Maximum Likelihood Estimation for Network Data","Description":"Estimation of average causal effects for single time point\n interventions in network-dependent data (e.g., in the presence of spillover\n and/or interference). Supports arbitrary interventions (static or\n stochastic). Implemented estimation algorithms are the targeted maximum\n likelihood estimation (TMLE), the inverse-probability-of-treatment (IPTW)\n estimator and the parametric G-computation formula estimator. Asymptotically\n correct influence-curve-based confidence intervals are constructed for the\n TMLE and IPTW. The data are assumed to consist of rows of unit-specific\n observations, each row i represented by variables (F.i,W.i,A.i,Y.i), where\n F.i is a vector of friend IDs of unit i (i's network), W.i is a vector of \n i's baseline covariates, A.i is i's exposure (can be binary, categorical or\n continuous) and Y.i is i's binary outcome. Exposure A.i depends on \n (multivariate) user-specified baseline summary measure(s) sW.i, where sW.i\n is any function of i's baseline covariates W.i and the baseline covariates\n of i's friends in F.i. Outcome Y.i depends on sW.i and (multivariate)\n user-specified summary measure(s) sA.i, where sA.i is any function of i's\n baseline covariates and exposure (W.i,A.i) and the baseline covariates and\n exposures of i's friends. The summary measures are defined with functions\n def.sW and def.sA. 
See ?'tmlenet-package' for a general overview.","Published":"2015-09-28","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"tmod","Version":"0.31","Title":"Feature Set Enrichment Analysis for Metabolomics and\nTranscriptomics","Description":"Methods and feature set definitions for feature or gene set\n enrichment analysis in transcriptional and metabolic profiling data.\n Package includes tests for enrichment based on ranked lists of features,\n functions for visualisation and multivariate functional analysis.","Published":"2016-09-08","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"tmpm","Version":"1.0.3","Title":"Trauma Mortality Prediction Model","Description":"Trauma Mortality prediction for ICD-9, ICD-10, and AIS lexicons in\n long or wide format based on Dr. Alan Cook's tmpm mortality model.","Published":"2016-02-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"tmvnsim","Version":"1.0-2","Title":"Truncated Multivariate Normal Simulation","Description":"Importance sampling from the truncated multivariate normal using the GHK (Geweke-Hajivassiliou-Keane) simulator.\n Unlike Gibbs sampling which can get stuck in one truncation sub-region depending on initial values, this package allows \n truncation based on disjoint regions that are created by truncation of absolute values. The GHK algorithm uses simple Cholesky\n transformation followed by recursive simulation of univariate truncated normals hence there are also no convergence issues. \n Importance sample is returned along with sampling weights, based on which, one can calculate integrals over truncated regions\n for multivariate normals.","Published":"2016-12-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"tmvtnorm","Version":"1.4-10","Title":"Truncated Multivariate Normal and Student t Distribution","Description":"Random number generation for the truncated multivariate normal and Student t distribution. 
\n Computes probabilities, quantiles and densities, \n including one-dimensional and bivariate marginal densities. Computes first and second moments (i.e. mean and covariance matrix) for the double-truncated multinormal case.","Published":"2015-08-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"tnam","Version":"1.6.5","Title":"Temporal Network Autocorrelation Models (TNAM)","Description":"Temporal and cross-sectional network autocorrelation models. These are models for variation in attributes of nodes nested in a network (e.g., drinking behavior of adolescents nested in a school class, or democracy versus autocracy of countries nested in the network of international relations). These models can be estimated for cross-sectional data or panel data, with complex network dependencies as predictors, multiple networks and covariates, arbitrary outcome distributions, and random effects or time trends. Basic references: Doreian, Teuter and Wang (1984) ; Hays, Kachi and Franzese (2010) ; Leenders, Roger Th. A. J. 
(2002) .","Published":"2017-04-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"tnet","Version":"3.0.14","Title":"Software for Analysis of Weighted, Two-Mode, and Longitudinal\nNetworks","Description":"R package for analyzing weighted, two-mode, and longitudinal networks.","Published":"2015-11-18","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"Tnseq","Version":"0.1.2","Title":"Identification of Conditionally Essential Genes in Transposon\nSequencing Studies","Description":"Identification of conditionally essential genes using high-throughput sequencing data from transposon mutant libraries.","Published":"2017-04-13","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"toaster","Version":"0.5.5","Title":"Big Data in-Database Analytics that Scales with Teradata Aster\nDistributed Platform","Description":"A consistent set of tools to perform in-database analytics\n on Teradata Aster Big Data Discovery Platform. toaster (a.k.a 'to Aster')\n embraces simple 2-step approach: compute in Aster - visualize and analyze\n in R. Its `compute` functions use combination of parallel SQL, SQL-MR and\n SQL-GR executing in Aster database - highly scalable parallel\n and distributed analytical platform. Then `create` functions visualize\n results with boxplots, scatterplots, histograms, heatmaps, word clouds,\n maps, networks, or slope graphs. Advanced options such as faceting, coloring,\n labeling, and others are supported with most visualizations.","Published":"2017-01-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"TOC","Version":"0.0-4","Title":"Total Operating Characteristic Curve and ROC Curve","Description":"Construction of the Total Operating Characteristic (TOC) Curve and the Receiver (aka Relative) Operating Characteristic (ROC) Curve for spatial and non-spatial data. 
The TOC method is a modification of the ROC method which measures the ability of an index variable to diagnose either presence or absence of a characteristic. The diagnosis depends on whether the value of an index variable is above a threshold. Each threshold generates a two-by-two contingency table, which contains four entries: hits (H), misses (M), false alarms (FA), and correct rejections (CR). While ROC shows for each threshold only two ratios, H/(H + M) and FA/(FA + CR), TOC reveals the size of every entry in the contingency table for each threshold (Pontius Jr., R.G., Si, K. 2014. The total operating characteristic to measure diagnostic ability for multiple thresholds. Int. J. Geogr. Inf. Sci. 28 (3), 570-583). ","Published":"2015-12-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"tokenizers","Version":"0.1.4","Title":"A Consistent Interface to Tokenize Natural Language Text","Description":"Convert natural language text into tokens. The tokenizers have a\n consistent interface and are compatible with Unicode, thanks to being built\n on the 'stringi' package. Includes tokenizers for shingled n-grams, skip\n n-grams, words, word stems, sentences, paragraphs, characters, lines, and\n regular expressions.","Published":"2016-08-29","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"tolBasis","Version":"1.0","Title":"Fundamental Definitions and Utilities of the Time Oriented\nLanguage (TOL)","Description":"Imports the fundamental definitions and utilities of the Time Oriented Language (TOL), focused on time series analysis and stochastic processes, and provides the basis for the integration of TOL in R. 
See for more information about the TOL project.","Published":"2015-11-05","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"tolerance","Version":"1.3.0","Title":"Statistical Tolerance Intervals and Regions","Description":"Statistical tolerance limits provide the limits between which we can expect to find a specified proportion of a sampled population with a given level of confidence. This package provides functions for estimating tolerance limits (intervals) for various univariate distributions (binomial, Cauchy, discrete Pareto, exponential, two-parameter exponential, extreme value, hypergeometric, Laplace, logistic, negative binomial, negative hypergeometric, normal, Pareto, Poisson-Lindley, Poisson, uniform, and Zipf-Mandelbrot), Bayesian normal tolerance limits, multivariate normal tolerance regions, nonparametric tolerance intervals, tolerance bands for regression settings (linear regression, nonlinear regression, nonparametric regression, and multivariate regression), and analysis of variance tolerance intervals. Visualizations are also available for most of these settings.","Published":"2017-02-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"toOrdinal","Version":"0.0-6","Title":"Function for Converting Cardinal to Ordinal Numbers by Adding a\nLanguage Specific Ordinal Indicator to the Number","Description":"Function for converting cardinal to ordinal numbers by adding a language specific ordinal indicator (http://en.wikipedia.org/wiki/Ordinal_indicator) to the number. ","Published":"2016-03-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"topicmodels","Version":"0.2-6","Title":"Topic Models","Description":"Provides an interface to the C code for Latent Dirichlet\n\t Allocation (LDA) models and Correlated Topics Models\n\t (CTM) by David M. 
Blei and co-authors and the C++ code\n\t for fitting LDA models using Gibbs sampling by Xuan-Hieu\n\t Phan and co-authors.","Published":"2017-04-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"TopKLists","Version":"1.0.6","Title":"Inference, Aggregation and Visualization for Top-K Ranked Lists","Description":"For multiple ranked input lists (full or partial) representing the same set of N objects, the package TopKLists offers (1) statistical inference on the lengths of informative top-k lists, (2) stochastic aggregation of full or partial lists, and (3) graphical tools for the statistical exploration of input lists, and for the visualization of aggregation results.","Published":"2015-11-13","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"topmodel","Version":"0.7.2-2","Title":"Implementation of the hydrological model TOPMODEL in R","Description":"Set of hydrological functions including an R\n implementation of the hydrological model TOPMODEL, which is\n based on the 1995 FORTRAN version by Keith Beven. From version\n 0.7.0, the package is put into maintenance mode. New functions\n for hydrological analysis are now developed as part of the\n RHydro package. RHydro can be found on R-forge and is built on\n a set of dedicated S4 classes.","Published":"2011-01-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"topologyGSA","Version":"1.4.6","Title":"Gene Set Analysis Exploiting Pathway Topology","Description":"Using Gaussian graphical models we propose a novel approach to\n perform pathway analysis using gene expression. Given the\n structure of a graph (a pathway) we introduce two statistical\n tests to compare the mean and the concentration matrices between\n two groups. 
Specifically, these tests can be performed on the\n graph and on its connected components (cliques).","Published":"2016-09-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"topsis","Version":"1.0","Title":"TOPSIS method for multiple-criteria decision making (MCDM)","Description":"Evaluation of alternatives based on multiple criteria using TOPSIS method.","Published":"2013-09-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"tosls","Version":"1.0","Title":"Instrumental Variables Two Stage Least Squares estimation","Description":"Fit an Instrumental Variables Two Stage Least Squares model","Published":"2014-04-01","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"TOSTER","Version":"0.2.3","Title":"Two One-Sided Tests (TOST) Equivalence Testing","Description":"Two one-sided tests (TOST) procedure to test equivalence for t-tests, correlations,\n and meta-analyses, including power analysis for t-tests and correlations. Allows you to\n specify equivalence bounds in raw scale units or in terms of effect sizes.","Published":"2017-03-11","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"TotalCopheneticIndex","Version":"0.1","Title":"Total Cophenetic Index","Description":"Quantifies how balanced a phylogenetic tree is, using the Total Cophenetic Index\n - per A. Mir, F. Rossello, L. A. Rotger (2013), A new balance index for phylogenetic trees.\n Math. Biosci. 241, 125-136 .","Published":"2016-03-26","License":"Unlimited","snapshot_date":"2017-06-23"} {"Package":"touch","Version":"0.1-3","Title":"Tools of Utilization and Cost in Healthcare","Description":"Tools of utilization and cost in healthcare is an R implementation\n of the software tools developed in the H-CUP (Healthcare Cost and\n Utilization Project) \n \n at AHRQ (Agency for Healthcare Research and Quality) \n . 
It currently contains functions to map ICD9 code \n to AHRQ comorbidity measures.","Published":"2016-12-17","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"tourr","Version":"0.5.4","Title":"Implement Tour Methods in R Code","Description":"Implements geodesic interpolation and basis\n generation functions that allow you to create new tour\n methods from R.","Published":"2014-12-05","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"tourrGui","Version":"0.4","Title":"A Tour GUI using gWidgets","Description":"The GUI allows the user to control the tour by checkboxes for\n the variable selection, slider for the speed, and toggle boxes\n for pause.","Published":"2012-06-18","License":"MIT | GPL-2","snapshot_date":"2017-06-23"} {"Package":"toxboot","Version":"0.1.1","Title":"Bootstrap Methods for 'ToxCast' High Throughput Screening Data","Description":"Provides methods to use bootstrapping to quantify uncertainty\n in fitting 'ToxCast' concentration response data. Data is stored in memory,\n written to file, or stored in 'MySQL' or 'MongoDB' databases.","Published":"2016-08-23","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"toxtestD","Version":"2.0","Title":"Experimental design for binary toxicity tests","Description":"Calculates sample size and dose allocation for binary toxicity \n tests, using the Fish Embryo Toxicity Test as example. \n An optimal test design is obtained by running \n (i) spoD (calculate the number of individuals to test under control \n conditions), (ii) setD (estimate the minimal sample size per treatment\n given the user's precision requirements) and (iii) doseD (construct \n an individual dose scheme).","Published":"2014-11-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"TP.idm","Version":"1.2","Title":"Estimation of Transition Probabilities for the Illness-Death\nModel","Description":"Estimation of transition probabilities for the illness-death model. 
Both the Aalen-Johansen estimator for a Markov model and a novel non-Markovian estimator by de Una-Alvarez and Meira-Machado (2015) are included.","Published":"2016-12-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"tpAUC","Version":"2.1.1","Title":"Estimation and Inference of Two-Way pAUC, pAUC and pODC","Description":"Tools for estimating and inferring two-way partial area under receiver operating characteristic curves (two-way pAUC), partial area under receiver operating characteristic curves (pAUC), and partial area under ordinal dominance curves (pODC). Methods includes Mann-Whitney statistic and Jackknife, etc. ","Published":"2017-04-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"tpe","Version":"1.0.1","Title":"Tree preserving embedding","Description":"This package implements the greedy approximation for tree\n preserving embedding.","Published":"2013-03-19","License":"MIT","snapshot_date":"2017-06-23"} {"Package":"TPEA","Version":"3.0.1","Title":"A Novel Topology-Based Pathway Enrichment Analysis Approach","Description":"We described a novel Topology-based pathway enrichment analysis, which integrated the global position of the nodes and the topological property of the pathways in Kyoto Encyclopedia of Genes and Genomes Database.\n We also provide some functions to obtain the latest information about pathways to finish pathway enrichment analysis using this method. 
","Published":"2017-06-14","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"TPmsm","Version":"1.2.1","Title":"Estimation of Transition Probabilities in Multistate Models","Description":"Estimation of transition probabilities for the\n illness-death model and or the three-state progressive model.","Published":"2015-11-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"tpr","Version":"0.3-1","Title":"Temporal Process Regression","Description":"Regression models for temporal process responses with\n time-varying coefficient.","Published":"2010-04-12","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"TR8","Version":"0.9.16","Title":"A Tool for Downloading Functional Traits Data for Plant Species","Description":"Plant ecologists often need to collect \"traits\" data\n about plant species which are often scattered among various\n databases: TR8 contains a set of tools which take care of\n automatically retrieving some of those functional traits data\n for plant species from publicly available databases (Biolflor,\n The Ecological Flora of the British Isles, LEDA traitbase, Ellenberg\n values for Italian Flora, Mycorrhizal intensity databases, Catminat, BROT,\n PLANTS, Jepson Flora Project).\n The TR8 name, inspired by \"car plates\" jokes, was chosen since\n it both reminds of the main object of the package and is\n extremely short to type.","Published":"2017-03-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"tracer","Version":"1.0.0","Title":"Slick Call Stacks","Description":"Better looking call stacks after an error.","Published":"2017-01-28","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"tracheideR","Version":"0.1.1","Title":"Standardize Tracheidograms","Description":"Contains functions to standardize tracheid profiles\n using the traditional method (Vaganov) and a new method to standardize\n tracheidograms based on the relative position of tracheids within tree 
rings.","Published":"2015-11-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"track","Version":"1.1.9","Title":"Store Objects on Disk Automatically","Description":"Automatically stores objects in files on disk\n so that files are rewritten when objects are changed, and\n so that objects are accessible but do not occupy memory\n until they are accessed. Keeps track of times when objects\n are created and modified, and caches some basic\n characteristics of objects to allow for fast summaries of\n objects. Also provides a command history mechanism that\n saves the last command to a history file after each\n command completes.","Published":"2016-07-23","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"trackdem","Version":"0.1","Title":"Particle Tracking and Demography","Description":"Obtain population density and body size structure, using video material or image sequences as input. Functions assist in the creation of image sequences from videos, background detection and subtraction, particle identification and tracking. An artificial neural network can be trained for noise filtering. The goal is to supply accurate estimates of population size, structure and/or individual behavior, for use in evolutionary and ecological studies. ","Published":"2017-04-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"trackeR","Version":"0.0.5","Title":"Infrastructure for Running and Cycling Data from GPS-Enabled\nTracking Devices","Description":"The aim of this package is to provide infrastructure for handling running and cycling\n data from GPS-enabled tracking devices. After extraction and appropriate\n manipulation of the training or competition attributes, the data are placed\n into session-based and unit-aware data objects of class trackeRdata (S3 class). 
The\n information in the resulting data objects can then be visualised, summarised,\n and analysed through corresponding flexible and extensible methods.","Published":"2017-01-12","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"TrackReconstruction","Version":"1.1","Title":"Reconstruct animal tracks from magnetometer, accelerometer,\ndepth and optional speed data","Description":"Reconstructs animal tracks from magnetometer, accelerometer, depth and optional speed data. Designed primarily using data from Wildlife Computers Daily Diary tags deployed on northern fur seals.","Published":"2014-04-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"tractor.base","Version":"3.1.0","Title":"Read, Manipulate and Visualise Magnetic Resonance Images","Description":"Functions for working with magnetic resonance images. Analyze,\n NIfTI-1, NIfTI-2 and MGH format images can be read and written; DICOM files\n can only be read.","Published":"2017-06-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"TRADER","Version":"1.2-3","Title":"Tree Ring Analysis of Disturbance Events in R","Description":"Tree Ring Analysis of Disturbance Events in R (TRADER) package provides only one way for disturbance reconstruction from tree-ring data.","Published":"2017-01-13","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"Trading","Version":"1.1","Title":"Trades, Curves, Rating Tables, Add-on Tables, CSAs","Description":"Contains trades from the five major assets classes and also\n functionality to use pricing curves, rating tables, CSAs and add-on tables. The\n implementation follows an object oriented logic whereby each trade inherits from\n more abstract classes while also the curves/tables are objects. 
There is a lot\n of functionality focusing on the counterparty credit risk calculations however\n the package can be used for trading applications in general.","Published":"2016-11-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"traitr","Version":"0.14","Title":"An interface for creating GUIs modeled in part after traits UI\nmodule for python","Description":"An interface for creating GUIs modeled in part after the\n traits UI module for python. The basic design takes advantage of\n the model-view-controller design pattern. The interface makes basic\n dialogs quite easy to produce.","Published":"2014-09-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"traits","Version":"0.2.0","Title":"Species Trait Data from Around the Web","Description":"Species trait data from many different sources, including\n sequence data from 'NCBI', plant trait data from 'BETYdb', invasive species\n data from the Global Invasive Species Database and 'EOL', 'Traitbank' data\n from 'EOL', Coral traits data from http://coraltraits.org, 'nativity' status\n ('Flora Europaea' or 'ITIS'), and 'Birdlife' International.","Published":"2016-03-18","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"Traitspace","Version":"1.1","Title":"A Predictive Model for Trait Based Community Assembly of Plant\nSpecies","Description":"Implements a predictive model of community assembly called 'Traitspace' (Laughlin et al. 2012, Ecology Letters). Traitspace is a hierarchical Bayesian model that translates the theory of trait-based environmental filtering into a statistical model that incorporates intraspecific trait variation to predict the relative abundances and the distributions of species. \n\tThe package includes functions to plot the predicted and the observed values. 
It also includes functions to compare the predicted values against the observed values using a variety of different distance measures and to implement permutation tests to test their statistical significance.","Published":"2015-11-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"traj","Version":"1.2","Title":"Trajectory Analysis","Description":"Implements the three-step procedure proposed by Leffondree et al. (2004) to identify clusters of individual longitudinal trajectories. The procedure involves (1) calculating 24 measures describing the features of the trajectories; (2) using factor analysis to select a subset of the 24 measures and (3) using cluster analysis to identify clusters of trajectories, and classify each individual trajectory in one of the clusters.","Published":"2015-01-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"trajectories","Version":"0.1-4","Title":"Classes and Methods for Trajectory Data","Description":"Classes and methods for trajectory data, with nested classes for individual trips, and collections for different entities. Methods include selection, generalization, aggregation, intersection, and plotting.","Published":"2015-08-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"TraMineR","Version":"2.0-6","Title":"Trajectory Miner: a Toolbox for Exploring and Rendering\nSequences","Description":"Toolbox for the manipulation, description and rendering of sequences, and more generally the mining of sequence data in the field of social sciences. Although the toolbox is primarily intended for analyzing state or event sequences that describe life courses such as family formation histories or professional careers, its features also apply to many other kinds of categorical sequence data. It accepts many different sequence representations as input and provides tools for converting sequences from one format to another. 
It offers several functions for describing and rendering sequences, for computing distances between sequences with different metrics (among which optimal matching), original dissimilarity-based analysis tools, and simple functions for extracting the most frequent subsequences and identifying the most discriminating ones among them. A user's guide can be found on the TraMineR web page.","Published":"2017-06-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"TraMineRextras","Version":"0.4.0","Title":"TraMineR Extension","Description":"Collection of ancillary functions and utilities to be used in conjunction with the 'TraMineR' package for sequence data exploration. Most of the functions are in test phase, lack systematic consistency check of the arguments and are subject to changes. Once fully checked, some of the functions of this collection could be included in a next release of 'TraMineR'.","Published":"2017-05-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"TRAMPR","Version":"1.0-8","Title":"'TRFLP' Analysis and Matching Package for R","Description":"Matching terminal restriction fragment length\n polymorphism ('TRFLP') profiles between unknown samples and a\n database of known samples. TRAMPR facilitates analysis of\n many unknown profiles at once, and provides tools for working\n directly with electrophoresis output through to generating\n summaries suitable for community analyses with R's rich set of\n statistical functions. TRAMPR also resolves the issues of\n multiple 'TRFLP' profiles within a species, and shared 'TRFLP'\n profiles across species.","Published":"2016-06-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"transcribeR","Version":"0.0.0","Title":"Automated Transcription of Audio Files Through the HP IDOL API","Description":"Transcribes audio to text with the HP IDOL API. Includes functions to upload files, \n\t retrieve transcriptions, and monitor jobs. 
","Published":"2015-08-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"TransferEntropy","Version":"1.4","Title":"The Transfer Entropy Package","Description":"Estimates the transfer entropy from one time series to another, where each time series consists of continuous random variables. The transfer entropy is an extension of mutual information which takes into account the direction of information flow, under the assumption that the underlying processes can be described by a Markov model. Two estimation methods are provided. The first calculates transfer entropy as the difference of mutual information. Mutual information is estimated using the Kraskov method, which builds on a nearest-neighbor framework (see package references). The second estimation method estimates transfer entropy via a generalized correlation sum.","Published":"2016-04-26","License":"GPL (>= 2) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"translate","Version":"0.1.2","Title":"Bindings for the Google Translate API v2","Description":"Bindings for the Google Translate API v2","Published":"2014-07-16","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"translateR","Version":"1.0","Title":"Bindings for the Google and Microsoft Translation APIs","Description":"translateR provides easy access to the Google and Microsoft APIs. The package is easy to use with the related R package \"stm\" for the estimation of multilingual topic models.","Published":"2014-07-09","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"translateSPSS2R","Version":"1.0.0","Title":"Toolset for Translating SPSS-Syntax to R-Code","Description":"Package with translated commands of SPSS. The usage is oriented\n toward the handling of SPSS syntax. 
The package mainly has two purposes:\n it helps SPSS users change over to R, and it aids migration projects from SPSS to R.","Published":"2015-06-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"translation.ko","Version":"0.0.1.5.2","Title":"R Manuals Literally Translated in Korean","Description":"R version 2.1.0 and later support Korean translations of program messages. Continuous translation efforts are ongoing. The R Documentation files are licensed under the General Public License, version 2 or 3. This means that the pilot project to translate them into Korean has permission to reproduce and translate them. This work is done with GNU 'gettext' utilities. The portable object template is updated on a weekly basis or whenever changes are necessary. Comments and corrections via email to the maintainer are of course most welcome. In order to voluntarily participate in or offer your help with this translation, please contact the maintainer. To check the changes and progress of the Korean translation, please visit .","Published":"2015-07-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"TransModel","Version":"2.1","Title":"Fit Linear Transformation Models for Right Censored Data","Description":"A unified estimation procedure for the analysis of right censored data using linear transformation models.","Published":"2017-05-11","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"TransP","Version":"0.1","Title":"Implementation of Transportation Problem Algorithms","Description":"Implementation of two transportation problem algorithms: \n 1. North West Corner Method \n 2. Minimum Cost Method (Least Cost Method).\n For more technical details about the algorithms, please refer to the URLs below.\n .\n .","Published":"2016-02-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"transport","Version":"0.8-2","Title":"Optimal Transport in Various Forms","Description":"Solve optimal transport problems. 
Compute Wasserstein distances (a.k.a. Kantorovitch, Fortet--Mourier, Mallows, Earth Mover's, or minimal L_p distances), return the corresponding transference plans, and display them graphically. Objects that can be compared include grey-scale images, (weighted) point patterns, and mass vectors.","Published":"2017-03-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"tranSurv","Version":"1.1-4","Title":"Estimating a Survival Distribution in the Presence of Dependent\nLeft Truncation and Right Censoring","Description":"A structural transformation model for a latent, quasi-independent\n truncation time as a function of the observed dependent truncation\n time and the event time, and an unknown dependence parameter. \n The dependence parameter is chosen to minimize the conditional\n Kendall's tau. The marginal distributions for the\n truncation time and the event time are left completely unspecified.","Published":"2017-02-08","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"trapezoid","Version":"2.0-0","Title":"The Trapezoidal Distribution","Description":"The trapezoid package provides dtrapezoid, ptrapezoid,\n qtrapezoid, and rtrapezoid functions for the trapezoidal\n distribution.","Published":"2012-12-28","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"TRD","Version":"1.1","Title":"Transmission Ratio Distortion","Description":"Transmission Ratio Distortion (TRD) is a genetic phenomenon where the two alleles from either parent are not transmitted to the offspring at the expected 1:1 ratio under Mendelian inheritance, leading to spurious signals in genetic association studies. Functions in this package are developed to account for this phenomenon using a loglinear model and the Transmission Disequilibrium Test (TDT). 
Some population information can also be calculated.","Published":"2015-11-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"TreatmentSelection","Version":"2.0.3","Title":"Evaluate Treatment Selection Biomarkers","Description":"A suite of descriptive and inferential methods designed to evaluate one or more biomarkers for their ability to guide patient treatment recommendations. Package includes functions to assess the calibration of risk models; and plot, evaluate, and compare markers.","Published":"2017-02-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"treatSens","Version":"2.1.2","Title":"Sensitivity Analysis for Causal Inference","Description":"Utilities to investigate sensitivity to unmeasured confounding in\n parametric models with either binary or continuous treatment.","Published":"2017-04-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"tree","Version":"1.0-37","Title":"Classification and Regression Trees","Description":"Classification and regression trees.","Published":"2016-01-21","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"treebase","Version":"0.1.4","Title":"Discovery, Access and Manipulation of 'TreeBASE' Phylogenies","Description":"Interface to the API for 'TreeBASE' \n from 'R.' 'TreeBASE' is a repository of user-submitted phylogenetic\n trees (of species, population, or genes) and the data used to create\n them.","Published":"2017-02-06","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"TreeBUGS","Version":"1.1.0","Title":"Hierarchical Multinomial Processing Tree Modeling","Description":"User-friendly analysis of hierarchical multinomial processing tree (MPT) models that are often used in cognitive psychology. Implements the latent-trait MPT approach (Klauer, 2010) and the beta-MPT approach (Smith & Batchelder, 2010) to model heterogeneity of participants. 
MPT models are conveniently specified by an .eqn-file as used by other MPT software and data are provided by a .csv-file or directly in R. Models are either fitted by calling JAGS or by an MPT-tailored Gibbs sampler in C++ (only for nonhierarchical and beta MPT models). Provides tests of heterogeneity and MPT-tailored summaries and plotting functions.","Published":"2017-04-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"treeclim","Version":"2.0.0","Title":"Numerical Calibration of Proxy-Climate Relationships","Description":"Bootstrapped response and correlation functions,\n seasonal correlations and evaluation of reconstruction\n skills for use in dendroclimatology and dendroecology.","Published":"2016-09-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"treeClust","Version":"1.1-6","Title":"Cluster Distances Through Trees","Description":"Create a measure of inter-point dissimilarity useful \n for clustering mixed data, and, optionally, perform the clustering.","Published":"2016-07-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"treecm","Version":"1.2.2","Title":"Centre of Mass Assessment and Consolidation of Trees","Description":"The centre of mass is crucial information for arborists who need to\n consolidate a tree using steel or dynamic cables. Given field-recorded data\n on branchiness of a tree, the package: (i) computes and plots the centre of\n mass of the tree itself, (ii) computes branch slenderness coefficients to\n help the arborist identify potentially dangerous branches, and\n (iii) computes the force acting on a ground plinth and its best position\n relative to the tree centre of mass, should the tree need to be stabilized\n by a steel cable.","Published":"2015-12-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"treeHFM","Version":"1.0.3","Title":"Hidden Factor Graph Models","Description":"Hidden Factor graph models generalise Hidden Markov Models to tree structured data. 
The distinctive feature of 'treeHFM' is that it learns a transition matrix for first order (sequential) and for second order (splitting) events. It can be applied to all discrete and continuous data that is structured as a binary tree. In the case of continuous observations, 'treeHFM' has Gaussian distributions as emissions.","Published":"2016-09-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"treelet","Version":"1.1","Title":"An Adaptive Multi-Scale Basis for High-Dimensional, Sparse and\nUnordered Data","Description":"Treelets provides a novel construction of multi-scale bases that extends\n wavelets to non-smooth signals. It returns a multi-scale orthonormal \n\tbasis, where the final computed basis functions are supported \n\ton nested clusters in a hierarchical tree. Both the tree and\n\tthe basis, which are constructed simultaneously, reflect the \n\tinternal structure of the data.","Published":"2015-02-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"treeman","Version":"1.1","Title":"Phylogenetic Tree Manipulation Class and Methods","Description":"S4 class and methods for intuitive and efficient phylogenetic tree manipulation.","Published":"2017-06-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"treemap","Version":"2.4-2","Title":"Treemap Visualization","Description":"A treemap is a space-filling visualization of hierarchical\n structures. This package offers great flexibility to draw treemaps.","Published":"2017-01-04","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"TreePar","Version":"3.3","Title":"Estimating birth and death rates based on phylogenies","Description":"(i) For a given species phylogeny on present day data which is calibrated to calendar-time, a method for estimating maximum likelihood speciation and extinction processes is provided. The method allows for non-constant rates. Rates may change (1) as a function of time, i.e. 
rate shifts at specified times or mass extinction events (likelihood implemented as LikShifts, optimization as bd.shifts.optim and visualized as bd.shifts.plot) or (2) as a function of the number of species, i.e. density-dependence (likelihood implemented as LikDD and optimization as bd.densdep.optim) or (3) extinction rate may be a function of species age (likelihood implemented as LikAge and optimization as bd.age.optim.matlab). Note that the methods take into account the whole phylogeny, in particular it accounts for the \"pull of the present\" effect. (1-3) can take into account incomplete species sampling, as long as each species has the same probability of being sampled. For a given phylogeny on higher taxa (i.e. all but one species per taxa are missing), where the number of species is known within each higher taxa, speciation and extinction rates can be estimated under model (1) (implemented within LikShifts and bd.shifts.optim with groups !=0). (ii) For a given phylogeny with sequentially sampled tips, e.g. a virus phylogeny, rates can be estimated under a model where rates vary across time using bdsky.stt.optim based on likelihood LikShiftsSTT (extending LikShifts and bd.shifts.optim). Furthermore, rates may vary as a function of host types using LikTypesSTT (multitype branching process extending functions in R package diversitree). This function can furthermore calculate the likelihood under an epidemiological model where infected individuals are first exposed and then infectious.","Published":"2015-01-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"treeperm","Version":"1.6","Title":"Exact and Asymptotic K Sample Permutation Test","Description":"An implementation of permutation tests in R, supporting both exact and asymptotic K sample test of data locations. The p value of exact tests is found using tree algorithms. 
Tree algorithms treat permutations of input data as tree nodes and systematically perform constrained depth-first searches for permutations that fall into the critical region of a test. Pruning of the tree search and optimisations at C level enable exact tests for certain large data sets.","Published":"2015-04-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"treeplyr","Version":"0.1.2","Title":"'dplyr' Functionality for Matched Tree and Data Objects","Description":"Matches phylogenetic trees and trait data, and\n allows simultaneous manipulation of the tree and data using 'dplyr'.","Published":"2016-06-24","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"TreeSim","Version":"2.3","Title":"Simulating Phylogenetic Trees","Description":"Simulation methods for phylogenetic trees where (i) all tips are sampled at one time point or (ii) tips are sampled sequentially through time. (i) For sampling at one time point, simulations are performed under a constant rate birth-death process, conditioned on having a fixed number of final tips (sim.bd.taxa()), or a fixed age (sim.bd.age()), or a fixed age and number of tips (sim.bd.taxa.age()). When conditioning on the number of final tips, the method allows for shifts in rates and mass extinction events during the birth-death process (sim.rateshift.taxa()). The function sim.bd.age() (and sim.rateshift.taxa() without extinction) allows the speciation rate to change in a density-dependent way. The LTT plots of the simulations can be displayed using LTT.plot(), LTT.plot.gen() and LTT.average.root(). TreeSim further samples trees with n final tips from a set of trees generated by the common sampling algorithm stopping when a fixed number m>>n of tips is first reached (sim.gsa.taxa()). This latter method is appropriate for m-tip trees generated under a broad class of models (details in the sim.gsa.taxa() man page). 
For incomplete phylogenies, the missing speciation events can be added through simulations (corsim()). (ii) sim.rateshifts.taxa() is generalized to sim.bdsky.stt() for serially sampled trees, where the trees are conditioned on either the number of sampled tips or the age. Furthermore, for a multitype-branching process with sequential sampling, trees on a fixed number of tips can be simulated using sim.bdtypes.stt.taxa(). This function further allows simulation under epidemiological models with an exposed class. The function sim.genespeciestree() simulates coalescent gene trees within birth-death species trees, and sim.genetree() simulates coalescent gene trees.","Published":"2017-03-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"TreeSimGM","Version":"2.1","Title":"Simulating Phylogenetic Trees under a General Model with or\nwithout Shifts","Description":"Provides a flexible simulation tool for phylogenetic trees under a general model for speciation and extinction. Trees with a user-specified number of extant tips, or a user-specified stem age, are simulated. It is possible to assume any probability distribution for the waiting time until speciation and extinction. Furthermore, the waiting times to speciation / extinction may be scaled in different parts of the tree, meaning we can simulate trees with clade-dependent diversification processes. At a speciation event, one species splits into two. We allow for two different modes at these splits: (i) symmetric, where for every speciation event new waiting times until speciation and extinction are drawn for both daughter lineages; and (ii) asymmetric, where a speciation event results in one species with new waiting times, and another that carries the extinction time and age of its ancestor. 
The symmetric mode can be seen as a vicariant or allopatric process in which divided populations are subject to equal evolutionary forces, while the asymmetric mode can be seen as peripatric speciation in which a mother lineage continues to exist. ","Published":"2017-05-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"treespace","Version":"1.0.0","Title":"Statistical Exploration of Landscapes of Phylogenetic Trees","Description":"Tools for the exploration of distributions of phylogenetic trees.\n This package includes a shiny interface which can be started from R using\n 'treespaceServer()'.","Published":"2017-03-17","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"treethresh","Version":"0.1-9","Title":"Methods for Tree-Based Local Adaptive Thresholding","Description":"An implementation of TreeThresh, a locally adaptive version of EbayesThresh.","Published":"2016-05-25","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"trelliscope","Version":"0.9.7","Title":"Create and Navigate Large Multi-Panel Visual Displays","Description":"An extension of Trellis Display that enables creation,\n organization, and interactive viewing of multi-panel displays created\n against potentially very large data sets. The dynamic viewer tiles\n panels of a display across the screen in a web browser and allows the\n user to interactively page through the panels and sort and filter them\n based on \"cognostic\" metrics computed for each panel. Panels can be\n created using many of R's plotting capabilities, including base R\n graphics, 'lattice', 'ggplot2', and many 'htmlwidgets'. Conditioning is\n handled through the 'datadr' package, which enables 'Trelliscope' displays\n with potentially millions of panels to be created against terabytes of\n data on systems like 'Hadoop'. 
While designed to scale, 'Trelliscope'\n displays can also be very useful for small data sets.","Published":"2016-10-03","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"trelloR","Version":"0.1.0","Title":"R API for Trello","Description":"Provides access to the Trello API ().\n A family of GET functions makes it easy to retrieve cards, labels, members,\n teams and other data from both public and private boards. Server responses\n are formatted upon retrieval. Automated paging allows for large requests\n that exceed the server limit. See for more\n information.","Published":"2016-09-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"trend","Version":"0.2.0","Title":"Non-Parametric Trend Tests and Change-Point Detection","Description":"The analysis of environmental data often requires\n\t the detection of trends and change-points. \n\t This package provides the Mann-Kendall Trend Test,\n seasonal Mann-Kendall Test,\n correlated seasonal Mann-Kendall Test,\n partial Mann-Kendall Trend test,\n\t (Seasonal) Sen's slope, partial correlation trend test and\n\t change-point test after Pettitt.","Published":"2016-05-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"TrendInTrend","Version":"1.0.1","Title":"Odds Ratio Estimation for the Trend in Trend Model","Description":"Estimation of causal odds ratio given trends in exposure prevalence\n and outcome frequencies of stratified data.","Published":"2017-01-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"TriadSim","Version":"0.1.1","Title":"Simulating Triad Genomewide Genotypes","Description":"Simulate genotypes for case-parent triads, case-control, and quantitative trait samples with realistic linkage disequilibrium structure and allele frequency distribution. For studies of epistasis one can simulate models that involve specific SNPs at specific sets of loci, which we will refer to as \"pathways\". 
TriadSim generates genotype data by resampling triad genotypes from existing data. The details of the method are described in the manuscript under preparation \"Simulating Autosomal Genotypes with Realistic Linkage Disequilibrium and a Spiked in Genetic Effect\" Shi, M., Umbach, D.M., Wise A.S., Weinberg, C.R. \t","Published":"2017-05-25","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"TrialSize","Version":"1.3","Title":"R functions in Chapter 3,4,6,7,9,10,11,12,14,15","Description":"Functions and examples in Sample Size Calculation in\n Clinical Research.","Published":"2013-06-03","License":"GPL (>= 2.15.1)","snapshot_date":"2017-06-23"} {"Package":"triangle","Version":"0.10","Title":"Provides the Standard Distribution Functions for the Triangle\nDistribution","Description":"Provides the \"r, q, p, and d\" distribution functions for the triangle distribution.","Published":"2016-01-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"triangulation","Version":"0.5.0","Title":"Determine Position of Observer","Description":"Measuring angles between points in a landscape is much easier\n than measuring distances. When the location of three points is known, the\n position of the observer can be determined based solely on the angles between\n these points as seen by the observer. This task (known as triangulation),\n however, requires onerous calculations - these calculations are automated by this\n package.","Published":"2016-10-29","License":"LGPL","snapshot_date":"2017-06-23"} {"Package":"trib","Version":"1.2.0","Title":"Analysing and Visualizing Tribology Measurements","Description":"Tribology test devices such as pin-on-disk and ball-on-disk machines and profilometers give important data for tribological analysis. trib is built to analyze that data and visualize it in a human-understandable form. 
It has functions for finding the coefficient of friction, plotting friction force vs. distance graphs, calculating wear track volume, and plotting wear tracks.","Published":"2015-08-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"triebeard","Version":"0.3.0","Title":"'Radix' Trees in 'Rcpp'","Description":"'Radix trees', or 'tries', are key-value data structures optimised for efficient lookups, similar in purpose\n to hash tables. 'triebeard' provides an implementation of 'radix trees' for use in R programming and in\n developing packages with 'Rcpp'.","Published":"2016-08-04","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"trifield","Version":"1.1","Title":"Some basic facilities for ternary fields and plots","Description":"The package contains routines to 1) project unity-summed\n triples to unit-square doubles and vice versa, 2) make a grid\n of unity-summed triples paired to doubles, 3) evaluate a\n function over the grid and 4) make simple plots including\n ternary contour plots over a grid of values.","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"TriMatch","Version":"0.9.7","Title":"Propensity Score Matching of Non-Binary Treatments","Description":"Propensity score matching for non-binary treatments.","Published":"2016-02-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"trimcluster","Version":"0.1-2","Title":"Cluster analysis with trimming","Description":"Trimmed k-means clustering.","Published":"2012-10-29","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"trimr","Version":"1.0.1","Title":"An Implementation of Common Response Time Trimming Methods","Description":"Provides various commonly-used response time trimming\n methods, including the recursive / moving-criterion methods reported by\n Van Selst and Jolicoeur (1994). 
By passing raw data files to the trimming functions,\n the package will return trimmed data ready for inferential testing.","Published":"2015-08-28","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"trimTrees","Version":"1.2","Title":"Trimmed opinion pools of trees in a random forest","Description":"Creates point and probability forecasts from the trees in a random forest using a trimmed opinion pool.","Published":"2014-08-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"trioGxE","Version":"0.1-1","Title":"A data smoothing approach to explore and test gene-environment\ninteraction in case-parent trio data","Description":"The package contains functions that 1) estimate\n gene-environment interaction between a SNP and a continuous\n non-genetic attribute by fitting a generalized additive model\n to case-parent trio data, 2) produce graphical displays of\n estimated interaction, 3) perform a permutation test of\n gene-environment interaction; and 4) simulate informative\n case-parent trios.","Published":"2013-04-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"trip","Version":"1.5.0","Title":"Tools for the Analysis of Animal Track Data","Description":"Functions for accessing and manipulating spatial data for animal\n tracking, with straightforward coercion from and to other formats. Filter\n for speed and create time spent maps from animal track data. There are\n coercion methods to convert between 'trip' and 'ltraj' from 'adehabitatLT', \n and between 'trip' and 'psp' and 'ppp' from 'spatstat'. 
","Published":"2016-10-18","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"tripack","Version":"1.3-8","Title":"Triangulation of Irregularly Spaced Data","Description":"A constrained two-dimensional Delaunay triangulation package\n providing both triangulation and generation of Voronoi mosaics of \n irregularly spaced data.","Published":"2016-12-16","License":"ACM | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"tripEstimation","Version":"0.0-44","Title":"Metropolis Sampler and Supporting Functions for Estimating\nAnimal Movement from Archival Tags and Satellite Fixes","Description":"Data handling and estimation functions for animal movement\n estimation from archival or satellite tags. Helper functions are included\n for making image summaries binned by time interval from Markov Chain Monte Carlo\n simulations. ","Published":"2016-01-31","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"TripleR","Version":"1.5.3","Title":"Social Relation Model (SRM) Analyses for Single or Multiple\nGroups","Description":"Social Relation Model (SRM) analyses for single or multiple\n round-robin groups are performed. These analyses are either based on one\n manifest variable, one latent construct measured by two manifest variables,\n two manifest variables and their bivariate relations, or two latent\n constructs each measured by two manifest variables. Within-group t-tests\n for variance components and covariances are provided for single groups.\n For multiple groups two types of significance tests are provided:\n between-groups t-tests (as in SOREMO) and enhanced standard errors based on\n Lashley and Bond (1997) . Handling for missing values is provided.","Published":"2016-07-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"TROM","Version":"1.2","Title":"Transcriptome Overlap Measure","Description":"A new bioinformatic tool for comparing transcriptomes of two biological samples from the same or different species. 
The mapping is conducted based on the overlap of the associated genes of different samples. More examples and detailed explanations are available in the vignette.","Published":"2016-10-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"TropFishR","Version":"1.1.3","Title":"Tropical Fisheries Analysis with R","Description":"Fish stock assessment methods and fisheries models based on the FAO\n Manual \"Introduction to tropical fish stock assessment\" by P. Sparre and\n S.C. Venema . The focus is the\n analysis of length-frequency data and data-poor fisheries.","Published":"2017-04-05","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"tRophicPosition","Version":"0.7.0","Title":"Bayesian Trophic Position Calculation with Stable Isotopes","Description":"Estimates the trophic position of a consumer relative \n to a baseline species. It implements a Bayesian approach which combines an \n interface to the JAGS MCMC library of rjags and stable isotopes. This is\n version 0.7.0, so users are encouraged to test the package and send bugs and comments to \n trophicposition-support@googlegroups.com.","Published":"2017-06-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"tropr","Version":"0.1.2","Title":"TV Tropes Statistics","Description":"Fetch TV Tropes , which collects various\n conventions in creative works, and convert it to a data frame.","Published":"2017-06-13","License":"Apache License 2.0 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"trotter","Version":"0.6","Title":"Pseudo-Vectors Containing All Permutations, Combinations and\nSubsets of Objects Taken from a Vector","Description":"Class definitions and constructors for pseudo-vectors containing\n all permutations, combinations and subsets of objects taken from a vector.\n Simplifies working with structures commonly encountered in combinatorics.","Published":"2014-09-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"TRSbook","Version":"1.0.1","Title":"Functions and 
Datasets to Accompany the Book \"The R Software:\nFundamentals of Programming and Statistical Analysis\"","Description":"Functions and datasets for the reader of the book \"The R Software: Fundamentals of Programming and Statistical Analysis\".","Published":"2014-11-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"trueskill","Version":"0.1","Title":"Implementation of the TrueSkill algorithm in R","Description":"An implementation of the TrueSkill algorithm (Herbrich,\n R., Minka, T. and Graepel, T.) in R; a Bayesian skill rating\n system with inference by approximate message passing on a\n factor graph. Used by Xbox to rank gamers and identify\n appropriate matches.\n http://research.microsoft.com/en-us/projects/trueskill/default.aspx\n Current version allows for one player per team. Will update as\n time permits. Requires R version 3.0 as it is written with\n Reference Classes. URL:\n https://github.com/bhoung/trueskill-in-r Acknowledgements to\n Doug Zongker and Heungsub Lee for their python implementations\n of the algorithm and for the liberal reuse of Doug's code\n comments (@dougz and @sublee on github).","Published":"2013-05-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"TruncatedNormal","Version":"1.0","Title":"Truncated Multivariate Normal","Description":"A collection of functions to deal with the truncated univariate and multivariate normal distributions.","Published":"2015-11-10","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"truncdist","Version":"1.0-2","Title":"Truncated Random Variables","Description":"A collection of tools to evaluate probability density\n functions, cumulative distribution functions, quantile functions\n and random numbers for truncated random variables. Functions are\n also provided to compute the expected value and variance. Nadarajah\n and Kotz (2006) developed most of the functions.
QQ plots can be produced.\n All the probability functions in the stats, stats4 and evd\n packages are automatically available for truncation.","Published":"2016-08-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"truncgof","Version":"0.6-0","Title":"GoF tests allowing for left truncated data","Description":"Goodness-of-fit tests and some adjusted exploratory tools\n allowing for left truncated data.","Published":"2012-12-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"truncnorm","Version":"1.0-7","Title":"Truncated normal distribution","Description":"r/d/p/q functions for the truncated normal distribution.","Published":"2014-01-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"truncreg","Version":"0.2-4","Title":"Truncated Gaussian Regression Models","Description":"Estimation of models for truncated Gaussian variables by maximum likelihood.","Published":"2016-08-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"truncSP","Version":"1.2.2","Title":"Semi-parametric estimators of truncated regression models","Description":"Semi-parametric estimation of truncated linear regression models.","Published":"2014-05-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"trust","Version":"0.1-7","Title":"Trust Region Optimization","Description":"Does local optimization using two derivatives and trust regions.\n Guaranteed to converge to a local minimum of the objective function.","Published":"2015-07-04","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"trustOptim","Version":"0.8.6","Title":"Trust Region Optimization for Nonlinear Functions with Sparse\nHessians","Description":"Trust region algorithm for nonlinear optimization. Efficient when\n the Hessian of the objective function is sparse (i.e., relatively few nonzero\n cross-partial derivatives). See Braun, M.
(2014) .","Published":"2017-04-26","License":"MPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"TSA","Version":"1.01","Title":"Time Series Analysis","Description":"Contains R functions and datasets detailed in the book\n \"Time Series Analysis with Applications in R (second edition)\"\n by Jonathan Cryer and Kung-Sik Chan.","Published":"2012-11-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"tsallisqexp","Version":"0.9-2","Title":"Tsallis q-Exp Distribution","Description":"The Tsallis distribution, also known as the q-exponential family distribution. Provides d, p, q, r distribution functions, as well as fitting and testing functions. Project initiated by Paul Higbie and based on Cosma Shalizi's code.","Published":"2015-07-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"tsbridge","Version":"1.1","Title":"Calculate normalising constants for Bayesian time series models","Description":"The tsbridge package contains a collection of R functions that can be used to estimate normalising constants using the bridge sampler of Meng and Wong (1996). The functions can be applied to calculate posterior model probabilities for a variety of time series Bayesian models, where parameters are estimated using BUGS, and models themselves are created using the tsbugs package.","Published":"2014-05-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"tsBSS","Version":"0.2","Title":"Tools for Blind Source Separation for Time Series","Description":"Different estimates are provided to solve the blind source separation problem for time series with stochastic volatility.","Published":"2016-09-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"tsbugs","Version":"1.2","Title":"Create time series BUGS models","Description":"The tsbugs package contains a collection of R functions\n that can be used to create time series BUGS models of various\n orders.
Included are functions to create BUGS models with non-constant\n variance, such as stochastic volatility models and random variance\n shift models.","Published":"2013-02-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"tsc","Version":"1.0-3","Title":"Likelihood-ratio Tests for Two-Sample Comparisons","Description":"Performs two-sample comparisons using the following exact test procedures: the exact likelihood-ratio test (LRT) for equality of two normal populations proposed in Zhang et al. (2012); the combined test based on the LRT and Shapiro-Wilk test for normality via the Bonferroni correction technique; the newly proposed density-based empirical likelihood (DBEL) ratio test. To calculate p-values of the DBEL procedures, three procedures are used: (a) the traditional Monte Carlo (MC) method implemented in C++, (b) a new interpolation method based on regression techniques to operate with tabulated critical values of the test statistic; (c) a Bayesian type method that uses the tabulated critical values as the prior information and MC generated DBEL-test-statistic's values as data.","Published":"2015-01-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"TSclust","Version":"1.2.3","Title":"Time Series Clustering Utilities","Description":"This package contains a set of measures of dissimilarity between time series to perform time series clustering. Metrics based on raw data, on generating models and on the forecast behavior are implemented.
Some additional utilities related to time series clustering are also provided, such as clustering algorithms and cluster evaluation metrics.","Published":"2014-11-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"TScompare","Version":"2015.4-1","Title":"'TSdbi' Database Comparison","Description":"Utilities for comparing the equality of series on two databases.\n\tComprehensive examples of all the 'TS*' packages are provided in the\n\tvignette Guide.pdf with the 'TSdata' package.","Published":"2015-05-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"TSCS","Version":"0.1.0","Title":"Time Series Cointegrated System","Description":"A set of functions to implement Time Series Cointegrated System (TSCS)\n spatial interpolation and relevant data visualization.","Published":"2017-06-06","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"TSdata","Version":"2016.8-1","Title":"'TSdbi' Illustration","Description":"Illustrates the various 'TSdbi' packages with a vignette using time series \n\tdata from several sources. The vignette also illustrates some simple time series\n\tmanipulation and plotting using packages 'tframe' and 'tfplot'.","Published":"2016-08-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"TSdbi","Version":"2017.4-1","Title":"Time Series Database Interface","Description":"Provides a common interface to time series databases. The\n\tobjective is to define a standard interface so users can retrieve time \n\tseries data from various sources with a simple, common set of \n\tcommands, and so programs can be written to be portable with respect \n\tto the data source. The SQL implementations also provide a database \n\ttable design, so users needing to set up a time series database \n\thave a reasonably complete way to do this easily. The interface \n\tprovides for a variety of options with respect to the representation \n\tof time series in R.
The interface, and the SQL implementations, also\n\thandle vintages of time series data (sometimes called editions or \n\treal-time data). There is also a (not yet well tested) mechanism to\n\thandle multilingual data documentation.\n\tComprehensive examples of all the 'TS*' packages are provided in the\n\tvignette Guide.pdf with the 'TSdata' package.","Published":"2017-05-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"tsdecomp","Version":"0.2","Title":"Decomposition of Time Series Data","Description":"ARIMA-model-based decomposition of quarterly and \n monthly time series data.\n The methodology is developed and described, among others, in \n Burman (1980) and \n Hillmer and Tiao (1982) .","Published":"2017-01-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"tsdisagg2","Version":"0.1.0","Title":"Time Series Disaggregation","Description":"Disaggregates low frequency time series data to higher frequency series. Implements the following methods for temporal disaggregation: Boot, Feibes and Lisman (1967) , Chow and Lin (1971) , Fernandez (1981) and Litterman (1983) .","Published":"2016-11-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"TSdist","Version":"3.4","Title":"Distance Measures for Time Series Data","Description":"A set of commonly used distance measures and some additional functions which, although initially not designed for this purpose, can be used to measure the dissimilarity between time series. These measures can be used to perform clustering, classification or other data mining tasks which require the definition of a distance measure between time series.","Published":"2017-04-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"tsDyn","Version":"0.9-44","Title":"Nonlinear Time Series Models with Regime Switching","Description":"Implements nonlinear autoregressive (AR) time series models. For univariate series, a non-parametric approach is available through additive nonlinear AR.
Parametric modeling and testing for regime switching dynamics are available when the transition is either direct (TAR: threshold AR) or smooth (STAR: smooth transition AR, LSTAR). For multivariate series, one can estimate a range of TVAR or threshold cointegration TVECM models with two or three regimes. Tests can be conducted for TVAR as well as for TVECM (Hansen and Seo 2002 and Seo 2006). ","Published":"2016-05-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"tseries","Version":"0.10-42","Title":"Time Series Analysis and Computational Finance","Description":"Time series analysis and computational finance.","Published":"2017-06-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"tseriesChaos","Version":"0.1-13","Title":"Analysis of nonlinear time series","Description":"Routines for the analysis of nonlinear time series. This\n work is largely inspired by the TISEAN project, by Rainer\n Hegger, Holger Kantz and Thomas Schreiber:\n http://www.mpipks-dresden.mpg.de/~tisean/","Published":"2013-04-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"tseriesEntropy","Version":"0.6-0","Title":"Entropy Based Analysis and Tests for Time Series","Description":"Implements an Entropy measure of dependence based on the Bhattacharya-Hellinger-Matusita distance. Can be used as a (nonlinear) autocorrelation/crosscorrelation function for continuous and categorical time series. The package includes tests for serial dependence and nonlinearity based on it. Some routines have a parallel version that can be used in a multicore/cluster environment.
The package makes use of S4 classes.","Published":"2017-04-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"TSeriesMMA","Version":"0.1.1","Title":"Multiscale Multifractal Analysis of Time Series Data","Description":"Multiscale multifractal analysis (MMA) (Gierałtowski et al.,\n 2012) is a time series analysis method\n designed to describe scaling properties of fluctuations within the signal\n analyzed. The main result of this procedure is the so-called Hurst surface\n h(q,s), which is the dependence of the local Hurst exponent h (fluctuation\n scaling exponent) on the multifractal parameter q and the scale of observation s\n (data window width).","Published":"2017-01-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"tsfa","Version":"2014.10-1","Title":"Time Series Factor Analysis","Description":"Extraction of Factors from Multivariate Time Series. See ?00tsfa-Intro for more details.","Published":"2015-05-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"TSfame","Version":"2015.4-1","Title":"'TSdbi' Extensions for Fame","Description":"A 'fame' interface for 'TSdbi'.\n\tComprehensive examples of all the 'TS*' packages are provided in the\n\tvignette Guide.pdf with the 'TSdata' package.","Published":"2015-04-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"TSGSIS","Version":"0.1","Title":"Two Stage-Grouped Sure Independence Screening","Description":"To provide a high dimensional grouped variable selection approach for detection of whole-genome SNP effects and SNP-SNP interactions, as described in Fang et al.
(2017, under review).","Published":"2017-04-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"TSHRC","Version":"0.1-5","Title":"Two Stage Hazard Rate Comparison","Description":"A two-stage procedure for comparing hazard rate functions,\n which may or may not cross each other.","Published":"2017-03-27","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"tsintermittent","Version":"1.9","Title":"Intermittent Time Series Forecasting","Description":"Functions for analysing and forecasting intermittent demand/slow moving items time series.","Published":"2016-03-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"tsiR","Version":"0.2.0","Title":"An Implementation of the TSIR Model","Description":"An implementation of the time-series Susceptible-Infected-Recovered (TSIR) model using a number of different\n fitting options for infectious disease time series data. The method implemented here is described by Finkenstadt and Grenfell (2000) .","Published":"2017-05-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"TSMining","Version":"1.0","Title":"Mining Univariate and Multivariate Motifs in Time-Series Data","Description":"Implementations of a number of functions used to mine numeric time-series data. It covers the implementation of SAX transformation, univariate motif discovery (based on the random projection method), multivariate motif discovery (based on graph clustering), and several functions used for the ease of visualizing the motifs discovered. The details of SAX transformation can be found in J. Lin, E. Keogh, L. Wei, S. Lonardi, Experiencing SAX: A novel symbolic representation of time series, Data Mining and Knowledge Discovery 15 (2) (2007) 107-144. Details on the univariate motif discovery method implemented can be found in B. Chiu, E. Keogh, S. Lonardi, Probabilistic discovery of time series motifs, ACM SIGKDD, Washington, DC, USA, 2003, pp. 493-498.
Details on the multivariate motif discovery method implemented can be found in A. Vahdatpour, N. Amini, M. Sarrafzadeh, Towards unsupervised activity discovery using multi-dimensional motif detection in time series, IJCAI 2009 21st International Joint Conference on Artificial Intelligence.","Published":"2015-06-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"TSmisc","Version":"2016.8-1","Title":"'TSdbi' Extensions to Wrap Miscellaneous Data Sources","Description":"Methods to \n\tretrieve data from several different sources. These include historical\n\tquote data from 'Yahoo' and 'Oanda', economic data from 'FRED', and\n\t'xls' and 'csv' data from different sources.\n\tComprehensive examples of all the 'TS*' packages are provided in the\n\tvignette Guide.pdf with the 'TSdata' package.","Published":"2016-08-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"TSMN","Version":"1.0.0","Title":"Truncated Scale Mixtures of Normal Distributions","Description":"Returns the first four moments of the SMN distributions (Normal, Student-t, Pearson VII, Slash or Contaminated Normal).","Published":"2017-04-04","License":"GPL (>= 3.0)","snapshot_date":"2017-06-23"} {"Package":"tsModel","Version":"0.6","Title":"Time Series Modeling for Air Pollution and Health","Description":"Tools for specifying time series regression models.","Published":"2013-06-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"TSMySQL","Version":"2015.4-1","Title":"'TSdbi' Extensions for 'MySQL'","Description":"A 'MySQL' interface for 'TSdbi'.\n\tComprehensive examples of all the 'TS*' packages are provided in the\n\tvignette Guide.pdf with the 'TSdata' package.","Published":"2015-04-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"tsna","Version":"0.2.0","Title":"Tools for Temporal Social Network Analysis","Description":"Temporal SNA tools for continuous- and discrete-time longitudinal networks having vertex, edge, and attribute dynamics stored in the
'networkDynamic' format. This work was supported by grant R01HD68395 from the National Institutes of Health.","Published":"2016-03-25","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"tsne","Version":"0.1-3","Title":"T-Distributed Stochastic Neighbor Embedding for R (t-SNE)","Description":"A \"pure R\" implementation of the t-SNE algorithm.","Published":"2016-07-15","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"TSodbc","Version":"2015.4-1","Title":"'TSdbi' Extensions for ODBC","Description":"An ODBC interface for 'TSdbi'.\n\tComprehensive examples of all the 'TS*' packages are provided in the\n\tvignette Guide.pdf with the 'TSdata' package.","Published":"2015-04-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"tsoutliers","Version":"0.6-6","Title":"Detection of Outliers in Time Series","Description":"Detection of outliers in time series following the \n Chen and Liu (1993) procedure. \n Innovational outliers, additive outliers, level shifts, \n temporary changes and seasonal level shifts are considered.","Published":"2017-05-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"TSP","Version":"1.1-5","Title":"Traveling Salesperson Problem (TSP)","Description":"Basic infrastructure and some algorithms for the traveling\n salesperson problem (also traveling salesman problem; TSP).\n The package provides some simple algorithms and\n an interface to the Concorde TSP solver and its implementation of the\n Chained-Lin-Kernighan heuristic.
The code for Concorde\n itself is not included in the package and has to be obtained separately.","Published":"2017-02-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"Tsphere","Version":"1.0","Title":"Transposable Sphering for Large-Scale Inference with Correlated\nData","Description":"Adjusts for correlations among the rows and columns via\n the Transposable Sphering Algorithm when conducting large-scale\n inference on the rows of a data matrix.","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"tsPI","Version":"1.0.1","Title":"Improved Prediction Intervals for ARIMA Processes and Structural\nTime Series","Description":"Prediction intervals for ARIMA and structural time series\n models using an importance sampling approach with uninformative priors for model\n parameters, leading to more accurate coverage probabilities in the frequentist\n sense. Instead of sampling the future observations and hidden states of the\n state space representation of the model, only model parameters are sampled,\n and the method is based on solving the equations corresponding to the conditional\n coverage probability of the prediction intervals. This makes the method relatively\n fast compared to, for example, MCMC methods, and standard errors of prediction\n limits can also be computed straightforwardly.","Published":"2016-03-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"tspmeta","Version":"1.2","Title":"Instance Feature Calculation and Evolutionary Instance\nGeneration for the Traveling Salesman Problem","Description":"Instance feature calculation and evolutionary instance generation\n for the traveling salesman problem. Also contains code to \"morph\" two TSP\n instances into each other.
It also provides the possibility to conveniently run a couple\n of solvers on TSP instances.","Published":"2015-07-08","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"TSPostgreSQL","Version":"2015.4-1","Title":"'TSdbi' Extensions for 'PostgreSQL'","Description":"A 'PostgreSQL' interface for 'TSdbi'.\n\tComprehensive examples of all the 'TS*' packages are provided in the\n\tvignette Guide.pdf with the 'TSdata' package.","Published":"2015-04-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"TSPred","Version":"3.0.2","Title":"Functions for Benchmarking Time Series Prediction","Description":"Functions for time series prediction and accuracy assessment using automatic linear modelling. The generated linear models and their prediction errors can be used for benchmarking other time series prediction methods and for creating a demand for the refinement of such methods. For this purpose, benchmark data from prediction competitions may be used.","Published":"2017-04-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"tsqn","Version":"1.0.0","Title":"Applications of the Qn Estimator to Time Series (Univariate and\nMultivariate)","Description":"Time Series Qn is a package with applications of the Qn estimator of Rousseeuw and Croux (1993) to univariate and multivariate Time Series in time and frequency domains. More specifically, the robust estimation of autocorrelation or autocovariance matrix functions from Ma and Genton (2000, 2001) , and Cotta (2017) are provided. The robust pseudo-periodogram of Molinares et al. (2009) is also given. This package also provides the M-estimator of the long-memory parameter d based on the robustification of the GPH estimator proposed by Reisen et al. (2017) .
","Published":"2017-03-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"TSS.RESTREND","Version":"0.1.02","Title":"Time Series Segmentation of Residual Trends","Description":"\n To perform the Time Series Segmented Residual Trend (TSS-RESTREND) method.\n The full details are available in (Burrell et al. 2016???? To be updated after the paper is published).","Published":"2016-09-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"TSsdmx","Version":"2016.8-1","Title":"'TSdbi' Extension to Connect with 'SDMX'","Description":"Methods to \n\tretrieve data in the Statistical Data and Metadata Exchange ('SDMX') format\n\tfrom several databases. (For example,\n\t'EuroStat', the European Central Bank, the Organisation for Economic \n\tCo-operation and Development, the 'Unesco' Institute for Statistics,\n\tand the International Labor Organization.)\n\tThis is a wrapper for package 'RJSDMX'.\n\tComprehensive examples of all the 'TS*' packages are provided in the\n\tvignette Guide.pdf with the 'TSdata' package.","Published":"2016-08-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"tsSelect","Version":"0.1.8","Title":"Execution of Time Series Models","Description":"Execution of various time series models and choosing the best one\n either by a specific error metric or by picking the best one by majority vote.\n The models are based on the \"forecast\" package, written by Prof. Rob Hyndman.","Published":"2016-10-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"TSsql","Version":"2017.4-1","Title":"Generic SQL Helper Functions for 'TSdbi' SQL Plugins","Description":"Standard SQL query functions used by \n\tSQL plugins packages for the 'TSdbi' interface to time series databases. \n\tIt will mainly be used by other packages rather than directly by end\n\tusers. The one exception is the function 'TSquery' which can be used to\n\tconstruct a time series from a database containing observations over\n\ttime (e.g.
balance statements for multiple years), but where the database\n\tis not specifically designed to store time series (as with other \n\t'TSdbi' SQL plugin packages).\n\tComprehensive examples of all the 'TS*' packages are provided in the\n\tvignette Guide.pdf with the 'TSdata' package.","Published":"2017-05-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"TSSQLite","Version":"2015.4-1","Title":"'TSdbi' Extensions for 'SQLite'","Description":"An 'SQLite' interface for 'TSdbi'.\n\tComprehensive examples of all the 'TS*' packages are provided in the\n\tvignette Guide.pdf with the 'TSdata' package.","Published":"2015-04-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"TSTr","Version":"1.2","Title":"Ternary Search Tree for Auto-Completion and Spell Checking","Description":"A ternary search tree is a type of prefix tree with up to three children and the ability for incremental string search. The package uses this ability for word auto-completion and spell checking. Includes a dataset with the 10001 most frequent English words.","Published":"2015-10-31","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"TSTutorial","Version":"1.2.3","Title":"Fitting and Predicting Time Series Interactive Laboratory","Description":"Interactive laboratory for time series based on the Box-Jenkins methodology.","Published":"2013-12-24","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"tswge","Version":"1.0.0","Title":"Applied Time Series Analysis","Description":"Accompanies the text Applied Time Series Analysis with R, 2nd edition by Woodward, Gray, and Elliott.
It is helpful for data analysis and for time series instruction.","Published":"2016-12-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"tsxtreme","Version":"0.3.1","Title":"Bayesian Modelling of Extremal Dependence in Time Series","Description":"Characterisation of the extremal dependence structure of time series, avoiding pre-processing and filtering as done typically with peaks-over-threshold methods. It uses the conditional approach of Heffernan and Tawn (2004) which is very flexible in terms of extremal and asymptotic dependence structures, and Bayesian methods improve efficiency and allow for deriving measures of uncertainty. For example, the extremal index, related to the size of clusters in time, can be estimated and samples from its posterior distribution obtained.","Published":"2017-03-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"TTAinterfaceTrendAnalysis","Version":"1.5.3","Title":"Temporal Trend Analysis Graphical Interface","Description":"This interface was created to develop a standard procedure \n to analyse temporal trends in the framework of the OSPAR convention.\n The analysis process runs through 5 successive steps: 1) manipulate your data, 2)\n select the parameters you want to analyse, 3) build your regulated \n time series, 4) perform diagnosis and analysis and 5) read the results. \n Statistical analyses call other package functions such as Kendall tests\n or the cusum() function.","Published":"2016-09-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ttbbeer","Version":"1.1.0","Title":"US Beer Statistics from TTB","Description":"U.S. Department of the Treasury, Alcohol and Tobacco Tax and\n Trade Bureau (TTB) collects data and reports on monthly beer\n industry production and operations. This data package includes\n a collection of 10 years (2006 - 2015) worth of data on materials\n used at U.S.
breweries in pounds reported by the Brewer's Report\n of Operations and the Quarterly Brewer's Report of Operations\n forms, ready for data analysis. This package also includes historical\n tax rates on distilled spirits, wine, beer, champagne, and tobacco\n products as individual data sets.","Published":"2016-08-01","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"TTCA","Version":"0.1.1","Title":"Transcript Time Course Analysis","Description":"The analysis of microarray time series promises a deeper insight into the dynamics of the cellular response following stimulation. A common observation in this type of data is that some genes respond with quick, transient dynamics, while other genes change their expression slowly over time. The existing methods for detecting significant expression dynamics often fail when the expression dynamics show a large heterogeneity. Moreover, these methods often cannot cope with irregular and sparse measurements. The method proposed here is specifically designed for the analysis of perturbation responses. It combines different scores to capture fast and transient dynamics as well as slow expression changes, and performs well in the presence of low replicate numbers and irregular sampling times. The results are given in the form of tables including links to figures showing the expression dynamics of the respective transcript. These allow one to quickly recognise the relevance of detection, to identify possible false positives and to discriminate early and late changes in gene expression. An extension of the method allows the analysis of the expression dynamics of functional groups of genes, providing a quick overview of the cellular response. The performance of this package was tested on microarray data derived from lung cancer cells stimulated with epidermal growth factor (EGF). Paper: Albrecht, Marco, et al.
(2017).","Published":"2017-01-29","License":"EUPL","snapshot_date":"2017-06-23"} {"Package":"tth","Version":"4.3-2-1","Title":"TeX to HTML/MathML Translators tth/ttm","Description":"C source code and R wrappers for the tth/ttm TeX to \n HTML/MathML translators.","Published":"2016-04-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"TTmoment","Version":"1.0","Title":"Sampling and Calculating the First and Second Moments for the\nDoubly Truncated Multivariate t Distribution","Description":"Computing the first two moments of the truncated multivariate t (TMVT) distribution under the double truncation. Applying the slice sampling algorithm to generate random variates from the TMVT distribution.","Published":"2015-05-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"TTR","Version":"0.23-1","Title":"Technical Trading Rules","Description":"Functions and data to construct technical trading rules with R.","Published":"2016-03-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"TTS","Version":"1.0","Title":"Master Curve Estimates Corresponding to Time-Temperature\nSuperposition","Description":"Time-Temperature Superposition analysis is often applied to frequency modulated data obtained by Dynamic\n\t\tMechanic Analysis (DMA) and Rheometry in the analytical chemistry and physics\n\t\tareas. These techniques provide estimates of material mechanical properties\n\t\t(such as moduli) at different temperatures in a wider range of time. This\n\t\tpackage provides the Time-Temperature superposition Master Curve at a referred\n\t\ttemperature by three methods: the two most widely used methods, Arrhenius based\n\t\tmethods and WLF, and the newer methodology based on the derivatives procedure.\n\t\tThe Master Curve is smoothed by a B-splines basis.
The package output is composed\n\t\tof plots of experimental data, horizontal and vertical shifts, TTS data, and TTS\n\t\tdata fitted using B-splines with bootstrap confidence intervals.","Published":"2015-09-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ttScreening","Version":"1.5","Title":"Genome-wide DNA methylation sites screening by use of training\nand testing samples","Description":"This package utilizes training and testing samples to filter out uninformative DNA methylation sites. Surrogate variables (SVs) of DNA methylation are included in the filtering process to explain unknown factor effects. ","Published":"2014-11-14","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"tttplot","Version":"1.1.1","Title":"Time to Target Plot","Description":"Implementation of Time to Target plot based on the work \n of Ribeiro and Rosseti (2015) , \n which describes a numerical method that gives the probability that \n an algorithm A finds a solution at least as good as a given \n target value in a smaller computation time than algorithm B.","Published":"2016-03-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ttutils","Version":"1.0-1","Title":"Utility functions","Description":"Contains some auxiliary functions.","Published":"2010-06-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ttwa","Version":"0.8.5.1","Title":"Travel To Work Area","Description":"This package makes Travel To Work Areas from a commuting flow data frame.","Published":"2013-09-08","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"tuber","Version":"0.9.0","Title":"Client for the YouTube API","Description":"Get comments posted on YouTube videos, information on how many \n times a video has been liked, search for videos with particular content, and \n much more. You can also scrape captions from a few videos. 
To learn more about\n the YouTube API, see .","Published":"2017-05-28","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"tubern","Version":"0.1.0","Title":"R Client for the YouTube Analytics and Reporting API","Description":"Get statistics and reports from YouTube. To learn more about\n the YouTube Analytics and Reporting API, see .","Published":"2017-04-13","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"tuckerR.mmgg","Version":"1.5.0","Title":"Three-Mode Principal Components Analysis","Description":"Performs Three-Mode Principal Components Analysis,\n which carries out Tucker Models.","Published":"2017-02-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"tufte","Version":"0.2","Title":"Tufte's Styles for R Markdown Documents","Description":"Provides R Markdown output formats to use Tufte styles for PDF and HTML output.","Published":"2016-02-07","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"tufterhandout","Version":"1.2.1","Title":"Tufte-style html document format for rmarkdown","Description":"Custom template and output formats for use with rmarkdown. 
Produce\n Edward Tufte-style handouts in html formats with full support for rmarkdown\n features.","Published":"2015-01-27","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"TukeyC","Version":"1.1-5","Title":"Conventional Tukey Test","Description":"Perform the conventional Tukey test from aov and aovlist\n objects.","Published":"2014-08-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"tukeytrend","Version":"0.4","Title":"Tukeys Trend Test via Multiple Marginal Models","Description":"Provides wrapper functions to the multiple marginal model function mmm() of package 'multcomp' to implement the trend test of Tukey, Ciminera and Heyse (1985) for general parametric models.","Published":"2017-06-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"tumblR","Version":"1.1","Title":"Access to Tumblr v2 API","Description":"Provides an R-interface to the Tumblr web API (see Tumblr v2 API on https://www.tumblr.com/docs/en/api/v2). Tumblr is a microblogging platform and social networking website (https://www.tumblr.com).","Published":"2015-03-25","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"tumgr","Version":"0.0.4","Title":"Tumor Growth Rate Analysis","Description":"A tool to obtain tumor growth rates from clinical trial patient data. 
Output includes individual and summary data for tumor growth rate estimates as well as optional plots of the observed and predicted tumor quantity over time.","Published":"2016-02-04","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"TunePareto","Version":"2.4","Title":"Multi-objective parameter tuning for classifiers","Description":"Generic methods for parameter tuning of classification algorithms using multiple scoring functions.","Published":"2014-03-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"tuneR","Version":"1.3.2","Title":"Analysis of Music and Speech","Description":"Analyze music and speech, extract features like MFCCs, handle wave files and their representation in various ways, read mp3, read midi, perform steps of a transcription, ...\n Also contains functions ported from the 'rastamat' 'Matlab' package.","Published":"2017-04-10","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"tuple","Version":"0.4-02","Title":"Find every match, or orphan, duplicate, triplicate, or other\nreplicated values","Description":"Functions to find all matches or non-matches, orphans, and\n duplicate or other replicated elements.","Published":"2014-10-31","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"turboEM","Version":"2014.8-1","Title":"A Suite of Convergence Acceleration Schemes for EM, MM and other\nfixed-point algorithms","Description":"Algorithms for accelerating the convergence of slow,\n monotone sequences from smooth contraction mappings such as the\n EM and MM algorithms. It can be used to accelerate any smooth,\n linearly convergent acceleration scheme. 
A tutorial style\n introduction to this package is available in a vignette on the\n CRAN download page or, when the package is loaded in an R\n session, with vignette(\"turboEM\").","Published":"2014-08-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"turfR","Version":"0.8-7","Title":"TURF Analysis for R","Description":"Package for analyzing TURF (Total Unduplicated Reach and Frequency) data in R. \n No looping in TURF algorithm results in fast processing times. Allows for individual-level \n\tweights, depth specification, and user-truncated combination set(s). Allows user to substitute \n\tMonte Carlo simulated combination set(s) after set(s) exceed a user-specified limit.","Published":"2014-12-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"turner","Version":"0.1.7","Title":"Turn vectors and lists of vectors into indexed structures","Description":"Package designed for working with vectors and lists of vectors,\n mainly for turning them into other indexed data structures.","Published":"2014-02-17","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"TurtleGraphics","Version":"1.0-5","Title":"Turtle graphics in R","Description":"An implementation of turtle graphics\n (http://en.wikipedia.org/wiki/Turtle_graphics) in R.\n Turtle graphics comes from Papert's language Logo and has\n been used to teach concepts of computer programming.","Published":"2016-12-05","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"tutorial","Version":"0.4.3","Title":"Convert R Markdown Files to DataCamp Light HTML Files","Description":"DataCamp Light () is a light-weight implementation of the DataCamp UI,\n which allows you to embed interactive exercises inside HTML documents. The tutorial package makes it easy to create these\n HTML files from R Markdown files. 
An extension to knitr, tutorial detects appropriately formatted code chunks and replaces them\n with DataCamp Light readable chunks in the resulting HTML file.","Published":"2016-10-14","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"TUWmodel","Version":"0.1-8","Title":"Lumped Hydrological Model for Education Purposes","Description":"The model, developed at the Vienna University of Technology, is a lumped conceptual rainfall-runoff model, following the structure of the HBV model. \n The model runs on a daily or shorter time step and consists of a snow routine, a soil moisture routine and a flow routing routine. \n See Parajka, J., R. Merz, G. Bloeschl (2007) Uncertainty and multiple objective calibration in regional water balance modelling: case study in 320 Austrian catchments, Hydrological Processes, 21, 435-446. ","Published":"2016-09-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"tvd","Version":"0.1.0","Title":"Total Variation Denoising","Description":"Total Variation Denoising is a regularized denoising method which\n effectively removes noise from piecewise constant signals whilst preserving\n edges. This package contains a C++ implementation of Condat's very fast 1D\n squared error loss TVD algorithm. Additional methods and loss functions may\n be added in future versions.","Published":"2014-08-13","License":"EPL (>= 1.0)","snapshot_date":"2017-06-23"} {"Package":"tvm","Version":"0.3.0","Title":"Time Value of Money Functions","Description":"Functions for managing cashflows and interest rate curves.","Published":"2015-08-21","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"twang","Version":"1.4-9.5","Title":"Toolkit for Weighting and Analysis of Nonequivalent Groups","Description":"Provides functions for propensity score\n estimating and weighting, nonresponse weighting, and diagnosis\n of the weights. This package was originally developed by Drs.\n Ridgeway, McCaffrey, and Morral. 
Burgette, Griffin and\n McCaffrey updated the package during 2011-2016.","Published":"2016-04-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"tweedie","Version":"2.2.5","Title":"Tweedie Exponential Family Models","Description":"Maximum likelihood computations for Tweedie families.","Published":"2016-12-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"tweenr","Version":"0.1.5","Title":"Interpolate Data for Smooth Animations","Description":"In order to create smooth animation between states of data,\n tweening is necessary. This package provides a range of functions for\n creating tweened data that plugs right into functions such as gg_animate()\n from the 'gganimate' package. Furthermore, it adds a number of vectorized\n interpolators for common R data types such as numeric, date and colour.","Published":"2016-10-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"tweet2r","Version":"1.0","Title":"Twitter Collector for R and Export to 'SQLite', 'postGIS' and\n'GIS' Format","Description":"This is an improved implementation of the package 'StreamR' to\n capture tweets and store them in R, SQLite, a 'postGIS' database or GIS format. The package\n provides a description of the harvested data and performs space-time exploratory analysis.","Published":"2016-06-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"twiddler","Version":"0.5-0","Title":"Interactive manipulation of R expressions","Description":"Twiddler is an interactive tool that automatically creates\n a Tcl/Tk GUI for manipulating variables in any R expression.\n See the documentation of the function twiddle to get started.","Published":"2013-06-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"twilio","Version":"0.1.0","Title":"An Interface to the Twilio API for R","Description":"The Twilio web service provides an API for computer programs\n to interact with telephony. 
The included functions wrap the SMS and MMS \n portions of Twilio's API, allowing users to send and receive text messages\n from R. See for more information.","Published":"2017-03-06","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"twitteR","Version":"1.1.9","Title":"R Based Twitter Client","Description":"Provides an interface to the Twitter web API.","Published":"2015-07-29","License":"Artistic-2.0","snapshot_date":"2017-06-23"} {"Package":"TwoCop","Version":"1.0","Title":"Nonparametric test of equality between two copulas","Description":"This package implements the nonparametric test of equality\n between two copulas proposed by Remillard and Scaillet in their\n 2009 JMVA paper.","Published":"2012-10-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"TwoPhaseInd","Version":"1.1.1","Title":"Estimate Gene-Treatment Interaction Exploiting Randomization","Description":"Estimation of gene-treatment interactions in randomized clinical trials exploiting gene-treatment independence.","Published":"2016-03-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"twoStageGwasPower","Version":"0.99.0","Title":"Compute thresholds and power for two-stage gwas","Description":"This program computes thresholds and power for a two-stage\n genome-wide association study. 
It follows the methods described\n in Skol AD, Scott LJ, Abecasis GR, Boehnke M (2006) Nature\n Genetics doi:10.1038/ng1706 and in the "CaTS" computer program\n (http://www.sph.umich.edu/csg/abecasis/CaTS/)","Published":"2012-09-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"twostageTE","Version":"1.3","Title":"Two-Stage Threshold Estimation","Description":"Implements a variety of non-parametric methods for computing one-stage and two-stage confidence intervals, as well as point estimates of threshold values.","Published":"2015-09-27","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"TwoStepCLogit","Version":"1.2.5","Title":"Conditional Logistic Regression: A Two-Step Estimation Method","Description":"Conditional logistic regression with longitudinal follow up and\n individual-level random coefficients: A stable and efficient\n two-step estimation method.","Published":"2016-03-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"txtplot","Version":"1.0-3","Title":"Text based plots","Description":"Provides functions to produce rudimentary ascii graphics\n directly in the terminal window. Provides a basic plotting\n function (and equivalents of curve, density, acf and barplot)\n as well as a boxplot function.","Published":"2012-07-25","License":"LGPL","snapshot_date":"2017-06-23"} {"Package":"types","Version":"1.0.0","Title":"Type Annotations","Description":"Provides a simple type annotation for R that is usable in scripts,\n in the R console and in packages. 
It is intended as a convention to allow other\n packages to use the type information to provide error checking,\n automatic documentation or optimizations.","Published":"2016-10-15","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"uaparserjs","Version":"0.1.0","Title":"Parse Browser 'User-Agent' Strings into Data Frames","Description":"Despite there being a section in RFC 7231\n defining a suggested\n structure for 'User-Agent' headers, this data is notoriously difficult\n to parse consistently. A function is provided that will take in user agent\n strings and return structured R objects. This is a 'V8'-backed package\n based on the 'ua-parser' project .","Published":"2016-08-05","License":"Apache License","snapshot_date":"2017-06-23"} {"Package":"UBCRM","Version":"1.0.1","Title":"Functions to Simulate and Conduct Dose-Escalation Phase I\nStudies","Description":"Two Phase I designs are implemented in the package: the classical 3+3 and the Continual Reassessment Method. Simulation tools are also available to estimate the operating characteristics of the methods with several user-dependent options.","Published":"2015-02-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ubeR","Version":"0.1.4","Title":"Interface to the Uber API","Description":"The Uber API provides a programmatic way to interact with the Uber\n international online transportation network. This package enables access to\n the Uber API from within R. Specifically it is possible to: extract information\n about a user's account, find out about nearby Uber vehicles, get estimates for\n rides, book or cancel a ride. 
See for more\n information.","Published":"2017-02-16","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"UBL","Version":"0.0.5","Title":"An Implementation of Re-Sampling Approaches to Utility-Based\nLearning for Both Classification and Regression Tasks","Description":"Provides a set of functions that can be used to obtain better predictive performance on cost-sensitive and cost/benefits tasks (for both regression and classification). This includes re-sampling approaches that modify the original data set biasing it towards the user preferences.","Published":"2016-07-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ucbthesis","Version":"1.0","Title":"UC Berkeley graduate division thesis template","Description":"This package contains latex, knitr and R Markdown templates that\n adhere to the UC Berkeley Graduate Division's thesis guidelines. The\n templates are located in the inst/ directory.","Published":"2014-05-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ucminf","Version":"1.1-4","Title":"General-Purpose Unconstrained Non-Linear Optimization","Description":"An algorithm for general-purpose unconstrained non-linear optimization.\n The algorithm is of quasi-Newton type with BFGS updating of the inverse\n Hessian and soft line search with a trust region type monitoring of the\n input to the line search algorithm. 
The interface of 'ucminf' is\n designed for easy interchange with 'optim'.","Published":"2016-08-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"UCR.ColumnNames","Version":"0.1.0","Title":"Fixes Column Names for Uniform Crime Report \"Offenses Known and\nClearance by Arrest\" Datasets","Description":"Changes the column names of the inputted dataset to the correct\n names from the Uniform Crime Report codebook for the \"Offenses Known and\n Clearance by Arrest\" datasets from 1998-2014.","Published":"2016-09-19","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"udapi","Version":"0.1.0","Title":"Urban Dictionary API Client","Description":"A client for the Urban Dictionary API.","Published":"2016-07-20","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"udunits2","Version":"0.13","Title":"Udunits-2 Bindings for R","Description":"Provides simple bindings to Unidata's udunits library.","Published":"2016-11-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"uHMM","Version":"1.0","Title":"Construct an Unsupervised Hidden Markov Model","Description":"Construct a Hidden Markov Model with states learnt by unsupervised classification.","Published":"2016-04-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"uiucthemes","Version":"0.1.1","Title":"'R' 'Markdown' Themes for 'UIUC' Documents and Presentations","Description":"A set of custom 'R' 'Markdown' templates for documents and\n presentations with the University of Illinois at Urbana-Champaign (UIUC)\n color scheme and identity standards.","Published":"2016-10-27","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ukbabynames","Version":"0.1.1","Title":"UK Baby Names Data","Description":"Full listing of UK baby names occurring more than three times per year between 1996 and 2015, and rankings of baby name popularity by decade from 1904 to 1994.","Published":"2017-06-20","License":"CC0","snapshot_date":"2017-06-23"} 
{"Package":"ukds","Version":"0.1.0","Title":"Reproducible Data Retrieval from the UK Data Service","Description":"Reproducible, programmatic retrieval of datasets from the \n UK Data Service . The UKDS is \"the\n UK's largest collection of social, economic and population data resources,\"\n but researchers taking advantage of these datasets are caught in a bind. \n The UKDS terms and conditions sharply limit redistribution of downloaded\n datasets, but to ensure that one's work can be reproduced, assessed, and\n built upon by others, one must provide access to the raw data one employed.\n The ukds package cuts this knot by providing programmatic, reproducible\n access to the UKDS datasets from within R. ","Published":"2017-03-22","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ukgasapi","Version":"0.13","Title":"API for UK Gas Market Information","Description":"Allows users to access live UK gas market information via National Grid's API.","Published":"2015-10-26","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Ultimixt","Version":"2.1","Title":"Bayesian Analysis of Location-Scale Mixture Models using a\nWeakly Informative Prior","Description":"A generic reference Bayesian analysis of unidimensional mixture distributions obtained by a location-scale parameterisation of the model is implemented. The included functions simulate and summarize posterior samples for location-scale mixture models using a weakly informative prior. There is no need to define priors for the location-scale parameters, except for two hyperparameters, which are associated with a Dirichlet prior for the weights and a simplex.","Published":"2017-03-09","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"ump","Version":"0.5-8","Title":"Uniformly Most Powerful Tests","Description":"Does uniformly most powerful (UMP) and uniformly most\n powerful unbiased (UMPU) tests. At present the only distribution implemented\n is the binomial distribution. 
Also does fuzzy tests and confidence intervals\n (following Geyer and Meeden, 2005, )\n for the binomial\n distribution (one-tailed procedures based on UMP test and two-tailed\n procedures based on UMPU test).","Published":"2017-03-27","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"umx","Version":"1.7.5","Title":"Structural Equation Modelling in R with 'OpenMx'","Description":"Quickly create, run, and report structural equation and twin models.\n See '?umx' to learn more.","Published":"2017-04-07","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"unbalanced","Version":"2.0","Title":"Racing for Unbalanced Methods Selection","Description":"A dataset is said to be unbalanced when the class of interest (minority class) is much rarer than normal behaviour (majority class). The cost of missing a minority class is typically much higher than that of missing a majority class. Most learning systems are not prepared to cope with unbalanced data and several techniques have been proposed. 
This package implements some of the most well-known techniques and proposes a racing algorithm to adaptively select the most appropriate strategy for a given unbalanced task.","Published":"2015-06-26","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"unbalhaar","Version":"2.0","Title":"Function estimation via Unbalanced Haar wavelets","Description":"The package implements top-down and bottom-up algorithms\n for nonparametric function estimation in Gaussian noise using\n Unbalanced Haar wavelets.","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"UncerIn2","Version":"2.0","Title":"Implements Models of Uncertainty into the Interpolation\nFunctions","Description":"Provides basic (random) data, grids, 6 models of uncertainty, 3 automatic interpolations (idw, spline, kriging), a variogram and basic data visualization.","Published":"2015-11-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"UncertainInterval","Version":"0.3.0","Title":"Uncertain Area Methods for Cut-Point Determination in Tests","Description":"Functions for the determination of an Uncertain Interval, i.e., a range\n of test scores that are inconclusive and do not allow a diagnosis, other than\n 'Uncertain'.","Published":"2017-01-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"UNCLES","Version":"2.0","Title":"Unification of Clustering Results from Multiple Datasets using\nExternal Specifications","Description":"Consensus clustering by the unification of clustering results from multiple datasets using external specifications.","Published":"2016-06-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"uncmbb","Version":"0.1.0","Title":"UNC Men's Basketball Match Results Since 1949-1950 Season","Description":"Dataset contains select attributes for each match result since the 1949-1950 season for the UNC men's basketball team.","Published":"2017-05-18","License":"CC0","snapshot_date":"2017-06-23"} 
{"Package":"UNF","Version":"2.0.6","Title":"Tools for Creating Universal Numeric Fingerprints for Data","Description":"Computes a universal numeric fingerprint (UNF) for an R data\n object. UNF is a cryptographic hash or signature that can be used to uniquely\n identify (a version of) a rectangular dataset, or a subset thereof. UNF can\n be used, in tandem with a DOI, to form a persistent citation to a versioned\n dataset.","Published":"2017-06-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"unfoldr","Version":"0.6","Title":"Stereological Unfolding for Spheroidal Particles","Description":"Stereological unfolding as implemented in this package consists in\n the estimation of the joint size-shape-orientation distribution of spheroidal\n shaped particles based on the same measured quantities of corresponding planar\n vertical section profiles. A single trivariate discretized version of the (stereological)\n integral equation in the case of prolate and oblate spheroids is solved\n numerically by the EM algorithm. The estimation of diameter distribution of\n spheres from planar sections (Wicksell's corpuscle problem) is also implemented.\n Further, the package provides routines for the simulation of a Poisson germ-\n grain process with either spheroids, spherocylinders or spheres as grains together\n with functions for planar sections.\n For the purpose of exact simulation a bivariate size-shape distribution is implemented.","Published":"2016-10-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ungeneanno","Version":"0.1.6","Title":"Collate Gene Annotation Data from Uniprot and NIH Gene Databases","Description":"Taking a list of genes, the package collates together the summary\n information about those genes from the publicly available resources at Uniprot\n and NCBI. 
Additionally, the package is able to collate publication information\n from a search of the NCBI Pubmed database.","Published":"2016-09-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"uniah","Version":"1.0","Title":"Unimodal Additive Hazards Model","Description":"Nonparametric estimation of a unimodal or U-shaped covariate effect under an additive hazards model.","Published":"2016-12-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Unicode","Version":"9.0.0-1","Title":"Unicode Data and Utilities","Description":"Data from Unicode 9.0.0 and related utilities.","Published":"2017-01-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"uniCox","Version":"1.0","Title":"Univariate shrinkage prediction in the Cox model","Description":"Univariate shrinkage prediction for survival analysis\n in the Cox model. Especially useful for high-dimensional data,\n including microarray data.","Published":"2009-04-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"uniftest","Version":"1.1","Title":"Tests for Uniformity","Description":"Goodness-of-fit tests for the uniform distribution.","Published":"2015-05-01","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"uniqtag","Version":"1.0","Title":"Abbreviate Strings to Short, Unique Identifiers","Description":"For each string in a set of strings, determine a unique tag that is a substring of fixed size k unique to that string, if it has one. If no such unique substring exists, the least frequent substring is used. If multiple unique substrings exist, the lexicographically smallest substring is used. 
This lexicographically smallest substring of size k is called the \"UniqTag\" of that string.","Published":"2015-04-29","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"uniqueAtomMat","Version":"0.1-2","Title":"Finding Unique or Duplicated Rows or Columns for Atomic Matrices","Description":"An alternative implementation and extension (grpDuplicated) of base::duplicated.matrix, base::anyDuplicated.matrix and base::unique.matrix for matrices of atomic mode, avoiding the time consuming collapse of the matrix into a character vector. ","Published":"2016-03-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"uniReg","Version":"1.1","Title":"Unimodal Penalized Spline Regression using B-Splines","Description":"Univariate spline regression. It is possible to add the shape constraint of unimodality and predefined or\n\tself-defined penalties on the B-spline coefficients.","Published":"2016-06-06","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"unitedR","Version":"0.2","Title":"Assessment and Evaluation of Formations in United","Description":"United is a software tool which can be downloaded at the following\n website . In general, it is\n a virtual manager game for football teams. This package contains helpful\n functions for determining an optimal formation for a virtual match in\n United. E.g. knowing that the opponent has a strong defence, it is\n advisable to beat him in the midfield. Furthermore, this package contains\n functions for computing the optimal usage of hardness in a game.","Published":"2015-12-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"unitizer","Version":"1.4.3","Title":"Interactive R Unit Tests","Description":"Simplifies regression tests by comparing objects produced by test\n code with earlier versions of those same objects. If objects are unchanged\n the tests pass, otherwise execution stops with error details. 
If in\n interactive mode, tests can be reviewed through the provided interactive\n environment.","Published":"2017-06-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"units","Version":"0.4-5","Title":"Measurement Units for R Vectors","Description":"Support for measurement units in R vectors; automatic propagation,\n conversion, derivation and simplification of units; raising errors in case\n of unit incompatibility. Compatible with the difftime class. Uses the UNIDATA\n udunits library and unit database for unit conversion and compatibility\n checking.","Published":"2017-06-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"unittest","Version":"1.2-0","Title":"TAP-Compliant Unit Testing","Description":"\n Concise TAP (http://testanything.org/) compliant unit testing package. Authored tests can be run using CMD check with minimal implementation overhead.","Published":"2015-02-13","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"univOutl","Version":"0.1-3","Title":"Detection of Univariate Outliers","Description":"Well-known outlier detection techniques in the univariate case. Methods to deal with skewed distributions are included too. 
The Hidiroglou-Berthelot (1986) method to search for outliers in ratios of historical data is implemented as well.","Published":"2017-06-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"UnivRNG","Version":"1.0","Title":"Univariate Pseudo-Random Number Generation","Description":"Pseudo-random number generation of 17 univariate distributions.","Published":"2017-05-23","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"unix","Version":"1.3","Title":"Unix System Utilities","Description":"Bindings to system utilities found in most Unix systems such as\n POSIX functions which are not part of the Standard C Library.","Published":"2017-04-21","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"unmarked","Version":"0.12-2","Title":"Models for Data from Unmarked Animals","Description":"Fits hierarchical models of animal abundance and occurrence to data collected using survey methods such as point counts, site occupancy sampling, distance sampling, removal sampling, and double observer sampling. Parameters governing the state and observation processes can be modeled as functions of covariates.","Published":"2017-05-15","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"unpivotr","Version":"0.1.1","Title":"Unpivot Complex and Irregular Data Layouts","Description":"Tools for converting data from complex or irregular layouts to a\n columnar structure. For example, tables with multilevel column or row\n headers, or spreadsheets. Header and data cells are selected by their\n contents and position, as well as formatting and comments where available,\n and are associated with one another by their proximity in given directions.","Published":"2017-04-23","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"unrtf","Version":"1.0","Title":"Extract Text from Rich Text Format (RTF) Documents","Description":"Wraps the 'unrtf' utility to extract text from RTF files. 
Supports\n document conversion to HTML, LaTeX or plain text. Output in HTML is recommended\n because 'unrtf' has limited support for converting between character encodings.","Published":"2017-06-10","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"unsystation","Version":"0.1.1","Title":"Stationarity Test Based on Unsystematic Sub-Sampling","Description":"Performs a test for second-order stationarity of time series based\n on unsystematic sub-samples.","Published":"2016-11-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"untb","Version":"1.7-2","Title":"ecological drift under the UNTB","Description":"A collection of utilities for biodiversity data.\n Includes the simulation of ecological drift under Hubbell's Unified\n Neutral Theory of Biodiversity, and the calculation of various\n diagnostics such as Preston curves. Now includes functionality\n provided by Francois Munoz and Andrea Manica.","Published":"2013-12-12","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"unvotes","Version":"0.2.0","Title":"United Nations General Assembly Voting Data","Description":"Historical voting data of the United Nations General Assembly. This\n includes votes for each country in each roll call, as well as descriptions and\n topic classifications for each vote.","Published":"2017-06-19","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"upclass","Version":"2.0","Title":"Updated Classification Methods using Unlabeled Data","Description":"This package contains a collection of functions which \n implement data classification. It uses unlabeled data to obtain \n parameter estimates of models. 
The functions can be applied \n over a number of models with the best model selected and \n displayed.","Published":"2014-09-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"uplift","Version":"0.3.5","Title":"Uplift Modeling","Description":"An integrated package for building and testing uplift models ","Published":"2014-03-17","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"UPMASK","Version":"1.1","Title":"Unsupervised Photometric Membership Assignment in Stellar\nClusters","Description":"An implementation of the UPMASK method for performing membership\n assignment in stellar clusters in R. It is prepared to use photometry and\n spatial positions, but it can take into account other types of data. The\n method is able to take into account arbitrary error models, and it is\n unsupervised, data-driven, physical-model-free and relies on as few\n assumptions as possible. The approach followed for membership assessment is\n based on an iterative process, principal component analysis, a clustering\n algorithm and a kernel density estimation.","Published":"2017-04-03","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"upmfit","Version":"0.1.0","Title":"Unified Probability Model Fitting","Description":"Fitting a Unified Probability Model for household-community tuberculosis transmission dynamics.","Published":"2017-02-19","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"UpSetR","Version":"1.3.3","Title":"A More Scalable Alternative to Venn and Euler Diagrams for\nVisualizing Intersecting Sets","Description":"Creates visualizations of intersecting sets using a novel matrix\n design, along with visualizations of several common set, element and attribute\n related tasks.","Published":"2017-03-21","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"uptimeRobot","Version":"1.0.0","Title":"Access the UptimeRobot Ping API","Description":"Provides a set of wrappers to call all the endpoints of 
UptimeRobot API\n which includes various kinds of ping, keep-alive and speed tests.\n See for more information.","Published":"2015-10-22","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"uqr","Version":"1.0.0","Title":"Unconditional Quantile Regression","Description":"Estimation and Inference for Unconditional Quantile Regression for cross-sectional and panel data (see Firpo et al. (2009) ).","Published":"2017-04-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"urca","Version":"1.3-0","Title":"Unit Root and Cointegration Tests for Time Series Data","Description":"Unit root and cointegration tests encountered in applied \n econometric analysis are implemented.","Published":"2016-09-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"urlshorteneR","Version":"0.9.2","Title":"R Wrapper for the 'Bit.ly', 'Goo.gl' and 'Is.gd' URL Shortening\nServices","Description":"Allows using different URL shortening services, which also provide\n expanding and analytic functions. Specifically developed for 'Bit.ly', 'Goo.gl'\n (both OAuth2) and 'is.gd' (no API key). Others can be added by request.","Published":"2016-12-05","License":"Apache License 2.0","snapshot_date":"2017-06-23"} {"Package":"urltools","Version":"1.6.0","Title":"Vectorised Tools for URL Handling and Parsing","Description":"A toolkit for all URL-handling needs, including encoding and decoding,\n parsing, parameter extraction and modification. All functions are\n designed to be both fast and entirely vectorised. 
It is intended to be\n useful for people dealing with web-related datasets, such as server-side\n logs, although it may be useful for other situations involving large sets of\n URLs.","Published":"2016-10-17","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"uroot","Version":"2.0-9","Title":"Unit Root Tests for Seasonal Time Series","Description":"Seasonal unit roots and seasonal stability tests.\n P-values based on response surface regressions are available for both tests.\n P-values based on bootstrap are available for seasonal unit root tests.\n A parallel implementation of the bootstrap method requires a CUDA capable GPU \n with compute capability >= 3.0, otherwise a debugging version fully coded in R is used.","Published":"2017-01-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"USAboundaries","Version":"0.2.0","Title":"Historical and Contemporary Boundaries of the United States of\nAmerica","Description":"The boundaries for geographical units in the United States of\n America contained in this package include state, county, congressional\n district, and zip code tabulation area. Contemporary boundaries are provided\n by the U.S. Census Bureau (public domain). Historical boundaries for the\n years from 1629 to 2000 are provided from the Newberry Library's 'Atlas of\n Historical County Boundaries' (licensed CC BY-NC-SA). 
Additional high\n resolution data is provided in the 'USAboundariesData' package;\n this package provides an interface to access that data.","Published":"2016-01-04","License":"CC BY-NC-SA 4.0","snapshot_date":"2017-06-23"} {"Package":"UScancer","Version":"0.1-2","Title":"Create US cancer datasets from SEER, IARC, and US Census data","Description":"This package contains functions to read cancer data from SEER (http://seer.cancer.gov/) and IARC (http://www.iarc.fr) to create datasets at the county level based on US census information.","Published":"2014-08-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"UScensus2000cdp","Version":"0.03","Title":"US Census 2000 Designated Places Shapefiles and Additional\nDemographic Data","Description":"US Census 2000 Designated Places shapefiles and additional\n demographic data from the SF1 100 percent files. This data set\n contains polygon files in lat/lon coordinates and the\n corresponding demographic data for a number of different\n variables.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"UScensus2000tract","Version":"0.03","Title":"US Census 2000 Tract Level Shapefiles and Additional Demographic\nData","Description":"US 2000 Census Tract shapefiles and additional demographic\n data from the SF1 100 percent files. This data set contains\n polygon files in lat/lon coordinates and the corresponding\n demographic data for a number of different variables.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"UScensus2010","Version":"0.11","Title":"US Census 2010 Suite of R Packages","Description":"US Census 2010 shape files and additional demographic data\n from the SF1 100 percent files. 
This package contains a number\n of helper functions for the UScensus2010blk,\n UScensus2010blkgrp, UScensus2010tract, UScensus2010cdp\n packages.","Published":"2012-07-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"usdm","Version":"1.1-15","Title":"Uncertainty Analysis for Species Distribution Models","Description":"This is a framework that aims to provide methods and tools for assessing the impact of different sources of uncertainties (e.g. positional uncertainty) on the performance of species distribution models (SDMs).","Published":"2015-08-01","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"usedist","Version":"0.1.0","Title":"Distance Matrix Utilities","Description":"Functions to re-arrange, extract, and work with distances.","Published":"2017-05-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"useful","Version":"1.2.3","Title":"A Collection of Handy, Useful Functions","Description":"A set of little functions that have been found useful to do little\n odds and ends such as plotting the results of K-means clustering, substituting\n special text characters, viewing parts of a data.frame, constructing formulas\n from text and building design and response matrices.","Published":"2017-06-07","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"userfriendlyscience","Version":"0.6-1","Title":"Quantitative Analysis Made Accessible","Description":"Contains a number of functions that serve\n two goals. First, to make R more accessible to people migrating from\n SPSS by adding a number of functions that behave roughly like their\n SPSS equivalents. Second, to make a number of slightly more\n advanced functions more user friendly to relatively novice users. 
The\n package also conveniently houses a number of additional functions that\n are intended to increase the quality of methodology and statistics in\n psychology, not by offering technical solutions, but by shifting\n perspectives, for example towards reasoning based on sampling\n distributions as opposed to on point estimates.","Published":"2017-05-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"USGSstates2k","Version":"1.0.1","Title":"Replaced by 'states2k' -- United States of America Map with the\nNAD 1983 Albers Projection","Description":"A map of the USA from the United States Geological Survey (USGS).\n Irucka worked with this data set while a Cherokee Nation Technology\n Solutions (CNTS) USGS Contractor and/or USGS employee. It is replaced by\n 'states2k'.","Published":"2017-01-10","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"UsingR","Version":"2.0-5","Title":"Data Sets, Etc. for the Text \"Using R for Introductory\nStatistics\", Second Edition","Description":"A collection of data sets to accompany the\n textbook \"Using R for Introductory Statistics,\" second\n edition.","Published":"2015-08-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"uskewFactors","Version":"2.0","Title":"Model-Based Clustering via Mixtures of Unrestricted Skew-t\nFactor Analyzer Models","Description":"Implements mixtures of unrestricted skew-t factor analyzer models via the EM algorithm.","Published":"2016-05-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"usl","Version":"1.7.0","Title":"Analyze System Scalability with the Universal Scalability Law","Description":"The Universal Scalability Law is a model to predict hardware and\n software scalability. 
It uses system capacity as a function of load to\n forecast the scalability of the system.","Published":"2016-10-14","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"usmap","Version":"0.2.0","Title":"US Maps Including Alaska and Hawaii","Description":"Obtain United States map data frames of varying region types (e.g. county, \n state). The map data frames include Alaska and Hawaii conveniently placed to the\n bottom left, as they appear in most maps of the US. Convenience functions for plotting\n choropleths and working with FIPS codes are also provided.","Published":"2017-04-29","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"UStatBookABSC","Version":"1.0.0","Title":"A Companion Package to the Book \"U-Statistics, M-Estimation and\nResampling\"","Description":"A set of functions leading to multivariate response L1 regression. \n This includes functions on computing Euclidean inner products and norms, \n weighted least squares estimates on multivariate responses, function to compute \n fitted values and residuals. This package is a companion to the book \"U-Statistics,\n M-estimation and Resampling\", by Arup Bose and Snigdhansu Chatterjee, to appear \n in 2017 as part of the \"Texts and Readings in Mathematics\" (TRIM) series of \n Hindustan Book Agency and Springer-Verlag.","Published":"2016-12-27","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ustyc","Version":"1.0.0","Title":"Fetch US Treasury yield curve data","Description":"Forms a query to submit for US Treasury yield curve data, posting\n this query to the US Treasury web site's data feed service. By default the\n download includes yield data for 12 products from January 1, 1990,\n some of which are NA during this span. The caller can pass parameters to\n limit the query to a certain year or year and month, but the full download\n is not especially large. The downloaded data from the service is in XML\n format. 
The package's main function transforms that XML data into a numeric \n data frame with treasury product items (constant maturity yields for 12 kinds \n of bills, notes, and bonds) as columns and dates as row names. The function \n returns a list which includes an item for this data frame as well as query-related\n values for reference and the update date from the service.","Published":"2014-06-12","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"utf8latex","Version":"1.0.4","Title":"Importing, Exporting and Converting Between Datasets and LaTeX","Description":"Methods to assist with importing data stored in text files with Unicode characters and to convert text or data with foreign characters or mathematical symbols to LaTeX. It also escapes UTF8 code points (fixing the \"warning: found non-ASCII strings\" problem), detects languages, encodings and more. ","Published":"2016-12-26","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"utility","Version":"1.4.1","Title":"Construct, Evaluate and Plot Value and Utility Functions","Description":"Construct and plot objective hierarchies and associated value and utility functions. \n Evaluate the values and utilities and visualize the results as colored objective hierarchies or tables. \n Visualize uncertainty by plotting median and quantile intervals within the nodes of objective hierarchies.\n Get numerical results of the evaluations in standard R data types for further processing.","Published":"2017-05-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"utiml","Version":"0.1.2","Title":"Utilities for Multi-Label Learning","Description":"Multi-label learning strategies and other procedures to support multi-\n label classification in R. The package provides a set of multi-label procedures such as\n sampling methods, transformation strategies, threshold functions, pre-processing \n techniques and evaluation metrics. 
A complete overview of the matter can be seen in\n Zhang, M. and Zhou, Z. (2014) and Gibaja, E. and \n Ventura, S. (2015) .","Published":"2017-04-06","License":"GPL | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"uuid","Version":"0.1-2","Title":"Tools for generating and handling of UUIDs","Description":"Tools for generating and handling of UUIDs (Universally Unique Identifiers).","Published":"2015-07-28","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"UWHAM","Version":"1.0","Title":"Unbinned weighted histogram analysis method (UWHAM)","Description":"A method for estimating log-normalizing constants (or free\n energies) and expectations from multiple distributions (such as\n multiple generalized ensembles).","Published":"2013-01-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"uwIntroStats","Version":"0.0.3","Title":"Descriptive Statistics, Inference, Regression, and Plotting in\nan Introductory Statistics Course","Description":"A set of tools designed to facilitate easy adoption of R for students in introductory classes with little programming experience. Compiles output from existing routines together in an intuitive format, and adds functionality to existing functions. For instance, the regression function can perform linear models, generalized linear models, Cox models, or generalized estimating equations. The user can also specify multiple-partial F-tests to print out with the model coefficients. We also give many routines for descriptive statistics and plotting. ","Published":"2016-09-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"V8","Version":"1.5","Title":"Embedded JavaScript Engine for R","Description":"An R interface to Google's open source JavaScript engine.\n V8 is written in C++ and implements ECMAScript as specified in ECMA-262,\n 5th edition. 
In addition, this package implements typed arrays as\n specified in ECMA 6 used for high-performance computing and libraries\n compiled with 'emscripten'.","Published":"2017-04-25","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"vaersNDvax","Version":"1.0.4","Title":"Non-Domestic Vaccine Adverse Event Reporting System (VAERS)\nVaccine Data for Present","Description":"Non-Domestic VAERS vaccine data for 01/01/2016 - 06/14/2016. If\n you want to explore the full VAERS data for 1990 - Present (data, symptoms,\n and vaccines), then check out the 'vaersND' package from the URL below. The\n URL and BugReports below correspond to the 'vaersND' package, of which\n 'vaersNDvax' is a small subset (2016 only). 'vaersND' is not hosted on CRAN\n due to the large size of the data set. To install the Suggested 'vaers' and\n 'vaersND' packages, use the following R code:\n 'devtools::install_git(\"https://gitlab.com/iembry/vaers.git\",\n build_vignettes = TRUE)' and\n 'devtools::install_git(\"https://gitlab.com/iembry/vaersND.git\",\n build_vignettes = TRUE)'. \"VAERS is a national vaccine safety\n surveillance program co-sponsored by the US Centers for Disease Control and\n Prevention (CDC) and the US Food and Drug Administration (FDA). VAERS is a\n post-marketing safety surveillance program, collecting information about\n adverse events (possible side effects) that occur after the administration\n of vaccines licensed for use in the United States.\" For more information\n about the data, visit . For information about\n vaccination/immunization hazards, visit\n .","Published":"2016-08-09","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"vaersvax","Version":"1.0.4","Title":"US Vaccine Adverse Event Reporting System (VAERS) Vaccine Data\nfor Present","Description":"US VAERS vaccine data for 01/01/2016 - 06/14/2016. 
If you want to\n explore the full VAERS data for 1990 - Present (data, symptoms, and\n vaccines), then check out the 'vaers' package from the URL below. The URL\n and BugReports below correspond to the 'vaers' package, of which 'vaersvax'\n is a small subset (2016 only). 'vaers' is not hosted on CRAN due to the\n large size of the data set. To install the Suggested 'vaers' and 'vaersND'\n packages, use the following R code:\n 'devtools::install_git(\"https://gitlab.com/iembry/vaers.git\",\n build_vignettes = TRUE)' and\n 'devtools::install_git(\"https://gitlab.com/iembry/vaersND.git\",\n build_vignettes = TRUE)'. \"VAERS is a national vaccine safety\n surveillance program co-sponsored by the US Centers for Disease Control and\n Prevention (CDC) and the US Food and Drug Administration (FDA). VAERS is a\n post-marketing safety surveillance program, collecting information about\n adverse events (possible side effects) that occur after the administration\n of vaccines licensed for use in the United States.\" For more information\n about the data, visit . For information about\n vaccination/immunization hazards, visit\n .","Published":"2016-08-09","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"valaddin","Version":"0.1.0","Title":"Functional Input Validation","Description":"A set of basic tools to transform functions into functions with\n input validation checks, in a manner suitable for both programmatic and\n interactive use.","Published":"2017-03-24","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"validann","Version":"1.2.1","Title":"Validation Tools for Artificial Neural Networks","Description":"Methods and tools for analysing and validating the outputs\n and modelled functions of artificial neural networks (ANNs) in terms\n of predictive, replicative and structural validity. 
Also provides a\n method for fitting feed-forward ANNs with a single hidden layer.","Published":"2017-04-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"validate","Version":"0.1.7","Title":"Data Validation Infrastructure","Description":"Declare data validation rules and data quality indicators; confront\n data with them and analyze or visualize the results. The package supports\n rules that are per-field, in-record, cross-record or cross-dataset. Rules\n can be automatically analyzed for rule type and connectivity.","Published":"2017-04-07","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"validatejsonr","Version":"1.0.4","Title":"Validate JSON Against JSON Schemas","Description":"The current implementation uses the C++ library 'RapidJSON' to supply the schema functionality; it supports JSON Schema Draft v4. As of 2016-09-09, 'RapidJSON' passed 262 out of 263 tests in JSON Schema Test Suite (JSON Schema draft 4).","Published":"2016-10-20","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"validateRS","Version":"1.0.0","Title":"One-Sided Multivariate Testing Procedures for Rating Systems","Description":"An implementation of statistical tests for the validation of rating systems as described in the ECB Working paper ''Advances in multivariate back-testing for credit risk underestimation'', by F. Coppens, M. Mayer, L. Millischer, F. Resch, S. Sauer, K. Schulze (ECB WP series, forthcoming).","Published":"2015-12-26","License":"EUPL","snapshot_date":"2017-06-23"} {"Package":"valorate","Version":"1.0-1","Title":"Velocity and Accuracy of the LOg-RAnk TEst","Description":"The algorithm implemented in this package was\n designed to quickly estimate the distribution of the \n log-rank statistic, especially for heavily unbalanced groups. VALORATE \n estimates the null distribution and the p-value of the \n log-rank test based on a recent formulation. 
For a given \n number of alterations that define the size of survival \n groups, the estimation involves a weighted sum of \n distributions that are conditional on a co-occurrence term \n where mutations and events are both present. The estimation \n of conditional distributions is quite fast, allowing the \n analysis of large datasets in a few minutes \n .","Published":"2016-10-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"valottery","Version":"0.0.1","Title":"Results from the Virginia Lottery Draw Games","Description":"Historical results for the state of Virginia lottery draw games. Data were downloaded from https://www.valottery.com/. ","Published":"2015-09-02","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"valr","Version":"0.3.0","Title":"Genome Interval Arithmetic in R","Description":"Read and manipulate genome intervals and signals. Provides\n functionality similar to command-line tool suites within R,\n enabling interactive analysis and visualization of genome-scale data. ","Published":"2017-06-15","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"valuer","Version":"1.1.1","Title":"Pricing of Variable Annuities","Description":"Pricing of variable annuity life insurance\n contracts by means of Monte Carlo methods. Monte Carlo is used to price\n the contract in case the policyholder cannot surrender while\n Least Squares Monte Carlo is used if the insured can surrender.\n This package implements the pricing framework and algorithm described in\n Bacinello et al. (2011) .\n It also implements the state-dependent fee structure\n discussed in Bernard et al. 
(2014) .","Published":"2017-01-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"VAR.etp","Version":"0.7","Title":"VAR modelling: estimation, testing, and prediction","Description":"Estimation, Hypothesis Testing, Prediction for Stationary Vector Autoregressive Models","Published":"2014-12-02","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"varband","Version":"0.9.0","Title":"Variable Banding of Large Precision Matrices","Description":"Implementation of the variable banding procedure for modeling local dependence and estimating precision matrices that is introduced in Yu & Bien (2016) and is available at .","Published":"2016-11-07","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"varbvs","Version":"2.0-8","Title":"Large-Scale Bayesian Variable Selection Using Variational\nMethods","Description":"Fast algorithms for fitting Bayesian variable selection\n models and computing Bayes factors, in which the outcome (or\n response variable) is modeled using a linear regression or a\n logistic regression. The algorithms are based on the variational\n approximations described in \"Scalable variational inference for\n Bayesian variable selection in regression, and its accuracy in\n genetic association studies\" (P. Carbonetto & M. Stephens, 2012,\n ). 
This software has been applied to large\n data sets with over a million variables and thousands of samples.","Published":"2017-03-24","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"varComp","Version":"0.1-360","Title":"Variance Component Models","Description":"Variance component models: REML estimation, testing fixed effect contrasts through Satterthwaite or Kenward-Roger methods, testing the nullity of variance components through (linear or quadratic) score tests or likelihood ratio tests.","Published":"2015-02-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"vardiag","Version":"0.2-1","Title":"Variogram Diagnostics","Description":"Interactive variogram diagnostics.","Published":"2015-07-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"vardpoor","Version":"0.9.4","Title":"Variance Estimation for Sample Surveys by the Ultimate Cluster\nMethod","Description":"Generation of domain variables, linearization of several nonlinear population statistics (the ratio of two totals, weighted income percentile, relative median income ratio, at-risk-of-poverty rate, at-risk-of-poverty threshold, Gini coefficient, gender pay gap, the aggregate replacement ratio, median income below at-risk-of-poverty gap, income quintile share ratio, relative median at-risk-of-poverty gap), computation of regression residuals in case of weight calibration, variance estimation of sample surveys by the ultimate cluster method (Hansen, Hurwitz and Madow, Theory, vol. I: Methods and Applications; vol. II: Theory. 1953, New York: John Wiley and Sons), variance estimation for longitudinal, cross-sectional measures and measures of change for single and multistage cluster sampling designs (Berger, Y. G., 2015, ). 
Several other precision measures are derived - standard error, the coefficient of variation, the margin of error, confidence interval, design effect.","Published":"2017-05-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"VarED","Version":"1.0.0","Title":"Variance Estimation using Difference-Based Methods","Description":"Generating functions for both optimal and ordinary difference sequences, and the difference-based estimation functions.","Published":"2017-03-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"VaRES","Version":"1.0","Title":"Computes value at risk and expected shortfall for over 100\nparametric distributions","Description":"Computes value at risk and expected shortfall, the two most popular measures of financial risk, for over one hundred parametric distributions, including all commonly known distributions. Also computed are the corresponding probability density function and cumulative distribution function.","Published":"2013-08-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"VarfromPDB","Version":"2.2.7","Title":"Disease-Gene-Variant Relations Mining from the Public Databases\nand Literature","Description":"Captures and compiles the genes and variants related to a disease, a phenotype or a clinical feature from the public databases including HPO (Human Phenotype Ontology, ), Orphanet , OMIM (Online Mendelian Inheritance in Man, ), ClinVar , and UniProt (Universal Protein Resource, ) and PubMed abstracts. HPO provides a standardized vocabulary of phenotypic abnormalities encountered in human disease. HPO currently contains approximately 11,000 terms and over 115,000 annotations to hereditary diseases. Orphanet is the reference portal for information on rare diseases and orphan drugs, whose aim is to help improve the diagnosis, care and treatment of patients with rare diseases. 
OMIM is a continuously updated catalog of human genes and genetic disorders and traits, with particular focus on the molecular relationship between genetic variation and phenotypic expression. ClinVar is a freely accessible, public archive of reports of the relationships among human variations and phenotypes, with supporting evidence. UniProt focuses on amino acid altering variants imported from Ensembl Variation databases. For Homo sapiens, the variants including human polymorphisms and disease mutations in the UniProt are manually curated from UniProtKB/Swiss-Prot. Additionally, PubMed provides the primary and latest source of the information. Text mining was employed to capture the information from PubMed abstracts. ","Published":"2017-05-06","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"varhandle","Version":"2.0.2","Title":"Functions for Robust Variable Handling","Description":"Variables are the fundamental parts of each programming language but handling them might be frustrating for programmers from time to time. This package contains some functions to help users (especially data explorers) to make more sense of their variables and get the most out of variables as well as their hardware. These functions are written, collected and crafted over some years of experience in statistical data analysis and for each of them there was a need. Functions in this package are supposed to be efficient and easy to use, hence they will be frequently updated to make them more convenient.","Published":"2017-05-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"VariABEL","Version":"0.9-2.1","Title":"Testing of Genotypic Variance Heterogeneity to Detect\nPotentially Interacting SNP","Description":"Presence of interaction between a SNP and another SNP (or another factor) can result \n\tin heterogeneity of variance between the genotypes of an interacting SNP. 
\n\tDetecting such heterogeneity gives prior knowledge for constructing a genetic model \n\tunderlying a complex trait.","Published":"2016-07-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"variables","Version":"0.0-30","Title":"Variable Descriptions","Description":"Abstract descriptions of (yet) unobserved variables.","Published":"2016-02-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"VariableScreening","Version":"0.1.1","Title":"High-Dimensional Screening for Semiparametric Longitudinal\nRegression","Description":"Implements a screening procedure proposed by Wanghuan Chu, Runze Li\n and Matthew Reimherr (2016) for varying coefficient\n longitudinal models with ultra-high dimensional predictors . The effect of each\n predictor is allowed to vary over time, approximated by a low-dimensional B-spline.\n Within-subject correlation is handled using a generalized estimation equation\n approach with structure specified by the user. Variance is allowed to change\n over time, also approximated by a B-spline.","Published":"2016-07-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"varian","Version":"0.2.2","Title":"Variability Analysis in R","Description":"Uses a Bayesian model to\n estimate the variability in a repeated\n measure outcome and uses that as an outcome or a predictor\n in a second stage model.","Published":"2016-02-28","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"VarianceGamma","Version":"0.3-1","Title":"The Variance Gamma Distribution","Description":"This package provides functions for the variance gamma\n distributions. Density, distribution and quantile functions.\n Functions for random number generation and fitting of the\n variance gamma to data. Also, functions for computing moments\n of the variance gamma distribution of any order about any\n location. 
In addition, there are functions for checking the\n validity of parameters and for interchanging different sets of\n parameterizations for the variance gamma distribution.","Published":"2015-01-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"VarReg","Version":"1.0.1","Title":"Semi-Parametric Variance Regression","Description":"Methods for fitting semi-parametric mean and variance models, with normal or censored data. Also extended to allow a regression in the location, scale and shape parameters.","Published":"2017-05-30","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"vars","Version":"1.5-2","Title":"VAR Modelling","Description":"Estimation, lag selection, diagnostic testing, forecasting, causality analysis, forecast error variance decomposition and impulse response functions of VAR models and estimation of SVAR and SVEC models.","Published":"2013-07-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"VARSEDIG","Version":"1.3","Title":"An Algorithm for Morphometric Characters Selection and\nStatistical Validation in Morphological Taxonomy","Description":"An algorithm (Guisande et al., 2016 ) which identifies the morphometric features that significantly discriminate two taxa and validates the morphological distinctness between them via a Monte-Carlo test, polar coordinates and overlap of the area under the density curve.","Published":"2016-11-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"varSel","Version":"0.1","Title":"Sequential Forward Floating Selection using Jeffries-Matusita\nDistance","Description":"Feature selection using Sequential Forward Floating feature Selection and Jeffries-Matusita distance. It returns a suboptimal set of features to use for image classification. Reference: Dalponte, M., Oerka, H.O., Gobakken, T., Gianelle, D. & Naesset, E. (2013). Tree Species Classification in Boreal Forests With Hyperspectral Data. 
IEEE Transactions on Geoscience and Remote Sensing, 51, 2632-2645, .","Published":"2016-11-05","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"VarSelLCM","Version":"1.2","Title":"Variable Selection for Model-Based Clustering using the\nIntegrated Complete-Data Likelihood of a Latent Class Model","Description":"Uses a finite mixture model for performing cluster analysis with variable selection of continuous data by assuming independence between classes. The package handles datasets with missing values by assuming that values are missing at random. The one-dimensional marginals of the components follow Gaussian distributions for facilitating both model interpretation and model selection. The variable selection is guided by the Maximum Integrated Complete-Data Likelihood criterion. The maximum likelihood inference is done by an EM algorithm for the selected model. This package also performs the imputation of missing values.","Published":"2015-06-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"varSelRF","Version":"0.7-5","Title":"Variable Selection using Random Forests","Description":"Variable selection from random forests using both\n backwards variable elimination (for the selection of small sets\n of non-redundant variables) and selection based on the\n importance spectrum (somewhat similar to scree plots; for the\n selection of large, potentially highly-correlated variables).\n Main applications in high-dimensional data (e.g., microarray\n data, and other genomics and proteomics applications). 
","Published":"2014-12-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"VARsignR","Version":"0.1.3","Title":"Sign Restrictions, Bayesian, Vector Autoregression Models","Description":"Provides routines for identifying structural shocks in vector autoregressions (VARs) using sign restrictions.","Published":"2015-12-21","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"VarSwapPrice","Version":"1.0","Title":"Pricing a variance swap on an equity index","Description":"Computes a portfolio of European options that replicates\n the cost of capturing the realised variance of an equity index.","Published":"2012-03-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"vbdm","Version":"0.0.4","Title":"Variational Bayes Discrete Mixture Model","Description":"Efficient algorithm for solving discrete mixture\n\t\tregression model for rare variant association analysis.\n\t\tUses variational Bayes algorithm to efficiently search over\n\t\tmodel space. Outputs an approximate likelihood ratio test\n\t\tas well as variant level posterior probabilities of\n\t\tassociation.","Published":"2014-02-01","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"VBLPCM","Version":"2.4.4","Title":"Variational Bayes Latent Position Cluster Model for Networks","Description":"Fit and simulate latent position and cluster\n models for network data, using a fast Variational Bayes approximation.","Published":"2015-12-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"VBmix","Version":"0.3.2","Title":"Variational Bayesian Mixture Models","Description":"Variational algorithms and methods for fitting mixture\n models.","Published":"2017-04-01","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"vbsr","Version":"0.0.5","Title":"Variational Bayes Spike Regression Regularized Linear Models","Description":"Efficient algorithm for solving ultra-sparse\n\t\tregularized regression models using a variational\n\t\tBayes algorithm 
with a spike (l0) prior. The algorithm\n\t\tis solved on a path, with coordinate updates, and is\n\t\tcapable of generating very sparse models. There are\n\t\tvery general model diagnostics for controlling type-1\n\t\terror included in this package.","Published":"2014-06-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"VCA","Version":"1.3.2","Title":"Variance Component Analysis","Description":"\n ANOVA and REML estimation of linear mixed models is implemented, once following\n Searle et al. (1991, ANOVA for unbalanced data), once making use of the 'lme4' package.\n The primary objective of this package is to perform a variance component analysis (VCA)\n according to CLSI EP05-A3 guideline \"Evaluation of Precision of Quantitative Measurement\n Procedures\" (2014). There are plotting methods for visualization of an experimental design,\n plotting random effects and residuals. For ANOVA type estimation two methods for computing\n ANOVA mean squares are implemented (SWEEP and quadratic forms). The covariance matrix of \n variance components can be derived, which is used in estimating confidence intervals. Linear\n hypotheses of fixed effects and LS means can be computed. LS means can be computed at specific\n values of covariables and with custom weighting schemes for factor variables. See ?VCA for a\n more comprehensive description of the features. ","Published":"2016-07-25","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"vcd","Version":"1.4-3","Title":"Visualizing Categorical Data","Description":"Visualization techniques, data sets, summary and inference\n procedures aimed particularly at categorical data. Special\n emphasis is given to highly extensible grid graphics. 
The\n package was originally inspired by the book \n\t\"Visualizing Categorical Data\" by Michael Friendly and is \n\tnow the main support package for a new book, \n\t\"Discrete Data Analysis with R\" by Michael Friendly and \n\tDavid Meyer (2015).","Published":"2016-09-17","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"vcdExtra","Version":"0.7-0","Title":"'vcd' Extensions and Additions","Description":"Provides additional data sets, methods and documentation to complement the 'vcd' package for Visualizing Categorical Data\n and the 'gnm' package for Generalized Nonlinear Models.\n\tIn particular, 'vcdExtra' extends mosaic, assoc and sieve plots from 'vcd' to handle 'glm()' and 'gnm()' models and\n\tadds a 3D version in 'mosaic3d'. Additionally, methods are provided for comparing and visualizing lists of\n\t'glm' and 'loglm' objects. This package is now a support package for the book, \"Discrete Data Analysis with R\" by\n Michael Friendly and David Meyer.","Published":"2016-02-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"vcfR","Version":"1.5.0","Title":"Manipulate and Visualize VCF Data","Description":"Facilitates easy manipulation of variant call format (VCF) data.\n Functions are provided to rapidly read from and write to VCF files. Once\n VCF data is read into R a parser function extracts matrices of data. This\n information can then be used for quality control or other purposes. Additional\n functions provide visualization of genomic data. Once processing is complete\n data may be written to a VCF file (*.vcf.gz). It also may be converted into\n other popular R objects (e.g., genlight, DNAbin). 
VcfR provides a link between\n VCF data and familiar R software.","Published":"2017-05-18","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"vcrpart","Version":"0.4-2","Title":"Tree-Based Varying Coefficient Regression for Generalized Linear\nand Ordinal Mixed Models","Description":"Recursive partitioning for varying coefficient generalized linear models and ordinal linear mixed models. Special features are coefficient-wise partitioning, non-varying coefficients and partitioning of time-varying variables in longitudinal regression.","Published":"2016-11-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"VDA","Version":"1.3","Title":"VDA","Description":"Multicategory Vertex Discriminant Analysis: A novel supervised multicategory classification method","Published":"2013-07-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"VDAP","Version":"2.0.0","Title":"Peptide Array Analysis Tools","Description":"Analyze Peptide Array Data and characterize peptide\n sequence space. Allows for high level visualization of global signal, Quality control based\n on replicate correlation and/or relative Kd, calculation of peptide Length/Charge/Kd parameters,\n Hits selection based on RFU Signal, and amino acid composition/basic motif recognition with RFU\n signal weighting. Basic signal trends can be used to generate peptides that follow the observed\n compositional trends.","Published":"2016-05-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"vdg","Version":"1.2.0","Title":"Variance Dispersion Graphs and Fraction of Design Space Plots","Description":"Facilities for constructing variance dispersion graphs, fraction-\n of-design-space plots and similar graphics for exploring the properties of\n experimental designs. The design region is explored via random sampling, which\n allows for more flexibility than traditional variance dispersion graphs. 
A\n formula interface is leveraged to provide access to complex model formulae.\n Graphics can be constructed simultaneously for multiple experimental designs\n and/or multiple model formulae. Instead of using pointwise optimization to\n find the minimum and maximum scaled prediction variance curves, which can be\n inaccurate and time consuming, this package uses quantile regression as an\n alternative.","Published":"2016-10-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Vdgraph","Version":"2.2-2","Title":"Variance dispersion graphs and Fraction of design space plots\nfor response surface designs","Description":"Uses a modification of the published FORTRAN code in \"A Computer Program for Generating Variance Dispersion Graphs\" by G. Vining, Journal of Quality Technology, Vol. 25 No. 1 January 1993, to produce variance dispersion graphs. Also produces fraction of design space plots, and contains data frames for several minimal run response surface designs. ","Published":"2014-12-13","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"VdgRsm","Version":"1.5","Title":"Plots of Scaled Prediction Variances for Response Surface\nDesigns","Description":"Functions for creating variance dispersion graphs, fraction of design space plots, and contour plots of scaled prediction variances for second-order response surface designs in spherical and cuboidal regions. Also, some standard response surface designs can be generated.","Published":"2015-03-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"vdiffr","Version":"0.1.1","Title":"Visual Regression Testing and Graphical Diffing","Description":"An extension to the 'testthat' package that makes it easy\n to add graphical unit tests. 
It provides a Shiny application to\n manage the test cases.","Published":"2016-11-15","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"vdmR","Version":"0.2.4","Title":"Visual Data Mining Tools for R","Description":"This package provides web-based visual data-mining tools by adding\n interactive functions to 'ggplot2' graphics. Brushing and linking between\n multiple plots is one of the main features of this package. Currently scatter\n plots, histograms, parallel coordinate plots and choropleth maps are supported.","Published":"2017-06-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"vec2dtransf","Version":"1.1","Title":"2D Cartesian Coordinate Transformation","Description":"A package for applying affine and similarity transformations on vector spatial data (sp objects). Transformations can be defined from control points or directly from parameters. If redundant control points are provided, Least Squares is applied, allowing residuals and RMSE to be obtained.","Published":"2014-10-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"vecsets","Version":"1.1","Title":"like base::sets tools but keeps duplicate elements","Description":"The base 'sets' tools follow the algebraic definition\n that each element of a set must be unique. \n Since it's often helpful to compare all elements of two vectors,\n this toolset treats every element as unique for counting purposes.\n For ease of use, all functions in vecsets have an argument\n 'multiple' which, when set to FALSE,\n reverts them to the base::set tools functionality.","Published":"2014-10-25","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"VecStatGraphs2D","Version":"1.8","Title":"Vector Analysis using Graphical and Analytical Methods in 2D","Description":"A 2D statistical analysis is performed, both numerical and graphical, of a set of vectors. 
Since a vector has two components (module and azimuth), vector analysis is performed in three stages: modules are analyzed by means of linear statistics, azimuths are analyzed by circular statistics, and the joint analysis of modules and azimuths is done using density maps that allow detection of other distribution properties (e.g. anisotropy) and outliers. Tests and circular statistic parameters are accompanied by a full range of graphics: histograms, maps of distributions, point maps, vector maps, density maps, distribution modules and azimuths.","Published":"2016-04-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"VecStatGraphs3D","Version":"1.6","Title":"Vector analysis using graphical and analytical methods in 3D","Description":"This package performs a 3D statistical analysis, both numerical and graphical, of a set of vectors. Since a vector has three components (a module and two angles), vectorial analysis is performed in two stages: modules are analyzed by means of linear statistics and orientations are analyzed by spherical statistics. Tests and spherical statistic parameters are accompanied by graphs such as density maps, distribution modules and angles. The tests, spherical statistic parameters and graphs allow detection of other distribution properties (e.g. anisotropy) and outliers.","Published":"2014-10-07","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"vegalite","Version":"0.6.1","Title":"Tools to Encode Visualizations with the 'Grammar of\nGraphics'-Like 'Vega-Lite' 'Spec'","Description":"The 'Vega-Lite' 'JavaScript' framework provides a higher-level grammar\n for visual analysis, akin to 'ggplot' or 'Tableau', that generates complete 'Vega'\n specifications. Functions exist which enable building a valid 'spec' from scratch\n or importing a previously created 'spec' file. Functions also exist to export 'spec'\n files and to generate code which will enable plots to be embedded in properly\n configured web pages. 
The default behavior is to generate an 'htmlwidget'.","Published":"2016-03-22","License":"AGPL + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"vegan","Version":"2.4-3","Title":"Community Ecology Package","Description":"Ordination methods, diversity analysis and other\n functions for community and vegetation ecologists.","Published":"2017-04-07","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"vegan3d","Version":"1.1-0","Title":"Static and Dynamic 3D Plots for the 'vegan' Package","Description":"Static and dynamic 3D plots to be used with ordination\n results and in diversity analysis, especially with the vegan package.","Published":"2017-04-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"vegclust","Version":"1.6.5","Title":"Fuzzy Clustering of Vegetation Data","Description":"Contains functions used to perform fuzzy clustering of vegetation data under different models. It also includes functions to measure community dissimilarity on the basis of structure and composition.","Published":"2016-12-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"vegdata","Version":"0.9","Title":"Access Vegetation Databases and Treat Taxonomy","Description":"Handling of vegetation data from Turboveg () and other sources (). Taxonomic harmonization given appropriate taxonomic lists (e.g. the German taxonomical standard list \"GermanSL\", ).","Published":"2016-07-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"vegetarian","Version":"1.2","Title":"Jost Diversity Measures for Community Data","Description":"This package computes diversity for community data sets\n using the methods outlined by Jost (2006, 2007). While there\n are differing opinions on the ideal way to calculate diversity\n (e.g. 
Magurran 2004), this method offers the advantage of\n providing diversity numbers equivalents, independent alpha and\n beta diversities, and the ability to incorporate 'order' (q) as\n a continuous measure of the importance of rare species in the\n metrics. The functions provided in this package largely\n correspond with the equations offered by Jost in the cited\n papers. The package computes alpha diversities, beta\n diversities, gamma diversities, and similarity indices.\n Confidence intervals for diversity measures are calculated\n using a bootstrap method described by Chao et al. (2008). For\n datasets with many samples (sites, plots), sim.table creates\n tables of all pairwise comparisons possible, and for grouped\n samples sim.groups calculates pairwise combinations of within-\n and between-group comparisons.","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"vein","Version":"0.2.1-8","Title":"Vehicular Emissions Inventories","Description":"Elaboration and visualization of emissions inventories,\n consisting of three stages: pre-processing activity data, processing\n or estimating the emissions, and post-processing of emissions in\n maps and databases.","Published":"2017-06-21","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"velociraptr","Version":"1.0","Title":"Fossil Analysis","Description":"Functions for downloading, reshaping, culling, cleaning, and analyzing fossil data from the Paleobiology Database .","Published":"2017-02-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"velox","Version":"0.1.0","Title":"Fast Raster Manipulation and Extraction","Description":"C++ accelerated raster manipulation and extraction.","Published":"2016-08-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"vembedr","Version":"0.1.2","Title":"Functions to Embed Video in HTML","Description":"A set of functions for generating HTML to\n embed hosted video in your R Markdown documents or Shiny 
apps.","Published":"2017-01-12","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"venn","Version":"1.2","Title":"Draw Venn Diagrams","Description":"Draws and displays Venn diagrams up to 7 sets, and any Boolean union of set intersections.","Published":"2016-05-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"VennDiagram","Version":"1.6.17","Title":"Generate High-Resolution Venn and Euler Plots","Description":"A set of functions to generate high-resolution Venn and Euler plots. Includes handling for several special cases, including two-case scaling, and extensive customization of plot shape and structure.","Published":"2016-04-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"venneuler","Version":"1.1-0","Title":"Venn and Euler Diagrams","Description":"Calculates and displays Venn and Euler Diagrams","Published":"2011-08-10","License":"MPL-1.1","snapshot_date":"2017-06-23"} {"Package":"vennplot","Version":"0.9.02","Title":"Venn Diagrams in 2D and 3D","Description":"Calculate and plot Venn diagrams in 2D and 3D.","Published":"2017-04-21","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"verification","Version":"1.42","Title":"Weather Forecast Verification Utilities","Description":"Utilities for verifying discrete, continuous and probabilistic forecasts, and forecasts expressed as parametric distributions are included.","Published":"2015-07-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"versions","Version":"0.3","Title":"Query and Install Specific Versions of Packages on CRAN","Description":"Installs specified versions of R packages hosted on CRAN and\n provides functions to list available versions and the versions of currently\n installed packages. These tools can be used to help make R projects and\n packages more reproducible. 
'versions' fits in the narrow gap between\n the 'devtools' install_version() function and the 'checkpoint' package.\n devtools::install_version() installs a stated package version from source files\n stored on the CRAN archives. However, CRAN does not store binary versions of\n packages, so Windows users need to have RTools installed, and Windows and OSX\n users get longer installation times. 'checkpoint' uses the Revolution Analytics\n MRAN server to install packages (from source or binary) as they were available\n on a given date. It also provides a helpful interface to detect the packages\n in use in a directory and install all of those packages for a given date.\n 'checkpoint' doesn't provide install.packages-like functionality however, and\n that's what 'versions' aims to do, by querying MRAN. As MRAN only goes back to\n 2014-09-17, 'versions' can't install packages archived before this date.","Published":"2016-09-01","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"vertexenum","Version":"1.0.1","Title":"Vertex Enumeration of Polytopes","Description":"When given a description of a polyhedral set by a system of linear inequalities Ax <= b, produces the list of the vertices of the set.","Published":"2015-07-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"VertexSimilarity","Version":"0.1","Title":"Creates Vertex Similarity Matrix for an Undirected Graph","Description":"Creates Vertex Similarity matrix of an undirected graph based\n on the method stated by E. A. Leicht, Petter Holme, AND M. E. J. Newman in\n their paper .","Published":"2016-01-24","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"VertexSort","Version":"0.1-1","Title":"Network Hierarchical Structure and Randomization","Description":"Permits applying the 'Vertex Sort' algorithm (Jothi et al. (2009) <10.1038/msb.2009.52>) to a graph in order to elucidate its hierarchical structure. 
It also allows graphic visualization of the sorted graph by exporting the results to a cytoscape friendly format. Moreover, it offers five different algorithms of graph randomization: 1) Randomize a graph with preserving node degrees, 2) with preserving similar node degrees, 3) without preserving node degrees, 4) with preserving node in-degrees and 5) with preserving node out-degrees.","Published":"2017-03-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"vesselr","Version":"0.2.1","Title":"Gradient and Vesselness Tools for Arrays and NIfTI Images","Description":"Simple functions for calculating the image gradient, image hessian, volume ratio filter, and Frangi vesselness filter of 3-dimensional volumes.","Published":"2017-04-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"vetools","Version":"1.3-28","Title":"Tools for Venezuelan Environmental Data","Description":"Integrated data management library that offers a variety of tools concerning the loading and manipulation of environmental data available from different Venezuelan governmental sources. Facilities are provided to plot temporal and spatial data as well as understand the health of a collection of meteorological data.","Published":"2014-10-15","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"VetResearchLMM","Version":"0.2.0","Title":"Linear Mixed Models - An Introduction with Applications in\nVeterinary Research","Description":"R Codes and Datasets for Duchateau, L. and Janssen, P. and Rowlands, G. J. (1998). Linear Mixed Models. An Introduction with applications in Veterinary Research. 
International Livestock Research Institute (ISBN 92-9146-038-9).","Published":"2017-04-04","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"vfcp","Version":"1.1.0","Title":"Computation of v Values for U and Copula C(U, v)","Description":"Computation of the value of one of two uniformly \n distributed marginals if the copula probability value is known\n and the value of the second marginal is also known.\n Computation and plotting of the corresponding cumulative\n distribution function or survival function.","Published":"2017-05-30","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"VGAM","Version":"1.0-3","Title":"Vector Generalized Linear and Additive Models","Description":"An implementation of about 6 major classes of\n statistical regression models. At the heart of it are the\n vector generalized linear and additive model (VGLM/VGAM)\n classes, and the book \"Vector Generalized Linear and\n Additive Models: With an Implementation in R\" (Yee, 2015)\n \n gives details of the statistical framework and VGAM package.\n Currently only fixed-effects models are implemented,\n i.e., no random-effects models. Many (150+) models and\n distributions are estimated by maximum likelihood estimation\n (MLE) or penalized MLE, using Fisher scoring. VGLMs can be\n loosely thought of as multivariate GLMs. VGAMs are data-driven\n VGLMs (i.e., with smoothing). The other classes are RR-VGLMs\n (reduced-rank VGLMs), quadratic RR-VGLMs, reduced-rank VGAMs,\n RCIMs (row-column interaction models)---these classes perform\n constrained and unconstrained quadratic ordination (CQO/UQO)\n models in ecology, as well as constrained additive ordination\n (CAO). 
Note that these functions are subject to change;\n see the NEWS and ChangeLog files for latest changes.","Published":"2017-01-11","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"VGAMdata","Version":"1.0-3","Title":"Data Supporting the 'VGAM' Package","Description":"Data sets to accompany the VGAM package and\n\tthe book \"Vector Generalized Linear and\n\tAdditive Models: With an Implementation in R\" (Yee, 2015)\n\t.\n\tThese are used to illustrate vector generalized\n\tlinear and additive models (VGLMs/VGAMs), and associated models\n\t(Reduced-Rank VGLMs, Quadratic RR-VGLMs, Row-Column\n\tInteraction Models, and constrained and unconstrained ordination\n\tmodels in ecology).","Published":"2017-01-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"VHDClassification","Version":"0.3","Title":"Discrimination/Classification in very high dimension with linear\nand quadratic rules","Description":"This package provides an implementation of Linear discriminant analysis and quadratic discriminant analysis that works fine in very high dimension (when there are many more variables than observations). ","Published":"2013-12-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"vhica","Version":"0.2.4","Title":"Vertical and Horizontal Inheritance Consistence Analysis","Description":"The \"Vertical and Horizontal Inheritance Consistence Analysis\" method is described in the following publication: \"VHICA: a new method to discriminate between vertical and horizontal transposon transfer: application to the mariner family within Drosophila\" by G. Wallau. et al. (2016) . The purpose of the method is to detect horizontal transfers of transposable elements, by contrasting the divergence of transposable element sequences with that of regular genes. 
","Published":"2016-04-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"VideoComparison","Version":"0.15","Title":"Video Comparison Tool","Description":"It takes the motion vectors for two videos\n\t(coming, for instance, from a variant of the shotdetect code\n\tthat stores detailed motion vectors in JSON format) and\n\tcompares them, extracting the common chunk.\n\tThen, provided some image hashes are available, it compares\n\ttheir signatures in order to decide on the\n\tchunk similarity of the two video files.\n ShotDetect is free software which detects shots and scenes\n from a video (http://johmathe.name/shotdetect.html).","Published":"2015-07-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"vietnamcode","Version":"0.1.1","Title":"Convert Vietnam Provincial Codes","Description":"Converts Vietnam's provinces' names and IDs\n across different formats. Handles diacritics and different spellings.","Published":"2016-07-07","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"VIF","Version":"1.0","Title":"VIF Regression: A Fast Regression Algorithm For Large Data","Description":"This package implements a fast regression algorithm for\n building linear models for large data as defined in the paper\n \"VIF-Regression: A Fast Regression Algorithm for Large Data\n (2011), Journal of the American Statistical Association, Vol.\n 106, No. 493: 232-247\" by Dongyu Lin, Dean P. Foster, and Lyle\n H. 
Ungar.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"VIFCP","Version":"1.2","Title":"Detecting Change-Points via VIFCP Method","Description":"Contains a function to support the following paper:\n Xiaoping Shi, Xiang-Sheng Wang, Dongwei Wei, Yuehua Wu (2016), ,\n A sequential multiple change-point detection procedure via VIF regression,\n Computational Statistics, 31(2): 671-691.","Published":"2016-08-31","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"VIGoR","Version":"1.0","Title":"Variational Bayesian Inference for Genome-Wide Regression","Description":"Conducts linear regression using variational Bayesian inference, particularly optimized for genome-wide association mapping and whole-genome prediction which use a number of DNA markers as the explanatory variables. Provides seven regression models which select the important variables (i.e., the variables related to response variables) among the given explanatory variables in different ways (i.e., model structures).","Published":"2015-05-20","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"VIM","Version":"4.7.0","Title":"Visualization and Imputation of Missing Values","Description":"New tools for the visualization of missing and/or imputed values\n are introduced, which can be used for exploring the data and the structure of\n the missing and/or imputed values. Depending on this structure of the missing\n values, the corresponding methods may help to identify the mechanism generating\n the missing values and allows to explore the data including missing values.\n In addition, the quality of imputation can be visually explored using various\n univariate, bivariate, multiple and multivariate plot methods. 
A graphical user\n interface available in the separate package VIMGUI allows an easy handling of\n the implemented plot methods.","Published":"2017-04-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"VIMGUI","Version":"0.10.0","Title":"Visualization and Imputation of Missing Values - Graphical User\nInterface","Description":"A graphical user interface for the methods implemented in the\n package VIM. It allows an easy handling of the\n implemented plot and imputation methods.","Published":"2016-10-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"VineCopula","Version":"2.1.2","Title":"Statistical Inference of Vine Copulas","Description":"Provides tools for the statistical analysis of vine copula models.\n The package includes tools for parameter estimation, model selection,\n simulation, goodness-of-fit tests, and visualization. Tools for estimation,\n selection and exploratory data analysis of bivariate copula models are also\n provided.","Published":"2017-04-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"vines","Version":"1.1.5","Title":"Multivariate Dependence Modeling with Vines","Description":"Implementation of the vine graphical model for building\n high-dimensional probability distributions as a factorization of\n bivariate copulas and marginal density functions. 
This package\n provides S4 classes for vines (C-vines and D-vines) and methods\n for inference, goodness-of-fit tests, density/distribution\n function evaluation, and simulation.","Published":"2016-07-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"violinmplot","Version":"0.2.1","Title":"Combination of violin plot with mean and standard deviation","Description":"A lattice violin-plot is overlaid with the arithmetic\n mean and standard deviation.","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"vioplot","Version":"0.2","Title":"Violin plot","Description":"A violin plot is a combination of a box plot and a kernel density plot. ","Published":"2005-10-29","License":"BSD","snapshot_date":"2017-06-23"} {"Package":"viopoints","Version":"0.2-1","Title":"1-D Scatter Plots with Jitter Using Kernel Density Estimates","Description":"viopoints draws one dimensional scatter plots with jitter\n using kernel density estimates in a similar way to violin\n plots.","Published":"2011-08-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"vipor","Version":"0.4.5","Title":"Plot Categorical Data Using Quasirandom Noise and Density\nEstimates","Description":"Generate a violin point plot, a combination of a violin/histogram\n plot and a scatter plot by offsetting points within a category based on their\n density using quasirandom noise.","Published":"2017-03-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"viridis","Version":"0.4.0","Title":"Default Color Maps from 'matplotlib'","Description":"Port of the new 'matplotlib' color maps ('viridis' - the default\n -, 'magma', 'plasma' and 'inferno') to 'R'. 'matplotlib' is a popular plotting library for 'python'. These color maps are designed\n in such a way that they will analytically be perfectly perceptually-uniform,\n both in regular form and also when converted to black-and-white. 
They are\n also designed to be perceived by readers with the most common form of color\n blindness.","Published":"2017-03-27","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"viridisLite","Version":"0.2.0","Title":"Default Color Maps from 'matplotlib' (Lite Version)","Description":"Port of the new 'matplotlib' color maps ('viridis' - the default\n -, 'magma', 'plasma' and 'inferno') to 'R'. 'matplotlib' is a popular plotting library for 'python'. These color maps are designed\n in such a way that they will analytically be perfectly perceptually-uniform,\n both in regular form and also when converted to black-and-white. They are\n also designed to be perceived by readers with the most common form of color\n blindness. This is the 'lite' version of the more complete 'viridis' package\n that can be found at .","Published":"2017-03-24","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"virtualspecies","Version":"1.4-1","Title":"Generation of Virtual Species Distributions","Description":"Provides a framework for generating virtual species distributions,\n a procedure increasingly used in ecology to improve species distribution\n models. This package integrates the existing methodological approaches with the\n objective of generating virtual species distributions with increased ecological\n realism.","Published":"2016-12-22","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"virustotal","Version":"0.2.1","Title":"R Client for the VirusTotal API","Description":"Use VirusTotal, a Google service that analyzes files and URLs \n for viruses, worms, trojans etc., provides category of the content hosted by a \n domain from a variety of prominent services, provides passive DNS information,\n among other things. See for more information. 
","Published":"2017-05-01","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ViSiElse","Version":"1.2.0","Title":"A Visual Tool for Behaviour Analysis","Description":"A graphical tool designed to visualize and give an overview of behavioural observations made on individuals or groups. It visualizes raw data collected during experimental observations of the performance of a procedure, graphically presenting an overview of individual and group actions, usually acquired from timestamps during video-recorded sessions. Package options allow adding graphical information such as statistical indicators (mean, standard deviation, quantile or statistical test), as well as, for each action, green or black zones providing visual information about the accuracy of the performed actions.","Published":"2016-08-25","License":"AGPL-3","snapshot_date":"2017-06-23"} {"Package":"visNetwork","Version":"1.0.3","Title":"Network Visualization using 'vis.js' Library","Description":"Provides an R interface to the 'vis.js' JavaScript charting\n library. It allows an interactive visualization of networks.","Published":"2016-12-22","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"visreg","Version":"2.4-0","Title":"Visualization of Regression Models","Description":"Provides a convenient interface for constructing plots to visualize the fit of regression models arising from a wide variety of models in R ('lm', 'glm', 'coxph', 'rlm', 'gam', 'locfit', 'randomForest', etc.).","Published":"2017-06-09","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"vistime","Version":"0.4.0","Title":"Pretty Timeline Creation","Description":"Create timelines or Gantt charts, offline and interactive, that are usable in the 'RStudio' viewer pane, in 'R Markdown' documents and in 'Shiny' apps using 'plotly.js', a high-level, declarative charting library (see ). Hover the mouse pointer over a point or task to show details or drag a rectangle to zoom in. 
Timelines (and the data behind them) can be manipulated using 'plotly_build()' or, once uploaded to a 'plotly' account, viewed and modified in a web browser.","Published":"2017-06-03","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"visualFields","Version":"0.4.3","Title":"Statistical Methods for Visual Fields","Description":"A collection of tools for analyzing the field of vision. It provides a framework for development and use of innovative methods for visualization, statistical analysis, and clinical interpretation of visual-field loss and its change over time. It is intended to be a tool for collaborative research.","Published":"2016-01-16","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"visualize","Version":"4.3.0","Title":"Graph Probability Distributions with User Supplied Parameters\nand Statistics","Description":"Graphs the pdf or pmf and highlights what area or probability is\n present in user defined locations. Visualize is able to provide lower tail,\n bounded, upper tail, and two tail calculations. Supports strict and equal\n to inequalities. Also provided on the graph is the mean and variance of\n the distribution. ","Published":"2017-04-27","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"VisuClust","Version":"1.2","Title":"Visualisation of Clusters in Multivariate Data","Description":"Displays multivariate data, based on Sammon's nonlinear mapping.","Published":"2016-02-17","License":"LGPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"vita","Version":"1.0.0","Title":"Variable Importance Testing Approaches","Description":"Implements the novel testing approach by Janitza et al.(2015)\n \n for the permutation variable importance measure in a random forest and the\n PIMP-algorithm by Altmann et al.(2010) .\n Janitza et al.(2015) \n do not use the \"standard\" permutation variable\n importance but the cross-validated permutation variable\n importance for the novel test approach. 
The cross-validated\n permutation variable importance is not based on the out-of-bag\n observations but uses a similar strategy which is inspired by\n the cross-validation procedure. The novel test approach can be\n applied for classification trees as well as for regression\n trees. However, the use of the novel testing approach has not\n been tested for regression trees so far, so this routine is\n meant for the expert user only and its current state is rather\n experimental.","Published":"2015-12-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"vitality","Version":"1.2","Title":"Fitting Routines for the Vitality Family of Mortality Models","Description":"Provides fitting routines for four versions of the\n Vitality family of mortality models.","Published":"2016-03-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"VizOR","Version":"0.8-5","Title":"Graphical Visualization Tools for Complex Observational Data\nwith Focus on Health Sciences","Description":"Provides individual- and aggregate-level graphical depictions of\n patterns of treatment and response in patient registries, and a graphical\n tool for examining potential for confounding in analyses of observational\n data.","Published":"2016-12-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"vkR","Version":"0.1","Title":"Access to VK API via R","Description":"Provides an interface to the VK API .\n VK is the largest European online social networking\n service, based in Russia.","Published":"2016-12-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"VLF","Version":"1.0","Title":"Frequency Matrix Approach for Assessing Very Low Frequency\nVariants in Sequence Records","Description":"Using frequency matrices, very low frequency variants (VLFs) are assessed for amino acid and nucleotide sequences. 
The VLFs are then compared to see if they occur in only one member of a species, singleton VLFs, or if they occur in multiple members of a species, shared VLFs. The amino acid and nucleotide VLFs are then compared to see if they are concordant with one another. Amino acid VLFs are also assessed to determine if they lead to a change in amino acid residue type, and potential changes to protein structures.","Published":"2013-11-19","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"VLMC","Version":"1.4-1","Title":"Variable Length Markov Chains ('VLMC') Models","Description":"Functions, Classes & Methods for estimation, prediction, and\n simulation (bootstrap) of Variable Length Markov Chain ('VLMC') Models.","Published":"2015-04-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"vmsbase","Version":"2.1.3","Title":"GUI Tools to Process, Analyze and Plot Fisheries Data","Description":"The tools you need to process, analyze, combine, integrate and\n plot your fishery data: the georeferenced dataset from the Vessel Monitoring\n System (VMS), from the Automatic Information System (AIS) or other tracking\n devices, as well as the catches or landings dataset from the Logbook or Vessel\n Register. Package 'vmsbase' is equipped with Viewer Tools to visually inspect data\n at different steps of the analyses and to produce effective outputs for reports\n and scientific publications. Viewers are designed to show the VMS pings, to\n visualize single or multiple tracks for fishing vessels, or to represent the VMS\n data on Google Viewer, so that the user can produce easy-to-interpret and more\n realistic visualizations of both fishing effort and effort behaviour. 
Package\n 'vmsbase' represents the implementation of several R routines which have been\n developed by the \"Tor Vergata\" University of Rome Team involved in the Italian\n National Program for the Data Collection Framework for Fisheries Data between\n 2009-2012.","Published":"2016-06-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"VNM","Version":"4.1","Title":"Using V-algorithm and Newton-Raphson Method to Obtain\nMultiple-objective Optimal Design","Description":"Using V-algorithm and Newton-Raphson method to obtain multiple-objective optimal design for estimating the shape of dose-response, the ED50 (the dose producing an effect midway between the expected responses at the extreme doses) and the MED (the minimum effective dose level) for the 2,3,4-parameter logistic models and for evaluating its efficiencies for the three objectives. ","Published":"2016-02-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"vocaldia","Version":"0.8.1","Title":"Create and Manipulate Vocalisation Diagrams","Description":"Create adjacency matrices of vocalisation graphs from\n dataframes containing sequences of speech and silence intervals,\n transforming these matrices into Markov diagrams, and generating\n datasets for classification of these diagrams by 'flattening' them\n and adding global properties (functionals) etc. 
Vocalisation\n diagrams date back to early work in psychiatry (Jaffe and Feldstein,\n 1970) and social psychology (Dabbs and Ruback, 1987) but have only\n recently been employed as a data representation method for machine\n learning tasks including meeting segmentation (Luz, 2012)\n and classification (Luz, 2013).","Published":"2017-04-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"vortexR","Version":"1.1.4","Title":"Post Vortex Simulation Analysis","Description":"Facilitates post-Vortex simulation analysis by offering\n tools to collate multiple Vortex (v10) output files into one R object, and\n analyse the collated output statistically. Vortex is software for\n the development of individual-based models for population dynamics simulation\n (see ).","Published":"2017-06-07","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"vortexRdata","Version":"1.0.3","Title":"Example Data for R Package 'vortexR'","Description":"Contains selected data from two publications,\n Campbell et al. (2016) \n and Pacioni et al. 
(2017) .\n The data are provided both as raw outputs from the population viability\n analysis software Vortex and as packaged R objects.\n The R package 'vortexR' uses the raw data provided here to illustrate its\n functionality of parsing raw Vortex output into R objects.","Published":"2017-03-25","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"Voss","Version":"0.1-4","Title":"Generic Voss algorithm (random sequential additions)","Description":"The Voss package provides functionality for generating\n realizations of a fractal Brownian function on a uniform 1D or 2D\n grid, with classic and generic versions of the Voss algorithm\n (random sequential additions).","Published":"2012-06-04","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"vote","Version":"1.0-0","Title":"Election Vote Counting","Description":"Counting election votes and determining election results by different methods, including the single transferable vote, approval, score and plurality methods.","Published":"2017-02-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"vottrans","Version":"1.0","Title":"Voter Transition Analysis","Description":"Calculates voter transitions comparing two elections, using the function solve.QP() in package 'quadprog'.","Published":"2016-03-18","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"vowels","Version":"1.2-1","Title":"Vowel Manipulation, Normalization, and Plotting","Description":"Procedures for the manipulation, normalization, and plotting of phonetic and sociophonetic vowel formant data. 
vowels is the backend for the NORM website.","Published":"2014-11-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"vows","Version":"0.5","Title":"Voxelwise Semiparametrics","Description":"Parametric and semiparametric inference for massively parallel\n models, i.e., a large number of models with common design matrix, as often\n occurs with brain imaging data.","Published":"2016-08-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"voxel","Version":"1.3.2","Title":"Mass-Univariate Voxelwise Analysis of Medical Imaging Data","Description":"Functions for the mass-univariate voxelwise analysis of medical imaging data that follows the NIfTI format. ","Published":"2017-05-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"VoxR","Version":"0.5.1","Title":"Metrics extraction of trees from T-LiDAR data","Description":"Tools for tree crown structure description based on T-LiDAR data voxelisation","Published":"2014-01-29","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"vqtl","Version":"1.2.0","Title":"Genome Scans to Accommodate and Target Genetic and Non-Genetic\nEffects on Trait Variance in Test Crosses","Description":"In recognition that there are many factors (genetic loci, macro-\n genetic factors such as sex, and environmental factors) that influence the\n extent of environmental variation, the 'vqtl' package conducts genome scans\n that accommodate and target these factors. 
The main functions of this package,\n scanonevar() and scanonevar.perm(), take as input a cross object from the popular\n 'qtl' package.","Published":"2017-06-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"vrcp","Version":"0.1.1","Title":"Change Point Estimation for Regression with Varying Segments and\nHeteroscedastic Variances","Description":"Estimation of varying regression segments and a change point in\n 2-segment regression models with heteroscedastic variances, and with or\n without a smoothness constraint at the change point.","Published":"2016-01-08","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"vrmlgen","Version":"1.4.9","Title":"Generate 3D visualizations for data exploration on the web","Description":"vrmlgen creates 3D scatter and bar plots, visualizations of 3D meshes, parametric functions and height maps in web formats such as the Virtual Reality Modeling Language (VRML, filetype .wrl) and the LiveGraphics3D format.","Published":"2013-12-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"VRPM","Version":"1.1","Title":"Visualizing Risk Prediction Models","Description":"A package to visualize risk prediction models. For each\n predictor, a color bar represents the contribution to the linear predictor\n or latent variable. A conversion from the linear predictor to the\n estimated risk or survival is also given. (Cumulative) contribution charts\n enable the user to visualize how the estimated risk for one particular observation\n is obtained by the model. Several options allow choosing different color\n maps and selecting the zero level of the contributions. The package is\n able to deal with 'glm', 'coxph', 'mfp', 'multinom' and 'ksvm' objects. For 'ksvm'\n objects, the visualization is not always exact. 
Functions providing tools\n to indicate the accuracy of the approximation are provided in addition to\n the visualization.","Published":"2016-09-01","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"vrtest","Version":"0.97","Title":"Variance Ratio tests and other tests for Martingale Difference\nHypothesis","Description":"A collection of statistical tests for martingale difference hypothesis","Published":"2014-08-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"vscc","Version":"0.2","Title":"Variable selection for clustering and classification","Description":"Performs variable selection/feature reduction under a clustering or classification framework. In particular, it can be used in an automated fashion using mixture model-based methods (tEIGEN and MCLUST are currently supported). ","Published":"2013-11-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"VSE","Version":"0.99","Title":"Variant Set Enrichment","Description":"Calculates the enrichment of associated variant set (AVS) for an array of genomic regions. The AVS is the collection of disjoint LD blocks computed from a list of disease associated SNPs and their linked (LD) SNPs. VSE generates a null distribution of matched random variant sets (MRVSs) from 1000 Genome Project Phase III data that are identical to AVS, LD block by block. 
It then computes the enrichment of AVS intersecting with user provided genomic features (e.g., histone marks or transcription factor binding sites) compared with the null distribution.","Published":"2016-03-21","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"VSURF","Version":"1.0.3","Title":"Variable Selection Using Random Forests","Description":"Three steps variable selection procedure based on random forests.\n Initially developed to handle high dimensional data (for which number of\n variables largely exceeds number of observations), the package is very\n versatile and can treat most dimensions of data, for regression and\n supervised classification problems. First step is dedicated to eliminate\n irrelevant variables from the dataset. Second step aims to select all\n variables related to the response for interpretation purpose. Third step\n refines the selection by eliminating redundancy in the set of variables\n selected by the second step, for prediction purpose.","Published":"2016-04-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"VTrack","Version":"1.11","Title":"A Collection of Tools for the Analysis of Remote Acoustic\nTelemetry Data","Description":"Designed to facilitate the assimilation, analysis and synthesis of animal location and movement data collected by the VEMCO suite of acoustic transmitters and receivers. As well as database and geographic information capabilities the principal feature of VTrack is the qualification and identification of ecologically relevant events from the acoustic detection and sensor data. 
This procedure condenses the acoustic detection database by orders of magnitude, greatly enhancing the synthesis of acoustic detection data.","Published":"2015-07-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"vtreat","Version":"0.5.32","Title":"A Statistically Sound 'data.frame' Processor/Conditioner","Description":"A 'data.frame' processor/conditioner that prepares real-world data for predictive modeling in a statistically sound manner.\n 'vtreat' prepares variables so that data has fewer exceptional cases, making\n it easier to safely use models in production. Common problems 'vtreat' defends\n against: 'Inf', 'NA', too many categorical levels, rare categorical levels, and new\n categorical levels (levels seen during application, but not during training).\n 'vtreat::prepare' should be used as you would use 'model.matrix'.","Published":"2017-06-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"vudc","Version":"1.1","Title":"Visualization of Univariate Data for Comparison","Description":"Contains functions for visualization univariate data: ccdplot and qddplot.","Published":"2016-02-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"VWPre","Version":"0.9.6","Title":"Tools for Preprocessing Visual World Data","Description":"Gaze data from the Visual World Paradigm requires significant\n preprocessing prior to plotting and analyzing the data. This package \n provides functions for preparing visual world eye-tracking data for \n statistical analysis and plotting. It can prepare data for linear \n analyses (e.g., ANOVA, Gaussian-family LMER, Gaussian-family GAMM) as\n well as logistic analyses (e.g., binomial-family LMER and binomial-family GAMM).\n Additionally, it contains various plotting functions for creating grand average and\n conditional average plots. 
See the vignette for samples of the functionality.\n Currently, the functions in this package are designed for handling data\n collected with SR Research Eyelink eye trackers using Sample Reports created\n in SR Research Data Viewer. While we would like to add functionality \n for data collected with other systems in the future, the current package is \n considered to be feature-complete and will shortly enter maintenance mode.","Published":"2017-06-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"vwr","Version":"0.3.0","Title":"Useful functions for visual word recognition research","Description":"Functions and data for use in visual word recognition research: \n\t Computation of neighbors (Hamming and Levenshtein\n\t distances), average distances to neighbors (e.g., OLD20),\n\t and Coltheart's N. Also includes the LD1NN algorithm to\n\t detect bias in the composition of a lexical decision task. Most of\n\t the functions support parallel execution. Supplies wordlists for \n\t several languages. Uses the string distance functions from the stringdist package by Mark van der Loo.","Published":"2013-08-19","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"W2CWM2C","Version":"2.0","Title":"A Graphical Tool for Wavelet (Cross) Correlation and Wavelet\nMultiple (Cross) Correlation Analysis","Description":"Set of functions that improves the graphical presentations of the functions 'wave.correlation' and 'spin.correlation' (waveslim package, Whitcher 2012) and the 'wave.multiple.correlation' and 'wave.multiple.cross.correlation' (wavemulcor package, Fernandez-Macho 2012b). The plot outputs (heatmaps) can be displayed in the screen or can be saved as PNG or JPG images or as PDF or EPS formats. The W2CWM2C package also helps to handle the (input data) multivariate time series easily as a list of N elements (times series) and provides a multivariate data set (dataexample) to exemplify its use. 
A description of the package was published in Computing in Science & Engineering (Volume: 16, Issue: 6) on Sep. 09, 2014, doi:10.1109/MCSE.2014.96. ","Published":"2015-08-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"W3CMarkupValidator","Version":"0.1-6","Title":"R Interface to W3C Markup Validation Services","Description":"\n R interface to a W3C Markup Validation service.\n See for more information.","Published":"2017-02-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"WACS","Version":"1.0","Title":"Multivariate Weather-State Approach Conditionally Skew-Normal\nGenerator","Description":"Multivariate weather generator for daily climate variables based \n on weather-states, using a Markov chain for modeling the succession of weather states. \n Conditionally on the weather states, the multivariate variables are modeled using the family \n of Complete Skew-Normal distributions. Parameters are estimated on measured series. Data must \n include the variable 'Rain' and can accept as many other variables as desired. ","Published":"2016-02-25","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"waffect","Version":"1.2","Title":"A package to simulate constrained phenotypes under a disease\nmodel H1","Description":"waffect (pronounced 'double-u affect' for 'weighted\n affectation') is a package to simulate phenotypic (case or\n control) datasets under a disease model H1 such that the total\n number of cases is constant across all the simulations (the\n constraint in the title). The package also makes it possible to\n generate phenotypes in the case of more than two classes, so\n that the number of phenotypes belonging to each class is\n constant across all the simulations. 
waffect is used to empirically assess\n the statistical power of Genome-Wide Association\n studies.","Published":"2012-04-11","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"waffle","Version":"0.7.0","Title":"Create Waffle Chart Visualizations in R","Description":"Square pie charts (a.k.a. waffle charts) can be used\n to communicate parts of a whole for categorical quantities. To emulate the\n percentage view of a pie chart, a 10x10 grid should be used with each square\n representing 1% of the total. Modern uses of waffle charts do not\n necessarily adhere to this rule and can be created with a grid of any\n rectangular shape. Best practices suggest keeping the number of categories\n small, just as should be done when creating pie charts. Tools are provided\n to create waffle charts as well as stitch them together, and to use glyphs\n for making isotype pictograms.","Published":"2017-01-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"wahc","Version":"1.0","Title":"Autocorrelation and Heteroskedasticity Correction in Fixed\nEffect Panel Data Model","Description":"Fit the fixed effect panel data model with heteroskedasticity and\n autocorrelation correction.","Published":"2015-02-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"wakefield","Version":"0.3.0","Title":"Generate Random Data Sets","Description":"Generates random data sets including: data.frames, lists,\n and vectors.","Published":"2016-06-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"walker","Version":"0.1.0","Title":"Efficient Bayesian Linear Regression with Time-Varying\nCoefficients","Description":"Fully Bayesian linear regression where the regression \n coefficients are allowed to vary over \"time\" as independent random \n walks. 
All computations are done using Hamiltonian Monte Carlo provided by \n Stan, using a state space representation of the model in order to marginalise \n over the coefficients for efficient sampling.","Published":"2017-06-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"walkr","Version":"0.3.4","Title":"Random Walks in the Intersection of Hyperplanes and the\nN-Simplex","Description":"Consider the intersection of two spaces: the complete solution space\n to Ax = b and the N-simplex. The intersection of these two spaces is \n a non-negative convex polytope. The package walkr samples from this \n intersection using two Monte-Carlo Markov Chain (MCMC) methods: \n hit-and-run and Dikin walk. walkr also provide tools to examine sample \n quality.","Published":"2017-02-24","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"walkscoreAPI","Version":"1.2","Title":"Walk Score and Transit Score API","Description":"A collection of functions to perform the Application\n Programming Interface (API) calls associated with the Walk\n Score website (www.walkscore.com) within the R environment.\n These functions can be used to query the Walk Score and Transit\n Score database for a wide variety of information using R\n scripts. This package includes the simple Walk Score and\n Transit Score API calls, which return the scores associated\n with an input location, as well as calls which return some data\n used to calculate the scores. These functions are especially\n useful for mass data collection and gathering Walk Score and\n Transit Score values for large lists of locations.","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"wallace","Version":"0.6.4","Title":"A Modular Platform for Reproducible Modeling of Species Niches\nand Distributions","Description":"The 'shiny' application Wallace is a modular platform for reproducible modeling of species niches and distributions. 
Wallace guides users through a complete analysis, from the acquisition of species occurrence and environmental data to visualizing model predictions on an interactive map, thus bundling complex workflows into a single, streamlined interface.","Published":"2017-06-01","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"wally","Version":"1.0.9","Title":"The Wally Calibration Plot for Risk Prediction Models","Description":"A prediction model is calibrated if, roughly, for any percentage x we can expect that x subjects out of 100 experience the event among all subjects that have a predicted risk of x%. A calibration plot provides a simple, yet useful, way of assessing the calibration assumption. The Wally plot consists of a sequence of usual calibration plots. Among the plots contained within the sequence, one is the actual calibration plot which has been obtained from the data and the others are obtained from similar simulated data under the calibration assumption. It provides the investigator with a direct visual understanding of the shape and sampling variability that are common under the calibration assumption. The original calibration plot from the data is included randomly among the simulated calibration plots, similarly to a police lineup. If the original calibration plot is not easily identified then the calibration assumption is not contradicted by the data. The method handles the common situations in which the data contain censored observations and occurrences of competing events.","Published":"2017-05-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"walrus","Version":"1.0.0","Title":"Robust Statistical Methods","Description":"A toolbox of common robust statistical tests, including robust\n descriptives, robust t-tests, and robust ANOVA. It is also available as a\n module for 'jamovi' (see for more information).\n Walrus is based on the WRS2 package by Patrick Mair, which is in turn based on\n the scripts and work of Rand Wilcox. 
These analyses are described in depth in\n the book 'Introduction to Robust Estimation & Hypothesis Testing'.","Published":"2017-05-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"wand","Version":"0.2.0","Title":"Retrieve 'Magic' Attributes from Files and Directories","Description":"The 'libmagic' library provides functions to determine\n 'MIME' type and other metadata from files through their \"magic\"\n attributes. This is useful when you do not wish to rely solely on\n the honesty of a user or the extension on a file name. It also\n incorporates other metadata from the mime-db database\n .","Published":"2016-08-16","License":"AGPL","snapshot_date":"2017-06-23"} {"Package":"warbleR","Version":"1.1.8","Title":"Streamline Bioacoustic Analysis","Description":"A tool to streamline the analysis of animal acoustic signal structure. The package offers functions for downloading avian vocalizations from the open-access online repository Xeno-Canto, displaying the geographic extent of the recordings, manipulating sound files, detecting acoustic signals or importing detected signals from other software, assessing performance of methods that measure acoustic similarity, conducting cross-correlations, dynamic time warping, measuring acoustic parameters and analysing interactive vocal signals, among others. 
Most functions working iteratively allow parallelization to improve computational efficiency.","Published":"2017-04-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"WARN","Version":"1.2-2","Title":"Weaning Age Reconstruction with Nitrogen Isotope Analysis","Description":"This estimates precise weaning ages\n\tfor a given skeletal population\n\tby analyzing their stable nitrogen isotope ratios.\n\tNewly estimated bone collagen turnover rates and\n\tapproximate Bayesian computation (ABC)\n\tare adopted in this package.","Published":"2017-02-21","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"warpMix","Version":"0.1.0","Title":"Mixed Effects Modeling with Warping for Functional Data Using\nB-Spline","Description":"Mixed effects modeling with warping for functional data using B-\n splines. Warping coefficients are treated as random effects, and warping\n functions are general functions, with parameters representing the projection of\n part of the warping functions onto a B-spline basis. 
Warped data are modelled by a\n linear mixed effects functional model; the noise is Gaussian and independent of\n the warping functions.","Published":"2017-02-15","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"wasim","Version":"1.1.2","Title":"Visualisation and analysis of output files of the hydrological\nmodel WASIM","Description":"Helpful tools for data processing and visualisation of results of the hydrological model WASIM-ETH.","Published":"2011-12-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"water","Version":"0.6","Title":"Actual Evapotranspiration with Energy Balance Models","Description":"Tools and functions to calculate actual Evapotranspiration\n using surface energy balance models.","Published":"2017-03-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"waterData","Version":"1.0.8","Title":"Retrieval, Analysis, and Anomaly Calculation of Daily Hydrologic\nTime Series Data","Description":"Imports U.S. Geological Survey (USGS) daily hydrologic data from USGS web services (see for more information), plots the data, addresses some common data problems, and calculates and plots anomalies. ","Published":"2017-04-28","License":"Unlimited | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"waterfall","Version":"1.0.2","Title":"Waterfall Charts","Description":"Provides support for creating waterfall charts in R\n using both traditional base and lattice graphics.","Published":"2016-04-03","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"waterfalls","Version":"0.1.1","Title":"Create Waterfall Charts using 'ggplot2' Simply","Description":"A not uncommon task for quants is to create 'waterfall charts'. There seems to be no simple way to do this in 'ggplot2' currently. This package contains a single function (waterfall) that simply draws a waterfall chart in a 'ggplot2' object. 
Some flexibility is provided, though often the object created will need to be modified through a theme.","Published":"2017-02-02","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"WaterML","Version":"1.7.1","Title":"Fetch and Analyze Data from 'WaterML' and 'WaterOneFlow' Web\nServices","Description":"Lets you connect to any of the Consortium of Universities for the Advancement\n of Hydrologic Sciences, Inc. ('CUAHSI') Water Data Center 'WaterOneFlow' web services\n and read any 'WaterML' hydrological time series data file. To see list of available\n web services, see . All versions of 'WaterML'\n (1.0, 1.1 and 2.0) and both types of the web service protocol ('SOAP' and 'REST') are supported.\n The package has six data download functions: GetServices(): show all public web\n services from the HIS Central Catalog. HISCentral_GetSites() and HISCentral_GetSeriesCatalog():\n search for sites or time series from the HIS Central catalog based on geographic bounding box,\n server, or keyword. GetVariables(): Show a data.frame with all variables on the server.\n GetSites(): Show a data.frame with all sites on the server. GetSiteInfo(): Show what variables,\n methods and quality control levels are available at the specific site. GetValues(): Given a site\n code, variable code, start time and end time, fetch a data.frame of all the observation time\n series data values. The GetValues() function can also parse 'WaterML' data from a custom URL or\n from a local file. 
The package also has five data upload functions:\n AddSites(), AddVariables(), AddMethods(), AddSources(), and AddValues().\n These functions can be used for uploading data to a 'HydroServer Lite' Observations\n Data Model ('ODM') database via the 'JSON' data upload web service interface.","Published":"2016-03-17","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"Watersheds","Version":"1.1","Title":"Spatial Watershed Aggregation and Spatial Drainage Network\nAnalysis","Description":"Methods for watersheds aggregation and spatial drainage network analysis.","Published":"2016-02-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Wats","Version":"0.10.3","Title":"Wrap Around Time Series Graphics","Description":"Wrap-around Time Series (WATS) plots for interrupted time series\n designs with seasonal patterns.","Published":"2016-12-05","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"waveband","Version":"4.6","Title":"Computes credible intervals for Bayesian wavelet shrinkage","Description":"Computes Bayesian wavelet shrinkage credible intervals","Published":"2012-10-31","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"waved","Version":"1.1-2","Title":"Wavelet Deconvolution","Description":"Makes available code necessary to reproduce figures and\n tables in papers on the WaveD method for wavelet deconvolution\n of noisy signals as presented in The WaveD Transform in R,\n Journal of Statistical Software Volume 21, No. 
3, 2007.","Published":"2012-11-07","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"WaveletComp","Version":"1.0","Title":"Computational Wavelet Analysis","Description":"Wavelet analysis and reconstruction of time series, cross-wavelets and phase-difference (with filtering options), significance with simulation algorithms.","Published":"2014-12-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"WaveLetLongMemory","Version":"0.1.1","Title":"Estimating Long Memory Parameter using Wavelet","Description":"Estimation of the long memory parameter using wavelets. Other estimation techniques like \n GPH (Geweke and Porter-Hudak, 1983, ) \n and semiparametric methods (Robinson, P. M., 1995, ) have also been included.","Published":"2017-05-11","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"wavelets","Version":"0.3-0","Title":"A package of functions for computing wavelet filters, wavelet\ntransforms and multiresolution analyses","Description":"This package contains functions for computing and plotting\n discrete wavelet transforms (DWT) and maximal overlap discrete\n wavelet transforms (MODWT), as well as their inverses.\n Additionally, it contains functionality for computing and\n plotting wavelet transform filters that are used in the above\n decompositions as well as multiresolution analyses.","Published":"2013-12-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"wavemulcor","Version":"1.2","Title":"Wavelet routine for multiple correlation","Description":"Wavelet routines that calculate single sets of wavelet\n multiple correlations and cross-correlations out of n variables\n (either 1D time series, 2D images or 3D arrays). They can later\n be plotted in single graphs, as an alternative to trying to\n make sense out of several sets of wavelet correlations or\n wavelet cross-correlations. 
The code is based on the\n calculation, at each wavelet scale, of the square root of the\n coefficient of determination in a linear combination of\n variables for which such coefficient of determination is a\n maximum. The code provided here is based on the\n wave.correlation routine in Brandon Whitcher's waveslim R\n package Version: 1.6.4, which in turn is based on wavelet\n methodology developed in Percival and Walden (2000); Gencay,\n Selcuk and Whitcher (2001) and others.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"waver","Version":"0.2.0","Title":"Calculate Fetch and Wave Energy","Description":"Functions for calculating the\n fetch (length of open water distance along given directions)\n and estimating wave energy from wind and wave monitoring data.","Published":"2017-01-16","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"WaverR","Version":"1.0","Title":"Data Estimation using Weighted Averages of Multiple Regressions","Description":"For multivariate datasets, this function enables the estimation of missing data using the Weighted AVERage of all possible Regressions using the data available.","Published":"2016-02-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"waveslim","Version":"1.7.5","Title":"Basic wavelet routines for one-, two- and three-dimensional\nsignal processing","Description":"Basic wavelet routines for time series (1D), image (2D) \n and array (3D) analysis. The code provided here is based on\n wavelet methodology developed in Percival and Walden (2000);\n Gencay, Selcuk and Whitcher (2001); the dual-tree complex wavelet\n transform (DTCWT) from Kingsbury (1999, 2001) as implemented by\n Selesnick; and Hilbert wavelet pairs (Selesnick 2001, 2002). 
All\n figures in chapters 4-7 of GSW (2001) are reproducible using this \n package and R code available at the book website(s) below.","Published":"2015-01-10","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"wavethresh","Version":"4.6.8","Title":"Wavelets Statistics and Transforms","Description":"Performs 1, 2 and 3D real and complex-valued wavelet transforms,\n\tnondecimated transforms, wavelet packet transforms, nondecimated\n\twavelet packet transforms, multiple wavelet transforms,\n\tcomplex-valued wavelet transforms, wavelet shrinkage for\n\tvarious kinds of data, locally stationary wavelet time series,\n\tnonstationary multiscale transfer function modeling, density\n\testimation.","Published":"2016-10-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"wBoot","Version":"1.0.3","Title":"Bootstrap Methods","Description":"Supplies bootstrap alternatives to traditional hypothesis-test\n and confidence-interval procedures such as one-sample and two-sample\n inferences for means, medians, standard deviations, and proportions; simple\n linear regression; and more. Suitable for general audiences, including\n individual and group users, introductory statistics courses, and more advanced\n statistics courses that desire an introduction to bootstrap methods. 
","Published":"2016-01-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"wbs","Version":"1.3","Title":"Wild Binary Segmentation for Multiple Change-Point Detection","Description":"Provides efficient implementation of the Wild Binary Segmentation and Binary\n Segmentation algorithms for estimation of the number and locations of\n multiple change-points in the piecewise constant function plus Gaussian\n noise model.","Published":"2015-02-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"wbstats","Version":"0.1.1","Title":"Programmatic Access to Data and Statistics from the World Bank\nAPI","Description":"Tools for searching and downloading data and statistics from\n the World Bank Data API ()\n and the World Bank Data Catalog API ().","Published":"2016-12-08","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"wbsts","Version":"0.3","Title":"Multiple Change-Point Detection for Nonstationary Time Series","Description":"Implements detection for the number and locations of\n the change-points in a time series using the Wild Binary Segmentation and\n the Locally Stationary Wavelet model.","Published":"2015-09-29","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"wccsom","Version":"1.2.11","Title":"SOM Networks for Comparing Patterns with Peak Shifts","Description":"SOMs can be useful tools to group patterns containing several peaks. If peaks do not always occur at exactly the same position, classical distance measures cannot be used. This package provides SOM technology using the weighted crosscorrelation (WCC) distance.","Published":"2015-09-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"WCE","Version":"1.0","Title":"Weighted Cumulative Exposure Models","Description":"WCE implements a flexible method for modeling cumulative effects of time-varying exposures, weighted according to their relative proximity in time, and represented by time-dependent covariates. 
The current implementation estimates the weight function in the Cox proportional hazards model. The function that assigns weights to doses taken in the past is estimated using cubic regression splines.","Published":"2015-01-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"wCorr","Version":"1.9.1","Title":"Weighted Correlations","Description":"Calculates Pearson, Spearman, polychoric, and polyserial correlation coefficients, in weighted or unweighted form. The package implements tetrachoric correlation as a special case of the polychoric and biserial correlation as a specific case of the polyserial.","Published":"2017-05-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"WCQ","Version":"0.2","Title":"Detection of QTL effects in a small mapping population","Description":"The package contains the WCQ method for detection of QTL\n effects in a small mapping population. It also contains\n implementation of the Chen-Qin two-sample and one-sample test\n of means.","Published":"2012-09-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"WDI","Version":"2.4","Title":"World Development Indicators (World Bank)","Description":"Search, extract and format data from the World Bank's\n World Development Indicators","Published":"2013-08-20","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"wdman","Version":"0.2.2","Title":"'Webdriver'/'Selenium' Binary Manager","Description":"There are a number of binary files associated with the\n 'Webdriver'/'Selenium' project (see ,\n ,\n ,\n and\n for\n more information). This package provides functions to download these\n binaries and to manage processes involving them.","Published":"2017-01-23","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"weatherData","Version":"0.5.0","Title":"Get Weather Data from the Web","Description":"Functions that help in fetching weather data from\n websites. 
Given a location and a date range, these functions help fetch\n weather data (temperature, pressure etc.) for any weather-related analysis.","Published":"2017-06-05","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"weathermetrics","Version":"1.2.2","Title":"Functions to Convert Between Weather Metrics","Description":"Functions to convert between weather metrics, including conversions\n for metrics of temperature, air moisture, wind speed, and precipitation.\n This package also includes functions to calculate the heat index from\n air temperature and air moisture.","Published":"2016-05-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"weatherr","Version":"0.1.2","Title":"Tools for Handling and Scraping Instant Weather Forecast Feeds","Description":"Handle instant weather forecasts and geographical information. It combines multiple sources of information to obtain instant weather forecasts.","Published":"2015-09-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"webchem","Version":"0.2","Title":"Chemical Information from the Web","Description":"Chemical information from around the web. This package interacts\n with a suite of web APIs for chemical information.","Published":"2017-03-19","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"WebGestaltR","Version":"0.1.1","Title":"The R Version of WebGestalt","Description":"The web version WebGestalt supports 12 organisms, 324 gene identifiers and 150,937 function categories. Users can upload the data and functional categories with their own gene identifiers. In addition to the Over-Representation Analysis, WebGestalt also supports Gene Set Enrichment Analysis. The user-friendly output interface allows interactive and efficient exploration of enrichment results. 
The WebGestaltR package not only supports all of the above functions but can also be integrated into other pipelines or analyze multiple gene lists simultaneously.","Published":"2017-05-11","License":"LGPL","snapshot_date":"2017-06-23"} {"Package":"webglobe","Version":"1.0.2","Title":"3D Interactive Globes","Description":"Displays geospatial data on an interactive 3D globe in the web browser.","Published":"2017-06-02","License":"MIT + file LICENCE","snapshot_date":"2017-06-23"} {"Package":"webmockr","Version":"0.1.0","Title":"Stubbing and Setting Expectations on 'HTTP' Requests","Description":"Stubbing and setting expectations on 'HTTP' requests.\n Includes tools for stubbing 'HTTP' requests, including expected\n request conditions and response conditions. Match on\n 'HTTP' method, query parameters, request body, headers and\n more.","Published":"2017-05-20","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"webp","Version":"0.4","Title":"A New Format for Lossless and Lossy Image Compression","Description":"Lossless webp images are 26% smaller in size compared to PNG. Lossy\n webp images are 25-34% smaller in size compared to JPEG. This package reads\n and writes webp images into a 3 (rgb) or 4 (rgba) channel bitmap array using\n conventions from the 'jpeg' and 'png' packages.","Published":"2017-03-30","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"webreadr","Version":"0.4.0","Title":"Tools for Reading Formatted Access Log Files","Description":"R is used by a vast array of people for a vast array of purposes\n - including web analytics. 
This package contains functions for consuming and\n munging various common forms of request log, including the Common and Combined\n Web Log formats and various Amazon access logs.","Published":"2016-01-23","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"webshot","Version":"0.4.1","Title":"Take Screenshots of Web Pages","Description":"Takes screenshots of web pages, including Shiny applications.","Published":"2017-05-31","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"webuse","Version":"0.1.2","Title":"Import Stata 'webuse' Datasets","Description":"A Stata-style `webuse()` function for importing named datasets from Stata's online collection.","Published":"2015-07-29","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"webutils","Version":"0.6","Title":"Utility Functions for Developing Web Applications","Description":"High performance in-memory http request parser for application/json, \n multipart/form-data, and application/x-www-form-urlencoded. Includes live demo\n of hosting and parsing multipart forms with either 'httpuv' or 'Rhttpd'.","Published":"2017-06-23","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"webvis","Version":"0.0.2","Title":"Create graphics for the web from R","Description":"Uses Protovis to provide web graphics for R (exposes most\n low-level functions). The package is still under active\n development and shouldn't be considered stable until version\n 0.1. It currently uses a web browser to process JavaScript,\n although future versions will process JavaScript directly and\n return the SVG output. It also does not properly support discrete\n labels (e.g. with histograms) or statistical functions. 
See\n website for more details.","Published":"2011-10-23","License":"BSD","snapshot_date":"2017-06-23"} {"Package":"wec","Version":"0.4","Title":"Weighted Effect Coding","Description":"Provides functions to create factor variables with contrasts based on weighted effect coding, and their interactions. In weighted effect coding the estimates from a first order regression model show the deviations per group from the sample mean. This is especially useful when a researcher has no directional hypotheses and uses a sample from a population in which the number of observations per group differs.","Published":"2016-12-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"weco","Version":"1.0","Title":"Western Electric Company Rules (WECO) for Shewhart Control Chart","Description":"Western Electric Company Rules (WECO) have been widely used for\n Shewhart control charts in order to increase the sensitivity of detecting\n assignable causes of process change. This package implements eight commonly\n used WECO rules and allows applying combinations of these individual rules\n to detect deviation from a stable process. The package also provides\n a web-based graphical user interface to help users conduct the analysis. 
","Published":"2016-11-21","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"WEE","Version":"1.0","Title":"Weighted Estimated Equation (WEE) Approaches in Genetic\nCase-Control Studies","Description":"Secondary analysis of case-control studies using a weighted estimating equation (WEE) approach: logistic regression for binary secondary outcomes, linear regression and quantile regression for continuous secondary outcomes.","Published":"2016-11-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"Weighted.Desc.Stat","Version":"1.0","Title":"Weighted Descriptive Statistics","Description":"Weighted descriptive statistics is the discipline of quantitatively describing the main features of real-valued fuzzy data, which are usually drawn from a fuzzy population. One can summarize this special kind of fuzzy data numerically or graphically using this package. To interpret some of the properties of one or several sets of real-valued fuzzy data, numerical summaries are possible via the weighted statistics designed in this package, such as the mean, variance, covariance and correlation coefficient. Graphical interpretation can also be given by the weighted histogram and weighted scatter plot to describe properties of a real-valued fuzzy data set.","Published":"2016-02-29","License":"LGPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"WeightedCluster","Version":"1.2-1","Title":"Clustering of Weighted Data","Description":"Clusters state sequences and weighted data. 
It provides an optimized weighted PAM algorithm as well as functions for aggregating replicated cases, computing cluster quality measures for a range of clustering solutions and plotting clusters of state sequences.","Published":"2017-06-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"WeightedPortTest","Version":"1.0","Title":"Weighted Portmanteau Tests for Time Series Goodness-of-fit","Description":"This package contains the Weighted Portmanteau Tests as\n described in \"New Weighted Portmanteau Statistics for Time\n Series Goodness-of-Fit Testing\", accepted for publication by the\n Journal of the American Statistical Association.","Published":"2012-05-24","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"WeightedROC","Version":"2017.06.08","Title":"Fast, Weighted ROC Curves","Description":"Fast computation of\n Receiver Operating Characteristic (ROC) curves\n and Area Under the Curve (AUC)\n for weighted binary classification problems\n (weights are example-specific cost values).","Published":"2017-06-19","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"weightedScores","Version":"0.9.5.1","Title":"Weighted Scores Method for Regression Models with Dependent Data","Description":"Has functions to implement the weighted scores method and CL1 information criteria as an intermediate step for variable/correlation selection for longitudinal categorical and count data in Nikoloulopoulos, Joe and Chaganty (2011, Biostatistics, 12: 653-665) and Nikoloulopoulos (2015a,2015b).","Published":"2015-10-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"weightr","Version":"1.1.2","Title":"Estimating Weight-Function Models for Publication Bias","Description":"Estimates the Vevea and Hedges (1995) \n weight-function model. By specifying arguments, users can\n also estimate the modified model described in Vevea and Woods (2005) \n , which may be more practical with small datasets. 
Users \n can also specify moderators to estimate a linear model. The package functionality \n allows users to easily extract the results of these analyses as R objects for \n other uses. In addition, the package includes a function to launch both models as \n a Shiny application. Although the Shiny application is also available online, \n this function allows users to launch it locally if they choose.","Published":"2017-04-04","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"weights","Version":"0.85","Title":"Weighting and Weighted Statistics","Description":"Provides a variety of functions for producing simple weighted statistics, such as weighted Pearson's correlations, partial correlations, Chi-Squared statistics, histograms, and t-tests. Also now includes some software for quickly recoding survey data and plotting point estimates from interaction terms in regressions (and multiply imputed regressions). Future versions of the package will be more closely integrated with \"anesrake\" and additional weighting tools and will provide the option to find weighting benchmarks and weight data using a variety of methodologies. NOTE: Weighted partial correlation calculations temporarily pulled to address a bug.","Published":"2016-02-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"weightTAPSPACK","Version":"0.1","Title":"Weight TAPS Data","Description":"The weightTAPSPACK subsets The American Panel Survey (TAPS) data by outcome and covariates, models the attrition rates, imputes data for attrited individuals, and finds weights for analysis.","Published":"2015-09-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"weirs","Version":"0.25","Title":"A Hydraulics Package to Compute Open-Channel Flow over Weirs","Description":"Provides computational support for flow over weirs, such as sharp-crested, broad-crested, and embankments. 
Initially, the package supports broad- and sharp-crested weirs.","Published":"2015-08-20","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"welchADF","Version":"0.1","Title":"Welch-James Statistic for Robust Hypothesis Testing under\nHeteroscedasticity and Non-Normality","Description":"Implementation of Johansen's general formulation of Welch-James's statistic with Approximate Degrees of Freedom, which makes it suitable for testing \n any linear hypothesis concerning cell means in univariate and multivariate mixed model designs when the data exhibit non-normality and non-homogeneous variance. Some \n improvements, namely trimmed means and Winsorized variances, and bootstrapping for calculating an empirical critical value, have been added to the classical formulation. \n The code departs from a previous SAS implementation by L.M. Lix and H.J. Keselman, available at and\n published in Keselman, H.J., Wilcox, R.R., and Lix, L.M. (2003) .","Published":"2017-04-23","License":"LGPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"wellknown","Version":"0.1.0","Title":"Convert Between 'WKT' and 'GeoJSON'","Description":"Convert 'WKT' to 'GeoJSON' and 'GeoJSON' to 'WKT'. 
Functions\n included for converting between 'GeoJSON' and 'WKT', creating both\n 'GeoJSON' features and non-features, creating 'WKT' from R objects\n (e.g., lists, data.frames, vectors), and linting 'WKT'.","Published":"2015-10-22","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"wesanderson","Version":"0.3.2","Title":"A Wes Anderson Palette Generator","Description":"Palettes generated mostly from Wes Anderson movies.","Published":"2015-01-22","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"wfe","Version":"1.3","Title":"Weighted Linear Fixed Effects Regression Models for Causal\nInference","Description":"This R package provides a computationally efficient way\n\t of fitting weighted linear fixed effects estimators for\n\t causal inference with various weighting schemes. Imai\n\t and Kim (2012) show that weighted linear fixed effects\n\t estimators can be used to estimate the average treatment\n\t effects under different identification strategies. This\n\t includes stratified randomized experiments, matching and\n\t stratification for observational studies, first\n\t differencing, and difference-in-differences. The package\n\t also provides various robust standard errors and a\n\t specification test for standard linear fixed effects\n\t estimators.","Published":"2014-08-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"wfg","Version":"0.1","Title":"Weighted Fast Greedy Algorithm","Description":"Implementation of the Weighted Fast Greedy algorithm for community detection in networks with mixed types of attributes.","Published":"2016-02-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"wfindr","Version":"0.1.0","Title":"Crossword, Scrabble and Anagram Solver","Description":"Provides a large English word list and tools to find words by patterns. 
In particular, it provides an anagram finder and a Scrabble word finder.","Published":"2016-07-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"wgaim","Version":"1.4-11","Title":"Whole Genome Average Interval Mapping for QTL Detection using\nMixed Models","Description":"Integrates sophisticated mixed modelling methods with a whole genome approach to detecting significant QTL in linkage maps.","Published":"2016-06-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"WGCNA","Version":"1.51","Title":"Weighted Correlation Network Analysis","Description":"Functions necessary to perform Weighted Correlation Network Analysis on high-dimensional data. Includes functions for rudimentary data cleaning, construction of correlation networks, module identification, summarization, and relating of variables and modules to sample traits. Also includes a number of utility functions for data manipulation and visualization.","Published":"2016-03-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"wgeesel","Version":"1.3","Title":"Weighted Generalized Estimating Equations and Model Selection","Description":"Weighted generalized estimating equations (WGEE) is an extension of generalized linear models to longitudinal clustered data by incorporating the within-cluster correlation when data are missing at random (MAR). The parameters in the mean, scale, and correlation structures are estimated based on quasi-likelihood. 
Multiple model selection criteria are provided for selecting the mean model and working correlation structure based on WGEE/GEE.","Published":"2017-05-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"wgsea","Version":"1.8","Title":"Wilcoxon based gene set enrichment analysis","Description":"Non-parametric alternative to Kolmogorov-Smirnov based\n standard GSEA testing.","Published":"2016-12-05","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"WhatIf","Version":"1.5-8","Title":"Evaluate Counterfactuals","Description":"Inferences about counterfactuals are essential for prediction,\n answering 'what if' questions, and estimating causal effects.\n However, when the counterfactuals posed are too far from the data at\n hand, conclusions drawn from well-specified statistical analyses\n become based largely on speculation hidden in convenient modeling\n assumptions that few would be willing to defend. Unfortunately,\n standard statistical approaches assume the veracity of the model\n rather than revealing the degree of model-dependence, which makes this\n problem hard to detect. WhatIf offers easy-to-apply methods to\n evaluate counterfactuals that do not require sensitivity testing over\n specified classes of models. If an analysis fails the tests offered\n here, then we know that substantive inferences will be sensitive to at\n least some modeling choices that are not based on empirical evidence,\n no matter what method of inference one chooses to use.","Published":"2017-03-21","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"whisker","Version":"0.3-2","Title":"{{mustache}} for R, logicless templating","Description":"Logicless templating; reuse templates in many programming\n languages, including R","Published":"2013-04-28","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"WhiteStripe","Version":"2.2.2","Title":"White Matter Normalization for Magnetic Resonance Images using\nWhitestripe","Description":"Shinohara et al. (2014) \n introduced Whitestripe, an intensity-based normalization of T1 \n and T2 images, where normal \n appearing white matter performs well, but requires segmentation.\n This method performs white matter mean and standard deviation\n estimates on data that has been rigidly registered to the MNI\n template and uses histogram-based methods.","Published":"2017-03-08","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"WHO","Version":"0.2","Title":"R Client for the World Health Organization API","Description":"Provides programmatic access to the World Health Organization API.","Published":"2016-04-02","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"whoami","Version":"1.1.1","Title":"Username, Full Name, Email Address, 'GitHub' Username of the\nCurrent User","Description":"Look up the username and full name of the current user,\n the current user's email address and 'GitHub' username,\n using various sources of system and configuration information.","Published":"2015-07-13","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"whoapi","Version":"0.1.2","Title":"A 'Whoapi' API Client","Description":"Retrieve data from the 'Whoapi' (https://whoapi.com) store of\n domain information, including a domain's geographic location, registration\n status and search prominence.","Published":"2016-09-13","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"WhopGenome","Version":"0.9.7","Title":"High-Speed Processing of VCF, FASTA and Alignment Data","Description":"Provides very fast access to whole genome, population scale variation data\n\tfrom VCF files and sequence data from FASTA-formatted files.\n\tIt also reads in alignments from FASTA, Phylip, MAF and other file formats.\n\tProvides easy-to-use interfaces to genome annotation from UCSC and Bioconductor and gene ontology data\n\tfrom AmiGO and is capable of reading, modifying and writing PLINK .PED-format pedigree files.","Published":"2017-03-13","License":"GPL (>= 
2)","snapshot_date":"2017-06-23"} {"Package":"wicket","Version":"0.3.0","Title":"Utilities to Handle WKT Spatial Data","Description":"Utilities to generate bounding boxes from 'WKT' (Well-Known Text) objects and R data types, validate\n 'WKT' objects and convert object types from the 'sp' package into 'WKT' representations.","Published":"2017-03-23","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"widals","Version":"0.5.4","Title":"Weighting by Inverse Distance with Adaptive Least Squares for\nMassive Space-Time Data","Description":"Fit, forecast, and predict massive spatio-temporal data.","Published":"2014-03-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"widenet","Version":"0.1-2","Title":"Penalized Regression with Polynomial Basis Expansions","Description":"Extends the glmnet and relaxnet packages with polynomial basis expansions. Basis expansion is applied to the predictors and a subset of the basis functions is chosen using relaxnet. Predictors may be screened using correlation or t-tests. Screening is done separately within cross-validation folds. 
Cross-validation may be used to select the order of basis expansion and alpha, the elastic net tuning parameter.","Published":"2013-07-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"widgetframe","Version":"0.2.0","Title":"'Htmlwidgets' in Responsive 'iframes'","Description":"Provides two functions, 'frameableWidget()' and 'frameWidget()'.\n The 'frameableWidget()' is used to add extra code to a 'htmlwidget' which\n allows it to be rendered correctly inside a responsive 'iframe'.\n The 'frameWidget()' is a 'htmlwidget' which displays content of another 'htmlwidget'\n inside a responsive 'iframe'.\n These functions allow for easier embedding of 'htmlwidgets' in content management systems\n such as 'wordpress', 'blogger' etc.\n They also allow for separation of widget content from main HTML content where\n CSS of the main HTML could interfere with the widget.","Published":"2017-05-11","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"wikibooks","Version":"0.2","Title":"Functions and datasets of the German WikiBook \"GNU R\"","Description":"The German Wikibook \"GNU R\" introduces R to new users.\n This package is a collection of functions and data used in the\n German WikiBook \"GNU R\".","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"WikidataQueryServiceR","Version":"0.1.1","Title":"API Client Library for 'Wikidata Query Service'","Description":"An API client for the 'Wikidata Query Service'\n .","Published":"2017-04-28","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"WikidataR","Version":"1.3.0","Title":"API Client Library for 'Wikidata'","Description":"An API client for the Wikidata store of\n semantic data.","Published":"2017-05-17","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"wikilake","Version":"0.2","Title":"Scrape Lakes Metadata Tables from Wikipedia","Description":"Scrape lakes metadata tables from Wikipedia. 
","Published":"2017-03-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"WikipediaR","Version":"1.1","Title":"R-Based Wikipedia Client","Description":"Provides an interface to the Wikipedia web application programming\n interface (API), using an internet connection. Three functions provide details for\n a specific Wikipedia page: all links that are present, all pages that link\n to it, and all the contributions (revisions for main pages, and discussions for talk\n pages). Two functions provide details for a specific user: all contributions,\n and general information (such as name, gender, rights or groups). It provides\n additional information compared to other packages, such as WikipediR. It does not\n need a login. The multiplex network that can be constructed from the results\n of the functions of WikipediaR can be modeled as a Stochastic Block Model as in\n Barbillon P., Donnet, S., Lazega E., and Bar-Hen A.: Stochastic Block Models\n for Multiplex networks: an application to networks of researchers, ArXiv\n 1501.06444, http://arxiv.org/abs/1501.06444.","Published":"2016-02-05","License":"GPL (> 2)","snapshot_date":"2017-06-23"} {"Package":"wikipediatrend","Version":"1.1.10","Title":"Public Subject Attention via Wikipedia Page View Statistics","Description":"Public attention is an interesting field of study. The\n internet not only allows information on virtually any subject to be accessed\n in no time, but, via the page access statistics gathered by website authors,\n the subjects of attention themselves can be studied. 
For the omnipresent Wikipedia, those access statistics\n are made available via 'http://stats.grok.se', a server\n providing the information as file dumps as well as a web API.\n This package provides an easy-to-use, consistent and traffic-minimizing\n approach to making those data accessible within R.","Published":"2016-03-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"WikipediR","Version":"1.5.0","Title":"A MediaWiki API Wrapper","Description":"A wrapper for the MediaWiki API, aimed particularly at the\n Wikimedia 'production' wikis, such as Wikipedia. It can be used to retrieve\n page text, information about users or the history of pages, and elements of\n the category tree.","Published":"2017-02-05","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"WikiSocio","Version":"0.7.0","Title":"A MediaWiki API Wrapper","Description":"\n MediaWiki is a wiki platform. Providing the infrastructure of Wikipedia, it also offers very sophisticated archiving functionalities.\n This package is built to store these wikis' archives as R objects: data frames, lists, vectors and variables. All data are downloaded\n with the help of the MediaWiki REST API. For instance, you can get all revisions made by a contributor - contrib_list(), all the revisions \n of a page - page_revisions(), or create corpora of contributors - corpus_contrib_create() - and pages - corpus_page_create(). Then, you can \n enrich these corpora with data about contributors or pages - corpus_contrib_data() or corpus_page_data().","Published":"2016-02-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"wikitaxa","Version":"0.1.4","Title":"Taxonomic Information from 'Wikipedia'","Description":"'Taxonomic' information from 'Wikipedia', 'Wikicommons',\n 'Wikispecies', and 'Wikidata'. 
Functions included for getting\n taxonomic information from each of the sources just listed, as\n well as performing taxonomic searches.","Published":"2017-05-06","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"WilcoxCV","Version":"1.0-2","Title":"Wilcoxon-based variable selection in cross-validation","Description":"This package provides functions to perform fast variable\n selection based on the Wilcoxon rank sum test in the\n cross-validation or Monte-Carlo cross-validation settings, for\n use in microarray-based binary classification.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"wildcard","Version":"1.0.1","Title":"Templates for Data Frames","Description":"Generate data frames from templates.","Published":"2017-06-16","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"wildlifeDI","Version":"0.2","Title":"Calculate Indices of Dynamic Interaction for Wildlife Telemetry\nData","Description":"Dynamic interaction refers to spatial-temporal associations in the\n movements of two (or more) animals. This package provides tools for\n calculating a suite of indices used for quantifying dynamic interaction\n with wildlife telemetry data. For more information on each of the methods\n employed see the references within. The package draws heavily on the\n classes and methods developed in the 'adehabitat' packages.","Published":"2014-12-15","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"wildpoker","Version":"1.1","Title":"Best Hand Analysis for Poker Variants Including Wildcards","Description":"Provides insight into how the best hand for a poker game changes based on the game dealt, players who stay in until the showdown and wildcards added to the base game. 
At this time the package does not support player tactics, so draw poker variants are not included.","Published":"2016-01-30","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"windex","Version":"1.0","Title":"windex: Analysing convergent evolution using the Wheatsheaf\nindex","Description":"Analysing convergent evolution using the Wheatsheaf index.","Published":"2014-10-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"wingui","Version":"0.2","Title":"Advanced Windows Functions","Description":"Helps for interfacing with the operating system\n particularly for Windows.","Published":"2015-10-13","License":"GPL-2 | GPL-3 | MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"wiod","Version":"0.3.0","Title":"World Input Output Database 1995-2011","Description":"Data sets from the World Input Output Database, for the years 1995-2011.","Published":"2015-07-29","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"wiqid","Version":"0.1.1","Title":"Quick and Dirty Estimates for Wildlife Populations","Description":"Provides simple, fast functions for maximum likelihood and Bayesian estimates of wildlife population parameters, suitable for use with simulated data or bootstraps. Early versions were indeed quick and dirty, but optional error-checking routines and meaningful error messages have been added. Includes single and multi-season occupancy, closed capture population estimation, survival, species richness and distance measures.","Published":"2017-06-09","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"WiSEBoot","Version":"1.4.0","Title":"Wild Scale-Enhanced Bootstrap","Description":"Perform the Wild Scale-Enhanced (WiSE) bootstrap. Specifically, the user may supply a single or multiple equally-spaced time series and use the WiSE bootstrap to select a wavelet-smoothed model. Conversely, a pre-selected smooth level may also be specified for the time series. 
Quantities such as the bootstrap sample of wavelet coefficients, smoothed bootstrap samples, and specific hypothesis testing and confidence region results of the wavelet coefficients may be obtained. Additional functions are available to the user which help format the time series before analysis. This methodology is recommended to aid in model selection and signal extraction.\n Note: This package specifically uses wavelet bases in the WiSE bootstrap methodology, but the theoretical construct is much more versatile.","Published":"2016-04-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"withr","Version":"1.0.2","Title":"Run Code 'With' Temporarily Modified Global State","Description":"A set of functions to run code 'with' safely and temporarily\n modified global state. Many of these functions were originally a part of the\n 'devtools' package; this provides a simple package with limited dependencies\n to provide access to these functions.","Published":"2016-06-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"wkb","Version":"0.3-0","Title":"Convert Between Spatial Objects and Well-Known Binary Geometry","Description":"Utility functions to convert between the 'Spatial' classes\n specified by the package 'sp', and the well-known binary ('WKB')\n representation for geometry specified by the Open Geospatial Consortium.\n Supports 'Spatial' objects of class 'SpatialPoints',\n 'SpatialPointsDataFrame', 'SpatialLines', 'SpatialLinesDataFrame',\n 'SpatialPolygons', and 'SpatialPolygonsDataFrame'. Supports 'WKB' geometry\n types 'Point', 'LineString', 'Polygon', 'MultiPoint', 'MultiLineString', and\n 'MultiPolygon'. 
Includes extensions to enable creation of maps with\n 'TIBCO Spotfire'.","Published":"2016-03-24","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"wktmo","Version":"1.0.3","Title":"Converting Weekly Data to Monthly Data","Description":"Converts weekly data to monthly data.\n Users can use three types of week formats: ISO week, epidemiology week (epi week) and calendar date. ","Published":"2017-06-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"wle","Version":"0.9-91","Title":"Weighted Likelihood Estimation","Description":"An approach to robustness via Weighted Likelihood. ","Published":"2015-10-18","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"WLreg","Version":"1.0.0","Title":"Regression Analysis Based on Win Loss Endpoints","Description":"Use various regression models for the analysis of win loss endpoints \n adjusting for non-binary and multivariate covariates.","Published":"2017-04-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"WMCapacity","Version":"0.9.6.7","Title":"GUI Implementing Bayesian Working Memory Models","Description":"A GUI R implementation of hierarchical Bayesian models of working memory, used for analyzing change detection data.","Published":"2015-07-12","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"WMDB","Version":"1.0","Title":"Discriminant Analysis Methods by Weighted Mahalanobis Distance and\nBayes","Description":"The distance discriminant analysis method is a classification method based on multi-index performance parameters. However, the traditional Mahalanobis distance discriminant method treats the importance of all parameters equally, and exaggerates the role of parameters that change only a little. The weighted Mahalanobis distance is used in the discriminant analysis method to distinguish the importance of each parameter. In a concrete application, first, based on a principal component analysis scheme, a new group of 
parameters and their corresponding percent contributions are calculated, and the weighting matrix is taken as the diagonal matrix of the contribution rates. After standardizing the data, the weighted Mahalanobis distance can be calculated. Besides the methods mentioned above, a Bayes method is also given.","Published":"2012-07-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Wmisc","Version":"0.3.2","Title":"Wamser Misc: Reading Files by Tokens, Stateful Computations,\nUtility Functions","Description":"A tokenizer to read a text file token by token with a very lightweight API, a framework for stateful computations with finite state machines and a few string utility functions. ","Published":"2017-02-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"wmlf","Version":"0.1.2","Title":"Wavelet Leaders in Multifractal Analysis","Description":"Analyzing the texture of an image from a multifractal wavelet leader analysis. ","Published":"2015-02-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"wmtsa","Version":"2.0-2","Title":"Wavelet Methods for Time Series Analysis","Description":"Software to accompany the book Wavelet Methods for Time Series Analysis,\n Donald B. Percival and Andrew T. 
Walden, Cambridge University\n Press, 2000.","Published":"2016-06-09","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"wnominate","Version":"1.2","Title":"Roll Call Analysis Software","Description":"Estimates Poole and Rosenthal W-NOMINATE scores from roll\n call votes supplied through a 'rollcall' object from package\n 'pscl'.","Published":"2016-07-30","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"woe","Version":"0.2","Title":"Computes Weight of Evidence and Information Values","Description":"Shows the relationship between an independent and dependent variable through Weight of Evidence and Information Value.","Published":"2015-07-28","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"woeBinning","Version":"0.1.4","Title":"Supervised Weight of Evidence Binning of Numeric Variables and\nFactors","Description":"Implements an automated binning of numeric variables and factors with\n respect to a dichotomous target variable.\n Two approaches are provided: an implementation of fine and coarse classing that\n merges granular classes and levels step by step, and a tree-like approach that\n iteratively segments the initial bins via binary splits. Both procedures merge,\n respectively split, bins based on similar weight of evidence (WOE) values and\n stop via an information value (IV) based criterion.\n The package can be used with single variables or an entire data frame. It provides\n flexible tools for exploring different binning solutions and for deploying them to\n (new) data.","Published":"2017-06-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"womblR","Version":"1.0.0","Title":"Spatiotemporal Boundary Detection Model for Areal Unit Data","Description":"Implements a spatiotemporal boundary detection model with a dissimilarity\n metric for areal data with inference in a Bayesian setting using Markov chain\n Monte Carlo (MCMC). 
The response variable can be modeled as Gaussian (no nugget),\n probit or Tobit link and spatial correlation is introduced at each time point\n through a conditional autoregressive (CAR) prior. Temporal correlation is introduced\n through a hierarchical structure and can be specified as exponential or first-order\n autoregressive. Full details of the package can be found in the accompanying vignette.","Published":"2017-06-17","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"word.alignment","Version":"1.0.6","Title":"Computing Word Alignment Using IBM Model 1 (and Symmetrization)\nfor a Given Parallel Corpus and Its Evaluation","Description":"For a given Sentence-Aligned Parallel Corpus, it aligns words for each sentence pair. It considers one-to-many and symmetrization alignments. Moreover, it evaluates the quality of word alignment based on this package and some other software. It also builds an automatic dictionary of two languages based on given parallel corpus.","Published":"2017-04-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"wordbankr","Version":"0.2.0","Title":"Accessing the Wordbank Database","Description":"Tools for connecting to Wordbank, an open repository for\n developmental vocabulary data.","Published":"2016-06-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"wordcloud","Version":"2.5","Title":"Word Clouds","Description":"Pretty word clouds.","Published":"2014-06-13","License":"LGPL-2.1","snapshot_date":"2017-06-23"} {"Package":"wordcloud2","Version":"0.2.0","Title":"Create Word Cloud by htmlWidget","Description":"A fast visualization tool for creating wordcloud\n by using wordcloud2.js.","Published":"2016-07-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"wordmatch","Version":"1.0","Title":"Matches words in one file with words in another file","Description":"Matches words in one file with words in another file and shows index(row number) for the 
matches","Published":"2013-07-22","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"wordnet","Version":"0.1-11","Title":"WordNet Interface","Description":"An interface to WordNet using the Jawbone Java API to WordNet.\n WordNet () is a large lexical database of\n English. Nouns, verbs, adjectives and adverbs are grouped into sets of\n cognitive synonyms (synsets), each expressing a distinct concept. Synsets\n are interlinked by means of conceptual-semantic and lexical relations.\n Please note that WordNet(R) is a registered tradename. Princeton\n University makes WordNet available to research and commercial users\n free of charge provided the terms of their license\n () are followed, and\n proper reference is made to the project using an appropriate\n citation ().","Published":"2016-01-06","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"WordPools","Version":"1.0-2","Title":"Classical word pools used in studies of learning and memory","Description":"This package collects several classical word pools used\n most often to provide lists of words in psychological studies\n of learning and memory.","Published":"2012-12-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"wordspace","Version":"0.2-0","Title":"Distributional Semantic Models in R","Description":"An interactive laboratory for research on distributional semantic models ('DSM',\n see for more information).","Published":"2016-08-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"worldmet","Version":"0.7.5","Title":"Import Surface Meteorological Data from NOAA Integrated Surface\nDatabase (ISD)","Description":"Functions to import data from more than 30,000 surface\n meteorological sites around the world managed by the National Oceanic and Atmospheric Administration (NOAA) Integrated Surface\n Database (ISD, see ).","Published":"2017-01-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"worms","Version":"0.2.1","Title":"Retrieving Aphia 
Information from World Register of Marine\nSpecies","Description":"Retrieves taxonomic information from the World Register of Marine Species using WoRMS' RESTful Webservice. Utility functions aim at taxonomic consistency.","Published":"2017-06-18","License":"GNU Affero General Public License","snapshot_date":"2017-06-23"} {"Package":"worrms","Version":"0.1.0","Title":"World Register of Marine Species (WoRMS) Client","Description":"Client for World Register of Marine Species \n (). Includes functions for each\n of the API methods, including searching for names by name, date and\n common names, searching using external identifiers, fetching\n synonyms, as well as fetching taxonomic children and \n taxonomic classification.","Published":"2017-01-14","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"WPC","Version":"1.0","Title":"Weighted Predictiveness Curve","Description":"Implements a weighted predictiveness curve to visualize the marker-by-treatment relationship and measure the performance of biomarkers for guiding treatment decisions. ","Published":"2016-07-30","License":"LGPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"wPerm","Version":"1.0.1","Title":"Permutation Tests","Description":"Supplies permutation-test alternatives to traditional hypothesis-test\n procedures such as two-sample tests for means, medians, and standard deviations;\n correlation tests; tests for homogeneity and independence; and more. 
Suitable for\n general audiences, including individual and group users, introductory statistics\n courses, and more advanced statistics courses that desire an introduction to\n permutation tests.","Published":"2015-11-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"WPKDE","Version":"0.1","Title":"Weighted Piecewise Kernel Density Estimation","Description":"Weighted Piecewise Kernel Density Estimation for large data.","Published":"2017-03-02","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"wpp2008","Version":"1.0-1","Title":"World Population Prospects 2008","Description":"Data from the United Nations' World Population Prospects 2008.","Published":"2014-01-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"wpp2010","Version":"1.2-0","Title":"World Population Prospects 2010","Description":"Data from the United Nations' World Population Prospects\n 2010.","Published":"2013-05-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"wpp2012","Version":"2.2-1","Title":"World Population Prospects 2012","Description":"Data from the United Nations' World Population Prospects 2012.","Published":"2014-08-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"wpp2015","Version":"1.1-0","Title":"World Population Prospects 2015","Description":"Provides data from the United Nations' World Population Prospects 2015.","Published":"2016-06-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"wppExplorer","Version":"2.0-2","Title":"Explorer of World Population Prospects","Description":"Explore data in the 'wpp2015' (or 2012, 2010) package using a 'shiny' interface.","Published":"2017-01-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"wqs","Version":"0.0.1","Title":"Weighted Quantile Sum Regression","Description":"Fits weighted quantile sum regression models, calculates weighted quantile sum index and estimated component weights.","Published":"2015-10-05","License":"GPL (>= 
2)","snapshot_date":"2017-06-23"} {"Package":"wrangle","Version":"0.4","Title":"A Systematic Data Wrangling Idiom","Description":"Supports systematic scrutiny, modification, and integration of\n data. The function status() counts rows that have missing values in \n grouping columns (returned by na() ), have non-unique combinations of \n grouping columns (returned by dup() ), and that are not locally sorted\n (returned by unsorted() ). Functions enumerate() and itemize() give \n sorted unique combinations of columns, with or without occurrence counts,\n respectively. Function ignore() drops columns in x that are present in y,\n and informative() drops columns in x that are entirely NA. Data that have\n defined unique combinations of grouping values behave more predictably\n during merge operations.","Published":"2017-04-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"Wrapped","Version":"1.0","Title":"Computes Pdf, Cdf, Quantile, Random Numbers and Provides\nEstimation for 40 Univariate Wrapped Distributions","Description":"Computes the probability density function, cumulative distribution function, quantile function and random numbers for 40 univariate wrapped distributions. They include the wrapped normal, wrapped Gumbel, wrapped logistic, wrapped t, wrapped Cauchy, wrapped skew normal, wrapped skew t, wrapped asymmetric Laplace, wrapped normal Laplace, wrapped skew Laplace, wrapped skew logistic, wrapped exponential power, wrapped skew power exponential, wrapped power exponential t, wrapped skew generalized t, wrapped skew hyperbolic, wrapped generalized hyperbolic Student t, wrapped power hyperbola logistic, wrapped Kiener, wrapped Laplace mixture, wrapped skew Laplace, wrapped polynomial tail Laplace, wrapped generalized asymmetric t, wrapped variance gamma, wrapped normal inverse gamma, wrapped skew Cauchy, wrapped slash, wrapped ex Gaussian, wrapped stable and wrapped log gamma distributions. 
Also given are maximum likelihood estimates of the parameters, standard errors, 95 percent confidence intervals, log-likelihood values, AIC values, CAIC values, BIC values, HQIC values, values of the W statistic, values of the A statistic, values of the KS statistic and the associated p-value.","Published":"2017-02-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"wrapr","Version":"0.1.3","Title":"Wrap R Functions for Debugging and Parametric Programming","Description":"Provides 'DebugFnW()' to capture function context on error for\n debugging, and 'let()' which converts non-standard evaluation interfaces to\n parametric standard evaluation interfaces.","Published":"2017-06-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"wrassp","Version":"0.1.4","Title":"Interface to the ASSP Library","Description":"A wrapper around Michel Scheffers's libassp (Advanced\n Speech Signal Processor). The libassp library aims at providing\n functionality for handling speech signal files in most common audio formats\n and for performing analyses common in phonetic science/speech science. This\n includes the calculation of formants, fundamental frequency, root mean\n square, auto correlation, a variety of spectral analyses, zero crossing\n rate, filtering etc. This wrapper provides R with a large subset of\n libassp's signal processing functions and provides them to the user in a\n (hopefully) user-friendly manner.","Published":"2016-05-30","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"WrightMap","Version":"1.2.1","Title":"IRT Item-Person Map with 'ConQuest' Integration","Description":"A powerful yet simple graphical tool available in the field of psychometrics is the Wright Map (also known as item maps or item-person maps), which presents the location of both respondents and items on the same scale. Wright Maps are commonly used to present the results of dichotomous or polytomous item response models. 
The 'WrightMap' package provides functions to create these plots from item parameters and person estimates stored as R objects. Although the package can be used in conjunction with any software used to estimate the IRT model (e.g. 'TAM', 'mirt', 'eRm' or 'IRToys' in 'R', or 'Stata', 'Mplus', etc.), 'WrightMap' features special integration with 'ConQuest' to facilitate reading and plotting its output directly. The 'wrightMap' function creates Wright Maps based on person estimates and item parameters produced by an item response analysis. The 'CQmodel' function reads output files created using 'ConQuest' software and creates a set of data frames for easy data manipulation, bundled in a 'CQmodel' object. The 'wrightMap' function can take a 'CQmodel' object as input or it can be used to create Wright Maps directly from data frames of person and item parameters.","Published":"2016-03-23","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"write.snns","Version":"0.0-4.2","Title":"Function for exporting data to SNNS pattern files","Description":"Function for writing a SNNS pattern file from a data.frame\n or matrix.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"WriteXLS","Version":"4.0.0","Title":"Cross-Platform Perl Based R Function to Create Excel 2003 (XLS)\nand Excel 2007 (XLSX) Files","Description":"Cross-platform Perl based R function to create Excel 2003 (XLS) and Excel 2007 (XLSX)\n files from one or more data frames. Each data frame will be\n written to a separate named worksheet in the Excel spreadsheet.\n The worksheet name will be the name of the data frame it contains\n or can be specified by the user. ","Published":"2015-12-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"WRS2","Version":"0.9-2","Title":"A Collection of Robust Statistical Methods","Description":"A collection of robust statistical methods based on Wilcox' WRS functions. 
It implements robust t-tests (independent and dependent samples), robust ANOVA (including between-within subject designs), quantile ANOVA, robust correlation, robust mediation, and nonparametric ANCOVA models based on robust location measures.","Published":"2017-05-01","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"wrspathrow","Version":"0.1","Title":"Functions for working with Worldwide Reference System (WRS)","Description":"Contains functions for working with the Worldwide Reference System\n (WRS) 1 and 2 systems used by NASA. WRS-1 applies to Landsat 1-3, WRS-2\n applies to Landsat 4-8. The package has functions for retrieving a given\n path and row as a polygon, and for retrieving the path(s) and row(s)\n containing a given raster or vector.","Published":"2014-02-13","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"wrspathrowData","Version":"1.0","Title":"Data used by the wrspathrow package","Description":"Contains the Worldwide Reference System (WRS) 1 and 2 polygon\n data from NASA, for use by the wrspathrow package. WRS-1 and WRS-2\n Shape files courtesy of the U.S. 
Geological Survey, from\n http://landsat.usgs.gov/tools_wrs-2_shapefile.php","Published":"2014-02-12","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"wrswoR","Version":"1.0-1","Title":"Weighted Random Sampling without Replacement","Description":"A collection of implementations of classical and novel\n algorithms for weighted sampling without replacement.","Published":"2016-03-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"wrswoR.benchmark","Version":"0.1-1","Title":"Benchmark and Correctness Data for Weighted Random Sampling\nWithout Replacement","Description":"Includes performance measurements and results of repeated\n experiment runs (for correctness checks) for code in the\n 'wrswoR' package.","Published":"2016-02-12","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"WRTDStidal","Version":"1.0.1","Title":"Weighted Regression for Water Quality Evaluation in Tidal Waters","Description":"An adaptation for estuaries (tidal waters) of weighted regression\n on time, discharge, and season to evaluate trends in water quality time series.","Published":"2016-11-08","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"wru","Version":"0.1-5","Title":"Who are You? Bayesian Prediction of Racial Category Using\nSurname and Geolocation","Description":"Predicts individual race/ethnicity using surname, geolocation, and other \n attributes, such as gender and age. 
The method utilizes Bayes' Rule to compute\n the posterior probability of each racial category for any given individual.\n The package implements methods described in Imai and Khanna (2015) \"Improving\n Ecological Inference by Predicting Individual Ethnicity from Voter Registration\n Records\" .","Published":"2017-06-08","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"wskm","Version":"1.4.28","Title":"Weighted k-Means Clustering","Description":"Entropy weighted k-means (ewkm) is a weighted subspace\n clustering algorithm that is well suited to very high\n dimensional data. Weights are calculated as the importance of\n a variable with regard to cluster membership. The two-level\n variable weighting clustering algorithm tw-k-means (twkm)\n introduces two types of weights, the weights on individual\n variables and the weights on variable groups, and they are\n calculated during the clustering process. The feature group\n weighted k-means (fgkm) extends this concept by grouping\n features and weighting the group in addition to weighting\n individual features.","Published":"2015-07-08","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"wsrf","Version":"1.7.13","Title":"Weighted Subspace Random Forest for Classification","Description":"A parallel implementation of Weighted Subspace Random\n Forest. The Weighted Subspace Random Forest algorithm was\n proposed in the International Journal of Data Warehousing and\n Mining by Baoxun Xu, Joshua Zhexue Huang, Graham Williams, Qiang\n Wang, and Yunming Ye (2012) . The\n algorithm can classify very high-dimensional data with random\n forests built using small subspaces. 
A novel variable weighting\n method is used for variable subspace selection in place of the\n traditional random variable sampling. This new approach is\n particularly useful in building models from high-dimensional data.","Published":"2017-04-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"wSVM","Version":"0.1-7","Title":"Weighted SVM with boosting algorithm for improving accuracy","Description":"We propose weighted SVM methods with a penalization form. By\n adding weights to the loss term, we can build up a weighted SVM\n easily and examine classification algorithm properties under\n weighted SVM. By comparing test error rates, we\n conclude that our weighted SVM with boosting outperforms the\n standard SVM as a whole.","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"wtest","Version":"1.0","Title":"The W-Test on Genetic Interactions Testing","Description":"Performs the calculation of the W-test and diagnostic checking, and calculates minor allele frequency (MAF) and odds ratio.","Published":"2016-08-05","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"wTO","Version":"1.2.0","Title":"Computing Weighted Topological Overlaps (wTO) & Consensus wTO\nNetwork","Description":"Computes Weighted Topological Overlap (wTO) networks, given a data.frame containing the count/expression/abundance per sample and a vector containing the nodes of interest. It also computes the cut-off threshold or p-value based on bootstrapping of individuals or reshuffling of values per individual. It also allows the construction of a Consensus network, based on multiple wTOs. 
Also includes a visualization tool for the final network.","Published":"2017-06-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"WufooR","Version":"0.6.2","Title":"R Wrapper for the 'Wufoo.com' - The Form Building Service","Description":"Allows form managers to download entries from their respondents\n using the Wufoo JSON API (). Additionally, the Wufoo reports - when public - can\n also be acquired programmatically. Note that building new forms within this package\n is not supported.","Published":"2017-04-20","License":"Apache License 2.0","snapshot_date":"2017-06-23"} {"Package":"wux","Version":"2.2-1","Title":"Wegener Center Climate Uncertainty Explorer","Description":"Methods to calculate and interpret climate change signals and time series from climate multi-model ensembles. Climate model output in binary 'NetCDF' format is read in and aggregated over a specified region to a data.frame for statistical analysis. Global Circulation Models, such as the 'CMIP5' simulations, can be read in the same way as Regional Climate Models, such as the 'CORDEX' or 'ENSEMBLES' simulations. The package has been developed at the 'Wegener Center for Climate and Global Change' at the University of Graz, Austria.","Published":"2016-12-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"WVPlots","Version":"0.2.4","Title":"Common Plots for Analysis","Description":"Select data analysis plots, under a standardized calling interface implemented on top of 'ggplot2' and 'plotly'. 
\n Plots of interest include: 'ROC', gain curve, scatter plot with marginal distributions, \n conditioned scatter plot with marginal densities,\n box and stem with matching theoretical distribution, and density with matching theoretical distribution.","Published":"2017-05-05","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"wvtool","Version":"1.0","Title":"Image Tools for Automated Wood Identification","Description":"This tool, the wood vision tool, is intended to facilitate preprocessing and analysis of 2-dimensional wood images toward automated recognition. The former includes some basics such as functions for RGB to grayscale conversion, gray to binary, cropping, rotation (bilinear), median/mean/Gaussian filtering, and Canny/Sobel edge detection. The latter includes gray level co-occurrence matrix (GLCM), Haralick parameters, local binary pattern (LBP), higher order local autocorrelation (HLAC), Fourier transform (radial and azimuthal integration), and Gabor filtering. The functions are intended to read data using 'readTIFF(x,info=T)' from the 'tiff' package. The functions in this package basically assume grayscale images as input data, thus color images should be passed through rgb2gray() before being used with the other functions.","Published":"2016-11-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"WWGbook","Version":"1.0.1","Title":"Functions and datasets for WWGbook","Description":"Functions and datasets for the book \"Linear Mixed Models: A Practical Guide Using\n Statistical Software\", published in 2006 by Chapman Hall / CRC\n Press.","Published":"2012-03-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"WWR","Version":"1.2.0","Title":"Weighted Win Loss Statistics and their Variances","Description":"Calculate the (weighted) win loss statistics including the win ratio, win difference and win product \n and their variances, with which the p-values are also calculated. The variance estimation is based on \n Luo et al. (2015) and Luo et al. 
(2017) . This package also calculates general win loss statistics with a\n user-specified win loss function, with variance estimation based on \n Bebu and Lachin (2016) . ","Published":"2017-06-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"x.ent","Version":"1.1.7","Title":"eXtraction of ENTity","Description":"Provides a tool for extracting information (entities and relations between them) in text datasets. It also emphasizes exploration of the results with graphical displays. It is a rule-based system and works with hand-made dictionaries and local grammars defined by users. 'x.ent' uses parsing with Perl functions and JavaScript to define user preferences through a browser and R to display and support analysis of the results extracted. Local grammars are defined and compiled with Unitex, a tool developed by University Paris Est that supports multiple languages. See ?xconfig for an introduction.","Published":"2017-05-24","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"x12","Version":"1.8.0","Title":"Interface to X12-ARIMA/X13-ARIMA-SEATS and Structure for Batch\nProcessing of Seasonal Adjustment","Description":"The X13-ARIMA-SEATS methodology and software, developed by the US Census Bureau, are widely used for seasonal adjustment. 
It can be accessed from R with this package; the X13-ARIMA-SEATS binaries are provided by the R package x13binary.","Published":"2017-04-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"x12GUI","Version":"0.13.0","Title":"X12 - Graphical User Interface","Description":"A graphical user interface for the x12 package ","Published":"2014-11-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"x13binary","Version":"1.1.39-1","Title":"Provide the 'x13ashtml' Seasonal Adjustment Binary","Description":"The US Census Bureau provides a seasonal adjustment program now\n called 'X-13ARIMA-SEATS' building on both earlier programs called X-11 and\n X-12 as well as the SEATS program by the Bank of Spain. The US Census Bureau\n offers both source and binary versions -- which this package integrates for\n use by other R packages.","Published":"2017-05-04","License":"file LICENSE","snapshot_date":"2017-06-23"} {"Package":"xaringan","Version":"0.3","Title":"Presentation Ninja","Description":"Create HTML5 slides with R Markdown and the JavaScript library\n 'remark.js' ().","Published":"2017-05-28","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"xbreed","Version":"1.0.1","Title":"Genomic Simulation of Purebred and Crossbred Populations","Description":"This package enables simulation of purebred and crossbred genomic data as well as pedigrees and phenotypes. 'xbreed' can be used for the simulation of populations with flexible genome structures and trait genetic architectures. It can also be used to evaluate breeding schemes and generate genetic data to test statistical tools. 
","Published":"2017-04-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"XBRL","Version":"0.99.18","Title":"Extraction of Business Financial Information from 'XBRL'\nDocuments","Description":"\n Functions to extract business financial information from\n an Extensible Business Reporting Language ('XBRL') instance file and the\n associated collection of files that defines its 'Discoverable' Taxonomy\n Set ('DTS').","Published":"2017-03-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"xergm","Version":"1.8.2","Title":"Extensions of Exponential Random Graph Models","Description":"Extensions of Exponential Random Graph Models (ERGM): Temporal Exponential Random Graph Models (TERGM), Generalized Exponential Random Graph Models (GERGM), Temporal Network Autocorrelation Models (TNAM), and Relational Event Models (REM). This package acts as a meta-package for several sub-packages on which it depends.","Published":"2017-04-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"xergm.common","Version":"1.7.7","Title":"Common Infrastructure for Extensions of Exponential Random Graph\nModels","Description":"Datasets and definitions of generic functions used in dependencies of the 'xergm' package.","Published":"2017-03-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"xesreadR","Version":"0.1.0","Title":"Read and Write XES Files","Description":"Read and write XES Files to create event log objects used by the 'bupaR' framework. XES (Extensible Event Stream) is the IEEE standard for storing and sharing event data (see for more info).","Published":"2017-06-19","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"xgboost","Version":"0.6-4","Title":"Extreme Gradient Boosting","Description":"Extreme Gradient Boosting, which is an efficient implementation\n of the gradient boosting framework from Chen & Guestrin (2016) .\n This package is its R interface. 
The package includes an efficient linear \n model solver and tree learning algorithms. The package can automatically \n do parallel computation on a single machine, which can be more than 10 \n times faster than existing gradient boosting packages. It supports\n various objective functions, including regression, classification and ranking.\n The package is made to be extensible, so that users are also allowed to define\n their own objectives easily.","Published":"2017-01-05","License":"Apache License (== 2.0) | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"xgobi","Version":"1.2-15","Title":"Interface to the XGobi and XGvis programs for graphical data\nanalysis","Description":"Interface to the XGobi and XGvis programs for graphical\n data analysis.","Published":"2012-11-01","License":"file LICENSE","snapshot_date":"2017-06-23"} {"Package":"XGR","Version":"1.0.10","Title":"Exploring Genomic Relations for Enhanced Interpretation Through\nEnrichment, Similarity, Network and Annotation Analysis","Description":"The central goal of XGR is to provide a data interpretation system. It is designed to make a user-defined gene or SNP list (or genomic regions) more interpretable by comprehensively utilising ontology annotations and interaction networks to reveal relationships and enhance opportunities for biological discovery. XGR is unique in supporting a broad range of ontologies (including knowledge of biological and molecular functions, pathways, diseases and phenotypes - in both human and mouse) and different types of networks (including functional, physical and pathway interactions). There are two core functionalities of XGR. The first is to provide basic infrastructures for easy access to built-in ontologies and networks. 
The second is to support data interpretations via 1) enrichment analysis using either built-in or custom ontologies, 2) similarity analysis for calculating semantic similarity between genes (or SNPs) based on their ontology annotation profiles, 3) network analysis for identification of gene networks given a query list of (significant) genes, SNPs or genomic regions, and 4) annotation analysis for interpreting genomic regions using co-localised functional genomic annotations (such as open chromatin, epigenetic marks, TF binding sites and genomic segments) and using nearby gene annotations (by ontologies). Together with its web app, XGR aims to provide a user-friendly tool for exploring genomic relations at the gene, SNP and genomic region level.","Published":"2017-04-16","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"xhmmScripts","Version":"1.1","Title":"XHMM R scripts","Description":"R scripts for plotting and assessing XHMM whole-exome-sequencing-based CNV calls. XHMM (eXome Hidden Markov Model) is a C++ software package (http://atgu.mgh.harvard.edu/xhmm) written to call copy number variation (CNV) from next-generation sequencing projects, where exome capture was used (or targeted sequencing, more generally). This R package enables the user to visualize both the PCA normalization performed by XHMM and the CNVs it has called.","Published":"2014-06-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"XHWE","Version":"1.0","Title":"X Chromosome Hardy-Weinberg Equilibrium","Description":"Conduct the likelihood ratio tests for Hardy-Weinberg equilibrium at marker loci on the X chromosome.","Published":"2015-06-03","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"XiMpLe","Version":"0.10-1","Title":"A Simple XML Tree Parser and Generator","Description":"Provides a simple XML tree parser/generator. It includes functions to read XML files into R objects, get information out\n of and into nodes, and write R objects back to XML code. 
It's not as powerful as the 'XML' package and doesn't aim to\n be, but for simple XML handling it could be useful. It was originally developed for the R GUI and IDE 'RKWard'\n , to make plugin development easier.","Published":"2017-04-10","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"xkcd","Version":"0.0.5","Title":"Plotting ggplot2 Graphics in an XKCD Style","Description":"Plotting ggplot2 graphs using the XKCD style.","Published":"2016-01-13","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"xkcdcolors","Version":"1.0","Title":"Color Names from the XKCD Color Survey","Description":"The XKCD color survey asked participants to name colours. Randall Munroe published the top thousand (roughly) names and their sRGB hex values. This package lets you use them.","Published":"2016-04-22","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"XLConnect","Version":"0.2-13","Title":"Excel Connector for R","Description":"Provides comprehensive functionality to read, write and format Excel data.","Published":"2017-05-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"XLConnectJars","Version":"0.2-13","Title":"JAR Dependencies for the XLConnect Package","Description":"Provides external JAR dependencies for the XLConnect package.","Published":"2017-05-14","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"xLLiM","Version":"2.1","Title":"High Dimensional Locally-Linear Mapping","Description":"Provides a tool for non linear mapping (non linear regression) using a mixture of regression models and an inverse regression strategy. The methods include the GLLiM model (see Deleforge et al (2015) ) based on Gaussian mixtures and a robust version of GLLiM, named SLLiM (see Perthame et al (2016) ) based on a mixture of Generalized Student distributions. 
The methods also include BLLiM (see Devijver et al (2017) ) which is an extension of GLLiM with a sparse block diagonal structure for large covariance matrices (particularly interesting for transcriptomic data).","Published":"2017-05-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"xlsimple","Version":"0.0.1","Title":"'XLConnect' Wrapper","Description":"Provides a simple wrapper for some 'XLConnect' functions. 'XLConnect' is\n a package that allows for reading, writing, and manipulating Microsoft Excel\n files. This package, 'xlsimple', adds some documentation and pre-defined formatting\n to the outputted Excel file. Individual sheets can include a description on the\n first row to remind the user what is in the data set. Auto filters and freeze\n rows are turned on. A brief readme file is created that provides a summary\n listing of the created sheets and, where provided, the description.","Published":"2017-03-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"xlsx","Version":"0.5.7","Title":"Read, write, format Excel 2007 and Excel 97/2000/XP/2003 files","Description":"Provide R functions to read/write/format Excel 2007 and Excel 97/2000/XP/2003 file formats.","Published":"2014-08-02","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"xlsxjars","Version":"0.6.1","Title":"Package required POI jars for the xlsx package","Description":"The xlsxjars package collects all the external jars\n required for the xlsx package. 
This release corresponds to POI\n 3.10.1.","Published":"2014-08-22","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"xlutils3","Version":"0.1.0","Title":"Extract Multiple Excel Files at Once","Description":"Extract Excel files from a folder.\n Also display extracted data and compute a summary of it.\n Based on the 'readxl' package.","Published":"2016-08-31","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"xmeta","Version":"1.1-3","Title":"A Toolbox for Multivariate Meta-Analysis","Description":"A toolbox for meta-analysis. This package includes a collection of functions for (1) implementing robust multivariate meta-analysis of continuous or binary outcomes; and (2) a bivariate Egger's test for detecting publication bias.","Published":"2017-02-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"Xmisc","Version":"0.2.1","Title":"Xiaobei's miscellaneous classes and functions","Description":"Xiaobei's miscellaneous classes and functions, useful when\n developing R packages, particularly for OOP using R Reference Class.","Published":"2014-08-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"XML","Version":"3.98-1.9","Title":"Tools for Parsing and Generating XML Within R and S-Plus","Description":"Many approaches for both reading and\n creating XML (and HTML) documents (including DTDs), both local\n and accessible via HTTP or FTP. Also offers access to an\n 'XPath' \"interpreter\".","Published":"2017-06-19","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"xml2","Version":"1.1.1","Title":"Parse XML","Description":"Work with XML files using a simple, consistent interface. 
Built on\n top of the 'libxml2' C library.","Published":"2017-01-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"XML2R","Version":"0.0.6","Title":"EasieR XML data collection","Description":"XML2R is a framework that reduces the effort required to transform\n XML content into a number of tables while preserving parent to child\n relationships.","Published":"2014-03-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"xmlparsedata","Version":"1.0.1","Title":"Parse Data of 'R' Code as an 'XML' Tree","Description":"Convert the output of 'utils::getParseData()' to an 'XML'\n tree that is searchable and easier to manipulate in general.","Published":"2016-06-18","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"XMRF","Version":"1.0","Title":"Markov Random Fields for High-Throughput Genetics Data","Description":"Fit Markov Networks to a wide range of high-throughput genomics data.","Published":"2015-06-25","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"XNomial","Version":"1.0.4","Title":"Exact Goodness-of-Fit Test for Multinomial Data with Fixed\nProbabilities","Description":"Tests whether a set of counts fit a given expected ratio. For\n example, a genetic cross might be expected to produce four types in the\n relative frequencies of 9:3:3:1. To see whether a set of observed counts\n fits this expectation, one can examine all possible outcomes with xmulti() or a\n random sample of them with xmonte() and find the probability of an observation\n deviating from the expectation by at least as much as the observed. As a\n measure of deviation from the expected, one can use the log-likelihood\n ratio, the multinomial probability, or the classic chi-square statistic. 
A\n histogram of the test statistic can also be plotted and compared with the\n asymptotic curve.","Published":"2015-12-24","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"xoi","Version":"0.66-9","Title":"Tools for Analyzing Crossover Interference","Description":"Analysis of crossover interference in experimental crosses,\n particularly regarding the gamma model.","Published":"2015-10-18","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"Xplortext","Version":"1.00","Title":"Statistical Analysis of Textual Data","Description":"A complete set of functions devoted to statistical analysis of\n documents.","Published":"2017-05-25","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"} {"Package":"xpose4","Version":"4.6.0","Title":"Tools for Nonlinear Mixed-Effect Model Building and Diagnostics","Description":"A collection of functions to be used as a model\n building aid for nonlinear mixed-effects (population) analysis\n using NONMEM. It facilitates data set checkout, exploration and\n visualization, model diagnostics, candidate covariate identification\n and model comparison.","Published":"2017-06-17","License":"LGPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"XR","Version":"0.7","Title":"A Structure for Interfaces from R","Description":"Support for interfaces from R to other languages,\n built around a class for evaluators and a combination of functions, classes and\n methods for communication. Will be used through a specific language interface\n package. 
Described in the book \"Extending R\".","Published":"2016-08-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"XRJulia","Version":"0.7","Title":"Structured Interface to Julia","Description":"A Julia interface structured according to the general\n\t form described in package XR and in the book \"Extending R\".","Published":"2016-09-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"XRPython","Version":"0.7","Title":"Structured Interface to Python","Description":"A Python interface structured according to the general\n form described in package XR and in the book \"Extending R\".","Published":"2016-09-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"XRSCC","Version":"0.1","Title":"Statistical Quality Control Simulation","Description":"A set of statistical quality control functions that allows plotting control charts and their iterations, as well as process capability for variable and attribute control. The xrs_gr() function provides a first iteration for variable charts, while the we_rules() function detects non-random patterns in samples.","Published":"2016-11-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"xseq","Version":"0.2.1","Title":"Assessing Functional Impact on Gene Expression of Mutations in\nCancer","Description":"A hierarchical Bayesian approach to assess functional impact of mutations on gene expression in cancer. Given a patient-gene matrix encoding the presence/absence of a mutation, a patient-gene expression matrix encoding continuous value expression data, and a graph structure encoding whether two genes are known to be functionally related, xseq outputs: a) the probability that a recurrently mutated gene g influences gene expression across the population of patients; \n and b) the probability that an individual mutation in gene g in an individual patient m influences expression within that patient. 
","Published":"2015-09-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"xslt","Version":"1.1","Title":"XSLT 1.0 Transformations","Description":"An extension for the 'xml2' package to transform XML documents\n by applying an XSL stylesheet.","Published":"2017-05-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"xsp","Version":"0.1.2","Title":"The Chi-Square Periodogram","Description":"The circadian period of time series data is predicted and the statistical significance of the periodicity is calculated using the chi-square periodogram.","Published":"2017-06-08","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"xtable","Version":"1.8-2","Title":"Export Tables to LaTeX or HTML","Description":"Coerce data to LaTeX and HTML tables.","Published":"2016-02-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"xtal","Version":"1.15","Title":"Crystallization Toolset","Description":"This is the tool set for crystallographer to design and analyze crystallization experiments, especially for ribosome from Mycobacterium tuberculosis.","Published":"2015-12-29","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"xtermStyle","Version":"3.0.5","Title":"Terminal Text Formatting Using Escape Sequences","Description":"Can be used for coloring output in terminals.\n\tIt was developed for the standard Ubuntu terminal but should be compatible\n\twith any terminal using xterm or ANSI escape sequences. If run in windows,\n\tRStudio, or any other platform not supporting such escape sequences it\n\tgracefully passes on any output without modifying it.","Published":"2015-05-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"xtractomatic","Version":"3.3.2","Title":"Accessing Environmental Data from ERD's ERDDAP Server","Description":"Contains three functions that access\n environmental data from ERD's ERDDAP service . 
The xtracto() function extracts\n data along a trajectory for a given \"radius\" around the point. The\n xtracto_3D() function extracts data in a box. The xtractogon() function\n extracts data in a polygon. There are also two helper functions to obtain\n information about available data.","Published":"2017-05-19","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"xts","Version":"0.9-7","Title":"eXtensible Time Series","Description":"Provide for uniform handling of R's different time-based data classes by extending zoo, maximizing native format information preservation and allowing for user level customization and extension, while simplifying cross-class interoperability.","Published":"2014-01-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"xVA","Version":"0.8.1","Title":"Calculates Credit Risk Valuation Adjustments","Description":"Calculates a number of valuation adjustments including CVA, DVA,\n FBA, FCA, MVA and KVA. A two-way margin agreement has been implemented. For\n the KVA calculation three regulatory frameworks are supported: CEM, SA-CCR and\n IMM. The probability of default is implied through the credit spreads curve.\n Currently, only IRSwaps are supported. For more information, you can check\n one of the books regarding xVA: .","Published":"2016-11-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"xxIRT","Version":"2.0.1","Title":"Practical Item Response Theory and Computer-Based Testing in R","Description":"An implementation of item response theory and computer-based testing in R. \n It is designed for bridging the gap between theoretical advancements in \n psychometric research and their applications in practice. Currently, it \n consists of five modules: (1) common item response theory functions, (2) estimation procedures, \n (3) automated test assembly framework, (4) computerized adaptive testing framework, \n (5) multistage testing framework. 
See detailed documentation at \n .","Published":"2017-04-13","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"xyloplot","Version":"1.5","Title":"A Method for Creating Xylophone-Like Frequency Density Plots","Description":"A method for creating vertical histograms sharing a y-axis using\n base graphics.","Published":"2017-03-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"xyz","Version":"0.2","Title":"The 'xyz' Algorithm for Fast Interaction Search in\nHigh-Dimensional Data","Description":"High dimensional interaction search by brute force requires a\n quadratic computational cost in the number of variables. The xyz algorithm provably finds strong interactions in almost linear time.\n For details of the algorithm see: G. Thanei, N. Meinshausen and R. Shah (2016). The xyz algorithm for fast interaction search in high-dimensional data .","Published":"2017-04-03","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"yacca","Version":"1.1","Title":"Yet Another Canonical Correlation Analysis Package","Description":"This package provides an alternative canonical\n correlation/redundancy analysis function, with associated\n print, plot, and summary methods. A method for generating\n helio plots is also included.","Published":"2012-10-29","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"yaImpute","Version":"1.0-26","Title":"Nearest Neighbor Observation Imputation and Evaluation Tools","Description":"Performs nearest neighbor-based imputation using one or more alternative \n approaches to processing multivariate data. These include methods based on canonical \n correlation analysis, canonical correspondence analysis, and a multivariate adaptation \n of the random forest classification and regression techniques of Leo Breiman and Adele \n Cutler. Additional methods are also offered. 
The package includes functions for \n comparing the results from running alternative techniques, detecting imputation targets \n that are notably distant from reference observations, detecting and correcting \n for bias, bootstrapping and building ensemble imputations, and mapping results.","Published":"2015-07-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"yakmoR","Version":"0.1.1","Title":"A Simple Wrapper for the k-Means Library Yakmo","Description":"This is a simple wrapper for the yakmo K-Means library (developed by Naoki Yoshinaga, see http://www.tkl.iis.u-tokyo.ac.jp/~ynaga/yakmo/). It performs fast and robust (orthogonal) K-Means.","Published":"2015-08-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"YaleToolkit","Version":"4.2.2","Title":"Data exploration tools from Yale University","Description":"This collection of data exploration tools was developed at\n Yale University for the graphical exploration of complex\n multivariate data; barcode and gpairs now have their own\n packages. The new big.read.table() provided here may be\n useful for large files when only a subset is needed.","Published":"2014-12-31","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"yaml","Version":"2.1.14","Title":"Methods to Convert R Data to YAML and Back","Description":"Implements the 'libyaml' 'YAML' 1.1 parser and emitter () for R.","Published":"2016-11-12","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"yarrr","Version":"0.1.5","Title":"A Companion to the e-Book \"YaRrr!: The Pirate's Guide to R\"","Description":"Contains a mixture of functions and data sets referred to in the introductory e-book \"YaRrr!: The Pirate's Guide to R\". 
The latest version of the e-book is available for free at .","Published":"2017-04-19","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"ycinterextra","Version":"0.1","Title":"Yield curve or zero-coupon prices interpolation and\nextrapolation","Description":"Yield curve or zero-coupon prices interpolation and extrapolation using the Nelson-Siegel, Svensson, Smith-Wilson models, and Hermite cubic splines.","Published":"2013-12-18","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"yCrypticRNAs","Version":"0.99.2","Title":"Cryptic Transcription Analysis in Yeast","Description":"Calculates cryptic scores for genes using the ratio\n (Cheung et al., 2008), the 3' enrichment method\n (DeGennaro et al., 2013) and the probabilistic\n method. It also provides methods to estimate\n cryptic transcription start sites.","Published":"2016-02-08","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"yearn","Version":"0.1.1","Title":"Use and if Needed Install Packages from CRAN, BioConductor, CRAN\nArchive, and GitHub","Description":"This tries to attach a package if you have it; if not, it tries to install it from BioConductor or CRAN; if not available there, it tries to install it from the CRAN mirror on GitHub, which includes packages that have been removed from CRAN; if not available there, it looks for another matching package on GitHub to install. Note this is sloppy practice and prone to all sorts of risks. However, there are use cases, such as quick scripting, or in a class where students already know best practices, where this can be useful. 
yearn was inspired by teaching in PhyloMeth, a course funded by an NSF CAREER award to the author (NSF DEB-1453424).","Published":"2017-06-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"yesno","Version":"0.0.2","Title":"Ask a Custom Yes-No Question","Description":"Asks a custom Yes-No question with variable responses.\n The order and phrasing of the possible responses vary randomly \n to ensure the user consciously chooses (as opposed to automatically typing their response).","Published":"2017-02-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"yhat","Version":"2.0-0","Title":"Interpreting Regression Effects","Description":"Provides methods to interpret multiple\n linear regression and canonical correlation results including beta weights, structure coefficients, \n validity coefficients, product measures, relative weights, all-possible-subsets regression,\n dominance analysis, commonality analysis, and adjusted effect sizes.","Published":"2013-09-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"yhatr","Version":"0.15.1","Title":"R Binder for the Yhat API","Description":"Deploy, maintain, and invoke models via the Yhat\n REST API.","Published":"2017-05-09","License":"FreeBSD","snapshot_date":"2017-06-23"} {"Package":"YieldCurve","Version":"4.1","Title":"Modelling and estimation of the yield curve","Description":"Modelling the yield curve with some parametric models.\n The models implemented are: Nelson-Siegel, Diebold-Li and\n Svensson. 
The package also includes term structure of interest rate\n data from the Federal Reserve Bank and the European\n Central Bank.","Published":"2013-01-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ykmeans","Version":"1.0","Title":"K-means using a target variable","Description":"K-means clustering using a target variable.\n The number of clusters is determined by the variance of \n the target variable within each cluster.","Published":"2014-03-14","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"yorkr","Version":"0.0.7","Title":"Analyze Cricket Performances Based on Data from Cricsheet","Description":"Analyzing performances of cricketers and cricket teams\n based on 'yaml' match data from Cricsheet .","Published":"2017-02-18","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"YplantQMC","Version":"0.6-6","Title":"Plant Architectural Analysis with Yplant and QuasiMC","Description":"An R implementation of Yplant, combined with the QuasiMC\n raytracer. Calculate radiation absorption, transmission and scattering,\n photosynthesis and transpiration of virtual 3D plants.","Published":"2016-05-23","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"YPmodel","Version":"1.3","Title":"The Short-Term and Long-Term Hazard Ratio Model for Survival\nData","Description":"Inference procedures accommodate a flexible range of hazard ratio patterns with a two-sample semi-parametric model. This model contains the proportional hazards model and the proportional odds model as sub-models, and accommodates non-proportional hazards situations to the extreme of having crossing hazards and crossing survivor functions. 
Overall, this package has four major functions: 1) estimation of the short-term and long-term hazard ratio parameters; 2) 95 percent and 90 percent point-wise confidence intervals and simultaneous confidence bands for the hazard ratio function; 3) the p-value of the adaptive weighted log-rank test; 4) p-values of two lack-of-fit tests for the model. See the included \"read_me_first.pdf\" for brief instructions. In this version (1.1), there is no need to sort the data before applying this package.","Published":"2015-11-09","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"YuGene","Version":"1.1.5","Title":"A Simple Approach to Scale Gene Expression Data Derived from\nDifferent Platforms for Integrated Analyses","Description":"A simple method for comparison of gene\n expression generated across different experiments and on\n different platforms that does not require global\n renormalization and is not restricted to comparison of\n identical probes. YuGene works on a range of microarray dataset\n distributions, such as between manufacturers. 
The resulting\n output allows direct comparisons of gene expression between\n experiments and experimental platforms.","Published":"2015-11-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"yuima","Version":"1.6.4","Title":"The YUIMA Project Package for SDEs","Description":"Simulation and Inference for SDEs and Other Stochastic Processes.","Published":"2017-04-29","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"yuimaGUI","Version":"1.1.0","Title":"A Graphical User Interface for the 'yuima' Package","Description":"Provides a graphical user interface for the 'yuima' package.","Published":"2017-05-10","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"yummlyr","Version":"0.1.1","Title":"R Bindings for Yummly API","Description":"\n Yummly.com is one of the world's largest and most powerful recipe search sites and this package aims to provide R bindings for publicly available Yummly.com Recipe API (https://developer.yummly.com/).","Published":"2015-11-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"zCompositions","Version":"1.0.3-1","Title":"Imputation of Zeros and Nondetects in Compositional Data Sets","Description":"Implements principled methods to impute multivariate left-censored data and zeros in compositional data sets.","Published":"2016-04-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"zeallot","Version":"0.0.3","Title":"Multiple and Unpacking Variable Assignment","Description":"Provides a %<-% operator to perform multiple\n or unpacking assignment in R. 
The operator unpacks\n the right-hand side of an assignment into multiple\n values and assigns these values to variables on the\n left-hand side of the assignment.","Published":"2017-02-27","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ZeBook","Version":"0.5","Title":"ZeBook Working with dynamic models for agriculture and\nenvironment","Description":"R package accompanying the book Working with dynamic\n models for agriculture and environment, by Daniel Wallach\n (INRA), David Makowski (INRA), James W. Jones (U.of Florida),\n Francois Brun (ACTA), in preparation for June 2013.","Published":"2013-06-18","License":"LGPL-3","snapshot_date":"2017-06-23"} {"Package":"zebu","Version":"0.1.1","Title":"Local Association Measures","Description":"Implements the estimation of local (and global) association measures: Ducher's Z, pointwise mutual information and normalized pointwise mutual information. The significance of local (and global) association is assessed using p-values estimated by permutations. Finally, using local association subgroup analysis, it identifies whether the association between variables is dependent on the value of another variable.","Published":"2017-04-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"Zelig","Version":"5.1-2","Title":"Everyone's Statistical Software","Description":"A framework that brings together an abundance of common\n statistical models found across packages into a unified interface, and\n provides a common architecture for estimation and interpretation, as well\n as bridging functions to absorb increasingly more models into the\n collective library. Zelig allows each individual package, for each\n statistical model, to be accessed by a common uniformly structured call and\n set of arguments. 
Moreover, Zelig automates all the surrounding building\n blocks of a statistical work-flow--procedures and algorithms that may be\n essential to one user's application but which the original package\n developer did not use in their own research and might not themselves\n support. These include bootstrapping, jackknifing, and re-weighting of data.\n In particular, Zelig automatically generates predicted and simulated\n quantities of interest (such as relative risk ratios, average treatment\n effects, first differences and predicted and expected values) to interpret\n and visualize complex models.","Published":"2017-06-07","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"ZeligChoice","Version":"0.9-6","Title":"Zelig Choice Models","Description":"Add-on package for Zelig 5. Enables the use of a variety of logit\n and probit regressions.","Published":"2017-06-07","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"ZeligEI","Version":"0.1-2","Title":"Zelig Ecological Inference Models","Description":"Add-on package for Zelig 5. 
Enables the use of a variety of\n ecological inference models.","Published":"2017-06-07","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"zeligverse","Version":"0.1.1","Title":"Easily Install and Load Stable Zelig Packages","Description":"Provides an easy way to load stable Core Zelig and ancillary Zelig\n packages.","Published":"2017-05-17","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"zendeskR","Version":"0.4","Title":"Zendesk API Wrapper","Description":"This package provides an R wrapper for the Zendesk API","Published":"2014-02-21","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"zenplots","Version":"0.0-1","Title":"Zigzag Expanded Navigation Plots","Description":"Graphical tools for visualizing high-dimensional data with a path of\n pairs.","Published":"2016-12-16","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"zetadiv","Version":"1.0.1","Title":"Functions to Compute Compositional Turnover Using Zeta Diversity","Description":"Functions to compute compositional turnover using zeta-diversity,\n the number of species shared by multiple assemblages. The package includes\n functions to compute zeta-diversity for a specific number of\n assemblages and to compute zeta-diversity for a range of numbers of\n assemblages. 
It also includes functions to explain how zeta-diversity\n varies with distance and with differences in environmental variables\n between assemblages, using generalised linear models, linear models\n with negative constraints, generalised additive models, shape\n constrained additive models, and I-splines.","Published":"2017-06-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"zfa","Version":"1.0","Title":"Zoom-Focus Algorithm","Description":"Performs the Zoom-Focus Algorithm (ZFA) to optimize testing regions for rare variant association tests in exome sequencing data.","Published":"2017-04-06","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"ZIBseq","Version":"1.2","Title":"Differential Abundance Analysis for Metagenomic Data via\nZero-Inflated Beta Regression","Description":"Detects abundance differences across clinical conditions. In addition, it takes the sparse nature of metagenomic data into account and handles compositional data efficiently.","Published":"2017-06-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"zic","Version":"0.9","Title":"Bayesian Inference for Zero-Inflated Count Models","Description":"Provides MCMC algorithms for the analysis of\n zero-inflated count models. The case of stochastic search\n variable selection (SVS) is also considered. All MCMC samplers\n are coded in C++ for improved efficiency. A data set\n considering the demand for health care is provided.","Published":"2015-09-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ZillowR","Version":"0.1.0","Title":"R Interface to Zillow Real Estate and Mortgage Data API","Description":"Zillow, an online real estate company, provides real estate and\n mortgage data for the United States through a REST API. 
The ZillowR package\n provides an R function for each API service, making it easy to make API\n calls and process the response into convenient, R-friendly data structures.\n See for the Zillow API\n Documentation.","Published":"2016-03-26","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ZIM","Version":"1.0.3","Title":"Zero-Inflated Models for Count Time Series with Excess Zeros","Description":"Fits observation-driven and parameter-driven models for zero-inflated time series. ","Published":"2017-02-06","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"zip","Version":"1.0.0","Title":"Cross-Platform 'zip' Compression","Description":"Cross-Platform 'zip' Compression Library. A replacement\n for the 'zip' function, that does not require any additional\n external tools on any platform.","Published":"2017-04-25","License":"CC0","snapshot_date":"2017-06-23"} {"Package":"zipcode","Version":"1.0","Title":"U.S. ZIP Code database for geocoding","Description":"This package contains a database of city, state, latitude,\n and longitude information for U.S. ZIP codes from the\n CivicSpace Database (August 2004) augmented by Daniel Coven's\n federalgovernmentzipcodes.us web site (updated January 22,\n 2012). Previous versions of this package (before 1.0) were\n based solely on the CivicSpace data, so an original version of\n the CivicSpace database is also included.","Published":"2012-03-12","License":"CC BY-SA 2.0 + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"zipfR","Version":"0.6-6","Title":"Statistical models for word frequency distributions","Description":"Statistical models and utilities for the analysis of word\n frequency distributions. The utilities include functions for\n loading, manipulating and visualizing word frequency data and\n vocabulary growth curves. The package also implements several\n statistical models for the distribution of word frequencies in\n a population. 
(The name of this library derives from the most\n famous word frequency distribution, Zipf's law.)","Published":"2012-04-03","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"ziphsmm","Version":"1.0.6","Title":"Zero-Inflated Poisson Hidden (Semi-)Markov Models","Description":"Fit zero-inflated Poisson hidden (semi-)Markov models with or without covariates by directly minimizing the negative log likelihood function using the gradient descent algorithm. Multiple starting values should be used to avoid local minima.","Published":"2017-06-07","License":"GPL","snapshot_date":"2017-06-23"} {"Package":"zoeppritz","Version":"1.0-6","Title":"Seismic Reflection and Scattering Coefficients","Description":"Calculate and plot scattering matrix coefficients for plane waves at an interface.","Published":"2017-03-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"zoib","Version":"1.4.2","Title":"Bayesian Inference for Beta Regression and Zero-or-One Inflated\nBeta Regression","Description":"Fits beta regression and zero-or-one inflated beta regression and obtains Bayesian inference of the model via the Markov Chain Monte Carlo approach implemented in JAGS.","Published":"2016-10-18","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"zoo","Version":"1.8-0","Title":"S3 Infrastructure for Regular and Irregular Time Series (Z's\nOrdered Observations)","Description":"An S3 class with methods for totally ordered indexed\n observations. It is particularly aimed at irregular time series\n of numeric vectors/matrices and factors. 
zoo's key design goals\n are independence of a particular index/date/time class and\n consistency with ts and base R by providing methods to extend\n standard generics.","Published":"2017-04-12","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"} {"Package":"zooaRch","Version":"1.2","Title":"Analytical Tools for Zooarchaeological Data","Description":"The analysis and inference of faunal remains recovered from\n archaeological sites concerns the field of zooarchaeology. The zooaRch package\n provides analytical tools to make inferences on zooarchaeological data.\n Functions in this package allow users to read, manipulate, visualize, and\n analyze zooarchaeological data.","Published":"2016-01-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"zooaRchGUI","Version":"1.0.2","Title":"Interactive Analytical Tools for Zooarchaeological Data","Description":"The analysis and inference of faunal remains recovered from\n archaeological sites concerns the field of zooarchaeology. The zooaRchGUI package\n provides a graphical user interface to analytical tools found in the R statistical environment\n to make inferences on zooarchaeological data. Functions in this package allow users to interactively\n read, manipulate, visualize, and analyze zooarchaeological data.","Published":"2017-06-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"zoocat","Version":"0.2.0","Title":"'zoo' Objects with Column Attributes","Description":"Tools for manipulating multivariate time series data by extending\n 'zoo' class.","Published":"2016-11-10","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"zooimage","Version":"3.0-5","Title":"Analysis of numerical zooplankton images","Description":"ZooImage is a free (open source) solution for analyzing\n digital images of zooplankton. 
In combination with ImageJ, a\n free image analysis system, it processes digital images,\n measures individuals, trains for automatic classification of\n taxa, and finally, measures zooplankton samples (abundances,\n total and partial size spectra or biomasses, etc.)","Published":"2014-03-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"zoom","Version":"2.0.4","Title":"A spatial data visualization tool","Description":"zm(), called with any active plot, allows entering an\n interactive session to zoom and navigate the plot. The development\n version, as well as binary releases, can be found at\n https://github.com/cbarbu/R-package-zoom","Published":"2013-10-15","License":"GPL (>= 3)","snapshot_date":"2017-06-23"} {"Package":"zoon","Version":"0.6","Title":"Reproducible, Accessible & Shareable Species Distribution\nModelling","Description":"Reproducible and remixable species distribution modelling. The\n package reads user submitted modules from an online repository, runs full\n SDM workflows and returns output that is fully reproducible.","Published":"2017-01-12","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ZRA","Version":"0.2","Title":"Dynamic Plots for Time Series Forecasting","Description":"Combines a forecast of a time series, using the function forecast(), with the dynamic plots from dygraphs.","Published":"2015-08-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"zstdr","Version":"0.1.1","Title":"R Bindings to the 'Zstandard' Compression Library","Description":"Provides R bindings to the 'Zstandard' compression library.\n 'Zstandard' is a real-time compression algorithm, providing high compression ratios.\n It offers a very wide range of compression / speed trade-offs, while being backed by a very fast decoder.\n See for more information.","Published":"2017-06-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"} {"Package":"ztable","Version":"0.1.5","Title":"Zebra-Striped Tables in LaTeX 
and HTML Formats","Description":"Makes zebra-striped tables (tables with alternating row colors)\n in LaTeX and HTML formats easily from a data.frame, matrix, lm, aov, anova,\n glm, coxph, nls, fitdistr, mytable and cbind.mytable objects.","Published":"2015-02-15","License":"GPL-2","snapshot_date":"2017-06-23"} {"Package":"zTree","Version":"1.0.4","Title":"Functions to Import Data from 'z-Tree' into R","Description":"Read '.xls' and '.sbj' files which are written by the\n Microsoft Windows program 'z-Tree'. The latter is a software for\n developing and carrying out economic experiments\n (see for more information).","Published":"2017-01-12","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"} {"Package":"ztype","Version":"0.1.0","Title":"Run a Ztype Game Loaded with R Functions","Description":"How fast can you type R functions on your keyboard? Find out by running a 'zty.pe' game: export R functions as instructions to type to destroy opponents vessels.","Published":"2016-12-23","License":"GPL-3","snapshot_date":"2017-06-23"} {"Package":"zyp","Version":"0.10-1","Title":"Zhang + Yue-Pilon trends package","Description":"The zyp package contains an efficient implementation of Sen's slope method (Sen, 1968) plus implementation of Xuebin Zhang's (Zhang, 1999) and Yue-Pilon's (Yue, 2002) prewhitening approaches to determining trends in climate data.","Published":"2013-09-19","License":"LGPL-2.1","snapshot_date":"2017-06-23"}