{"Package":"A3","Version":"1.0.0","Title":"Accurate, Adaptable, and Accessible Error Metrics for Predictive\nModels","Description":"Supplies tools for tabulating and analyzing the results of predictive models. The methods employed are applicable to virtually any predictive model and make comparisons between different methodologies straightforward.","Published":"2015-08-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"abbyyR","Version":"0.5.1","Title":"Access to Abbyy Optical Character Recognition (OCR) API","Description":"Get text from images of text using Abbyy Cloud Optical Character\n Recognition (OCR) API. Easily OCR images, barcodes, forms, documents with\n machine readable zones, e.g. passports. Get the results in a variety of formats\n including plain text and XML. To learn more about the Abbyy OCR API, see \n .","Published":"2017-04-12","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"abc","Version":"2.1","Title":"Tools for Approximate Bayesian Computation (ABC)","Description":"Implements several ABC algorithms for\n performing parameter estimation, model selection, and goodness-of-fit.\n Cross-validation tools are also available for measuring the\n accuracy of ABC estimates, and to calculate the\n misclassification probabilities of different models.","Published":"2015-05-05","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"abc.data","Version":"1.0","Title":"Data Only: Tools for Approximate Bayesian Computation (ABC)","Description":"Contains data which are used by functions of the 'abc' package.","Published":"2015-05-05","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"ABC.RAP","Version":"0.9.0","Title":"Array Based CpG Region Analysis Pipeline","Description":"It aims to identify candidate genes that are “differentially\n methylated” between cases and controls. It applies Student’s t-test and delta beta analysis to\n identify candidate genes containing multiple “CpG sites”.","Published":"2016-10-20","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"ABCanalysis","Version":"1.2.1","Title":"Computed ABC Analysis","Description":"For a given data set, the package provides a novel method of computing precise limits to acquire subsets which are easily interpreted. Closely related to the Lorenz curve, the ABC curve visualizes the data by graphically representing the cumulative distribution function. Based on an ABC analysis the algorithm calculates, with the help of the ABC curve, the optimal limits by exploiting the mathematical properties pertaining to distribution of analyzed items. The data containing positive values is divided into three disjoint subsets A, B and C, with subset A comprising very profitable values, i.e. largest data values (\"the important few\"), subset B comprising values where the yield equals to the effort required to obtain it, and the subset C comprising of non-profitable values, i.e., the smallest data sets (\"the trivial many\"). Package is based on \"Computed ABC Analysis for rational Selection of most informative Variables in multivariate Data\", PLoS One. Ultsch. A., Lotsch J. (2015) .","Published":"2017-03-13","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"abcdeFBA","Version":"0.4","Title":"ABCDE_FBA: A-Biologist-Can-Do-Everything of Flux Balance\nAnalysis with this package","Description":"Functions for Constraint Based Simulation using Flux\n Balance Analysis and informative analysis of the data generated\n during simulation.","Published":"2012-09-15","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"ABCoptim","Version":"0.14.0","Title":"Implementation of Artificial Bee Colony (ABC) Optimization","Description":"An implementation of Karaboga (2005) Artificial Bee Colony\n Optimization algorithm .\n This (working) version is a Work-in-progress, which is\n why it has been implemented using pure R code. This was developed upon the basic\n version programmed in C and distributed at the algorithm's official website.","Published":"2016-11-17","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"ABCp2","Version":"1.2","Title":"Approximate Bayesian Computational Model for Estimating P2","Description":"Tests the goodness of fit of a distribution of offspring to the Normal, Poisson, and Gamma distribution and estimates the proportional paternity of the second male (P2) based on the best fit distribution.","Published":"2016-02-04","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"abcrf","Version":"1.5","Title":"Approximate Bayesian Computation via Random Forests","Description":"Performs Approximate Bayesian Computation (ABC) model choice and parameter inference via random forests.","Published":"2017-01-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"abctools","Version":"1.1.1","Title":"Tools for ABC Analyses","Description":"Tools for approximate Bayesian computation including summary statistic selection and assessing coverage.","Published":"2017-04-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"abd","Version":"0.2-8","Title":"The Analysis of Biological Data","Description":"The abd package contains data sets and sample code for The\n Analysis of Biological Data by Michael Whitlock and Dolph Schluter (2009;\n Roberts & Company Publishers).","Published":"2015-07-03","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"abf2","Version":"0.7-1","Title":"Load Gap-Free Axon ABF2 Files","Description":"Loads ABF2 files containing gap-free data from electrophysiological recordings, as created by Axon Instruments/Molecular Devices software such as pClamp 10.","Published":"2015-03-04","License":"Artistic-2.0","snapshot_date":"2017-06-23"}
{"Package":"ABHgenotypeR","Version":"1.0.1","Title":"Easy Visualization of ABH Genotypes","Description":"Easy to use functions to visualize marker data\n from biparental populations. Useful for both analyzing and\n presenting genotypes in the ABH format.","Published":"2016-02-04","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"abind","Version":"1.4-5","Title":"Combine Multidimensional Arrays","Description":"Combine multidimensional arrays into a single array.\n This is a generalization of 'cbind' and 'rbind'. Works with\n vectors, matrices, and higher-dimensional arrays. Also\n provides functions 'adrop', 'asub', and 'afill' for manipulating,\n extracting and replacing data in arrays.","Published":"2016-07-21","License":"LGPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"abjutils","Version":"0.0.1","Title":"Useful Tools for Jurimetrical Analysis Used by the Brazilian\nJurimetrics Association","Description":"The Brazilian Jurimetrics Association (BJA or ABJ in Portuguese, see for more information) is a non-profit organization which aims to investigate and promote the use of statistics and probability in the study of Law and its institutions. This package implements general purpose tools used by BJA, such as functions for sampling and basic manipulation of Brazilian lawsuits identification number. It also implements functions for text cleaning, such as accentuation removal.","Published":"2017-01-04","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"abn","Version":"1.0.2","Title":"Modelling Multivariate Data with Additive Bayesian Networks","Description":"Bayesian network analysis is a form of probabilistic graphical models which derives from empirical data a directed acyclic graph, DAG, describing the dependency structure between random variables. An additive Bayesian network model consists of a form of a DAG where each node comprises a generalized linear model, GLM. Additive Bayesian network models are equivalent to Bayesian multivariate regression using graphical modelling, they generalises the usual multivariable regression, GLM, to multiple dependent variables. 'abn' provides routines to help determine optimal Bayesian network models for a given data set, where these models are used to identify statistical dependencies in messy, complex data. The additive formulation of these models is equivalent to multivariate generalised linear modelling (including mixed models with iid random effects). The usual term to describe this model selection process is structure discovery. The core functionality is concerned with model selection - determining the most robust empirical model of data from interdependent variables. Laplace approximations are used to estimate goodness of fit metrics and model parameters, and wrappers are also included to the INLA package which can be obtained from . It is recommended the testing version, which can be downloaded by running: source(\"http://www.math.ntnu.no/inla/givemeINLA-testing.R\"). A comprehensive set of documented case studies, numerical accuracy/quality assurance exercises, and additional documentation are available from the 'abn' website.","Published":"2016-11-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"abodOutlier","Version":"0.1","Title":"Angle-Based Outlier Detection","Description":"Performs angle-based outlier detection on a given dataframe. Three methods are available, a full but slow implementation using all the data that has cubic complexity, a fully randomized one which is way more efficient and another using k-nearest neighbours. These algorithms are specially well suited for high dimensional data outlier detection.","Published":"2015-08-31","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"AbsFilterGSEA","Version":"1.5","Title":"Improved False Positive Control of Gene-Permuting GSEA with\nAbsolute Filtering","Description":"Gene-set enrichment analysis (GSEA) is popularly used to assess the enrichment of differential signal in a pre-defined gene-set without using a cutoff threshold for differential expression. The significance of enrichment is evaluated through sample- or gene-permutation method. Although the sample-permutation approach is highly recommended due to its good false positive control, we must use gene-permuting method if the number of samples is small. However, such gene-permuting GSEA (or preranked GSEA) generates a lot of false positive gene-sets as the inter-gene correlation in each gene set increases. These false positives can be successfully reduced by filtering with the one-tailed absolute GSEA results. This package provides a function that performs gene-permuting GSEA calculation with or without the absolute filtering. Without filtering, users can perform (original) two-tailed or one-tailed absolute GSEA.","Published":"2016-08-26","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"AbSim","Version":"0.2.2","Title":"Time Resolved Simulations of Antibody Repertoires","Description":"Simulation methods for the evolution of antibody repertoires. The heavy and light chain variable region of both human and C57BL/6 mice can be simulated in a time-dependent fashion. Both single lineages using one set of V-, D-, and J-genes or full repertoires can be simulated. The algorithm begins with an initial V-D-J recombination event, starting the first phylogenetic tree. Upon completion, the main loop of the algorithm begins, with each iteration representing one simulated time step. Various mutation events are possible at each time step, contributing to a diverse final repertoire.","Published":"2017-06-18","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"abundant","Version":"1.1","Title":"High-Dimensional Principal Fitted Components and Abundant\nRegression","Description":"Fit and predict with the high-dimensional principal fitted\n components model. This model is described by Cook, Forzani, and Rothman (2012)\n\t.","Published":"2017-01-01","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"ACA","Version":"1.0","Title":"Abrupt Change-Point or Aberration Detection in Point Series","Description":"Offers an interactive function for the detection of breakpoints in series. ","Published":"2016-03-10","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"acc","Version":"1.3.3","Title":"Exploring Accelerometer Data","Description":"Processes accelerometer data from uni-axial and tri-axial devices,\n and generates data summaries. Also includes functions to plot, analyze, and\n simulate accelerometer data.","Published":"2016-12-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"accelerometry","Version":"2.2.5","Title":"Functions for Processing Minute-to-Minute Accelerometer Data","Description":"A collection of functions that perform operations on time-series accelerometer data, such as identify non-wear time, flag minutes that are part of an activity bout, and find the maximum 10-minute average count value. The functions are generally very flexible, allowing for a variety of algorithms to be implemented. Most of the functions are written in C++ for efficiency.","Published":"2015-05-12","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"accelmissing","Version":"1.1","Title":"Missing Value Imputation for Accelerometer Data","Description":"Imputation for the missing count values in accelerometer data. The methodology includes both parametric and semi-parametric multiple imputations under the zero-inflated Poisson lognormal model. This package also provides multiple functions to pre-process the accelerometer data previous to the missing data imputation. These includes detecting wearing and non-wearing time, selecting valid days and subjects, and creating plots.","Published":"2016-07-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"AcceptanceSampling","Version":"1.0-5","Title":"Creation and Evaluation of Acceptance Sampling Plans","Description":"Provides functionality for creating and\n\tevaluating acceptance sampling plans. Sampling plans can be single,\n\tdouble or multiple.","Published":"2016-12-29","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"ACCLMA","Version":"1.0","Title":"ACC & LMA Graph Plotting","Description":"The main function is plotLMA(sourcefile,header) that takes\n a data set and plots the appropriate LMA and ACC graphs. If no\n sourcefile (a string) was passed, a manual data entry window is\n opened. The header parameter indicates by TRUE/FALSE (false by\n default) if the source CSV file has a head row or not. The data\n set should contain only one independent variable (X) and one\n dependent varialbe (Y) and can contain a weight for each\n observation","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"accrual","Version":"1.2","Title":"Bayesian Accrual Prediction","Description":"Subject recruitment for medical research is challenging. Slow patient accrual leads to delay in research. Accrual monitoring during the process of recruitment is critical. Researchers need reliable tools to manage the accrual rate. We developed a Bayesian method that integrates researcher's experience on previous trials and data from the current study, providing reliable prediction on accrual rate for clinical studies. In this R package, we present functions for Bayesian accrual prediction which can be easily used by statisticians and clinical researchers.","Published":"2016-07-17","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"accrued","Version":"1.4.1","Title":"Data Quality Visualization Tools for Partially Accruing Data","Description":"Package for visualizing data quality of partially accruing data.","Published":"2016-08-24","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"ACD","Version":"1.5.3","Title":"Categorical data analysis with complete or missing responses","Description":"Categorical data analysis with complete or missing responses","Published":"2013-10-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"ACDm","Version":"1.0.4","Title":"Tools for Autoregressive Conditional Duration Models","Description":"Package for Autoregressive Conditional Duration (ACD, Engle and Russell, 1998) models. Creates trade, price or volume durations from transactions (tic) data, performs diurnal adjustments, fits various ACD models and tests them. ","Published":"2016-07-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"acebayes","Version":"1.4","Title":"Optimal Bayesian Experimental Design using the ACE Algorithm","Description":"Optimal Bayesian experimental design using the approximate coordinate exchange (ACE) algorithm.","Published":"2017-05-22","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"acepack","Version":"1.4.1","Title":"ACE and AVAS for Selecting Multiple Regression Transformations","Description":"Two nonparametric methods for multiple regression transform selection are provided.\n The first, Alternative Conditional Expectations (ACE), \n is an algorithm to find the fixed point of maximal\n correlation, i.e. it finds a set of transformed response variables that maximizes R^2\n using smoothing functions [see Breiman, L., and J.H. Friedman. 1985. \"Estimating Optimal Transformations\n for Multiple Regression and Correlation\". Journal of the American Statistical Association.\n 80:580-598. ].\n Also included is the Additivity Variance Stabilization (AVAS) method which works better than ACE when\n correlation is low [see Tibshirani, R.. 1986. \"Estimating Transformations for Regression via Additivity\n and Variance Stabilization\". Journal of the American Statistical Association. 83:394-405. \n ]. A good introduction to these two methods is in chapter 16 of\n Frank Harrel's \"Regression Modeling Strategies\" in the Springer Series in Statistics.","Published":"2016-10-29","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"ACEt","Version":"1.8.0","Title":"Estimating Dynamic Heritability and Twin Model Comparison","Description":"Twin models that are able to estimate the dynamic behaviour of the variance components in the classical twin models with respect to age using B-splines and P-splines.","Published":"2017-05-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"acid","Version":"1.1","Title":"Analysing Conditional Income Distributions","Description":"Functions for the analysis of income distributions for subgroups of the population as defined by a set of variables like age, gender, region, etc. This entails a Kolmogorov-Smirnov test for a mixture distribution as well as functions for moments, inequality measures, entropy measures and polarisation measures of income distributions. This package thus aides the analysis of income inequality by offering tools for the exploratory analysis of income distributions at the disaggregated level. ","Published":"2016-02-01","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"acm4r","Version":"1.0","Title":"Align-and-Count Method comparisons of RFLP data","Description":"Fragment lengths or molecular weights from pairs of lanes are\n compared, and a number of matching bands are calculated using the\n Align-and-Count Method.","Published":"2013-12-28","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"ACMEeqtl","Version":"1.4","Title":"Estimation of Interpretable eQTL Effect Sizes Using a Log of\nLinear Model","Description":"We use a non-linear model, termed ACME, \n that reflects a parsimonious biological model for \n allelic contributions of cis-acting eQTLs.\n With non-linear least-squares algorithm we \n estimate maximum likelihood parameters. The ACME model\n provides interpretable effect size estimates and\n p-values with well controlled Type-I error.\n Includes both R and (much faster) C implementations.","Published":"2017-03-11","License":"LGPL-3","snapshot_date":"2017-06-23"}
{"Package":"acmeR","Version":"1.1.0","Title":"Implements ACME Estimator of Bird and Bat Mortality by Wind\nTurbines","Description":"Implementation of estimator ACME, described in Wolpert (2015), ACME: A \t\tPartially Periodic Estimator of Avian & Chiropteran Mortality at Wind\n Turbines (submitted). Unlike most other models, this estimator\n supports decreasing-hazard Weibull model for persistence;\n decreasing search proficiency as carcasses age; variable\n bleed-through at successive searches; and interval mortality\n estimates. The package provides, based on search data, functions\n for estimating the mortality inflation factor in Frequentist and\n Bayesian settings.","Published":"2015-09-10","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"ACNE","Version":"0.8.1","Title":"Affymetrix SNP Probe-Summarization using Non-Negative Matrix\nFactorization","Description":"A summarization method to estimate allele-specific copy number signals for Affymetrix SNP microarrays using non-negative matrix factorization (NMF).","Published":"2015-10-27","License":"LGPL (>= 2.1)","snapshot_date":"2017-06-23"}
{"Package":"acnr","Version":"1.0.0","Title":"Annotated Copy-Number Regions","Description":"Provides SNP array data from different types of\n copy-number regions. These regions were identified manually by the authors\n of the package and may be used to generate realistic data sets with known\n truth.","Published":"2017-04-18","License":"LGPL (>= 2.1)","snapshot_date":"2017-06-23"}
{"Package":"acopula","Version":"0.9.2","Title":"Modelling dependence with multivariate Archimax (or any\nuser-defined continuous) copulas","Description":"Archimax copulas are mixture of Archimedean and EV copulas. The package provides definitions of several parametric families of generator and dependence function, computes CDF and PDF, estimates parameters, tests for goodness of fit, generates random sample and checks copula properties for custom constructs. In 2-dimensional case explicit formulas for density are used, in the contrary to higher dimensions when all derivatives are linearly approximated. Several non-archimax families (normal, FGM, Plackett) are provided as well. ","Published":"2013-07-31","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"AcousticNDLCodeR","Version":"1.0.1","Title":"Coding Sound Files for Use with NDL","Description":"Make acoustic cues to use with the R packages 'ndl' or 'ndl2'. The package implements functions used\n in the PLoS ONE paper:\n Denis Arnold, Fabian Tomaschek, Konstantin Sering, Florence Lopez, and R. Harald Baayen (2017).\n Words from spontaneous conversational speech can be recognized with human-like accuracy by \n an error-driven learning algorithm that discriminates between meanings straight from smart \n acoustic features, bypassing the phoneme as recognition unit. PLoS ONE 12(4):e0174623\n https://doi.org/10.1371/journal.pone.0174623\n More details can be found in the paper and the supplement.\n 'ndl' is available on CRAN. 'ndl2' is available by request from .","Published":"2017-04-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"acp","Version":"2.1","Title":"Autoregressive Conditional Poisson","Description":"Analysis of count data exhibiting autoregressive properties, using the Autoregressive Conditional Poisson model (ACP(p,q)) proposed by Heinen (2003).","Published":"2015-12-04","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"aCRM","Version":"0.1.1","Title":"Convenience functions for analytical Customer Relationship\nManagement","Description":"Convenience functions for data preparation and modeling often used in aCRM.","Published":"2014-03-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"AcrossTic","Version":"1.0-3","Title":"A Cost-Minimal Regular Spanning Subgraph with TreeClust","Description":"Construct minimum-cost regular spanning subgraph as part of a\n non-parametric two-sample test for equality of distribution.","Published":"2016-08-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"acrt","Version":"1.0.1","Title":"Autocorrelation Robust Testing","Description":"Functions for testing affine hypotheses on the regression coefficient vector in regression models with autocorrelated errors. ","Published":"2016-12-27","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"acs","Version":"2.0","Title":"Download, Manipulate, and Present American Community Survey and\nDecennial Data from the US Census","Description":"Provides a general toolkit for downloading, managing,\n analyzing, and presenting data from the U.S. Census, including SF1\n (Decennial short-form), SF3 (Decennial long-form), and the American\n Community Survey (ACS). Confidence intervals provided with ACS data\n are converted to standard errors to be bundled with estimates in\n complex acs objects. Package provides new methods to conduct\n standard operations on acs objects and present/plot data in\n statistically appropriate ways. Current version is 2.0 +/- .033.","Published":"2016-03-18","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"ACSNMineR","Version":"0.16.8.25","Title":"Gene Enrichment Analysis from ACSN Maps or GMT Files","Description":"Compute and represent gene set enrichment or depletion from your\n data based on pre-saved maps from the Atlas of Cancer Signalling Networks (ACSN)\n or user imported maps. User imported maps must be complying with the GMT format\n as defined by the Broad Institute, that is to say that the file should be tab-\n separated, that the first column should contain the module name, the second\n column can contain comments that will be overwritten with the number of genes\n in the module, and subsequent columns must contain the list of genes (HUGO\n symbols; tab-separated) inside the module. The gene set enrichment can be run\n with hypergeometric test or Fisher exact test, and can use multiple corrections.\n Visualization of data can be done either by barplots or heatmaps.","Published":"2016-09-01","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"acss","Version":"0.2-5","Title":"Algorithmic Complexity for Short Strings","Description":"Main functionality is to provide the algorithmic complexity for\n short strings, an approximation of the Kolmogorov Complexity of a short\n string using the coding theorem method (see ?acss). The database containing\n the complexity is provided in the data only package acss.data, this package\n provides functions accessing the data such as prob_random returning the\n posterior probability that a given string was produced by a random process.\n In addition, two traditional (but problematic) measures of complexity are\n also provided: entropy and change complexity.","Published":"2014-11-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"acss.data","Version":"1.0","Title":"Data Only: Algorithmic Complexity of Short Strings (Computed via\nCoding Theorem Method)","Description":"Data only package providing the algorithmic complexity of short strings, computed using the coding theorem method. For a given set of symbols in a string, all possible or a large number of random samples of Turing machines (TM) with a given number of states (e.g., 5) and number of symbols corresponding to the number of symbols in the strings were simulated until they reached a halting state or failed to end. This package contains data on 4.5 million strings from length 1 to 12 simulated on TMs with 2, 4, 5, 6, and 9 symbols. The complexity of the string corresponds to the distribution of the halting states of the TMs.","Published":"2014-04-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"ACSWR","Version":"1.0","Title":"A Companion Package for the Book \"A Course in Statistics with R\"","Description":"A book designed to meet the requirements of masters students. Tattar, P.N., Suresh, R., and Manjunath, B.G. \"A Course in Statistics with R\", J. Wiley, ISBN 978-1-119-15272-9. ","Published":"2015-09-05","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"ACTCD","Version":"1.1-0","Title":"Asymptotic Classification Theory for Cognitive Diagnosis","Description":"Cluster analysis for cognitive diagnosis based on the Asymptotic Classification Theory (Chiu, Douglas & Li, 2009; ). Given the sample statistic of sum-scores, cluster analysis techniques can be used to classify examinees into latent classes based on their attribute patterns. In addition to the algorithms used to classify data, three labeling approaches are proposed to label clusters so that examinees' attribute profiles can be obtained.","Published":"2016-12-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"Actigraphy","Version":"1.3.2","Title":"Actigraphy Data Analysis","Description":"Functional linear modeling and analysis for actigraphy data. ","Published":"2016-01-15","License":"Apache License (== 2.0)","snapshot_date":"2017-06-23"}
{"Package":"activity","Version":"1.1","Title":"Animal Activity Statistics","Description":"Provides functions to fit kernel density functions\n to animal activity time data; plot activity distributions;\n quantify overall levels of activity; statistically compare\n activity metrics through bootstrapping; and evaluate variation\n in linear variables with time (or other circular variables).","Published":"2016-09-26","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"activpalProcessing","Version":"1.0.2","Title":"Process activPAL Events Files","Description":"Performs estimation of physical activity and sedentary behavior variables from activPAL (PAL Technologies, Glasgow, Scotland) events files. See for more information on the activPAL.","Published":"2016-12-14","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"}
{"Package":"actuar","Version":"2.1-1","Title":"Actuarial Functions and Heavy Tailed Distributions","Description":"Functions and data sets for actuarial science:\n modeling of loss distributions; risk theory and ruin theory;\n simulation of compound models, discrete mixtures and compound\n hierarchical models; credibility theory. Support for many additional\n probability distributions to model insurance loss amounts and loss\n frequency: 19 continuous heavy tailed distributions; the\n Poisson-inverse Gaussian discrete distribution; zero-truncated and\n zero-modified extensions of the standard discrete distributions.\n Support for phase-type distributions commonly used to compute ruin\n probabilities.","Published":"2017-05-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"ActuDistns","Version":"3.0","Title":"Functions for actuarial scientists","Description":"Computes the probability density function, hazard rate\n function, integrated hazard rate function and the quantile\n function for 44 commonly used survival models","Published":"2012-09-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"AcuityView","Version":"0.1","Title":"A Package for Displaying Visual Scenes as They May Appear to an\nAnimal with Lower Acuity","Description":"This code provides a simple method for representing a visual scene as it may be seen by an animal with less acute vision. When using (or for more information), please cite the original publication.","Published":"2017-05-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"ada","Version":"2.0-5","Title":"The R Package Ada for Stochastic Boosting","Description":"Performs discrete, real, and gentle boost under both exponential and \n logistic loss on a given data set. The package ada provides a straightforward, \n well-documented, and broad boosting routine for classification, ideally suited \n for small to moderate-sized data sets.","Published":"2016-05-13","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"adabag","Version":"4.1","Title":"Applies Multiclass AdaBoost.M1, SAMME and Bagging","Description":"It implements Freund and Schapire's Adaboost.M1 algorithm and Breiman's Bagging\n\talgorithm using classification trees as individual classifiers. Once these classifiers have been\n\ttrained, they can be used to predict on new data. Also, cross validation estimation of the error can\n\tbe done. Since version 2.0 the function margins() is available to calculate the margins for these\n\tclassifiers. Also a higher flexibility is achieved giving access to the rpart.control() argument\n\tof 'rpart'. Four important new features were introduced on version 3.0, AdaBoost-SAMME (Zhu \n\tet al., 2009) is implemented and a new function errorevol() shows the error of the ensembles as\n\ta function of the number of iterations. In addition, the ensembles can be pruned using the option \n\t'newmfinal' in the predict.bagging() and predict.boosting() functions and the posterior probability of\n\teach class for observations can be obtained. Version 3.1 modifies the relative importance measure\n\tto take into account the gain of the Gini index given by a variable in each tree and the weights of \n\tthese trees. Version 4.0 includes the margin-based ordered aggregation for Bagging pruning (Guo\n\tand Boukir, 2013) and a function to auto prune the 'rpart' tree. Moreover, three new plots are also \n\tavailable importanceplot(), plot.errorevol() and plot.margins(). Version 4.1 allows to predict on \n\tunlabeled data. ","Published":"2015-10-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"adagio","Version":"0.6.5","Title":"Discrete and Global Optimization Routines","Description":"\n The R package 'adagio' will provide methods and algorithms for\n discrete optimization and (evolutionary) global optimization.","Published":"2016-05-29","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"AdapEnetClass","Version":"1.2","Title":"A Class of Adaptive Elastic Net Methods for Censored Data","Description":"Provides new approaches to variable selection for AFT model. ","Published":"2015-10-24","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"adapr","Version":"1.0.2","Title":"Implementation of an Accountable Data Analysis Process","Description":"Tracks reading and writing within R scripts that are organized into a directed acyclic graph. Contains an interactive shiny application adaprApp(). Uses git2r package, Git and file hashes to track version histories of input and output. See package vignette for how to get started. V1.02 adds parallel execution of project scripts and function map in vignette. Makes project specification argument last in order.","Published":"2017-02-02","License":"LGPL-2","snapshot_date":"2017-06-23"}
{"Package":"adaptDA","Version":"1.0","Title":"Adaptive Mixture Discriminant Analysis","Description":"The adaptive mixture discriminant analysis (AMDA) allows to adapt a model-based classifier to the situation where a class represented in the test set may have not been encountered earlier in the learning phase.","Published":"2014-09-18","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"AdaptFit","Version":"0.2-2","Title":"Adaptive Semiparametic Regression","Description":"Based on the function \"spm\" of the SemiPar package fits\n semiparametric regression models with spatially adaptive\n penalized splines.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"AdaptFitOS","Version":"0.62","Title":"Adaptive Semiparametric Regression with Simultaneous Confidence\nBands","Description":"Fits semiparametric regression models with spatially adaptive penalized splines and computes simultaneous confidence bands.","Published":"2015-06-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"AdaptGauss","Version":"1.3.3","Title":"Gaussian Mixture Models (GMM)","Description":"Multimodal distributions can be modelled as a mixture of components. The model is derived using the Pareto Density Estimation (PDE) for an estimation of the pdf. PDE has been designed in particular to identify groups/classes in a dataset. Precise limits for the classes can be calculated using the theorem of Bayes. Verification of the model is possible by QQ plot, Chi-squared test and Kolmogorov-Smirnov test. The package is based on the publication of Ultsch, A., Thrun, M.C., Hansen-Goos, O., Lotsch, J. (2015) .","Published":"2017-03-15","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"adaptiveGPCA","Version":"0.1","Title":"Adaptive Generalized PCA","Description":"Implements adaptive gPCA, as described in: Fukuyama, J. (2017)\n . The package also includes functionality for applying\n the method to 'phyloseq' objects so that the method can be easily applied\n to microbiome data and a 'shiny' app for interactive visualization. ","Published":"2017-05-05","License":"AGPL-3","snapshot_date":"2017-06-23"}
{"Package":"AdaptiveSparsity","Version":"1.4","Title":"Adaptive Sparsity Models","Description":"Implements Figueiredo EM algorithm for adaptive sparsity (Jeffreys prior) (see Figueiredo, M.A.T.; , \"Adaptive sparseness for supervised learning,\" Pattern Analysis and Machine Intelligence, IEEE Transactions on , vol.25, no.9, pp. 1150- 1159, Sept. 2003) and Wong algorithm for adaptively sparse gaussian geometric models (see Wong, Eleanor, Suyash Awate, and P. Thomas Fletcher. \"Adaptive Sparsity in Gaussian Graphical Models.\" In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pp. 311-319. 2013.)","Published":"2014-01-03","License":"LGPL (>= 3.0)","snapshot_date":"2017-06-23"}
{"Package":"adaptivetau","Version":"2.2-1","Title":"Tau-Leaping Stochastic Simulation","Description":"Implements adaptive tau leaping to approximate the\n trajectory of a continuous-time stochastic process as\n described by Cao et al. (2007) The Journal of Chemical Physics\n . This package is based upon work\n supported by NSF DBI-0906041 and NIH K99-GM104158 to Philip\n Johnson and NIH R01-AI049334 to Rustom\n Antia.","Published":"2016-10-25","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"adaptMCMC","Version":"1.1","Title":"Implementation of a generic adaptive Monte Carlo Markov Chain\nsampler","Description":"This package provides an implementation of the generic\n adaptive Monte Carlo Markov chain sampler proposed by Vihola\n (2011).","Published":"2012-06-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"adaptsmoFMRI","Version":"1.1","Title":"Adaptive Smoothing of FMRI Data","Description":"This package contains R functions for estimating the blood\n oxygenation level dependent (BOLD) effect by using functional\n Magnetic Resonance Imaging (fMRI) data, based on adaptive Gauss\n Markov random fields, for real as well as simulated data. The\n implemented simulations make use of efficient Markov Chain\n Monte Carlo methods.","Published":"2013-01-19","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"adaptTest","Version":"1.0","Title":"Adaptive two-stage tests","Description":"The functions defined in this program serve for\n implementing adaptive two-stage tests. Currently, four tests\n are included: Bauer and Koehne (1994), Lehmacher and Wassmer\n (1999), Vandemeulebroecke (2006), and the horizontal\n conditional error function. User-defined tests can also be\n implemented. Reference: Vandemeulebroecke, An investigation of\n two-stage tests, Statistica Sinica 2006.","Published":"2009-10-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"ADCT","Version":"0.1.0","Title":"Adaptive Design in Clinical Trials","Description":"Existing adaptive design methods in clinical trials. The package\n includes power, stopping boundaries (sample size) calculation functions for\n two-group group sequential designs, adaptive design with coprimary endpoints,\n biomarker-informed adaptive design, etc.","Published":"2016-11-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"addhaz","Version":"0.4","Title":"Binomial and Multinomial Additive Hazards Models","Description":"Functions to fit the binomial and multinomial additive hazards models and to calculate the contribution of diseases/conditions to the disability prevalence, as proposed by Nusselder and Looman (2004) .","Published":"2016-05-16","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"addhazard","Version":"1.1.0","Title":"Fit Additive Hazards Models for Survival Analysis","Description":"Contains tools to fit the additive hazards model to data from a cohort,\n random sampling, two-phase Bernoulli sampling and two-phase finite population sampling,\n as well as calibration tool to incorporate phase I auxiliary information into the\n two-phase data model fitting. This package provides regression parameter estimates and\n their model-based and robust standard errors. It also offers tools to make prediction of\n individual specific hazards.","Published":"2017-03-21","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"addinslist","Version":"0.2","Title":"Discover and Install Useful RStudio Addins","Description":"Browse through a continuously updated list of existing RStudio \n addins and install/uninstall their corresponding packages.","Published":"2016-09-29","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"additivityTests","Version":"1.1-4","Title":"Additivity Tests in the Two Way Anova with Single Sub-class\nNumbers","Description":"Implementation of the Tukey, Mandel, Johnson-Graybill, LBI, Tusell\n and modified Tukey non-additivity tests.","Published":"2014-12-24","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"addreg","Version":"2.0","Title":"Additive Regression for Discrete Data","Description":"Methods for fitting identity-link GLMs and GAMs to discrete data,\n using EM-type algorithms with more stable convergence properties than standard methods.","Published":"2015-05-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"ADDT","Version":"2.0","Title":"Analysis of Accelerated Destructive Degradation Test Data","Description":"Accelerated destructive degradation tests (ADDT) are often used to collect necessary data for assessing the long-term properties of polymeric materials. Based on the collected data, a thermal index (TI) is estimated. The TI can be useful for material rating and comparison. This package implements the traditional method based on the least-squares method, the parametric method based on maximum likelihood estimation, and the semiparametric method based on spline methods, and the corresponding methods for estimating TI for polymeric materials. The traditional approach is a two-step approach that is currently used in industrial standards, while the parametric method is widely used in the statistical literature. The semiparametric method is newly developed. Both the parametric and semiparametric approaches allow one to do statistical inference such as quantifying uncertainties in estimation, hypothesis testing, and predictions. Publicly available datasets are provided illustrations. More details can be found in Jin et al. (2017).","Published":"2016-11-03","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"ade4","Version":"1.7-6","Title":"Analysis of Ecological Data : Exploratory and Euclidean Methods\nin Environmental Sciences","Description":"Multivariate data analysis and graphical display.","Published":"2017-03-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"ade4TkGUI","Version":"0.2-9","Title":"'ade4' Tcl/Tk Graphical User Interface","Description":"A Tcl/Tk GUI for some basic functions in the 'ade4' package.","Published":"2015-11-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"adegenet","Version":"2.0.1","Title":"Exploratory Analysis of Genetic and Genomic Data","Description":"Toolset for the exploration of genetic and genomic data. Adegenet\n provides formal (S4) classes for storing and handling various genetic data,\n including genetic markers with varying ploidy and hierarchical population\n structure ('genind' class), alleles counts by populations ('genpop'), and\n genome-wide SNP data ('genlight'). It also implements original multivariate\n methods (DAPC, sPCA), graphics, statistical tests, simulation tools, distance\n and similarity measures, and several spatial methods. A range of both empirical\n and simulated datasets is also provided to illustrate various methods.","Published":"2016-02-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"adegraphics","Version":"1.0-8","Title":"An S4 Lattice-Based Package for the Representation of\nMultivariate Data","Description":"Graphical functionalities for the representation of multivariate data. It is a complete re-implementation of the functions available in the 'ade4' package.","Published":"2017-04-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"adehabitat","Version":"1.8.18","Title":"Analysis of Habitat Selection by Animals","Description":"A collection of tools for the analysis of habitat selection by animals.","Published":"2015-07-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"adehabitatHR","Version":"0.4.14","Title":"Home Range Estimation","Description":"A collection of tools for the estimation of animals home range.","Published":"2015-07-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"adehabitatHS","Version":"0.3.12","Title":"Analysis of Habitat Selection by Animals","Description":"A collection of tools for the analysis of habitat selection.","Published":"2015-07-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"adehabitatLT","Version":"0.3.21","Title":"Analysis of Animal Movements","Description":"A collection of tools for the analysis of animal movements.","Published":"2016-08-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"adehabitatMA","Version":"0.3.11","Title":"Tools to Deal with Raster Maps","Description":"A collection of tools to deal with raster maps.","Published":"2016-08-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"adephylo","Version":"1.1-10","Title":"Adephylo: Exploratory Analyses for the Phylogenetic Comparative\nMethod","Description":"Multivariate tools to analyze comparative data, i.e. a phylogeny\n and some traits measured for each taxa.","Published":"2016-12-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"AdequacyModel","Version":"2.0.0","Title":"Adequacy of Probabilistic Models and General Purpose\nOptimization","Description":"The main application concerns to a new robust optimization package with two major contributions. The first contribution refers to the assessment of the adequacy of probabilistic models through a combination of several statistics, which measure the relative quality of statistical models for a given data set. The second one provides a general purpose optimization method based on meta-heuristics functions for maximizing or minimizing an arbitrary objective function.","Published":"2016-05-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"adespatial","Version":"0.0-8","Title":"Multivariate Multiscale Spatial Analysis","Description":"Tools for the multiscale spatial analysis of multivariate data.\n Several methods are based on the use of a spatial weighting matrix and its\n eigenvector decomposition (Moran's Eigenvectors Maps, MEM).","Published":"2017-03-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"ADGofTest","Version":"0.3","Title":"Anderson-Darling GoF test","Description":"Anderson-Darling GoF test with p-value calculation based on Marsaglia's 2004 paper \"Evaluating the Anderson-Darling Distribution\"","Published":"2011-12-28","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"AdhereR","Version":"0.1.0","Title":"Adherence to Medications","Description":"Computation of adherence to medications from Electronic Health care \n Data and visualization of individual medication histories and adherence \n patterns. The package implements a set of S3 classes and\n functions consistent with current adherence guidelines and definitions. \n It allows the computation of different measures of\n adherence (as defined in the literature, but also several original ones), \n their publication-quality plotting,\n the interactive exploration of patient medication history and \n the real-time estimation of adherence given various parameter settings. ","Published":"2017-04-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"adhoc","Version":"1.1","Title":"Calculate Ad Hoc Distance Thresholds for DNA Barcoding\nIdentification","Description":"Two functions to calculate intra- and interspecific pairwise distances, evaluate DNA barcoding identification error and calculate an ad hoc distance threshold for each particular reference library of DNA barcodes. Specimen identification at this ad hoc distance threshold (using the best close match method) will produce identifications with an estimated relative error probability that can be fixed by the user (e.g. 5%).","Published":"2017-03-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"adimpro","Version":"0.8.2","Title":"Adaptive Smoothing of Digital Images","Description":"Implements tools for manipulation of digital \n \t\timages and the Propagation Separation approach \n \t\tby Polzehl and Spokoiny (2006) \n for smoothing digital images, see Polzehl and Tabelow (2007)\n .","Published":"2016-09-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"AdjBQR","Version":"1.0","Title":"Adjusted Bayesian Quantile Regression Inference","Description":"Adjusted inference for Bayesian quantile regression based on\n asymmetric Laplace working likelihood, for details see Yang, Y., Wang, H.\n and He, X. (2015), Posterior inference in Bayesian quantile regression with\n asymmetric Laplace likelihood, International Statistical \n Review, 2015 .","Published":"2016-10-30","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"adlift","Version":"1.3-2","Title":"An adaptive lifting scheme algorithm","Description":"Adaptive Wavelet transforms for signal denoising","Published":"2012-11-06","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"ADM3","Version":"1.3","Title":"An Interpretation of the ADM method - automated detection\nalgorithm","Description":"Robust change point detection using ADM3 algorithm.","Published":"2013-12-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"AdMit","Version":"2.1.3","Title":"Adaptive Mixture of Student-t Distributions","Description":"Provides functions to perform the fitting of an adaptive mixture\n of Student-t distributions to a target density through its kernel function as described in\n Ardia et al. (2009) . The\n mixture approximation can then be used as the importance density in importance\n sampling or as the candidate density in the Metropolis-Hastings algorithm to\n obtain quantities of interest for the target density itself. ","Published":"2017-02-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"admixturegraph","Version":"1.0.2","Title":"Admixture Graph Manipulation and Fitting","Description":"Implements tools for building and visualising admixture graphs\n and for extracting equations from them. These equations can be compared to f-\n statistics obtained from data to test the consistency of a graph against data --\n for example by comparing the sign of f_4-statistics with the signs predicted by\n the graph -- and graph parameters (edge lengths and admixture proportions) can\n be fitted to observed statistics.","Published":"2016-12-13","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"ADMMnet","Version":"0.1","Title":"Regularized Model with Selecting the Number of Non-Zeros","Description":"Fit linear and cox models regularized with net (L1 and Laplacian), elastic-net (L1 and L2) or lasso (L1) penalty, and their adaptive forms, such as adaptive lasso and net adjusting for signs of linked coefficients. In addition, it treats the number of non-zero coefficients as another tuning parameter and simultaneously selects with the regularization parameter. The package uses one-step coordinate descent algorithm and runs extremely fast by taking into account the sparsity structure of coefficients.","Published":"2015-12-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"ADPclust","Version":"0.7","Title":"Fast Clustering Using Adaptive Density Peak Detection","Description":"An implementation of ADPclust clustering procedures (Fast\n Clustering Using Adaptive Density Peak Detection). The work is built and\n improved upon the idea of Rodriguez and Laio (2014). \n ADPclust clusters data by finding density peaks in a density-distance plot \n generated from local multivariate Gaussian density estimation. It includes \n an automatic centroids selection and parameter optimization algorithm, which \n finds the number of clusters and cluster centroids by comparing average \n silhouettes on a grid of testing clustering results; It also includes a user \n interactive algorithm that allows the user to manually selects cluster \n centroids from a two dimensional \"density-distance plot\". Here is the \n research article associated with this package: \"Wang, Xiao-Feng, and \n Yifan Xu (2015) Fast clustering using adaptive \n density peak detection.\" Statistical methods in medical research\". url:\n http://smm.sagepub.com/content/early/2015/10/15/0962280215609948.abstract. ","Published":"2016-10-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"ads","Version":"1.5-2.2","Title":"Spatial point patterns analysis","Description":"Perform first- and second-order multi-scale analyses derived from Ripley K-function, for univariate,\n multivariate and marked mapped data in rectangular, circular or irregular shaped sampling windows, with tests of \n statistical significance based on Monte Carlo simulations.","Published":"2015-01-13","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"AdvBinomApps","Version":"1.0","Title":"Upper Clopper-Pearson Confidence Limits for Burn-in Studies\nunder Additional Available Information","Description":"Functions to compute upper Clopper-Pearson confidence limits of early life failure probabilities and required sample sizes of burn-in studies under further available information, e.g. from other products or technologies. ","Published":"2016-04-07","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"advclust","Version":"0.4","Title":"Object Oriented Advanced Clustering","Description":"S4 Object Oriented for Advanced Fuzzy Clustering and Fuzzy COnsensus Clustering. Techniques that provided by this package are Fuzzy C-Means, Gustafson Kessel (Babuska Version), Gath-Geva, Sum Voting Consensus, Product Voting Consensus, and Borda Voting Consensus. This package also provide visualization via Biplot and Radar Plot.","Published":"2016-09-04","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"adwave","Version":"1.1","Title":"Wavelet Analysis of Genomic Data from Admixed Populations","Description":"Implements wavelet-based approaches for describing population admixture. Principal Components Analysis (PCA) is used to define the population structure and produce a localized admixture signal for each individual. Wavelet summaries of the PCA output describe variation present in the data and can be related to population-level demographic processes. For more details, see Sanderson et al. (2015).","Published":"2015-06-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"AEDForecasting","Version":"0.20.0","Title":"Change Point Analysis in ARIMA Forecasting","Description":"Package to incorporate change point analysis in ARIMA forecasting.","Published":"2016-09-16","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"aemo","Version":"0.2.0","Title":"Download and Process AEMO Price and Demand Data","Description":"Download and process real time trading prices and demand data\n freely provided by the Australian Energy Market Operator (AEMO). Note that\n this includes a sample data set.","Published":"2016-08-20","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"AER","Version":"1.2-5","Title":"Applied Econometrics with R","Description":"Functions, data sets, examples, demos, and vignettes for the book\n Christian Kleiber and Achim Zeileis (2008),\n\t Applied Econometrics with R, Springer-Verlag, New York.\n\t ISBN 978-0-387-77316-2. (See the vignette \"AER\" for a package overview.)","Published":"2017-01-07","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"}
{"Package":"AF","Version":"0.1.4","Title":"Model-Based Estimation of Confounder-Adjusted Attributable\nFractions","Description":"Estimates the attributable fraction in different sampling designs\n adjusted for measured confounders using logistic regression (cross-sectional\n and case-control designs), conditional logistic regression (matched case-control\n design), Cox proportional hazard regression (cohort design with time-to-\n event outcome) and gamma-frailty model with a Weibull baseline hazard. The variance of the estimator is obtained by combining the delta\n method with the the sandwich formula. Dahlqwist et al.(2016) .","Published":"2017-02-11","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"}
{"Package":"afc","Version":"1.4.0","Title":"Generalized Discrimination Score","Description":"This is an implementation of the Generalized Discrimination Score\n (also known as Two Alternatives Forced Choice Score, 2AFC) for various \n representations of forecasts and verifying observations. The Generalized \n Discrimination Score is a generic forecast verification framework which \n can be applied to any of the following verification contexts: dichotomous, \n polychotomous (ordinal and nominal), continuous, probabilistic, and ensemble.\n A comprehensive description of the Generalized Discrimination Score, including \n all equations used in this package, is provided by Mason and Weigel (2009) \n .","Published":"2017-05-18","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"afex","Version":"0.18-0","Title":"Analysis of Factorial Experiments","Description":"Convenience functions for analyzing factorial experiments using ANOVA or\n mixed models. aov_ez(), aov_car(), and aov_4() allow specification of between,\n within (i.e., repeated-measures), or mixed between-within (i.e., split-plot)\n ANOVAs for data in long format (i.e., one observation per row), aggregating\n multiple observations per individual and cell of the design. mixed() fits mixed\n models using lme4::lmer() and computes p-values for all fixed effects using\n either Kenward-Roger or Satterthwaite approximation for degrees of freedom (LMM\n only), parametric bootstrap (LMMs and GLMMs), or likelihood ratio tests (LMMs\n and GLMMs). afex uses type 3 sums of squares as default (imitating commercial\n statistical software).","Published":"2017-05-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"affluenceIndex","Version":"1.0","Title":"Affluence Indices","Description":"Computes the statistical indices of affluence (richness) and constructs bootstrap confidence intervals for these indices. Also computes the Wolfson polarization index.","Published":"2017-03-23","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"AFLPsim","Version":"0.4-2","Title":"Hybrid Simulation and Genome Scan for Dominant Markers","Description":"Hybrid simulation functions for dominant genetic data and genome scan methods.","Published":"2015-08-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"AFM","Version":"1.2.2","Title":"Atomic Force Microscope Image Analysis","Description":"Provides Atomic Force Microscope images analysis such as Power\n Spectral Density, roughness against lengthscale, experimental variogram and variogram models,\n fractal dimension and scale. The AFM images can be exported to STL format for 3D\n printing.","Published":"2016-09-01","License":"AGPL-3","snapshot_date":"2017-06-23"}
{"Package":"afmToolkit","Version":"0.0.1","Title":"Functions for Atomic Force Microscope Force-Distance Curves\nAnalysis","Description":"Set of functions for analyzing Atomic Force Microscope (AFM) force-distance curves. It allows to obtain the contact and unbinding points, perform the baseline correction, estimate the Young's modulus, fit up to two exponential decay function to a stress-relaxation / creep experiment, obtain adhesion energies. These operations can be done either over a single F-d curve or over a set of F-d curves in batch mode.","Published":"2017-04-03","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"aftgee","Version":"1.0-0","Title":"Accelerated Failure Time Model with Generalized Estimating\nEquations","Description":"This package features both rank-based estimates and least\n\t\t square estimates to the Accelerated Failure Time (AFT) model. \n\t\t For rank-based estimation, it provides approaches that include \n\t\t the computationally efficient Gehan's weight and the general's \n\t\t weight such as the logrank weight. \n\t\t For the least square estimation, the estimating equation is \n\t\t solved with Generalized Estimating Equations (GEE). \n\t\t Moreover, in multivariate cases, the dependence working \n\t\t correlation structure can be specified in GEE's setting.","Published":"2014-11-13","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"AGD","Version":"0.35","Title":"Analysis of Growth Data","Description":"Tools for NIHES course EP18 'Analysis of Growth Data', May 22-23\n 2012, Rotterdam.","Published":"2015-05-28","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"}
{"Package":"AggregateR","Version":"0.0.2","Title":"Aggregate Numeric, Date and Categorical Variables by an ID","Description":"Convenience functions for aggregating data frame. Currently mean, sum and variance are supported. For Date variables, recency and duration are supported. There is also support for dummy variables in predictive contexts. ","Published":"2015-11-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"agop","Version":"0.1-4","Title":"Aggregation Operators and Preordered Sets","Description":"Tools supporting multi-criteria decision making, including\n variable number of criteria, by means of aggregation operators\n and preordered sets. Possible applications include, but are not\n limited to, scientometrics and bibliometrics.","Published":"2014-09-14","License":"LGPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"agRee","Version":"0.5-0","Title":"Various Methods for Measuring Agreement","Description":"Bland-Altman plot and scatter plot with identity line \n for visualization and point and \n interval estimates for different metrics related to \n reproducibility/repeatability/agreement including\n the concordance correlation coefficient, \n intraclass correlation coefficient,\n within-subject coefficient of variation,\n smallest detectable difference, \n and mean normalized smallest detectable difference.","Published":"2016-07-07","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"Agreement","Version":"0.8-1","Title":"Statistical Tools for Measuring Agreement","Description":"This package computes several statistics for measuring\n agreement, for example, mean square deviation (MSD), total\n deviation index (TDI) or concordance correlation coefficient\n (CCC). It can be used for both continuous data and categorical\n data for multiple raters and multiple readings cases.","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"agricolae","Version":"1.2-4","Title":"Statistical Procedures for Agricultural Research","Description":"Original idea was presented in the thesis \"A statistical analysis tool for agricultural research\" to obtain the degree of Master on science, National Engineering University (UNI), Lima-Peru. Some experimental data for the examples come from the CIP and others research. Agricolae offers extensive functionality on experimental design especially for agricultural and plant breeding experiments, which can also be useful for other purposes. It supports planning of lattice, Alpha, Cyclic, Complete Block, Latin Square, Graeco-Latin Squares, augmented block, factorial, split and strip plot designs. There are also various analysis facilities for experimental data, e.g. treatment comparison procedures and several non-parametric tests comparison, biodiversity indexes and consensus cluster.","Published":"2016-06-12","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"agridat","Version":"1.12","Title":"Agricultural Datasets","Description":"Datasets from books, papers, and websites related to agriculture.\n Example analyses are included. Includes functions for plotting field\n designs and GGE biplots.","Published":"2015-06-29","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"agrmt","Version":"1.40.4","Title":"Calculate Agreement or Consensus in Ordered Rating Scales","Description":"Calculate agreement or consensus in ordered rating scales. The package implements van der Eijk's (2001) measure of agreement A, which can be used to describe agreement, consensus, or polarization among respondents. It also implements measures of consensus (dispersion) by Leik, Tatsle and Wierman, Blair and Lacy, Kvalseth, Berry and Mielke, and Garcia-Montalvo and Reynal-Querol. Furthermore, an implementation of Galtungs AJUS-system is provided to classify distributions, as well as a function to identify the position of multiple modes.","Published":"2016-04-02","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"AGSDest","Version":"2.3.1","Title":"Estimation in Adaptive Group Sequential Trials","Description":"Calculation of repeated confidence intervals as well as confidence\n intervals based on the stage-wise ordering in group sequential designs and\n adaptive group sequential designs. For adaptive group sequential designs\n the confidence intervals are based on the conditional rejection probability\n principle. Currently the procedures do not support the use of futility\n boundaries or more than one adaptive interim analysis.","Published":"2016-06-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"agsemisc","Version":"1.3-1","Title":"Miscellaneous plotting and utility functions","Description":"High-featured panel functions for bwplot and xyplot,\n some plot management helpers, various convenience functions","Published":"2014-07-31","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"ahaz","Version":"1.14","Title":"Regularization for semiparametric additive hazards regression","Description":"Computationally efficient procedures for regularized\n estimation with the semiparametric additive hazards regression\n model.","Published":"2013-06-03","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"AHMbook","Version":"0.1.4","Title":"Functions and Data for the Book 'Applied Hierarchical Modeling\nin Ecology'","Description":"Provides functions and data sets to accompany the book 'Applied Hierarchical Modeling in Ecology: Analysis of distribution, abundance and species richness in R and BUGS' by Marc Kery and Andy Royle. The first volume appeared early in 2016 (ISBN: 978-0-12-801378-6, ); the second volume is in preparation and additional functions will be added to this package.","Published":"2017-05-19","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"AhoCorasickTrie","Version":"0.1.0","Title":"Fast Searching for Multiple Keywords in Multiple Texts","Description":"Aho-Corasick is an optimal algorithm for finding many\n keywords in a text. It can locate all matches in a text in O(N+M) time; i.e.,\n the time needed scales linearly with the number of keywords (N) and the size of\n the text (M). Compare this to the naive approach which takes O(N*M) time to loop\n through each pattern and scan for it in the text. This implementation builds the\n trie (the generic name of the data structure) and runs the search in a single\n function call. If you want to search multiple texts with the same trie, the\n function will take a list or vector of texts and return a list of matches to\n each text. By default, all 128 ASCII characters are allowed in both the keywords\n and the text. A more efficient trie is possible if the alphabet size can be\n reduced. For example, DNA sequences use at most 19 distinct characters and\n usually only 4; protein sequences use at most 26 distinct characters and usually\n only 20. UTF-8 (Unicode) matching is not currently supported.","Published":"2016-07-29","License":"Apache License 2.0","snapshot_date":"2017-06-23"}
{"Package":"ahp","Version":"0.2.11","Title":"Analytic Hierarchy Process","Description":"Model and analyse complex decision making problems\n using the Analytic Hierarchy Process (AHP) by Thomas Saaty.","Published":"2017-01-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"AHR","Version":"1.4.2","Title":"Estimation and Testing of Average Hazard Ratios","Description":"Methods for estimation of multivariate average hazard ratios as\n defined by Kalbfleisch and Prentice. The underlying survival functions of the\n event of interest in each group can be estimated using either the (weighted)\n Kaplan-Meier estimator or the Aalen-Johansen estimator for the transition\n probabilities in Markov multi-state models. Right-censored and left-truncated\n data is supported. Moreover, the difference in restricted mean survival can be\n estimated.","Published":"2016-08-28","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"AICcmodavg","Version":"2.1-1","Title":"Model Selection and Multimodel Inference Based on (Q)AIC(c)","Description":"Functions to implement model selection and multimodel inference based on Akaike's information criterion (AIC) and the second-order AIC (AICc), as well as their quasi-likelihood counterparts (QAIC, QAICc) from various model object classes. The package implements classic model averaging for a given parameter of interest or predicted values, as well as a shrinkage version of model averaging parameter estimates or effect sizes. The package includes diagnostics and goodness-of-fit statistics for certain model types including those of 'unmarkedFit' classes estimating demographic parameters after accounting for imperfect detection probabilities. Some functions also allow the creation of model selection tables for Bayesian models of the 'bugs' and 'rjags' classes. Functions also implement model selection using BIC. Objects following model selection and multimodel inference can be formatted to LaTeX using 'xtable' methods included in the package.","Published":"2017-06-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"AID","Version":"2.0","Title":"Box-Cox Power Transformation","Description":"Performs Box-Cox power transformation for different purposes, graphical approaches, assess the success of the transformation via tests and plots, computes mean and confidence interval for back transformed data.","Published":"2017-05-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"aidar","Version":"1.0.0","Title":"Tools for reading AIDA (http://aida.freehep.org/) files into R","Description":"Read objects from the AIDA file and make them available\n as dataframes in R","Published":"2013-12-11","License":"LGPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"AIG","Version":"0.1.6","Title":"Automatic Item Generator","Description":"A collection of Automatic Item Generators used mainly for\n psychological research. This package can generate linear syllogistic reasoning,\n arithmetic and 2D/3D/Double 3D spatial reasoning items. It is recommended for research\n purpose only.","Published":"2017-06-01","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"AIM","Version":"1.01","Title":"AIM: adaptive index model","Description":"R functions for adaptively constructing index models for\n continuous, binary and survival outcomes. Implementation\n requires loading R-pacakge \"survival\"","Published":"2010-04-05","License":"LGPL-2","snapshot_date":"2017-06-23"}
{"Package":"aimPlot","Version":"1.0.0","Title":"Create Pie Like Plot for Completeness","Description":"Create a pie like plot to visualise if the aim or several aims of a\n project is achieved or close to be achieved i.e the aim is achieved when the point is at the\n center of the pie plot. Imagine it's like a dartboard and the center means 100%\n completeness/achievement. Achievement can also be understood as 100%\n coverage. The standard distribution of completeness allocated in the pie plot\n is 50%, 80% and 100% completeness.","Published":"2016-04-22","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"airGR","Version":"1.0.5.12","Title":"Suite of GR Hydrological Models for Precipitation-Runoff\nModelling","Description":"Hydrological modelling tools developed\n at Irstea-Antony (HBAN Research Unit, France). The package includes several conceptual\n rainfall-runoff models (GR4H, GR4J, GR5J, GR6J, GR2M, GR1A), a snowmelt module (CemaNeige)\n and the associated functions for their calibration and evaluation. Use help(airGR) for package description.","Published":"2017-01-24","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"ajv","Version":"1.0.0","Title":"Another JSON Schema Validator","Description":"A thin wrapper around the 'ajv' JSON validation package for\n JavaScript. See for details.","Published":"2017-04-24","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"Ake","Version":"1.0","Title":"Associated Kernel Estimations","Description":"Continuous and discrete (count or categorical) estimation of density, probability mass function (p.m.f.) and regression functions are performed using associated kernels. The cross-validation technique and the local Bayesian procedure are also implemented for bandwidth selection.","Published":"2015-03-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"akima","Version":"0.6-2","Title":"Interpolation of Irregularly and Regularly Spaced Data","Description":"Several cubic spline interpolation methods of H. Akima for irregular and\n regular gridded data are available through this package, both for the bivariate case\n (irregular data: ACM 761, regular data: ACM 760) and univariate case (ACM 433 and ACM 697).\n Linear interpolation of irregular gridded data is also covered by reusing D. J. Renkas\n triangulation code which is part of Akimas Fortran code. A bilinear interpolator\n for regular grids was also added for comparison with the bicubic interpolator on\n regular grids.","Published":"2016-12-20","License":"ACM | file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"akmeans","Version":"1.1","Title":"Adaptive Kmeans algorithm based on threshold","Description":"Adaptive K-means algorithm with various threshold settings.\n It support two distance metric: \n Euclidean distance, Cosine distance (1 - cosine similarity)\n In version 1.1, it contains one more threshold condition.","Published":"2014-05-09","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"ALA4R","Version":"1.5.6","Title":"Atlas of Living Australia (ALA) Data and Resources in R","Description":"The Atlas of Living Australia (ALA) provides tools to enable users\n of biodiversity information to find, access, combine and visualise data on\n Australian plants and animals; these have been made available from\n . ALA4R provides a subset of the tools to be\n directly used within R. It enables the R community to directly access data\n and resources hosted by the ALA. Our goal is to enable outputs (e.g.\n observations of species) to be queried and output in a range of standard\n formats.","Published":"2017-02-18","License":"MPL-2.0","snapshot_date":"2017-06-23"}
{"Package":"alabama","Version":"2015.3-1","Title":"Constrained Nonlinear Optimization","Description":"Augmented Lagrangian Adaptive Barrier Minimization\n Algorithm for optimizing smooth nonlinear objective functions\n with constraints. Linear or nonlinear equality and inequality\n constraints are allowed.","Published":"2015-03-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"alakazam","Version":"0.2.7","Title":"Immunoglobulin Clonal Lineage and Diversity Analysis","Description":"Provides immunoglobulin (Ig) sequence lineage reconstruction,\n diversity profiling, and amino acid property analysis.","Published":"2017-06-15","License":"CC BY-SA 4.0","snapshot_date":"2017-06-23"}
{"Package":"ald","Version":"1.1","Title":"The Asymmetric Laplace Distribution","Description":"It provides the density, distribution function, quantile function, \n random number generator, likelihood function, moments and Maximum Likelihood estimators for a given sample, all this for\n the three parameter Asymmetric Laplace Distribution defined \n in Koenker and Machado (1999). This is a special case of the skewed family of distributions\n available in Galarza (2016) useful for quantile regression. ","Published":"2016-06-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"ALDqr","Version":"1.0","Title":"Quantile Regression Using Asymmetric Laplace Distribution","Description":"EM algorithm for estimation of parameters and other methods in a quantile regression. ","Published":"2017-01-22","License":"GPL (>= 3.0)","snapshot_date":"2017-06-23"}
{"Package":"aLFQ","Version":"1.3.4","Title":"Estimating Absolute Protein Quantities from Label-Free LC-MS/MS\nProteomics Data","Description":"Determination of absolute protein quantities is necessary for multiple applications, such as mechanistic modeling of biological systems. Quantitative liquid chromatography tandem mass spectrometry (LC-MS/MS) proteomics can measure relative protein abundance on a system-wide scale. To estimate absolute quantitative information using these relative abundance measurements requires additional information such as heavy-labeled references of known concentration. Multiple methods have been using different references and strategies; some are easily available whereas others require more effort on the users end. Hence, we believe the field might benefit from making some of these methods available under an automated framework, which also facilitates validation of the chosen strategy. We have implemented the most commonly used absolute label-free protein abundance estimation methods for LC-MS/MS modes quantifying on either MS1-, MS2-levels or spectral counts together with validation algorithms to enable automated data analysis and error estimation. Specifically, we used Monte-carlo cross-validation and bootstrapping for model selection and imputation of proteome-wide absolute protein quantity estimation. Our open-source software is written in the statistical programming language R and validated and demonstrated on a synthetic sample. ","Published":"2017-03-23","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"alfred","Version":"0.1.1","Title":"Downloading Time Series from ALFRED Database for Various\nVintages","Description":"Provides direct access to the ALFRED () and FRED () databases.\n Its functions return tidy data frames for different releases of the specified time series. \n Note that this product uses the FRED© API but is not endorsed or certified by the Federal Reserve Bank of St. Louis.","Published":"2017-06-10","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"AlgDesign","Version":"1.1-7.3","Title":"Algorithmic Experimental Design","Description":"Algorithmic experimental designs. Calculates exact and\n approximate theory experimental designs for D,A, and I\n criteria. Very large designs may be created. Experimental\n designs may be blocked or blocked designs created from a\n candidate list, using several criteria. The blocking can be\n done when whole and within plot factors interact.","Published":"2014-10-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"AlgebraicHaploPackage","Version":"1.2","Title":"Haplotype Two Snips Out of a Paired Group of Patients","Description":"Two unordered pairs of data of two different snips positions is haplotyped by resolving a small number ob closed equations.","Published":"2015-10-31","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"algorithmia","Version":"0.0.2","Title":"Allows you to Easily Interact with the Algorithmia Platform","Description":"The company, Algorithmia, houses the largest marketplace of online\n algorithms. This package essentially holds a bunch of REST wrappers that\n make it very easy to call algorithms in the Algorithmia platform and access\n files and directories in the Algorithmia data API. To learn more about the\n services they offer and the algorithms in the platform visit\n . More information for developers can be found at\n .","Published":"2016-09-13","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"algstat","Version":"0.0.2","Title":"Algebraic statistics in R","Description":"algstat provides functionality for algebraic statistics in R.\n Current applications include exact inference in log-linear models for\n contingency table data, analysis of ranked and partially ranked data, and\n general purpose tools for multivariate polynomials, building on the mpoly\n package. To aid in the process, algstat has ports to Macaulay2, Bertini,\n LattE-integrale and 4ti2.","Published":"2014-12-06","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"AlignStat","Version":"1.3.1","Title":"Comparison of Alternative Multiple Sequence Alignments","Description":"Methods for comparing two alternative multiple \n sequence alignments (MSAs) to determine whether they align homologous residues in \n the same columns as one another. It then classifies similarities and differences \n into conserved gaps, conserved sequence, merges, splits or shifts of one MSA relative \n to the other. Summarising these categories for each MSA column yields information \n on which sequence regions are agreed upon my both MSAs, and which differ. Several \n plotting functions enable easily visualisation of the comparison data for analysis.","Published":"2016-09-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"alineR","Version":"1.1.3","Title":"Alignment of Phonetic Sequences Using the 'ALINE' Algorithm","Description":"Functions are provided to calculate the 'ALINE' Distance between words. The score is based on phonetic features represented using the Unicode-compliant International Phonetic Alphabet (IPA). Parameterized features weights are used to determine the optimal alignment and functions are provided to estimate optimum values.","Published":"2016-04-01","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"ALKr","Version":"0.5.3.1","Title":"Generate Age-Length Keys for fish populations","Description":"A collection of functions that implement several algorithms for\n generating age-length keys for fish populations from incomplete data.","Published":"2014-02-26","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"allan","Version":"1.01","Title":"Automated Large Linear Analysis Node","Description":"Automated fitting of linear regression models and a\n stepwise routine","Published":"2012-10-29","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"allanvar","Version":"1.1","Title":"Allan Variance Analysis","Description":"A collection of tools for stochastic sensor error\n characterization using the Allan Variance technique originally\n developed by D. Allan.","Published":"2015-07-08","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"alleHap","Version":"0.9.7","Title":"Allele Imputation and Haplotype Reconstruction from Pedigree\nDatabases","Description":"Tools to simulate alphanumeric alleles, impute genetic missing data and reconstruct non-recombinant haplotypes from pedigree databases in a deterministic way. Allelic simulations can be implemented taking into account many factors (such as number of families, markers, alleles per marker,\n probability and proportion of missing genotypes, recombination rate, etc).\n Genotype imputation can be used with simulated datasets or real databases (previously loaded in .ped format). Haplotype reconstruction can be carried\n out even with missing data, since the program firstly imputes each family genotype (without a reference panel), to later reconstruct the corresponding\n haplotypes for each family member. All this considering that each individual (due to meiosis) should unequivocally have two alleles per marker (one inherited\n from each parent) and thus imputation and reconstruction results can be deterministically calculated.","Published":"2016-07-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"allelematch","Version":"2.5","Title":"Identifying unique multilocus genotypes where genotyping error\nand missing data may be present","Description":"This package provides tools for the identification of unique of multilocus genotypes when both genotyping error and missing data may be present. The package is targeted at those working with large datasets and databases containing multiple samples of each individual, a situation that is common in conservation genetics, and particularly in non-invasive wildlife sampling applications. Functions explicitly incorporate missing data, and can tolerate allele mismatches created by genotyping error. If you use this tool, please cite the package using the journal article in Molecular Ecology Resources (Galpern et al., 2012). Please use citation('allelematch') to find this. Due to changing CRAN policy, and the size and compile time of the vignettes, they can no longer be distributed with this package. Please contact the package primary author, or visit the allelematch site for a complete vignette (http://nricaribou.cc.umanitoba.ca/allelematch/). For users with access to academic literature, tutorial material is also available as supplementary material to the article describing this software. ","Published":"2014-09-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"AlleleRetain","Version":"1.3.1","Title":"Allele Retention, Inbreeding, and Demography","Description":"Simulate the effect of management or demography on allele\n retention and inbreeding accumulation in bottlenecked\n populations of animals with overlapping generations.","Published":"2013-06-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"allelic","Version":"0.1","Title":"A fast, unbiased and exact allelic exact test","Description":"This is the implementation in R+C of a new association\n test described in \"A fast, unbiased and exact allelic exact\n test for case-control association studies\" (Submitted). It\n appears that in most cases the classical chi-square test used\n for testing for allelic association on genotype data is biased.\n Our test is unbiased, exact but fast throught careful\n optimization.","Published":"2006-05-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"AllPossibleSpellings","Version":"1.1","Title":"Computes all of a word's possible spellings","Description":"Contains functions possSpells.fnc and\n batch.possSpells.fnc.","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"alluvial","Version":"0.1-2","Title":"Alluvial Diagrams","Description":"Creating alluvial diagrams (also known as parallel sets plots) for multivariate\n and time series-like data.","Published":"2016-09-09","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"alphabetr","Version":"0.2.2","Title":"Algorithms for High-Throughput Sequencing of Antigen-Specific T\nCells","Description":"Provides algorithms for frequency-based pairing of alpha-beta T\n cell receptors.","Published":"2017-01-28","License":"AGPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"alphahull","Version":"2.1","Title":"Generalization of the Convex Hull of a Sample of Points in the\nPlane","Description":"Computation of the alpha-shape and alpha-convex\n hull of a given sample of points in the plane. The concepts of\n alpha-shape and alpha-convex hull generalize the definition of\n the convex hull of a finite set of points. The programming is\n based on the duality between the Voronoi diagram and Delaunay\n triangulation. The package also includes a function that\n returns the Delaunay mesh of a given sample of points and its\n dual Voronoi diagram in one single object.","Published":"2016-02-15","License":"file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"alphaOutlier","Version":"1.2.0","Title":"Obtain Alpha-Outlier Regions for Well-Known Probability\nDistributions","Description":"Given the parameters of a distribution, the package uses the concept of alpha-outliers by Davies and Gather (1993) to flag outliers in a data set. See Davies, L.; Gather, U. (1993): The identification of multiple outliers, JASA, 88 423, 782-792, for details.","Published":"2016-09-09","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"alphashape3d","Version":"1.2","Title":"Implementation of the 3D Alpha-Shape for the Reconstruction of\n3D Sets from a Point Cloud","Description":"Implementation in R of the alpha-shape of a finite set of points in the three-dimensional space. The alpha-shape generalizes the convex hull and allows to recover the shape of non-convex and even non-connected sets in 3D, given a random sample of points taken into it. Besides the computation of the alpha-shape, this package provides users with functions to compute the volume of the alpha-shape, identify the connected components and facilitate the three-dimensional graphical visualization of the estimated set. ","Published":"2016-02-09","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"alr3","Version":"2.0.5","Title":"Data to accompany Applied Linear Regression 3rd edition","Description":"This package is a companion to the textbook S. Weisberg (2005), \n \"Applied Linear Regression,\" 3rd edition, Wiley. It includes all the\n data sets discussed in the book (except one), and a few functions that \n are tailored to the methods discussed in the book. As of version 2.0.0,\n this package depends on the car package. Many functions formerly \n in alr3 have been renamed and now reside in car. \n Data files have beeen lightly modified to make some data columns row labels.","Published":"2011-10-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"alr4","Version":"1.0.5","Title":"Data to accompany Applied Linear Regression 4rd edition","Description":"This package is a companion to the textbook S. Weisberg (2014), \n \"Applied Linear Regression,\" 4rd edition, Wiley. It includes all the\n data sets discussed in the book and one function to access the textbook's\n website. \n This package depends on the car package. Many data files in this package\n are included in the alr3 package as well, so only one of them should be\n loaded.","Published":"2014-09-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"ALS","Version":"0.0.6","Title":"Multivariate Curve Resolution Alternating Least Squares\n(MCR-ALS)","Description":"Alternating least squares is often used to resolve\n components contributing to data with a bilinear structure; the\n basic technique may be extended to alternating constrained\n least squares. Commonly applied constraints include\n unimodality, non-negativity, and normalization of components.\n Several data matrices may be decomposed simultaneously by\n assuming that one of the two matrices in the bilinear\n decomposition is shared between datasets.","Published":"2015-08-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"ALSCPC","Version":"1.0","Title":"Accelerated line search algorithm for simultaneous orthogonal\ntransformation of several positive definite symmetric matrices\nto nearly diagonal form","Description":"Using of the accelerated line search algorithm for simultaneously diagonalize a set of symmetric positive definite matrices.","Published":"2013-09-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"ALSM","Version":"0.2.0","Title":"Companion to Applied Linear Statistical Models","Description":"Functions and Data set presented in Applied Linear Statistical Models Fifth Edition (Chapters 1-9 and 16-25), Michael H. Kutner; Christopher J. Nachtsheim; John Neter; William Li, 2005. (ISBN-10: 0071122214, ISBN-13: 978-0071122214) that do not exist in R, are gathered in this package. The whole book will be covered in the next versions.","Published":"2017-03-07","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"}
{"Package":"alterryx","Version":"0.2.0","Title":"An 'API' Client for the 'Alteryx' Gallery","Description":"A tool to access each of the 'Alteryx' Gallery 'API' endpoints.\n Users can queue jobs, poll job status, and retrieve application output as\n a data frame. You will need an 'Alteryx' Server license and have 'Alteryx'\n Gallery running to utilize this package. The 'API' is accessed through the\n 'URL' that you setup for the server running 'Alteryx' Gallery and more\n information on the endpoints can be found at\n .","Published":"2017-03-22","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"altmeta","Version":"2.2","Title":"Alternative Meta-Analysis Methods","Description":"Provides alternative statistical methods for meta-analysis, including new heterogeneity tests and measures that are robust to outliers.","Published":"2016-09-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"ALTopt","Version":"0.1.1","Title":"Optimal Experimental Designs for Accelerated Life Testing","Description":"Creates the optimal (D, U and I) designs for the accelerated life\n testing with right censoring or interval censoring. It uses generalized \n linear model (GLM) approach to derive the asymptotic variance-covariance \n matrix of regression coefficients. The failure time distribution is assumed \n to follow Weibull distribution with a known shape parameter and log-linear \n link functions are used to model the relationship between failure time \n parameters and stress variables. The acceleration model may have multiple \n stress factors, although most ALTs involve only two or less stress factors. \n ALTopt package also provides several plotting functions including contour plot,\n Fraction of Use Space (FUS) plot and Variance Dispersion graphs of Use Space\n (VDUS) plot.","Published":"2015-08-28","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"amap","Version":"0.8-14","Title":"Another Multidimensional Analysis Package","Description":"Tools for Clustering and Principal Component Analysis\n (With robust methods, and parallelized functions).","Published":"2014-12-17","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"AMAP.Seq","Version":"1.0","Title":"Compare Gene Expressions from 2-Treatment RNA-Seq Experiments","Description":"An Approximated Most Average Powerful Test with Optimal\n FDR Control with Application to RNA-seq Data","Published":"2012-06-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"AMCP","Version":"0.0.4","Title":"A Model Comparison Perspective","Description":"Accompanies \"Designing experiments and \n analyzing data: A model comparison perspective\" (3rd ed.) by \n Maxwell, Delaney, & Kelley (forthcoming from Routledge). \n Contains all of the data sets in the book's chapters and \n end-of-chapter exercises. Information about the book is available at \n .","Published":"2017-02-18","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"AMCTestmakeR","Version":"0.1.0","Title":"Generate LaTeX Code for Auto-Multiple-Choice (AMC)","Description":"Generate code for use with the Optical Mark Recognition free software Auto Multiple Choice (AMC). More specifically, this package provides functions that use as input the question and answer texts, and output the LaTeX code for AMC.","Published":"2017-03-29","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"ameco","Version":"0.2.7","Title":"European Commission Annual Macro-Economic (AMECO) Database","Description":"Annual macro-economic database provided by the European Commission.","Published":"2017-05-29","License":"CC0","snapshot_date":"2017-06-23"}
{"Package":"amei","Version":"1.0-7","Title":"Adaptive Management of Epidemiological Interventions","Description":"\n This package provides a flexible statistical framework for generating optimal \n epidemiological interventions that are designed to minimize the total expected\n cost of an emerging epidemic while simultaneously propagating uncertainty regarding \n underlying disease parameters through to the decision process via Bayesian posterior\n inference. The strategies produced through this framework are adaptive: vaccination \n schedules are iteratively adjusted to reflect the anticipated trajectory of the \n epidemic given the current population state and updated parameter estimates.","Published":"2013-12-13","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"Amelia","Version":"1.7.4","Title":"A Program for Missing Data","Description":"A tool that \"multiply imputes\" missing data in a single cross-section\n (such as a survey), from a time series (like variables collected for\n each year in a country), or from a time-series-cross-sectional data\n set (such as collected by years for each of several countries).\n Amelia II implements our bootstrapping-based algorithm that gives\n essentially the same answers as the standard IP or EMis approaches,\n is usually considerably faster than existing approaches and can\n handle many more variables. Unlike Amelia I and other statistically\n rigorous imputation software, it virtually never crashes (but please\n let us know if you find to the contrary!). The program also\n generalizes existing approaches by allowing for trends in time series\n across observations within a cross-sectional unit, as well as priors\n that allow experts to incorporate beliefs they have about the values\n of missing cells in their data. Amelia II also includes useful\n diagnostics of the fit of multiple imputation models. The program\n works from the R command line or via a graphical user interface that\n does not require users to know R.","Published":"2015-12-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"amen","Version":"1.3","Title":"Additive and Multiplicative Effects Models for Networks and\nRelational Data","Description":"Analysis of dyadic network and relational data using additive and\n multiplicative effects (AME) models. The basic model includes\n regression terms, the covariance structure of the social relations model\n (Warner, Kenny and Stoto (1979) , \n Wong (1982) ), and multiplicative factor\n models (Hoff(2009) ). \n Four different link functions accommodate different\n relational data structures, including binary/network data (bin), normal\n relational data (nrm), ordinal relational data (ord) and data from\n fixed-rank nomination schemes (frn). Several of these link functions are\n discussed in Hoff, Fosdick, Volfovsky and Stovel (2013) \n . Development of this\n software was supported in part by NIH grant R01HD067509.","Published":"2017-05-25","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"AmericanCallOpt","Version":"0.95","Title":"This package includes pricing function for selected American\ncall options with underlying assets that generate payouts","Description":"This package includes a set of pricing functions for\n American call options. The following cases are covered: Pricing\n of an American call using the standard binomial approximation;\n Hedge parameters for an American call with a standard binomial\n tree; Binomial pricing of an American call with continuous\n payout from the underlying asset; Binomial pricing of an\n American call with an underlying stock that pays proportional\n dividends in discrete time; Pricing of an American call on\n futures using a binomial approximation; Pricing of a currency\n futures American call using a binomial approximation; Pricing\n of a perpetual American call. The user should kindly notice\n that this material is for educational purposes only. The codes\n are not optimized for computational efficiency as they are\n meant to represent standard cases of analytical and numerical\n solution.","Published":"2012-03-04","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"AMGET","Version":"1.0","Title":"Post-processing tool for ADAPT 5","Description":"AMGET allows to simply and rapidly creates highly informative diagnostic plots for ADAPT 5 models. Features include data analysis prior any modeling form either NONMEM or ADAPT shaped dataset, goodness-of-fit plots (GOF), posthoc-fits plots (PHF), parameters distribution plots (PRM) and visual predictive check plots (VPC) based on ADAPT output.","Published":"2013-08-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"aml","Version":"0.1-1","Title":"Adaptive Mixed LASSO","Description":"This package implements the adaptive mixed lasso (AML) method proposed by Wang et al.(2011). AML applies adaptive lasso penalty to a large number of predictors, thus producing a sparse model, while accounting for the population structure in the linear mixed model framework. The package here is primarily designed for application to genome wide association studies or genomic prediction in plant breeding populations, though it could be applied to other settings of linear mixed models.","Published":"2013-11-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"AMModels","Version":"0.1.2","Title":"Adaptive Management Model Manager","Description":"Helps enable adaptive management by codifying knowledge in the\n form of models generated from numerous analyses and data sets. Facilitates\n this process by storing all models and data sets in a single object that can\n be updated and saved, thus tracking changes in knowledge through time. A shiny\n application called AM Model Manager (modelMgr()) enables the use of these\n functions via a GUI.","Published":"2017-02-21","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"AmmoniaConcentration","Version":"0.1","Title":"Un-Ionized Ammonia Concentration","Description":"Provides a function to calculate the concentration of un-ionized ammonia in the total ammonia in aqueous solution using the pH and temperature values.","Published":"2016-12-05","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"AMOEBA","Version":"1.1","Title":"A Multidirectional Optimum Ecotope-Based Algorithm","Description":"A function to calculate spatial clusters using the Getis-Ord local statistic. It searches\n irregular clusters (ecotopes) on a map.","Published":"2014-08-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"AMORE","Version":"0.2-15","Title":"A MORE flexible neural network package","Description":"This package was born to release the TAO robust neural\n network algorithm to the R users. It has grown and I think it\n can be of interest for the users wanting to implement their own\n training algorithms as well as for those others whose needs lye\n only in the \"user space\".","Published":"2014-04-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"AmostraBrasil","Version":"1.2","Title":"Generates Samples or Complete List of Brazilian IBGE (Instituto\nBrasileiro De Geografia e Estatistica) Census Households,\nGeocoding it by Google Maps","Description":"Generates samples or complete list of Brazilian IBGE (Instituto Brasileiro de Geografia e Estatistica, see\n for more information) census\n households, geocoding it by Google Maps. The package connects IBGE site and\n downloads maps and census data.","Published":"2016-07-26","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"ampd","Version":"0.2","Title":"An Algorithm for Automatic Peak Detection in Noisy Periodic and\nQuasi-Periodic Signals","Description":"A method for automatic detection of peaks in noisy periodic and quasi-periodic signals. This method, called automatic multiscale-based peak detection (AMPD), is based on the calculation and analysis of the local maxima scalogram, a matrix comprising the scale-dependent occurrences of local maxima.\n For further information see .","Published":"2016-12-11","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"AmpliconDuo","Version":"1.1","Title":"Statistical Analysis of Amplicon Data of the Same Sample to\nIdentify Artefacts","Description":"Increasingly powerful techniques for high-throughput sequencing open the possibility to comprehensively characterize microbial communities, including rare species. However, a still unresolved issue are the substantial error rates in the experimental process generating these sequences. To overcome these limitations we propose an approach, where each sample is split and the same amplification and sequencing protocol is applied to both halves. This procedure should allow to detect likely PCR and sequencing artifacts, and true rare species by comparison of the results of both parts. The AmpliconDuo package, whereas amplicon duo from here on refers to the two amplicon data sets of a split sample, is intended to help interpret the obtained read frequency distribution across split samples, and to filter the false positive reads.","Published":"2016-01-19","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"AmyloGram","Version":"1.0","Title":"Prediction of Amyloid Proteins","Description":"Predicts amyloid proteins using random forests trained on the\n n-gram encoded peptides. The implemented algorithm can be accessed from\n both the command line and shiny-based GUI.","Published":"2016-09-24","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"anacor","Version":"1.1-3","Title":"Simple and Canonical Correspondence Analysis","Description":"Performs simple and canonical CA (covariates on rows/columns) on a two-way frequency table (with missings) by means of SVD. Different scaling methods (standard, centroid, Benzecri, Goodman) as well as various plots including confidence ellipsoids are provided. ","Published":"2017-05-01","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"analogsea","Version":"0.5.0","Title":"Interface to 'Digital Ocean'","Description":"Provides a set of functions for interacting with the 'Digital\n Ocean' API at , including\n creating images, destroying them, rebooting, getting details on regions, and\n available images.","Published":"2016-11-11","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"analogue","Version":"0.17-0","Title":"Analogue and Weighted Averaging Methods for Palaeoecology","Description":"Fits Modern Analogue Technique and Weighted Averaging transfer \n \t function models for prediction of environmental data from species \n\t data, and related methods used in palaeoecology.","Published":"2016-02-28","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"analogueExtra","Version":"0.1-1","Title":"Additional Functions for Use with the Analogue Package","Description":"Provides additional functionality for the analogue package\n\t that is not required by all users of the main package.","Published":"2016-04-10","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"analytics","Version":"2.0","Title":"Regression Outlier Detection, Stationary Bootstrap, Testing Weak\nStationarity, and Other Tools for Data Analysis","Description":"Current version includes outlier detection in a fitted linear model, stationary bootstrap using a truncated geometric distribution, a comprehensive test for weak stationarity, column means by group, weighted biplots, and a heuristic to obtain a better initial configuration in non-metric MDS.","Published":"2017-06-14","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"analyz","Version":"1.4","Title":"Model Layer for Automatic Data Analysis via CSV File\nInterpretation","Description":"Class with methods to read and execute R commands described as steps in a CSV file.","Published":"2015-02-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"AnalyzeFMRI","Version":"1.1-16","Title":"Functions for analysis of fMRI datasets stored in the ANALYZE or\nNIFTI format","Description":"Functions for I/O, visualisation and analysis of\n functional Magnetic Resonance Imaging (fMRI) datasets stored in\n the ANALYZE or NIFTI format.","Published":"2013-03-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"AnalyzeTS","Version":"2.2","Title":"Analyze Fuzzy Time Series","Description":"Analyze fuzzy time series by Chen, Singh, Heuristic and Chen-Hsu models. The Abbasov-Mamedova and NFTS models is included as well.","Published":"2016-11-24","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"}
{"Package":"anapuce","Version":"2.2","Title":"Tools for microarray data analysis","Description":"This package contains functions for\n normalisation,differentially analysis of microarray data and\n others functions implementing recent methods developed by the\n Statistic and Genom Team from UMR 518 AgroParisTech/INRA Appl.\n Math. Comput. Sc.","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"AncestryMapper","Version":"2.0","Title":"Assigning Ancestry Based on Population References","Description":"Assigns genetic ancestry to an individual and\n studies relationships between local and global populations.","Published":"2016-09-28","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"anchoredDistr","Version":"1.0.3","Title":"Post-Processing for the Method of Anchored Distributions","Description":"Supplements the 'MAD#' software (see , \n or Osorio-Murillo, et al. (2015) ) that\n implements the Method of Anchored Distributions for inferring geostatistical\n parameters (see Rubin, et al. (2010) ). Reads 'MAD#' \n result databases, performs dimension reduction on inversion data, calculates\n likelihoods and posteriors, and tests for convergence. Also generates plots \n to summarize results.","Published":"2017-06-20","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"anchors","Version":"3.0-8","Title":"Statistical analysis of surveys with anchoring vignettes","Description":"Tools for analyzing survey responses with anchors.","Published":"2016-12-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"AnDE","Version":"1.0","Title":"An extended Bayesian Learning Technique developed by Dr. Geoff\nWebb","Description":"AODE achieves highly accurate classification by averaging over all\n of a small space.","Published":"2013-07-25","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"andrews","Version":"1.0","Title":"Andrews curves","Description":"Andrews curves for visualization of multidimensional data","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"anesrake","Version":"0.75","Title":"ANES Raking Implementation","Description":"Provides a comprehensive system for selecting\n variables and weighting data to match the specifications of the American\n National Election Studies. The package includes methods for identifying\n discrepant variables, raking data, and assessing the effects of the raking\n algorithm. It also allows automated re-raking if target variables fall\n outside identified bounds and allows greater user specification than other\n available raking algorithms. A variety of simple weighted statistics that\n were previously in this package (version .55 and earlier) have been moved to\n the package 'weights.'","Published":"2016-06-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"anfis","Version":"0.99.1","Title":"Adaptive Neuro Fuzzy Inference System in R","Description":"The package implements ANFIS Type 3 Takagi and Sugeno's fuzzy\n if-then rule network with the following features: (1) Independent number of\n membership functions(MF) for each input, and also different MF extensible\n types. (2) Type 3 Takagi and Sugeno's fuzzy if-then rule (3) Full Rule\n combinations, e.g. 2 inputs 2 membership funtions -> 4 fuzzy rules (4)\n Hibrid learning, i.e. Descent Gradient for precedents and Least Squares\n Estimation for consequents (5) Multiple outputs.","Published":"2015-01-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"AnglerCreelSurveySimulation","Version":"0.2.1","Title":"Simulate a Bus Route Creel Survey of Anglers","Description":"Create an angler population, sample the population with a user-specified survey times, and calculate metrics from a bus route-type creel survey.","Published":"2015-01-04","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"angstroms","Version":"0.0.1","Title":"Tools for 'ROMS' the Regional Ocean Modeling System","Description":"Helper functions for working with Regional Ocean Modeling System 'ROMS' output. See\n for more information about 'ROMS'. ","Published":"2017-05-01","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"aniDom","Version":"0.1.1","Title":"Inferring Dominance Hierarchies and Estimating Uncertainty","Description":"Provides: (1) Tools to infer dominance hierarchies based on calculating Elo scores, but with custom functions to improve estimates in animals with relatively stable dominance ranks. (2) Tools to plot the shape of the dominance hierarchy and estimate the uncertainty of a given data set.","Published":"2017-04-28","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"anim.plots","Version":"0.2","Title":"Simple Animated Plots for R","Description":"Simple animated versions of basic R plots, using the 'animation'\n package. Includes animated versions of plot, barplot, persp, contour,\n filled.contour, hist, curve, points, lines, text, symbols, segments, and\n arrows.","Published":"2017-05-14","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"animalTrack","Version":"1.0.0","Title":"Animal track reconstruction for high frequency 2-dimensional\n(2D) or 3-dimensional (3D) movement data","Description":"2D and 3D animal tracking data can be used to reconstruct tracks through time/space with correction based on known positions. 3D visualization of animal position and attitude.","Published":"2013-09-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"animation","Version":"2.5","Title":"A Gallery of Animations in Statistics and Utilities to Create\nAnimations","Description":"Provides functions for animations in statistics, covering topics\n in probability theory, mathematical statistics, multivariate statistics,\n non-parametric statistics, sampling survey, linear models, time series,\n computational statistics, data mining and machine learning. These functions\n may be helpful in teaching statistics and data analysis. Also provided in\n this package are a series of functions to save animations to various formats,\n e.g. Flash, 'GIF', HTML pages, 'PDF' and videos. 'PDF' animations can be\n inserted into 'Sweave' / 'knitr' easily.","Published":"2017-03-30","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"ANLP","Version":"1.3","Title":"Build Text Prediction Model","Description":"Library to sample and clean text data, build N-gram model, Backoff algorithm etc.","Published":"2016-07-11","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"anMC","Version":"0.1.0","Title":"Compute High Dimensional Orthant Probabilities","Description":"Computationally efficient method to estimate orthant probabilities of high-dimensional Gaussian vectors. Further implements a function to compute conservative estimates of excursion sets under Gaussian random field priors. ","Published":"2017-05-04","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"AnnotationBustR","Version":"1.0","Title":"Extract Subsequences from GenBank Annotations","Description":"Extraction of subsequences into FASTA files from GenBank annotations where gene names may vary among accessions.","Published":"2017-03-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"AnnotLists","Version":"1.2","Title":"AnnotLists: A tool to annotate multiple lists from a specific\nannotation file","Description":"Annotate multiple lists from a specific annotation file.","Published":"2011-10-23","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"anocva","Version":"0.1.0","Title":"A Non-Parametric Statistical Test to Compare Clustering\nStructures","Description":"Provides ANOCVA (ANalysis Of Cluster VAriability), a non-parametric statistical test\n to compare clustering structures with applications in functional magnetic resonance imaging\n data (fMRI). The ANOCVA allows us to compare the clustering structure of multiple groups\n simultaneously and also to identify features that contribute to the differential clustering.","Published":"2016-12-28","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"anoint","Version":"1.4","Title":"Analysis of Interactions","Description":"The tools in this package are intended to help researchers assess multiple treatment-covariate interactions with data from a parallel-group randomized controlled clinical trial.","Published":"2015-07-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"ANOM","Version":"0.5","Title":"Analysis of Means","Description":"Analysis of means (ANOM) as used in technometrical computing. The package takes results from multiple comparisons with the grand mean (obtained with 'multcomp', 'SimComp', 'nparcomp', or 'MCPAN') or corresponding simultaneous confidence intervals as input and produces ANOM decision charts that illustrate which group means deviate significantly from the grand mean.","Published":"2017-04-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"anomalyDetection","Version":"0.1.1","Title":"Implementation of Augmented Network Log Anomaly Detection\nProcedures","Description":"Implements procedures to aid in detecting network log anomalies.\n By combining various multivariate analytic approaches relevant to network\n anomaly detection, it provides cyber analysts efficient means to detect\n suspected anomalies requiring further evaluation.","Published":"2017-03-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"anominate","Version":"0.5","Title":"alpha-NOMINATE Ideal Point Estimator","Description":"Fits ideal point model described in Carroll, Lewis, Lo, Poole and Rosenthal, \"The Structure of Utility in Models of Spatial Voting,\" American Journal of Political Science 57(4): 1008--1028.","Published":"2014-10-09","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"anonymizer","Version":"0.2.0","Title":"Anonymize Data Containing Personally Identifiable Information","Description":"Allows users to quickly and easily anonymize data containing\n Personally Identifiable Information (PII) through convenience functions.","Published":"2015-09-08","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"ANOVAreplication","Version":"1.0.0","Title":"Test ANOVA Replications by Means of the Prior Predictive p-Value","Description":"Allows for the computation of a prior predictive p-value to test replication of relevant features of original ANOVA studies. Relevant features are captured in informative hypotheses. The package also allows for the computation of sample sizes for new studies, and comes with a Shiny application in which all calculations can be conducted as well. ","Published":"2017-04-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"AntAngioCOOL","Version":"1.2","Title":"Anti-Angiogenic Peptide Prediction","Description":"Machine learning based package to predict anti-angiogenic peptides using heterogeneous sequence descriptors. 'AntAngioCOOL' exploits five descriptor types of a peptide of interest to do prediction including: pseudo amino acid composition, k-mer composition, k-mer composition (reduced alphabet), physico-chemical profile and atomic profile. According to the obtained results, 'AntAngioCOOL' reached to a satisfactory performance in anti-angiogenic peptide prediction on a benchmark non-redundant independent test dataset.","Published":"2016-08-01","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"antaresProcessing","Version":"0.10.2","Title":"Antares Results Processing","Description":"\n Process results generated by Antares, a powerful software developed by\n RTE to simulate and study electric power systems (more information about\n Antares here: ). This package provides\n functions to create new columns like net load, load factors, upward and\n downward margins or to compute aggregated statistics like economic surpluses\n of consumers, producers and sectors.","Published":"2017-05-24","License":"GPL (>= 2) | file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"antaresRead","Version":"1.1.3","Title":"Import, Manipulate and Explore the Results of an Antares\nSimulation","Description":"Import, manipulate and explore results generated by Antares, a \n powerful software developed by RTE to simulate and study electric power systems\n (more information about Antares here: ).","Published":"2017-05-30","License":"GPL (>= 2) | file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"antaresViz","Version":"0.10","Title":"Antares Visualizations","Description":"Visualize results generated by Antares, a powerful software\n developed by RTE to simulate and study electric power systems\n (more information about Antares here: ).\n This package provides functions that create interactive charts to help\n Antares users visually explore the results of their simulations.","Published":"2017-06-20","License":"GPL (>= 2) | file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"AnthropMMD","Version":"1.0.1","Title":"A GUI for Mean Measures of Divergence","Description":"Offers a complete and interactive GUI to work out Mean Measures of Divergence, especially for anthropologists.","Published":"2016-04-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"Anthropometry","Version":"1.8","Title":"Statistical Methods for Anthropometric Data","Description":"Statistical methodologies especially developed to analyze anthropometric data. These methods are aimed \t\tat providing effective solutions to some commons problems related to Ergonomics and Anthropometry. They are based on clustering, the \t\tstatistical concept of data depth, statistical shape analysis and archetypal analysis.","Published":"2017-04-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"antitrust","Version":"0.95.1","Title":"Tools for Antitrust Practitioners","Description":"A collection of tools for antitrust practitioners, including the ability to calibrate different consumer demand systems and simulate the effects mergers under different competitive regimes.","Published":"2015-11-23","License":"Unlimited","snapshot_date":"2017-06-23"}
{"Package":"antiword","Version":"1.1","Title":"Extract Text from Microsoft Word Documents","Description":"Wraps the 'AntiWord' utility to extract text from Microsoft\n Word documents. The utility only supports the old 'doc' format, not the \n new xml based 'docx' format. Use the 'xml2' package to read the latter.","Published":"2017-04-29","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"AntWeb","Version":"0.7","Title":"programmatic interface to the AntWeb","Description":"A complete programmatic interface to the AntWeb database from the\n California Academy of Sciences.","Published":"2014-08-14","License":"CC0","snapshot_date":"2017-06-23"}
{"Package":"anytime","Version":"0.3.0","Title":"Anything to 'POSIXct' or 'Date' Converter","Description":"Convert input in any one of character, integer, numeric, factor,\n or ordered type into 'POSIXct' (or 'Date') objects, using one of a number of\n predefined formats, and relying on Boost facilities for date and time parsing.","Published":"2017-06-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"aod","Version":"1.3","Title":"Analysis of Overdispersed Data","Description":"This package provides a set of functions to analyse\n overdispersed counts or proportions. Most of the methods are\n already available elsewhere but are scattered in different\n packages. The proposed functions should be considered as\n complements to more sophisticated methods such as generalized\n estimating equations (GEE) or generalized linear mixed effect\n models (GLMM).","Published":"2012-04-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"aods3","Version":"0.4-1","Title":"Analysis of Overdispersed Data using S3 methods","Description":"This package provides functions to analyse overdispersed\n counts or proportions. These functions should be considered as\n complements to more sophisticated methods such as generalized\n estimating equations (GEE) or generalized linear mixed effect\n models (GLMM). aods3 is an S3 re-implementation of the\n deprecated S4 package aod.","Published":"2013-06-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"aoos","Version":"0.5.0","Title":"Another Object Orientation System","Description":"Another implementation of object-orientation in R. It provides\n syntactic sugar for the S4 class system and two alternative new\n implementations. One is an experimental version built around S4\n and the other one makes it more convenient to work with lists as objects.","Published":"2017-05-07","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"aop","Version":"1.0.0","Title":"Adverse Outcome Pathway Analysis","Description":"Provides tools for analyzing adverse outcome pathways\n (AOPs) for pharmacological and toxicological research. Functionality\n includes the ability to perform causal network analysis of networks\n developed in and exported from Cytoscape or existing as R graph objects, and\n identifying the point of departure/screening/risk value from concentration-\n response data.","Published":"2016-12-05","License":"CC0","snapshot_date":"2017-06-23"}
{"Package":"aoristic","Version":"0.6","Title":"aoristic analysis with spatial output (kml)","Description":"'Aoristic' is one of the past tenses in Greek and represents an\n uncertain occurrence time. Aoristic analysis suggested by Ratcliffe (2002)\n is a method to analyze events that do not have exact times of occurrence\n but have starting times and ending times. For example, a property crime\n database (e.g., burglary) typically has a starting time and ending time of\n the crime that could have occurred. Aoristic analysis allocates the\n probability of a crime incident occurring at every hour over a 24-hour\n period. The probability is aggregated over a study area to create an\n aoristic graph.\n Using crime incident data with lat/lon, DateTimeFrom, and\n DateTimeTo, functions in this package create a total of three (3) kml\n files and corresponding aoristic graphs: 1) density and contour; 2) grid\n count; and 3) shapefile boundary. (see also:\n https://sites.google.com/site/georgekick/software)","Published":"2015-01-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"apa","Version":"0.2.0","Title":"Format Outputs of Statistical Tests According to APA Guidelines","Description":"Formatter functions in the 'apa' package take the return value of a\n statistical test function, e.g. a call to chisq.test() and return a string\n formatted according to the guidelines of the APA (American Psychological\n Association).","Published":"2017-02-27","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"ApacheLogProcessor","Version":"0.2.2","Title":"Process the Apache Web Server Log Files","Description":"Provides capabilities to process Apache HTTPD Log files.The main functionalities are to extract data from access and error log files to data frames.","Published":"2017-03-29","License":"LGPL-3 | file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"apaStyle","Version":"0.5","Title":"Generate APA Tables for MS Word","Description":"Most psychological journals require that tables in a manuscript\n comply to APA (American Association of Psychology) standards. Creating APA\n tables manually is often time consuming and prone to transcription errors.\n This package generates tables for MS Word ('.docx' extension) in APA format\n automatically with just a few lines of code.","Published":"2017-03-29","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"apaTables","Version":"1.5.1","Title":"Create American Psychological Association (APA) Style Tables","Description":"A common task faced by researchers is the creation of APA style\n (i.e., American Psychological Association style) tables from statistical\n output. In R a large number of function calls are often needed to obtain all of\n the desired information for a single APA style table. As well, the process of\n manually creating APA style tables in a word processor is prone to transcription\n errors. This package creates Word files (.doc files) containing APA style tables\n for several types of analyses. Using this package minimizes transcription errors\n and reduces the number commands needed by the user.","Published":"2017-06-20","License":"MIT License + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"apc","Version":"1.3","Title":"Age-Period-Cohort Analysis","Description":"Functions for age-period-cohort analysis. The data can be organised in matrices indexed by age-cohort, age-period or cohort-period. The data can include dose and response or just doses. The statistical model is a generalized linear model (GLM) allowing for 3,2,1 or 0 of the age-period-cohort factors. The canonical parametrisation of Kuang, Nielsen and Nielsen (2008) is used. Thus, the analysis does not rely on ad hoc identification.","Published":"2016-12-01","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"apcluster","Version":"1.4.3","Title":"Affinity Propagation Clustering","Description":"Implements Affinity Propagation clustering introduced by Frey and\n\tDueck (2007) . The algorithms are largely\n analogous to the 'Matlab' code published by Frey and Dueck.\n The package further provides leveraged affinity propagation and an\n algorithm for exemplar-based agglomerative clustering that can also be\n used to join clusters obtained from affinity propagation. Various\n plotting functions are available for analyzing clustering results.","Published":"2016-02-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"apdesign","Version":"1.0.0","Title":"An Implementation of the Additive Polynomial Design Matrix","Description":"An implementation of the additive polynomial (AP) design matrix. It\n constructs and appends an AP design matrix to a data frame for use with\n longitudinal data subject to seasonality.","Published":"2016-11-15","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"ape","Version":"4.1","Title":"Analyses of Phylogenetics and Evolution","Description":"Functions for reading, writing, plotting, and manipulating phylogenetic trees, analyses of comparative data in a phylogenetic framework, ancestral character analyses, analyses of diversification and macroevolution, computing distances from DNA sequences, reading and writing nucleotide sequences as well as importing from BioConductor, and several tools such as Mantel's test, generalized skyline plots, graphical exploration of phylogenetic data (alex, trex, kronoviz), estimation of absolute evolutionary rates and clock-like trees using mean path lengths and penalized likelihood, dating trees with non-contemporaneous sequences, translating DNA into AA sequences, and assessing sequence alignments. Phylogeny estimation can be done with the NJ, BIONJ, ME, MVR, SDM, and triangle methods, and several methods handling incomplete distance matrices (NJ*, BIONJ*, MVR*, and the corresponding triangle method). Some functions call external applications (PhyML, Clustal, T-Coffee, Muscle) whose results are returned into R.","Published":"2017-02-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"apercu","Version":"0.2.1","Title":"Apercu is Giving you a Quick Look at your Data","Description":"The goal is to print an \"aperçu\", a short view of a vector, a\n matrix, a data.frame, a list or an array. By default, it prints the first 5\n elements of each dimension. By default, the number of columns is equal to\n the number of lines. If you want to control the selection of the elements,\n you can pass a list, with each element being a vector giving the selection\n for each dimension.","Published":"2017-04-25","License":"CC BY-SA 4.0","snapshot_date":"2017-06-23"}
{"Package":"apex","Version":"1.0.2","Title":"Phylogenetic Methods for Multiple Gene Data","Description":"Toolkit for the analysis of multiple gene data. Apex implements\n the new S4 classes 'multidna', 'multiphyDat' and associated methods to handle\n aligned DNA sequences from multiple genes.","Published":"2016-12-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"APfun","Version":"0.1.1","Title":"Geo-Processing Base Functions","Description":"Base tools for facilitating the creation geo-processing functions\n in R.","Published":"2017-04-16","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"aplore3","Version":"0.9","Title":"Datasets from Hosmer, Lemeshow and Sturdivant, \"Applied Logistic\nRegression\" (3rd Ed., 2013)","Description":"An unofficial companion to \"Applied\n Logistic Regression\" by D.W. Hosmer, S. Lemeshow and\n R.X. Sturdivant (3rd ed., 2013) containing the dataset used in the book.","Published":"2016-10-20","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"aplpack","Version":"1.3.0","Title":"Another Plot PACKage: stem.leaf, bagplot, faces, spin3R,\nplotsummary, plothulls, and some slider functions","Description":"set of functions for drawing some special plots:\n stem.leaf plots a stem and leaf plot,\n stem.leaf.backback plots back-to-back versions of stem and leafs,\n bagplot plots a bagplot,\n skyline.hist plots several histgramm in one plot of a one dimensional data set,\n plotsummary plots a graphical summary of a data set with one or more variables,\n plothulls plots sequentially hulls of a bivariate data set,\n faces plots chernoff faces,\n spin3R for an inspection of a 3-dim point cloud,\n slider functions for interactive graphics.","Published":"2014-09-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"apmsWAPP","Version":"1.0","Title":"Pre- and Postprocessing for AP-MS data analysis based on\nspectral counts","Description":"apmsWAPP provides a complete workflow for the analysis of AP-MS data (replicate single-bait purifications including negative controls) based on spectral counts. \n\t\tIt comprises pre-processing, scoring and postprocessing of protein interactions.\n\t\tA final list of interaction candidates is reported: it provides a ranking of the candidates according \n\t\tto their p-values which allow estimating the number of false-positive interactions.","Published":"2014-04-22","License":"LGPL-3","snapshot_date":"2017-06-23"}
{"Package":"apng","Version":"1.0","Title":"Convert Png Files into Animated Png","Description":"Convert several png files into an animated png file.\n This package exports only a single function `apng'. Call the\n apng function with a vector of file names (which should be\n png files) to convert them to a single animated png file.","Published":"2017-05-25","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"appell","Version":"0.0-4","Title":"Compute Appell's F1 hypergeometric function","Description":"This package wraps Fortran code by F. D. Colavecchia and\n G. Gasaneo for computing the Appell's F1 hypergeometric\n function. Their program uses Fortran code by L. F. Shampine and\n H. A. Watts. Moreover, the hypergeometric function with complex\n arguments is computed with Fortran code by N. L. J. Michel and\n M. V. Stoitsov or with Fortran code by R. C. Forrey. See the\n function documentations for the references and please cite them\n accordingly.","Published":"2013-04-16","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"apple","Version":"0.3","Title":"Approximate Path for Penalized Likelihood Estimators","Description":"Approximate Path for Penalized Likelihood Estimators for\n Generalized Linear Models penalized by LASSO or MCP","Published":"2012-01-20","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"AppliedPredictiveModeling","Version":"1.1-6","Title":"Functions and Data Sets for 'Applied Predictive Modeling'","Description":"A few functions and several data set for the Springer book 'Applied Predictive Modeling'","Published":"2014-07-25","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"appnn","Version":"1.0-0","Title":"Amyloid Propensity Prediction Neural Network","Description":"Amyloid propensity prediction neural network (APPNN) is an amyloidogenicity propensity predictor based on a machine learning approach through recursive feature selection and feed-forward neural networks, taking advantage of newly published sequences with experimental, in vitro, evidence of amyloid formation.","Published":"2015-07-12","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"approximator","Version":"1.2-6","Title":"Bayesian prediction of complex computer codes","Description":"Performs Bayesian prediction of complex computer codes\n when fast approximations are available: M. C. Kennedy and A. O'Hagan\n 2000, Biometrika 87(1):1-13","Published":"2013-12-08","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"aprean3","Version":"1.0.1","Title":"Datasets from Draper and Smith \"Applied Regression Analysis\"\n(3rd Ed., 1998)","Description":"An unofficial companion to the textbook \"Applied Regression\n Analysis\" by N.R. Draper and H. Smith (3rd Ed., 1998) including all the\n accompanying datasets.","Published":"2015-05-09","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"apricom","Version":"1.0.0","Title":"Tools for the a Priori Comparison of Regression Modelling\nStrategies","Description":"Tools to compare several model adjustment and validation methods prior to application in a final analysis.","Published":"2015-11-24","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"aprof","Version":"0.3.2","Title":"Amdahl's Profiler, Directed Optimization Made Easy","Description":"Assists the evaluation of whether and\n where to focus code optimization, using Amdahl's law and visual aids\n based on line profiling. Amdahl's profiler organises profiling output\n files (including memory profiling) in a visually appealing way.\n It is meant to help to balance development\n vs. execution time by helping to identify the most promising sections\n of code to optimize and projecting potential gains. The package is\n an addition to R's standard profiling tools and is not a wrapper for them.","Published":"2016-03-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"APSIM","Version":"0.9.1","Title":"General Utility Functions for the 'Agricultural Production\nSystems Simulator'","Description":"Contains functions designed to facilitate the loading\n and transformation of 'Agricultural Production Systems Simulator'\n output files . Input meteorological data\n (also known as \"weather\" or \"met\") files can also be generated\n from user supplied data.","Published":"2016-10-19","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"APSIMBatch","Version":"0.1.0.2374","Title":"Analysis the output of Apsim software","Description":"Run APSIM in Batch mode","Published":"2012-10-29","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"apsimr","Version":"1.2","Title":"Edit, Run and Evaluate APSIM Simulations Easily Using R","Description":"The Agricultural Production Systems sIMulator (APSIM) is a widely\n used simulator of agricultural systems. This package includes\n functions to create, edit and run APSIM simulations from R. It\n also includes functions to visualize the results of an APSIM simulation\n and perform sensitivity/uncertainty analysis of APSIM either via functions\n in the sensitivity package or by novel emulator-based functions. \n For more on APSIM including download instructions go to\n \\url{www.apsim.info}.","Published":"2015-10-12","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"apsrtable","Version":"0.8-8","Title":"apsrtable model-output formatter for social science","Description":"Formats latex tables from one or more model objects\n side-by-side with standard errors below, not unlike tables\n found in such journals as the American Political Science\n Review.","Published":"2012-04-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"apt","Version":"2.5","Title":"Asymmetric Price Transmission","Description":"Asymmetric price transmission between two time series is assessed. Several functions are available for linear and nonlinear threshold cointegration, and furthermore, symmetric and asymmetric error correction model. A graphical user interface is also included for major functions included in the package, so users can also use these functions in a more intuitive way.","Published":"2016-02-25","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"aptg","Version":"0.1.0","Title":"Automatic Phylogenetic Tree Generator","Description":"Generates phylogenetic trees and distance matrices from a list of species name or from a taxon down to whatever lower taxon. It can do so based on two reference super trees: mammals and angiosperms. ","Published":"2017-03-31","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"APtools","Version":"3.0","Title":"Average Positive Predictive Values (AP) for Binary Outcomes and\nCensored Event Times","Description":"We provide tools to estimate two prediction performance metrics,\n the average positive predictive values (AP) as well as the well-known AUC\n (the area under the receiver operator characteristic curve) for risk scores\n or marker. The outcome of interest is either binary or censored event time.\n Note that for censored event time, our functions estimate the AP and the\n AUC are time-dependent for pre-specified time interval(s). A function that\n compares the APs of two risk scores/markers is also included. Optional\n outputs include positive predictive values and true positive fractions at\n the specified marker cut-off values, and a plot of the time-dependent AP\n versus time (available for event time data).","Published":"2016-08-05","License":"LGPL-3","snapshot_date":"2017-06-23"}
{"Package":"apTreeshape","Version":"1.4-5","Title":"Analyses of Phylogenetic Treeshape","Description":"apTreeshape is mainly dedicated to simulation and analysis\n of phylogenetic tree topologies using statistical indices. It\n is a companion library of the 'ape' package. It provides\n additional functions for reading, plotting, manipulating\n phylogenetic trees. It also offers convenient web-access to\n public databases, and enables testing null models of\n macroevolution using corrected test statistics. Trees of class\n \"phylo\" (from 'ape' package) can be converted easily.","Published":"2012-11-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"aqfig","Version":"0.8","Title":"Functions to help display air quality model output and\nmonitoring data","Description":"This package contains functions to help display air quality model output and monitoring data, such as creating color scatterplots, color legends, etc.","Published":"2013-11-09","License":"Unlimited","snapshot_date":"2017-06-23"}
{"Package":"aqp","Version":"1.10","Title":"Algorithms for Quantitative Pedology","Description":"A collection of algorithms related to modeling of soil resources, soil classification, soil profile aggregation, and visualization.","Published":"2017-01-06","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"aqr","Version":"0.4","Title":"Interface methods to use with an ActiveQuant Master Server","Description":"This R extension provides methods to use a standalone ActiveQuant\n Master Server from within R. Currently available features include fetching\n and storing historical data, receiving and sending live data. Several\n utility methods for simple data transformations are included, too. For\n support requests, please join the mailing list at\n https://r-forge.r-project.org/mail/?group_id=1518","Published":"2014-03-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"AquaEnv","Version":"1.0-4","Title":"Integrated Development Toolbox for Aquatic Chemical Model\nGeneration","Description":"Toolbox for the experimental aquatic chemist, focused on \n acidification and CO2 air-water exchange. It contains all elements to\n model the pH, the related CO2 air-water exchange, and\n aquatic acid-base chemistry for an arbitrary marine,\n estuarine or freshwater system. It contains a suite of tools for \n sensitivity analysis, visualisation, modelling of chemical batches, \n and can be used to build dynamic models of aquatic systems. \n As from version 1.0-4, it also contains functions to calculate \n the buffer factors. ","Published":"2016-09-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"AR","Version":"1.0","Title":"Another Look at the Acceptance-Rejection Method","Description":"In mathematics, 'rejection sampling' is a basic technique used to generate observations from a distribution. It is also commonly called 'the Acceptance-Rejection method' or 'Accept-Reject algorithm' and is a type of Monte Carlo method. 'Acceptance-Rejection method' is based on the observation that to sample a random variable one can perform a uniformly random sampling of the 2D cartesian graph, and keep the samples in the region under the graph of its density function. Package 'AR' is able to generate/simulate random data from a probability density function by Acceptance-Rejection method. Moreover, this package is a useful teaching resource for graphical presentation of Acceptance-Rejection method. From the practical point of view, the user needs to calculate a constant in Acceptance-Rejection method, which package 'AR' is able to compute this constant by optimization tools. Several numerical examples are provided to illustrate the graphical presentation for the Acceptance-Rejection Method.","Published":"2017-05-18","License":"LGPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"AR1seg","Version":"1.0","Title":"Segmentation of an autoregressive Gaussian process of order 1","Description":"This package corresponds to the implementation of the robust approach for estimating change-points in the mean of an AR(1) Gaussian process by using the methodology described in the paper arXiv 1403.1958","Published":"2014-06-05","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"arabicStemR","Version":"1.2","Title":"Arabic Stemmer for Text Analysis","Description":"Allows users to stem Arabic texts for text analysis.","Published":"2017-02-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"ArArRedux","Version":"0.2","Title":"Rigorous Data Reduction and Error Propagation of Ar40 / Ar39\nData","Description":"Processes noble gas mass spectrometer data to determine the isotopic composition of argon (comprised of Ar36, Ar37, Ar38, Ar39 and Ar40) released from neutron-irradiated potassium-bearing minerals. Then uses these compositions to calculate precise and accurate geochronological ages for multiple samples as well as the covariances between them. Error propagation is done in matrix form, which jointly treats all samples and all isotopes simultaneously at every step of the data reduction process. Includes methods for regression of the time-resolved mass spectrometer signals to t=0 ('time zero') for both single- and multi-collector instruments, blank correction, mass fractionation correction, detector intercalibration, decay corrections, interference corrections, interpolation of the irradiation parameter between neutron fluence monitors, and (weighted mean) age calculation. All operations are performed on the logs of the ratios between the different argon isotopes so as to properly treat them as 'compositional data', sensu Aitchison [1986, The Statistics of Compositional Data, Chapman and Hall].","Published":"2015-08-26","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"arc","Version":"1.1","Title":"Association Rule Classification","Description":"Implements the Classification-based on\n Association Rules (CBA) algorithm for association rule classification (ARC).\n The package also contains several convenience methods that allow to automatically\n set CBA parameters (minimum confidence, minimum support) and it also natively\n handles numeric attributes by integrating a pre-discretization step.\n The rule generation phase is handled by the 'arules' package.","Published":"2017-03-02","License":"AGPL-3","snapshot_date":"2017-06-23"}
{"Package":"ARCensReg","Version":"2.1","Title":"Fitting Univariate Censored Linear Regression Model with\nAutoregressive Errors","Description":"It fits an univariate left or right censored linear regression model\n with autoregressive errors under the normal distribution. It provides estimates\n and standard errors of the parameters, prediction of future observations and\n it supports missing values on the dependent variable.\n It also performs influence diagnostic through local influence for three possible\n perturbation schemes.","Published":"2016-09-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"ArchaeoPhases","Version":"1.2","Title":"Post-Processing of the Markov Chain Simulated by 'ChronoModel',\n'Oxcal' or 'BCal'","Description":"Provides a list of functions for the statistical analysis of archaeological dates and groups of dates. It is based on the post-processing of the Markov Chains whose stationary distribution is the posterior distribution of a series of dates. Such output can be simulated by different applications as for instance 'ChronoModel' (see ), 'Oxcal' (see ) or 'BCal' (see http://bcal.shef.ac.uk/). The only requirement is to have a csv file containing a sample from the posterior distribution.","Published":"2017-06-13","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"archdata","Version":"1.1","Title":"Example Datasets from Archaeological Research","Description":"The archdata package provides several types of data that are typically used in archaeological research. ","Published":"2016-04-05","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"archetypes","Version":"2.2-0","Title":"Archetypal Analysis","Description":"The main function archetypes implements a\n framework for archetypal analysis supporting arbitrary\n problem solving mechanisms for the different conceptual\n parts of the algorithm.","Published":"2014-04-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"archiDART","Version":"2.0","Title":"Plant Root System Architecture Analysis Using DART and RSML\nFiles","Description":"Analysis of complex plant root system architectures (RSA) using the output files created by Data Analysis of Root Tracings (DART), an open-access software dedicated to the study of plant root architecture and development across time series (Le Bot et al (2010) \"DART: a software to analyse root system architecture and development from captured images\", Plant and Soil, ), and RSA data encoded with the Root System Markup Language (RSML) (Lobet et al (2015) \"Root System Markup Language: toward a unified root architecture description language\", Plant Physiology, ). More information can be found in Delory et al (2016) \"archiDART: an R package for the automated computation of plant root architectural traits\", Plant and Soil, .","Published":"2017-05-11","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"archivist","Version":"2.1.2","Title":"Tools for Storing, Restoring and Searching for R Objects","Description":"Data exploration and modelling is a process in which a lot of data\n artifacts are produced. Artifacts like: subsets, data aggregates, plots,\n statistical models, different versions of data sets and different versions\n of results. The more projects we work with the more artifacts are produced\n and the harder it is to manage these artifacts. Archivist helps to store\n and manage artifacts created in R. Archivist allows you to store selected\n artifacts as a binary files together with their metadata and relations.\n Archivist allows to share artifacts with others, either through shared\n folder or github. Archivist allows to look for already created artifacts by\n using it's class, name, date of the creation or other properties. Makes it\n easy to restore such artifacts. Archivist allows to check if new artifact\n is the exact copy that was produced some time ago. That might be useful\n either for testing or caching.","Published":"2016-12-31","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"archivist.github","Version":"0.2.2","Title":"Tools for Archiving, Managing and Sharing R Objects via GitHub","Description":"The extension of the 'archivist' package integrating the archivist with GitHub via GitHub API, 'git2r' packages and 'httr' package. ","Published":"2016-08-30","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"ArCo","Version":"0.1-2","Title":"Artificial Counterfactual Package","Description":"Set of functions to analyse and estimate Artificial Counterfactual models from Carvalho, Masini and Medeiros (2016) .","Published":"2017-04-19","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"ArDec","Version":"2.0","Title":"Time series autoregressive-based decomposition","Description":"Package ArDec implements autoregressive-based\n decomposition of a time series based on the constructive\n approach in West (1997). Particular cases include the\n extraction of trend and seasonal components.","Published":"2013-05-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"arf3DS4","Version":"2.5-10","Title":"Activated Region Fitting, fMRI data analysis (3D)","Description":"Activated Region Fitting (ARF) is an analysis method for fMRI data. ","Published":"2014-02-21","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"arfima","Version":"1.4-0","Title":"Fractional ARIMA (and Other Long Memory) Time Series Modeling","Description":"Simulates, fits, and predicts long-memory and anti-persistent time\n series, possibly mixed with ARMA, regression, transfer-function components.\n Exact methods (MLE, forecasting, simulation) are used.","Published":"2017-06-20","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"ArfimaMLM","Version":"1.3","Title":"Arfima-MLM Estimation For Repeated Cross-Sectional Data","Description":"Functions to facilitate the estimation of Arfima-MLM models for repeated cross-sectional data and pooled cross-sectional time-series data (see Lebo and Weber 2015). The estimation procedure uses double filtering with Arfima methods to account for autocorrelation in repeated cross-sectional data followed by multilevel modeling (MLM) to estimate aggregate as well as individual-level parameters simultaneously.","Published":"2015-01-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"argon2","Version":"0.2-0","Title":"Secure Password Hashing","Description":"Utilities for secure password hashing via the argon2 algorithm.\n It is a relatively new hashing algorithm and is believed to be very secure.\n The 'argon2' implementation included in the package is the reference\n implementation. The package also includes some utilities that should be\n useful for digest authentication, including a wrapper of 'blake2b'. For\n similar R packages, see sodium and 'bcrypt'. See\n or\n for more information.","Published":"2017-06-12","License":"BSD 2-clause License + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"argosfilter","Version":"0.63","Title":"Argos locations filter","Description":"Functions to filters animal satellite tracking data\n obtained from Argos. It is especially indicated for telemetry\n studies of marine animals, where Argos locations are\n predominantly of low-quality.","Published":"2012-11-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"argparse","Version":"1.0.4","Title":"Command Line Optional and Positional Argument Parser","Description":"A command line parser to\n be used with Rscript to write \"#!\" shebang scripts that gracefully\n accept positional and optional arguments and automatically generate usage.","Published":"2016-10-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"argparser","Version":"0.4","Title":"Command-Line Argument Parser","Description":"Cross-platform command-line argument parser written purely in R\n with no external dependencies. It is useful with the Rscript\n front-end and facilitates turning an R script into an executable script.","Published":"2016-04-04","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"ArgumentCheck","Version":"0.10.2","Title":"Improved Communication to Users with Respect to Problems in\nFunction Arguments","Description":"The typical process of checking arguments in functions is\n iterative. In this process, an error may be returned and the user may fix\n it only to receive another error on a different argument. 'ArgumentCheck'\n facilitates a more helpful way to perform argument checks allowing the\n programmer to run all of the checks and then return all of the errors and\n warnings in a single message.","Published":"2016-04-01","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"arm","Version":"1.9-3","Title":"Data Analysis Using Regression and Multilevel/Hierarchical\nModels","Description":"Functions to accompany A. Gelman and J. Hill, Data Analysis Using Regression and Multilevel/Hierarchical Models, Cambridge University Press, 2007.","Published":"2016-11-27","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"arnie","Version":"0.1.2","Title":"\"Arnie\" box office records 1982-2014","Description":"Arnold Schwarzenegger movie weekend box office records from\n 1982-2014","Published":"2014-06-16","License":"CC BY-SA 4.0","snapshot_date":"2017-06-23"}
{"Package":"aroma.affymetrix","Version":"3.1.0","Title":"Analysis of Large Affymetrix Microarray Data Sets","Description":"A cross-platform R framework that facilitates processing of any number of Affymetrix microarray samples regardless of computer system. The only parameter that limits the number of chips that can be processed is the amount of available disk space. The Aroma Framework has successfully been used in studies to process tens of thousands of arrays. This package has actively been used since 2006.","Published":"2017-03-24","License":"LGPL (>= 2.1)","snapshot_date":"2017-06-23"}
{"Package":"aroma.apd","Version":"0.6.0","Title":"A Probe-Level Data File Format Used by 'aroma.affymetrix'\n[deprecated]","Description":"DEPRECATED. Do not start building new projects based on this package. (The (in-house) APD file format was initially developed to store Affymetrix probe-level data, e.g. normalized CEL intensities. Chip types can be added to APD file and similar to methods in the affxparser package, this package provides methods to read APDs organized by units (probesets). In addition, the probe elements can be arranged optimally such that the elements are guaranteed to be read in order when, for instance, data is read unit by unit. This speeds up the read substantially. This package is supporting the Aroma framework and should not be used elsewhere.)","Published":"2015-02-25","License":"LGPL (>= 2.1)","snapshot_date":"2017-06-23"}
{"Package":"aroma.cn","Version":"1.6.1","Title":"Copy-Number Analysis of Large Microarray Data Sets","Description":"Methods for analyzing DNA copy-number data. Specifically,\n this package implements the multi-source copy-number normalization (MSCN)\n method for normalizing copy-number data obtained on various platforms and\n technologies. It also implements the TumorBoost method for normalizing\n paired tumor-normal SNP data.","Published":"2015-10-28","License":"LGPL (>= 2.1)","snapshot_date":"2017-06-23"}
{"Package":"aroma.core","Version":"3.1.0","Title":"Core Methods and Classes Used by 'aroma.*' Packages Part of the\nAroma Framework","Description":"Core methods and classes used by higher-level aroma.* packages\n part of the Aroma Project, e.g. aroma.affymetrix and aroma.cn.","Published":"2017-03-23","License":"LGPL (>= 2.1)","snapshot_date":"2017-06-23"}
{"Package":"ARPobservation","Version":"1.1","Title":"Tools for Simulating Direct Behavioral Observation Recording\nProcedures Based on Alternating Renewal Processes","Description":"Tools for simulating data generated by direct observation\n recording. Behavior streams are simulated based on an alternating renewal\n process, given specified distributions of event durations and interim\n times. Different procedures for recording data can then be applied to the\n simulated behavior streams. Functions are provided for the following\n recording methods: continuous duration recording, event counting, momentary\n time sampling, partial interval recording, and whole interval recording.","Published":"2015-02-11","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"aRpsDCA","Version":"1.1.0","Title":"Arps Decline Curve Analysis in R","Description":"Functions for Arps decline-curve analysis on oil and gas data. Includes exponential, hyperbolic, harmonic, and hyperbolic-to-exponential models as well as the preceding with initial curtailment or a period of linear rate buildup. Functions included for computing rate, cumulative production, instantaneous decline, EUR, time to economic limit, and performing least-squares best fits.","Published":"2016-04-05","License":"LGPL-2.1","snapshot_date":"2017-06-23"}
{"Package":"arrApply","Version":"2.0.1","Title":"Apply a Function to a Margin of an Array","Description":"High performance variant of apply() for a fixed set of functions.\n Considerable speedup is a trade-off for universality, user defined\n functions cannot be used with this package. However, 20 most currently employed\n functions are available for usage. They can be divided in three types:\n reducing functions (like mean(), sum() etc., giving a scalar when applied to a vector),\n mapping function (like normalise(), cumsum() etc., giving a vector of the same length\n as the input vector) and finally, vector reducing function (like diff() which produces\n result vector of a length different from the length of input vector).\n Optional or mandatory additional arguments required by some functions\n (e.g. norm type for norm() or normalise() functions) can be\n passed as named arguments in '...'.","Published":"2016-11-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"ArrayBin","Version":"0.2","Title":"Binarization of numeric data arrays","Description":"Fast adaptive binarization for numeric data arrays,\n particularly designed for high-throughput biological datasets.\n Includes options to filter out rows of the array with\n insufficient magnitude or variation (based on gap statistic).","Published":"2013-06-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"arrayhelpers","Version":"1.0-20160527","Title":"Convenience Functions for Arrays","Description":"Some convenient functions to work with arrays.","Published":"2016-05-28","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"ars","Version":"0.5","Title":"Adaptive Rejection Sampling","Description":"Adaptive Rejection Sampling, Original version","Published":"2014-12-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"arsenal","Version":"0.3.0","Title":"An Arsenal of 'R' Functions for Large-Scale Statistical\nSummaries","Description":"An Arsenal of 'R' functions for large-scale statistical summaries,\n which are streamlined to work within the latest reporting tools in 'R' and\n 'RStudio' and which use formulas and versatile summary statistics for summary\n tables and models. The primary functions include tableby(), a Table-1-like\n summary of multiple variable types 'by' the levels of a categorical\n variable; modelsum(), which performs simple model fits on the same endpoint\n for many variables (univariate or adjusted for standard covariates);\n freqlist(), a powerful frequency table across many categorical variables; and\n write2(), a function to output tables to a document.","Published":"2017-03-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"ART","Version":"1.0","Title":"Aligned Rank Transform for Nonparametric Factorial Analysis","Description":"An implementation of the Aligned Rank Transform technique for\n factorial analysis (see references below for details) including models with\n missing terms (unsaturated factorial models). The function first\n computes a separate aligned ranked response variable for each effect of the\n user-specified model, and then runs a classic ANOVA on each of the aligned\n ranked responses. For further details, see Higgins, J. J. and Tashtoush, S.\n (1994). An aligned rank transform test for interaction. Nonlinear World 1\n (2), pp. 201-211. Wobbrock, J.O., Findlater, L., Gergle, D. and\n Higgins,J.J. (2011). The Aligned Rank Transform for nonparametric factorial\n analyses using only ANOVA procedures. Proceedings of the ACM Conference on\n Human Factors in Computing Systems (CHI '11). New York: ACM Press, pp.\n 143-146. .","Published":"2015-08-13","License":"LGPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"artfima","Version":"1.5","Title":"ARTFIMA Model Estimation","Description":"Fit and simulate ARTFIMA. Theoretical autocovariance function and spectral density function for stationary ARTFIMA.","Published":"2016-07-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"ARTIVA","Version":"1.2.3","Title":"Time-Varying DBN Inference with the ARTIVA (Auto Regressive TIme\nVArying) Model","Description":"Reversible Jump MCMC (RJ-MCMC)sampling for approximating the posterior \n distribution of a time varying regulatory network, under the Auto Regressive TIme VArying\n\t\t(ARTIVA) model (for a detailed description of the algorithm, see Lebre et al. BMC Systems\n\t\tBiology, 2010). Starting from time-course gene expression measurements for a gene of \n\t\tinterest (referred to as \"target gene\") and a set of genes (referred to as \"parent genes\")\n\t\twhich may explain the expression of the target gene, the ARTIVA procedure identifies\n temporal segments for which a set of interactions occur between the \"parent genes\" and the\n\t\t\"target gene\". The time points that delimit the different temporal segments are referred to\n\t\tas changepoints (CP).","Published":"2015-05-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"ARTool","Version":"0.10.4","Title":"Aligned Rank Transform","Description":"The Aligned Rank Transform for nonparametric\n factorial ANOVAs as described by J. O. Wobbrock,\n L. Findlater, D. Gergle, & J. J. Higgins, \"The Aligned\n Rank Transform for nonparametric factorial analyses\n using only ANOVA procedures\", CHI 2011 .","Published":"2016-10-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"ARTP","Version":"2.0.4","Title":"Gene and Pathway p-values computed using the Adaptive Rank\nTruncated Product","Description":"A package for calculating gene and pathway p-values using the Adaptive Rank Truncated Product test","Published":"2014-02-20","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"ARTP2","Version":"0.9.32","Title":"Pathway and Gene-Level Association Test","Description":"Pathway and gene level association test using raw data or summary statistics.","Published":"2017-05-24","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"}
{"Package":"arules","Version":"1.5-2","Title":"Mining Association Rules and Frequent Itemsets","Description":"Provides the infrastructure for representing,\n manipulating and analyzing transaction data and patterns (frequent\n itemsets and association rules). Also provides interfaces to\n C implementations of the association mining algorithms Apriori and Eclat\n by C. Borgelt.","Published":"2017-03-13","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"arulesCBA","Version":"1.1.1","Title":"Classification Based on Association Rules","Description":"Provides a function to build an association rule-based classifier for data frames, and to classify incoming data frames using such a classifier.","Published":"2017-04-03","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"arulesNBMiner","Version":"0.1-5","Title":"Mining NB-Frequent Itemsets and NB-Precise Rules","Description":"NBMiner is an implementation of the model-based mining algorithm \n for mining NB-frequent itemsets presented in \"Michael Hahsler. A\n model-based frequency constraint for mining associations from\n transaction data. Data Mining and Knowledge Discovery, 13(2):137-166,\n September 2006.\" In addition an extension for NB-precise rules is \n implemented. ","Published":"2015-07-02","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"arulesSequences","Version":"0.2-19","Title":"Mining Frequent Sequences","Description":"Add-on for arules to handle and mine frequent sequences.\n Provides interfaces to the C++ implementation of cSPADE by \n Mohammed J. Zaki.","Published":"2017-05-11","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"arulesViz","Version":"1.2-1","Title":"Visualizing Association Rules and Frequent Itemsets","Description":"Extends package arules with various visualization techniques for association rules and itemsets. The package also includes several interactive visualizations for rule exploration.","Published":"2017-03-13","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"aRxiv","Version":"0.5.16","Title":"Interface to the arXiv API","Description":"An interface to the API for 'arXiv'\n (), a repository of electronic preprints for\n computer science, mathematics, physics, quantitative biology,\n quantitative finance, and statistics.","Published":"2017-04-28","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"as.color","Version":"0.1","Title":"Assign Random Colors to Unique Items in a Vector","Description":"The as.color function takes an R vector of any class as an input,\n and outputs a vector of unique hexadecimal color values that correspond to the\n unique input values. This is most handy when overlaying points and lines for\n data that correspond to different levels or factors. The function will also\n print the random seed used to generate the colors. If you like the color palette\n generated, you can save the seed and reuse those colors.","Published":"2016-06-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"asaur","Version":"0.50","Title":"Data Sets for \"Applied Survival Analysis Using R\"\"","Description":"Data sets are referred to in the text \"Applied Survival Analysis Using R\"\n by Dirk F. Moore, Springer, 2016, ISBN: 978-3-319-31243-9, .","Published":"2016-04-12","License":"CC0","snapshot_date":"2017-06-23"}
{"Package":"asbio","Version":"1.4-2","Title":"A Collection of Statistical Tools for Biologists","Description":"Contains functions from: Aho, K. (2014) Foundational and Applied Statistics for Biologists using R. CRC/Taylor and Francis, Boca Raton, FL, ISBN: 978-1-4398-7338-0.","Published":"2017-06-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"ascii","Version":"2.1","Title":"Export R objects to several markup languages","Description":"Coerce R object to asciidoc, txt2tags, restructuredText,\n org, textile or pandoc syntax. Package comes with a set of\n drivers for Sweave.","Published":"2011-09-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"asd","Version":"2.2","Title":"Simulations for Adaptive Seamless Designs","Description":"Package runs simulations for adaptive seamless designs with and without early outcomes \n for treatment selection and subpopulation type designs.","Published":"2016-05-23","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"asdreader","Version":"0.1-2","Title":"Reading ASD Binary Files in R","Description":"A simple driver that reads binary data created by the ASD Inc.\n portable spectrometer instruments, such as the FieldSpec (for more information,\n see ). Spectral data\n can be extracted from the ASD files as raw (DN), white reference, radiance, or\n reflectance. Additionally, the metadata information contained in the ASD file\n header can also be accessed.","Published":"2016-03-03","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"ash","Version":"1.0-15","Title":"David Scott's ASH Routines","Description":"David Scott's ASH routines ported from S-PLUS to R.","Published":"2015-09-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"ashr","Version":"2.0.5","Title":"Methods for Adaptive Shrinkage, using Empirical Bayes","Description":"The R package 'ashr' implements an Empirical Bayes approach for large-scale hypothesis testing and false discovery rate (FDR) estimation based on the methods proposed in M. Stephens, 2016, \"False discovery rates: a new deal\", . These methods can be applied whenever two sets of summary statistics---estimated effects and standard errors---are available, just as 'qvalue' can be applied to previously computed p-values. Two main interfaces are provided: ash(), which is more user-friendly; and ash.workhorse(), which has more options and is geared toward advanced users.","Published":"2016-12-27","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"asht","Version":"0.9.1","Title":"Applied Statistical Hypothesis Tests","Description":"Some hypothesis test functions (sign test, median and other quantile tests, Wilcoxon signed rank test, coefficient of variation test, test of normal variance, test on weighted sums of Poisson, sample size for t-tests with different variances and non-equal n per arm, Behrens-Fisher test) with a focus on non-asymptotic methods that have matching confidence intervals. ","Published":"2017-05-01","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"AsioHeaders","Version":"1.11.0-1","Title":"'Asio' C++ Header Files","Description":"'Asio' is a cross-platform C++ library for network and low-level\n I/O programming that provides developers with a consistent asynchronous model\n using a modern C++ approach.\n\n 'Asio' is also included in Boost but requires linking when used with\n Boost. Standalone it can be used header-only provided a recent-enough\n compiler. 'Asio' is written and maintained by Christopher M. Kohlhoff.\n 'Asio' is released under the 'Boost Software License', Version 1.0.","Published":"2016-01-07","License":"BSL-1.0","snapshot_date":"2017-06-23"}
{"Package":"aslib","Version":"0.1","Title":"Interface to the Algorithm Selection Benchmark Library","Description":"Provides an interface to the algorithm selection benchmark library\n at and the 'LLAMA' package\n () for building\n algorithm selection models.","Published":"2016-11-25","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"ASMap","Version":"0.4-7","Title":"Linkage Map Construction using the MSTmap Algorithm","Description":"Functions for Accurate and Speedy linkage map construction, manipulation and diagnosis of Doubled Haploid, Backcross and Recombinant Inbred 'R/qtl' objects. This includes extremely fast linkage map clustering and optimal marker ordering using 'MSTmap' (see Wu et al.,2008).","Published":"2016-06-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"asnipe","Version":"1.1.3","Title":"Animal Social Network Inference and Permutations for Ecologists","Description":"Implements several tools that are used in animal social network analysis. In particular, this package provides the tools to infer groups and generate networks from observation data, perform permutation tests on the data, calculate lagged association rates, and performed multiple regression analysis on social network data.","Published":"2017-02-21","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"aspace","Version":"3.2","Title":"A collection of functions for estimating centrographic\nstatistics and computational geometries for spatial point\npatterns","Description":"A collection of functions for computing centrographic\n statistics (e.g., standard distance, standard deviation\n ellipse, standard deviation box) for observations taken at\n point locations. Separate plotting functions have been\n developed for each measure. Users interested in writing results\n to ESRI shapefiles can do so by using results from aspace\n functions as inputs to the convert.to.shapefile and\n write.shapefile functions in the shapefiles library. The aspace\n library was originally conceived to aid in the analysis of\n spatial patterns of travel behaviour (see Buliung and Remmel,\n 2008). Major changes in the current version include (1) removal\n of dependencies on several external libraries (e.g., gpclib,\n maptools, sp), (2) the separation of plotting and estimation\n capabilities, (3) reduction in the number of functions, and (4)\n expansion of analytical capabilities with additional functions\n for descriptive analysis and visualization (e.g., standard\n deviation box, centre of minimum distance, central feature).","Published":"2012-08-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"ASPBay","Version":"1.2","Title":"Bayesian Inference on Causal Genetic Variants using Affected\nSib-Pairs Data","Description":"This package allows to make inference on the properties of causal genetic\n variants in linkage disequilibrium with genotyped markers. In a first step, \n\t\t\t we select a subset of variants using a score statistic for affected \n\t\t\t sib-pairs. In a second step, on the selected subset, we make \n inference on causal genetic variants in the considered region. ","Published":"2015-01-14","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"aSPC","Version":"0.1.2","Title":"An Adaptive Sum of Powered Correlation Test (aSPC) for Global\nAssociation Between Two Random Vectors","Description":"The aSPC test is designed to test global association between two groups of variables potentially with moderate to high dimension (e.g. in hundreds). The aSPC is particularly useful when the association signals between two groups of variables are sparse. ","Published":"2017-04-27","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"aspect","Version":"1.0-4","Title":"A General Framework for Multivariate Analysis with Optimal\nScaling","Description":"Contains various functions for optimal scaling. One function performs optimal scaling by maximizing an aspect (i.e. a target function such as the sum of eigenvalues, sum of squared correlations, squared multiple correlations, etc.) of the corresponding correlation matrix. Another function performs implements the LINEALS approach for optimal scaling by minimization of an aspect based on pairwise correlations and correlation ratios. The resulting correlation matrix and category scores can be used for further multivariate methods such as structural equation models. ","Published":"2015-07-24","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"aspi","Version":"0.2.0","Title":"Analysis of Symmetry of Parasitic Infections","Description":"Tools for the analysis and visualization of bilateral asymmetry in parasitic infections.","Published":"2016-09-20","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"aSPU","Version":"1.47","Title":"Adaptive Sum of Powered Score Test","Description":"R codes for the (adaptive) Sum of Powered Score ('SPU' and 'aSPU')\n tests, inverse variance weighted Sum of Powered score ('SPUw' and 'aSPUw') tests\n and gene-based and some pathway based association tests (Pathway based Sum of\n Powered Score tests ('SPUpath'), adaptive 'SPUpath' ('aSPUpath') test, 'GEEaSPU'\n test for multiple traits - single 'SNP' (single nucleotide polymorphism)\n association in generalized estimation equations, 'MTaSPUs' test for multiple\n traits - single 'SNP' association with Genome Wide Association Studies ('GWAS')\n summary statistics, Gene-based Association Test that uses an extended 'Simes'\n procedure ('GATES'), Hybrid Set-based Test ('HYST') and extended version\n of 'GATES' test for pathway-based association testing ('GATES-Simes'). ).\n The tests can be used with genetic and other data sets with covariates. The\n response variable is binary or quantitative. Summary; (1) Single trait-'SNP' set\n association with individual-level data ('aSPU', 'aSPUw', 'aSPUr'), (2) Single trait-'SNP'\n set association with summary statistics ('aSPUs'), (3) Single trait-pathway\n association with individual-level data ('aSPUpath'), (4) Single trait-pathway\n association with summary statistics ('aSPUsPath'), (5) Multiple traits-single\n 'SNP' association with individual-level data ('GEEaSPU'), (6) Multiple traits-\n single 'SNP' association with summary statistics ('MTaSPUs'), (7) Multiple traits-'SNP' set association with summary statistics('MTaSPUsSet'), (8) Multiple traits-pathway association with summary statistics('MTaSPUsSetPath').","Published":"2017-05-21","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"asremlPlus","Version":"2.0-12","Title":"Augments the Use of 'ASReml-R' in Fitting Mixed Models","Description":"Assists in automating the testing of terms in mixed models when 'asreml' is used \n to fit the models. The content falls into the following natural groupings: (i) Data, (ii) Object \n manipulation functions, (iii) Model modification functions, (iv) Model testing functions, \n (v) Model diagnostics functions, (vi) Prediction production and presentation functions, \n (vii) Response transformation functions, and (viii) Miscellaneous functions. A history of the \n fitting of a sequence of models is kept in a data frame. Procedures are available for choosing \n models that conform to the hierarchy or marginality principle and for displaying predictions \n for significant terms in tables and graphs. The package 'asreml' provides a computationally \n efficient algorithm for fitting mixed models using Residual Maximum Likelihood. It can be \n purchased from 'VSNi' as 'asreml-R', who will supply a zip file for \n local installation/updating. ","Published":"2016-09-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"AssayCorrector","Version":"1.1.3","Title":"Detection and Correction of Spatial Bias in HTS Screens","Description":"(1) Detects plate-specific spatial bias by identifying rows and columns of all plates of the assay affected by this bias (following the results of the Mann-Whitney U test) as well as assay-specific spatial bias by identifying well locations (i.e., well positions scanned across all plates of a given assay) affected by this bias (also following the results of the Mann-Whitney U test); (2) Allows one to correct plate-specific spatial bias using either the additive or multiplicative PMP (Partial Mean Polish) method (the most appropriate spatial bias model can be either specified by the user or determined by the program following the results of the Kolmogorov-Smirnov two-sample test) to correct the assay measurements as well as to correct assay-specific spatial bias by carrying out robust Z-scores within each plate of the assay and then traditional Z-scores across well locations.","Published":"2016-12-29","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"assertable","Version":"0.2.3","Title":"Verbose Assertions for Tabular Data (Data.frames and\nData.tables)","Description":"Simple, flexible, assertions on data.frame or data.table objects with verbose output for vetting. While other assertion packages apply towards more general use-cases, assertable is tailored towards tabular data. It includes functions to check variable names and values, whether the dataset contains all combinations of a given set of unique identifiers, and whether it is a certain length. In addition, assertable includes utility functions to check the existence of target files and to efficiently import multiple tabular data files into one data.table.","Published":"2017-05-17","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"assertive","Version":"0.3-5","Title":"Readable Check Functions to Ensure Code Integrity","Description":"Lots of predicates (is_* functions) to check the state of your\n variables, and assertions (assert_* functions) to throw errors if they\n aren't in the right form.","Published":"2016-12-31","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"assertive.base","Version":"0.0-7","Title":"A Lightweight Core of the 'assertive' Package","Description":"A minimal set of predicates and assertions used by the assertive\n package. This is mainly for use by other package developers who want to\n include run-time testing features in their own packages. End-users will\n usually want to use assertive directly.","Published":"2016-12-30","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"assertive.code","Version":"0.0-1","Title":"Assertions to Check Properties of Code","Description":"A set of predicates and assertions for checking the properties of\n code. This is mainly for use by other package developers who want to include\n run-time testing features in their own packages. End-users will usually want to\n use assertive directly.","Published":"2015-10-06","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"assertive.data","Version":"0.0-1","Title":"Assertions to Check Properties of Data","Description":"A set of predicates and assertions for checking the properties of\n (country independent) complex data types. This is mainly for use by other\n package developers who want to include run-time testing features in\n their own packages. End-users will usually want to use assertive directly.","Published":"2015-10-06","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"assertive.data.uk","Version":"0.0-1","Title":"Assertions to Check Properties of Strings","Description":"A set of predicates and assertions for checking the properties of\n UK-specific complex data types. This is mainly for use by other package\n developers who want to include run-time testing features in their own\n packages. End-users will usually want to use assertive directly.","Published":"2015-10-06","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"assertive.data.us","Version":"0.0-1","Title":"Assertions to Check Properties of Strings","Description":"A set of predicates and assertions for checking the properties of\n US-specific complex data types. This is mainly for use by other package\n developers who want to include run-time testing features in their own\n packages. End-users will usually want to use assertive directly.","Published":"2015-10-06","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"assertive.datetimes","Version":"0.0-2","Title":"Assertions to Check Properties of Dates and Times","Description":"A set of predicates and assertions for checking the properties of\n dates and times. This is mainly for use by other package developers who\n want to include run-time testing features in their own packages. End-users\n will usually want to use assertive directly.","Published":"2016-05-10","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"assertive.files","Version":"0.0-2","Title":"Assertions to Check Properties of Files","Description":"A set of predicates and assertions for checking the properties of\n files and connections. This is mainly for use by other package developers\n who want to include run-time testing features in their own packages.\n End-users will usually want to use assertive directly.","Published":"2016-05-10","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"assertive.matrices","Version":"0.0-1","Title":"Assertions to Check Properties of Matrices","Description":"A set of predicates and assertions for checking the properties of\n matrices. This is mainly for use by other package developers who want to\n include run-time testing features in their own packages. End-users will\n usually want to use assertive directly.","Published":"2015-10-06","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"assertive.models","Version":"0.0-1","Title":"Assertions to Check Properties of Models","Description":"A set of predicates and assertions for checking the properties of\n models. This is mainly for use by other package developers who want to\n include run-time testing features in their own packages. End-users will\n usually want to use assertive directly.","Published":"2015-10-06","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"assertive.numbers","Version":"0.0-2","Title":"Assertions to Check Properties of Numbers","Description":"A set of predicates and assertions for checking the properties of\n numbers. This is mainly for use by other package developers who want to\n include run-time testing features in their own packages. End-users will\n usually want to use assertive directly.","Published":"2016-05-09","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"assertive.properties","Version":"0.0-4","Title":"Assertions to Check Properties of Variables","Description":"A set of predicates and assertions for checking the properties of\n variables, such as length, names and attributes. This is mainly for use by\n other package developers who want to include run-time testing features in\n their own packages. End-users will usually want to use assertive directly.","Published":"2016-12-30","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"assertive.reflection","Version":"0.0-4","Title":"Assertions for Checking the State of R","Description":"A set of predicates and assertions for checking the state and\n capabilities of R, the operating system it is running on, and the IDE\n being used. This is mainly for use by other package developers who\n want to include run-time testing features in their own packages.\n End-users will usually want to use assertive directly.","Published":"2016-12-30","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"assertive.sets","Version":"0.0-3","Title":"Assertions to Check Properties of Sets","Description":"A set of predicates and assertions for checking the properties of\n sets. This is mainly for use by other package developers who want to\n include run-time testing features in their own packages. End-users will\n usually want to use assertive directly.","Published":"2016-12-30","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"assertive.strings","Version":"0.0-3","Title":"Assertions to Check Properties of Strings","Description":"A set of predicates and assertions for checking the properties of\n strings. This is mainly for use by other package developers who want to\n include run-time testing features in their own packages. End-users will\n usually want to use assertive directly.","Published":"2016-05-10","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"assertive.types","Version":"0.0-3","Title":"Assertions to Check Types of Variables","Description":"A set of predicates and assertions for checking the types of\n variables. This is mainly for use by other package developers who want to\n include run-time testing features in their own packages. End-users will\n usually want to use assertive directly.","Published":"2016-12-30","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"assertr","Version":"2.0.2.2","Title":"Assertive Programming for R Analysis Pipelines","Description":"Provides functionality to assert conditions\n that have to be met so that errors in data used in\n analysis pipelines can fail quickly. Similar to\n 'stopifnot()' but more powerful, friendly, and easier\n for use in pipelines.","Published":"2017-06-06","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"assertthat","Version":"0.2.0","Title":"Easy Pre and Post Assertions","Description":"assertthat is an extension to stopifnot() that makes it\n easy to declare the pre and post conditions that you code should\n satisfy, while also producing friendly error messages so that your\n users know what they've done wrong.","Published":"2017-04-11","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"AssetPricing","Version":"1.0-0","Title":"Optimal pricing of assets with fixed expiry date","Description":"Calculates the optimal price of assets (such as\n\tairline flight seats, hotel room bookings) whose value\n\tbecomes zero after a fixed ``expiry date''. Assumes\n\tpotential customers arrive (possibly in groups) according\n\tto a known inhomogeneous Poisson process. Also assumes a\n\tknown time-varying elasticity of demand (price sensitivity)\n\tfunction. Uses elementary techniques based on ordinary\n\tdifferential equations. Uses the package deSolve to effect\n\tthe solution of these differential equations.","Published":"2014-06-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"assignPOP","Version":"1.1.3","Title":"Population Assignment using Genetic, Non-Genetic or Integrated\nData in a Machine Learning Framework","Description":"Use Monte-Carlo and K-fold cross-validation coupled with machine-learning classification algorithms to perform population assignment, with functionalities of evaluating discriminatory power of independent training samples, identifying informative loci, reducing data dimensionality for genomic data, integrating genetic and non-genetic data, and visualizing results. ","Published":"2017-06-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"assist","Version":"3.1.3","Title":"A Suite of R Functions Implementing Spline Smoothing Techniques","Description":"A comprehensive package for fitting various non-parametric/semi-parametric linear/nonlinear fixed/mixed smoothing spline models.","Published":"2015-02-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"ASSISTant","Version":"1.2-3","Title":"Adaptive Subgroup Selection in Group Sequential Trials","Description":"Clinical trial design for subgroup selection in three-stage group\n sequential trial. Includes facilities for design, exploration and analysis of\n such trials. An implementation of the initial DEFUSE-3 trial is also provided\n as a vignette.","Published":"2016-05-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"AssocTests","Version":"0.0-3","Title":"Genetic Association Studies","Description":"Some procedures including EIGENSTRAT (a procedure for\n\tdetecting and correcting for population stratification through \n\tsearching for the eigenvectors in genetic association studies),\n\tPCoC (a procedure for correcting for population stratification\n\tthrough calculating the principal coordinates and the clustering\n\tof the subjects), Tracy-Wisdom test (a procedure for detecting\n\tthe significant eigenvalues of a matrix), distance regression (a\n\tprocedure for detecting the association between a distance matrix\n\tand some independent variants of interest), single-marker test (a\n\tprocedure for identifying the association between the genotype at\n\ta biallelic marker and a trait using the Wald test or the Fisher\n\texact test), MAX3 (a procedure for testing for the association\n\tbetween a single nucleotide polymorphism and a binary phenotype\n\tusing the maximum value of the three test statistics derived for\n\tthe recessive, additive, and dominant models), nonparametric trend\n\ttest (a procedure for testing for the association between a genetic\n\tvariant and a non-normal distributed quantitative trait based on the\n\tnonparametric risk), and nonparametric MAX3 (a procedure for testing\n\tfor the association between a biallelic single nucleotide polymorphism\n\tand a quantitative trait using the maximum value of the three\n\tnonparametric trend tests derived for the recessive, additive, and\n\tdominant models), which are commonly used in genetic association studies.","Published":"2015-08-08","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"assortnet","Version":"0.12","Title":"Calculate the Assortativity Coefficient of Weighted and Binary\nNetworks","Description":"Functions to calculate the assortment of vertices in social networks. This can be measured on both weighted and binary networks, with discrete or continuous vertex values.","Published":"2016-01-18","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"AssotesteR","Version":"0.1-10","Title":"Statistical Tests for Genetic Association Studies","Description":"R package with statistical tests and methods for genetic\n association studies with emphasis on rare variants and binary (dichotomous)\n traits","Published":"2013-12-19","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"aster","Version":"0.9.1","Title":"Aster Models","Description":"Aster models are exponential family regression models for life\n history analysis. They are like generalized linear models except that\n elements of the response vector can have different families (e. g.,\n some Bernoulli, some Poisson, some zero-truncated Poisson, some normal)\n and can be dependent, the dependence indicated by a graphical structure.\n Discrete time survival analysis, zero-inflated Poisson regression, and\n generalized linear models that are exponential family (e. g., logistic\n regression and Poisson regression with log link) are special cases.\n Main use is for data in which there is survival over discrete time periods\n and there is additional data about what happens conditional on survival\n (e. g., number of offspring). Uses the exponential family canonical\n parameterization (aster transform of usual parameterization).","Published":"2017-03-15","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"aster2","Version":"0.3","Title":"Aster Models","Description":"Aster models are exponential family regression models for life\n history analysis. They are like generalized linear models except that\n elements of the response vector can have different families (e. g.,\n some Bernoulli, some Poisson, some zero-truncated Poisson, some normal)\n and can be dependent, the dependence indicated by a graphical structure.\n Discrete time survival analysis, zero-inflated Poisson regression, and\n generalized linear models that are exponential family (e. g., logistic\n regression and Poisson regression with log link) are special cases.\n Main use is for data in which there is survival over discrete time periods\n and there is additional data about what happens conditional on survival\n (e. g., number of offspring). Uses the exponential family canonical\n parameterization (aster transform of usual parameterization).\n Unlike the aster package, this package does dependence groups (nodes of\n the graph need not be conditionally independent given their predecessor\n node), including multinomial and two-parameter normal as families. Thus\n this package also generalizes mark-capture-recapture analysis.","Published":"2017-03-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"astro","Version":"1.2","Title":"Astronomy Functions, Tools and Routines","Description":"The astro package provides a series of functions, tools and routines in everyday use within astronomy. Broadly speaking, one may group these functions into 7 main areas, namely: cosmology, FITS file manipulation, the Sersic function, plotting, data manipulation, statistics and general convenience functions and scripting tools.","Published":"2014-09-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"astrochron","Version":"0.7","Title":"A Computational Tool for Astrochronology","Description":"Routines for astrochronologic testing, astronomical time scale construction, and time series analysis. Also included are a range of statistical analysis and modeling routines that are relevant to time scale development and paleoclimate analysis.","Published":"2017-06-14","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"astrodatR","Version":"0.1","Title":"Astronomical Data","Description":"A collection of 19 datasets from contemporary astronomical research. They are described the textbook `Modern Statistical Methods for Astronomy with R Applications' by Eric D. Feigelson and G. Jogesh Babu (Cambridge University Press, 2012, Appendix C) or on the website of Penn State's Center for Astrostatistics (http://astrostatistics.psu.edu/datasets). These datasets can be used to exercise methodology involving: density estimation; heteroscedastic measurement errors; contingency tables; two-sample hypothesis tests; spatial point processes; nonlinear regression; mixture models; censoring and truncation; multivariate analysis; classification and clustering; inhomogeneous Poisson processes; periodic and stochastic time series analysis. ","Published":"2014-08-12","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"astroFns","Version":"4.1-0","Title":"Astronomy: time and position functions, misc. utilities","Description":"Miscellaneous astronomy functions, utilities, and data.","Published":"2012-09-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"astrolibR","Version":"0.1","Title":"Astronomy Users Library","Description":"Several dozen low-level utilities and codes from the Interactive Data Language (IDL) Astronomy Users Library (http://idlastro.gsfc.nasa.gov) are implemented in R. They treat: time, coordinate and proper motion transformations; terrestrial precession and nutation, atmospheric refraction and aberration, barycentric corrections, and related effects; utilities for astrometry, photometry, and spectroscopy; and utilities for planetary, stellar, Galactic, and extragalactic science.","Published":"2014-08-09","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"astsa","Version":"1.7","Title":"Applied Statistical Time Series Analysis","Description":"Contains data sets and scripts to accompany Time Series Analysis and Its Applications: With R Examples by Shumway and Stoffer, fourth edition, . ","Published":"2016-12-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"asVPC","Version":"1.0.2","Title":"Average Shifted Visual Predictive Checks","Description":"The visual predictive checks are well-known method to validate the \n nonlinear mixed effect model, especially in pharmacometrics area. \n The average shifted visual predictive checks are the newly \n developed method of Visual predictive checks combined with \n the idea of the average shifted histogram.","Published":"2015-05-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"asymLD","Version":"0.1","Title":"Asymmetric Linkage Disequilibrium (ALD) for Polymorphic Genetic\nData","Description":"Computes asymmetric LD measures (ALD) for multi-allelic genetic data. These measures are identical to the correlation measure (r) for bi-allelic data.","Published":"2016-01-30","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"asymmetry","Version":"1.2.1","Title":"Visualizing Asymmetric Data","Description":"Models and methods for the visualization for asymmetric data. A matrix is asymmetric if the number of rows equals the number of columns, and these rows and columns refer to the same set of objects. An example is a student migration table, where the rows correspond to the countries of origin of the students and the columns to the destination countries. This package provides the slide-vector model and the asymscal model for asymmetric multidimensional scaling. Furthermore, a heat map for skew-symmetric data, and the decomposition of asymmetry are provided for the analysis of asymmetric tables.","Published":"2017-05-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"asympTest","Version":"0.1.3","Title":"Asymptotic statistic","Description":"Asymptotic testing","Published":"2012-08-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"AsynchLong","Version":"2.0","Title":"Regression Analysis of Sparse Asynchronous Longitudinal Data","Description":"Estimation of regression models for sparse asynchronous longitudinal observations, where time-dependent response and covariates are mismatched and observed intermittently within subjects. Kernel weighted estimating equations are used for generalized linear models with either time-invariant or time-dependent coefficients.","Published":"2016-01-23","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"asypow","Version":"2015.6.25","Title":"Calculate Power Utilizing Asymptotic Likelihood Ratio Methods","Description":"A set of routines written in the S language\n that calculate power and related quantities utilizing asymptotic\n likelihood ratio methods.","Published":"2015-06-26","License":"ACM | file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"ATE","Version":"0.2.0","Title":"Inference for Average Treatment Effects using Covariate\nBalancing","Description":"Nonparametric estimation and inference for average treatment effects based on covariate balancing.","Published":"2015-02-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"AtelieR","Version":"0.24","Title":"A GTK GUI for teaching basic concepts in statistical inference,\nand doing elementary bayesian tests","Description":"A collection of statistical simulation and computation tools with a GTK GUI, to help teach statistical concepts and compute probabilities. Two domains are covered: I. Understanding (Central-Limit Theorem and the Normal Distribution, Distribution of a sample mean, Distribution of a sample variance, Probability calculator for common distributions), and II. Elementary Bayesian Statistics (bayesian inference on proportions, contingency tables, means and variances, with informative and noninformative priors).","Published":"2013-09-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"atlantistools","Version":"0.4.2","Title":"Process and Visualise Output from Atlantis Models","Description":"Atlantis is an end-to-end marine ecosystem modelling framework. It was originally developed in Australia by E.A. Fulton, A.D.M. Smith and D.C. Smith (2007) and has since been adopted in many marine ecosystems around the world (). The output of an Atlantis simulation is stored in various file formats like .netcdf and .txt and different output structures are used for the output variables like e.g. productivity or biomass. This package is used to convert the different output types to a unified format according to the \"tidy-data\" approach by H. Wickham (2014) . Additionally, ecological metrics like for example spatial overlap of predator and prey or consumption can be calculated and visualised with this package. Due to the unified data structure it is very easy to share model output with each other and perform model comparisons.","Published":"2017-05-18","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"atmcmc","Version":"1.0","Title":"Automatically Tuned Markov Chain Monte Carlo","Description":"Uses adaptive diagnostics to tune and run a random walk Metropolis MCMC algorithm, to converge to a specified target distribution and estimate means of functionals.","Published":"2014-09-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"ATmet","Version":"1.2","Title":"Advanced Tools for Metrology","Description":"This package provides functions for smart sampling and sensitivity analysis for metrology applications, including computationally expensive problems.","Published":"2014-04-17","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"AtmRay","Version":"1.31","Title":"Acoustic Traveltime Calculations for 1-D Atmospheric Models","Description":"Calculates acoustic traveltimes and ray paths in 1-D,\n linear atmospheres. Later versions will support arbitrary 1-D\n atmospheric models, such as radiosonde measurements and\n standard reference atmospheres.","Published":"2013-03-01","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"aTSA","Version":"3.1.2","Title":"Alternative Time Series Analysis","Description":"Contains some tools for testing, analyzing time series data and\n fitting popular time series models such as ARIMA, Moving Average and Holt\n Winters, etc. Most functions also provide nice and clear outputs like SAS\n does, such as identify, estimate and forecast, which are the same statements\n in PROC ARIMA in SAS.","Published":"2015-07-08","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"}
{"Package":"atsd","Version":"1.0.8441","Title":"Support Querying Axibase Time-Series Database","Description":"Provides functions for retrieving time-series and related \n meta-data such as entities, metrics, and tags from the Axibase \n Time-Series Database (ATSD). ATSD is a non-relational clustered \n database used for storing performance measurements from IT infrastructure \n resources: servers, network devices, storage systems, and applications.","Published":"2016-12-05","License":"Apache License 2.0","snapshot_date":"2017-06-23"}
{"Package":"attrCUSUM","Version":"0.1.0","Title":"Tools for Attribute VSI CUSUM Control Chart","Description":"An implementation of tools for design of attribute \n variable sampling interval cumulative sum chart. \n It currently provides information for monitoring of mean increase such as \n average number of sample to signal, average time to signal,\n a matrix of transient probabilities, suitable control limits when the data are\n (zero inflated) Poisson/binomial distribution. \n Functions in the tools can be easily applied to other count processes.\n Also, tools might be extended to more complicated cumulative sum control chart.\n We leave these issues as our perpetual work.","Published":"2016-12-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"attribrisk","Version":"0.1","Title":"Population Attributable Risk","Description":"Estimates population (etiological) attributable risk for\n unmatched, pair-matched or set-matched case-control designs and returns a\n list containing the estimated attributable risk, estimates of coefficients,\n and their standard errors, from the (conditional, If necessary) logistic\n regression used for estimating the relative risk.","Published":"2014-11-18","License":"LGPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"AUC","Version":"0.3.0","Title":"Threshold independent performance measures for probabilistic\nclassifiers","Description":"This package includes functions to compute the area under the curve of selected measures: The area under the sensitivity curve (AUSEC), the area under the specificity curve (AUSPC), the area under the accuracy curve (AUACC), and the area under the receiver operating characteristic curve (AUROC). The curves can also be visualized. Support for partial areas is provided.","Published":"2013-09-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"aucm","Version":"2017.3-2","Title":"AUC Maximization","Description":"Implements methods for identifying linear and nonlinear marker combinations that maximizes the Area Under the AUC Curve (AUC).","Published":"2017-03-03","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"AUCRF","Version":"1.1","Title":"Variable Selection with Random Forest and the Area Under the\nCurve","Description":"Variable selection using Random Forest based on optimizing\n the area-under-the ROC curve (AUC) of the Random Forest.","Published":"2012-03-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"audio","Version":"0.1-5","Title":"Audio Interface for R","Description":"Interfaces to audio devices (mainly sample-based) from R to allow recording and playback of audio. Built-in devices include Windows MM, Mac OS X AudioUnits and PortAudio (the last one is very experimental).","Published":"2013-12-23","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"audiolyzR","Version":"0.4-9","Title":"audiolyzR: Give your data a listen","Description":"Creates audio representations of common plots in R","Published":"2013-02-17","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"audit","Version":"0.1-1","Title":"Bounds for Accounting Populations","Description":"Two Bayesian methods for Accounting Populations","Published":"2012-10-29","License":"MIT","snapshot_date":"2017-06-23"}
{"Package":"auRoc","Version":"0.1-0","Title":"Various Methods to Estimate the AUC","Description":"Estimate the AUC using a variety of methods as follows: \n (1) frequentist nonparametric methods based on the Mann-Whitney statistic or kernel methods. \n (2) frequentist parametric methods using the likelihood ratio test based on higher-order \n asymptotic results, the signed log-likelihood ratio test, the Wald test, \n or the approximate ''t'' solution to the Behrens-Fisher problem. \n (3) Bayesian parametric MCMC methods.","Published":"2015-12-21","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"AUtests","Version":"0.98","Title":"Approximate Unconditional and Permutation Tests","Description":"Performs approximate unconditional and permutation testing for\n 2x2 contingency tables. Motivated by testing for disease association with rare\n genetic variants in case-control studies. When variants are extremely rare,\n these tests give better control of Type I error than standard tests.","Published":"2016-06-12","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"AutoDeskR","Version":"0.1.2","Title":"An Interface to the 'AutoDesk' 'API' Platform","Description":"An interface to the 'AutoDesk' 'API' Platform including the Authentication \n 'API' for obtaining authentication to the 'AutoDesk' Forge Platform, Data Management \n 'API' for managing data across the platform's cloud services, Design Automation 'API'\n for performing automated tasks on design files in the cloud, Model\n Derivative 'API' for translating design files into different formats, sending\n them to the viewer app, and extracting design data, and Viewer for rendering\n 2D and 3D models (see for more information).","Published":"2017-02-18","License":"Apache License | file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"autoencoder","Version":"1.1","Title":"Sparse Autoencoder for Automatic Learning of Representative\nFeatures from Unlabeled Data","Description":"Implementation of the sparse autoencoder in R environment, following the notes of Andrew Ng (http://www.stanford.edu/class/archive/cs/cs294a/cs294a.1104/sparseAutoencoder.pdf). The features learned by the hidden layer of the autoencoder (through unsupervised learning of unlabeled data) can be used in constructing deep belief neural networks. ","Published":"2015-07-02","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"autoimage","Version":"1.3","Title":"Multiple Heat Maps for Projected Coordinates","Description":"Functions for displaying multiple images with a color \n scale, i.e., heat maps, possibly with projected coordinates. The\n package relies on the base graphics system, so graphics are\n rendered rapidly.","Published":"2017-03-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"automagic","Version":"0.3","Title":"Automagically Document and Install Packages Necessary to Run R\nCode","Description":"Parse R code in a given directory for R packages and attempt to install them from CRAN or GitHub. Optionally use a dependencies file for tighter control over which package versions to install.","Published":"2017-02-26","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"automap","Version":"1.0-14","Title":"Automatic interpolation package","Description":"This package performs an automatic interpolation by automatically estimating the variogram and then calling gstat.","Published":"2013-08-29","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"AutoModel","Version":"0.4.9","Title":"Automated Hierarchical Multiple Regression with Assumptions\nChecking","Description":"A set of functions that automates the process and produces reasonable output for hierarchical multiple regression models. It allows you to specify predictor blocks, from which it generates all of the linear models, and checks the assumptions of the model, producing the requisite plots and statistics to allow you to judge the suitability of the model.","Published":"2015-08-23","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"automultinomial","Version":"1.0.0","Title":"Autologistic and Automultinomial Spatial Regression and Variable\nSelection","Description":"Contains functions for autologistic variable selection and parameter estimation for spatially correlated categorical data (including k>2). The main function is MPLE. Capable of fitting the centered autologistic model described in Caragea and Kaiser (2009), as well as the traditional autologistic model of Besag (1974). ","Published":"2016-10-05","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"}
{"Package":"autopls","Version":"1.3","Title":"Partial Least Squares Regression with Backward Selection of\nPredictors","Description":"Some convenience functions for pls regression, including backward \n variable selection and validation procedures, image based predictions\n\t\tand plotting.","Published":"2015-02-24","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"AutoregressionMDE","Version":"1.0","Title":"Minimum Distance Estimation in Autoregressive Model","Description":"Consider autoregressive model of order p where the distribution function of innovation is unknown, but innovations are independent and symmetrically distributed. The package contains a function named ARMDE which takes X (vector of n observations) and p (order of the model) as input argument and returns minimum distance estimator of the parameters in the model.","Published":"2015-09-14","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"AutoSEARCH","Version":"1.5","Title":"General-to-Specific (GETS) Modelling","Description":"General-to-Specific (GETS) modelling of the mean and variance of a regression. NOTE: The package has been succeeded by gets, also available on the CRAN, which is more user-friendly, faster and easier to extend. Users are therefore encouraged to consider gets instead.","Published":"2015-03-27","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"autoSEM","Version":"0.1.0","Title":"Performs Specification Search in Structural Equation Models","Description":"Implements multiple heuristic search algorithms for\n automatically creating structural equation models.","Published":"2016-05-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"autothresholdr","Version":"0.5.0","Title":"An R Port of the 'ImageJ' Plugin 'Auto Threshold'","Description":"Provides the 'ImageJ' 'Auto Threshold' plugin functionality to R users. \n See and Landini et al. (2017) .","Published":"2017-05-17","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"autovarCore","Version":"1.0-0","Title":"Automated Vector Autoregression Models and Networks","Description":"Automatically find the best vector autoregression\n models and networks for a given time series data set. 'AutovarCore'\n evaluates eight kinds of models: models with and without log\n transforming the data, lag 1 and lag 2 models, and models with and\n without day dummy variables. For each of these 8 model configurations,\n 'AutovarCore' evaluates all possible combinations for including\n outlier dummies (at 2.5x the standard deviation of the residuals)\n and retains the best model. Model evaluation includes the Eigenvalue\n stability test and a configurable set of residual tests. These eight\n models are further reduced to four models because 'AutovarCore'\n determines whether adding day dummies improves the model fit.","Published":"2015-07-01","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"averisk","Version":"1.0.3","Title":"Calculation of Average Population Attributable Fractions and\nConfidence Intervals","Description":"Average population attributable fractions are calculated for a set\n of risk factors (either binary or ordinal valued) for both prospective and case-\n control designs. Confidence intervals are found by Monte Carlo simulation. The\n method can be applied to either prospective or case control designs, provided an\n estimate of disease prevalence is provided. In addition to an exact calculation\n of AF, an approximate calculation, based on randomly sampling permutations has\n been implemented to ensure the calculation is computationally tractable when the\n number of risk factors is large.","Published":"2017-03-21","License":"CC0","snapshot_date":"2017-06-23"}
{"Package":"aVirtualTwins","Version":"1.0.0","Title":"Adaptation of Virtual Twins Method from Jared Foster","Description":"Research of subgroups in random clinical trials with binary outcome and two treatments groups. This is an adaptation of the Jared Foster method.","Published":"2016-10-09","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"AWR","Version":"1.11.89","Title":"'AWS' Java 'SDK' for R","Description":"Installs the compiled Java modules of the Amazon Web Services ('AWS') 'SDK' to be used in downstream R packages interacting with 'AWS'. See for more information on the 'AWS' 'SDK' for Java.","Published":"2017-02-13","License":"AGPL-3","snapshot_date":"2017-06-23"}
{"Package":"AWR.Athena","Version":"1.1.0","Title":"'AWS' Athena DBI Wrapper","Description":"'RJDBC' based DBI driver to Amazon Athena, which is an interactive\n query service to analyze data in Amazon S3 using standard SQL.","Published":"2017-06-16","License":"AGPL-3","snapshot_date":"2017-06-23"}
{"Package":"AWR.Kinesis","Version":"1.7.3","Title":"Amazon 'Kinesis' Consumer Application for Stream Processing","Description":"Fetching data from Amazon 'Kinesis' Streams using the Java-based 'MultiLangDaemon' interacting with Amazon Web Services ('AWS') for easy stream processing from R. For more information on 'Kinesis', see .","Published":"2017-02-26","License":"AGPL-3","snapshot_date":"2017-06-23"}
{"Package":"AWR.KMS","Version":"0.1","Title":"A Simple Client to the 'AWS' Key Management Service","Description":"Encrypt plain text and 'decrypt' cipher text using encryption keys hosted at Amazon Web Services ('AWS') Key Management Service ('KMS'), on which see for more information.","Published":"2017-02-20","License":"AGPL-3","snapshot_date":"2017-06-23"}
{"Package":"aws","Version":"1.9-6","Title":"Adaptive Weights Smoothing","Description":"Collection of R-functions implementing the\n Propagation-Separation Approach to adaptive smoothing as\n described in \"J. Polzehl and V. Spokoiny (2006)\n \"\n and \"J. Polzehl and V. Spokoiny (2004) \".","Published":"2016-09-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"aws.alexa","Version":"0.1.4","Title":"Client for the Amazon Alexa Web Information Services API","Description":"Use the Amazon Alexa Web Information Services API to \n find information about domains, including the kind of content \n that they carry, how popular are they---rank and traffic history, \n sites linking to them, among other things. See \n for more information.","Published":"2017-04-30","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"aws.ec2metadata","Version":"0.1.1","Title":"Get EC2 Instance Metadata","Description":"Retrieve Amazon EC2 instance metadata from within the running instance.","Published":"2016-12-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"aws.polly","Version":"0.1.2","Title":"Client for AWS Polly","Description":"A client for AWS Polly , a speech synthesis service.","Published":"2016-12-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"aws.s3","Version":"0.3.3","Title":"AWS S3 Client Package","Description":"A simple client package for the Amazon Web Services (AWS) Simple\n Storage Service (S3) REST API .","Published":"2017-06-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"aws.ses","Version":"0.1.4","Title":"AWS SES Client Package","Description":"A simple client package for the Amazon Web Services (AWS) Simple\n Email Service (SES) REST API.","Published":"2016-12-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"aws.signature","Version":"0.3.2","Title":"Amazon Web Services Request Signatures","Description":"Generates version 2 and version 4 request signatures for Amazon Web Services ('AWS') Application Programming Interfaces ('APIs').","Published":"2017-06-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"aws.sns","Version":"0.1.5","Title":"AWS SNS Client Package","Description":"A simple client package for the Amazon Web Services (AWS) Simple\n Notification Service (SNS) API.","Published":"2016-12-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"aws.sqs","Version":"0.1.8","Title":"AWS SQS Client Package","Description":"A simple client package for the Amazon Web Services (AWS) Simple\n Queue Service (SQS) API.","Published":"2016-12-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"awsjavasdk","Version":"0.2.0","Title":"Boilerplate R Access to the Amazon Web Services ('AWS') Java SDK","Description":"Provides boilerplate access to all of the classes included in the \n Amazon Web Services ('AWS') Java Software Development Kit (SDK) via \n package:'rJava'. According to Amazon, the 'SDK helps take the complexity \n out of coding by providing Java APIs for many AWS services including \n Amazon S3, Amazon EC2, DynamoDB, and more'. You can read more about the \n included Java code on Amazon's website: \n .","Published":"2017-01-01","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"awsMethods","Version":"1.0-4","Title":"Class and Methods Definitions for Packages 'aws', 'adimpro',\n'fmri', 'dwi'","Description":"Defines the method extract and provides 'openMP' support as needed in several packages.","Published":"2016-09-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"aylmer","Version":"1.0-11","Title":"A generalization of Fisher's exact test","Description":"A generalization of Fisher's exact test that allows for\n structural zeros.","Published":"2013-12-10","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"AzureML","Version":"0.2.13","Title":"Interface with Azure Machine Learning Datasets, Experiments and\nWeb Services","Description":"Functions and datasets to support Azure Machine Learning. This\n allows you to interact with datasets, as well as publish and consume R functions\n as API services.","Published":"2016-08-21","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"B2Z","Version":"1.4","Title":"Bayesian Two-Zone Model","Description":"This package fits the Bayesian two-Zone Models.","Published":"2011-07-18","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"b6e6rl","Version":"1.1","Title":"Adaptive differential evolution, b6e6rl variant","Description":"This package contains b6e6rl algorithm, adaptive\n differential evolution for global optimization.","Published":"2013-06-27","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"babar","Version":"1.0","Title":"Bayesian Bacterial Growth Curve Analysis in R","Description":"Babar is designed to use nested sampling (a Bayesian analysis technique) to compare possible models for bacterial growth curves, as well as extracting parameters. It allows model evidence and parameter likelihood values to be extracted, and also contains helper functions for comparing distributions as well as direct access to the underlying nested sampling code.","Published":"2015-02-24","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"babel","Version":"0.3-0","Title":"Ribosome Profiling Data Analysis","Description":"Included here are babel routines for identifying unusual ribosome protected fragment counts given mRNA counts.","Published":"2016-06-23","License":"LGPL (>= 2.1)","snapshot_date":"2017-06-23"}
{"Package":"BaBooN","Version":"0.2-0","Title":"Bayesian Bootstrap Predictive Mean Matching - Multiple and\nSingle Imputation for Discrete Data","Description":"Included are two variants of Bayesian Bootstrap\n Predictive Mean Matching to multiply impute missing data. The\n first variant is a variable-by-variable imputation combining\n sequential regression and Predictive Mean Matching (PMM) that\n has been extended for unordered categorical data. The Bayesian\n Bootstrap allows for generating approximately proper multiple\n imputations. The second variant is also based on PMM, but the\n focus is on imputing several variables at the same time. The\n suggestion is to use this variant, if the missing-data pattern\n resembles a data fusion situation, or any other\n missing-by-design pattern, where several variables have\n identical missing-data patterns. Both variants can be run as\n 'single imputation' versions, in case the analysis objective is\n of a purely descriptive nature.","Published":"2015-06-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"babynames","Version":"0.3.0","Title":"US Baby Names 1880-2015","Description":"US baby names provided by the SSA. This package contains all\n names used for at least 5 children of either sex.","Published":"2017-04-14","License":"CC0","snapshot_date":"2017-06-23"}
{"Package":"BACA","Version":"1.3","Title":"Bubble Chart to Compare Biological Annotations by using DAVID","Description":"R-based graphical tool to concisely visualise and compare biological annotations queried from the DAVID web service. It provides R functions to perform enrichment analysis (via DAVID - http://david.abcc.ncifcrf.gov) on several gene lists at once, and then visualizing all the results in one generated figure that allows R users to compare the annotations found for each list. ","Published":"2015-05-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BacArena","Version":"1.6","Title":"Modeling Framework for Cellular Communities in their\nEnvironments","Description":"Can be used for simulation of organisms living in\n communities. Each organism is represented individually and genome scale\n metabolic models determine the uptake and release of compounds. Biological\n processes such as movement, diffusion, chemotaxis and kinetics are available\n along with data analysis techniques.","Published":"2017-05-23","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"BACCO","Version":"2.0-9","Title":"Bayesian Analysis of Computer Code Output (BACCO)","Description":"The BACCO bundle of packages is replaced by the BACCO\n package, which provides a vignette that illustrates the constituent\n packages (emulator, approximator, calibrator) in use.","Published":"2013-12-12","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"BACCT","Version":"1.0","Title":"Bayesian Augmented Control for Clinical Trials","Description":"Implements the Bayesian Augmented Control (BAC, a.k.a. Bayesian historical data borrowing) method under clinical trial setting by calling 'Just Another Gibbs Sampler' ('JAGS') software. In addition, the 'BACCT' package evaluates user-specified decision rules by computing the type-I error/power, or probability of correct go/no-go decision at interim look. The evaluation can be presented numerically or graphically. Users need to have 'JAGS' 4.0.0 or newer installed due to a compatibility issue with 'rjags' package. Currently, the package implements the BAC method for binary outcome only. Support for continuous and survival endpoints will be added in future releases. We would like to thank AbbVie's Statistical Innovation group and Clinical Statistics group for their support in developing the 'BACCT' package.","Published":"2016-06-25","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"backblazer","Version":"0.1.0","Title":"Bindings to the Backblaze B2 API","Description":"Provides convenience functions for the Backblaze B2 cloud storage\n API (see https://www.backblaze.com/b2/docs/). All B2 API calls are mapped\n to equivalent R functions. Files can be easily uploaded, downloaded and\n deleted from B2, all from within R programs.","Published":"2016-01-31","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"backpipe","Version":"0.1.8.1","Title":"Backward Pipe Operator","Description":"Provides a backward-pipe operator for 'magrittr' (%<%) or \n 'pipeR' (%<<%) that allows for a performing operations from right-to-left. \n This indispensable for writing clear code where there is natural \n right-to-left ordering common with nested structures \n and hierarchies such as trees/directories or markup languages such as HTML \n and XML. ","Published":"2016-10-04","License":"GPL-2 | file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"backports","Version":"1.1.0","Title":"Reimplementations of Functions Introduced Since R-3.0.0","Description":"Implementations of functions which have been introduced in\n R since version 3.0.0. The backports are conditionally exported which\n results in R resolving the function names to the version shipped with R (if\n available) and uses the implemented backports as fallback. This way package\n developers can make use of the new functions without worrying about the\n minimum required R version.","Published":"2017-05-22","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"backShift","Version":"0.1.4.1","Title":"Learning Causal Cyclic Graphs from Unknown Shift Interventions","Description":"Code for 'backShift', an algorithm to estimate the connectivity\n matrix of a directed (possibly cyclic) graph with hidden variables. The\n underlying system is required to be linear and we assume that observations\n under different shift interventions are available. For more details,\n see .","Published":"2017-01-09","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"backtest","Version":"0.3-4","Title":"Exploring Portfolio-Based Conjectures About Financial\nInstruments","Description":"The backtest package provides facilities for exploring\n portfolio-based conjectures about financial instruments\n (stocks, bonds, swaps, options, et cetera).","Published":"2015-09-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"backtestGraphics","Version":"0.1.6","Title":"Interactive Graphics for Portfolio Data","Description":"Creates an interactive graphics \n interface to visualize backtest results of different financial \n instruments, such as equities, futures, and credit default swaps.\n The package does not run backtests on the given data set but \n displays a graphical explanation of the backtest results. Users can\n look at backtest graphics for different instruments, investment \n strategies, and portfolios. Summary statistics of different \n portfolio holdings are shown in the left panel, and interactive \n plots of profit and loss (P\\&L), net market value (NMV) and \n gross market value (GMV) are displayed in the right panel. ","Published":"2015-10-21","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"BACprior","Version":"2.0","Title":"Choice of the Hyperparameter Omega in the Bayesian Adjustment\nfor Confounding (BAC) Algorithm","Description":"The BACprior package provides an approximate sensitivity analysis of the \n Bayesian Adjustment for Confounding (BAC) algorithm (Wang et al., 2012) with regards to the\n hyperparameter omega. The package also provides functions to guide the user in their choice\n of an appropriate omega value. The method is based on Lefebvre, Atherton and Talbot (2014).","Published":"2014-11-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bacr","Version":"1.0.1","Title":"Bayesian Adjustment for Confounding","Description":"Estimating the average causal effect based on the Bayesian Adjustment for Confounding (BAC) algorithm.","Published":"2016-10-23","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"badgecreatr","Version":"0.1.0","Title":"Create Badges for 'Travis', 'Repostatus' 'Codecov.io' Etc in\nGithub Readme","Description":"Tired of copy and pasting almost identical markdown for badges in\n every new R package that you create on Github? \n This package will search your DESCRIPTION file and extract the package name,\n licence, R-version, and current projectversion and transform that into \n badges. It will also search for a .travis.yml file and create a 'Travis' badge,\n if you use 'Codecov.io' to check your code coverage after a 'Travis' build \n this package will also build a 'Codecov.io'-badge. All the badges will be \n placed below the top YAML content of your Rmarkdown file (Readme.Rmd). \n Currently creates badges for Projectstatus (Repostatus.org), licence\n travis build status, codecov, minimal R version, CRAN status, \n current version of your package and last change of Readme.Rmd.","Published":"2016-07-03","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"badger","Version":"0.0.2","Title":"Badge for R Package","Description":"Query information and generate badge for using in README\n and GitHub Pages.","Published":"2017-03-08","License":"Artistic-2.0","snapshot_date":"2017-06-23"}
{"Package":"BAEssd","Version":"1.0.1","Title":"Bayesian Average Error approach to Sample Size Determination","Description":"Implements sample size calculations following the approach\n described in \"Bayesian Average Error Based Approach to\n Hypothesis Testing and Sample Size Determination.\"","Published":"2012-11-17","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"Bagidis","Version":"1.0","Title":"BAses GIving DIStances","Description":"This is the companion package of a PhD thesis entitled \"Bases Giving Distances. A new paradigm for investigating functional data with applications for spectroscopy\" by Timmermans (2012). See references for details and related publications. The core of the BAGIDIS methodology is a functional wavelet based semi-distance that has been introduced by Timmermans and von Sachs (2010, 2015) and Timmermans, Delsol and von Sachs (2013). This semi-distance allows for comparing curves with sharp local patterns that might not be well aligned from one curve to another. It is data-driven and highly adaptive to the curves being studied. Its main originality is its ability to consider simultaneously horizontal and vertical variations of patterns, which proofs highly useful when used together with clustering algorithms or visualization method. BAGIDIS is an acronym for BAsis GIving DIStances. The extension of BAGIDIS to image data relies on the same principles and has been described in Timmermans and Fryzlewicz (2012), Fryzlewicz and Timmermans (2015). ","Published":"2015-06-26","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"bagRboostR","Version":"0.0.2","Title":"Ensemble bagging and boosting classifiers","Description":"bagRboostR is a set of ensemble classifiers for multinomial\n classification. The bagging function is the implementation of Breiman's\n ensemble as described by Opitz & Maclin (1999). The boosting function is\n the implementation of Stagewise Additive Modeling using a Multi-class\n Exponential loss function (SAMME) created by Zhu et al (2006). Both bagging\n and SAMME implementations use randomForest as the weak classifier and\n expect a character outcome variable. Each ensemble classifier returns a\n character vector of predictions for the test set.","Published":"2014-03-05","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"baitmet","Version":"1.0.1","Title":"Library Driven Compound Profiling in Gas Chromatography - Mass\nSpectrometry Data","Description":"Automated quantification of metabolites by targeting mass spectral/retention time libraries into full scan-acquired gas chromatography - mass spectrometry (GC-MS) chromatograms. Baitmet outputs a table with compounds name, spectral matching score, retention index error, and compounds area in each sample. Baitmet can automatically determine the compounds retention indexes with or without co-injection of internal standards with samples.","Published":"2017-05-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BalanceCheck","Version":"0.1","Title":"Balance Check for Multiple Covariates in Matched Observational\nStudies","Description":"Two practical tests are provided for assessing whether multiple covariates in a treatment group and a matched control group are balanced in observational studies. ","Published":"2016-09-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BalancedSampling","Version":"1.5.2","Title":"Balanced and Spatially Balanced Sampling","Description":"Select balanced and spatially balanced probability samples in multi-dimensional spaces with any prescribed inclusion probabilities. It contains fast (C++ via Rcpp) implementations of the included sampling methods. The local pivotal method and spatially correlated Poisson sampling (for spatially balanced sampling) are included. Also the cube method (for balanced sampling) and the local cube method (for doubly balanced sampling) are included.","Published":"2016-07-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BaM","Version":"1.0.1","Title":"Functions and Datasets for Books by Jeff Gill","Description":"Functions and datasets for Jeff Gill: \"Bayesian Methods: A Social and Behavioral Sciences Approach\". First, Second, and Third Edition. Published by Chapman and Hall/CRC (2002, 2007, 2014).","Published":"2016-05-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BAMBI","Version":"1.1.0","Title":"Bivariate Angular Mixture Models","Description":"Fit (using Bayesian methods) and simulate mixtures of univariate and bivariate angular distributions.","Published":"2017-02-06","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"bamboo","Version":"0.9.18","Title":"Protein Secondary Structure Prediction Using the Bamboo Method","Description":"Implementation of the Bamboo methods described in Li, Dahl, Vannucci, Joo, and Tsai (2014) .","Published":"2017-05-25","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"bamdit","Version":"3.1.0","Title":"Bayesian Meta-Analysis of Diagnostic Test Data","Description":"Functions for Bayesian meta-analysis of diagnostic test data which\n are based on a scale mixtures bivariate random-effects model.","Published":"2017-05-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bamlss","Version":"0.1-2","Title":"Bayesian Additive Models for Location Scale and Shape (and\nBeyond)","Description":"Infrastructure for estimating probabilistic distributional regression models in a Bayesian framework.\n The distribution parameters may capture location, scale, shape, etc. and every parameter may depend\n on complex additive terms (fixed, random, smooth, spatial, etc.) similar to a generalized additive model.\n The conceptual and computational framework is introduced in Umlauf, Klein, Zeileis (2017)\n .","Published":"2017-04-14","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"}
{"Package":"BAMMtools","Version":"2.1.6","Title":"Analysis and Visualization of Macroevolutionary Dynamics on\nPhylogenetic Trees","Description":"Provides functions for analyzing and visualizing complex\n macroevolutionary dynamics on phylogenetic trees. It is a companion\n package to the command line program BAMM (Bayesian Analysis of\n Macroevolutionary Mixtures) and is entirely oriented towards the analysis,\n interpretation, and visualization of evolutionary rates. Functionality\n includes visualization of rate shifts on phylogenies, estimating\n evolutionary rates through time, comparing posterior distributions of\n evolutionary rates across clades, comparing diversification models using\n Bayes factors, and more.","Published":"2017-02-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bandit","Version":"0.5.0","Title":"Functions for simple A/B split test and multi-armed bandit\nanalysis","Description":"A set of functions for doing analysis of A/B split test data and web metrics in general.","Published":"2014-05-06","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"BANFF","Version":"2.0","Title":"Bayesian Network Feature Finder","Description":"Provides a full package of posterior inference, model comparison, and graphical illustration of model fitting. A parallel computing algorithm for the Markov chain Monte Carlo (MCMC) based posterior inference and an Expectation-Maximization (EM) based algorithm for posterior approximation are are developed, both of which greatly reduce the computational time for model inference.","Published":"2017-03-11","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"bannerCommenter","Version":"0.1.0","Title":"Make Banner Comments with a Consistent Format","Description":"A convenience package for use while drafting code.\n It facilitates making stand-out comment lines decorated with\n bands of characters. The input text strings are converted into\n R comment lines, suitably formatted. These are then displayed in\n a console window and, if possible, automatically transferred to a\n clipboard ready for pasting into an R script. Designed to save\n time when drafting R scripts that will need to be navigated and\n maintained by other programmers.","Published":"2016-12-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BANOVA","Version":"0.8","Title":"Hierarchical Bayesian ANOVA Models","Description":"It covers several Bayesian Analysis of Variance (BANOVA) models used in analysis of experimental designs in which both within- and between- subjects factors are manipulated. They can be applied to data that are common in the behavioral and social sciences. The package includes: Hierarchical Bayes ANOVA models with normal response, t response, Binomial(Bernoulli) response, Poisson response, ordered multinomial response and multinomial response variables. All models accommodate unobserved heterogeneity by including a normal distribution of the parameters across individuals. Outputs of the package include tables of sums of squares, effect sizes and p-values, and tables of predictions, which are easily interpretable for behavioral and social researchers. The floodlight analysis and mediation analysis based on these models are also provided. BANOVA uses JAGS as the computational platform.","Published":"2016-12-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"banxicoR","Version":"0.9.0","Title":"Download Data from the Bank of Mexico","Description":"Provides functions to scrape IQY calls to Bank of Mexico,\n downloading and ordering the data conveniently.","Published":"2016-08-17","License":"CC0","snapshot_date":"2017-06-23"}
{"Package":"bapred","Version":"1.0","Title":"Batch Effect Removal and Addon Normalization (in Phenotype\nPrediction using Gene Data)","Description":"Various tools dealing with batch effects, in particular enabling the \n removal of discrepancies between training and test sets in prediction scenarios.\n Moreover, addon quantile normalization and addon RMA normalization (Kostka & Spang, \n 2008) is implemented to enable integrating the quantile normalization step into \n prediction rules. The following batch effect removal methods are implemented: \n FAbatch, ComBat, (f)SVA, mean-centering, standardization, Ratio-A and Ratio-G. \n For each of these we provide an additional function which enables a posteriori \n ('addon') batch effect removal in independent batches ('test data'). Here, the\n (already batch effect adjusted) training data is not altered. For evaluating the\n success of batch effect adjustment several metrics are provided. Moreover, the \n package implements a plot for the visualization of batch effects using principal \n component analysis. The main functions of the package for batch effect adjustment \n are ba() and baaddon() which enable batch effect removal and addon batch effect \n removal, respectively, with one of the seven methods mentioned above. Another \n important function here is bametric() which is a wrapper function for all implemented\n methods for evaluating the success of batch effect removal. For (addon) quantile \n normalization and (addon) RMA normalization the functions qunormtrain(), \n qunormaddon(), rmatrain() and rmaaddon() can be used.","Published":"2016-06-03","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"BaPreStoPro","Version":"0.1","Title":"Bayesian Prediction of Stochastic Processes","Description":"Bayesian estimation and prediction for stochastic processes based\n on the Euler approximation. Considered processes are: jump diffusion,\n (mixed) diffusion models, hidden (mixed) diffusion models, non-homogeneous\n Poisson processes (NHPP), (mixed) regression models for comparison and a\n regression model including a NHPP.","Published":"2016-06-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BarBorGradient","Version":"1.0.5","Title":"Function Minimum Approximator","Description":"Tool to find where a function has its lowest value(minimum). The\n functions can be any dimensions. Recommended use is with eps=10^-10, but can be\n run with 10^-20, although this depends on the function. Two more methods are in\n this package, simple gradient method (Gradmod) and Powell method (Powell). These\n are not recommended for use, their purpose are purely for comparison.","Published":"2017-04-24","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"barcode","Version":"1.1","Title":"Barcode distribution plots","Description":"This package includes the function \\code{barcode()}, which\n produces a histogram-like plot of a distribution that shows\n granularity in the data.","Published":"2012-02-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BarcodingR","Version":"1.0-2","Title":"Species Identification using DNA Barcodes","Description":"To perform species identification using DNA barcodes.","Published":"2016-10-18","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"Barnard","Version":"1.8","Title":"Barnard's Unconditional Test","Description":"Barnard's unconditional test for 2x2 contingency tables.","Published":"2016-10-20","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"BART","Version":"1.2","Title":"Bayesian Additive Regression Trees","Description":"Bayesian Additive Regression Trees (BART) provide flexible nonparametric modeling of covariates for continuous, binary and time-to-event outcomes. For more information on BART, see Chipman, George and McCulloch (2010) and Sparapani, Logan, McCulloch and Laud (2016) . ","Published":"2017-04-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bartMachine","Version":"1.2.3","Title":"Bayesian Additive Regression Trees","Description":"An advanced implementation of Bayesian Additive Regression Trees with expanded features for data analysis and visualization.","Published":"2016-05-12","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"bartMachineJARs","Version":"1.0","Title":"bartMachine JARs","Description":"These are bartMachine's Java dependency libraries. Note: this package has no functionality of its own and should not be installed as a standalone package without bartMachine.","Published":"2016-02-27","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"Barycenter","Version":"1.0","Title":"Wasserstein Barycenter","Description":"Computation of a Wasserstein Barycenter. The package implements a method described in Cuturi (2014) \"Fast Computation of Wasserstein Barycenters\". The paper is available at . To speed up the computation time the main iteration step is based on 'RcppArmadillo'.","Published":"2016-09-21","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"BAS","Version":"1.4.6","Title":"Bayesian Model Averaging using Bayesian Adaptive Sampling","Description":"Package for Bayesian Model Averaging in linear models and\n generalized linear models using stochastic or\n deterministic sampling without replacement from posterior\n distributions. Prior distributions on coefficients are\n from Zellner's g-prior or mixtures of g-priors\n corresponding to the Zellner-Siow Cauchy Priors or the\n mixture of g-priors from Liang et al (2008)\n \n for linear models or mixtures of g-priors in GLMs of Li and Clyde (2015)\n . Other model\n selection criteria include AIC, BIC and Empirical Bayes estimates of g.\n Sampling probabilities may be updated based on the sampled models\n using Sampling w/out Replacement or an efficient MCMC algorithm\n samples models using the BAS tree structure as an efficient hash table.\n Uniform priors over all models or beta-binomial prior distributions on\n model size are allowed, and for large p truncated priors on the model\n space may be used. The user may force variables to always be included.\n Details behind the sampling algorithm are provided in\n Clyde, Ghosh and Littman (2010) .\n This material is based upon work supported by the National Science\n Foundation under Grant DMS-1106891. Any opinions, findings, and\n conclusions or recommendations expressed in this material are those of\n the author(s) and do not necessarily reflect the views of the\n National Science Foundation.","Published":"2017-05-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"base64","Version":"2.0","Title":"Base64 Encoder and Decoder","Description":"Compatibility wrapper to replace the orphaned package by\n Romain Francois. New applications should use the 'openssl' or\n 'base64enc' package instead.","Published":"2016-05-10","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"base64enc","Version":"0.1-3","Title":"Tools for base64 encoding","Description":"This package provides tools for handling base64 encoding. It is more flexible than the orphaned base64 package.","Published":"2015-07-28","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"}
{"Package":"base64url","Version":"1.2","Title":"Fast and URL-Safe Base64 Encoder and Decoder","Description":"In contrast to RFC3548, the 62nd character (\"+\") is replaced with\n \"-\", the 63rd character (\"/\") is replaced with \"_\". Furthermore, the encoder\n does not fill the string with trailing \"=\". The resulting encoded strings\n comply to the regular expression pattern \"[A-Za-z0-9_-]\" and thus are\n safe to use in URLs or for file names.\n The package also comes with a simple base32 encoder/decoder suited for\n case insensitive file systems.","Published":"2017-06-14","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"baseballDBR","Version":"0.1.2","Title":"Sabermetrics and Advanced Baseball Statistics","Description":"A tool for gathering and analyzing data from the Baseball Databank , which includes player performance statistics from major league baseball in the United States beginning in the year 1871.","Published":"2017-06-15","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"basefun","Version":"0.0-38","Title":"Infrastructure for Computing with Basis Functions","Description":"Some very simple infrastructure for basis functions.","Published":"2017-05-17","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"baseline","Version":"1.2-1","Title":"Baseline Correction of Spectra","Description":"Collection of baseline correction algorithms, along with a framework and a GUI for optimising baseline algorithm parameters. Typical use of the package is for removing background effects from spectra originating from various types of spectroscopy and spectrometry, possibly optimizing this with regard to regression or classification results. Correction methods include polynomial fitting, weighted local smoothers and many more.","Published":"2015-07-05","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"BASIX","Version":"1.1","Title":"BASIX: An efficient C/C++ toolset for R","Description":"BASIX provides some efficient C/C++ implementations to speed up calculations in R. ","Published":"2013-10-24","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"BASS","Version":"0.2.2","Title":"Bayesian Adaptive Spline Surfaces","Description":"Bayesian fitting and sensitivity analysis methods for adaptive\n spline surfaces. Built to handle continuous and categorical inputs as well as\n functional or scalar output. An extension of the methodology in Denison, Mallick\n and Smith (1998) .","Published":"2017-03-17","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"BaSTA","Version":"1.9.4","Title":"Age-Specific Survival Analysis from Incomplete\nCapture-Recapture/Recovery Data","Description":"Estimates survival and mortality with covariates from capture-recapture/recovery data in a Bayesian framework when many individuals are of unknown age. It includes tools for data checking, model diagnostics and outputs such as life-tables and plots.","Published":"2015-11-08","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"bastah","Version":"1.0.7","Title":"Big Data Statistical Analysis for High-Dimensional Models","Description":"Big data statistical analysis for high-dimensional models is made possible by modifying lasso.proj() in 'hdi' package by replacing its nodewise-regression with sparse precision matrix computation using 'BigQUIC'.","Published":"2016-06-02","License":"GPL (== 2)","snapshot_date":"2017-06-23"}
{"Package":"BAT","Version":"1.5.5","Title":"Biodiversity Assessment Tools","Description":"Includes algorithms to assess alpha and beta\n diversity in all their dimensions (taxon, phylogenetic and functional\n diversity), whether communities are completely sampled or not. It allows\n performing a number of analyses based on either species identities or\n phylogenetic/functional trees depicting species relationships.","Published":"2016-12-03","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"batade","Version":"0.1","Title":"HTML reports and so on","Description":"This package provides some utility functions (e.g HTML\n report maker).","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"batch","Version":"1.1-4","Title":"Batching Routines in Parallel and Passing Command-Line Arguments\nto R","Description":"Functions to allow you to easily pass command-line\n arguments into R, and functions to aid in submitting your R\n code in parallel on a cluster and joining the results afterward\n (e.g. multiple parameter values for simulations running in\n parallel, splitting up a permutation test in parallel, etc.).\n See `parseCommandArgs(...)' for the main example of how to use\n this package.","Published":"2013-06-05","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"BatchExperiments","Version":"1.4.1","Title":"Statistical Experiments on Batch Computing Clusters","Description":"Extends the BatchJobs package to run statistical experiments on\n batch computing clusters. For further details see the project web page.","Published":"2015-03-18","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"BatchGetSymbols","Version":"1.1","Title":"Downloads and Organizes Financial Data for Multiple Tickers","Description":"Makes it easy to download a large number of trade data from Yahoo or Google Finance.","Published":"2016-12-05","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"BatchJobs","Version":"1.6","Title":"Batch Computing with R","Description":"Provides Map, Reduce and Filter variants to generate jobs on batch\n computing systems like PBS/Torque, LSF, SLURM and Sun Grid Engine.\n Multicore and SSH systems are also supported. For further details see the\n project web page.","Published":"2015-03-18","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"BatchMap","Version":"1.0.1.0","Title":"Software for the Creation of High Density Linkage Maps in\nOutcrossing Species","Description":"Algorithms that build on the 'OneMap' package to create linkage\n maps from high density data in outcrossing species in reasonable time frames.","Published":"2017-03-30","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"batchmeans","Version":"1.0-3","Title":"Consistent Batch Means Estimation of Monte Carlo Standard Errors","Description":"Provides consistent batch means estimation of Monte\n Carlo standard errors.","Published":"2016-07-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"batchtools","Version":"0.9.3","Title":"Tools for Computation on Batch Systems","Description":"As a successor of the packages 'BatchJobs' and 'BatchExperiments',\n this package provides a parallel implementation of the Map function for high\n performance computing systems managed by schedulers 'IBM Spectrum LSF'\n (),\n 'OpenLava' (), 'Univa Grid Engine'/'Oracle Grid\n Engine' (), 'Slurm' (),\n 'TORQUE/PBS'\n (), or\n 'Docker Swarm' ().\n A multicore and socket mode allow the parallelization on a local machines,\n and multiple machines can be hooked up via SSH to create a makeshift\n cluster. Moreover, the package provides an abstraction mechanism to define\n large-scale computer experiments in a well-organized and reproducible way.","Published":"2017-04-21","License":"LGPL-3","snapshot_date":"2017-06-23"}
{"Package":"BaTFLED3D","Version":"0.2.1","Title":"Bayesian Tensor Factorization Linked to External Data","Description":"BaTFLED is a machine learning algorithm designed to make predictions and determine interactions in data that varies along three independent modes. For example BaTFLED was developed to predict the growth of cell lines when treated with drugs at different doses. The first mode corresponds to cell lines and incorporates predictors such as cell line genomics and growth conditions. The second mode corresponds to drugs and incorporates predictors indicating known targets and structural features. The third mode corresponds to dose and there are no dose-specific predictors (although the algorithm is capable of including predictors for the third mode if present). See 'BaTFLED3D_vignette.rmd' for a simulated example.","Published":"2017-04-02","License":"CC BY-NC-SA 4.0","snapshot_date":"2017-06-23"}
{"Package":"batman","Version":"0.1.0","Title":"Convert Categorical Representations of Logicals to Actual\nLogicals","Description":"Survey systems and other third-party data sources commonly use non-standard representations of logical values when\n it comes to qualitative data - \"Yes\", \"No\" and \"N/A\", say. batman is a package designed to seamlessly convert these into logicals.\n It is highly localised, and contains equivalents to boolean values in languages including German, French, Spanish, Italian,\n Turkish, Chinese and Polish.","Published":"2015-10-26","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"batteryreduction","Version":"0.1.1","Title":"An R Package for Data Reduction by Battery Reduction","Description":"Battery reduction is a method used in data reduction. It uses Gram-Schmidt orthogonal rotations to find out a subset of variables best representing the original set of variables. ","Published":"2015-12-23","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"BayClone2","Version":"1.1","Title":"Bayesian Feature Allocation Model for Tumor Heterogeneity","Description":"A Bayesian feature allocation model is implemented for inference on tumor heterogeneity using next-generation sequencing data. The model identifies the subclonal copy number and single nucleotide mutations at a selected set of loci and provides inference on genetic tumor variation.","Published":"2014-12-24","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"bayesAB","Version":"0.7.0","Title":"Fast Bayesian Methods for AB Testing","Description":"A suite of functions that allow the user to analyze A/B test\n data in a Bayesian framework. Intended to be a drop-in replacement for\n common frequentist hypothesis test such as the t-test and chi-sq test.","Published":"2016-10-09","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"BayesBD","Version":"1.1","Title":"Bayesian Inference for Image Boundaries","Description":"Provides tools for carrying out a Bayesian analysis of image boundaries. Functions are provided\n for both binary (Bernoulli) and continuous (Gaussian) images. Examples, along with an interactive shiny function\n illustrate how to perform simulations, analyze custom data, and plot estimates and credible intervals. ","Published":"2016-12-14","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"BayesBinMix","Version":"1.4","Title":"Bayesian Estimation of Mixtures of Multivariate Bernoulli\nDistributions","Description":"Fully Bayesian inference for estimating the number of clusters and related parameters to heterogeneous binary data.","Published":"2017-01-15","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"bayesbio","Version":"1.0.0","Title":"Miscellaneous Functions for Bioinformatics and Bayesian\nStatistics","Description":"A hodgepodge of hopefully helpful functions. Two of these perform\n shrinkage estimation: one using a simple weighted method where the user can\n specify the degree of shrinkage required, and one using James-Stein shrinkage\n estimation for the case of unequal variances.","Published":"2016-05-24","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"bayesboot","Version":"0.2.1","Title":"An Implementation of Rubin's (1981) Bayesian Bootstrap","Description":"Functions for performing the Bayesian bootstrap as introduced by\n Rubin (1981) and for summarizing the result.\n The implementation can handle both summary statistics that works on a\n weighted version of the data and summary statistics that works on a\n resampled data set.","Published":"2016-04-06","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"BayesBridge","Version":"0.6","Title":"Bridge Regression","Description":"Bayesian bridge regression.","Published":"2015-02-14","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"bayesCL","Version":"0.0.1","Title":"Bayesian Inference on a GPU using OpenCL","Description":"Bayesian Inference on a GPU. The package currently supports sampling from PolyaGamma, Multinomial logit and Bayesian lasso.","Published":"2017-04-14","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"BayesCombo","Version":"1.0","Title":"Bayesian Evidence Combination","Description":"Combine diverse evidence across multiple studies to test a high level scientific theory. The methods can also be used as an alternative to a standard meta-analysis.","Published":"2017-02-08","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"BayesComm","Version":"0.1-2","Title":"Bayesian Community Ecology Analysis","Description":"Bayesian multivariate binary (probit) regression\n models for analysis of ecological communities.","Published":"2015-07-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bayescount","Version":"0.9.99-5","Title":"Power Calculations and Bayesian Analysis of Count Distributions\nand FECRT Data using MCMC","Description":"A set of functions to allow analysis of count data (such\n as faecal egg count data) using Bayesian MCMC methods. Returns\n information on the possible values for mean count, coefficient\n of variation and zero inflation (true prevalence) present in\n the data. A complete faecal egg count reduction test (FECRT)\n model is implemented, which returns inference on the true\n efficacy of the drug from the pre- and post-treatment data\n provided, using non-parametric bootstrapping as well as using\n Bayesian MCMC. Functions to perform power analyses for faecal\n egg counts (including FECRT) are also provided.","Published":"2015-04-20","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"BayesCR","Version":"2.0","Title":"Bayesian Analysis of Censored Regression Models Under Scale\nMixture of Skew Normal Distributions","Description":"Propose a parametric fit for censored linear regression models based on SMSN distributions, from a Bayesian perspective. Also, generates SMSN random variables.","Published":"2015-01-31","License":"GPL (>= 3.1.2)","snapshot_date":"2017-06-23"}
{"Package":"BayesDA","Version":"2012.04-1","Title":"Functions and Datasets for the book \"Bayesian Data Analysis\"","Description":"Functions for Bayesian Data Analysis, with datasets from\n the book \"Bayesian data Analysis (second edition)\" by Gelman,\n Carlin, Stern and Rubin. Not all datasets yet, hopefully\n completed soon.","Published":"2012-04-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bayesDccGarch","Version":"2.0","Title":"The Bayesian Dynamic Conditional Correlation GARCH Model","Description":"Bayesian estimation of dynamic conditional correlation GARCH model for multivariate time series volatility (Fioruci, J.A., Ehlers, R.S. and Andrade-Filho, M.G., (2014), DOI:10.1080/02664763.2013.839635).","Published":"2016-02-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BAYESDEF","Version":"0.1.0","Title":"Bayesian Analysis of DSD","Description":"Definitive Screening Designs are a class of experimental designs that under factor sparsity have the potential to estimate linear, quadratic and interaction effects with little experimental effort. BAYESDEF is a package that performs a five step strategy to analyze this kind of experiments that makes use of tools coming from the Bayesian approach. It also includes the least absolute shrinkage and selection operator (lasso) as a check (Aguirre VM. (2016) ).","Published":"2017-06-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bayesDem","Version":"2.5-1","Title":"Graphical User Interface for bayesTFR, bayesLife and bayesPop","Description":"Provides graphical user interface for the packages 'bayesTFR', 'bayesLife' and 'bayesPop'.","Published":"2016-11-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bayesDP","Version":"1.1.1","Title":"Tools for the Bayesian Discount Prior Function","Description":"Functions for data augmentation using the\n Bayesian discount prior function for 1 arm and 2 arm clinical trials.","Published":"2017-05-15","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"BayesFactor","Version":"0.9.12-2","Title":"Computation of Bayes Factors for Common Designs","Description":"A suite of functions for computing\n various Bayes factors for simple designs, including contingency tables,\n one- and two-sample designs, one-way designs, general ANOVA designs, and\n linear regression.","Published":"2015-09-19","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"BayesFM","Version":"0.1.2","Title":"Bayesian Inference for Factor Modeling","Description":"Collection of procedures to perform Bayesian analysis on a variety\n of factor models. Currently, it includes: Bayesian Exploratory Factor\n Analysis (befa), an approach to dedicated factor analysis with stochastic\n search on the structure of the factor loading matrix. The number of latent\n factors, as well as the allocation of the manifest variables to the factors,\n are not fixed a priori but determined during MCMC sampling.\n More approaches will be included in future releases of this package.","Published":"2017-02-20","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"bayesGARCH","Version":"2.1.3","Title":"Bayesian Estimation of the GARCH(1,1) Model with Student-t\nInnovations","Description":"Provides the bayesGARCH() function which performs the\n Bayesian estimation of the GARCH(1,1) model with Student's t innovations as described in Ardia (2008) .","Published":"2017-02-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bayesGDS","Version":"0.6.2","Title":"Scalable Rejection Sampling for Bayesian Hierarchical Models","Description":"Functions for implementing the Braun and Damien (2015) rejection\n sampling algorithm for Bayesian hierarchical models. The algorithm generates\n posterior samples in parallel, and is scalable when the individual units are\n conditionally independent.","Published":"2016-03-16","License":"MPL (== 2.0)","snapshot_date":"2017-06-23"}
{"Package":"BayesGESM","Version":"1.4","Title":"Bayesian Analysis of Generalized Elliptical Semi-Parametric\nModels and Flexible Measurement Error Models","Description":"Set of tools to perform the statistical inference based on the Bayesian approach for regression models under the assumption that independent additive errors follow normal, Student-t, slash, contaminated normal, Laplace or symmetric hyperbolic distributions, i.e., additive errors follow a scale mixtures of normal distributions. The regression models considered in this package are: (i) Generalized elliptical semi-parametric models, where both location and dispersion parameters of the response variable distribution include non-parametric additive components described by using B-splines; and (ii) Flexible measurement error models under the presence of homoscedastic and heteroscedastic random errors, which admit explanatory variables with and without measurement additive errors as well as the presence of a non-parametric components approximated by using B-splines. ","Published":"2015-06-06","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"}
{"Package":"BayesH","Version":"1.0","Title":"Bayesian Regression Model with Mixture of Two Scaled Inverse Chi\nSquare as Hyperprior","Description":"Functions to performs Bayesian regression model with mixture of two scaled inverse\n chi square as hyperprior distribution for variance of each regression coefficient.","Published":"2016-04-04","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"BayesianAnimalTracker","Version":"1.2","Title":"Bayesian Melding of GPS and DR Path for Animal Tracking","Description":"Bayesian melding approach to combine the GPS observations and Dead-Reckoned path for an accurate animal's track, or equivalently, use the GPS observations to correct the Dead-Reckoned path. It can take the measurement errors in the GPS observations into account and provide uncertainty statement about the corrected path. The main calculation can be done by the BMAnimalTrack function.","Published":"2014-12-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"Bayesianbetareg","Version":"1.2","Title":"Bayesian Beta regression: joint mean and precision modeling","Description":"This package performs beta regression","Published":"2014-07-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bayesianETAS","Version":"1.0.3","Title":"Bayesian Estimation of the ETAS Model for Earthquake Occurrences","Description":"The Epidemic Type Aftershock Sequence (ETAS) model is one of the best-performing methods for modeling and forecasting earthquake occurrences. This package implements Bayesian estimation routines to draw samples from the full posterior distribution of the model parameters, given an earthquake catalog. The paper on which this package is based is Gordon J. Ross - Bayesian Estimation of the ETAS Model for Earthquake Occurrences (2016), available from the below URL.","Published":"2017-01-17","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"BayesianNetwork","Version":"0.1.1","Title":"Bayesian Network Modeling and Analysis","Description":"A 'Shiny' web application for creating interactive Bayesian Network models,\n learning the structure and parameters of Bayesian networks, and utilities for classical\n network analysis.","Published":"2016-10-25","License":"Apache License | file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"BayesianTools","Version":"0.1.2","Title":"General-Purpose MCMC and SMC Samplers and Tools for Bayesian\nStatistics","Description":"General-purpose MCMC and SMC samplers, as well as plot and\n diagnostic functions for Bayesian statistics, with a particular focus on\n calibrating complex system models. Implemented samplers include various\n Metropolis MCMC variants (including adaptive and/or delayed rejection MH), the\n T-walk, two differential evolution MCMCs, two DREAM MCMCs, and a sequential\n Monte Carlo (SMC) particle filter.","Published":"2017-05-27","License":"CC BY-SA 4.0","snapshot_date":"2017-06-23"}
{"Package":"bayesImageS","Version":"0.4-0","Title":"Bayesian Methods for Image Segmentation using a Potts Model","Description":"Various algorithms for segmentation of 2D and 3D images, such\n as computed tomography and satellite remote sensing. This package implements\n Bayesian image analysis using the hidden Potts model with external field\n prior. Latent labels are sampled using chequerboard updating or Swendsen-Wang.\n Algorithms for the smoothing parameter include pseudolikelihood, path sampling,\n the exchange algorithm, and approximate Bayesian computation (ABC).","Published":"2017-03-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BayesLCA","Version":"1.7","Title":"Bayesian Latent Class Analysis","Description":"Bayesian Latent Class Analysis using several different\n methods.","Published":"2015-04-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bayesLife","Version":"3.0-5","Title":"Bayesian Projection of Life Expectancy","Description":"Making probabilistic projections of life expectancy for all countries of the world, using a Bayesian hierarchical model .","Published":"2017-05-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BayesLogit","Version":"0.6","Title":"Logistic Regression","Description":"The BayesLogit package does posterior simulation for binomial and\n multinomial logistic regression using the Polya-Gamma latent variable\n technique. This method is fully automatic, exact, and fast. A routine to\n efficiently sample from the Polya-Gamma class of distributions is included.","Published":"2016-10-20","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"bayesloglin","Version":"1.0.1","Title":"Bayesian Analysis of Contingency Table Data","Description":"The function MC3() searches for log-linear models with the highest posterior probability. The function gibbsSampler() is a blocked Gibbs sampler for sampling from the posterior distribution of the log-linear parameters. The functions findPostMean() and findPostCov() compute the posterior mean and covariance matrix for decomposable models which, for these models, is available in closed form.","Published":"2016-12-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bayesm","Version":"3.0-2","Title":"Bayesian Inference for Marketing/Micro-Econometrics","Description":"Covers many important models used\n in marketing and micro-econometrics applications. \n The package includes:\n Bayes Regression (univariate or multivariate dep var),\n Bayes Seemingly Unrelated Regression (SUR),\n Binary and Ordinal Probit,\n Multinomial Logit (MNL) and Multinomial Probit (MNP),\n Multivariate Probit,\n Negative Binomial (Poisson) Regression,\n Multivariate Mixtures of Normals (including clustering),\n Dirichlet Process Prior Density Estimation with normal base,\n Hierarchical Linear Models with normal prior and covariates,\n Hierarchical Linear Models with a mixture of normals prior and covariates,\n Hierarchical Multinomial Logits with a mixture of normals prior\n and covariates,\n Hierarchical Multinomial Logits with a Dirichlet Process prior and covariates,\n Hierarchical Negative Binomial Regression Models,\n Bayesian analysis of choice-based conjoint data,\n Bayesian treatment of linear instrumental variables models,\n Analysis of Multivariate Ordinal survey data with scale\n usage heterogeneity (as in Rossi et al, JASA (01)),\n Bayesian Analysis of Aggregate Random Coefficient Logit Models as in BLP (see\n Jiang, Manchanda, Rossi 2009)\n For further reference, consult our book, Bayesian Statistics and\n Marketing by Rossi, Allenby and McCulloch (Wiley 2005) and Bayesian Non- and Semi-Parametric\n Methods and Applications (Princeton U Press 2014).","Published":"2015-06-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BayesMAMS","Version":"0.1","Title":"Designing Bayesian Multi-Arm Multi-Stage Studies","Description":"Calculating Bayesian sample sizes for multi-arm trials where several experimental treatments are compared to a common control, perhaps even at multiple stages.","Published":"2015-11-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bayesMCClust","Version":"1.0","Title":"Mixtures-of-Experts Markov Chain Clustering and Dirichlet\nMultinomial Clustering","Description":"This package provides various Markov Chain Monte Carlo\n (MCMC) sampler for model-based clustering of discrete-valued\n time series obtained by observing a categorical variable with\n several states (in a Bayesian approach). In order to analyze\n group membership, we provide also an extension to the\n approaches by formulating a probabilistic model for the latent\n group indicators within the Bayesian classification rule using\n a multinomial logit model.","Published":"2012-01-31","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"BayesMed","Version":"1.0.1","Title":"Default Bayesian Hypothesis Tests for Correlation, Partial\nCorrelation, and Mediation","Description":"Default Bayesian hypothesis tests for correlation, partial correlation, and mediation","Published":"2015-02-05","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"bayesmeta","Version":"1.4","Title":"Bayesian Random-Effects Meta-Analysis","Description":"A collection of functions allowing to derive the posterior distribution of the two parameters in a random-effects meta-analysis, and providing functionality to evaluate joint and marginal posterior probability distributions, predictive distributions, shrinkage effects, etc.","Published":"2017-02-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bayesmix","Version":"0.7-4","Title":"Bayesian Mixture Models with JAGS","Description":"The fitting of finite mixture models of univariate\n\t Gaussian distributions using JAGS within a Bayesian\n\t framework is provided.","Published":"2015-07-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BayesMixSurv","Version":"0.9.1","Title":"Bayesian Mixture Survival Models using Additive\nMixture-of-Weibull Hazards, with Lasso Shrinkage and\nStratification","Description":"Bayesian Mixture Survival Models using Additive Mixture-of-Weibull Hazards, with Lasso Shrinkage and\n Stratification. As a Bayesian dynamic survival model, it relaxes the proportional-hazard assumption. Lasso shrinkage controls\n overfitting, given the increase in the number of free parameters in the model due to presence of two Weibull components\n in the hazard function.","Published":"2016-09-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BayesNetBP","Version":"1.2.1","Title":"Bayesian Network Belief Propagation","Description":"Belief propagation methods in Bayesian Networks to propagate evidence through the network. The implementation of these methods are based on the article: Cowell, RG (2005). Local Propagation in Conditional Gaussian Bayesian Networks .","Published":"2017-06-06","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"BayesNI","Version":"0.1","Title":"BayesNI: Bayesian Testing Procedure for Noninferiority with\nBinary Endpoints","Description":"A Bayesian testing procedure for noninferiority trials\n with binary endpoints. The prior is constructed based on\n Bernstein polynomials with options for both informative and\n non-informative prior. The critical value of the test statistic\n (Bayes factor) is determined by minimizing total weighted error\n (TWE) criteria","Published":"2012-09-11","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"BayesPieceHazSelect","Version":"1.1.0","Title":"Variable Selection in a Hierarchical Bayesian Model for a Hazard\nFunction","Description":"Fits a piecewise exponential hazard to survival data using a\n Hierarchical Bayesian model with an Intrinsic Conditional Autoregressive\n formulation for the spatial dependency in the hazard rates for each piece.\n This function uses Metropolis- Hastings-Green MCMC to allow the number of split\n points to vary and also uses Stochastic Search Variable Selection to determine\n what covariates drive the risk of the event. This function outputs trace plots\n depicting the number of split points in the hazard and the number of variables\n included in the hazard. The function saves all posterior quantities to the\n desired path.","Published":"2017-01-26","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"BayesPiecewiseICAR","Version":"0.2.1","Title":"Hierarchical Bayesian Model for a Hazard Function","Description":"Fits a piecewise exponential hazard to survival data using a\n Hierarchical Bayesian model with an Intrinsic Conditional Autoregressive\n formulation for the spatial dependency in the hazard rates for each piece.\n This function uses Metropolis- Hastings-Green MCMC to allow the number of split\n points to vary. This function outputs graphics that display the histogram of\n the number of split points and the trace plots of the hierarchical parameters.\n The function outputs a list that contains the posterior samples for the number\n of split points, the location of the split points, and the log hazard rates\n corresponding to these splits. Additionally, this outputs the posterior samples\n of the two hierarchical parameters, Mu and Sigma^2.","Published":"2017-01-04","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"bayesplot","Version":"1.2.0","Title":"Plotting for Bayesian Models","Description":"Plotting functions for posterior analysis, model checking,\n and MCMC diagnostics. The package is designed not only to provide convenient\n functionality for users, but also a common set of functions that can be\n easily used by developers working on a variety of R packages for Bayesian\n modeling, particularly (but not exclusively) packages interfacing with Stan.","Published":"2017-04-12","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"bayesPop","Version":"6.0-4","Title":"Probabilistic Population Projection","Description":"Generating population projections for all countries of the world using several probabilistic components, such as total fertility rate and life expectancy.","Published":"2017-02-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bayespref","Version":"1.0","Title":"Hierarchical Bayesian analysis of ecological count data","Description":"This program implements a hierarchical Bayesian analysis\n of count data, such as preference experiments. It provides\n population-level and individual-level preference parameter\n estimates obtained via MCMC. It also allows for model\n comparison using Deviance Information Criterion.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bayesQR","Version":"2.3","Title":"Bayesian Quantile Regression","Description":"Bayesian quantile regression using the asymmetric Laplace distribution, both continuous as well as binary dependent variables are supported. The package consists of implementations of the methods of Yu & Moyeed (2001) , Benoit & Van den Poel (2012) and Al-Hamzawi, Yu & Benoit (2012) . To speed up the calculations, the Markov Chain Monte Carlo core of all algorithms is programmed in Fortran and called from R.","Published":"2017-01-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bayesreg","Version":"1.0","Title":"Bayesian Regression Models with Continuous Shrinkage Priors","Description":"Fits linear or logistic regression model using Bayesian continuous\n shrinkage prior distributions. Handles ridge, lasso, horseshoe and horseshoe+\n regression with logistic, Gaussian, Laplace or Student-t distributed targets.","Published":"2016-11-24","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"bayess","Version":"1.4","Title":"Bayesian Essentials with R","Description":"bayess contains a collection of functions that allows the\n reenactment of the R programs used in the book \"Bayesian\n Essentials with R\" (revision of \"Bayesian Core\") without\n further programming. R code being available as well, they can\n be modified by the user to conduct one's own simulations.","Published":"2013-02-09","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"BayesS5","Version":"1.30","Title":"Bayesian Variable Selection Using Simplified Shotgun Stochastic\nSearch with Screening (S5)","Description":"In p >> n settings, full posterior sampling using existing Markov chain Monte\n Carlo (MCMC) algorithms is highly inefficient and often not feasible from a practical\n perspective. To overcome this problem, we propose a scalable stochastic search algorithm that is called the Simplified Shotgun Stochastic Search (S5) and aimed at rapidly explore interesting regions of model space and finding the maximum a posteriori(MAP) model. Also, the S5 provides an approximation of posterior probability of each model (including the marginal inclusion probabilities). This algorithm is a part of an article titled Scalable Bayesian Variable Selection Using Nonlocal Prior Densities in Ultrahigh-dimensional Settings (2017+), by Minsuk Shin, Anirban Bhattachary, and Valen E. Johnson, accepted in Statistica Sinica. ","Published":"2017-02-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BayesSAE","Version":"1.0-1","Title":"Bayesian Analysis of Small Area Estimation","Description":"This package provides a variety of functions to deal with several specific small area area-\n level models in Bayesian context. Models provided range from the basic Fay-Herriot model to \n its improvement such as You-Chapman models, unmatched models, spatial models and so on. \n Different types of priors for specific parameters could be chosen to obtain MCMC posterior \n draws. The main sampling function is written in C with GSL lab so as to facilitate the \n computation. Model internal checking and model comparison criteria are also involved.","Published":"2013-10-28","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"BayesSingleSub","Version":"0.6.2","Title":"Computation of Bayes factors for interrupted time-series designs","Description":"The BayesSingleSub package is a suite of functions for computing various Bayes factors for interrupted time-series, based on the models described in de Vries and Morey (2013).","Published":"2014-01-22","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"BayesSpec","Version":"0.5.3","Title":"Bayesian Spectral Analysis Techniques","Description":"An implementation of methods for spectral analysis using the Bayesian framework. It includes functions for modelling spectrum as well as appropriate plotting and output estimates. There is segmentation capability with RJ MCMC (Reversible Jump Markov Chain Monte Carlo). The package takes these methods predominantly from the 2012 paper \"AdaptSPEC: Adaptive Spectral Estimation for Nonstationary Time Series\" .","Published":"2017-02-22","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"BayesSummaryStatLM","Version":"1.0-1","Title":"MCMC Sampling of Bayesian Linear Models via Summary Statistics","Description":"Methods for generating Markov Chain Monte Carlo (MCMC) posterior samples of Bayesian linear regression model parameters that require only summary statistics of data as input. Summary statistics are useful for systems with very limited amounts of physical memory. The package provides two functions: one function that computes summary statistics of data and one function that carries out the MCMC posterior sampling for Bayesian linear regression models where summary statistics are used as input. The function read.regress.data.ff utilizes the R package 'ff' to handle data sets that are too large to fit into a user's physical memory, by reading in data in chunks.","Published":"2015-03-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bayesSurv","Version":"3.0","Title":"Bayesian Survival Regression with Flexible Error and Random\nEffects Distributions","Description":"Contains Bayesian implementations of Mixed-Effects Accelerated Failure Time (MEAFT) models\n for censored data. Those can be not only right-censored but also interval-censored,\n\t doubly-interval-censored or misclassified interval-censored.","Published":"2017-05-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bayesTFR","Version":"6.0-0","Title":"Bayesian Fertility Projection","Description":"Making probabilistic projections of total fertility rate for all countries of the world, using a Bayesian hierarchical model.","Published":"2017-04-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"Bayesthresh","Version":"2.0.1","Title":"Bayesian thresholds mixed-effects models for categorical data","Description":"This package fits a linear mixed model for ordinal\n categorical responses using Bayesian inference via Monte Carlo\n Markov Chains. Default is Nandran & Chen algorithm using\n Gaussian link function and saving just the summaries of the\n chains. Among the options, package allow for two other options\n of algorithms, for using Student's \"t\" link function and for\n saving the full chains.","Published":"2013-03-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BayesTree","Version":"0.3-1.4","Title":"Bayesian Additive Regression Trees","Description":"This is an implementation of BART:Bayesian Additive Regression Trees,\n by Chipman, George, McCulloch (2010).","Published":"2016-07-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BayesTreePrior","Version":"1.0.1","Title":"Bayesian Tree Prior Simulation","Description":"Provides a way to simulate from the prior distribution of Bayesian trees by Chipman et al. (1998) . The prior distribution of Bayesian trees is highly dependent on the design matrix X, therefore using the suggested hyperparameters by Chipman et al. (1998) is not recommended and could lead to unexpected prior distribution. This work is part of my master thesis (expected 2016).","Published":"2016-07-04","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"BayesValidate","Version":"0.0","Title":"BayesValidate Package","Description":"BayesValidate implements the software validation method\n described in the paper \"Validation of Software for Bayesian\n Models using Posterior Quantiles\" (Cook, Gelman, and Rubin,\n 2005). It inputs a function to perform Bayesian inference as\n well as functions to generate data from the Bayesian model\n being fit, and repeatedly generates and analyzes data to check\n that the Bayesian inference program works properly.","Published":"2006-03-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BayesVarSel","Version":"1.7.0","Title":"Bayes Factors, Model Choice and Variable Selection in Linear\nModels","Description":"Conceived to calculate Bayes factors in linear models and then to provide a formal Bayesian answer to testing and variable selection problems. From a theoretical side, the emphasis in this package is placed on the prior distributions and it allows a wide range of them: Jeffreys (1961); Zellner and Siow(1980); Zellner and Siow(1984); Zellner (1986); Fernandez et al. (2001); Liang et al. (2008) and Bayarri et al. (2012). The interaction with the package is through a friendly interface that syntactically mimics the well-known lm() command of R. The resulting objects can be easily explored providing the user very valuable information (like marginal, joint and conditional inclusion probabilities of potential variables; the highest posterior probability model, HPM; the median probability model, MPM) about the structure of the true -data generating- model. Additionally, this package incorporates abilities to handle problems with a large number of potential explanatory variables through parallel and heuristic versions of the main commands, Garcia-Donato and Martinez-Beneito (2013). ","Published":"2016-11-18","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"BayesX","Version":"0.2-9","Title":"R Utilities Accompanying the Software Package BayesX","Description":"This package provides functionality for exploring and visualising estimation results\n\t obtained with the software package BayesX for structured additive regression. It also provides\n\t functions that allow to read, write and manipulate map objects that are required in spatial analyses\n\t performed with BayesX, a free software for estimating structured additive regression models \n (http://www.bayesx.org).","Published":"2014-08-18","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"BayesXsrc","Version":"2.1-2","Title":"R Package Distribution of the BayesX C++ Sources","Description":"BayesX performs Bayesian inference in structured additive regression (STAR) models.\n\tThe R package BayesXsrc provides the BayesX command line tool for easy installation.\n\tA convenient R interface is provided in package R2BayesX.","Published":"2013-11-22","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"}
{"Package":"BayHap","Version":"1.0.1","Title":"Bayesian analysis of haplotype association using Markov Chain\nMonte Carlo","Description":"The package BayHap performs simultaneous estimation of\n uncertain haplotype frequencies and association with haplotypes\n based on generalized linear models for quantitative, binary and\n survival traits. Bayesian statistics and Markov Chain Monte\n Carlo techniques are the theoretical framework for the methods\n of estimation included in this package. Prior values for model\n parameters can be included by the user. Convergence diagnostics\n and statistical and graphical analysis of the sampling output\n can be also carried out.","Published":"2013-03-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BayHaz","Version":"0.1-3","Title":"R Functions for Bayesian Hazard Rate Estimation","Description":"A suite of R functions for Bayesian estimation of smooth\n hazard rates via Compound Poisson Process (CPP) and Bayesian\n Penalized Spline (BPS) priors.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BaylorEdPsych","Version":"0.5","Title":"R Package for Baylor University Educational Psychology\nQuantitative Courses","Description":"Functions and data used for Baylor University Educational\n Psychology Quantitative Courses","Published":"2012-07-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bayou","Version":"1.1.0","Title":"Bayesian Fitting of Ornstein-Uhlenbeck Models to Phylogenies","Description":"Tools for fitting and simulating multi-optima Ornstein-Uhlenbeck\n models to phylogenetic comparative data using Bayesian reversible-jump\n methods.","Published":"2015-10-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BaySIC","Version":"1.0","Title":"Bayesian Analysis of Significantly Mutated Genes in Cancer","Description":"This R package is the software implementation of the\n algorithm BaySIC, a Bayesian approach toward analysis of\n significantly mutated genes in cancer data.","Published":"2013-04-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BAYSTAR","Version":"0.2-9","Title":"On Bayesian analysis of Threshold autoregressive model (BAYSTAR)","Description":"The manuscript introduces the BAYSTAR package, which\n provides the functionality for Bayesian estimation in\n autoregressive threshold models.","Published":"2013-09-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bazar","Version":"0.1.4","Title":"Miscellaneous Basic Functions","Description":"A collection of miscellaneous functions for \n copying objects to the clipboard ('Copy');\n manipulating strings ('concat', 'mgsub', 'trim', 'verlan'); \n loading or showing packages ('library_with_rep', 'require_with_rep', \n 'sessionPackages'); \n creating or testing for named lists ('nlist', 'as.nlist', 'is.nlist'), \n formulas ('is.formula'), empty objects ('as.empty', 'is.empty'), \n whole numbers ('as.wholenumber', 'is.wholenumber'); \n testing for equality ('almost.equal', 'almost.zero'); \n getting modified versions of usual functions ('rle2', 'sumNA'); \n making a pause or a stop ('pause', 'stopif'); \n and others ('erase', '%nin%', 'unwhich'). ","Published":"2017-01-14","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"BB","Version":"2014.10-1","Title":"Solving and Optimizing Large-Scale Nonlinear Systems","Description":"Barzilai-Borwein spectral methods for solving nonlinear\n system of equations, and for optimizing nonlinear objective\n functions subject to simple constraints. A tutorial style\n introduction to this package is available in a vignette on the\n CRAN download page or, when the package is loaded in an R\n session, with vignette(\"BB\").","Published":"2014-11-07","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"bbefkr","Version":"4.2","Title":"Bayesian bandwidth estimation and semi-metric selection for the\nfunctional kernel regression with unknown error density","Description":"Estimating optimal bandwidths for the regression mean function approximated by the functional Nadaraya-Watson estimator and the error density approximated by a kernel density of residuals simultaneously in a scalar-on-function regression. As a by-product of Markov chain Monte Carlo, the optimal choice of semi-metric is selected based on largest marginal likelihood.","Published":"2014-04-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bbemkr","Version":"2.0","Title":"Bayesian bandwidth estimation for multivariate kernel regression\nwith Gaussian error","Description":"Bayesian bandwidth estimation for Nadaraya-Watson type\n multivariate kernel regression with Gaussian error density","Published":"2014-04-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BBEST","Version":"0.1-6","Title":"Bayesian Estimation of Incoherent Neutron Scattering Backgrounds","Description":"We implemented a Bayesian-statistics approach for \n subtraction of incoherent scattering from neutron total-scattering data. \n In this approach, the estimated background signal associated with \n incoherent scattering maximizes the posterior probability, which combines \n the likelihood of this signal in reciprocal and real spaces with the prior \n that favors smooth lines.","Published":"2016-03-09","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"BBmisc","Version":"1.11","Title":"Miscellaneous Helper Functions for B. Bischl","Description":"Miscellaneous helper functions for and from B. Bischl and\n some other guys, mainly for package development.","Published":"2017-03-10","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"bbmle","Version":"1.0.19","Title":"Tools for General Maximum Likelihood Estimation","Description":"Methods and functions for fitting maximum likelihood models in R.\n This package modifies and extends the 'mle' classes in the 'stats4' package.","Published":"2017-04-18","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"BBMM","Version":"3.0","Title":"Brownian bridge movement model","Description":"The model provides an empirical estimate of a movement\n path using discrete location data obtained at relatively short\n time intervals.","Published":"2013-03-08","License":"GNU General Public License","snapshot_date":"2017-06-23"}
{"Package":"BBMV","Version":"1.0","Title":"Models for Continuous Traits Evolving in Macroevolutionary\nLandscapes of any Shape","Description":"Provides a set of functions to fit general macroevolutionary models for continuous traits evolving in adaptive landscapes of any shape. The model is based on bounded Brownian motion (BBM), in which a continuous trait evolves along a phylogenetic tree under constant-rate diffusion between two reflective bounds. In addition to this random component, the trait evolves in a potential and is thus subject to a force that pulls it towards specific values - this force can be of any shape.","Published":"2017-05-02","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"bbo","Version":"0.2","Title":"Biogeography-Based Optimization","Description":"This package provides an R implementation of\n Biogeography-Based Optimization (BBO), originally invented by\n Prof. Dan Simon, Cleveland State University, Ohio. This method\n is an application of the concept of biogeography, a study of\n the geographical distribution of biological organisms, to\n optimization problems. More information about this method can\n be found here: http://academic.csuohio.edu/simond/bbo/.","Published":"2014-09-18","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"BBRecapture","Version":"0.1","Title":"Bayesian Behavioural Capture-Recapture Models","Description":"Model fitting of flexible behavioural recapture models based on conditional probability reparameterization and meaningful partial capture history quantification also referred to as meaningful behavioural covariate","Published":"2013-12-19","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"bc3net","Version":"1.0.4","Title":"Gene Regulatory Network Inference with Bc3net","Description":"Implementation of the BC3NET algorithm for gene regulatory network inference (de Matos Simoes and Frank Emmert-Streib, Bagging Statistical Network Inference from Large-Scale Gene Expression Data, PLoS ONE 7(3): e33624, ).","Published":"2016-11-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BCA","Version":"0.9-3","Title":"Business and Customer Analytics","Description":"Underlying support functions for RcmdrPlugin.BCA and a\n companion to the book Customer and Business Analytics: Applied\n Data Mining for Business Decision Making Using R by Daniel S.\n Putler and Robert E. Krider","Published":"2014-09-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BCBCSF","Version":"1.0-1","Title":"Bias-Corrected Bayesian Classification with Selected Features","Description":"Fully Bayesian Classification with a subset of high-dimensional features, such as expression levels of genes. The data are modeled with a hierarchical Bayesian models using heavy-tailed t distributions as priors. When a large number of features are available, one may like to select only a subset of features to use, typically those features strongly correlated with the response in training cases. Such a feature selection procedure is however invalid since the relationship between the response and the features has be exaggerated by feature selection. This package provides a way to avoid this bias and yield better-calibrated predictions for future cases when one uses F-statistic to select features. ","Published":"2015-09-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BCC1997","Version":"0.1.1","Title":"Calculation of Option Prices Based on a Universal Solution","Description":"Calculates the prices of European options based on the universal solution provided by Bakshi, Cao and Chen (1997) . This solution considers stochastic volatility, stochastic interest and random jumps. Please cite their work if this package is used. ","Published":"2017-02-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BCDating","Version":"0.9.7","Title":"Business Cycle Dating and Plotting Tools","Description":"Tools for Dating Business Cycles using Harding-Pagan (Quarterly Bry-Boschan) method and various plotting features.","Published":"2014-12-30","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"BcDiag","Version":"1.0.10","Title":"Diagnostics Plots for Bicluster Data","Description":"Diagnostic tools based on two-way\n anova and median-polish residual plots for Bicluster output\n obtained from packages; \"biclust\" by Kaiser et al.(2008),\"isa2\"\n by Csardi et al. (2010) and \"fabia\" by Hochreiter et al.\n (2010). Moreover, It provides visualization tools for bicluster\n output and corresponding non-bicluster rows- or columns\n outcomes. It has also extended the idea of Kaiser et al.(2008)\n which is, extracting bicluster output in a text format, by\n adding two bicluster methods from the fabia and isa2 R\n packages.","Published":"2015-10-17","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"BCE","Version":"2.1","Title":"Bayesian composition estimator: estimating sample (taxonomic)\ncomposition from biomarker data","Description":"Function to estimates taxonomic compositions from biomarker data, using a Bayesian approach.","Published":"2014-05-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BCEA","Version":"2.2-5","Title":"Bayesian Cost Effectiveness Analysis","Description":"Produces an economic evaluation of a Bayesian model in the form of MCMC simulations. Given suitable variables of cost and effectiveness / utility for two or more interventions, This package computes the most cost-effective alternative and produces graphical summaries and probabilistic sensitivity analysis.","Published":"2016-11-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BCEE","Version":"1.1","Title":"The Bayesian Causal Effect Estimation Algorithm","Description":"Implementation of the Bayesian Causal Effect Estimation algorithm, \n a data-driven method for the estimation of the causal effect of an exposure \n on a continuous outcome. For more details, see Talbot et al. (2015) DOI:10.1515/jci-2014-0035. ","Published":"2015-11-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BCellMA","Version":"0.3.4","Title":"B Cell Receptor Somatic Hyper Mutation Analysis","Description":"Includes a set of functions to analyze for instance nucleotide frequencies as well as transition and transversion. Can reconstruct germline sequences based on the international ImMunoGeneTics information system (IMGT/HighV-QUEST) outputs, calculate and plot the difference (%) of nucleotides at 6 positions around a mutation to identify and characterize hotspot motifs as well as calculate and plot average mutation frequencies of nucleotide mutations resulting in amino acid substitution.","Published":"2017-03-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BCEs0","Version":"1.1-1","Title":"Bayesian Models for Cost-Effectiveness Analysis in the Presence\nof Structural Zero Costs","Description":"Implements a full Bayesian cost-effectiveness analysis in the case where the cost variable is characterised by structural zeros. The package implements the Gamma, log-Normal and Normal models for the cost variable and the Gamma, Beta, Bernoulli and Normal models for the measure of clinical effectiveness. ","Published":"2015-08-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BCgee","Version":"0.1","Title":"Bias-Corrected Estimates for Generalized Linear Models for\nDependent Data","Description":"Provides bias-corrected estimates for the regression coefficients of a marginal model estimated with generalized estimating equations. Details about the bias formula used are in Lunardon, N., Scharfstein, D. (2017) .","Published":"2017-06-23","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"Bchron","Version":"4.2.6","Title":"Radiocarbon Dating, Age-Depth Modelling, Relative Sea Level Rate\nEstimation, and Non-Parametric Phase Modelling","Description":"Enables quick calibration of radiocarbon dates under various\n calibration curves (including user generated ones); Age-depth modelling as\n per the algorithm of Haslett and Parnell (2008) ; Relative sea level rate\n estimation incorporating time uncertainty in polynomial regression models; and\n non-parametric phase modelling via Gaussian mixtures as a means to determine\n the activity of a site (and as an alternative to the Oxcal function SUM). The\n package includes a vignette which explains most of the basic functionality.","Published":"2016-12-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"Bclim","Version":"3.1.2","Title":"Bayesian Palaeoclimate Reconstruction from Pollen Data","Description":"Takes pollen and chronology data from lake cores and produces\n a Bayesian posterior distribution of palaeoclimate from that location after\n fitting a non-linear non-Gaussian state-space model. For more details see the\n paper Parnell et al. (2015), Bayesian inference for palaeoclimate with\n time uncertainty and stochastic volatility. Journal of the Royal Statistical\n Society: Series C (Applied Statistics), 64: 115–138 .","Published":"2016-12-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bclust","Version":"1.5","Title":"Bayesian Hierarchical Clustering Using Spike and Slab Models","Description":"Builds a dendrogram using log posterior as a natural distance defined by the model and meanwhile waits the clustering variables. It is also capable to computing equivalent Bayesian discrimination probabilities. The adopted method suites small sample large dimension setting. The model parameter estimation maybe difficult, depending on data structure and the chosen distribution family.","Published":"2015-09-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bcp","Version":"4.0.0","Title":"Bayesian Analysis of Change Point Problems","Description":"Provides an implementation of the Barry and Hartigan (1993) product partition model for the normal errors change point problem using Markov Chain Monte Carlo. It also extends the methodology to regression models on a connected graph (Wang and Emerson, 2015); this allows estimation of change point models with multivariate responses. Parallel MCMC, previously available in bcp v.3.0.0, is currently not implemented.","Published":"2015-07-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bcpa","Version":"1.1","Title":"Behavioral change point analysis of animal movement","Description":"The Behavioral Change Point Analysis (BCPA) is a method of\n identifying hidden shifts in the underlying parameters of a time series,\n developed specifically to be applied to animal movement data which is\n irregularly sampled. The method is based on: E.\n Gurarie, R. Andrews and K. Laidre A novel method for identifying\n behavioural changes in animal movement data (2009) Ecology Letters 12:5\n 395-408.","Published":"2014-11-02","License":"Unlimited","snapshot_date":"2017-06-23"}
{"Package":"bcpmeta","Version":"1.0","Title":"Bayesian Multiple Changepoint Detection Using Metadata","Description":"A Bayesian approach to detect mean shifts in AR(1) time series while accommodating metadata (if available). In addition, a linear trend component is allowed. ","Published":"2014-06-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BCRA","Version":"1.0","Title":"Breast Cancer Risk Assessment","Description":"Functions provide risk projections of invasive breast cancer based on Gail model according to National Cancer Institute's Breast Cancer Risk Assessment Tool algorithm for specified race/ethnic groups and age intervals.","Published":"2015-04-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bcRep","Version":"1.3.6","Title":"Advanced Analysis of B Cell Receptor Repertoire Data","Description":"Methods for advanced analysis of B cell receptor repertoire\n data, like gene usage, mutations, clones, diversity, distance measures and\n multidimensional scaling and their visualisation.","Published":"2016-12-19","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"bcrm","Version":"0.4.6","Title":"Bayesian Continual Reassessment Method for Phase I\nDose-Escalation Trials","Description":"Implements a wide variety of one and two-parameter Bayesian CRM\n designs. The program can run interactively, allowing the user to enter outcomes\n after each cohort has been recruited, or via simulation to assess operating\n characteristics.","Published":"2015-11-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bcROCsurface","Version":"1.0-1","Title":"Bias-Corrected Methods for Estimating the ROC Surface of\nContinuous Diagnostic Tests","Description":"The bias-corrected estimation methods for the receiver operating characteristics\n ROC surface and the volume under ROC surfaces (VUS) under missing at random (MAR)\n assumption.","Published":"2016-11-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bcrypt","Version":"0.2","Title":"'Blowfish' Password Hashing Algorithm","Description":"An R interface to the OpenBSD 'blowfish' password hashing algorithm,\n as described in \"A Future-Adaptable Password Scheme\" by Niels Provos. The\n implementation is derived from the 'py-bcrypt' module for Python which is a\n wrapper for the OpenBSD implementation.","Published":"2015-06-18","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"bcs","Version":"1.0.0","Title":"Bayesian Compressive Sensing Using Laplace Priors","Description":"A Bayesian method for solving the compressive sensing problem. \n In particular, this package implements the algorithm 'Fast Laplace' found \n in the paper 'Bayesian Compressive Sensing Using Laplace Priors' by Babacan, \n Molina, Katsaggelos (2010) .","Published":"2017-04-04","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"BCSub","Version":"0.5","Title":"A Bayesian Semiparametric Factor Analysis Model for Subtype\nIdentification (Clustering)","Description":"Gene expression profiles are commonly utilized to infer disease\n subtypes and many clustering methods can be adopted for this task.\n However, existing clustering methods may not perform well when\n genes are highly correlated and many uninformative genes are included\n for clustering. To deal with these challenges, we develop a novel\n clustering method in the Bayesian setting. This method, called BCSub,\n adopts an innovative semiparametric Bayesian factor analysis model\n to reduce the dimension of the data to a few factor scores for\n clustering. Specifically, the factor scores are assumed to follow\n the Dirichlet process mixture model in order to induce clustering.","Published":"2017-03-16","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"bcv","Version":"1.0.1","Title":"Cross-Validation for the SVD (Bi-Cross-Validation)","Description":"\n Methods for choosing the rank of an SVD approximation via cross\n validation. The package provides both Gabriel-style \"block\"\n holdouts and Wold-style \"speckled\" holdouts. It also includes an \n implementation of the SVDImpute algorithm. For more information about\n Bi-cross-validation, see Owen & Perry's 2009 AoAS article\n (at http://arxiv.org/abs/0908.2062) and Perry's 2009 PhD thesis\n (at http://arxiv.org/abs/0909.3052).","Published":"2015-05-22","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"bda","Version":"5.1.6","Title":"Density Estimation for Grouped Data","Description":"Functions for density estimation based on grouped (or pre-binned) \n data. ","Published":"2015-07-29","License":"Unlimited","snapshot_date":"2017-06-23"}
{"Package":"bde","Version":"1.0.1","Title":"Bounded Density Estimation","Description":"A collection of S4 classes which implements different methods to estimate and deal with densities in bounded domains. That is, densities defined within the interval [lower.limit, upper.limit], where lower.limit and upper.limit are values that can be set by the user.","Published":"2015-02-16","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"BDgraph","Version":"2.39","Title":"Bayesian Structure Learning in Graphical Models using\nBirth-Death MCMC","Description":"Provides statistical tools for Bayesian structure learning in undirected graphical models for continuous, discrete, and mixed data. The package is implemented the recent improvements in the Bayesian graphical models literature, including Mohammadi and Wit (2015) and Mohammadi et al. (2017) . To speed up the computations, the BDMCMC sampling algorithms are implemented in parallel using OpenMP in C++.","Published":"2017-06-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bdlp","Version":"0.9-1","Title":"Transparent and Reproducible Artificial Data Generation","Description":"The main function generateDataset() processes a user-supplied .R file that \n contains metadata parameters in order to generate actual data. The metadata parameters \n have to be structured in the form of metadata objects, the format of which is \n outlined in the package vignette. This approach allows to generate artificial data \n in a transparent and reproducible manner.","Published":"2017-06-06","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"bdots","Version":"0.1.13","Title":"Bootstrapped Differences of Time Series","Description":"Analyze differences among time series curves with p-value adjustment for multiple comparisons introduced in Oleson et al (2015) .","Published":"2017-06-15","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"bdpopt","Version":"1.0-1","Title":"Optimisation of Bayesian Decision Problems","Description":"Optimisation of the expected utility in single-stage and multi-stage Bayesian decision problems. The expected utility is estimated by simulation. For single-stage problems, JAGS is used to draw MCMC samples.","Published":"2016-03-30","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"bdpv","Version":"1.1","Title":"Inference and design for predictive values in binary diagnostic\ntests","Description":"Computation of asymptotic confidence intervals for negative and positive predictive values in binary diagnostic tests in case-control studies. Experimental design for hypothesis tests on predictive values.","Published":"2014-03-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bdrift","Version":"1.2.2","Title":"Beta Drift Analysis","Description":"Beta drift poses a serious challenge to asset managers \n and financial researchers. Beta drift causes problems in asset \n pricing models and can have serious ramifications for hedging \n attempts. Providing users with a tool that allows them to \n quantify beta drift and form educated opinions about it is \n the primary purpose of this package.\n This package contains the BDA() function that performs a beta \n drift analysis, typically for multi-factor asset pricing models. \n The BDA() function tests the underlying model parameters for \n drift across time, drift across model horizon, and applies a \n jackknife procedure to the baseline model. This allows the users \n to draw conclusions about the stability of model parameters or \n make inferences about the behavior of funds. For example, the \n drift of parameters for active funds could be interpreted as \n implicit style drift or, in the case of passive funds, management's \n inability to track a benchmark completely.","Published":"2016-04-06","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"bdscale","Version":"2.0.0","Title":"Remove Weekends and Holidays from ggplot2 Axes","Description":"Provides a continuous date scale, omitting weekends and holidays.","Published":"2016-03-17","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"bdsmatrix","Version":"1.3-2","Title":"Routines for Block Diagonal Symmetric matrices","Description":"This is a special case of sparse matrices, used by coxme ","Published":"2014-08-22","License":"LGPL-2","snapshot_date":"2017-06-23"}
{"Package":"bdvis","Version":"0.2.15","Title":"Biodiversity Data Visualizations","Description":"Provides a set of functions to create basic visualizations to quickly\n preview different aspects of biodiversity information such as inventory \n completeness, extent of coverage (taxonomic, temporal and geographic), gaps\n and biases.","Published":"2017-03-09","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"BDWreg","Version":"1.2.0","Title":"Bayesian Inference for Discrete Weibull Regression","Description":"A Bayesian regression model for discrete response, where the conditional distribution is modelled via a discrete Weibull distribution. This package provides an implementation of Metropolis-Hastings and Reversible-Jumps algorithms to draw samples from the posterior. It covers a wide range of regularizations through any two parameter prior. Examples are Laplace (Lasso), Gaussian (ridge), Uniform, Cauchy and customized priors like a mixture of priors. An extensive visual toolbox is included to check the validity of the results as well as several measures of goodness-of-fit.","Published":"2017-02-17","License":"LGPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bdynsys","Version":"1.3","Title":"Bayesian Dynamical System Model","Description":"The package bdynsys for panel/longitudinal data combines methods to model \n changes in up to four indicators over times as a function of the indicators\n themselves and up to three predictors using ordinary differential equations \n (ODEs) with polynomial terms that allow to model complex and nonlinear \n effects. A Bayesian model selection approach is implemented. The package \n provides also visualisation tools to plot phase portraits of the dynamic \n system, showing the complex co-evolution of two indicators over time with the\n possibility to highlight trajectories for specified entities (e.g. countries, \n individuals). Furthermore the visualisation tools allow for making \n predictions of the trajectories of specified entities with respect to the \n indicators. ","Published":"2014-12-08","License":"GNU General Public License (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bea.R","Version":"1.0.1","Title":"Bureau of Economic Analysis API","Description":"Provides an R interface for the Bureau of Economic Analysis (BEA) \n\t\tAPI (see for \n\t\tmore information) that serves two core purposes - \n 1. To Extract/Transform/Load data [beaGet()] from the BEA API as R-friendly \n\t\tformats in the user's work space [transformation done by default in beaGet() \n\t\tcan be modified using optional parameters; see, too, bea2List(), bea2Tab()].\n\t\t2. To enable the search of descriptive meta data [beaSearch()].\n\t\tOther features of the library exist mainly as intermediate methods \n\t\tor are in early stages of development.\n\t\tImportant Note - You must have an API key to use this library. \n\t\tRegister for a key at .","Published":"2017-01-26","License":"CC0","snapshot_date":"2017-06-23"}
{"Package":"BEACH","Version":"1.1.2","Title":"Biometric Exploratory Analysis Creation House","Description":"A platform is provided for interactive analyses with a goal of totally easy to develop, deploy, interact, and explore (TEDDIE). Using this package, users can create customized analyses and make them available to end users who can perform interactive analyses and save analyses to RTF or HTML files. It allows developers to focus on R code for analysis, instead of dealing with html or shiny code.","Published":"2016-10-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"beadarrayFilter","Version":"1.1.0","Title":"Bead filtering for Illumina bead arrays","Description":"This package contains functions to fit the filtering model\n of Forcheh et al., (2012) which is used to derive the\n intra-cluster correlation (ICC). Model fitting is done using\n the modified version of the ``MLM.beadarray\" function of Kim\n and Lin (2011).","Published":"2013-02-07","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"beadarrayMSV","Version":"1.1.0","Title":"Analysis of Illumina BeadArray SNP data including MSV markers","Description":"Imports bead-summary data from Illumina scanner.\n Pre-processes using a suite of optional normalizations and\n transformations. Clusters and automatically calls genotypes,\n critically able to handle markers in duplicated regions of the\n genome (multisite variants; MSVs). Interactive clustering if\n needed. MSVs with variation in both paralogs may be resolved\n and mapped to their respective chromosomes. Quality control\n including pedigree checking and visual assessment of clusters.\n Too large data-sets are handled by working on smaller subsets\n of the data in sequence.","Published":"2011-02-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"beanplot","Version":"1.2","Title":"Visualization via Beanplots (like Boxplot/Stripchart/Violin\nPlot)","Description":"Plots univariate comparison graphs, an alternative to\n boxplot/stripchart/violin plot.","Published":"2014-09-19","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"beanz","Version":"2.1","Title":"Bayesian Analysis of Heterogeneous Treatment Effect","Description":"It is vital to assess the heterogeneity of treatment effects\n (HTE) when making health care decisions for an individual patient or a group\n of patients. Nevertheless, it remains challenging to evaluate HTE based\n on information collected from clinical studies that are often designed and\n conducted to evaluate the efficacy of a treatment for the overall population.\n The Bayesian framework offers a principled and flexible approach to estimate\n and compare treatment effects across subgroups of patients defined by their\n characteristics. This package allows users to explore a wide range of Bayesian\n HTE analysis models, and produce posterior inferences about HTE.","Published":"2017-05-07","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"BEDASSLE","Version":"1.5","Title":"Quantifies effects of geo/eco distance on genetic\ndifferentiation","Description":"provides functions that allow users to quantify the relative \n\tcontributions of geographic and ecological distances to empirical patterns of genetic \n\tdifferentiation on a landscape. Specifically, we use a custom Markov chain \n\tMonte Carlo (MCMC) algorithm, which is used to estimate the parameters of the \n\tinference model, as well as functions for performing MCMC diagnosis and assessing \n\tmodel adequacy.","Published":"2014-12-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BEDMatrix","Version":"1.4.0","Title":"Extract Genotypes from a PLINK .bed File","Description":"A matrix-like data structure that allows for efficient,\n convenient, and scalable subsetting of binary genotype/phenotype files\n generated by PLINK (), the whole\n genome association analysis toolset, without loading the entire file into\n memory.","Published":"2017-05-07","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"bedr","Version":"1.0.3","Title":"Genomic Region Processing using Tools Such as BEDtools, BEDOPS\nand Tabix","Description":"Genomic regions processing using open-source command line tools such as BEDtools, BEDOPS and Tabix. \n These tools offer scalable and efficient utilities to perform genome arithmetic e.g indexing, formatting and merging.\n bedr API enhances access to these tools as well as offers additional utilities for genomic regions processing.","Published":"2016-08-23","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"beepr","Version":"1.2","Title":"Easily Play Notification Sounds on any Platform","Description":"The sole function of this package is beep(), with the purpose to\n make it easy to play notification sounds on whatever platform you are on.\n It is intended to be useful, for example, if you are running a long analysis\n in the background and want to know when it is ready.","Published":"2015-06-14","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"beeswarm","Version":"0.2.3","Title":"The Bee Swarm Plot, an Alternative to Stripchart","Description":"The bee swarm plot is a one-dimensional scatter plot like \"stripchart\", but with closely-packed, non-overlapping points. ","Published":"2016-04-25","License":"Artistic-2.0","snapshot_date":"2017-06-23"}
{"Package":"beginr","Version":"0.0.1","Title":"Functions for R Beginners","Description":"Useful functions for R beginners, including hints for the arguments of the 'plot()' function, self-defined functions for error bars, user-customized pair plots and hist plots, enhanced linear regression figures, etc.. This package could be helpful to R experts as well.","Published":"2017-06-23","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"belex","Version":"0.1.0","Title":"Download Historical Data from the Belgrade Stock Exchange","Description":"Tools for downloading historical financial data from the www.belex.rs.","Published":"2016-08-04","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"benchden","Version":"1.0.5","Title":"28 benchmark densities from Berlinet/Devroye (1994)","Description":"Full implementation of the 28 distributions introduced as\n benchmarks for nonparametric density estimation by Berlinet and\n Devroye (1994). Includes densities, cdfs, quantile functions\n and generators for samples as well as additional information on\n features of the densities. Also contains the 4 histogram\n densities used in Rozenholc/Mildenberger/Gather (2010).","Published":"2012-02-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"benchmark","Version":"0.3-6","Title":"Benchmark Experiments Toolbox","Description":"The benchmark package provides a toolbox for setup, execution\n and analysis of benchmark experiments. Main focus is the analysis of\n data accumulating during the execution -- one primary objective is the\n statistical correct computation of the candidate algorithms' order.","Published":"2014-05-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"Benchmarking","Version":"0.26","Title":"Benchmark and Frontier Analysis Using DEA and SFA","Description":"Methods for frontier\n\tanalysis, Data Envelopment Analysis (DEA), under different\n\ttechnology assumptions (fdh, vrs, drs, crs, irs, add/frh, and fdh+),\n\tand using different efficiency measures (input based, output based,\n\thyperbolic graph, additive, super, and directional efficiency). Peers\n\tand slacks are available, partial price information can be included,\n\tand optimal cost, revenue and profit can be calculated. Evaluation of\n\tmergers is also supported. Methods for graphing the technology sets\n\tare also included. There is also support comparative methods based\n\ton Stochastic Frontier Analyses (SFA). In general, the methods can be\n\tused to solve not only standard models, but also many other model\n\tvariants. It complements the book, Bogetoft and Otto,\n\tBenchmarking with DEA, SFA, and R, Springer-Verlag, 2011, but can of\n\tcourse also be used as a stand-alone package.","Published":"2015-07-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"benchmarkme","Version":"0.4.0","Title":"Crowd Sourced System Benchmarks","Description":"Benchmark your CPU and compare against other CPUs. Also provides \n functions for obtaining system specifications, such as\n RAM, CPU type, and R version.","Published":"2017-01-05","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"}
{"Package":"benchmarkmeData","Version":"0.4.0","Title":"Data Set for the 'benchmarkme' Package","Description":"Crowd sourced benchmarks from running the 'benchmarkme' package.","Published":"2017-01-04","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"}
{"Package":"benchr","Version":"0.2.0","Title":"High Precise Measurement of R Expressions Execution Time","Description":"Provides infrastructure to accurately measure and compare\n the execution time of R expressions.","Published":"2017-04-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"benford.analysis","Version":"0.1.4.1","Title":"Benford Analysis for Data Validation and Forensic Analytics","Description":"Provides tools that make it easier to validate data using Benford's Law.","Published":"2017-03-22","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"BenfordTests","Version":"1.2.0","Title":"Statistical Tests for Evaluating Conformity to Benford's Law","Description":"Several specialized statistical tests and support functions \n\t\t\tfor determining if numerical data could conform to Benford's law.","Published":"2015-08-04","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"bentcableAR","Version":"0.3.0","Title":"Bent-Cable Regression for Independent Data or Autoregressive\nTime Series","Description":"\n\tIncluded are two main interfaces for fitting and diagnosing\n\tbent-cable regressions for autoregressive time-series data or\n\tindependent data (time series or otherwise): 'bentcable.ar()' and\n\t'bentcable.dev.plot()'. Some components in the package can also be\n\tused as stand-alone functions. The bent cable\n\t(linear-quadratic-linear) generalizes the broken stick\n\t(linear-linear), which is also handled by this package. Version 0.2\n\tcorrects a glitch in the computation of confidence intervals for the\n\tCTP. References that were updated from Versions 0.2.1 and 0.2.2 appear\n\tin Version 0.2.3 and up. Version 0.3.0 improves robustness of the\n\terror-message producing mechanism. It is the author's intention to\n\tdistribute any future updates via GitHub.","Published":"2015-04-10","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"BEQI2","Version":"2.0-0","Title":"Benthic Ecosystem Quality Index 2","Description":"Tool for analysing benthos data. It estimates several quality \n indices like the total abundance of species, species richness, \n Margalef's d, AZTI Marine Biotic Index (AMBI), and the BEQI-2 index. \n Furthermore, additional (optional) features are provided that enhance data \n preprocessing: (1) genus to species conversion, i.e.,taxa counts at the \n taxonomic genus level can optionally be converted to the species level and\n (2) pooling: small samples are combined to bigger samples with a \n standardized size to (a) meet the data requirements of the AMBI, \n (b) generate comparable species richness values and \n (c) give a higher benthos signal to noise ratio.","Published":"2015-01-07","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"ber","Version":"4.0","Title":"Batch Effects Removal","Description":"The functions in this package remove batch effects from\n microarrary normalized data. The expression levels of the genes\n are represented in a matrix where rows correspond to\n independent samples and columns to genes (variables). The\n batches are represented by categorical variables (objects of\n class factor). When further covariates of interest are\n available they can be used to remove efficiently the batch\n effects and adjust the data.","Published":"2013-03-12","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"Bergm","Version":"4.0.0","Title":"Bayesian Exponential Random Graph Models","Description":"Set of tools to analyse Bayesian exponential random graph models.","Published":"2017-03-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"berryFunctions","Version":"1.15.0","Title":"Function Collection Related to Plotting and Hydrology","Description":"Draw horizontal histograms, color scattered points by 3rd dimension,\n enhance date- and log-axis plots, zoom in X11 graphics, trace errors and warnings, \n use the unit hydrograph in a linear storage cascade, convert lists to data.frames and arrays, \n fit multiple functions.","Published":"2017-04-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BeSS","Version":"1.0.1","Title":"Best Subset Selection for Sparse Generalized Linear Model and\nCox Model","Description":"An implementation of best subset selection in generalized linear model and Cox proportional hazard model via the primal dual active set algorithm. The algorithm formulates coefficient parameters and residuals as primal and dual variables and utilizes efficient active set selection strategies based on the complementarity of the primal and dual variables.","Published":"2017-05-26","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"Bessel","Version":"0.5-5","Title":"Bessel -- Bessel Functions Computations and Approximations","Description":"Bessel Function Computations for complex and real numbers;\n notably interfacing TOMS 644; approximations for large arguments,\n experiments, etc.","Published":"2013-12-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BEST","Version":"0.5.0","Title":"Bayesian Estimation Supersedes the t-Test","Description":"An alternative to t-tests, producing posterior estimates\n for group means and standard deviations and their differences and\n effect sizes.","Published":"2017-05-28","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"bestglm","Version":"0.36","Title":"Best Subset GLM","Description":"Best subset glm using information criteria or cross-validation.","Published":"2017-02-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BetaBit","Version":"1.3","Title":"Mini Games from Adventures of Beta and Bit","Description":"Three games: proton, frequon and regression. Each one is a console-based data-crunching game for younger and older data scientists.\n Act as a data-hacker and find Slawomir Pietraszko's credentials to the Proton server.\n In proton you have to solve four data-based puzzles to find the login and password.\n There are many ways to solve these puzzles. You may use loops, data filtering, ordering, aggregation or other tools.\n Only basics knowledge of R is required to play the game, yet the more functions you know, the more approaches you can try.\n In frequon you will help to perform statistical cryptanalytic attack on a corpus of ciphered messages.\n This time seven sub-tasks are pushing the bar much higher. Do you accept the challenge?\n In regression you will test your modeling skills in a series of eight sub-tasks.\n Try only if ANOVA is your close friend.\n It's a part of Beta and Bit project.\n You will find more about the Beta and Bit project at .","Published":"2016-07-30","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"betacal","Version":"0.1.0","Title":"Beta Calibration","Description":"Fit beta calibration models and obtain calibrated probabilities from\n them.","Published":"2017-02-25","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"betafam","Version":"1.0","Title":"Detecting rare variants for quantitative traits using nuclear\nfamilies","Description":"To detecting rare variants for quantitative traits using\n nuclear families, the linear combination methods are proposed\n using the estimated regression coefficients from the multiple\n regression and regularized regression as the weights.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"betalink","Version":"2.2.1","Title":"Beta-Diversity of Species Interactions","Description":"Measures of beta-diversity in networks, and easy visualization of why two networks are different.","Published":"2016-03-26","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"betapart","Version":"1.4-1","Title":"Partitioning Beta Diversity into Turnover and Nestedness\nComponents","Description":"Functions to compute pair-wise dissimilarities (distance matrices) and multiple-site dissimilarities, separating the turnover and nestedness-resultant components of taxonomic (incidence and abundance based), functional and phylogenetic beta diversity.","Published":"2017-01-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"betaper","Version":"1.1-0","Title":"Functions to incorporate taxonomic uncertainty on multivariate\nanalyses of ecological data","Description":"Permutational method to incorporate taxonomic uncertainty\n and some functions to assess its effects on parameters of some\n widely used multivariate methods in ecology","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"betareg","Version":"3.1-0","Title":"Beta Regression","Description":"Beta regression for modeling beta-distributed dependent variables, e.g., rates and proportions.\n In addition to maximum likelihood regression (for both mean and precision of a beta-distributed\n response), bias-corrected and bias-reduced estimation as well as finite mixture models and\n recursive partitioning for beta regressions are provided.","Published":"2016-08-06","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"}
{"Package":"betas","Version":"0.1.1","Title":"Standardized Beta Coefficients","Description":"Computes standardized beta coefficients and corresponding\n standard errors for the following models:\n linear regression models with numerical covariates only,\n linear regression models with numerical and factorial covariates,\n weighted linear regression models,\n all these linear regression models with interaction terms, and\n robust linear regression models with numerical covariates only.","Published":"2015-03-13","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"betategarch","Version":"3.3","Title":"Simulation, Estimation and Forecasting of Beta-Skew-t-EGARCH\nModels","Description":"Simulation, estimation and forecasting of first-order Beta-Skew-t-EGARCH models with leverage (one-component, two-component, skewed versions).","Published":"2016-10-16","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"bethel","Version":"0.2","Title":"Bethel's algorithm","Description":"The sample size according to the Bethel's procedure.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BETS","Version":"0.2.1","Title":"Brazilian Economic Time Series","Description":"It provides access to and information about the most important\n Brazilian economic time series - from the Getulio Vargas Foundation, the Central\n Bank of Brazil and the Brazilian Institute of Geography and Statistics. It also\n presents tools for managing, analysing (e.g. generating dynamic reports with a\n complete analysis of a series) and exporting these time series.","Published":"2017-04-30","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"BeviMed","Version":"5.0","Title":"Bayesian Evaluation of Variant Involvement in Mendelian Disease","Description":"A fast integrative genetic association test for rare diseases based on a model for disease status given allele counts at rare variant sites. Probability of association, mode of inheritance and probability of pathogenicity for individual variants are all inferred in a Bayesian framework.","Published":"2017-04-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"beyondWhittle","Version":"0.18.1","Title":"Bayesian Spectral Inference for Stationary Time Series","Description":"Implementations of a Bayesian parametric (autoregressive), a Bayesian nonparametric (Whittle likelihood with Bernstein-Dirichlet prior) and a Bayesian semiparametric (autoregressive likelihood with Bernstein-Dirichlet correction) procedure are provided. The work is based on the corrected parametric likelihood by C. Kirch et al (2017) . It was supported by DFG grant KI 1443/3-1.","Published":"2017-04-07","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"bezier","Version":"1.1","Title":"Bezier Curve and Spline Toolkit","Description":"The bezier package is a toolkit for working with Bezier curves and splines. The package provides functions for point generation, arc length estimation, degree elevation and curve fitting.","Published":"2014-07-18","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"bfa","Version":"0.4","Title":"Bayesian Factor Analysis","Description":"Provides model fitting for\n several Bayesian factor models including Gaussian,\n ordinal probit, mixed and semiparametric Gaussian\n copula factor models under a range of priors.","Published":"2016-09-29","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"bfast","Version":"1.5.7","Title":"Breaks For Additive Season and Trend (BFAST)","Description":"BFAST integrates the decomposition of time series into trend,\n seasonal, and remainder components with methods for detecting\n\t and characterizing abrupt changes within the trend and seasonal\n\t components. BFAST can be used to analyze different types of\n\t satellite image time series and can be applied to other disciplines\n\t dealing with seasonal or non-seasonal time series, such as hydrology,\n\t climatology, and econometrics. The algorithm can be extended to\n\t label detected changes with information on the parameters of the\n\t fitted piecewise linear models. BFAST monitoring functionality is added\n\t based on a paper that has been submitted to Remote Sensing of Environment.\n\t BFAST monitor provides functionality to detect disturbance in near real-time based on BFAST-type models.\n BFAST approach is flexible approach that handles missing data without interpolation.\n Furthermore now different models can be used to fit the time series data and detect structural changes (breaks).","Published":"2014-08-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bfork","Version":"0.1.2","Title":"Basic Unix Process Control","Description":"Wrappers for fork()/waitpid() meant to allow R users to quickly\n and easily fork child processes and wait for them to finish.","Published":"2016-01-04","License":"MPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bfp","Version":"0.0-35","Title":"Bayesian Fractional Polynomials","Description":"Implements the Bayesian paradigm for fractional\n polynomial models under the assumption of normally distributed error terms.","Published":"2017-04-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BGData","Version":"1.0.0","Title":"A Suite of Packages for Analysis of Big Genomic Data","Description":"An umbrella package providing a phenotype/genotype data structure\n and scalable and efficient computational methods for large genomic datasets\n in combination with several other packages: 'BEDMatrix', 'LinkedMatrix',\n and 'symDMatrix'.","Published":"2017-05-11","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"bgeva","Version":"0.3-1","Title":"Binary Generalized Extreme Value Additive Models","Description":"Routine for fitting regression models for binary rare events with linear and nonlinear covariate effects when using the quantile function of the Generalized Extreme Value random variable.","Published":"2017-05-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bglm","Version":"1.0","Title":"Bayesian Estimation in Generalized Linear Models","Description":"Implementation of Bayesian estimation in generalized linear models following Gamerman (1997).","Published":"2014-11-20","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"BGLR","Version":"1.0.5","Title":"Bayesian Generalized Linear Regression","Description":"Bayesian Generalized Linear Regression.","Published":"2016-08-19","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"bgmfiles","Version":"0.0.6","Title":"Example BGM Files for the Atlantis Ecosystem Model","Description":"A collection of box-geometry model (BGM) files for the Atlantis \n ecosystem model. Atlantis is a deterministic, biogeochemical, \n whole-of-ecosystem model (see for more information).","Published":"2016-08-10","License":"CC0","snapshot_date":"2017-06-23"}
{"Package":"bgmm","Version":"1.8.3","Title":"Gaussian Mixture Modeling Algorithms and the Belief-Based\nMixture Modeling","Description":"Two partially supervised mixture modeling methods: \n soft-label and belief-based modeling are implemented.\n For completeness, we equipped the package also with the\n functionality of unsupervised, semi- and fully supervised\n mixture modeling. The package can be applied also to selection\n of the best-fitting from a set of models with different\n component numbers or constraints on their structures.\n For detailed introduction see:\n Przemyslaw Biecek, Ewa Szczurek, Martin Vingron, Jerzy\n Tiuryn (2012), The R Package bgmm: Mixture Modeling with\n Uncertain Knowledge, Journal of Statistical Software \n .","Published":"2017-02-27","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"BGPhazard","Version":"1.2.3","Title":"Markov Beta and Gamma Processes for Modeling Hazard Rates","Description":"Computes the hazard rate estimate as described by Nieto-Barajas and Walker (2002) and Nieto-Barajas (2003).","Published":"2016-02-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BGSIMD","Version":"1.0","Title":"Block Gibbs Sampler with Incomplete Multinomial Distribution","Description":"Implement an efficient block Gibbs sampler with incomplete\n data from a multinomial distribution taking values from the k\n categories 1,2,...,k, where data are assumed to miss at random\n and each missing datum belongs to one and only one of m\n distinct non-empty proper subsets A1, A2,..., Am of 1,2,...,k\n and the k categories are labelled such that only consecutive\n A's may overlap.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bgsmtr","Version":"0.1","Title":"Bayesian Group Sparse Multi-Task Regression","Description":"Fits a Bayesian group-sparse multi-task regression model using Gibbs\n sampling. The hierarchical prior encourages shrinkage of the estimated regression\n coefficients at both the gene and SNP level. The model has been applied\n successfully to imaging phenotypes of dimension up to 100; it can be used more\n generally for multivariate (non-imaging) phenotypes.","Published":"2016-10-04","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"BH","Version":"1.62.0-1","Title":"Boost C++ Header Files","Description":"Boost provides free peer-reviewed portable C++ source \n libraries. A large part of Boost is provided as C++ template code\n which is resolved entirely at compile-time without linking. This \n package aims to provide the most useful subset of Boost libraries \n for template use among CRAN package. By placing these libraries in \n this package, we offer a more efficient distribution system for CRAN \n as replication of this code in the sources of other packages is \n avoided. As of release 1.62.0-1, the following Boost libraries are\n included: 'algorithm' 'any' 'atomic' 'bimap' 'bind' 'circular_buffer'\n 'concept' 'config' 'container' 'date'_'time' 'detail' 'dynamic_bitset'\n 'exception' 'filesystem' 'flyweight' 'foreach' 'functional' 'fusion'\n 'geometry' 'graph' 'heap' 'icl' 'integer' 'interprocess' 'intrusive' 'io'\n 'iostreams' 'iterator' 'math' 'move' 'mpl' 'multiprcecision' 'numeric'\n 'pending' 'phoenix' 'preprocessor' 'propery_tree' 'random' 'range'\n 'scope_exit' 'smart_ptr' 'spirit' 'tuple' 'type_traits' 'typeof' 'unordered'\n 'utility' 'uuid'.","Published":"2016-11-19","License":"BSL-1.0","snapshot_date":"2017-06-23"}
{"Package":"Bhat","Version":"0.9-10","Title":"General likelihood exploration","Description":"Functions for MLE, MCMC, CIs (originally in Fortran)","Published":"2013-01-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BHH2","Version":"2016.05.31","Title":"Useful Functions for Box, Hunter and Hunter II","Description":"Functions and data sets reproducing some examples in\n Box, Hunter and Hunter II. Useful for statistical design\n of experiments, especially factorial experiments. ","Published":"2016-05-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bhm","Version":"1.11","Title":"Biomarker Threshold Models","Description":"Biomarker threshold models are tools to fit both predictive and prognostic biomarker effects. ","Published":"2017-05-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BHMSMAfMRI","Version":"1.1","Title":"Bayesian Hierarchical Multi-Subject Multiscale Analysis of\nFunctional MRI Data","Description":"Performs Bayesian hierarchical multi-subject multiscale analysis of fMRI data as described in Sanyal & Ferreira (2012) using wavelet based prior that borrows strength across subjects and returns posterior smoothed versions of the fMRI data and samples from the posterior distribution.","Published":"2017-05-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BHPMF","Version":"1.0","Title":"Uncertainty Quantified Matrix Completion using Bayesian\nHierarchical Matrix Factorization","Description":"Fills the gaps of a matrix incorporating a hierarchical side\n information while providing uncertainty quantification.","Published":"2017-06-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"biasbetareg","Version":"1.0","Title":"Bias correction of the parameter estimates of the beta\nregression model","Description":"Bias correction of second order of the maximum likelihood\n estimators of the parameters of the beta regression model.","Published":"2012-10-01","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"BiasedUrn","Version":"1.07","Title":"Biased Urn Model Distributions","Description":"Statistical models of biased sampling in the form of \n univariate and multivariate noncentral hypergeometric distributions, \n including Wallenius' noncentral hypergeometric distribution and\n Fisher's noncentral hypergeometric distribution \n (also called extended hypergeometric distribution). \n See vignette(\"UrnTheory\") for explanation of these distributions.","Published":"2015-12-28","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"bib2df","Version":"0.2","Title":"Parse a BibTeX File to a Data.frame","Description":"Parse a BibTeX file to a data.frame to make it accessible for further analysis and visualization.","Published":"2017-05-21","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"BiBitR","Version":"0.2.2","Title":"R Wrapper for Java Implementation of BiBit","Description":"A simple R wrapper for the Java BiBit algorithm from \"A\n biclustering algorithm for extracting bit-patterns from binary datasets\"\n from Domingo et al. (2011) . An simple adaption for the BiBit algorithm which allows noise in the biclusters is also introduced. Further, a workflow to guide the algorithm towards given patterns is included as well. ","Published":"2017-02-11","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"bibliometrix","Version":"1.6","Title":"Bibliometric and Co-Citation Analysis Tool","Description":"Tool for quantitative research in scientometrics and bibliometrics.\n It provides various routines for importing bibliographic data from SCOPUS () and \n Thomson Reuters' ISI Web of Knowledge () databases, performing bibliometric analysis \n and building data matrices for co-citation, coupling, scientific collaboration and co-word analysis.","Published":"2017-05-09","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"bibliospec","Version":"0.0.4","Title":"Reading Mass Spectrometric Search Results","Description":"R class to access 'sqlite', 'BiblioSpec' generated, mass spectrometry search result files,\n containing detailed information about peptide spectra matches.\n Convert 'Mascot' '.dat' or e.g. 'comet' '.pep.xml' files with 'BiblioSpec' into 'sqlite' files and than \n access them with the 'CRAN' 'bibliospec' package to analyse with the R-packages 'specL' to generate\n spectra libraries, 'protViz' to annotate spectra, or 'prozor' for false discovery rate \n estimation and protein inference.","Published":"2016-07-01","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"bibtex","Version":"0.4.0","Title":"bibtex parser","Description":"Utility to parse a bibtex file","Published":"2014-12-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"biclique","Version":"1.0.1","Title":"Maximal Biclique Enumeration in Bipartite Graphs","Description":"A tool for enumerating maximal complete bipartite graphs. The input should be a edge list file or a binary matrix file. \n The output are maximal complete bipartite graphs. Algorithms used can be found in this paper Y Zhang et al. BMC Bioinformatics 2014 15:110 .","Published":"2017-05-07","License":"GPL-2 | file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"biclust","Version":"1.2.0","Title":"BiCluster Algorithms","Description":"The main function biclust provides several algorithms to\n find biclusters in two-dimensional data: Cheng and Church,\n Spectral, Plaid Model, Xmotifs and Bimax. In addition, the\n package provides methods for data preprocessing (normalization\n and discretisation), visualisation, and validation of bicluster\n solutions.","Published":"2015-05-12","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"BiDimRegression","Version":"1.0.6","Title":"Calculates the bidimensional regression between two 2D\nconfigurations","Description":"An S3 class with a method for calculates the bidimensional regression between two 2D configurations following the approach by Tobler (1965).","Published":"2014-03-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BIEN","Version":"1.1.0","Title":"Tools for Accessing the Botanical Information and Ecology\nNetwork Database","Description":"Provides Tools for Accessing the Botanical Information and Ecology Network Database. The BIEN database contains cleaned and standardized botanical data including occurrence, trait, plot and taxonomic data (See for more Information). This package provides functions that query the BIEN database by constructing and executing optimized SQL queries.","Published":"2017-03-08","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"bifactorial","Version":"1.4.7","Title":"Inferences for bi- and trifactorial trial designs","Description":"This package makes global and multiple inferences for\n given bi- and trifactorial clinical trial designs using\n bootstrap methods and a classical approach.","Published":"2013-03-04","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"bife","Version":"0.4","Title":"Binary Choice Models with Fixed Effects","Description":"Estimates fixed effects binary choice models (logit and probit) with potentially many individual fixed effects and computes average partial effects. Incidental parameter bias can be reduced with a bias-correction proposed by Hahn and Newey (2004) .","Published":"2017-05-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BIFIEsurvey","Version":"2.1-6","Title":"Tools for Survey Statistics in Educational Assessment","Description":"\n Contains tools for survey statistics (especially in educational\n assessment) for datasets with replication designs (jackknife, \n bootstrap, replicate weights). Descriptive statistics, linear\n and logistic regression, path models for manifest variables\n with measurement error correction and two-level hierarchical\n regressions for weighted samples are included. Statistical \n inference can be conducted for multiply imputed datasets and\n nested multiply imputed datasets. \n This package is developed by BIFIE (Federal Institute for \n Educational Research, Innovation and Development of the Austrian \n School System; Salzburg, Austria).","Published":"2017-05-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bigalgebra","Version":"0.8.4","Title":"BLAS routines for native R matrices and big.matrix objects","Description":"This package provides arithmetic functions for R matrix and big.matrix objects.","Published":"2014-04-16","License":"LGPL-3 | Apache License 2.0","snapshot_date":"2017-06-23"}
{"Package":"biganalytics","Version":"1.1.14","Title":"Utilities for 'big.matrix' Objects from Package 'bigmemory'","Description":"Extend the 'bigmemory' package with various analytics.\n Functions 'bigkmeans' and 'binit' may also be used with native R objects.\n For 'tapply'-like functions, the bigtabulate package may also be helpful.\n For linear algebra support, see 'bigalgebra'. For mutex (locking) support\n for advanced shared-memory usage, see 'synchronicity'.","Published":"2016-02-18","License":"LGPL-3 | Apache License 2.0","snapshot_date":"2017-06-23"}
{"Package":"BIGDAWG","Version":"1.5.5","Title":"Case-Control Analysis of Multi-Allelic Loci","Description":"Data sets and functions for chi-squared Hardy-Weinberg and case-control\n association tests of highly polymorphic genetic data [e.g., human leukocyte antigen\n (HLA) data]. Performs association tests at multiple levels of polymorphism\n (haplotype, locus and HLA amino-acids) as described in Pappas DJ, Marin W, Hollenbach\n JA, Mack SJ (2016) . Combines rare variants to a \n common class to account for sparse cells in tables as described by Hollenbach JA, \n Mack SJ, Thomson G, Gourraud PA (2012) .","Published":"2016-08-31","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"bigFastlm","Version":"0.0.2","Title":"Fast Linear Models for Objects from the 'bigmemory' Package","Description":"A reimplementation of the fastLm() functionality of 'RcppEigen' for\n big.matrix objects for fast out-of-memory linear model fitting.","Published":"2017-03-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bigGP","Version":"0.1-6","Title":"Distributed Gaussian Process Calculations","Description":"Distributes Gaussian process calculations across nodes\n in a distributed memory setting, using Rmpi. The bigGP class \n provides high-level methods for maximum likelihood with normal data, \n prediction, calculation of uncertainty (i.e., posterior covariance \n calculations), and simulation of realizations. In addition, bigGP \n provides an API for basic matrix calculations with distributed \n covariance matrices, including Cholesky decomposition, back/forwardsolve, \n crossproduct, and matrix multiplication.","Published":"2015-07-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bigKRLS","Version":"1.5.3","Title":"Optimized Kernel Regularized Least Squares","Description":"Functions for Kernel-Regularized Least Squares optimized for speed and memory usage are provided along with visualization tools. \n For working papers, sample code, and recent presentations visit .","Published":"2017-06-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"biglars","Version":"1.0.2","Title":"Scalable Least-Angle Regression and Lasso","Description":"Least-angle regression, lasso and stepwise regression for\n numeric datasets in which the number of observations is greater\n than the number of predictors. The functions can be used with\n the ff library to accomodate datasets that are too large to be\n held in memory.","Published":"2011-12-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"biglasso","Version":"1.3-6","Title":"Extending Lasso Model Fitting to Big Data","Description":"Extend lasso and elastic-net model fitting for ultrahigh-dimensional, \n multi-gigabyte data sets that cannot be loaded into memory. It's much more \n memory- and computation-efficient as compared to existing lasso-fitting packages \n like 'glmnet' and 'ncvreg', thus allowing for very powerful big data analysis \n even with an ordinary laptop.","Published":"2017-05-04","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"biglm","Version":"0.9-1","Title":"bounded memory linear and generalized linear models","Description":"Regression for data too large to fit in memory","Published":"2013-05-16","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"bigmemory","Version":"4.5.19","Title":"Manage Massive Matrices with Shared Memory and Memory-Mapped\nFiles","Description":"Create, store, access, and manipulate massive matrices.\n Matrices are allocated to shared memory and may use memory-mapped\n files. Packages 'biganalytics', 'bigtabulate', 'synchronicity', and\n 'bigalgebra' provide advanced functionality.","Published":"2016-03-28","License":"LGPL-3 | Apache License 2.0","snapshot_date":"2017-06-23"}
{"Package":"bigmemory.sri","Version":"0.1.3","Title":"A shared resource interface for Bigmemory Project packages","Description":"This package provides a shared resource interface for the bigmemory and synchronicity packages.","Published":"2014-08-18","License":"LGPL-3 | Apache License 2.0","snapshot_date":"2017-06-23"}
{"Package":"bigml","Version":"0.1.2","Title":"Bindings for the BigML API","Description":"The 'bigml' package contains bindings for the BigML API.\n The package includes methods that provide straightforward access\n to basic API functionality, as well as methods that accommodate\n idiomatic R data types and concepts.","Published":"2015-05-20","License":"LGPL-3","snapshot_date":"2017-06-23"}
{"Package":"bigpca","Version":"1.0.3","Title":"PCA, Transpose and Multicore Functionality for 'big.matrix'\nObjects","Description":"Adds wrappers to add functionality for big.matrix objects (see the bigmemory project).\n This allows fast scalable principle components analysis (PCA), or singular value decomposition (SVD).\n There are also functions for transposing, using multicore 'apply' functionality, data importing \n and for compact display of big.matrix objects. Most functions also work for standard matrices if \n RAM is sufficient.","Published":"2015-11-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bigQueryR","Version":"0.3.1","Title":"Interface with Google BigQuery with Shiny Compatibility","Description":"Interface with 'Google BigQuery',\n see for more information.\n This package uses 'googleAuthR' so is compatible with similar packages, \n including 'Google Cloud Storage' () for result extracts. ","Published":"2017-05-20","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"BigQuic","Version":"1.1-7","Title":"Big Quadratic Inverse Covariance Estimation","Description":"Use Newton's method, coordinate descent, and METIS clustering\n to solve the L1 regularized Gaussian MLE inverse covariance\n matrix estimation problem.","Published":"2017-02-02","License":"GPL (>= 3) | file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"bigReg","Version":"0.1.2","Title":"Generalized Linear Models (GLM) for Large Data Sets","Description":"Allows the user to carry out GLM on very large\n data sets. Data can be created using the data_frame() function and appended\n to the object with object$append(data); data_frame and data_matrix objects\n are available that allow the user to store large data on disk. The data is\n stored as doubles in binary format and any character columns are transformed\n to factors and then stored as numeric (binary) data while a look-up table is\n stored in a separate .meta_data file in the same folder. The data is stored in\n blocks and GLM regression algorithm is modified and carries out a MapReduce-\n like algorithm to fit the model. The functions bglm(), and summary()\n and bglm_predict() are available for creating and post-processing of models.\n The library requires Armadillo installed on your system. It probably won't \n function on windows since multi-core processing is done using mclapply() \n which forks R on Unix/Linux type operating systems.","Published":"2016-07-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bigrquery","Version":"0.4.0","Title":"An Interface to Google's 'BigQuery' 'API'","Description":"Easily talk to Google's 'BigQuery' database from R.","Published":"2017-06-23","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"bigRR","Version":"1.3-10","Title":"Generalized Ridge Regression (with special advantage for p >> n\ncases)","Description":"The package fits large-scale (generalized) ridge regression for various distributions of response. The shrinkage parameters (lambdas) can be pre-specified or estimated using an internal update routine (fitting a heteroscedastic effects model, or HEM). It gives possibility to shrink any subset of parameters in the model. It has special computational advantage for the cases when the number of shrinkage parameters exceeds the number of observations. For example, the package is very useful for fitting large-scale omics data, such as high-throughput genotype data (genomics), gene expression data (transcriptomics), metabolomics data, etc.","Published":"2014-08-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BigSEM","Version":"0.2","Title":"Constructing Large Systems of Structural Equations","Description":"Construct large systems of structural equations using the two-stage penalized least squares (2SPLS) method proposed by Chen, Zhang and Zhang (2016).","Published":"2016-09-29","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"bigsplines","Version":"1.1-0","Title":"Smoothing Splines for Large Samples","Description":"Fits smoothing spline regression models using scalable algorithms designed for large samples. Seven marginal spline types are supported: linear, cubic, different cubic, cubic periodic, cubic thin-plate, ordinal, and nominal. Random effects and parametric effects are also supported. Response can be Gaussian or non-Gaussian: Binomial, Poisson, Gamma, Inverse Gaussian, or Negative Binomial.","Published":"2017-02-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bigstep","Version":"0.7.4","Title":"Stepwise Selection for Large Data Sets","Description":"Selecting linear and generalized linear models for large data sets\n using modified stepwise procedure and modern selection criteria (like\n modifications of Bayesian Information Criterion). Selection can be\n performed on data which exceed RAM capacity. Special selection strategy is\n available, faster than classical stepwise procedure.","Published":"2017-04-05","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"bigtabulate","Version":"1.1.5","Title":"Table, Apply, and Split Functionality for Matrix and\n'big.matrix' Objects","Description":"Extend the bigmemory package with 'table', 'tapply', and 'split'\n support for 'big.matrix' objects. The functions may also be used with native R\n matrices for improving speed and memory-efficiency.","Published":"2016-02-18","License":"LGPL-3 | Apache License 2.0","snapshot_date":"2017-06-23"}
{"Package":"bigtcr","Version":"1.0","Title":"Nonparametric Analysis of Bivariate Gap Time with Competing\nRisks","Description":"For studying recurrent disease and death with competing\n risks, comparisons based on the well-known cumulative incidence function\n can be confounded by different prevalence rates of the competing events.\n Alternatively, comparisons of the conditional distribution of the survival\n time given the failure event type are more relevant for investigating the\n prognosis of different patterns of recurrence disease. This package implements\n a nonparametric estimator for the conditional cumulative incidence function\n and a nonparametric conditional bivariate cumulative incidence function for the\n bivariate gap times proposed in Huang et al. (2016) .","Published":"2016-10-30","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"BigTSP","Version":"1.0","Title":"Top Scoring Pair based methods for classification","Description":"This package is trying to implement Top Scoring Pair based\n methods for classification including LDCA, TSP-tree, TSP-random\n forest and TSP gradient boosting algorithm.","Published":"2012-08-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BigVAR","Version":"1.0.2","Title":"Dimension Reduction Methods for Multivariate Time Series","Description":"Estimates VAR and VARX models with structured Lasso Penalties.","Published":"2017-04-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bikedata","Version":"0.0.1","Title":"Download and Aggregate Data from Public Hire Bicycle Systems","Description":"Download and aggregate data from all public hire bicycle systems\n which provide open data, currently including Santander Cycles in London,\n U.K., and from the U.S.A., citibike in New York City NY, Divvy in Chicago\n IL, Capital Bikeshare in Washington DC, Hubway in Boston MA, and Metro in\n Los Angeles LA.","Published":"2017-05-31","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"bikeshare14","Version":"0.1.0","Title":"Bay Area Bike Share Trips in 2014","Description":"Anonymised Bay Area bike share trip data for the year 2014. \n Also contains additional metadata on stations and weather.","Published":"2016-08-21","License":"CC0","snapshot_date":"2017-06-23"}
{"Package":"bild","Version":"1.1-5","Title":"BInary Longitudinal Data","Description":"Performs logistic regression for binary longitudinal\n data, allowing for serial dependence among observations from a given\n individual and a random intercept term. Estimation is via maximization\n of the exact likelihood of a suitably defined model. Missing values and \n unbalanced data are allowed, with some restrictions. ","Published":"2015-04-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bimetallic","Version":"1.0","Title":"Power for SNP analyses using silver standard cases","Description":"A power calculator for Genome-wide association studies\n (GWAs) with combined gold (error-free) and silver (erroneous)\n phenotyping per McDavid A, Crane PK, Newton KM, Crosslin DR, et\n al. (2011)","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"bimixt","Version":"1.0","Title":"Estimates Mixture Models for Case-Control Data","Description":"Estimates non-Gaussian mixture models of case-control data. The four types of models supported are binormal, two component constrained, two component unconstrained, and four component. The most general model is the four component model, under which both cases and controls are distributed according to a mixture of two unimodal distributions. In the four component model, the two component distributions of the control mixture may be distinct from the two components of the case mixture distribution. In the two component unconstrained model, the components of the control and case mixtures are the same; however the mixture probabilities may differ for cases and controls. In the two component constrained model, all controls are distributed according to one of the two components while cases follow a mixture distribution of the two components. In the binormal model, cases and controls are distributed according to distinct unimodal distributions. These models assume that Box-Cox transformed case and control data with a common lambda parameter are distributed according to Gaussian mixture distributions. Model parameters are estimated using the expectation-maximization (EM) algorithm. Likelihood ratio test comparison of nested models can be performed using the lr.test function. AUC and PAUC values can be computed for the model-based and empirical ROC curves using the auc and pauc functions, respectively. The model-based and empirical ROC curves can be graphed using the roc.plot function. Finally, the model-based density estimates can be visualized by plotting a model object created with the bimixt.model function. ","Published":"2015-08-25","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"Binarize","Version":"1.2","Title":"Binarization of One-Dimensional Data","Description":"Provides methods for the binarization of one-dimensional data and some visualization functions.","Published":"2017-02-06","License":"Artistic-2.0","snapshot_date":"2017-06-23"}
{"Package":"BinaryEMVS","Version":"0.1","Title":"Variable Selection for Binary Data Using the EM Algorithm","Description":"Implements variable selection for high dimensional datasets with a binary response\n variable using the EM algorithm. Both probit and logit models are supported. Also included \n is a useful function to generate high dimensional data with correlated variables.","Published":"2016-01-13","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"BinaryEPPM","Version":"2.0","Title":"Mean and Variance Modeling of Binary Data","Description":"Modeling under- and over-dispersed binary data using extended Poisson process models (EPPM).","Published":"2016-11-28","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"binaryLogic","Version":"0.3.5","Title":"Binary Logic","Description":"Convert to binary numbers (Base2). Shift, rotate, summary. Based on logical vector.","Published":"2016-06-24","License":"GPL-3 + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"binda","Version":"1.0.3","Title":"Multi-Class Discriminant Analysis using Binary Predictors","Description":"The \"binda\" package implements functions for multi-class\n discriminant analysis using binary predictors, for corresponding \n variable selection, and for dichotomizing continuous data.","Published":"2015-07-08","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"bindata","Version":"0.9-19","Title":"Generation of Artificial Binary Data","Description":"Generation of correlated artificial binary data.","Published":"2012-11-14","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"bindr","Version":"0.1","Title":"Parametrized Active Bindings","Description":"Provides a simple interface for creating active bindings where the\n bound function accepts additional arguments.","Published":"2016-11-13","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"bindrcpp","Version":"0.2","Title":"An 'Rcpp' Interface to Active Bindings","Description":"Provides an easy way to fill an environment with active bindings\n that call a C++ function.","Published":"2017-06-17","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"binequality","Version":"1.0.1","Title":"Methods for Analyzing Binned Income Data","Description":"Methods for model selection, model averaging, and calculating metrics, such as the Gini, Theil, Mean Log Deviation, etc, on binned income data where the topmost bin is right-censored. We provide both a non-parametric method, termed the bounded midpoint estimator (BME), which assigns cases to their bin midpoints; except for the censored bins, where cases are assigned to an income estimated by fitting a Pareto distribution. Because the usual Pareto estimate can be inaccurate or undefined, especially in small samples, we implement a bounded Pareto estimate that yields much better results. We also provide a parametric approach, which fits distributions from the generalized beta (GB) family. Because some GB distributions can have poor fit or undefined estimates, we fit 10 GB-family distributions and use multimodel inference to obtain definite estimates from the best-fitting distributions. We also provide binned income data from all United States of America school districts, counties, and states.","Published":"2016-12-17","License":"GPL (>= 3.0)","snapshot_date":"2017-06-23"}
{"Package":"binford","Version":"0.1.0","Title":"Binford's Hunter-Gatherer Data","Description":"Binford's hunter-gatherer data includes more than 200 variables\n coding aspects of hunter-gatherer subsistence, mobility, and social organization\n for 339 ethnographically documented groups of hunter-gatherers.","Published":"2016-08-01","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"bingat","Version":"1.2.2","Title":"Binary Graph Analysis Tools","Description":"Tools to analyze binary graph objects.","Published":"2016-01-15","License":"Apache License (== 2.0)","snapshot_date":"2017-06-23"}
{"Package":"binGroup","Version":"1.1-0","Title":"Evaluation and experimental design for binomial group testing","Description":"This package provides methods for estimation and\n hypothesis testing of proportions in group testing designs. It\n involves methods for estimating a proportion in a single\n population (assuming sensitivity and specificity 1 in designs\n with equal group sizes), as well as hypothesis tests and\n functions for experimental design for this situation. For\n estimating one proportion or the difference of proportions, a\n number of confidence interval methods are included, which can\n deal with various different pool sizes. Further, regression\n methods are implemented for simple pooling and matrix pooling\n designs.","Published":"2012-08-14","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"binhf","Version":"1.0-1","Title":"Haar-Fisz functions for binomial data","Description":"Binomial Haar-Fisz transforms for Gaussianization","Published":"2014-04-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"binman","Version":"0.1.0","Title":"A Binary Download Manager","Description":"Tools and functions for managing the download of binary files.\n Binary repositories are defined in 'YAML' format. Defining new \n pre-download, download and post-download templates allow additional \n repositories to be added.","Published":"2017-01-14","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"binMto","Version":"0.0-6","Title":"Asymptotic simultaneous confidence intervals for many-to-one\ncomparisons of proportions","Description":"Asymptotic simultaneous confidence intervals for comparison of many treatments with one control,\n for the difference of binomial proportions, allows for Dunnett-like-adjustment, Bonferroni or unadjusted intervals.\n Simulation of power of the above interval methods, approximate calculation of any-pair-power, and sample size\n iteration based on approximate any-pair power. \n Exact conditional maximum test for many-to-one comparisons to a control.","Published":"2013-10-09","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"BinNonNor","Version":"1.3","Title":"Data Generation with Binary and Continuous Non-Normal Components","Description":"Generation of multiple binary and continuous non-normal variables simultaneously \n given the marginal characteristics and association structure based on the methodology \n proposed by Demirtas et al. (2012).","Published":"2016-05-13","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"}
{"Package":"BinNor","Version":"2.1","Title":"Simultaneous Generation of Multivariate Binary and Normal\nVariates","Description":"Generating multiple binary and normal variables simultaneously given marginal characteristics and association structure based on the methodology proposed by Demirtas and Doganay (2012).","Published":"2016-05-27","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"binom","Version":"1.1-1","Title":"Binomial Confidence Intervals For Several Parameterizations","Description":"Constructs confidence intervals on the probability of\n success in a binomial experiment via several parameterizations","Published":"2014-01-02","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"binomen","Version":"0.1.2","Title":"'Taxonomic' Specification and Parsing Methods","Description":"Includes functions for working with taxonomic data,\n including functions for combining, separating, and filtering\n taxonomic groups by any rank or name. Allows standard ('SE')\n and non-standard evaluation ('NSE').","Published":"2017-04-25","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"binomialcftp","Version":"1.0","Title":"Generates binomial random numbers via the coupling from the past\nalgorithm","Description":"Binomial random numbers are generated via the perfect\n sampling algorithm. At each iteration dual markov chains are\n generated and coalescence is checked. In case coalescence\n occurs, the resulting number is outputted. In case not, then\n the algorithm is restarted from T(t)=2*T(t) until coalescence\n occurs.","Published":"2012-09-20","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"binomlogit","Version":"1.2","Title":"Efficient MCMC for Binomial Logit Models","Description":"The R package contains different MCMC schemes to estimate the regression coefficients of a binomial (or binary) logit model within a Bayesian framework: a data-augmented independence MH-sampler, an auxiliary mixture sampler and a hybrid auxiliary mixture (HAM) sampler. All sampling procedures are based on algorithms using data augmentation, where the regression coefficients are estimated by rewriting the logit model as a latent variable model called difference random utility model (dRUM).","Published":"2014-03-12","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"binomSamSize","Version":"0.1-5","Title":"Confidence Intervals and Sample Size Determination for a\nBinomial Proportion under Simple Random Sampling and Pooled\nSampling","Description":"\n A suite of functions to compute confidence intervals and necessary\n sample sizes for the parameter p of the Bernoulli B(p)\n distribution under simple random sampling or under pooled\n sampling. Such computations are e.g. of interest when investigating\n the incidence or prevalence in populations.\n The package contains functions to compute coverage probabilities and\n coverage coefficients of the provided confidence intervals\n procedures. Sample size calculations are based on expected length.","Published":"2017-03-08","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"binomTools","Version":"1.0-1","Title":"Performing diagnostics on binomial regression models","Description":"This package provides a range of diagnostic methods for\n binomial regression models.","Published":"2011-08-09","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"BinOrdNonNor","Version":"1.3","Title":"Concurrent Generation of Binary, Ordinal and Continuous Data","Description":"Generation of samples from a mix of binary, ordinal and continuous random variables with a pre-specified correlation matrix and marginal distributions.","Published":"2017-03-28","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"}
{"Package":"binr","Version":"1.1","Title":"Cut Numeric Values into Evenly Distributed Groups","Description":"Implementation of algorithms for cutting numerical values\n exhibiting a potentially highly skewed distribution into evenly distributed\n groups (bins). This functionality can be applied for binning discrete\n values, such as counts, as well as for discretization of continuous values,\n for example, during generation of features used in machine learning\n algorithms.","Published":"2015-03-10","License":"Apache License (== 2.0)","snapshot_date":"2017-06-23"}
{"Package":"binseqtest","Version":"1.0.3","Title":"Exact Binary Sequential Designs and Analysis","Description":"For a series of binary responses, create stopping boundary with exact results after stopping, allowing updating for missing assessments.","Published":"2016-12-16","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"binsmooth","Version":"0.1.0","Title":"Generate PDFs and CDFs from Binned Data","Description":"Provides several methods for generating density functions\n based on binned data. Data are assumed to be nonnegative, but the bin widths\n need not be uniform, and the top bin may be unbounded. All PDF smoothing methods\n maintain the areas specified by the binned data. (Equivalently, all CDF\n smoothing methods interpolate the points specified by the binned data.) An\n estimate for the mean of the distribution may be supplied as an optional\n argument, which greatly improves the reliability of statistics computed from\n the smoothed density functions. Methods include step function, recursive\n subdivision, and optimized spline.","Published":"2016-08-12","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"binst","Version":"0.2.0","Title":"Data Preprocessing, Binning for Classification and Regression","Description":"Various supervised and unsupervised binning tools\n including using entropy, recursive partition methods\n and clustering.","Published":"2016-06-17","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"bio.infer","Version":"1.3-3","Title":"Predict environmental conditions from biological observations","Description":"Imports benthic count data, reformats this data, and\n computes environmental inferences from this data.","Published":"2014-02-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bio3d","Version":"2.3-2","Title":"Biological Structure Analysis","Description":"Utilities to process, organize and explore protein structure,\n sequence and dynamics data. Features include the ability to read and write\n structure, sequence and dynamic trajectory data, perform sequence and structure\n database searches, data summaries, atom selection, alignment, superposition,\n rigid core identification, clustering, torsion analysis, distance matrix\n analysis, structure and sequence conservation analysis, normal mode analysis,\n principal component analysis of heterogeneous structure data, and correlation\n network analysis from normal mode and molecular dynamics data. In addition,\n various utility functions are provided to enable the statistical and graphical\n power of the R environment to work with biological sequence and structural data.\n Please refer to the URLs below for more information.","Published":"2017-06-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"Biocomb","Version":"0.3","Title":"Feature Selection and Classification with the Embedded\nValidation Procedures for Biomedical Data Analysis","Description":"Contains functions for the data analysis with the emphasis on biological data, including several algorithms for feature ranking, feature selection, classification\n algorithms with the embedded validation procedures.\n The functions can deal with numerical as well as with nominal features. Includes also the functions for calculation\n of feature AUC (Area Under the ROC Curve) and HUM (hypervolume under manifold) values and construction 2D- and 3D- ROC curves.\n Provides the calculation of Area Above the RCC (AAC) values and construction of Relative Cost Curves\n (RCC) to estimate the classifier performance under unequal misclassification costs problem.\n There exists the special function to deal with missing values, including different imputing schemes.","Published":"2017-03-22","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"Biodem","Version":"0.4","Title":"Biodemography Functions","Description":"The Biodem package provides a number of functions for Biodemographic analysis.","Published":"2015-07-09","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"BiodiversityR","Version":"2.8-3","Title":"Package for Community Ecology and Suitability Analysis","Description":"Graphical User Interface (via the R-Commander) and utility functions (often based on the vegan package) for statistical analysis of biodiversity and ecological communities, including species accumulation curves, diversity indices, Renyi profiles, GLMs for analysis of species abundance and presence-absence, distance matrices, Mantel tests, and cluster, constrained and unconstrained ordination analysis. A book on biodiversity and community ecology analysis is available for free download from the website. In 2012, methods for (ensemble) suitability modelling and mapping were expanded in the package.","Published":"2017-06-22","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"BIOdry","Version":"0.5","Title":"Multilevel Modeling of Dendroclimatical Fluctuations","Description":"Multilevel ecological data series (MEDS) are sequences of observations ordered according to temporal/spatial hierarchies that are defined by sample designs, with sample variability confined to ecological factors. Dendroclimatic MEDS of tree rings and climate are modeled into normalized fluctuations of tree growth and aridity. Modeled fluctuations (model frames) are compared with Mantel correlograms on multiple levels defined by sample design. Package implementation can be understood by running examples in modelFrame(), and muleMan() functions. ","Published":"2017-04-20","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"BioFTF","Version":"1.2-0","Title":"Biodiversity Assessment Using Functional Tools","Description":"The main drawback of the most common biodiversity indices is that different measures may lead to different rankings among communities. This instrument overcomes this limit using some functional tools with the diversity profiles. In particular, the derivatives, the curvature, the radius of curvature, the arc length, and the surface area are proposed. The goal of this method is to interpret in detail the diversity profiles and obtain an ordering between different ecological communities on the basis of diversity. In contrast to the typical indices of diversity, the proposed method is able to capture the multidimensional aspect of biodiversity, because it takes into account both the evenness and the richness of the species present in an ecological community.","Published":"2016-09-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"biogas","Version":"1.7.0","Title":"Process Biogas Data and Predict Biogas Production","Description":"High- and low-level functions for processing biogas data and predicting biogas production. Molar mass and calculated oxygen demand (COD') can be determined from a chemical formula. Measured gas volume can be corrected for water vapor and to (possibly user-defined) standard temperature and pressure. Gas composition, cumulative production, or other variables can be interpolated to a specified time. Cumulative biogas and methane production (and rates) can be calculated using volumetric, manometric, or gravimetric methods for any number of reactors. With cumulative methane production data and data on reactor contents, biochemical methane potential (BMP) can be calculated and summarized, including subtraction of the inoculum contribution and normalization by substrate mass. Cumulative production and production rates can be summarized in several different ways (e.g., omitting normalization) using the same function. Lastly, biogas quantity and composition can be predicted from substrate composition and additional, optional data.","Published":"2017-02-25","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"biogeo","Version":"1.0","Title":"Point Data Quality Assessment and Coordinate Conversion","Description":"Functions for error detection and correction in point data quality datasets that are used in species distribution modelling. Includes functions for parsing and converting coordinates into decimal degrees from various formats.","Published":"2016-04-08","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"BioGeoBEARS","Version":"0.2.1","Title":"BioGeography with Bayesian (and Likelihood) Evolutionary\nAnalysis in R Scripts","Description":"BioGeoBEARS allows probabilistic inference of both historical biogeography (ancestral geographic ranges on a phylogeny) as well as comparison of different models of range evolution. It reproduces the model available in LAGRANGE (Ree and Smith 2008), as well as making available numerous additional models. For example, LAGRANGE as typically run has two free parameters, d (dispersal rate, i.e. the rate of range addition along a phylogenetic branch) and e (extinction rate, really the rate of local range loss along a phylogenetic branch). LAGRANGE also has a fixed cladogenic model which gives equal probability to a number of allowed range inheritance events, e.g.: (1) vicariance, (2) a new species starts in a subset of the ancestral range, (3) the ancestral range is copied to both species; in all cases, at least one species must have a starting range of size 1. LAGRANGE assigns equal probability to each of these events, and zero probability to other events. BioGeoBEARS adds an additional cladogenic event: founder-event speciation (the new species jumps to a range outside of the ancestral range), and also allows the relative weighting of the different sorts of events to be made into free parameters, allowing optimization and standard model choice procedures to pick the best model. The relative probability of different descendent range sizes is also parameterized and thus can also be specified or estimated. The flexibility available in BioGeoBEARS also enables the natural incorporation of (1) imperfect detection of geographic ranges in the tips, and (2) inclusion of fossil geographic range data, when the fossils are tips on the phylogeny. Bayesian analysis has been implemented through use of the \"LaplacesDemon\" package, however this package is now maintained off of CRAN, so its usage is not formally included in BioGeoBEARS at the current time. CITATION INFO: This package is the result of my Ph.D. research, please cite the package if you use it! Type: citation(package=\"BioGeoBEARS\") to get the citation information.","Published":"2014-01-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"biogram","Version":"1.4","Title":"N-Gram Analysis of Biological Sequences","Description":"Tools for extraction and analysis of various\n n-grams (k-mers) derived from biological sequences (proteins\n or nucleic acids). Contains QuiPT (quick permutation test) for fast\n feature-filtering of the n-gram data.","Published":"2017-01-06","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"Biograph","Version":"2.0.6","Title":"Explore Life Histories","Description":"Transition rates are computed from transitions and exposures.Useful graphics and life-course indicators are computed. The package structures the data for multistate statistical and demographic modeling of life histories. \t","Published":"2016-03-31","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"bioimagetools","Version":"1.1.0","Title":"Tools for Microscopy Imaging","Description":"Tools for 3D imaging, mostly for biology/microscopy. \n Read and write TIFF stacks. Functions for segmentation, filtering and analysing 3D point patterns.","Published":"2017-02-18","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"bioinactivation","Version":"1.1.5","Title":"Simulation of Dynamic Microbial Inactivation","Description":"Prediction and adjustment to experimental data of microbial\n inactivation. Several models available in the literature are implemented.","Published":"2017-01-29","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"BioInstaller","Version":"0.1.2","Title":"Lightweight Biology Software Installer","Description":"\n Can be used to install and download massive bioinformatics analysis softwares and databases, such as NGS reads mapping tools with its required databases.","Published":"2017-06-23","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"biolink","Version":"0.1.2","Title":"Create Hyperlinks to Biological Databases and Resources","Description":"Generate urls and hyperlinks to commonly used biological databases\n and resources based on standard identifiers. This is primarily useful when\n writing dynamic reports that reference things like gene symbols in text or\n tables, allowing you to, for example, convert gene identifiers to hyperlinks\n pointing to their entry in the NCBI Gene database. Currently supports NCBI\n Gene, PubMed, Gene Ontology, CRAN and Bioconductor.","Published":"2017-03-07","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"Biolinv","Version":"0.1-1","Title":"Modelling and Forecasting Biological Invasions","Description":"Analysing and forecasting biological invasions time series\n with a stochastic, non mechanistic approach that gives proper weight\n to the anthropic component, accounts for habitat suitability and\n provides measures of precision for its estimates.","Published":"2017-02-03","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"BIOM.utils","Version":"0.9","Title":"Utilities for the BIOM (Biological Observation Matrix) Format","Description":"Provides utilities to facilitate import, export and computation with the \n BIOM (Biological Observation Matrix) format (http://biom-format.org).","Published":"2014-08-29","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"BioMark","Version":"0.4.5","Title":"Find Biomarkers in Two-Class Discrimination Problems","Description":"Variable selection methods are provided for several classification methods: the lasso/elastic net, PCLDA, PLSDA, and several t-tests. Two approaches for selecting cutoffs can be used, one based on the stability of model coefficients under perturbation, and the other on higher criticism.","Published":"2015-09-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"biomartr","Version":"0.5.1","Title":"Genomic Data Retrieval","Description":"Perform metagenomic data retrieval and functional annotation\n retrieval. In detail, this package aims to provide users with a standardized\n way to automate genome, proteome, coding sequence ('CDS'), 'GFF', and metagenome\n retrieval from 'NCBI' and 'ENSEMBL' databases. Furthermore, an interface to the 'BioMart' database\n (Smedley et al. (2009) ) allows users to retrieve\n functional annotation for genomic loci. Users can download entire databases such\n as 'NCBI RefSeq' (Pruitt et al. (2007) ), 'NCBI nr',\n 'NCBI nt' and 'NCBI Genbank' (Benson et al. (2013) ) as\n well as 'ENSEMBL' and 'ENSEMBLGENOMES' with only one command.","Published":"2017-05-28","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"BIOMASS","Version":"1.1","Title":"Estimating Aboveground Biomass and Its Uncertainty in Tropical\nForests","Description":"Contains functions to estimate aboveground biomass/carbon and its uncertainty in tropical forests. These functions allow to (1) retrieve and to correct taxonomy, (2) estimate wood density and its uncertainty, (3) construct height-diameter models, (4) estimate the above-ground biomass/carbon at the stand level with associated uncertainty. To cite BIOMASS, please use citation(\"BIOMASS\").","Published":"2017-01-05","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"biomod2","Version":"3.3-7","Title":"Ensemble Platform for Species Distribution Modeling","Description":"Functions for species distribution modeling, calibration and\n evaluation, ensemble of models.","Published":"2016-03-01","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"bionetdata","Version":"1.0.1","Title":"Biological and chemical data networks","Description":"Data Package that includes several examples of chemical and biological data networks, i.e. data graph structured.","Published":"2014-09-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bioOED","Version":"0.1.1","Title":"Sensitivity Analysis and Optimum Experiment Design for Microbial\nInactivation","Description":"Extends the bioinactivation package with functions for Sensitivity\n Analysis and Optimum Experiment Design.","Published":"2017-03-13","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"BioPET","Version":"0.2.1","Title":"Biomarker Prognostic Enrichment Tool","Description":"Prognostic Enrichment is a clinical trial strategy of evaluating an intervention in a patient population with a higher rate of the unwanted event than the broader patient population (R. Temple (2010) ). A higher event rate translates to a lower sample size for the clinical trial, which can have both practical and ethical advantages. This package is a tool to help evaluate biomarkers for prognostic enrichment of clinical trials. ","Published":"2017-02-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BioPhysConnectoR","Version":"1.6-10","Title":"BioPhysConnectoR","Description":"Utilities and functions to investigate the relation\n between biomolecular structures, their interactions, and the\n evolutionary information revealed in sequence alignments of\n these molecules.","Published":"2013-01-09","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"bioplots","Version":"0.0.1","Title":"Visualization of Overlapping Results with Heatmap","Description":"Visualization of complex biological datasets is\n essential to understand complementary spects of biology\n in big data era.\n In addition, analyzing of multiple datasets enables to\n understand biologcal processes deeply and accurately.\n Multiple datasets produce multiple analysis results, and\n these overlappings are usually visualized in Venn diagram.\n bioplots is a tiny R package that generates a heatmap to\n visualize overlappings instead of using Venn diagram.","Published":"2016-06-06","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"bioPN","Version":"1.2.0","Title":"Simulation of deterministic and stochastic biochemical reaction\nnetworks using Petri Nets","Description":"\n bioPN is a package suited to perform simulation of deterministic and stochastic systems of biochemical reaction\n networks.\n Models are defined using a subset of Petri Nets, in a way that is close at how chemical reactions\n are defined.\n For deterministic solutions, bioPN creates the associated system of differential equations \"on the fly\", and\n solves it with a Runge Kutta Dormand Prince 45 explicit algorithm.\n For stochastic solutions, bioPN offers variants of Gillespie algorithm, or SSA.\n For hybrid deterministic/stochastic,\n it employs the Haseltine and Rawlings algorithm, that partitions the system in fast and slow reactions.\n bioPN algorithms are developed in C to achieve adequate performance.","Published":"2014-03-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"biorxivr","Version":"0.1.3","Title":"Search and Download Papers from the bioRxiv Preprint Server","Description":"The bioRxiv preprint server (http://www.biorxiv.org) is a website where scientists can post preprints of scholarly texts in biology. Users can search and download PDFs in bulk from the preprint server. The text of abstracts are stored as raw text within R, and PDFs can easily be saved and imported for text mining with packages such as 'tm'.","Published":"2016-04-06","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"bios2mds","Version":"1.2.2","Title":"From BIOlogical Sequences to MultiDimensional Scaling","Description":"Bios2mds is primarily dedicated to the analysis of\n biological sequences by metric MultiDimensional Scaling with\n projection of supplementary data. It contains functions for\n reading multiple sequence alignment files, calculating distance\n matrices, performing metric multidimensional scaling and\n visualizing results.","Published":"2012-06-05","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"biosignalEMG","Version":"2.0.1","Title":"Tools for Electromyogram Signals (EMG) Analysis","Description":"Data processing tools to compute the rectified, integrated and the averaged EMG. Routines for automatic detection of activation phases. A routine to compute and plot the ensemble average of the EMG. An EMG signal simulator for general purposes.","Published":"2017-03-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"biospear","Version":"1.0.0","Title":"Biomarker Selection in Penalized Regression Models","Description":"Provides a useful R tool for developing and validating prediction models, estimate expected survival of patients and visualize them graphically. \n Most of the implemented methods are based on penalized regressions such as: the lasso (Tibshirani R (1996)), the elastic net (Zou H et al. (2005) ), the adaptive lasso (Zou H (2006) ), the stability selection (Meinshausen N et al. (2010) ), some extensions of the lasso (Ternes et al. (2016) ), some methods for the interaction setting (Ternes N et al. (2016) ), or others.\n A function generating simulated survival data set is also provided.","Published":"2017-05-13","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"BioStatR","Version":"2.0.0","Title":"Initiation à la Statistique avec R","Description":"This packages provides datasets and functions for the book \"Initiation à la Statistique avec R\", Dunod, 2ed, 2014.","Published":"2014-08-28","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"biotic","Version":"0.1.2","Title":"Calculation of Freshwater Biotic Indices","Description":"Calculates a range of UK freshwater invertebrate biotic indices\n including BMWP, Whalley, WHPT, Habitat-specific BMWP, AWIC, LIFE and PSI.","Published":"2016-04-20","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"biotools","Version":"3.1","Title":"Tools for Biometry and Applied Statistics in Agricultural\nScience","Description":"Tools designed to perform and work with cluster analysis (including Tocher's algorithm), \n\tdiscriminant analysis and path analysis (standard and under collinearity), as well as some \n\tuseful miscellaneous tools for dealing with sample size and optimum plot size calculations.\n\tMantel's permutation test can be found in this package. A new approach for calculating its\n\tpower is implemented. biotools also contains the new tests for genetic covariance components.\n\tAn approach for predicting spatial gene diversity is implemented.","Published":"2017-05-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bipartite","Version":"2.08","Title":"Visualising Bipartite Networks and Calculating Some (Ecological)\nIndices","Description":"Functions to visualise webs and calculate a series of indices commonly used to describe pattern in (ecological) webs. It focuses on webs consisting of only two levels (bipartite), e.g. pollination webs or predator-prey-webs. Visualisation is important to get an idea of what we are actually looking at, while the indices summarise different aspects of the web's topology. ","Published":"2017-03-31","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"biplotbootGUI","Version":"1.1","Title":"Bootstrap on Classical Biplots and Clustering Disjoint Biplot","Description":"A GUI with which the user can construct and interact with Bootstrap methods on Classical Biplots and with Clustering and/or Disjoint Biplot.","Published":"2015-06-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BiplotGUI","Version":"0.0-7","Title":"Interactive Biplots in R","Description":"Provides a GUI with which users can construct and interact\n with biplots.","Published":"2013-03-19","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"BIPOD","Version":"0.2.1","Title":"BIPOD (Bayesian Inference for Partially Observed diffusions)","Description":"Bayesian parameter estimation for (partially observed)\n two-dimensional diffusions.","Published":"2014-03-19","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"birdnik","Version":"0.1.0","Title":"Connector for the Wordnik API","Description":"A connector to the API for 'Wordnik' , a dictionary service that also provides\n bigram generation, word frequency data, and a whole host of other functionality.","Published":"2016-08-14","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"birdring","Version":"1.3","Title":"Methods to Analyse Ring Re-Encounter Data","Description":"R functions to read EURING data and analyse re-encounter data of birds marked by metal rings. For a tutorial, go to http://www.tandfonline.com/doi/full/10.1080/03078698.2014.933053.","Published":"2015-10-13","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"birk","Version":"2.1.2","Title":"MA Birk's Functions","Description":"Collection of tools to make R more convenient. Includes tools to\n summarize data using statistics not available with base R and manipulate\n objects for analyses.","Published":"2016-07-27","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"bisectr","Version":"0.1.0","Title":"Tools to find bad commits with git bisect","Description":"Tools to find bad commits with git bisect. See\n https://github.com/wch/bisectr for examples and test script\n templates.","Published":"2012-06-15","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"BiSEp","Version":"2.2","Title":"Toolkit to Identify Candidate Synthetic Lethality","Description":"Enables the user to infer potential synthetic lethal relationships\n by analysing relationships between bimodally distributed gene pairs in big\n gene expression datasets. Enables the user to visualise these candidate\n synthetic lethal relationships.","Published":"2017-01-26","License":"Artistic-2.0","snapshot_date":"2017-06-23"}
{"Package":"bisoreg","Version":"1.4","Title":"Bayesian Isotonic Regression with Bernstein Polynomials","Description":"Provides functions for fitting Bayesian monotonic regression models to data.","Published":"2015-03-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BisRNA","Version":"0.2.1","Title":"Analysis of RNA Cytosine-5 Methylation","Description":"Bisulfite-treated RNA non-conversion in a set of samples is analysed as\n follows: each sample's Poisson parameter is estimated, and non-conversion\n p-values are calculated for each sample and adjusted for multiple testing.\n Finally, combined non-conversion p-value and standard error of the non-conversion\n are calculated on the intersection of the set of samples.\n A low combined non-conversion p-value points to methylation of the\n corresponding RNA cytosine, or another event blocking bisulfite conversion.","Published":"2017-04-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bit","Version":"1.1-12","Title":"A class for vectors of 1-bit booleans","Description":"bitmapped vectors of booleans (no NAs), \n coercion from and to logicals, integers and integer subscripts; \n fast boolean operators and fast summary statistics. \n With 'bit' vectors you can store true binary booleans {FALSE,TRUE} at the \n expense of 1 bit only, on a 32 bit architecture this means factor 32 less \n RAM and ~ factor 32 more speed on boolean operations. Due to overhead of \n R calls, actual speed gain depends on the size of the vector: expect gains \n for vectors of size > 10000 elements. Even for one-time boolean operations \n it can pay-off to convert to bit, the pay-off is obvious, when such \n components are used more than once. \n Reading from and writing to bit is approximately as fast as accessing \n standard logicals - mostly due to R's time for memory allocation. The package \n allows to work with pre-allocated memory for return values by calling .Call() \n directly: when evaluating the speed of C-access with pre-allocated vector \n memory, coping from bit to logical requires only 70% of the time for copying \n from logical to logical; and copying from logical to bit comes at a \n performance penalty of 150%. the package now contains further classes for \n representing logical selections: 'bitwhich' for very skewed selections and \n 'ri' for selecting ranges of values for chunked processing. All three index \n classes can be used for subsetting 'ff' objects (ff-2.1-0 and higher).","Published":"2014-04-09","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"bit64","Version":"0.9-7","Title":"A S3 Class for Vectors of 64bit Integers","Description":"\n Package 'bit64' provides serializable S3 atomic 64bit (signed) integers. \n These are useful for handling database keys and exact counting in +-2^63.\n WARNING: do not use them as replacement for 32bit integers, integer64 are not\n supported for subscripting by R-core and they have different semantics when \n combined with double, e.g. integer64 + double => integer64. \n Class integer64 can be used in vectors, matrices, arrays and data.frames. \n Methods are available for coercion from and to logicals, integers, doubles, \n characters and factors as well as many elementwise and summary functions. \n Many fast algorithmic operations such as 'match' and 'order' support inter-\n active data exploration and manipulation and optionally leverage caching.","Published":"2017-05-08","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"bitops","Version":"1.0-6","Title":"Bitwise Operations","Description":"Functions for bitwise operations on integer vectors.","Published":"2013-08-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BiTrinA","Version":"1.2","Title":"Binarization and Trinarization of One-Dimensional Data","Description":"Provides methods for the binarization and trinarization of one-dimensional data and some visualization functions.","Published":"2017-02-06","License":"Artistic-2.0","snapshot_date":"2017-06-23"}
{"Package":"bitrugs","Version":"0.1","Title":"Bayesian Inference of Transmission Routes Using Genome Sequences","Description":"MCMC methods to estimate transmission dynamics and infection routes in hospitals using genomic sampling data.","Published":"2016-05-24","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"BivarP","Version":"1.0","Title":"Estimating the Parameters of Some Bivariate Distributions","Description":"Parameter estimation of bivariate distribution functions\n modeled as a Archimedean copula function. The input data may contain\n values from right censored. Used marginal distributions are two-parameter.\n Methods for density, distribution, survival, random sample generation.","Published":"2015-04-18","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"bivarRIpower","Version":"1.2","Title":"Sample size calculations for bivariate longitudinal data","Description":"Implements sample size calculations for bivariate random\n intercept regression model that are described in Comulada and\n Weiss (2010)","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BivRegBLS","Version":"1.0.0","Title":"Tolerance Intervals and Errors-in-Variables Regressions in\nMethod Comparison Studies","Description":"Assess the agreement in method comparison studies by tolerance intervals and errors-in-variables regressions. The Ordinary Least Square regressions (OLSv and OLSh), the Deming Regression (DR), and the (Correlated)-Bivariate Least Square regressions (BLS and CBLS) can be used with unreplicated or replicated data. The BLS and CBLS are the two main functions to estimate a regression line, while XY.plot and MD.plot are the two main graphical functions to display, respectively an (X,Y) plot or (M,D) plot with the BLS or CBLS results. Assuming no proportional bias, the (M,D) plot (Band-Altman plot) may be simplified by calculating horizontal lines intervals with tolerance intervals (beta-expectation (type I) or beta-gamma content (type II)).","Published":"2017-01-06","License":"AGPL-3","snapshot_date":"2017-06-23"}
{"Package":"bivrp","Version":"1.0","Title":"Bivariate Residual Plots with Simulation Polygons","Description":"Generates bivariate residual plots with simulation polygons for any diagnostics and bivariate model from which functions to extract the desired diagnostics, simulate new data and refit the models are available.","Published":"2016-12-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BivUnifBin","Version":"1.1","Title":"Generation of Bivariate Uniform Data and Its Relation to\nBivariate Binary Data","Description":"Simulation of bivariate uniform data with a full range of correlations based on two beta densities and computation of the tetrachoric correlation (correlation of bivariate uniform data) from the phi coefficient (correlation of bivariate binary data) and vice versa.","Published":"2017-01-26","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"}
{"Package":"biwavelet","Version":"0.20.11","Title":"Conduct Univariate and Bivariate Wavelet Analyses","Description":"This is a port of the WTC MATLAB package written by Aslak Grinsted\n and the wavelet program written by Christopher Torrence and Gibert P.\n Compo. This package can be used to perform univariate and bivariate\n (cross-wavelet, wavelet coherence, wavelet clustering) analyses.","Published":"2016-09-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"biwt","Version":"1.0","Title":"Functions to compute the biweight mean vector and covariance &\ncorrelation matrices","Description":"Compute multivariate location, scale, and correlation\n estimates based on Tukey's biweight M-estimator.","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"bizdays","Version":"1.0.3","Title":"Business Days Calculations and Utilities","Description":"Business days calculations based on a list of holidays and\n nonworking weekdays. Quite useful for fixed income and derivatives pricing.","Published":"2017-05-03","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"bkmr","Version":"0.2.0","Title":"Bayesian Kernel Machine Regression","Description":"Implementation of a statistical approach \n for estimating the joint health effects of multiple \n concurrent exposures.","Published":"2017-03-24","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"BKPC","Version":"1.0","Title":"Bayesian Kernel Projection Classifier","Description":"Bayesian kernel projection classifier is a nonlinear multicategory classifier which performs the classification of the projections of the data to the principal axes of the feature space. A Gibbs sampler is implemented to find the posterior distributions of the parameters.","Published":"2016-02-27","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"blackbox","Version":"1.0.18","Title":"Black Box Optimization and Exploration of Parameter Space","Description":"Performs prediction of a response function from simulated response values, allowing black-box optimization of functions estimated with some error. Includes a simple user interface for such applications, as well as more specialized functions designed to be called by the Migraine software (see URL). The latter functions are used for prediction of likelihood surfaces and implied likelihood ratio confidence intervals, and for exploration of predictor space of the surface. Prediction of the response is based on ordinary kriging (with residual error) of the input. Estimation of smoothing parameters is performed by generalized cross-validation.","Published":"2017-02-03","License":"CeCILL-2","snapshot_date":"2017-06-23"}
{"Package":"BlakerCI","Version":"1.0-5","Title":"Blaker's Binomial Confidence Limits","Description":"Fast and accurate calculation of Blaker's binomial confidence limits (and some related stuff).","Published":"2015-08-20","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"BlandAltmanLeh","Version":"0.3.1","Title":"Plots (Slightly Extended) Bland-Altman Plots","Description":"Bland-Altman Plots using either base graphics or ggplot2,\n augmented with confidence intervals, with detailed return values and\n a sunflowerplot option for data with ties.","Published":"2015-12-23","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"blatr","Version":"1.0.1","Title":"Send Emails Using 'Blat' for Windows","Description":"A wrapper around the 'Blat' command line SMTP mailer for Windows.\n 'Blat' is public domain software, but be sure to read the license before use.\n It can be found at the Blat website http://www.blat.net.","Published":"2015-03-11","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"Blaunet","Version":"2.0.4","Title":"Calculate and Analyze Blau Status for Measuring Social Distance","Description":"An integrated set of tools to calculate and analyze Blau statuses quantifying social distance between individuals belonging to organizations. Relational (network) data may be incorporated for additional analyses.","Published":"2016-04-13","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"blavaan","Version":"0.2-4","Title":"Bayesian Latent Variable Analysis","Description":"Fit a variety of Bayesian latent variable models, including confirmatory\n factor analysis, structural equation models, and latent growth curve models.","Published":"2017-04-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BLCOP","Version":"0.3.1","Title":"Black-Litterman and Copula Opinion Pooling Frameworks","Description":"An implementation of the Black-Litterman Model and Atilio\n Meucci's copula opinion pooling framework.","Published":"2015-02-04","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"blendedLink","Version":"1.0","Title":"A New Link Function that Blends Two Specified Link Functions","Description":"A new link function that equals one specified link function up to a cutover then a linear rescaling of another specified link function. For use in glm() or glm2(). The intended use is in binary regression, in which case the first link should be set to \"log\" and the second to \"logit\". This ensures that fitted probabilities are between 0 and 1 and that exponentiated coefficients can be interpreted as relative risks for probabilities up to the cutoff.","Published":"2017-01-31","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"}
{"Package":"blender","Version":"0.1.2","Title":"Analyze biotic homogenization of landscapes","Description":"Tools for assessing exotic species' contributions to\n landscape homogeneity using average pairwise Jaccard similarity\n and an analytical approximation derived in Harris et al. (2011,\n \"Occupancy is nine-tenths of the law,\" The American\n Naturalist). Also includes a randomization method for assessing\n sources of model error.","Published":"2014-02-22","License":"GPL-2 | Artistic-2.0","snapshot_date":"2017-06-23"}
{"Package":"blighty","Version":"3.1-4","Title":"United Kingdom coastlines","Description":"Function for drawing the coastline of the British Isles","Published":"2012-04-17","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"blkbox","Version":"1.0","Title":"Data Exploration with Multiple Machine Learning Algorithms","Description":"Allows data to be processed by multiple machine learning algorithms\n at the same time, enables feature selection of data by single a algorithm or\n combinations of multiple. Easy to use tool for k-fold cross validation and\n nested cross validation.","Published":"2016-08-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"blkergm","Version":"1.1","Title":"Fitting block ERGM given the block structure on social networks","Description":"This package is an extension to the \"ergm\" package which implements the block ergms.","Published":"2014-08-27","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"blm","Version":"2013.2.4.4","Title":"Binomial linear and linear-expit regression","Description":"Implements regression models for binary data on the absolute risk scale. These models are applicable to cohort and population-based case-control data.","Published":"2013-08-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"blme","Version":"1.0-4","Title":"Bayesian Linear Mixed-Effects Models","Description":"Maximum a posteriori estimation for linear and generalized\n linear mixed-effects models in a Bayesian setting. Extends\n 'lme4' by Douglas Bates, Martin Maechler, Ben Bolker, and Steve Walker.","Published":"2015-06-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"blmeco","Version":"1.1","Title":"Data Files and Functions Accompanying the Book \"Bayesian Data\nAnalysis in Ecology using R, BUGS and Stan\"","Description":"Data files and functions accompanying the book Korner-Nievergelt, Roth, von Felten, Guelat, Almasi, Korner-Nievergelt (2015) \"Bayesian Data Analysis in Ecology using R, BUGS and Stan\", Elsevier, New York.","Published":"2015-08-22","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"BLModel","Version":"1.0.2","Title":"Black-Litterman Posterior Distribution","Description":"Posterior distribution in the Black-Litterman model is computed from a prior distribution given in the form of a time series of asset returns and a continuous distribution of views provided by the user as an external function.","Published":"2017-03-29","License":"GNU General Public License version 3","snapshot_date":"2017-06-23"}
{"Package":"blob","Version":"1.1.0","Title":"A Simple S3 Class for Representing Vectors of Binary Data\n('BLOBS')","Description":"R's raw vector is useful for storing a single binary object.\n What if you want to put a vector of them in a data frame? The blob\n package provides the blob object, a list of raw vectors, suitable for\n use as a column in data frame.","Published":"2017-06-17","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"blockcluster","Version":"4.2.3","Title":"Coclustering Package for Binary, Categorical, Contingency and\nContinuous Data-Sets","Description":"Simultaneous clustering of rows and columns, usually designated by\n biclustering, co-clustering or block clustering, is an important technique\n in two way data analysis. It consists of estimating a mixture model which\n takes into account the block clustering problem on both the individual and\n variables sets. The blockcluster package provides a bridge between the C++\n core library and the R statistical computing environment. This package\n allows to co-cluster binary, contingency, continuous and categorical\n data-sets. It also provides utility functions to visualize the results.\n This package may be useful for various applications in fields of Data\n mining, Information retrieval, Biology, computer vision and many more. More\n information about the project and comprehensive tutorial can be found on\n the link mentioned in URL.","Published":"2017-02-27","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"blockmatrix","Version":"1.0","Title":"blockmatrix: Tools to solve algebraic systems with partitioned\nmatrices","Description":"Some elementary matrix algebra tools are implemented to manage\n block matrices or partitioned matrix, i.e. \"matrix of matrices\"\n (http://en.wikipedia.org/wiki/Block_matrix). The block matrix is here\n defined as a new S3 object. In this package, some methods for \"matrix\"\n object are rewritten for \"blockmatrix\" object. New methods are implemented.\n This package was created to solve equation systems with block matrices for\n the analysis of environmental vector time series .\n Bugs/comments/questions/collaboration of any kind are warmly welcomed.","Published":"2014-01-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BlockMessage","Version":"1.0","Title":"Creates strings that show a text message in 8 by 8 block letters","Description":"Creates strings that show a text message in 8 by 8 block\n letters","Published":"2013-03-14","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"blockmodeling","Version":"0.1.8","Title":"An R package for Generalized and classical blockmodeling of\nvalued networks","Description":"The package is primarly ment as an implementation of\n Generalized blockmodeling for valued networks. In addition,\n measurese of similarity or dissimilarity based on structural\n equivalence and regular equivalence (REGE algorithem) can be\n computed and partitioned matrices can be ploted.","Published":"2010-01-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"blockmodels","Version":"1.1.1","Title":"Latent and Stochastic Block Model Estimation by a 'V-EM'\nAlgorithm","Description":"Latent and Stochastic Block Model estimation by a Variational EM algorithm.\n Various probability distribution are provided (Bernoulli,\n Poisson...), with or without covariates.","Published":"2015-04-21","License":"LGPL-2.1","snapshot_date":"2017-06-23"}
{"Package":"blockrand","Version":"1.3","Title":"Randomization for block random clinical trials","Description":"Create randomizations for block random clinical trials.\n Can also produce a pdf file of randomization cards.","Published":"2013-01-18","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"blocksdesign","Version":"2.5","Title":"Nested and Crossed Block Designs for Factorial, Fractional\nFactorial and Unstructured Treatment Sets","Description":"Constructs randomized nested row-and-column type block designs\n with arbitrary depth of nesting for arbitrary factorial or fractional \n factorial treatment designs. The treatment model can be defined\n by a models.matrix formula which allows any feasible \n combination of quantitative or qualitative model terms.\n Any feasible design size can be defined and, where necessary, \n a D-optimal swapping routine will find the best fraction for the required \n design size. Blocks are nested hierarchically and the block model \n for any particular level of nesting can comprise either a simple nested blocks \n design or a crossed row-and-column blocks design. Block sizes \n are either all equal or differ, at most, by one plot within any particular row\n or column classification and any particular level of nesting. The design outputs \n include a data frame showing the allocation of treatments to blocks, a table\n showing block levels, the fractional design efficiency, \n the achieved D-efficiency, the achieved A-efficiency\n (unstructured treatments only) and A-efficiency upper bounds, where available,\n for each stratum in the design. For designs with simple unstructured treatments,\n a plan layout showing the allocation of treatments to blocks or to rows and\n columns in the bottom stratum of the design is also given.","Published":"2017-06-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"blockseg","Version":"0.2","Title":"Two Dimensional Change-Points Detection","Description":"Segments a matrix in blocks with constant values.","Published":"2016-02-10","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"}
{"Package":"blockTools","Version":"0.6-3","Title":"Block, Assign, and Diagnose Potential Interference in Randomized\nExperiments","Description":"Blocks units into experimental blocks, with one unit per treatment condition, by creating a measure of multivariate distance between all possible pairs of units. Maximum, minimum, or an allowable range of differences between units on one variable can be set. Randomly assign units to treatment conditions. Diagnose potential interference between units assigned to different treatment conditions. Write outputs to .tex and .csv files.","Published":"2016-12-02","License":"GPL (>= 2) | file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"Blossom","Version":"1.4","Title":"Statistical Comparisons with Distance-Function Based Permutation\nTests","Description":"Provides tools for making statistical comparisons with distance-function based permutation tests developed by P. W. Mielke, Jr. and colleagues at Colorado State University (Mielke, P. W. & Berry, K. J. Permutation Methods: A Distance Function Approach (Springer, New York, 2001)) and for testing parameters estimated in linear models with permutation procedures developed by B. S. Cade and colleagues at the Fort Collins Science Center, U. S. Geological Survey.","Published":"2016-03-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BLPestimatoR","Version":"0.1.4","Title":"Performs a BLP Demand Estimation","Description":"Provides the estimation algorithm to perform the demand estimation described in Berry, Levinsohn and Pakes (1995) . The routine uses analytic gradients and offers a large number of implemented integration methods and optimization routines.","Published":"2017-05-05","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"BLR","Version":"1.4","Title":"Bayesian Linear Regression","Description":"Bayesian Linear Regression","Published":"2014-12-05","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"blsAPI","Version":"0.1.8","Title":"Request Data from the U.S. Bureau of Labor Statistics API","Description":"Allows users to request data for one or multiple series through the\n U.S. Bureau of Labor Statistics API. Users provide parameters as specified in\n and the function returns a JSON\n string.","Published":"2017-05-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"blscrapeR","Version":"2.1.5","Title":"An API Wrapper for the Bureau of Labor Statistics (BLS)","Description":"Scrapes various data from . The U.S. Bureau of Labor Statistics is the statistical branch of the United States Department of Labor. The package has additional functions to help parse, analyze and visualize the data.","Published":"2017-06-19","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"BMA","Version":"3.18.7","Title":"Bayesian Model Averaging","Description":"Package for Bayesian model averaging and variable selection for linear models,\n generalized linear models and survival models (cox\n regression).","Published":"2017-05-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BMAmevt","Version":"1.0.1","Title":"Multivariate Extremes: Bayesian Estimation of the Spectral\nMeasure","Description":"Toolkit for Bayesian estimation of the dependence structure\n in Multivariate Extreme Value parametric models.","Published":"2017-03-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bmd","Version":"0.5","Title":"Benchmark dose analysis for dose-response data","Description":"Benchmark dose analysis for continuous and quantal\n dose-response data.","Published":"2012-03-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bmem","Version":"1.5","Title":"Mediation analysis with missing data using bootstrap","Description":"Four methods for mediation analysis with missing data: Listwise deletion, Pairwise deletion, Multiple imputation, and Two Stage Maximum Likelihood algorithm. For MI and TS-ML, auxiliary variables can be included. Bootstrap confidence intervals for mediation effects are obtained. The robust method is also implemented for TS-ML. Since version 1.4, bmem adds the capability to conduct power analysis for mediation models.","Published":"2013-10-08","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"bmeta","Version":"0.1.2","Title":"Bayesian Meta-Analysis and Meta-Regression","Description":"Provides a collection of functions for conducting meta-analyses under Bayesian context in R. The package includes functions for computing various effect size or outcome measures (e.g. odds ratios, mean difference and incidence rate ratio) for different types of data based on MCMC simulations. Users are allowed to fit fixed- and random-effects models with different priors to the data. Meta-regression can be carried out if effects of additional covariates are observed. Furthermore, the package provides functions for creating posterior distribution plots and forest plot to display main model output. Traceplots and some other diagnostic plots are also available for assessing model fit and performance.","Published":"2016-01-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BMhyd","Version":"1.2-8","Title":"PCM for Hybridization","Description":"The BMhyd package analyzes the phenotypic evolution of species of hybrid origin on a phylogenetic network. This package can detect the hybrid vigor effect, a burst of variation at formation, and the relative portion of heritability from its parents. Parameters are estimated by maximum likelihood. Users need to enter a comparative data set, a phylogeny, and information on gene flow leading to hybrids. ","Published":"2015-08-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BMisc","Version":"1.0.1","Title":"Miscellaneous Functions for Panel Data, Quantiles, and Printing\nResults","Description":"These are miscellaneous functions for working with panel data, quantiles, and printing results. For panel data, the package includes functions for making a panel data balanced (that is, dropping missing individuals that have missing observations in any time period), converting id numbers to row numbers, and to treat repeated cross sections as panel data under the assumption of rank invariance. For quantiles, there are functions to make ecdf functions from a set of data points (this is particularly useful when a distribution function is created in several steps) and to combine distribution functions based on some external weights; these distribution functions can easily be inverted to obtain quantiles. Finally, there are several other miscellaneous functions for obtaining weighted means, weighted distribution functions, and weighted quantiles; to generate summary statistics and their differences for two groups; and to drop covariates from formulas.","Published":"2017-06-15","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"Bmix","Version":"0.6","Title":"Bayesian Sampling for Stick-Breaking Mixtures","Description":"This is a bare-bones implementation of sampling algorithms\n for a variety of Bayesian stick-breaking (marginally DP)\n mixture models, including particle learning and Gibbs sampling\n for static DP mixtures, particle learning for dynamic BAR\n stick-breaking, and DP mixture regression. The software is\n designed to be easy to customize to suit different situations\n and for experimentation with stick-breaking models. Since\n particles are repeatedly copied, it is not an especially\n efficient implementation.","Published":"2016-02-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bmixture","Version":"0.5","Title":"Bayesian Estimation for Finite Mixture of Distributions","Description":"Provides statistical tools for Bayesian estimation for finite mixture of distributions, mainly mixture of Gamma, Normal and t-distributions. The package is implemented the recent improvements in Bayesian literature for the finite mixture of distributions, including Mohammadi and et al. (2013) and Mohammadi and Salehi-Rad (2012) .","Published":"2017-05-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bmk","Version":"1.0","Title":"MCMC diagnostics package","Description":"MCMC diagnostic package that contains tools to diagnose\n convergence as well as to evaluate sensitivity studies,\n Includes summary functions which output mean, median,\n 95percentCI, Gelman & Rubin diagnostics and the Hellinger\n distance based diagnostics, Also contains functions to\n determine when an MCMC chain has converged via Hellinger\n distance, A function is also provided to compare outputs from\n identically dimensioned chains for determining sensitivy to\n prior distribution assumptions","Published":"2012-10-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bmlm","Version":"1.3.0","Title":"Bayesian Multilevel Mediation","Description":"Easy estimation of Bayesian multilevel mediation models with Stan.","Published":"2017-06-23","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"bmmix","Version":"0.1-2","Title":"Bayesian multinomial mixture","Description":"Bayesian multinomial mixture model ","Published":"2014-05-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BMN","Version":"1.02","Title":"The pseudo-likelihood method for pairwise binary markov networks","Description":"This package implements approximate and exact methods for\n pairwise binary markov models. The exact method uses an\n implementation of the junction tree algorithm for binary\n graphical models. For more details see the help files","Published":"2010-04-25","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"bmp","Version":"0.2","Title":"Read Windows Bitmap (BMP) images","Description":"Reads Windows BMP format images. Currently limited to 8 bit\n greyscale images and 24,32 bit (A)RGB images. Pure R implementation without\n external dependencies.","Published":"2013-08-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bmrm","Version":"3.3","Title":"Bundle Methods for Regularized Risk Minimization Package","Description":"Bundle methods for minimization of convex and non-convex risk\n under L1 or L2 regularization. Implements the algorithm proposed by Teo et\n al. (JMLR 2010) as well as the extension proposed by Do and Artieres (JMLR\n 2012). The package comes with lot of loss functions for machine learning\n which make it powerful for big data analysis. The applications includes:\n structured prediction, linear SVM, multi-class SVM, f-beta optimization,\n ROC optimization, ordinal regression, quantile regression,\n epsilon insensitive regression, least mean square, logistic regression,\n least absolute deviation regression (see package examples), etc... all with\n L1 and L2 regularization.","Published":"2017-05-19","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"BMRV","Version":"1.32","Title":"Bayesian Models for Rare Variant Association Analysis","Description":"Provides two Bayesian models for detecting the association between rare genetic variants and a trait that can be continuous, ordinal or binary. Bayesian latent variable collapsing model (BLVCM) detects interaction effect and is dedicated to twin design while it can also be applied to independent samples. Hierarchical Bayesian multiple regression model (HBMR) incorporates genotype uncertainty information and can be applied to either independent or family samples. Furthermore, it deals with continuous, binary and ordinal traits.","Published":"2016-11-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BMS","Version":"0.3.4","Title":"Bayesian Model Averaging Library","Description":"Bayesian model averaging for linear models with a wide choice of (customizable) priors. Built-in priors include coefficient priors (fixed, flexible and hyper-g priors), 5 kinds of model priors, moreover model sampling by enumeration or various MCMC approaches. Post-processing functions allow for inferring posterior inclusion and model probabilities, various moments, coefficient and predictive densities. Plotting functions available for posterior model size, MCMC convergence, predictive and coefficient densities, best models representation, BMA comparison.","Published":"2015-11-24","License":"Artistic-2.0","snapshot_date":"2017-06-23"}
{"Package":"BNDataGenerator","Version":"1.0","Title":"Data Generator based on Bayesian Network Model","Description":"Data generator based on Bayesian network model","Published":"2014-12-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bnlearn","Version":"4.1.1","Title":"Bayesian Network Structure Learning, Parameter Learning and\nInference","Description":"Bayesian network structure learning, parameter learning and\n inference.\n This package implements constraint-based (GS, IAMB, Inter-IAMB, Fast-IAMB,\n MMPC, Hiton-PC), pairwise (ARACNE and Chow-Liu), score-based (Hill-Climbing\n and Tabu Search) and hybrid (MMHC and RSMAX2) structure learning algorithms\n for discrete, Gaussian and conditional Gaussian networks, along with many\n score functions and conditional independence tests.\n The Naive Bayes and the Tree-Augmented Naive Bayes (TAN) classifiers are\n also implemented.\n Some utility functions (model comparison and manipulation, random data\n generation, arc orientation testing, simple and advanced plots) are\n included, as well as support for parameter estimation (maximum likelihood\n and Bayesian) and inference, conditional probability queries and\n cross-validation. Development snapshots with the latest bugfixes are\n available from .","Published":"2017-03-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bnnSurvival","Version":"0.1.5","Title":"Bagged k-Nearest Neighbors Survival Prediction","Description":"Implements a bootstrap aggregated (bagged) version of\n the k-nearest neighbors survival probability prediction method (Lowsky et\n al. 2013). In addition to the bootstrapping of training samples, the\n features can be subsampled in each baselearner to break the correlation\n between them. The Rcpp package is used to speed up the computation.","Published":"2017-05-10","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"bnormnlr","Version":"1.0","Title":"Bayesian Estimation for Normal Heteroscedastic Nonlinear\nRegression Models","Description":"Implementation of Bayesian estimation in normal heteroscedastic nonlinear regression Models following Cepeda-Cuervo, (2001).","Published":"2014-12-10","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"BNPdensity","Version":"2017.03","Title":"Ferguson-Klass Type Algorithm for Posterior Normalized Random\nMeasures","Description":"Bayesian nonparametric density estimation modeling mixtures by a Ferguson-Klass type algorithm for posterior normalized random measures.","Published":"2017-03-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BNPMIXcluster","Version":"0.2.0","Title":"Bayesian Nonparametric Model for Clustering with Mixed Scale\nVariables","Description":"Bayesian nonparametric approach for clustering that is capable to combine different types of variables (continuous, ordinal and nominal) and also accommodates for different sampling probabilities in a complex survey design. The model is based on a location mixture model with a Poisson-Dirichlet process prior on the location parameters of the associated latent variables. The package performs the clustering model described in Carmona, C., Nieto-Barajas, L. E., Canale, A. (2016) .","Published":"2017-02-01","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"bnpmr","Version":"1.1","Title":"Bayesian monotonic nonparametric regression","Description":"Implements the Bayesian nonparametric monotonic regression\n method described in Bornkamp & Ickstadt (2009), Biometrics, 65,\n 198-205.","Published":"2013-05-03","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"BNPTSclust","Version":"1.1","Title":"A Bayesian Nonparametric Algorithm for Time Series Clustering","Description":"Performs the algorithm for time series clustering described in Nieto-Barajas and Contreras-Cristan (2014).","Published":"2015-04-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BNSL","Version":"0.1.2","Title":"Bayesian Network Structure Learning","Description":"From a given data frame, this package learns its Bayesian network structure based on a selected score.","Published":"2017-06-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BNSP","Version":"1.1.1","Title":"Bayesian Non- And Semi-Parametric Model Fitting","Description":"MCMC for Dirichlet process mixtures.","Published":"2017-02-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bnspatial","Version":"1.0.1","Title":"Spatial Implementation of Bayesian Networks and Mapping","Description":"Package for the spatial implementation of Bayesian Networks and mapping in geographical space. It makes maps of expected value (or most likely state) given known and unknown conditions, maps of uncertainty measured as coefficient of variation or Shannon index (entropy), maps of probability associated to any states of any node of the network. Some additional features are provided as well: parallel processing options, data discretization routines and function wrappers designed for users with minimal knowledge of the R language. Outputs can be exported to any common GIS format. Development was funded by the European Union FP7 (2007-2013), under project ROBIN (agreement 283093).","Published":"2016-12-14","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"bnstruct","Version":"1.0.2","Title":"Bayesian Network Structure Learning from Data with Missing\nValues","Description":"Bayesian Network Structure Learning from Data with Missing Values.\n The package implements the Silander-Myllymaki complete search,\n the Max-Min Parents-and-Children, the Hill-Climbing, the\n Max-Min Hill-climbing heuristic searches, and the Structural\n Expectation-Maximization algorithm. Available scoring functions are\n BDeu, AIC, BIC. The package also implements methods for generating and using\n bootstrap samples, imputed data, inference.","Published":"2016-12-13","License":"GPL (>= 2) | file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"boa","Version":"1.1.8-2","Title":"Bayesian Output Analysis Program (BOA) for MCMC","Description":"A menu-driven program and library of functions for carrying out\n convergence diagnostics and statistical and graphical analysis of Markov\n chain Monte Carlo sampling output.","Published":"2016-06-23","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"BoardGames","Version":"1.0.0","Title":"Board Games and Tools for Building Board Games","Description":"Tools for constructing board/grid based games, as well as readily available game(s) for your entertainment.","Published":"2016-07-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bodenmiller","Version":"0.1","Title":"Profilling of Peripheral Blood Mononuclear Cells using CyTOF","Description":"This data package contains a subset of the Bodenmiller et al, Nat Biotech 2012 dataset for testing single cell, high dimensional analysis and visualization methods.","Published":"2015-12-18","License":"CC0","snapshot_date":"2017-06-23"}
{"Package":"BOG","Version":"2.0","Title":"Bacterium and Virus Analysis of Orthologous Groups (BOG) is a\nPackage for Identifying Differentially Regulated Genes in the\nLight of Gene Functions","Description":"An implementation of three statistical tests for identification of COG (Cluster of Orthologous Groups) that are over represented among genes that show differential expression under conditions. It also provides tabular and graphical summaries of the results for easy visualisation and presentation. ","Published":"2015-03-03","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"boilerpipeR","Version":"1.3","Title":"Interface to the Boilerpipe Java Library","Description":"Generic Extraction of main text content from HTML files; removal\n of ads, sidebars and headers using the boilerpipe \n (http://code.google.com/p/boilerpipe/) Java library. The\n extraction heuristics from boilerpipe show a robust performance for a wide\n range of web site templates.","Published":"2015-05-11","License":"Apache License (== 2.0)","snapshot_date":"2017-06-23"}
{"Package":"BOIN","Version":"2.4","Title":"Bayesian Optimal INterval (BOIN) Design for Single-Agent and\nDrug- Combination Phase I Clinical Trials","Description":"The Bayesian optimal interval (BOIN) design is a novel phase I\n clinical trial design for finding the maximum tolerated dose (MTD). It can be\n used to design both single-agent and drug-combination trials. The BOIN design\n is motivated by the top priority and concern of clinicians when testing a new\n drug, which is to effectively treat patients and minimize the chance of exposing\n them to subtherapeutic or overly toxic doses. The prominent advantage of the\n BOIN design is that it achieves simplicity and superior performance at the same\n time. The BOIN design is algorithm-based and can be implemented in a simple\n way similar to the traditional 3+3 design. The BOIN design yields an average\n performance that is comparable to that of the continual reassessment method\n (CRM, one of the best model-based designs) in terms of selecting the MTD, but\n has a substantially lower risk of assigning patients to subtherapeutic or overly\n toxic doses.","Published":"2016-08-09","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"bold","Version":"0.4.0","Title":"Interface to Bold Systems 'API'","Description":"A programmatic interface to the Web Service methods provided by\n Bold Systems for genetic 'barcode' data. Functions include methods for\n searching by sequences by taxonomic names, ids, collectors, and\n institutions; as well as a function for searching for specimens, and\n downloading trace files.","Published":"2017-01-06","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"Bolstad","Version":"0.2-34","Title":"Functions for Elementary Bayesian Inference","Description":"A set of R functions and data sets for the book Introduction to Bayesian Statistics, Bolstad, W.M. (2017), John Wiley & Sons ISBN 978-1-118-09156-2.","Published":"2017-03-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"Bolstad2","Version":"1.0-28","Title":"Bolstad functions","Description":"A set of R functions and data sets for the book\n Understanding Computational Bayesian Statistics, Bolstad, W.M.\n (2009), John Wiley & Sons ISBN 978-0470046098","Published":"2013-06-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BonEV","Version":"1.0","Title":"An Improved Multiple Testing Procedure for Controlling False\nDiscovery Rates","Description":"An improved multiple testing procedure for controlling false discovery rates which is developed based on the Bonferroni procedure with integrated estimates from the Benjamini-Hochberg procedure and the Storey's q-value procedure. It controls false discovery rates through controlling the expected number of false discoveries.","Published":"2016-02-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bookdown","Version":"0.4","Title":"Authoring Books and Technical Documents with R Markdown","Description":"Output formats and utilities for authoring books and technical documents with R Markdown.","Published":"2017-05-20","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"bookdownplus","Version":"1.0.2","Title":"Generate Varied Books and Documents with R 'bookdown' Package","Description":"A collection and selector of R 'bookdown' templates. 'bookdownplus' helps you write academic journal articles, guitar books, chemical equations, mails, calendars, and diaries. R 'bookdownplus' extends the features of 'bookdown', and simplifies the procedure. Users only have to choose a template, clarify the book title and author name, and then focus on writing the text. No need to struggle in YAML and LaTeX.","Published":"2017-06-21","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"boolean3","Version":"3.1.6","Title":"Boolean Binary Response Models","Description":"This package implements a\n partial-observability procedure for testing Boolean\n hypotheses that generalizes the binary response GLM as\n outlined in Braumoeller (2003).","Published":"2014-11-18","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"BoolFilter","Version":"1.0.0","Title":"Optimal Estimation of Partially Observed Boolean Dynamical\nSystems","Description":"Tools for optimal and approximate state estimation as well as\n network inference of Partially-Observed Boolean Dynamical Systems.","Published":"2017-01-09","License":"Artistic-2.0","snapshot_date":"2017-06-23"}
{"Package":"BoolNet","Version":"2.1.3","Title":"Construction, Simulation and Analysis of Boolean Networks","Description":"Provides methods to reconstruct and generate synchronous,\n asynchronous, probabilistic and temporal Boolean networks, and to\n analyze and visualize attractors in Boolean networks.","Published":"2016-11-21","License":"Artistic-2.0","snapshot_date":"2017-06-23"}
{"Package":"Boom","Version":"0.7","Title":"Bayesian Object Oriented Modeling","Description":"A C++ library for Bayesian modeling, with an emphasis on\n Markov chain Monte Carlo. Although boom contains a few R utilities\n (mainly plotting functions), its primary purpose is to install the\n BOOM C++ library on your system so that other packages can link\n against it.","Published":"2017-05-28","License":"LGPL-2.1 | file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"BoomSpikeSlab","Version":"0.9.0","Title":"MCMC for Spike and Slab Regression","Description":"Spike and slab regression a la McCulloch and George (1997).","Published":"2017-05-28","License":"LGPL-2.1 | file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"boostmtree","Version":"1.1.0","Title":"Boosted Multivariate Trees for Longitudinal Data","Description":"Implements Friedman's gradient descent boosting algorithm for longitudinal data using multivariate tree base learners. A time-covariate interaction effect is modeled using penalized B-splines (P-splines) with estimated adaptive smoothing parameter.","Published":"2016-04-12","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"boostr","Version":"1.0.0","Title":"A modular framework to bag or boost any estimation procedure","Description":"boostr provides a modular framework that return the focus of\n ensemble learning back to 'learning' (instead of programming).","Published":"2014-05-17","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"boostSeq","Version":"1.0","Title":"Optimized GWAS cohort subset selection for resequencing studies","Description":"This package contains functionality to select a subsample\n of a genotyped cohort e.g. from a GWAS that is preferential for\n resequencing under the assumtion that causal variants share a\n haplotype with the risk allele of associated variants. The\n subsample is selected such that is contains risk alleles at\n maximum frequency for all SNPs specified. Phentoypes can also\n be included as additional variables to obtain a higher fraction\n of extreme phenotypes. An arbitrary number of SNPs and/or\n phentoypes can be specified for enrichment in a single\n subsample.","Published":"2012-08-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"boot","Version":"1.3-19","Title":"Bootstrap Functions (Originally by Angelo Canty for S)","Description":"Functions and datasets for bootstrapping from the\n book \"Bootstrap Methods and Their Application\" by A. C. Davison and \n D. V. Hinkley (1997, CUP), originally written by Angelo Canty for S.","Published":"2017-04-21","License":"Unlimited","snapshot_date":"2017-06-23"}
{"Package":"bootES","Version":"1.2","Title":"Bootstrap Effect Sizes","Description":"Calculate robust measures of effect sizes using the bootstrap.","Published":"2015-08-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bootLR","Version":"1.0","Title":"Bootstrapped Confidence Intervals for (Negative) Likelihood\nRatio Tests","Description":"Computes appropriate confidence intervals for the likelihood ratio tests commonly used in medicine/epidemiology. It is particularly useful when the sensitivity or specificity in the sample is 100%. Note that this does not perform the test on nested models--for that, see 'epicalc::lrtest'.","Published":"2015-07-13","License":"LGPL-2.1","snapshot_date":"2017-06-23"}
{"Package":"BootMRMR","Version":"0.1","Title":"Bootstrap-MRMR Technique for Informative Gene Selection","Description":"Selection of informative features like genes, transcripts, RNA seq, etc. using Bootstrap Maximum Relevance and Minimum Redundancy technique from a given high dimensional genomic dataset. Informative gene selection involves identification of relevant genes and removal of redundant genes as much as possible from a large gene space. Main applications in high-dimensional expression data analysis (e.g. microarray data, NGS expression data and other genomics and proteomics applications).","Published":"2016-09-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bootnet","Version":"1.0.0","Title":"Bootstrap Methods for Various Network Estimation Routines","Description":"Bootstrap methods to assess accuracy and stability of estimated network structures\n and centrality indices. Allows for flexible specification of any undirected network \n estimation procedure in R, and offers default sets for 'qgraph', 'IsingFit', 'IsingSampler',\n 'glasso', 'huge' and 'parcor' packages.","Published":"2017-05-04","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"BootPR","Version":"0.60","Title":"Bootstrap Prediction Intervals and Bias-Corrected Forecasting","Description":"Bias-Corrected Forecasting and Bootstrap Prediction Intervals for Autoregressive Time Series","Published":"2014-04-12","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"bootRes","Version":"1.2.3","Title":"Bootstrapped Response and Correlation Functions","Description":"Calculation of Bootstrapped Response and Correlation\n Functions for Use in Dendroclimatology","Published":"2012-11-27","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"bootruin","Version":"1.2-4","Title":"A Bootstrap Test for the Probability of Ruin in the Classical\nRisk Process","Description":"We provide a framework for testing the probability of ruin in the classical (compound Poisson) risk process. It also includes some procedures for assessing and comparing the performance between the bootstrap test and the test using asymptotic normality.","Published":"2016-12-30","License":"AGPL-3","snapshot_date":"2017-06-23"}
{"Package":"bootspecdens","Version":"3.0","Title":"Testing equality of spectral densities","Description":"Bootstrap for testing the hypothesis that the spectral\n densities of a number m, m>=2, not necessarily independent time\n series are equal","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bootsPLS","Version":"1.0.3","Title":"Bootstrap Subsamplings of Sparse Partial Least Squares -\nDiscriminant Analysis for Classification and Signature\nIdentification","Description":"Applicable to any classification problem with more than 2 classes. It relies on bootstrap subsamplings of sPLS-DA and provides tools to select the most stable variables (defined as the ones consistently selected over the bootstrap subsamplings) and to predict the class of test samples.","Published":"2015-08-20","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"bootStepAIC","Version":"1.2-0","Title":"Bootstrap stepAIC","Description":"Model selection by bootstrapping the stepAIC() procedure.","Published":"2009-06-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bootstrap","Version":"2017.2","Title":"Functions for the Book \"An Introduction to the Bootstrap\"","Description":"Software (bootstrap, cross-validation, jackknife) and data\n for the book \"An Introduction to the Bootstrap\" by B. Efron and\n R. Tibshirani, 1993, Chapman and Hall. This package is\n primarily provided for projects already based on it, and for\n support of the book. New projects should preferentially use the\n recommended package \"boot\".","Published":"2017-02-27","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"bootSVD","Version":"0.5","Title":"Fast, Exact Bootstrap Principal Component Analysis for High\nDimensional Data","Description":"Implements fast, exact bootstrap Principal Component Analysis and\n Singular Value Decompositions for high dimensional data, as described in\n . For data matrices that are too large to operate\n on in memory, users can input objects with class 'ff' (see the 'ff'\n package), where the actual data is stored on disk. In response, this\n package will implement a block matrix algebra procedure for calculating the\n principal components (PCs) and bootstrap PCs. Depending on options set by\n the user, the 'parallel' package can be used to parallelize the calculation of\n the bootstrap PCs.","Published":"2015-06-02","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"boottol","Version":"2.0","Title":"Bootstrap Tolerance Levels for Credit Scoring Validation\nStatistics","Description":"Used to create bootstrap tolerance levels for the Kolmogorov-Smirnov (KS) statistic, the area under receiver operator characteristic curve (AUROC) statistic, and the Gini coefficient for each score cutoff. Also provides a bootstrap alternative to the Vasicek test.","Published":"2015-03-13","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"BootWPTOS","Version":"1.2","Title":"Test Stationarity using Bootstrap Wavelet Packet Tests","Description":"Provides significance tests for second-order stationarity\n\tfor time series using bootstrap wavelet packet tests.","Published":"2016-06-30","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"boral","Version":"1.3.1","Title":"Bayesian Ordination and Regression AnaLysis","Description":"Bayesian approaches for analyzing multivariate data in ecology. Estimation is performed using Markov Chain Monte Carlo (MCMC) methods via JAGS. Three types of models may be fitted: 1) With explanatory variables only, boral fits independent column Generalized Linear Models (GLMs) to each column of the response matrix; 2) With latent variables only, boral fits a purely latent variable model for model-based unconstrained ordination; 3) With explanatory and latent variables, boral fits correlated column GLMs with latent variables to account for any residual correlation between the columns of the response matrix. ","Published":"2017-04-18","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"Boruta","Version":"5.2.0","Title":"Wrapper Algorithm for All Relevant Feature Selection","Description":"An all relevant feature selection wrapper algorithm.\n It finds relevant features by comparing original attributes'\n importance with importance achievable at random, estimated\n using their permuted copies.","Published":"2017-01-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BoSSA","Version":"2.1","Title":"A Bunch of Structure and Sequence Analysis","Description":"Reads and plots phylogenetic placements obtained using the 'pplacer' and 'guppy' softwares .","Published":"2017-05-09","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"bossMaps","Version":"0.1.0","Title":"Convert Binary Species Range Maps into Continuous Surfaces Based\non Distance to Range Edge","Description":"Contains functions to convert binary (presence-absence) expert species range maps (like those found in a field guide) into continuous surfaces based on distance to range edge. These maps can then be used in species distribution models such as Maximum Entropy (Phillips 2008 ) using additional information (such as point occurrence data) to refine the expert map.","Published":"2016-12-30","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"boussinesq","Version":"1.0.3","Title":"Analytic Solutions for (ground-water) Boussinesq Equation","Description":"This package is a collection of R functions implemented\n from published and available analytic solutions for the\n One-Dimensional Boussinesq Equation (ground-water). In\n particular, the function \"beq.lin\" is the analytic solution of\n the linearized form of Boussinesq Equation between two\n different head-based boundary (Dirichlet) conditions;\n \"beq.song\" is the non-linear power-series analytic solution of\n the motion of a wetting front over a dry bedrock (Song at al,\n 2007, see complete reference on function documentation).\n Bugs/comments/questions/collaboration of any kind are warmly\n welcomed.","Published":"2013-04-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"boxoffice","Version":"0.1.1","Title":"Downloads Box Office Information for Given Dates (How Much Each\nMovie Earned in Theaters)","Description":"Download daily box office information (how much each movie earned\n in theaters) using data from either Box Office Mojo () or\n The Numbers ().","Published":"2016-08-20","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"boxplotdbl","Version":"1.2.2","Title":"Double Box Plot for Two-Axes Correlation","Description":"Correlation chart of two set (x and y) of data. \n Using Quartiles with boxplot style. \n Visualize the effect of factor. ","Published":"2013-11-24","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"boxr","Version":"0.3.4","Title":"Interface for the 'Box.com API'","Description":"An R interface for the remote file hosting service 'Box' \n (). In addition to uploading and downloading files,\n this package includes functions which mirror base R operations for local \n files, (e.g. box_load(), box_save(), box_read(), box_setwd(), etc.), as well\n as 'git' style functions for entire directories (e.g. box_fetch(), \n box_push()).","Published":"2017-01-12","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"bpa","Version":"0.1.1","Title":"Basic Pattern Analysis","Description":"Run basic pattern analyses on character sets, digits, or combined\n input containing both characters and numeric digits. Useful for data\n cleaning and for identifying columns containing multiple or nonstandard\n formats.","Published":"2016-04-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bpca","Version":"1.2-2","Title":"Biplot of Multivariate Data Based on Principal Components\nAnalysis","Description":"Implements biplot (2d and 3d) of multivariate data based\n on principal components analysis and diagnostic tools of the quality of the reduction.","Published":"2013-11-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bpcp","Version":"1.3.4","Title":"Beta Product Confidence Procedure for Right Censored Data","Description":"Calculates nonparametric pointwise confidence intervals for the survival distribution for right censored data. Has two-sample tests for dissimilarity (e.g., difference, ratio or odds ratio) in survival at a fixed time. Especially important for small sample sizes or heavily censored data. Includes mid-p options.","Published":"2016-06-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bPeaks","Version":"1.2","Title":"bPeaks: an intuitive peak-calling strategy to detect\ntranscription factor binding sites from ChIP-seq data in small\neukaryotic genomes","Description":"bPeaks is a simple approach to identify transcription factor binding sites from ChIP-seq data. Our general philosophy is to provide an easy-to-use tool, well-adapted for small eukaryotic genomes (< 20 Mb). bPeaks uses a combination of 4 cutoffs (T1, T2, T3 and T4) to mimic \"good peak\" properties as described by biologists who visually inspect the ChIP-seq data on a genome browser. For yeast genomes, bPeaks calculates the proportion of peaks that fall in promoter sequences. These peaks are good candidates as transcription factor binding sites. ","Published":"2014-02-28","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"BPEC","Version":"1.0","Title":"Bayesian Phylogeographic and Ecological Clustering","Description":"Model-based clustering for phylogeographic data comprising mtDNA sequences and geographical locations along with optional environmental characteristics, aiming to identify migration events that led to homogeneous population clusters. ","Published":"2016-04-06","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"bpkde","Version":"1.0-7","Title":"Back-Projected Kernel Density Estimation","Description":"Nonparametric multivariate kernel density \\\n estimation using a back-projected kernel.","Published":"2014-09-09","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"bpp","Version":"1.0.0","Title":"Computations Around Bayesian Predictive Power","Description":"Implements functions to update Bayesian Predictive Power Computations after not stopping a clinical trial at an interim analysis. Such an interim analysis can either be blinded or unblinded. Code is provided for Normally distributed endpoints with known variance, with a prominent example being the hazard ratio.","Published":"2016-12-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bqtl","Version":"1.0-32","Title":"Bayesian QTL Mapping Toolkit","Description":"QTL mapping toolkit for inbred crosses and recombinant\n inbred lines. Includes maximum likelihood and Bayesian tools.","Published":"2016-01-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BradleyTerry2","Version":"1.0-6","Title":"Bradley-Terry Models","Description":"Specify and fit the Bradley-Terry model, including structured versions in which the parameters are related to explanatory variables through a linear predictor and versions with contest-specific effects, such as a home advantage.","Published":"2015-02-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"braidReports","Version":"0.5.3","Title":"Visualize Combined Action Response Surfaces and Report BRAID\nAnalyses","Description":"Provides functions to generate, format, and style surface plots for visualizing combined action data. Also provides functions for reporting on a BRAID analysis, including plotting curve-shifts, calculating IAE values, and producing full BRAID analysis reports.","Published":"2016-04-08","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"braidrm","Version":"0.71","Title":"Fitting Dose Response with the BRAID Combined Action Model","Description":"Contains functions for evaluating, analyzing, and fitting combined action dose response surfaces with the Bivariate Response to Additive Interacting Dose (BRAID) model of combined action.","Published":"2016-03-08","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"BrailleR","Version":"0.24.2","Title":"Improved Access for Blind Users","Description":"Blind users do not have access to the graphical output from R\n without printing the content of graphics windows to an embosser of some kind. This\n is not as immediate as is required for efficient access to statistical output.\n The functions here are created so that blind people can make even better use\n of R. This includes the text descriptions of graphs, convenience functions\n to replace the functionality offered in many GUI front ends, and experimental\n functionality for optimising graphical content to prepare it for embossing as\n tactile images.","Published":"2016-03-30","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"brainGraph","Version":"1.0.0","Title":"Graph Theory Analysis of Brain MRI Data","Description":"A set of tools for performing graph theory analysis of brain MRI\n data. It is best suited to data from a Freesurfer analysis (cortical\n thickness, volumes, local gyrification index, surface area), but also works\n with e.g., tractography data from FSL and fMRI data from DPABI. It contains\n a graphical user interface for graph visualization and data exploration and\n several functions for generating useful figures.","Published":"2017-04-10","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"brainR","Version":"1.2","Title":"Helper functions to misc3d and rgl packages for brain imaging","Description":"This includes functions for creating 3D and 4D images using WebGL, RGL, and JavaScript Commands. This package relies on the X ToolKit (XTK, https://github.com/xtk/X#readme). ","Published":"2014-03-18","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"brainwaver","Version":"1.6","Title":"Basic wavelet analysis of multivariate time series with a\nvisualisation and parametrisation using graph theory","Description":"This package computes the correlation matrix for each\n scale of a wavelet decomposition, namely the one performed by\n the R package waveslim (Whitcher, 2000). An hypothesis test is\n applied to each entry of one matrix in order to construct an\n adjacency matrix of a graph. The graph obtained is finally\n analysed using the small-world theory (Watts and Strogatz,\n 1998) and using the computation of efficiency (Latora, 2001),\n tested using simulated attacks. The brainwaver project is\n complementary to the camba project for brain-data\n preprocessing. A collection of scripts (with a makefile) is\n avalaible to download along with the brainwaver package, see\n information on the webpage mentioned below.","Published":"2012-12-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"Branching","Version":"0.9.4","Title":"Simulation and Estimation for Branching Processes","Description":"Simulation and parameter estimation of multitype Bienayme - Galton - Watson processes.","Published":"2016-12-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"brant","Version":"0.1-3","Title":"Test for Parallel Regression Assumption","Description":"Tests the parallel regression assumption for ordinal logit models generated with the function polr() from the package 'MASS'.","Published":"2017-05-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"braQCA","Version":"0.9.9.6","Title":"Bootstrapped Robustness Assessment for Qualitative Comparative\nAnalysis","Description":"Test the robustness of a user's Qualitative Comparative Analysis\n solutions to randomness, using the bootstrapped assessment: baQCA(). This\n package also includes a function that provides recommendations for improving\n solutions to reach typical significance levels: brQCA(). After applying recommendations \n from brQCA(), QCAdiff() shows which cases are excluded from the final result.","Published":"2017-02-05","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"brea","Version":"0.1.0","Title":"Bayesian Recurrent Event Analysis","Description":"A function to produce MCMC samples for posterior inference in semiparametric Bayesian discrete time competing risks recurrent events models.","Published":"2016-10-06","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"breakage","Version":"1.1-1","Title":"SICM pipette tip geometry estimation","Description":"Estimates geometry of SICM pipette tips by fitting a physical model to recorded breakage-current data.","Published":"2014-12-08","License":"Artistic-2.0","snapshot_date":"2017-06-23"}
{"Package":"breakaway","Version":"3.0","Title":"Species Richness Estimation and Modeling","Description":"Species richness estimation is an important problem in biodiversity analysis. This package provides methods for total species richness estimation (observed plus unobserved) and a method for modelling total diversity with covariates. breakaway() estimates total (observed plus unobserved) species richness. Microbial diversity datasets are characterized by a large number of rare species and a small number of highly abundant species. The class of models implemented by breakaway() is flexible enough to model both these features. breakaway_nof1() implements a similar procedure however does not require a singleton count. betta() provides a method for modelling total diversity with covariates in a way that accounts for its estimated nature and thus accounts for unobserved taxa, and betta_random() permits random effects modelling.","Published":"2016-03-30","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"breakfast","Version":"0.1.0","Title":"Multiple Change-Point Detection and Segmentation","Description":"Performs multiple change-point detection in data sequences, or data sequence\n segmentation, using computationally efficient multiscale methods. This version only\n implements the \"Tail-Greedy Unbalanced Haar\" change-point detection methodology; more\n methods will be added in future versions. To start with, see the function\n segment.mean.","Published":"2017-05-26","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"breakpoint","Version":"1.2","Title":"An R Package for Multiple Break-Point Detection via the\nCross-Entropy Method","Description":"Implements the Cross-Entropy (CE) method, which is a model based stochastic optimization technique to estimate both the number and their corresponding locations of break-points in continuous and discrete measurements (Priyadarshana and Sofronov (2015), Priyadarshana and Sofronov (2012a), Priyadarshana and Sofronov (2012b)).","Published":"2016-01-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"breathtestcore","Version":"0.3.0","Title":"Core Functions to Read and Fit 13c Time Series from Breath Tests","Description":"Reads several formats of 13C data (IRIS/Wagner, BreathID) and CSV.\n Creates artificial sample data for testing. \n Fits Maes/Ghoos, Bluck-Coward self-correcting formula using 'nls', 'nlme'.\n See Bluck L J C and Coward W A 2006 .\n This package contains a refactored subset of github package \n 'dmenne/d13cbreath' without database and display functions. Methods to \n fit breath test curves with Bayesian Stan methods are refactored to \n github package 'dmenne/breathteststan'. For a Shiny GUI, see \n package 'dmenne/breathtestshiny'.","Published":"2017-05-11","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"breathteststan","Version":"0.3.0","Title":"Stan-Based Fit to Gastric Emptying Curves","Description":"Stan-based curve-fitting function\n for use with package 'breathtestcore' by the same author.\n Stan functions are refactored here for easier testing.","Published":"2017-05-13","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"bReeze","Version":"0.4-0","Title":"Functions for wind resource assessment","Description":"A collection of functions to analyse, visualize and interpret wind data\n and to calculate the potential energy production of wind turbines.","Published":"2014-09-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"brew","Version":"1.0-6","Title":"Templating Framework for Report Generation","Description":"brew implements a templating framework for mixing text and\n R code for report generation. brew template syntax is similar\n to PHP, Ruby's erb module, Java Server Pages, and Python's psp\n module.","Published":"2011-04-13","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"brewdata","Version":"0.4","Title":"Extracting Usable Data from the Grad Cafe Results Search","Description":"Retrieves and parses graduate admissions survey data from the Grad Cafe website (http://thegradcafe.com).","Published":"2015-01-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"brglm","Version":"0.5-9","Title":"Bias reduction in binomial-response generalized linear models","Description":"Fit generalized linear models with binomial responses using either an adjusted-score approach to bias reduction or maximum penalized likelihood where penalization is by Jeffreys invariant prior. These procedures return estimates with improved frequentist properties (bias, mean squared error) that are always finite even in cases where the maximum likelihood estimates are infinite (data separation). Fitting takes place by fitting generalized linear models on iteratively updated pseudo-data. The interface is essentially the same as 'glm'. More flexibility is provided by the fact that custom pseudo-data representations can be specified and used for model fitting. Functions are provided for the construction of confidence intervals for the reduced-bias estimates.","Published":"2013-11-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"brglm2","Version":"0.1.4","Title":"Bias Reduction in Generalized Linear Models","Description":"Estimation and inference from generalized linear models based on various methods for bias reduction. The 'brglmFit' fitting method can achieve reduction of estimation bias by solving either the mean bias-reducing adjusted score equations in Firth (1993) and Kosmidis and Firth (2009) , or the median bias-reduction adjusted score equations in Kenne et al. (2016) , or through the direct subtraction of an estimate of the bias of the maximum likelihood estimator from the maximum likelihood estimates as in Cordeiro and McCullagh (1991) . Estimation in all cases takes place via a quasi Fisher scoring algorithm, and S3 methods for the construction of of confidence intervals for the reduced-bias estimates are provided. In the special case of generalized linear models for binomial and multinomial responses, the adjusted score approaches return estimates with improved frequentist properties, that are also always finite, even in cases where the maximum likelihood estimates are infinite (e.g. complete and quasi-complete separation). 'brglm2' also provides pre-fit and post-fit methods for detecting separation and infinite maximum likelihood estimates in binomial response generalized linear models.","Published":"2017-05-23","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"}
{"Package":"bride","Version":"1.3","Title":"Brier score decomposition of probabilistic forecasts for binary\nevents","Description":"Decomposes the empirical Brier score into reliability, resolution and uncertainty. Two different estimators for the components are provided: The original estimators proposed by Murphy (1974), and the bias-corrected estimators proposed by Ferro and Fricker (2012). Sampling variances of all the components are estimated. This package applies only to probabilistic predictions of binary events.","Published":"2013-07-18","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"bridgedist","Version":"0.1.0","Title":"An Implementation of the Bridge Distribution with Logit-Link as\nin Wang and Louis (2003)","Description":"An implementation of the bridge distribution with logit-link in\n R. In Wang and Louis (2003) , such a univariate\n bridge distribution was derived as the distribution of the random intercept that\n 'bridged' a marginal logistic regression and a conditional logistic regression.\n The conditional and marginal regression coefficients are a scalar multiple\n of each other. Such is not the case if the random intercept distribution was\n Gaussian.","Published":"2016-04-22","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"bridger2","Version":"0.1.0","Title":"Genome-Wide RNA Degradation Analysis Using BRIC-Seq Data","Description":"BRIC-seq is a genome-wide approach for determining RNA stability in mammalian cells.\n This package provides a series of functions for performing quality check of your BRIC-seq data,\n calculation of RNA half-life for each transcript and comparison of RNA half-lives between two conditions.","Published":"2016-12-05","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"bridgesampling","Version":"0.1-1","Title":"Bridge Sampling for Marginal Likelihoods and Bayes Factors","Description":"Provides functions for estimating marginal likelihoods, Bayes factors,\n posterior model probabilities, and normalizing constants in general,\n via different versions of bridge sampling (Meng & Wong, 1996,\n ).","Published":"2017-03-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"briskaR","Version":"0.1.0","Title":"Biological Risk Assessment","Description":"A spatio-temporal exposure-hazard model for assessing biological\n risk and impact. The model is based on stochastic geometry for describing\n the landscape and the exposed individuals, a dispersal kernel for the\n dissemination of contaminants and an ecotoxicological equation.","Published":"2016-10-11","License":"GPL (>= 2) | file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"brlrmr","Version":"0.1.2","Title":"Bias Reduction with Missing Binary Response","Description":"Provides two main functions, il() and fil(). The il() function implements the EM algorithm developed by Ibrahim and Lipsitz (1996) to estimate the parameters of a logistic regression model with the missing response when the missing data mechanism is nonignorable. The fil() function implements the algorithm proposed by Maity et. al. (2017+) to reduce the bias produced by the method of Ibrahim and Lipsitz (1996) .","Published":"2017-06-09","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"brm","Version":"1.0","Title":"Binary Regression Model","Description":"Fits novel models for the conditional relative risk, risk difference and odds ratio.","Published":"2016-09-17","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"brms","Version":"1.7.0","Title":"Bayesian Regression Models using Stan","Description":"Fit Bayesian generalized (non-)linear multilevel models \n using Stan for full Bayesian inference. A wide range of distributions \n and link functions are supported, allowing users to fit -- among others -- \n linear, robust linear, count data, survival, response times, ordinal, \n zero-inflated, hurdle, and even self-defined mixture models all in a \n multilevel context. Further modeling options include non-linear and \n smooth terms, auto-correlation structures, censored data, meta-analytic \n standard errors, and quite a few more. In addition, all parameters of the \n response distribution can be predicted in order to perform distributional \n regression. Prior specifications are flexible and explicitly encourage \n users to apply prior distributions that actually reflect their beliefs.\n Model fit can easily be assessed and compared with posterior predictive \n checks and leave-one-out cross-validation.","Published":"2017-05-23","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"brnn","Version":"0.6","Title":"Bayesian Regularization for Feed-Forward Neural Networks","Description":"Bayesian regularization for feed-forward neural networks.","Published":"2016-01-26","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"Brobdingnag","Version":"1.2-4","Title":"Very large numbers in R","Description":"Handles very large numbers in R. Real numbers are held\n using their natural logarithms, plus a logical flag indicating\n sign. The package includes a vignette that gives a\n step-by-step introduction to using S4 methods.","Published":"2013-12-05","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"broman","Version":"0.65-4","Title":"Karl Broman's R Code","Description":"Miscellaneous R functions, including functions related to\n graphics (mostly for base graphics), permutation tests, running\n mean/median, and general utilities.","Published":"2017-05-28","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"broom","Version":"0.4.2","Title":"Convert Statistical Analysis Objects into Tidy Data Frames","Description":"Convert statistical analysis objects from R into tidy data frames,\n so that they can more easily be combined, reshaped and otherwise processed\n with tools like 'dplyr', 'tidyr' and 'ggplot2'. The package provides three\n S3 generics: tidy, which summarizes a model's statistical findings such as\n coefficients of a regression; augment, which adds columns to the original\n data such as predictions, residuals and cluster assignments; and glance, which\n provides a one-row summary of model-level statistics.","Published":"2017-02-13","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"brotli","Version":"1.0","Title":"A Compression Format Optimized for the Web","Description":"A lossless compressed data format that uses a combination of the\n LZ77 algorithm and Huffman coding. Brotli is similar in speed to deflate (gzip)\n but offers more dense compression.","Published":"2017-03-18","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"Brq","Version":"2.0","Title":"Bayesian Analysis of Quantile Regression Models","Description":"Bayesian estimation and variable selection for quantile\n regression models.","Published":"2017-03-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"brr","Version":"1.0.0","Title":"Bayesian Inference on the Ratio of Two Poisson Rates","Description":"Implementation of the Bayesian inference for the two independent Poisson samples model, using the semi-conjugate family of prior distributions.","Published":"2015-09-07","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"brranching","Version":"0.2.0","Title":"Fetch 'Phylogenies' from Many Sources","Description":"Includes methods for fetching 'phylogenies' from a variety\n of sources, currently includes 'Phylomatic'\n (), with more in the future.","Published":"2016-04-14","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"brt","Version":"1.1.0","Title":"Biological Relevance Testing","Description":"Analyses of large-scale -omics datasets commonly use p-values as the indicators of statistical significance. However, considering p-value alone neglects the importance of effect size (i.e., the mean difference between groups) in determining the biological relevance of a significant difference. Here, we present a novel algorithm for computing a new statistic, the biological relevance testing (BRT) index, in the frequentist hypothesis testing framework to address this problem. ","Published":"2017-03-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BRugs","Version":"0.8-6","Title":"Interface to the 'OpenBUGS' MCMC Software","Description":"Fully-interactive R interface to the 'OpenBUGS' software for Bayesian analysis using MCMC sampling. Runs natively and stably in 32-bit R under Windows. Versions running on Linux and on 64-bit R under Windows are in \"beta\" status and less efficient.","Published":"2015-12-16","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"BSagri","Version":"0.1-8","Title":"Statistical methods for safety assessment in agricultural field\ntrials","Description":"Collection of functions, data sets and code examples \n for evaluations of field trials with the objective of equivalence assessment.","Published":"2013-11-21","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"bsam","Version":"1.1.1","Title":"Bayesian State-Space Models for Animal Movement","Description":"Tools to fit Bayesian state-space models to animal tracking data. Models are provided for location \n filtering, location filtering and behavioural state estimation, and their hierarchical versions. \n The models are primarily intended for fitting to ARGOS satellite tracking data but options exist to fit \n to other tracking data types. For Global Positioning System data, consider the 'moveHMM' package. \n Simplified Markov Chain Monte Carlo convergence diagnostic plotting is provided but users are encouraged \n to explore tools available in packages such as 'coda' and 'boa'.","Published":"2016-11-17","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"BSDA","Version":"1.01","Title":"Basic Statistics and Data Analysis","Description":"Data sets for book \"Basic Statistics and Data Analysis\" by\n Larry J. Kitchens","Published":"2012-03-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bsearchtools","Version":"0.0.61","Title":"Binary Search Tools","Description":"Exposes the binary search functions of the C++ standard library (std::lower_bound, std::upper_bound) plus other convenience functions, allowing faster lookups on sorted vectors.","Published":"2017-02-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BSGS","Version":"2.0","Title":"Bayesian Sparse Group Selection","Description":"The integration of Bayesian variable and sparse group variable selection approaches for regression models. ","Published":"2015-06-24","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"BSGW","Version":"0.9.2","Title":"Bayesian Survival Model with Lasso Shrinkage Using Generalized\nWeibull Regression","Description":"Bayesian survival model using Weibull regression on both scale and shape parameters. Dependence of shape parameter on covariates permits deviation from proportional-hazard assumption, leading to dynamic - i.e. non-constant with time - hazard ratios between subjects. Bayesian Lasso shrinkage in the form of two Laplace priors - one for scale and one for shape coefficients - allows for many covariates to be included. Cross-validation helper functions can be used to tune the shrinkage parameters. Monte Carlo Markov Chain (MCMC) sampling using a Gibbs wrapper around Radford Neal's univariate slice sampler (R package MfUSampler) is used for coefficient estimation.","Published":"2016-09-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bshazard","Version":"1.0","Title":"Nonparametric Smoothing of the Hazard Function","Description":"The function estimates the hazard function non parametrically from a survival object (possibly adjusted for covariates). The smoothed estimate is based on B-splines from the perspective of generalized linear mixed models. Left truncated and right censoring data are allowed.","Published":"2014-02-17","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"BsMD","Version":"2013.0718","Title":"Bayes Screening and Model Discrimination","Description":"Bayes screening and model discrimination follow-up designs.","Published":"2013-07-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bspec","Version":"1.5","Title":"Bayesian Spectral Inference","Description":"Bayesian inference on the (discrete) power spectrum of time series.","Published":"2015-04-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bsplus","Version":"0.1.0","Title":"Adds Functionality to the R Markdown + Shiny Bootstrap Framework","Description":"The Bootstrap framework lets you add some JavaScript functionality to your web site by\n adding attributes to your HTML tags - Bootstrap takes care of the JavaScript\n . If you are using R Markdown or Shiny, you can\n use these functions to create collapsible sections, accordion panels, modals, tooltips,\n popovers, and an accordion sidebar framework (not described at Bootstrap site).","Published":"2017-01-26","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"bspmma","Version":"0.1-1","Title":"bspmma: Bayesian Semiparametric Models for Meta-Analysis","Description":"Some functions for nonparametric and semiparametric\n Bayesian models for random effects meta-analysis","Published":"2012-07-17","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"BSquare","Version":"1.1","Title":"Bayesian Simultaneous Quantile Regression","Description":"This package models the quantile process as a function of\n predictors.","Published":"2013-05-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BSSasymp","Version":"1.2-0","Title":"Asymptotic Covariance Matrices of Some BSS Mixing and Unmixing\nMatrix Estimates","Description":"Functions to compute the asymptotic covariance matrices of mixing and unmixing matrix estimates of the following blind source separation (BSS) methods: symmetric and squared symmetric FastICA, regular and adaptive deflation-based FastICA, FOBI, JADE, AMUSE and deflation-based and symmetric SOBI. Also functions to estimate these covariances based on data are available. ","Published":"2017-01-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bssn","Version":"0.7","Title":"Birnbaum-Saunders Model Based on Skew-Normal Distribution","Description":"It provides the density, distribution function, quantile function, random number generator, reliability function, failure rate, likelihood function,\n moments and EM algorithm for Maximum Likelihood estimators, also empirical quantile and generated envelope for a given sample, all this for the three parameter\n Birnbaum-Saunders model based on Skew-Normal Distribution.\n Additionally, it provides the random number generator for the mixture of Birnbaum-Saunders model based on Skew-Normal distribution.","Published":"2016-03-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bst","Version":"0.3-14","Title":"Gradient Boosting","Description":"Functional gradient descent algorithm for a variety of convex and non-convex loss functions, for both classical and robust regression and classification problems. ","Published":"2016-09-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bsts","Version":"0.7.1","Title":"Bayesian Structural Time Series","Description":"Time series regression using dynamic linear models fit using\n MCMC. See Scott and Varian (2014) , among many\n other sources.","Published":"2017-05-28","License":"LGPL-2.1 | file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"btb","Version":"0.1.14","Title":"Beyond the Border","Description":"Kernel density estimation dedicated to urban geography.","Published":"2017-04-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"btergm","Version":"1.9.0","Title":"Temporal Exponential Random Graph Models by Bootstrapped\nPseudolikelihood","Description":"Temporal Exponential Random Graph Models (TERGM) estimated by maximum pseudolikelihood with bootstrapped confidence intervals or Markov Chain Monte Carlo maximum likelihood. Goodness of fit assessment for ERGMs, TERGMs, and SAOMs. Micro-level interpretation of ERGMs and TERGMs.","Published":"2017-03-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"btf","Version":"1.2","Title":"Estimates Univariate Function via Bayesian Trend Filtering","Description":"Trend filtering uses the generalized\n lasso framework to fit an adaptive polynomial of degree k to\n estimate the function f_0 at each input x_i in the model: y_i =\n f_0(x_i) + epsilon_i, for i = 1, ..., n, and epsilon_i\n is sub-Gaussian with E(epsilon_i) = 0. Bayesian trend filtering adapts\n the genlasso framework to a fully Bayesian hierarchical model, estimating\n the penalty parameter lambda within a tractable Gibbs sampler.","Published":"2017-05-31","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"}
{"Package":"BTLLasso","Version":"0.1-6","Title":"Modelling Heterogeneity in Paired Comparison Data","Description":"Performs 'BTLLasso' (Schauberger and Tutz, 2017: Subject-Specific Modelling of Paired Comparison Data - a Lasso-Type Penalty Approach), a method to include different types of variables in paired\n comparison models and, therefore, to allow for heterogeneity between subjects. Variables can be subject-specific, object-specific and subject-object-specific and\n can have an influence on the attractiveness/strength of the objects. Suitable L1 penalty terms are used \n to cluster certain effects and to reduce the complexity of the models.","Published":"2017-05-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BTR","Version":"1.2.4","Title":"Training and Analysing Asynchronous Boolean Models","Description":"Tools for inferring asynchronous Boolean\n models from single-cell expression data.","Published":"2016-09-17","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"BTSPAS","Version":"2014.0901","Title":"Bayesian Time-Strat. Population Analysis","Description":"BTSPAS provides advanced Bayesian methods to estimate\n\t abundance and run-timing from temporally-stratified\n\t Petersen mark-recapture experiments. Methods include\n\t hierarchical modelling of the capture probabilities\n \t and spline smoothing of the daily run size. This version \n\t uses JAGS to sample from the posterior distribution.","Published":"2014-08-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BTYD","Version":"2.4","Title":"Implementing Buy 'Til You Die Models","Description":"This package contains functions for data preparation, parameter estimation, scoring, and plotting for the BG/BB, BG/NBD and Pareto/NBD models.","Published":"2014-11-07","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"BTYDplus","Version":"1.0.1","Title":"Probabilistic Models for Assessing and Predicting your Customer\nBase","Description":"Provides advanced statistical methods to describe and predict customers'\n purchase behavior in a non-contractual setting. It uses historic transaction records to fit a\n probabilistic model, which then allows to compute quantities of managerial interest on a cohort-\n as well as on a customer level (Customer Lifetime Value, Customer Equity, P(alive), etc.). This\n package complements the BTYD package by providing several additional buy-till-you-die models, that\n have been published in the marketing literature, but whose implementation are complex and non-trivial.\n These models are: NBD, MBG/NBD, BG/CNBD-k, MBG/CNBD-k, Pareto/NBD (HB), Pareto/NBD (Abe) and Pareto/GGG.","Published":"2016-12-14","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"BUCSS","Version":"0.0.2","Title":"Bias and Uncertainty Corrected Sample Size","Description":"Implements a method of correcting for publication bias and\n uncertainty when planning sample sizes in a future study from an original study. See Anderson, Kelley, & Maxwell (submitted, revised and resubmitted). ","Published":"2017-04-10","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"bujar","Version":"0.2-3","Title":"Buckley-James Regression for Survival Data with High-Dimensional\nCovariates","Description":"Buckley-James regression for right-censoring survival data with high-dimensional covariates. Implementations for survival data include boosting with componentwise linear least squares, componentwise smoothing splines, regression trees and MARS. Other high-dimensional tools include penalized regression for survival data.","Published":"2017-04-27","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"bulletr","Version":"0.1","Title":"Algorithms for Matching Bullet Lands","Description":"Analyze bullet lands using nonparametric methods. We provide a\n reading routine for x3p files (see for more\n information) and a host of analysis functions designed to assess the\n probability that two bullets were fired from the same gun barrel.","Published":"2017-04-26","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"bunchr","Version":"1.2.0","Title":"Analyze Bunching in a Kink or Notch Setting","Description":"View and analyze data where bunching is expected. Estimate counter-\n factual distributions. For earnings data, estimate the compensated\n elasticity of earnings w.r.t. the net-of-tax rate.","Published":"2017-01-30","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"bundesligR","Version":"0.1.0","Title":"All Final Tables of the Bundesliga","Description":"All final tables of Germany's highest football (soccer!) league, the Bundesliga. Contains data from 1964 to 2016.","Published":"2016-08-12","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"bupaR","Version":"0.2.0","Title":"Business Process Analytics in R","Description":"Functionalities for process analysis in R. This packages implements an S3-class for event log objects, and related handler functions. Imports related packages for subsetting event data, computation of descriptive statistics, handling of Petri Net objects and visualization of process maps. See also packages 'edeaR','processmapR', 'eventdataR' and 'processmonitR'.","Published":"2017-06-19","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"burnr","Version":"0.2.0","Title":"Advanced Fire History Analysis in R","Description":"Basic tools to analyze forest fire history data (e.g. FHX2) in R.","Published":"2017-05-30","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"BurStFin","Version":"1.02","Title":"Burns Statistics Financial","Description":"A suite of functions for finance, including the estimation\n\tof variance matrices via a statistical factor model or\n\tLedoit-Wolf shrinkage.","Published":"2014-03-09","License":"Unlimited","snapshot_date":"2017-06-23"}
{"Package":"BurStMisc","Version":"1.1","Title":"Burns Statistics Miscellaneous","Description":"Script search, corner, genetic optimization, permutation tests, write expect test.","Published":"2016-08-13","License":"Unlimited","snapshot_date":"2017-06-23"}
{"Package":"bursts","Version":"1.0-1","Title":"Markov model for bursty behavior in streams","Description":"An implementation of Jon Kleinberg's burst detection algorithm. Uses an infinite Markov model to detect periods of increased activity in a series of discrete events with known times, and provides a simple visualization of the results.","Published":"2014-02-22","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"BuyseTest","Version":"1.0","Title":"Generalized Pairwise Comparisons","Description":"Implementation of the Generalized Pairwise Comparisons. This test\n enables to compare two groups of observations in randomized trials(e.g treated\n vs. control patients) on several prioritized outcomes. Pairwise comparisons\n require consideration of all possible pairs of individuals, one taken from the\n treatment group and the other taken from the control group. The outcomes of the\n two individuals forming a pair are compared. Thresholds of minimal clinically\n significant differences can be defined. It is possible to analyse simultaneously\n several outcomes by prioritizing the variables that capture them. The highest\n priority is assigned to the variable considered the most clinically relevant.\n A natural way of handling uninformative or neutral pairs is to consider the\n outcomes in descending order of priority: whenever a pair is uninformative or\n neutral for an outcome of higher priority, the outcomes of lower priority are\n examined In the case of time-to-event endpoint, four methods to handle censored\n observations are available in this package (Gehan, Peto, Efron, and Peron).","Published":"2016-08-17","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"bvarsv","Version":"1.1","Title":"Bayesian Analysis of a Vector Autoregressive Model with\nStochastic Volatility and Time-Varying Parameters","Description":"R/C++ implementation of the model proposed by Primiceri (\"Time Varying Structural Vector Autoregressions and Monetary Policy\", Review of Economic Studies, 2005), with functionality for computing posterior predictive distributions and impulse responses.","Published":"2015-11-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bvenn","Version":"0.1","Title":"A Simple alternative to proportional Venn diagrams","Description":"This package implements a simple alternative to the\n traditional Venn diagram. It depicts each overlap as a separate\n bubble with area proportional to the overlap size. Relation of\n the bubbles to input sets is shown by their their arrangement.","Published":"2012-08-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bvls","Version":"1.4","Title":"The Stark-Parker algorithm for bounded-variable least squares","Description":"An R interface to the Stark-Parker implementation of an\n algorithm for bounded-variable least squares","Published":"2013-12-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"bvpSolve","Version":"1.3.3","Title":"Solvers for Boundary Value Problems of Differential Equations","Description":"Functions that solve boundary value problems ('BVP') of systems of ordinary\n differential equations ('ODE') and differential algebraic equations ('DAE').\n The functions provide an interface to the FORTRAN functions\n 'twpbvpC', 'colnew/colsys', and an R-implementation of the shooting method.","Published":"2016-12-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"BVS","Version":"4.12.1","Title":"Bayesian Variant Selection: Bayesian Model Uncertainty\nTechniques for Genetic Association Studies","Description":"The functions in this package focus on analyzing\n case-control association studies involving a group of genetic\n variants. In particular, we are interested in modeling the\n outcome variable as a function of a multivariate genetic\n profile using Bayesian model uncertainty and variable selection\n techniques. The package incorporates functions to analyze data\n sets involving common variants as well as extensions to model\n rare variants via the Bayesian Risk Index (BRI) as well as\n haplotypes. Finally, the package also allows the incorporation\n of external biological information to inform the marginal\n inclusion probabilities via the iBMU.","Published":"2012-08-09","License":"Unlimited","snapshot_date":"2017-06-23"}
{"Package":"bWGR","Version":"1.4","Title":"Bagging Whole-Genome Regression","Description":"Whole-genome regression methods on Bayesian framework fitted via EM\n or Gibbs sampling, with optional sampling techniques and kernel term.","Published":"2017-03-22","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"BWStest","Version":"0.2.1","Title":"Baumgartner Weiss Schindler Test of Equal Distributions","Description":"Performs the 'Baumgartner-Weiss-Schindler' two-sample test of equal\n probability distributions.","Published":"2017-03-21","License":"LGPL-3","snapshot_date":"2017-06-23"}
{"Package":"bytescircle","Version":"1.1","Title":"Statistics About Bytes Contained in a File as a Circle Plot","Description":"Shows statistics about bytes contained in a file \n as a circle graph of deviations from mean in sigma increments. \n The function can be useful for statistically analyze the content of files \n in a glimpse: text files are shown as a green centered crown, compressed \n and encrypted files should be shown as equally distributed variations with \n a very low CV (sigma/mean), and other types of files can be classified between \n these two categories depending on their text vs binary content, which can be \n useful to quickly determine how information is stored inside them (databases, \n multimedia files, etc). ","Published":"2017-01-19","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"c060","Version":"0.2-4","Title":"Extended Inference for Lasso and Elastic-Net Regularized Cox and\nGeneralized Linear Models","Description":"c060 provides additional functions to perform stability selection, model validation and parameter tuning for glmnet models","Published":"2014-12-11","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"c212","Version":"0.93","Title":"Methods for Detecting Safety Signals in Clinical Trials Using\nBody-Systems (System Organ Classes)","Description":"Methods for detecting safety signals in clinical trials using groupings of adverse events by body-system or system organ class.The package title c212 is in reference to the original Engineering and Physical Sciences Research Council (UK) funded project which was named CASE 2/12.","Published":"2017-04-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"c3net","Version":"1.1.1","Title":"Infering large-scale gene networks with C3NET","Description":"This package allows inferring gene regulatory networks\n with direct physical interactions from microarray expression\n data using C3NET.","Published":"2012-07-23","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"C50","Version":"0.1.0-24","Title":"C5.0 Decision Trees and Rule-Based Models","Description":"C5.0 decision trees and rule-based models for pattern recognition.","Published":"2015-03-09","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"ca","Version":"0.70","Title":"Simple, Multiple and Joint Correspondence Analysis","Description":"Computation and visualization of simple, multiple and joint correspondence analysis.","Published":"2016-12-14","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"cablecuttr","Version":"0.1.1","Title":"A CanIStream.It API Wrapper","Description":"A wrapper for the 'CanIStream.It' API for searching across the\n most popular streaming, rental, and purchase services to find where a\n movie is available. See for more information. ","Published":"2017-01-13","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"cabootcrs","Version":"1.0","Title":"Bootstrap Confidence Regions for Correspondence Analysis","Description":"Performs correspondence analysis on a two-way contingency\n table and produces bootstrap-based elliptical confidence\n regions around the projected coordinates for the category\n points. Includes routines to plot the results in a variety of\n styles. Also reports the standard numerical output for\n correspondence analysis.","Published":"2013-06-10","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"cacIRT","Version":"1.4","Title":"Classification Accuracy and Consistency under Item Response\nTheory","Description":"Computes classification accuracy and consistency indices under Item Response Theory. Implements the total score IRT-based methods in Lee, Hanson & Brennen (2002) and Lee (2010), the IRT-based methods in Rudner (2001, 2005), and the total score nonparametric methods in Lathrop & Cheng (2014). For dichotomous and polytomous tests.","Published":"2015-08-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"CaDENCE","Version":"1.2.4","Title":"Conditional Density Estimation Network Construction and\nEvaluation","Description":"Parameters of a user-specified probability distribution are modelled by a multi-layer perceptron artificial neural network. This framework can be used to implement probabilistic nonlinear models including mixture density networks, heteroscedastic regression models, zero-inflated models, and the like.","Published":"2017-03-23","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"CADFtest","Version":"0.3-3","Title":"A Package to Perform Covariate Augmented Dickey-Fuller Unit Root\nTests","Description":"Hansen's (1995) Covariate-Augmented\n Dickey-Fuller (CADF) test. The only required argument is y, the\n Tx1 time series to be tested. If no stationary covariate X is\n passed to the procedure, then an ordinary ADF test is\n performed. The p-values of the test are computed using the\n procedure illustrated in Lupi (2009).","Published":"2017-06-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"CADStat","Version":"3.0.8","Title":"Provides a GUI to Several Statistical Methods","Description":"Using Java GUI for R (JGR), CADStat provides a user\n interface for several statistical methods -\n scatterplot, boxplot, linear regression, generalized linear\n regression, quantile regression, conditional probability\n calculations, and regression trees.","Published":"2017-05-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"caesar","Version":"0.1.0","Title":"Encrypts and Decrypts Strings","Description":"Encrypts and decrypts strings using either the Caesar cipher or a\n pseudorandom number generation (using set.seed()) method.","Published":"2017-01-18","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"cAIC4","Version":"0.2","Title":"Conditional Akaike information criterion for lme4","Description":"Provides functions for the estimation of the conditional Akaike \n\t\t\t information in generalized mixed-effects models fitted with (g)lmer \n\t\t\t form lme4.","Published":"2014-08-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"Cairo","Version":"1.5-9","Title":"R graphics device using cairo graphics library for creating\nhigh-quality bitmap (PNG, JPEG, TIFF), vector (PDF, SVG,\nPostScript) and display (X11 and Win32) output","Description":"Cairo graphics device that can be use to create high-quality vector (PDF, PostScript and SVG) and bitmap output (PNG,JPEG,TIFF), and high-quality rendering in displays (X11 and Win32). Since it uses the same back-end for all output, copying across formats is WYSIWYG. Files are created without the dependence on X11 or other external programs. This device supports alpha channel (semi-transparent drawing) and resulting images can contain transparent and semi-transparent regions. It is ideal for use in server environments (file output) and as a replacement for other devices that don't have Cairo's capabilities such as alpha support or anti-aliasing. Backends are modular such that any subset of backends is supported.","Published":"2015-09-26","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"cairoDevice","Version":"2.24","Title":"Embeddable Cairo Graphics Device Driver","Description":"This device uses Cairo and GTK to draw to the screen,\n file (png, svg, pdf, and ps) or memory (arbitrary GdkDrawable\n or Cairo context). The screen device may be embedded into RGtk2\n interfaces and supports all interactive features of other graphics\n devices, including getGraphicsEvent().","Published":"2017-01-06","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"calACS","Version":"2.2.2","Title":"Calculations for All Common Subsequences","Description":"Implements several string comparison algorithms, including calACS (count all common subsequences), lenACS (calculate the lengths of all common subsequences), and lenLCS (calculate the length of the longest common subsequence). Some algorithms differentiate between the more strict definition of subsequence, where a common subsequence cannot be separated by any other items, from its looser counterpart, where a common subsequence can be interrupted by other items. This difference is shown in the suffix of the algorithm (-Strict vs -Loose). For example, q-w is a common subsequence of q-w-e-r and q-e-w-r on the looser definition, but not on the more strict definition. calACSLoose Algorithm from Wang, H. All common subsequences (2007) IJCAI International Joint Conference on Artificial Intelligence, pp. 635-640.","Published":"2016-03-31","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"Calculator.LR.FNs","Version":"1.2","Title":"Calculator for LR Fuzzy Numbers","Description":"Arithmetic operations scalar multiplication, addition, subtraction, multiplication and division of LR fuzzy numbers (which are on the basis of extension principle) have a complicate form for using in fuzzy Statistics, fuzzy Mathematics, machine learning, fuzzy data analysis and etc. Calculator for LR Fuzzy Numbers package relieve and aid applied users to achieve a simple and closed form for some complicated operator based on LR fuzzy numbers and also the user can easily draw the membership function of the obtained result by this package. ","Published":"2017-04-03","License":"LGPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"CALF","Version":"0.2.0","Title":"Coarse Approximation Linear Function","Description":"Contains greedy algorithms for coarse approximation linear\n functions.","Published":"2017-05-19","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"CALIBERrfimpute","Version":"0.1-6","Title":"Multiple imputation using MICE and Random Forest","Description":"Functions to impute using Random Forest under Full Conditional Specifications (Multivariate Imputation by Chained Equations). The CALIBER programme is funded by the Wellcome Trust (086091/Z/08/Z) and the National Institute for Health Research (NIHR) under its Programme Grants for Applied Research programme (RP-PG-0407-10314). The author is supported by a Wellcome Trust Clinical Research Training Fellowship (0938/30/Z/10/Z).","Published":"2014-05-07","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"calibrar","Version":"0.2.0","Title":"Automated Parameter Estimation for Complex (Ecological) Models","Description":"Automated parameter estimation for complex (ecological) models in R. \n This package allows the parameter estimation or calibration of complex models, \n including stochastic ones. It is a generic tool that can be used for fitting \n any type of models, especially those with non-differentiable objective functions. \n It supports multiple phases and constrained optimization. \n It implements maximum likelihood estimation methods and automated construction \n of the objective function from simulated model outputs. \n See for more details.","Published":"2016-02-17","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"calibrate","Version":"1.7.2","Title":"Calibration of Scatterplot and Biplot Axes","Description":"Package for drawing calibrated scales with tick marks on (non-orthogonal) \n variable vectors in scatterplots and biplots. ","Published":"2013-09-10","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"CalibrateSSB","Version":"1.0","Title":"Weighting and Estimation for Panel Data with Non-Response","Description":"Function to calculate weights and estimates for panel data with non-response.","Published":"2016-04-28","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"calibrator","Version":"1.2-6","Title":"Bayesian calibration of complex computer codes","Description":"Performs Bayesian calibration of computer models as per\n Kennedy and O'Hagan 2001. The package includes routines to find the\n hyperparameters and parameters; see the help page for stage1() for a\n worked example using the toy dataset. A tutorial is provided in the\n calex.Rnw vignette; and a suite of especially simple one dimensional\n examples appears in inst/doc/one.dim/.","Published":"2013-12-09","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"callr","Version":"1.0.0","Title":"Call R from R","Description":"It is sometimes useful to perform a computation in a\n separate R process, without affecting the current R process at all.\n This packages does exactly that.","Published":"2016-06-18","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"calmate","Version":"0.12.1","Title":"Improved Allele-Specific Copy Number of SNP Microarrays for\nDownstream Segmentation","Description":"A multi-array post-processing method of allele-specific copy-number estimates (ASCNs).","Published":"2015-10-27","License":"LGPL (>= 2.1)","snapshot_date":"2017-06-23"}
{"Package":"CAM","Version":"1.0","Title":"Causal Additive Model (CAM)","Description":"The code takes an n x p data matrix and fits a Causal Additive Model (CAM) for estimating the causal structure of the underlying process. The output is a p x p adjacency matrix (a one in entry (i,j) indicates an edge from i to j). Details of the algorithm can be found in: P. Bühlmann, J. Peters, J. Ernest: \"CAM: Causal Additive Models, high-dimensional order search and penalized regression\", Annals of Statistics 42:2526-2556, 2014.","Published":"2015-03-05","License":"FreeBSD","snapshot_date":"2017-06-23"}
{"Package":"CAMAN","Version":"0.74","Title":"Finite Mixture Models and Meta-Analysis Tools - Based on C.A.MAN","Description":"Tools for the analysis of finite semiparametric mixtures.\n These are useful when data is heterogeneous, e.g. in\n pharmacokinetics or meta-analysis. The NPMLE and VEM algorithms\n (flexible support size) and EM algorithms (fixed support size)\n are provided for univariate and bivariate data.","Published":"2016-09-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"camel","Version":"0.2.0","Title":"Calibrated Machine Learning","Description":"The package \"camel\" provides the implementation of a family of high-dimensional calibrated machine learning tools, including (1) LAD, SQRT Lasso and Calibrated Dantzig Selector for estimating sparse linear models; (2) Calibrated Multivariate Regression for estimating sparse multivariate linear models; (3) Tiger, Calibrated Clime for estimating sparse Gaussian graphical models. We adopt the combination of the dual smoothing and monotone fast iterative soft-thresholding algorithm (MFISTA). The computation is memory-optimized using the sparse matrix output, and accelerated by the path following and active set tricks.","Published":"2013-09-09","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"CampaR1","Version":"0.8.4","Title":"Trajectory Analysis","Description":"Analysis algorithms extracted from the original 'campari' software package.\n They consists in a kinetic annotation of the trajectory based on the minimum spanning tree\n constructed on the distances between snapshots. The fast algorithm is implemented on\n the basis of a modified version of the birch algorithm, while the slow one is based on a\n simple leader clustering. For more information please visit the original documentation\n on .","Published":"2017-01-10","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"camsRad","Version":"0.3.0","Title":"Client for CAMS Radiation Service","Description":"Copernicus Atmosphere Monitoring Service (CAMS) radiations service \n provides time series of global, direct, and diffuse irradiations on horizontal\n surface, and direct irradiation on normal plane for the actual weather \n conditions as well as for clear-sky conditions.\n The geographical coverage is the field-of-view of the Meteosat satellite,\n roughly speaking Europe, Africa, Atlantic Ocean, Middle East. The time coverage\n of data is from 2004-02-01 up to 2 days ago. Data are available with a time step\n ranging from 15 min to 1 month. For license terms and to create an account,\n please see . ","Published":"2016-11-30","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"camtrapR","Version":"0.99.8","Title":"Camera Trap Data Management and Preparation of Occupancy and\nSpatial Capture-Recapture Analyses","Description":"Management of and data extraction from camera trap photographs in wildlife studies. The package provides a workflow for storing and sorting camera trap photographs, computes record databases and detection/non-detection matrices for occupancy and spatial capture-recapture analyses with great flexibility. In addition, it provides simple mapping functions (number of species, number of independent species detections by station) and can visualise activity data.","Published":"2017-03-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"cancerGI","Version":"1.0.0","Title":"Analyses of Cancer Gene Interaction","Description":"Functions to perform the following analyses: i) inferring epistasis from RNAi double knockdown data; ii) identifying gene pairs of multiple mutation patterns; iii) assessing association between gene pairs and survival; and iv) calculating the smallworldness of a graph (e.g., a gene interaction network). Data and analyses are described in Wang, X., Fu, A. Q., McNerney, M. and White, K. P. (2014). Widespread genetic epistasis among breast cancer genes. Nature Communications. 5 4828. .","Published":"2016-04-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"cancerTiming","Version":"3.1.8","Title":"Estimation of Temporal Ordering of Cancer Abnormalities","Description":"Timing copy number changes using estimates of mutational allele frequency from resequencing of tumor samples.","Published":"2016-04-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"candisc","Version":"0.7-2","Title":"Visualizing Generalized Canonical Discriminant and Canonical\nCorrelation Analysis","Description":"Functions for computing and visualizing \n\tgeneralized canonical discriminant analyses and canonical correlation analysis\n\tfor a multivariate linear model.\n\tTraditional canonical discriminant analysis is restricted to a one-way 'MANOVA'\n\tdesign and is equivalent to canonical correlation analysis between a set of quantitative\n\tresponse variables and a set of dummy variables coded from the factor variable.\n\tThe 'candisc' package generalizes this to higher-way 'MANOVA' designs\n\tfor all factors in a multivariate linear model,\n\tcomputing canonical scores and vectors for each term. The graphic functions provide low-rank (1D, 2D, 3D) \n\tvisualizations of terms in an 'mlm' via the 'plot.candisc' and 'heplot.candisc' methods. Related plots are\n\tnow provided for canonical correlation analysis when all predictors are quantitative.","Published":"2016-11-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"Canopy","Version":"1.2.0","Title":"Accessing Intra-Tumor Heterogeneity and Tracking Longitudinal\nand Spatial Clonal Evolutionary History by Next-Generation\nSequencing","Description":"A statistical framework and computational procedure for identifying\n the sub-populations within a tumor, determining the mutation profiles of each \n subpopulation, and inferring the tumor's phylogenetic history. The input are \n variant allele frequencies (VAFs) of somatic single nucleotide alterations \n (SNAs) along with allele-specific coverage ratios between the tumor and matched\n normal sample for somatic copy number alterations (CNAs). These quantities can\n be directly taken from the output of existing software. Canopy provides a \n general mathematical framework for pooling data across samples and sites to \n infer the underlying parameters. For SNAs that fall within CNA regions, Canopy\n infers their temporal ordering and resolves their phase. When there are \n multiple evolutionary configurations consistent with the data, Canopy outputs \n all configurations along with their confidence assessment.","Published":"2017-04-08","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"canprot","Version":"0.1.0","Title":"Chemical Composition of Differential Protein Expression","Description":"Datasets are collected here for differentially (up- and down-)\n expressed proteins identified in proteomic studies of cancer and in cell\n culture experiments. Tables of amino acid compositions of proteins are\n used for calculations of chemical composition, projected into selected\n basis species. Plotting functions are used to visualize the compositional\n differences and thermodynamic potentials for proteomic transformations.","Published":"2017-06-13","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"CANSIM2R","Version":"0.11","Title":"Directly Extracts Complete CANSIM Data Tables","Description":"Extract CANSIM (Statistics Canada) tables and transform them into readily usable data in panel (wide) format. It can also extract more than one table at a time and produce the resulting merge by time period and geographical region.","Published":"2015-09-07","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"canvasXpress","Version":"0.16.2","Title":"Visualization Package for CanvasXpress in R","Description":"Enables creation of visualizations using the CanvasXpress framework\n in R. CanvasXpress is a standalone JavaScript library for reproducible research\n with complete tracking of data and end-user modifications stored in a single\n PNG image that can be played back. See for more\n information.","Published":"2017-06-02","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"cape","Version":"2.0.2","Title":"Combined Analysis of Pleiotropy and Epistasis","Description":"Combines complementary information across multiple related\n phenotypes to infer directed epistatic interactions between genetic markers.\n This analysis can be applied to a variety of engineered and natural populations.","Published":"2016-06-09","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"caper","Version":"0.5.2","Title":"Comparative Analyses of Phylogenetics and Evolution in R","Description":"Functions for performing phylogenetic comparative analyses.","Published":"2013-11-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"capm","Version":"0.11.0","Title":"Companion Animal Population Management","Description":"Quantitative analysis to support companion animal population\n management. Some functions assist survey sampling tasks (calculate sample \n size for simple and complex designs, select sampling units and estimate \n population parameters) while others assist the modelling of population \n dynamics. For sampling methods see: Levy PS & Lemeshow S. (2013), \n ISBN-10: 0470040076; Lumley (2010), ISBN: 978-0-470-28430-8. For \n modelling of population dynamics see: Baquero et al (2016) \n ; Baquero et al (2016), \n ISSN 1679-9216; Amaku et al (2010) \n .","Published":"2017-05-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"capn","Version":"1.0.0","Title":"Capital Asset Pricing for Nature","Description":"Implements approximation methods for natural capital asset prices suggested by Fenichel and Abbott (2014) in Journal of the Associations of Environmental and Resource Economists (JAERE), Fenichel et al. (2016) in Proceedings of the National Academy of Sciences (PNAS), and Yun et al. (2017) in PNAS (accepted), and their extensions: creating Chebyshev polynomial nodes and grids, calculating basis of Chebyshev polynomials, approximation and their simulations for: V-approximation (single and multiple stocks, PNAS), P-approximation (single stock, PNAS), and Pdot-approximation (single stock, JAERE). Development of this package was generously supported by the Knobloch Family Foundation.","Published":"2017-06-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"captioner","Version":"2.2.3","Title":"Numbers Figures and Creates Simple Captions","Description":"Provides a method for automatically numbering figures,\n tables, or other objects. Captions can be displayed in full, or as citations.\n This is especially useful for adding figures and tables to R markdown\n documents without having to numbering them manually.","Published":"2015-07-16","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"captr","Version":"0.3.0","Title":"Client for the Captricity API","Description":"Get text from images of text using Captricity Optical Character\n Recognition (OCR) API. Captricity allows you to get text from handwritten\n forms --- think surveys --- and other structured paper documents. And it can\n output data in form a delimited file keeping field information intact. For more\n information, read .","Published":"2017-04-15","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"capushe","Version":"1.1.1","Title":"CAlibrating Penalities Using Slope HEuristics","Description":"Calibration of penalized criteria for model selection. The calibration methods available are based on the slope heuristics.","Published":"2016-04-19","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"}
{"Package":"capwire","Version":"1.1.4","Title":"Estimates population size from non-invasive sampling","Description":"Fits models from Miller et al. 2005 to estimate population\n sizes from natural populations. Several models are implemented.\n Package also includes functions to perform a likelihood ratio\n test to choose between models, perform parametric bootstrapping\n to obtain confidence intervals and multiple functions to\n simulate data.","Published":"2012-08-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"car","Version":"2.1-4","Title":"Companion to Applied Regression","Description":"\n Functions and Datasets to Accompany J. Fox and S. Weisberg, \n An R Companion to Applied Regression, Second Edition, Sage, 2011.","Published":"2016-12-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"CARBayes","Version":"5.0","Title":"Spatial Generalised Linear Mixed Models for Areal Unit Data","Description":"Implements a class of univariate and multivariate spatial generalised linear mixed models for areal unit data, with inference in a Bayesian setting using Markov chain Monte Carlo (MCMC) simulation. The response variable can be binomial, Gaussian or Poisson, and spatial autocorrelation is modelled by a set of random effects that are assigned a conditional autoregressive (CAR) prior distribution. A number of different models are available for univariate spatial data, including models with no random effects as well as random effects modelled by different types of CAR prior. Additionally, a multivariate CAR (MCAR) model for multivariate spatial data is available, as is a two-level hierarchical model for individuals within areas. Full details are given in the vignette accompanying this package. The initial creation of this package was supported by the Economic and Social Research Council (ESRC) grant RES-000-22-4256, and on-going development has / is supported by the Engineering and Physical Science Research Council (EPSRC) grant EP/J017442/1, ESRC grant ES/K006460/1, and Innovate UK / Natural Environment Research Council (NERC) grant NE/N007352/1. ","Published":"2017-06-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"CARBayesdata","Version":"2.0","Title":"Data Used in the Vignettes Accompanying the CARBayes and\nCARBayesST Packages","Description":"Spatio-temporal data from Scotland used in the vignettes accompanying the CARBayes (spatial modelling) and CARBayesST (spatio-temporal modelling) packages. For the CARBayes vignette the data include the Scottish lip cancer data and property price and respiratory hospitalisation data from the Greater Glasgow and Clyde health board. For the CARBayesST vignette the data include spatio-temporal data on property sales and respiratory hospitalisation and air pollution from the Greater Glasgow and Clyde health board. ","Published":"2016-06-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"CARBayesST","Version":"2.5","Title":"Spatio-Temporal Generalised Linear Mixed Models for Areal Unit\nData","Description":"Implements a class of spatio-temporal generalised linear mixed models for areal unit data, with inference in a Bayesian setting using Markov chain Monte Carlo (MCMC) simulation. The response variable can be binomial, Gaussian or Poisson, but for some models only the binomial and Poisson data likelihoods are available. The spatio-temporal autocorrelation is modelled by random effects, which are assigned conditional autoregressive (CAR) style prior distributions. A number of different random effects structures are available, and full details are given in the vignette accompanying this package and the references in the help files. The creation of this package was supported by the Engineering and Physical Sciences Research Council (EPSRC) grant EP/J017442/1 and the Medical Research Council (MRC) grant MR/L022184/1.","Published":"2017-03-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"carcass","Version":"1.6","Title":"Estimation of the Number of Fatalities from Carcass Searches","Description":"The number of bird or bat fatalities from collisions with buildings, towers or wind energy turbines can be estimated based on carcass searches and experimentally assessed carcass persistence times and searcher efficiency. Functions for estimating the probability that a bird or bat that died is found by a searcher are provided. Further functions calculate the posterior distribution of the number of fatalities based on the number of carcasses found and the estimated detection probability.","Published":"2016-03-09","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"cardidates","Version":"0.4.7","Title":"Identification of Cardinal Dates in Ecological Time Series","Description":"Identification of cardinal dates\n (begin, time of maximum, end of mass developments)\n in ecological time series using fitted Weibull functions.","Published":"2015-09-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"cardioModel","Version":"1.4","Title":"Cardiovascular Safety Exposure-Response Modeling in Early-Phase\nClinical Studies","Description":"Includes over 100 mixed-effects model structures describing the relationship between drug concentration and QT interval, heart rate/pulse rate or blood pressure. Given an exposure-response dataset, the tool fits each model structure to the observed data.","Published":"2016-04-04","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"care","Version":"1.1.10","Title":"High-Dimensional Regression and CAR Score Variable Selection","Description":"Implements the regression approach \n of Zuber and Strimmer (2011) \"High-dimensional regression and variable \n selection using CAR scores\" SAGMB 10: 34, .\n CAR scores measure the correlation between the response and the \n Mahalanobis-decorrelated predictors. The squared CAR score is a \n natural measure of variable importance and provides a canonical \n ordering of variables. This package provides functions for estimating \n CAR scores, for variable selection using CAR scores, and for estimating \n corresponding regression coefficients. Both shrinkage as well as \n empirical estimators are available.","Published":"2017-03-29","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"CARE1","Version":"1.1.0","Title":"Statistical package for population size estimation in\ncapture-recapture models","Description":"The R package CARE1, the first part of the program CARE\n (Capture-Recapture) in\n http://chao.stat.nthu.edu.tw/softwareCE.html, can be used to\n analyze epidemiological data via sample coverage approach (Chao\n et al. 2001a). Based on the input of records from several\n incomplete lists (or samples) of individuals, the R package\n CARE1 provides output of population size estimate and related\n statistics.","Published":"2012-10-24","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"caret","Version":"6.0-76","Title":"Classification and Regression Training","Description":"Misc functions for training and plotting classification and\n regression models.","Published":"2017-04-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"caretEnsemble","Version":"2.0.0","Title":"Ensembles of Caret Models","Description":"Functions for creating ensembles of caret models: caretList\n and caretStack. caretList is a convenience function for fitting multiple\n caret::train models to the same dataset. caretStack will make linear or\n non-linear combinations of these models, using a caret::train model as a\n meta-model, and caretEnsemble will make a robust linear combination of\n models using a glm.","Published":"2016-02-07","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"caribou","Version":"1.1","Title":"Estimation of caribou abundance based on large scale\naggregations monitored by radio telemetry","Description":"This is a package for estimating the population size of\n migratory caribou herds based on large scale aggregations\n monitored by radio telemetry. It implements the methodology\n found in the article by Rivest et al. (1998) about caribou\n abundance estimation. It also includes a function based on the\n Lincoln-Petersen Index as applied to radio telemetry data by\n White and Garrott (1990).","Published":"2012-06-11","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"CarletonStats","Version":"1.3","Title":"Functions for Statistics Classes at Carleton College","Description":"Includes commands for bootstrapping and permutation tests, a command for created grouped bar plots, and a demo of the quantile-normal plot for data drawn from different distributions.","Published":"2016-07-19","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"CARLIT","Version":"1.0","Title":"Ecological Quality Ratios Calculation and Plot","Description":"Functions to calculate and plot ecological quality ratios (EQR) as specified by Ballesteros et al. 2007.","Published":"2015-03-18","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"caroline","Version":"0.7.6","Title":"A Collection of Database, Data Structure, Visualization, and\nUtility Functions for R","Description":"The caroline R library contains dozens of functions useful\n for: database migration (dbWriteTable2), database style joins &\n aggregation (nerge, groupBy & bestBy), data structure\n conversion (nv, tab2df), legend table making (sstable &\n leghead), plot annotation (labsegs & mvlabs), data\n visualization (violins, pies & raPlot), character string\n manipulation (m & pad), file I/O (write.delim), batch scripting\n and more. The package's greatest\n contributions lie in the database style merge, aggregation and\n interface functions as well as in it's extensive use and\n propagation of row, column and vector names in most functions.","Published":"2013-10-08","License":"Artistic-2.0","snapshot_date":"2017-06-23"}
{"Package":"carpenter","Version":"0.2.1","Title":"Build Common Tables of Summary Statistics for Reports","Description":"Mainly used to build tables that are commonly presented for\n bio-medical/health research, such as basic characteristic tables or\n descriptive statistics.","Published":"2017-05-22","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"caRpools","Version":"0.83","Title":"CRISPR AnalyzeR for Pooled CRISPR Screens","Description":"CRISPR-Analyzer for pooled CRISPR screens (caRpools) provides an end-to-end analysis of CRISPR screens including quality control, hit candidate analysis, visualization and automated report generation using R markdown. Needs MAGeCK (http://sourceforge.net/p/mageck/wiki/Home/), bowtie2 for all functions. CRISPR (clustered regularly interspaced short palindromic repeats) is a method to perform genome editing. See for more information on\n CRISPR.","Published":"2015-12-06","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"CARrampsOcl","Version":"0.1.4","Title":"Reparameterized and marginalized posterior sampling for\nconditional autoregressive models, OpenCL implementation","Description":"This package fits Bayesian conditional autoregressive models for spatial and spatiotemporal data on a lattice. It uses OpenCL kernels running on GPUs to perform rejection sampling to obtain independent samples from the joint posterior distribution of model parameters.","Published":"2013-10-25","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"cartogram","Version":"0.0.2","Title":"Create Cartograms with R","Description":"Construct continuous and non-contiguous area cartograms.","Published":"2016-09-28","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"cartography","Version":"1.4.2","Title":"Thematic Cartography","Description":"Create and integrate maps in your R workflow. This package allows\n various cartographic representations such as proportional symbols, chroropleth,\n typology, flows or discontinuities. In addition, it also proposes some useful\n features like cartographic palettes, layout (scale, north arrow, title...), labels,\n legends or access to cartographic API to ease the graphic presentation of maps.","Published":"2017-03-29","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"carx","Version":"0.6.2","Title":"Censored Autoregressive Model with Exogenous Covariates","Description":"A censored time series class is designed. An estimation procedure\n is implemented to estimate the Censored AutoRegressive time series with\n eXogenous covariates (CARX), assuming normality of the innovations. Some other\n functions that might be useful are also included.","Published":"2016-03-09","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"caschrono","Version":"2.0","Title":"Séries Temporelles Avec R","Description":"Functions, data sets and exercises solutions for the book 'Séries Temporelles Avec R' (Yves Aragon, edp sciences, 2016). For all chapters, a vignette is available with some additional material and exercises solutions.","Published":"2016-08-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"casebase","Version":"0.1.0","Title":"Fitting Flexible Smooth-in-Time Hazards and Risk Functions via\nLogistic and Multinomial Regression","Description":"Implements the case-base sampling approach of Hanley and Miettinen (2009) , \n Saarela and Arjas (2015) , and Saarela (2015) , for fitting flexible hazard \n regression models to survival data with single event type or multiple competing causes via logistic and multinomial regression. \n From the fitted hazard function, cumulative incidence, risk functions of time, treatment and profile \n can be derived. This approach accommodates any log-linear hazard function of prognostic time, treatment, \n and covariates, and readily allows for non-proportionality. We also provide a plot method for visualizing \n incidence density via population time plots.","Published":"2017-04-28","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"caseMatch","Version":"1.0.7","Title":"Identify Similar Cases for Qualitative Case Studies","Description":"Allows users to identify similar cases for qualitative case studies using statistical matching methods.","Published":"2017-01-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"castor","Version":"1.1","Title":"Efficient Comparative Phylogenetics on Large Trees","Description":"Efficient tree manipulation functions including pruning, rerooting, calculation of most-recent common ancestors, calculating distances from the tree root and calculating pairwise distance matrices. Calculation of phylogenetic signal and mean trait depth (trait conservatism). Ancestral state reconstruction and hidden character prediction of discrete characters, using Maximum Likelihood and Maximum Parsimony methods. Simulating and fitting models of trait evolution, and generating random trees using birth-death models.","Published":"2017-06-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"cat","Version":"0.0-6.5","Title":"Analysis of categorical-variable datasets with missing values","Description":"Analysis of categorical-variable with missing values","Published":"2012-10-30","License":"file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"catdap","Version":"1.2.4","Title":"Categorical Data Analysis Program Package","Description":"Categorical data analysis program package.","Published":"2016-09-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"catdata","Version":"1.2.1","Title":"Categorical Data","Description":"This R-package contains examples from the book \"Regression for Categorical Data\", Tutz 2011, Cambridge University Press. The names of the examples refer to the chapter and the data set that is used. ","Published":"2014-11-11","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"CatDyn","Version":"1.1-0","Title":"Fishery Stock Assessment by Generalized Depletion Models","Description":"Based on fishery Catch Dynamics instead of fish Population Dynamics (hence CatDyn) and using high-frequency or medium-frequency catch in biomass or numbers, fishing nominal effort, and mean fish body weight by time step, from one or two fishing fleets, estimate stock abundance, natural mortality rate, and fishing operational parameters. It includes methods for data organization, plotting standard exploratory and analytical plots, predictions, for 77 types of models of increasing complexity, and 56 likelihood models for the data.","Published":"2015-05-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"cate","Version":"1.0.4","Title":"High Dimensional Factor Analysis and Confounder Adjusted Testing\nand Estimation","Description":"Provides several methods for factor analysis in high dimension (both n,p >> 1) and methods to adjust for possible confounders in multiple hypothesis testing.","Published":"2015-10-05","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"catenary","Version":"1.1.1","Title":"Fits a Catenary to Given Points","Description":"Gives methods to create a catenary object and then plot it and get\n properties of it. Can construct from parameters or endpoints. Also can get\n catenary fitted to data.","Published":"2015-11-18","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"CatEncoders","Version":"0.1.1","Title":"Encoders for Categorical Variables","Description":"Contains some commonly used categorical variable encoders, such as 'LabelEncoder' and 'OneHotEncoder'. Inspired by the encoders implemented in Python 'sklearn.preprocessing' package (see ).","Published":"2017-03-08","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"}
{"Package":"CateSelection","Version":"1.0","Title":"Categorical Variable Selection Methods","Description":"A multi-factor dimensionality reduction based forward selection method for genetic association mapping.","Published":"2014-10-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"cati","Version":"0.99.1","Title":"Community Assembly by Traits: Individuals and Beyond","Description":"Detect and quantify community assembly processes using trait values of individuals or populations, the T-statistics and other metrics, and dedicated null models.","Published":"2016-03-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"catIrt","Version":"0.5-0","Title":"An R Package for Simulating IRT-Based Computerized Adaptive\nTests","Description":"Functions designed to simulate data that conform to basic\n unidimensional IRT models (for now 3-parameter binary response models\n and graded response models) along with Post-Hoc CAT simulations of\n those models with various item selection methods, ability estimation\n methods, and termination criteria.","Published":"2014-10-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"CATkit","Version":"3.0.0.2","Title":"Chronomics Analysis Toolkit (CAT): Analyze Periodicity","Description":"Performs analysis of sinusoidal rhythms in time series data: actogram, smoothing, autocorrelation, crosscorrelation, several flavors of cosinor. ","Published":"2017-02-17","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"catlearn","Version":"0.4","Title":"Formal Modeling for Psychology","Description":"Formal psychological models, independently-replicated data sets against which to test them, and simulation archives.","Published":"2017-02-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"catnet","Version":"1.15.0","Title":"Categorical Bayesian Network Inference","Description":"Structure learning and parameter estimation of discrete Bayesian networks using likelihood-based criteria. Exhaustive search for fixed node orders and stochastic search of optimal orders via simulated annealing algorithm are implemented. ","Published":"2016-06-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"caTools","Version":"1.17.1","Title":"Tools: moving window statistics, GIF, Base64, ROC AUC, etc","Description":"Contains several basic utility functions including: moving\n (rolling, running) window statistic functions, read/write for\n GIF and ENVI binary files, fast calculation of AUC, LogitBoost\n classifier, base64 encoder/decoder, round-off-error-free sum\n and cumsum, etc.","Published":"2014-09-10","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"catR","Version":"3.12","Title":"Generation of IRT Response Patterns under Computerized Adaptive\nTesting","Description":"Provides routines for the generation of response patterns under unidimensional dichotomous and polytomous computerized adaptive testing (CAT) framework. It holds many standard functions to estimate ability, select the first item(s) to administer and optimally select the next item, as well as several stopping rules. Options to control for item exposure and content balancing are also available (Magis and Raiche (2012) ).","Published":"2017-01-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"catspec","Version":"0.97","Title":"Special models for categorical variables","Description":"`ctab' creates (multiway) percentage tables. `sqtab'\n contains a set of functions for estimating models for square\n tables such as quasi-independence, symmetry, uniform\n association. Examples show how to use these models in a\n loglinear model using glm or in a multinomial logistic model\n using mlogit or clogit","Published":"2013-04-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"catSurv","Version":"1.0.1","Title":"Computerized Adaptive Testing for Survey Research","Description":"Provides methods of computerized adaptive testing for survey researchers. Includes functionality for data fit with the classic item response methods including the latent trait model, Birnbaum`s three parameter model, the graded response, and the generalized partial credit model. Additionally, includes several ability parameter estimation and item selection routines. During item selection, all calculations are done in compiled C++ code.","Published":"2017-06-16","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"CATT","Version":"2.0","Title":"The Cochran-Armitage Trend Test","Description":"This function conducts the Cochran-Armitage trend test to a 2 by k contingency table. It will report the test statistic (Z) and p-value.A linear trend in the frequencies will be calculated, because the weights (0,1,2) will be used by default. ","Published":"2017-05-19","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"causaldrf","Version":"0.3","Title":"Tools for Estimating Causal Dose Response Functions","Description":"Functions and data to estimate causal dose response functions given continuous, ordinal, or binary treatments.","Published":"2015-11-30","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"causaleffect","Version":"1.3.4","Title":"Deriving Expressions of Joint Interventional Distributions and\nTransport Formulas in Causal Models","Description":"Functions for identification and transportation of causal effects. Provides a conditional causal effect identification algorithm (IDC) by Shpitser, I. and Pearl, J. (2006) , an algorithm for transportability from multiple domains with limited experiments by Bareinboim, E. and Pearl, J. (2014) and a selection bias recovery algorithm by Bareinboim, E. and Tian, J. (2015) . All of the previously mentioned algorithms are based on a causal effect identification algorithm by Tian , J. (2002) . ","Published":"2017-05-10","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"CausalFX","Version":"1.0.1","Title":"Methods for Estimating Causal Effects from Observational Data","Description":"Estimate causal effects of one variable on another, currently for\n binary data only. Methods include instrumental variable bounds, adjustment by a \n given covariate set, adjustment by an induced covariate set using a variation of \n the PC algorithm, and an effect bounding method (the Witness Protection Program) \n based on covariate adjustment with observable independence constraints.","Published":"2015-05-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"CausalGAM","Version":"0.1-3","Title":"Estimation of Causal Effects with Generalized Additive Models","Description":"This package implements various estimators for average\n treatment effects---an inverse probability weighted (IPW)\n estimator, an augmented inverse probability weighted (AIPW)\n estimator, and a standard regression estimator---that make use\n of generalized additive models for the treatment assignment\n model and/or outcome model.","Published":"2010-02-11","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"CausalImpact","Version":"1.2.1","Title":"Inferring Causal Effects using Bayesian Structural Time-Series\nModels","Description":"Implements a Bayesian approach to causal impact estimation in time\n series, as described in Brodersen et al. (2015) .\n See the package documentation on GitHub\n to get started.","Published":"2017-05-31","License":"Apache License 2.0 | file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"causalsens","Version":"0.1.1","Title":"Selection Bias Approach to Sensitivity Analysis for Causal\nEffects","Description":"The causalsens package provides functions to perform sensitivity analyses and to study how various assumptions about selection bias affects estimates of causal effects.","Published":"2015-07-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"Causata","Version":"4.2-0","Title":"Analysis utilities for binary classification and Causata users","Description":"The Causata package provides utilities for \n extracting data from the Causata application, training binary classification \n models, and exporting models as PMML for scoring.","Published":"2016-12-05","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"CAvariants","Version":"3.4","Title":"Correspondence Analysis Variants","Description":"Provides six variants of two-way correspondence analysis (ca):\n simple ca, singly ordered ca, doubly ordered ca, non symmetrical ca,\n singly ordered non symmetrical ca, and doubly ordered non symmetrical\n ca.","Published":"2017-02-27","License":"GPL (> 2)","snapshot_date":"2017-06-23"}
{"Package":"cba","Version":"0.2-19","Title":"Clustering for Business Analytics","Description":"Implements clustering techniques such as Proximus and Rock, utility functions for efficient computation of cross distances and data manipulation. ","Published":"2017-05-11","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"cbanalysis","Version":"0.1.0","Title":"Coffee Break Descriptive Analysis","Description":"Contains function which subsets the input data frame based on the variable types and returns list of data frames.","Published":"2017-04-12","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"cbar","Version":"0.1.0","Title":"Contextual Bayesian Anomaly Detection in R","Description":"Detect contextual anomalies in time-series data with Bayesian data\n analysis. It focuses on determining a normal range of target value, and\n provides simple-to-use functions to abstract the outcome.","Published":"2017-06-23","License":"Apache License 2.0 | file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"cbird","Version":"1.0","Title":"Clustering of Multivariate Binary Data with Dimension Reduction\nvia L1-Regularized Likelihood Maximization","Description":"The clustering of binary data with reducing the dimensionality (CLUSBIRD) proposed by Yamamoto and Hayashi (2015) .","Published":"2017-02-06","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"CBPS","Version":"0.13","Title":"Covariate Balancing Propensity Score","Description":"Implements the covariate balancing propensity score (CBPS) proposed\n by Imai and Ratkovic (2014) . The propensity score is\n estimated such that it maximizes the resulting covariate balance as well as the\n prediction of treatment assignment. The method, therefore, avoids an iteration\n between model fitting and balance checking. The package also implements several\n extensions of the CBPS beyond the cross-sectional, binary treatment setting.\n The current version implements the CBPS for longitudinal settings so that it can\n be used in conjunction with marginal structural models from Imai and Ratkovic\n (2015) , treatments with three- and four-\n valued treatment variables, continuous-valued treatments from Fong, Hazlett,\n and Imai (2015) , and the\n situation with multiple distinct binary treatments administered simultaneously.\n In the future it will be extended to other settings including the generalization\n of experimental and instrumental variable estimates. Recently add the optimal\n CBPS which chooses the optimal balancing function and results in doubly robust\n and efficient estimator for the treatment effect.","Published":"2016-12-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"cbsodataR","Version":"0.2.1","Title":"Statistics Netherlands (CBS) Open Data API Client","Description":"The data and meta data from Statistics\n Netherlands (www.cbs.nl) can be browsed and downloaded. The client uses\n the open data API of Statistics Netherlands.","Published":"2016-01-20","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"CCA","Version":"1.2","Title":"Canonical correlation analysis","Description":"The package provide a set of functions that extend the\n cancor function with new numerical and graphical outputs. It\n also include a regularized extension of the cannonical\n correlation analysis to deal with datasets with more variables\n than observations.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"ccafs","Version":"0.1.0","Title":"Client for 'CCAFS' 'GCM' Data","Description":"Client for Climate Change, Agriculture, and Food Security ('CCAFS')\n General Circulation Models ('GCM') data. Data is stored in Amazon 'S3', from\n which we provide functions to fetch data.","Published":"2017-02-24","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"CCAGFA","Version":"1.0.8","Title":"Bayesian Canonical Correlation Analysis and Group Factor\nAnalysis","Description":"Variational Bayesian algorithms for learning canonical correlation analysis (CCA), inter-battery factor analysis (IBFA), and group factor analysis (GFA). Inference with several random initializations can be run with the functions CCAexperiment() and GFAexperiment().","Published":"2015-12-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"ccaPP","Version":"0.3.2","Title":"(Robust) Canonical Correlation Analysis via Projection Pursuit","Description":"Canonical correlation analysis and maximum correlation via\n projection pursuit, as well as fast implementations of correlation\n estimators, with a focus on robust and non-parametric methods.","Published":"2016-03-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"cccd","Version":"1.5","Title":"Class Cover Catch Digraphs","Description":"Class Cover Catch Digraphs, neighborhood graphs, and\n relatives.","Published":"2015-06-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"ccChooser","Version":"0.2.6","Title":"Developing a core collections","Description":"ccChooser can be used to developing and evaluation of core\n collections for germplasm collections (entire collection). This\n package used to develop a core collection for biological\n resources like genbanks. A core collection is defined as a\n sample of accessions that represent, with the lowest possible\n level of redundancy, the genetic diversity (the richness of\n gene or genotype categories) of the entire collection. The\n establishing a core collection that represents genetic\n diversity of the entire collection with minimum loss of its\n original diversity and minimum redundancies is an important\n problem for gene-banks curators and crop breeders. ccChooser\n establish core collection base on phenotypic data (agronomic,\n morphological, phenological).","Published":"2012-09-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"cccp","Version":"0.2-4","Title":"Cone Constrained Convex Problems","Description":"Routines for solving convex optimization problems with cone constraints by means of interior-point methods. The implemented algorithms are partially ported from CVXOPT, a Python module for convex optimization (see for more information). ","Published":"2015-02-10","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"cccrm","Version":"1.2.1","Title":"Concordance Correlation Coefficient for Repeated (and\nNon-Repeated) Measures","Description":"Estimates the Concordance Correlation Coefficient to assess agreement. The scenarios considered are non-repeated measures, non-longitudinal repeated measures (replicates) and longitudinal repeated measures. The estimation approaches implemented are variance components and U-statistics approaches.","Published":"2015-07-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"ccda","Version":"1.1","Title":"Combined Cluster and Discriminant Analysis","Description":"This package implements the combined cluster and discriminant analysis method for finding homogeneous groups of data with known origin as described in Kovacs et. al (2014): Classification into homogeneous groups using combined cluster and discriminant analysis (CCDA). Environmental Modelling & Software. DOI: http://dx.doi.org/10.1016/j.envsoft.2014.01.010","Published":"2014-12-08","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"ccdrAlgorithm","Version":"0.0.3","Title":"CCDr Algorithm for Learning Sparse Gaussian Bayesian Networks","Description":"Implementation of the CCDr (Concave penalized Coordinate Descent with reparametrization) structure learning algorithm as described in Aragam and Zhou (2015) . This is a fast, score-based method for learning Bayesian networks that uses sparse regularization and block-cyclic coordinate descent.","Published":"2017-03-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"ccgarch","Version":"0.2.3","Title":"Conditional Correlation GARCH models","Description":"Functions for estimating and simulating the family of the\n CC-GARCH models.","Published":"2014-03-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"cchs","Version":"0.3.0","Title":"Cox Model for Case-Cohort Data with Stratified\nSubcohort-Selection","Description":"Contains a function, also called 'cchs', that calculates Estimator III of Borgan et al (2000), . This estimator is for fitting a Cox proportional hazards model to data from a case-cohort study where the subcohort was selected by stratified simple random sampling.","Published":"2016-07-15","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"cclust","Version":"0.6-21","Title":"Convex Clustering Methods and Clustering Indexes","Description":"Convex Clustering methods, including K-means algorithm,\n On-line Update algorithm (Hard Competitive Learning) and Neural Gas\n algorithm (Soft Competitive Learning), and calculation of several\n indexes for finding the number of clusters in a data set.","Published":"2017-01-17","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"CCM","Version":"1.1","Title":"Correlation classification method (CCM)","Description":"Classification method that classifies a sample according\n to the class with the maximum mean (or any other function of)\n correlation between the test and training samples with known\n classes.","Published":"2013-12-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"CCMnet","Version":"0.0-3","Title":"Simulate Congruence Class Model for Networks","Description":"Tools to simulate networks based on Congruence Class models.","Published":"2015-12-10","License":"GPL-3 + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"CCP","Version":"1.1","Title":"Significance Tests for Canonical Correlation Analysis (CCA)","Description":"Significance tests for canonical correlation analysis,\n including asymptotic tests and a Monte Carlo method","Published":"2012-10-29","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"CCpop","Version":"1.0","Title":"One and two locus GWAS of binary phenotype with\ncase-control-population design","Description":"Tests of association between SNPs or pairs of SNPs and binary phenotypes, in case-control / case-population / case-control-population studies.","Published":"2014-03-24","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"ccRemover","Version":"1.0.1","Title":"Removes the Cell-Cycle Effect from Single-Cell RNA-Sequencing\nData","Description":"Implements a method for identifying and removing\n\t\t\t\tthe cell-cycle effect from scRNA-Seq data. The description of the \n\t\t\t\tmethod is in Barron M. and Li J. (2016) . Identifying and removing \n\t\t\t\tthe cell-cycle effect from single-cell RNA-Sequencing data. Submitted. \n\t\t\t\tDifferent from previous methods, ccRemover implements a mechanism that\n\t\t\t\tformally tests whether a component is cell-cycle related or not, and thus\n\t\t\t\twhile it often thoroughly removes the cell-cycle effect, it preserves\n\t\t\t\tother features/signals of interest in the data.","Published":"2017-05-18","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"cctools","Version":"0.1.0","Title":"Tools for the Continuous Convolution Trick in Nonparametric\nEstimation","Description":"Implements the uniform scaled beta distribution and\n the continuous convolution kernel density estimator.","Published":"2017-04-26","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"CCTpack","Version":"1.5.1","Title":"Consensus Analysis, Model-Based Clustering, and Cultural\nConsensus Theory Applications","Description":"Consensus analysis, model-based clustering, and cultural consensus theory applications to response data (e.g. questionnaires). The models are applied using hierarchical Bayesian inference. The current package version supports binary, ordinal, and continuous data formats. ","Published":"2017-02-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"cda","Version":"2.0.0","Title":"Coupled-Dipole Approximation for Electromagnetic Scattering by\nThree-Dimensional Clusters of Sub-Wavelength Particles","Description":"Coupled-dipole simulations for electromagnetic scattering of light by sub-wavelength particles in arbitrary 3-dimensional configurations. Scattering and absorption spectra are simulated by inversion of the interaction matrix, or by an order-of-scattering approximation scheme. High-level functions are provided to simulate spectra with varying angles of incidence, as well as with full angular averaging. ","Published":"2016-08-16","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"cdata","Version":"0.1.1","Title":"Wrappers for 'tidyr::gather()' and 'tidyr::spread()'","Description":"Supplies deliberately verbose wrappers for 'tidyr::gather()' and 'tidyr::spread()', and an explanatory vignette. Useful for training and for enforcing preconditions.","Published":"2017-05-05","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"cdb","Version":"0.0.1","Title":"Reading and Writing Constant DataBases","Description":"A constant database is a data structure created by Daniel\n J. Bernstein in his cdb package. Its format consists on a\n sequence of (key,value)-pairs. This R package replicates the\n basic utilities for reading (cdbget) and writing (cdbdump)\n constant databases.","Published":"2013-04-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"cdcfluview","Version":"0.5.1","Title":"Retrieve U.S. Flu Season Data from the CDC FluView Portal","Description":"The U.S. Centers for Disease Control (CDC) maintains a portal\n for\n accessing state, regional and national influenza statistics as well as\n Mortality Surveillance Data. The Flash interface makes it difficult and \n time-consuming to select and retrieve influenza data. This package \n provides functions to access the data provided by the portal's underlying API.","Published":"2016-12-07","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"cdcsis","Version":"1.0","Title":"Conditional Distance Correlation and Its Related Feature\nScreening Method","Description":"Gives conditional distance correlation and performs the conditional distance correlation sure independence screening procedure for ultrahigh dimensional data. The conditional distance correlation is a novel conditional dependence measurement of two random variables given a third variable. The conditional distance correlation sure independence screening is used for screening variables in ultrahigh dimensional setting.","Published":"2014-10-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"CDF.PSIdekick","Version":"1.2","Title":"Evaluate Differentially Private Algorithms for Publishing\nCumulative Distribution Functions","Description":"Designed by and for the community of differential privacy algorithm developers. It can be used to empirically evaluate and visualize Cumulative Distribution Functions incorporating noise that satisfies differential privacy, with numerous options made to streamline collection of utility measurements across variations of key parameters, such as epsilon, domain size, sample size, data shape, etc. Developed by researchers at Harvard PSI.","Published":"2016-08-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"cdfquantreg","Version":"1.1.1","Title":"Quantile Regression for Random Variables on the Unit Interval","Description":"Employs a two-parameter family of\n distributions for modelling random variables on the (0, 1) interval by\n applying the cumulative distribution function (cdf) of one parent\n distribution to the quantile function of another.","Published":"2017-01-24","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"CDFt","Version":"1.0.1","Title":"Statistical downscaling through CDF-transform","Description":"This package proposes a statistical downscaling method for\n cumulative distribution functions (CDF), as well as the\n computation of the Cram\\`er-von Mises statistics U, and the\n Kolmogorov-Smirnov statistics KS.","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"CDLasso","Version":"1.1","Title":"Coordinate Descent Algorithms for Lasso Penalized L1, L2, and\nLogistic Regression","Description":"Coordinate Descent Algorithms for Lasso Penalized L1, L2,\n and Logistic Regression","Published":"2013-05-17","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"cdlTools","Version":"0.11","Title":"Tools to Download and Work with USDA Cropscape Data","Description":"Downloads USDA National Agricultural Statistics Service (NASS) \n cropscape data for a specified state. Utilities for fips, abbreviation, \n and name conversion are also provided. Full functionality requires an \n internet connection, but data sets can be cached for later off-line use.","Published":"2016-08-01","License":"Unlimited","snapshot_date":"2017-06-23"}
{"Package":"CDM","Version":"5.6-16","Title":"Cognitive Diagnosis Modeling","Description":"\n Functions for cognitive diagnosis modeling\n and multidimensional item response modeling for\n dichotomous and polytomous data. This package\n enables the estimation of the DINA and DINO model,\n the multiple group (polytomous) GDINA model,\n the multiple choice DINA model, the general diagnostic\n model (GDM), the multidimensional linear compensatory\n item response model and the structured latent class\n model (SLCA).","Published":"2017-06-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"CDNmoney","Version":"2012.4-2","Title":"Components of Canadian Monetary and Credit Aggregates","Description":"Components of Canadian Credit Aggregates and Monetary Aggregates with continuity adjustments.","Published":"2015-05-01","License":"GPL-2 | file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"cdom","Version":"0.1.0","Title":"R Functions to Model CDOM Spectra","Description":"Wrapper functions to model and extract various quantitative information from absorption spectra of chromophoric dissolved organic matter (CDOM).","Published":"2016-03-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"CDROM","Version":"1.1","Title":"Phylogenetically Classifies Retention Mechanisms of Duplicate\nGenes from Gene Expression Data","Description":"Classification is based on the recently developed phylogenetic\n approach by Assis and Bachtrog (2013). The method classifies the\n evolutionary mechanisms retaining pairs of duplicate genes (conservation,\n neofunctionalization, subfunctionalization, or specialization) by comparing gene\n expression profiles of duplicate genes in one species to those of their single-\n copy ancestral genes in a sister species.","Published":"2016-04-27","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"cds","Version":"1.0.3","Title":"Constrained Dual Scaling for Detecting Response Styles","Description":"This is an implementation of constrained dual scaling for\n detecting response styles in categorical data, including utility functions. The\n procedure involves adding additional columns to the data matrix representing the\n boundaries between the rating categories. The resulting matrix is then doubled\n and analyzed by dual scaling. One-dimensional solutions are sought which provide\n optimal scores for the rating categories. These optimal scores are constrained\n to follow monotone quadratic splines. Clusters are introduced within which the\n response styles can vary. The type of response style present in a cluster can\n be diagnosed from the optimal scores for said cluster, and this can be used to\n construct an imputed version of the data set which adjusts for response styles.","Published":"2016-01-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"CDVine","Version":"1.4","Title":"Statistical Inference of C- And D-Vine Copulas","Description":"Functions for statistical inference of canonical vine (C-vine)\n and D-vine copulas. Tools for bivariate exploratory data analysis and for bivariate\n as well as vine copula selection are provided. Models can be estimated\n either sequentially or by joint maximum likelihood estimation.\n Sampling algorithms and plotting methods are also included.\n Data is assumed to lie in the unit hypercube (so-called copula\n data).","Published":"2015-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"CDVineCopulaConditional","Version":"0.1.0","Title":"Sampling from Conditional C- and D-Vine Copulas","Description":"Provides tools for sampling from a conditional copula density decomposed via \n Pair-Copula Constructions as C- or D- vine. Here, the vines which can be used for such \n sampling are those which sample as first the conditioning variables (when following the \n sampling algorithms shown in Aas et al. (2009) ). \n The used sampling algorithm is presented and discussed in Bevacqua et al. (2017) \n , and it is a modified version of that from Aas et al. (2009) \n . A function is available to select the best vine \n (based on information criteria) among those which allow for such conditional sampling. \n The package includes a function to compare scatterplot matrices and pair-dependencies of \n two multivariate datasets.","Published":"2017-03-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"CEC","Version":"0.9.4","Title":"Cross-Entropy Clustering","Description":"Cross-Entropy Clustering (CEC) divides the data into Gaussian type clusters. It performs the automatic reduction of unnecessary clusters, while at the same time allows the simultaneous use of various type Gaussian mixture models.","Published":"2016-04-24","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"cec2005benchmark","Version":"1.0.4","Title":"Benchmark for the CEC 2005 Special Session on Real-Parameter\nOptimization","Description":"This package is a wrapper for the C implementation of the 25 benchmark functions for the CEC 2005 Special Session on Real-Parameter Optimization. The original C code by Santosh Tiwari and related documentation are available at http://www.ntu.edu.sg/home/EPNSugan/index_files/CEC-05/CEC05.htm.","Published":"2015-02-08","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"cec2013","Version":"0.1-5","Title":"Benchmark functions for the Special Session and Competition on\nReal-Parameter Single Objective Optimization at CEC-2013","Description":"This package provides R wrappers for the C implementation of 28 benchmark functions defined for the Special Session and Competition on Real-Parameter Single Objective Optimization at CEC-2013. The focus of this package is to provide an open-source and multi-platform implementation of the CEC2013 benchmark functions, in order to make easier for researchers to test the performance of new optimization algorithms in a reproducible way. The original C code (Windows only) was provided by Jane Jing Liang, while GNU/Linux comments were made by Janez Brest. This package was gently authorised for publication on CRAN by Ponnuthurai Nagaratnam Suganthan. The official documentation is available at http://www.ntu.edu.sg/home/EPNSugan/index_files/CEC2013/CEC2013.htm. Bugs reports/comments/questions are very welcomed (in English, Spanish or Italian).","Published":"2015-01-22","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"CEGO","Version":"2.1.0","Title":"Combinatorial Efficient Global Optimization","Description":"Model building, surrogate model\n based optimization and Efficient Global Optimization in combinatorial\n or mixed search spaces.","Published":"2016-08-17","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"celestial","Version":"1.3","Title":"Collection of Common Astronomical Conversion Routines and\nFunctions","Description":"Contains a number of common astronomy conversion routines, particularly the HMS and degrees schemes, which can be fiddly to convert between on mass due to the textural nature of the former. It allows users to coordinate match datasets quickly. It also contains functions for various cosmological calculations.","Published":"2015-06-10","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"cellranger","Version":"1.1.0","Title":"Translate Spreadsheet Cell Ranges to Rows and Columns","Description":"Helper functions to work with spreadsheets and the \"A1:D10\" style\n of cell range specification.","Published":"2016-07-27","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"CellularAutomaton","Version":"1.1-1","Title":"One-Dimensional Cellular Automata","Description":"This package is an object-oriented implementation of one-dimensional cellular automata. It supports many of the features offered by Mathematica, including elementary rules, user-defined rules, radii, user-defined seeding, and plotting.","Published":"2013-08-20","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"cellVolumeDist","Version":"1.3","Title":"Functions to fit cell volume distributions and thereby estimate\ncell growth rates and division times","Description":"This package implements a methodology for using cell\n volume distributions to estimate cell growth rates and division\n times that is described in the paper entitled \"Cell Volume\n Distributions Reveal Cell Growth Rates and Division Times\", by\n Michael Halter, John T. Elliott, Joseph B. Hubbard, Alessandro\n Tona and Anne L. Plant, which is in press in the Journal of\n Theoretical Biology. In order to reproduce the analysis used\n to obtain Table 1 in the paper, execute the command\n \"example(fitVolDist)\".","Published":"2013-12-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"cellWise","Version":"1.0.0","Title":"Analyzing Data with Cellwise Outliers","Description":"Tools for detecting cellwise outliers and robust methods to analyze data which may contain them. ","Published":"2016-12-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"cem","Version":"1.1.17","Title":"Coarsened Exact Matching","Description":"Implementation of the Coarsened Exact Matching algorithm.","Published":"2016-12-05","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"cems","Version":"0.4","Title":"Conditional Expectation Manifolds","Description":"Conditional expectation manifolds are an approach to compute principal curves and surfaces.","Published":"2015-11-12","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"censCov","Version":"1.0-0","Title":"Linear Regression with a Randomly Censored Covariate","Description":"Implementations of threshold regression approaches for linear\n\t regression models with a covariate subject to random censoring,\n\t including deletion threshold regression and completion threshold regression.\n\t Reverse survival regression, which flip the role of response variable and the\n\t covariate, is also considered.","Published":"2017-04-25","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"CensMixReg","Version":"1.0","Title":"Censored Linear Mixture Regression Models","Description":"Fit censored linear regression models where the random errors follow a finite mixture of Normal or Student-t distributions.\n Fit censored linear models of finite mixture multivariate Student-t and Normal distributions.","Published":"2016-08-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"censNID","Version":"0-0-1","Title":"censored NID samples","Description":"Implements AS138, AS139. ","Published":"2013-10-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"censorcopula","Version":"2.0","Title":"Estimate Parameter of Bivariate Copula","Description":"Implement an interval censor method \n to break ties when using data with ties to fitting a \n bivariate copula.","Published":"2016-03-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"censReg","Version":"0.5-26","Title":"Censored Regression (Tobit) Models","Description":"Maximum Likelihood estimation of censored regression (Tobit) models\n with cross-sectional and panel data.","Published":"2017-03-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"CensRegMod","Version":"1.0","Title":"Fits Normal and Student-t Censored Regression Model","Description":"Fits univariate censored linear regression model under Normal or Student-t distribution","Published":"2015-01-24","License":"GPL (>= 3.0)","snapshot_date":"2017-06-23"}
{"Package":"CensSpatial","Version":"1.3","Title":"Censored Spatial Models","Description":"Fits linear regression models for censored spatial data. Provides different estimation methods as the SAEM (Stochastic Approximation of Expectation Maximization) algorithm and seminaive that uses Kriging prediction to estimate the response at censored locations and predict new values at unknown locations. Also offers graphical tools for assessing the fitted model.","Published":"2017-03-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"censusapi","Version":"0.2.0","Title":"Retrieve Data from the U.S. Census Bureau APIs","Description":"A wrapper for the U.S. Census Bureau APIs that returns data frames of \n\tCensus data and metadata. Available datasets include the \n\tDecennial Census, American Community Survey, Small Area Health Insurance Estimates,\n\tSmall Area Income and Poverty Estimates, and Population Estimates and Projections.\n\tSee for more information.","Published":"2017-06-06","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"censusGeography","Version":"0.1.0","Title":"Changes United States Census Geographic Code into Name of\nLocation","Description":"Converts the United States Census geographic code for city, state (FIP and ICP),\n region, and birthplace, into the name of the region. e.g. takes an input of\n Census city code 5330 to it's actual city, Philadelphia. Will return NA for code\n that doesn't correspond to real location.","Published":"2016-08-29","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"censusr","Version":"0.0.3","Title":"Collect Data from the Census API","Description":"Use the US Census API to collect summary data tables\n for SF1 and ACS datasets at arbitrary geographies.","Published":"2017-06-14","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"censys","Version":"0.1.0","Title":"Tools to Query the 'Censys' API","Description":"The 'Censys' public search engine enables researchers to quickly ask \n questions about the hosts and networks that compose the Internet. Details on how \n 'Censys' was designed and how it is operated are available at . \n Both basic and extended research access queries are made available. More information \n on the SQL dialect used by the 'Censys' engine can be found at \n .","Published":"2016-12-31","License":"AGPL + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"cents","Version":"0.1-41","Title":"Censored time series","Description":"Fit censored time series","Published":"2014-08-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"CEoptim","Version":"1.2","Title":"Cross-Entropy R Package for Optimization","Description":"Optimization solver based on the Cross-Entropy method.","Published":"2017-02-20","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"}
{"Package":"CePa","Version":"0.5","Title":"Centrality-based pathway enrichment","Description":"Use pathway topology information to assign weight to\n pathway nodes.","Published":"2012-09-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"CepLDA","Version":"1.0.0","Title":"Discriminant Analysis of Time Series in the Presence of\nWithin-Group Spectral Variability","Description":"Performs cepstral based discriminant analysis of groups of time series \n when there exists Variability in power spectra from time series within the same group \n as described in R.T. Krafty (2016) \"Discriminant Analysis of Time Series in the \n Presence of Within-Group Spectral Variability\" Journal of Time Series Analysis.","Published":"2016-01-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"cepp","Version":"1.7","Title":"Context Driven Exploratory Projection Pursuit","Description":"Functions and Data to support Context Driven Exploratory Projection Pursuit.","Published":"2016-01-30","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"CerioliOutlierDetection","Version":"1.1.5","Title":"Outlier Detection Using the Iterated RMCD Method of Cerioli\n(2010)","Description":"Implements the iterated RMCD method of Cerioli (2010)\n\tfor multivariate outlier detection via robust Mahalanobis distances. Also\n\tprovides the finite-sample RMCD method discussed in the paper, as well as \n\tthe methods provided in Hardin and Rocke (2005) and Green and Martin (2014).","Published":"2016-07-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"cernn","Version":"0.1","Title":"Covariance Estimation Regularized by Nuclear Norm Penalties","Description":"An implementation of the covariance estimation method\n proposed in Chi and Lange (2014), \"Stable estimation of a covariance matrix guided by nuclear norm penalties,\"\n Computational Statistics and Data Analysis 80:117-128.","Published":"2015-04-15","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"cfa","Version":"0.10-0","Title":"Configural Frequency Analysis (CFA)","Description":"Analysis of configuration frequencies for simple and repeated measures, multiple-samples CFA, hierarchical CFA, bootstrap CFA, functional CFA, Kieser-Victor CFA, and Lindner's test using a conventional and an accelerated algorithm.","Published":"2017-05-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"CFC","Version":"1.1.0","Title":"Cause-Specific Framework for Competing-Risk Analysis","Description":"Numerical integration of cause-specific survival curves to arrive at cause-specific cumulative incidence functions,\n with three usage modes: 1) Convenient API for parametric survival regression followed by competing-risk analysis, 2) API for\n CFC, accepting user-specified survival functions in R, and 3) Same as 2, but accepting survival functions in C++.","Published":"2017-06-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"CfEstimateQuantiles","Version":"1.0","Title":"Estimate quantiles using any order Cornish-Fisher expansion","Description":"Estimate quantiles using formula (18) from\n http://www.jaschke-net.de/papers/CoFi.pdf (Yaschke; 2001)","Published":"2013-05-18","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"cffdrs","Version":"1.7.6","Title":"Canadian Forest Fire Danger Rating System","Description":"This project provides a group of new functions to calculate the\n outputs of the two main components of the Canadian Forest Fire Danger Rating\n System (CFFDRS) at various time scales: the Fire Weather Index (FWI) System and\n the Fire Behaviour Prediction (FBP) System. Some functions have two versions,\n table and raster based.","Published":"2017-04-06","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"cg","Version":"1.0-3","Title":"Compare Groups, Analytically and Graphically","Description":"Comprehensive data analysis software, and the name \"cg\" stands for \"compare groups.\" Its genesis and evolution are driven by common needs to compare administrations, conditions, etc. in medicine research and development. The current version provides comparisons of unpaired samples, i.e. a linear model with one factor of at least two levels. It also provides comparisons of two paired samples. Good data graphs, modern statistical methods, and useful displays of results are emphasized.","Published":"2016-01-04","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"}
{"Package":"cgam","Version":"1.6","Title":"Constrained Generalized Additive Model","Description":"A constrained generalized additive model is fitted by the cgam routine. Given a set of predictors, each of which may have a shape or order restrictions, the maximum likelihood estimator for the constrained generalized additive model is found using an iteratively re-weighted cone projection algorithm. The ShapeSelect routine chooses a subset of predictor variables and describes the component relationships with the response. For each predictor, the user need only specify a set of possible shape or order restrictions. A model selection method chooses the shapes and orderings of the relationships as well as the variables. The cone information criterion (CIC) is used to select the best combination of variables and shapes. A genetic algorithm may be used when the set of possible models is large. In addition, the wps routine implements a two-dimensional isotonic regression without additivity assumptions. ","Published":"2016-11-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"cgAUC","Version":"1.2.1","Title":"Calculate AUC-type measure when gold standard is continuous and\nthe corresponding optimal linear combination of variables with\nrespect to it","Description":"The cgAUC can calculate the AUC-type measure of Obuchowski(2006) when gold standard is continuous, and find the optimal linear combination of variables with respect to this measure.","Published":"2014-08-28","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"cgdsr","Version":"1.2.6","Title":"R-Based API for Accessing the MSKCC Cancer Genomics Data Server\n(CGDS)","Description":"Provides a basic set of R functions for querying the Cancer \n Genomics Data Server (CGDS), hosted by the Computational Biology Center at \n Memorial-Sloan-Kettering Cancer Center (MSKCC).","Published":"2017-04-11","License":"LGPL-3","snapshot_date":"2017-06-23"}
{"Package":"cggd","Version":"0.8","Title":"Continuous Generalized Gradient Descent","Description":"Efficient procedures for fitting an entire regression\n sequences with different model types.","Published":"2012-07-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"cgh","Version":"1.0-7.1","Title":"Microarray CGH analysis using the Smith-Waterman algorithm","Description":"Functions to analyze microarray comparative genome\n hybridization data using the Smith-Waterman algorithm","Published":"2010-05-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"cghFLasso","Version":"0.2-1","Title":"Detecting hot spot on CGH array data with fused lasso\nregression","Description":"Spatial smoothing and hot spot detection using the fused\n lasso regression","Published":"2009-06-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"cghRA","Version":"1.6.0","Title":"Array CGH Data Analysis and Visualization","Description":"Provides functions to import data from Agilent CGH arrays and process them according to the cghRA workflow. Implements several algorithms such as WACA, STEPS and cnvScore and an interactive graphical interface.","Published":"2017-03-03","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"cghseg","Version":"1.0.2-1","Title":"Segmentation Methods for Array CGH Analysis","Description":"cghseg is an R package dedicated to the analysis of CGH\n profiles using segmentation models.","Published":"2016-01-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"CGP","Version":"2.0-2","Title":"Composite Gaussian process models","Description":"Fit composite Gaussian process (CGP) models as described in Ba and Joseph (2012) \"Composite Gaussian Process Models for Emulating Expensive Functions\", Annals of Applied Statistics. The CGP model is capable of approximating complex surfaces that are not second-order stationary. Important functions in this package are CGP, print.CGP, summary.CGP, predict.CGP and plotCGP.","Published":"2014-09-21","License":"LGPL-2.1","snapshot_date":"2017-06-23"}
{"Package":"cgwtools","Version":"3.0","Title":"Miscellaneous Tools","Description":"A set of tools the author has found useful for performing quick observations or evaluations of data, including a variety of ways to list objects by size, class, etc. Several other tools mimic Unix shell commands, including 'head', 'tail' ,'pushd' ,and 'popd'. The functions 'seqle' and 'reverse.seqle' mimic the base 'rle' but can search for linear sequences. The function 'splatnd' allows the user to generate zero-argument commands without the need for 'makeActiveBinding' .","Published":"2015-06-22","License":"LGPL-3","snapshot_date":"2017-06-23"}
{"Package":"ChainLadder","Version":"0.2.4","Title":"Statistical Methods and Models for Claims Reserving in General\nInsurance","Description":"Various statistical methods and models which are\n typically used for the estimation of outstanding claims reserves\n in general insurance, including those to estimate the claims\n development result as required under Solvency II.","Published":"2017-01-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"changepoint","Version":"2.2.2","Title":"Methods for Changepoint Detection","Description":"Implements various mainstream and specialised changepoint methods for finding single and multiple changepoints within data. Many popular non-parametric and frequentist methods are included. The cpt.mean(), cpt.var(), cpt.meanvar() functions should be your first point of call.","Published":"2016-10-04","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"changepoint.np","Version":"0.0.2","Title":"Methods for Nonparametric Changepoint Detection","Description":"Implements the multiple changepoint algorithm PELT with a\n nonparametric cost function based on the empirical distribution of the data. The cpt.np() function should be your first point of call.\n This package is an extension to the \\code{changepoint} package which uses parametric changepoint methods. For further information on the methods see the\n documentation for \\code{changepoint}.","Published":"2016-07-07","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"ChangepointTesting","Version":"1.0","Title":"Change Point Estimation for Clustered Signals","Description":"A multiple testing procedure for clustered alternative hypotheses. It is assumed that the p-values under the null hypotheses follow U(0,1) and that the distributions of p-values from the alternative hypotheses are stochastically smaller than U(0,1). By aggregating information, this method is more sensitive to detecting signals of low magnitude than standard methods. Additionally, sporadic small p-values appearing within a null hypotheses sequence are avoided by averaging on the neighboring p-values.","Published":"2016-05-24","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"ChannelAttribution","Version":"1.10","Title":"Markov Model for the Online Multi-Channel Attribution Problem","Description":"Advertisers use a variety of online marketing channels to reach consumers and they want to know the degree each channel contributes to their marketing success. It's called the online multi-channel attribution problem. This package contains a probabilistic algorithm for the attribution problem. The model uses a k-order Markov representation to identifying structural correlations in the customer journey data. The package also contains three heuristic algorithms (first-touch, last-touch and linear-touch approach) for the same problem. The algorithms are implemented in C++.","Published":"2016-12-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"ChannelAttributionApp","Version":"1.1","Title":"Shiny Web Application for the Multichannel Attribution Problem","Description":"Shiny Web Application for the Multichannel Attribution Problem. It is basically a user-friendly graphical interface for running and comparing all the attribution models in package 'ChannelAttribution'. For customizations or interest in other statistical methodologies for web data analysis please contact .","Published":"2016-02-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"Chaos01","Version":"1.0.1","Title":"0-1 Test for Chaos","Description":"Computes and plot the results of the 0-1 test for chaos proposed\n by Gottwald and Melbourne (2004) . The algorithm is\n available in parallel for the independent values of parameter c.","Published":"2016-07-25","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"ChaosGame","Version":"0.2","Title":"Chaos Game","Description":"The main objective of the package is to enter a word of at least two letters based on which an Iterated Function System with Probabilities (IFSP) is constructed, and a two-dimensional fractal containing the chosen word infinitely often is generated via the Chaos Game. Additionally, the package allows to project the two-dimensional fractal on several three-dimensional surfaces and to transform the fractal into another fractal with uniform marginals.","Published":"2016-03-12","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"CharFun","Version":"0.1.0","Title":"Numerical Computation Cumulative Distribution Function and\nProbability Density Function from Characteristic Function","Description":"The Characteristic Functions Toolbox (CharFun) consists of a set of algorithms for evaluating selected characteristic functions and algorithms for numerical inversion of the (combined and/or compound) characteristic functions, used to evaluate the probability density function (PDF) and the cumulative distribution function (CDF).","Published":"2017-05-26","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"ChargeTransport","Version":"1.0.2","Title":"Charge Transfer Rates and Charge Carrier Mobilities","Description":"This package provides functions to compute Marcus, Marcus-Levich-Jortner or Landau-Zener charge transfer rates. These rates can then be used to perform kinetic Monte Carlo simulations to estimate charge carrier mobilities in molecular materials. The preparation of this package was supported by the the Fondazione Cariplo (PLENOS project, ref. 2011-0349).","Published":"2014-06-04","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"charlatan","Version":"0.1.0","Title":"Make Fake Data","Description":"Make fake data, supporting addresses, person names, dates,\n times, colors, coordinates, currencies, digital object identifiers\n ('DOIs'), jobs, phone numbers, 'DNA' sequences, doubles and integers\n from distributions and within a range.","Published":"2017-06-16","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"CHAT","Version":"1.1","Title":"Clonal Heterogeneity Analysis Tool","Description":"CHAT is a collection of tools developed for tumor subclonality analysis using high density DNA SNP array data and sequencing data. The pipeline consists of four major compartments: 1) tumor aneuploid genome proportion (AGP) calculation and ploidy estimation. 2) segment-specific AGP calculation and absolute copy number estimation for somatic CNAs. 3) cancer cell fraction correction for somatic SNVs in clonal or subclonal sCNA regions. 4) number of subclones estimation using Dirichlet process prior followed by MCMC approach. ","Published":"2014-08-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"CHCN","Version":"1.5","Title":"Canadian Historical Climate Network","Description":"A compilation of historical through contemporary climate\n measurements scraped from the Environment Canada Website\n Including tools for scraping data, creating metadata and\n formating temperature files.","Published":"2012-06-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"cheb","Version":"0.3","Title":"Discrete Linear Chebyshev Approximation","Description":"Discrete Linear Chebyshev Approximation","Published":"2013-02-22","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"chebpol","Version":"1.3-1789","Title":"Multivariate Chebyshev Interpolation","Description":"Contains methods for creating multivariate Chebyshev\n approximation of functions on a hypercube. Some methods for\n non-Chebyshev grids are also provided.","Published":"2015-10-28","License":"Artistic-2.0","snapshot_date":"2017-06-23"}
{"Package":"checkarg","Version":"0.1.0","Title":"Check the Basic Validity of a (Function) Argument","Description":"Utility functions that allow checking the basic validity of a function argument or any other value, \n including generating an error and assigning a default in a single line of code. The main purpose of\n the package is to provide simple and easily readable argument checking to improve code robustness. ","Published":"2017-05-19","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"CheckDigit","Version":"0.1-1","Title":"Calculate and verify check digits","Description":"A set of functions to calculate check digits according to\n various algorithms and to verify whether a string ends in a\n valid check digit","Published":"2013-04-17","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"checkmate","Version":"1.8.2","Title":"Fast and Versatile Argument Checks","Description":"Tests and assertions to perform frequent argument checks. A\n substantial part of the package was written in C to minimize any worries\n about execution time overhead.","Published":"2016-11-02","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"checkpoint","Version":"0.4.0","Title":"Install Packages from Snapshots on the Checkpoint Server for\nReproducibility","Description":"The goal of checkpoint is to solve the problem of package\n reproducibility in R. Specifically, checkpoint allows you to install packages\n as they existed on CRAN on a specific snapshot date as if you had a CRAN time\n machine. To achieve reproducibility, the checkpoint() function installs the\n packages required or called by your project and scripts to a local library\n exactly as they existed at the specified point in time. Only those packages\n are available to your project, thereby avoiding any package updates that came\n later and may have altered your results. In this way, anyone using checkpoint's\n checkpoint() can ensure the reproducibility of your scripts or projects at any\n time. To create the snapshot archives, once a day (at midnight UTC) Microsoft\n refreshes the Austria CRAN mirror on the \"Microsoft R Archived Network\"\n server (). Immediately after completion\n of the rsync mirror process, the process takes a snapshot, thus creating the\n archive. Snapshot archives exist starting from 2014-09-17.","Published":"2017-04-12","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"cheddar","Version":"0.1-631","Title":"Analysis and Visualisation of Ecological Communities","Description":"Provides a flexible, extendable representation of an ecological community and a range of functions for analysis and visualisation, focusing on food web, body mass and numerical abundance data. Allows inter-web comparisons such as examining changes in community structure over environmental, temporal or spatial gradients.","Published":"2016-10-10","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"chemCal","Version":"0.1-37","Title":"Calibration Functions for Analytical Chemistry","Description":"Simple functions for plotting linear\n\tcalibration functions and estimating standard errors for measurements\n\taccording to the Handbook of Chemometrics and Qualimetrics: Part A\n\tby Massart et al. There are also functions estimating the limit\n\tof detection (LOD) and limit of quantification (LOQ).\n\tThe functions work on model objects from - optionally weighted - linear\n\tregression (lm) or robust linear regression ('rlm' from the 'MASS' package).","Published":"2015-10-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"chemmodlab","Version":"1.0.0","Title":"A Cheminformatics Modeling Laboratory for Fitting and Assessing\nMachine Learning Models","Description":"Contains a set of methods for fitting models and methods for\n validating the resulting models. The statistical methodologies comprise\n a comprehensive collection of approaches whose validity and utility have\n been accepted by experts in the Cheminformatics field. As promising new\n methodologies emerge from the statistical and data-mining communities, they\n will be incorporated into the laboratory. These methods are aimed at discovering\n quantitative structure-activity relationships (QSARs). However, the user can\n directly input their own choices of descriptors and responses, so the capability\n for comparing models is effectively unlimited.","Published":"2017-04-21","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"chemometrics","Version":"1.4.2","Title":"Multivariate Statistical Analysis in Chemometrics","Description":"R companion to the book \"Introduction to Multivariate Statistical Analysis in Chemometrics\" written by K. Varmuza and P. Filzmoser (2009).","Published":"2017-03-17","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"ChemometricsWithR","Version":"0.1.9","Title":"Chemometrics with R - Multivariate Data Analysis in the Natural\nSciences and Life Sciences","Description":"Functions and scripts used in the book \"Chemometrics with R - Multivariate Data Analysis in the Natural Sciences and Life Sciences\" by Ron Wehrens, Springer (2011).","Published":"2015-09-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"ChemometricsWithRData","Version":"0.1.3","Title":"Data for package ChemometricsWithR","Description":"The package provides data sets used in the book\n \"Chemometrics with R - Multivariate Data Analysis in the\n Natural Sciences and Life Sciences\" by Ron Wehrens, Springer\n (2011).","Published":"2012-10-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"ChemoSpec","Version":"4.4.17","Title":"Exploratory Chemometrics for Spectroscopy","Description":"A collection of functions for top-down exploratory data analysis\n of spectral data obtained via nuclear magnetic resonance (NMR), infrared (IR) or\n Raman spectroscopy. Includes functions for plotting and inspecting spectra, peak\n alignment, hierarchical cluster analysis (HCA), principal components analysis\n (PCA) and model-based clustering. Robust methods appropriate for this type of\n high-dimensional data are available. ChemoSpec is designed with metabolomics\n data sets in mind, where the samples fall into groups such as treatment and\n control. Graphical output is formatted consistently for publication quality\n plots. ChemoSpec is intended to be very user friendly and help you get usable\n results quickly. A vignette covering typical operations is available.","Published":"2017-02-13","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"cherry","Version":"0.6-11","Title":"Multiple Testing Methods for Exploratory Research","Description":"Provides an alternative approach to multiple testing\n by calculating a simultaneous upper confidence bounds for the\n number of true null hypotheses among any subset of the hypotheses of interest. \n\tSome of the functions in this package are optionally enhanced by the 'gurobi'\n\tsoftware and its accompanying R package. For their installation, please follow the \n\tinstructions at www.gurobi.com and http://www.gurobi.com/documentation, respectively.","Published":"2015-06-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"CHFF","Version":"0.1.0","Title":"Closest History Flow Field Forecasting for Bivariate Time Series","Description":"The software matches the current history to the closest history in a time series to build a forecast.","Published":"2016-05-26","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"chi","Version":"0.1","Title":"The Chi Distribution","Description":"Light weight implementation of the standard distribution \n functions for the chi distribution, wrapping those for the chi-squared \n distribution in the stats package.","Published":"2017-05-07","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"chi2x3way","Version":"1.1","Title":"Partitioning Chi-Squared and Tau Index for Three-Way Contingency\nTables","Description":"Provides two index partitions for three-way contingency tables:\n partition of the association measure chi-squared and of the predictability index tau \n under several representative hypotheses about the expected frequencies (hypothesized probabilities). ","Published":"2017-01-23","License":"GPL (> 2)","snapshot_date":"2017-06-23"}
{"Package":"childsds","Version":"0.6.2","Title":"Data and Methods Around Reference Values in Pediatrics","Description":"Calculation of standard deviation scores adduced from different\n growth standards (WHO, UK, Germany, Italy, China, etc). Therefore, the calculation of SDS-values\n for different measures like BMI, weight, height, head circumference, different\n ratios, etc. are easy to carry out. Also, references for laboratory values in\n children are available: serum lipids, iron-related blood parameters. In the\n new version, there are also functions combining the gamlss lms() function with\n resampling methods for using with repeated measurements and family dependencies.","Published":"2017-06-08","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"chillR","Version":"0.66","Title":"Statistical Methods for Phenology Analysis in Temperate Fruit\nTrees","Description":"The phenology of plants (i.e. the timing of their annual life\n phases) depends on climatic cues. For temperate trees and many other plants,\n spring phases, such as leaf emergence and flowering, have been found to result\n from the effects of both cool (chilling) conditions and heat. Fruit tree\n scientists (pomologists) have developed some metrics to quantify chilling\n and heat. 'chillR' contains functions for processing temperature records into\n chilling (Chilling Hours, Utah Chill Units and Chill Portions) and heat units\n (Growing Degree Hours). Regarding chilling metrics, Chill Portions are often\n considered the most promising, but they are difficult to calculate. This package\n makes it easy. 'chillR' also contains procedures for conducting a PLS analysis\n relating phenological dates (e.g. bloom dates) to either mean temperatures or\n mean chill and heat accumulation rates, based on long-term weather and phenology\n records. As of version 0.65, it also includes functions for generating weather\n scenarios with a weather generator, for conducting climate change analyses\n for temperature-based climatic metrics and for plotting results from such\n analyses.","Published":"2017-03-24","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"chinese.misc","Version":"0.1.6","Title":"Miscellaneous Tools for Chinese Text Mining and More","Description":"Efforts are made to make Chinese text mining easier, faster, and robust to errors. \n Document term matrix can be generated by only one line of code; detecting encoding, \n segmenting and removing stop words are done automatically. \n\tSome convenient tools are also supplied.","Published":"2017-05-03","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"chipPCR","Version":"0.0.8-10","Title":"Toolkit of Helper Functions to Pre-Process Amplification Data","Description":"A collection of functions to pre-process amplification curve data from polymerase chain reaction (PCR) or isothermal amplification reactions. Contains functions to normalize and baseline amplification curves, to detect both the start and end of an amplification reaction, several smoothers (e.g., LOWESS, moving average, cubic splines, Savitzky-Golay), a function to detect false positive amplification reactions and a function to determine the amplification efficiency. Quantification point (Cq) methods include the first (FDM) and second approximate derivative maximum (SDM) methods (calculated by a 5-point-stencil) and the cycle threshold method. Data sets of experimental nucleic acid amplification systems (VideoScan HCU, capillary convective PCR (ccPCR)) and commercial systems are included. Amplification curves were generated by helicase dependent amplification (HDA), ccPCR or PCR. As detection system intercalating dyes (EvaGreen, SYBR Green) and hydrolysis probes (TaqMan) were used. ","Published":"2015-04-10","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"ChIPtest","Version":"1.0","Title":"Nonparametric Methods for Identifying Differential Enrichment\nRegions with ChIP-Seq Data","Description":"Nonparametric Tests to identify the differential enrichment region for two conditions or time-course ChIP-seq data. It includes: data preprocessing function, estimation of a small constant used in hypothesis testing, a kernel-based two sample nonparametric test, two assumption-free two sample nonparametric test.","Published":"2016-07-20","License":"GPL (>= 2.15.1)","snapshot_date":"2017-06-23"}
{"Package":"CHMM","Version":"0.1.0","Title":"Coupled Hidden Markov Models","Description":"An exact and a variational inference for\n coupled Hidden Markov Models applied to the joint detection of copy number variations.","Published":"2017-04-19","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"chngpt","Version":"2016.7-31","Title":"Change Point Regression","Description":"Change point regression models are also called two-phase regression, break-point regression, split-point regression, structural change models and threshold regression models. Hypothesis testing in change point logistic regression with or without interaction terms. Several options are provided for testing in models with interaction, including a maximum of likelihood ratios test that determines p-value through Monte Carlo. Estimation under change point model is also included, but less developed at this point.","Published":"2016-07-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"CHNOSZ","Version":"1.1.0","Title":"Chemical Thermodynamics and Activity Diagrams","Description":"An integrated set of tools for thermodynamic calculations in compositional\n biology and geochemistry. Thermodynamic properties are taken from a database for minerals\n and inorganic and organic aqueous species including biomolecules, or from amino acid\n group additivity for proteins. High-temperature properties are calculated using the\n revised Helgeson-Kirkham-Flowers equations of state for aqueous species. Functions are\n provided to define a system using basis species, automatically balance reactions,\n calculate the chemical affinities of reactions for selected species, and plot the results\n on potential diagrams or equilibrium activity diagrams. Experimental features are\n available to calculate activity coefficients for aqueous species or for multidimensional\n optimization of thermodynamic variables using an objective function.","Published":"2017-05-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"ChocoLattes","Version":"0.1.0","Title":"Processing Data from Lattes CV Files","Description":"Processes data from Lattes CV \n () XML files. Extract, condition, and plot \n lists of journal and conference papers, book chapters, books, \n and more.","Published":"2017-04-29","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"choiceDes","Version":"0.9-1","Title":"Design Functions for Choice Studies","Description":"This package consists of functions to design DCMs and other types of choice \n studies (including MaxDiff and other tradeoffs)","Published":"2014-11-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"ChoiceModelR","Version":"1.2","Title":"Choice Modeling in R","Description":"Implements an MCMC algorithm to estimate a hierarchical\n multinomial logit model with a normal heterogeneity\n distribution. The algorithm uses a hybrid Gibbs Sampler with a\n random walk metropolis step for the MNL coefficients for each\n unit. Dependent variable may be discrete or continuous.\n Independent variables may be discrete or continuous with\n optional order constraints. Means of the distribution of\n heterogeneity can optionally be modeled as a linear function of\n unit characteristics variables.","Published":"2012-11-20","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"choplump","Version":"1.0-0.4","Title":"Choplump tests","Description":"Choplump Tests are Permutation Tests for Comparing Two Groups with Some Positive but Many Zero Responses","Published":"2014-11-24","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"chopthin","Version":"0.2.1","Title":"The Chopthin Resampler","Description":"Resampling is a standard step in particle filtering and in\n sequential Monte Carlo. This package implements the chopthin resampler, which\n keeps a bound on the ratio between the largest and the smallest weights after\n resampling.","Published":"2016-01-05","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"ChoR","Version":"0.0-1","Title":"Chordalysis R Package","Description":"\n Learning the structure of graphical models from datasets with thousands of variables.\n More information about the research papers detailing the theory behind Chordalysis is available at\n (KDD 2016, SDM 2015, ICDM 2014, ICDM 2013).\n The R package development site is .","Published":"2017-02-20","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"chords","Version":"0.95.4","Title":"Estimation in Respondent Driven Samples","Description":"Maximum likelihood estimation in respondent driven samples.","Published":"2017-01-21","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"choroplethr","Version":"3.6.1","Title":"Simplify the Creation of Choropleth Maps in R","Description":"Choropleths are thematic maps where geographic regions, such as\n states, are colored according to some metric, such as the number of people\n who live in that state. This package simplifies this process by 1.\n Providing ready-made functions for creating choropleths of common maps. 2.\n Providing data and API connections to interesting data sources for making\n choropleths. 3. Providing a framework for creating choropleths from\n arbitrary shapefiles. 4. Overlaying those maps over reference maps from\n Google Maps. ","Published":"2017-04-16","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"choroplethrAdmin1","Version":"1.1.1","Title":"Contains an Administrative-Level-1 Map of the World","Description":"Contains an administrative-level-1 map of the world.\n Administrative-level-1 is the generic term for the largest sub-national\n subdivision of a country. This package was created for use with the\n choroplethr package.","Published":"2017-02-22","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"choroplethrMaps","Version":"1.0.1","Title":"Contains Maps Used by the 'choroplethr' Package","Description":"Contains 3 maps. 1) US States 2) US Counties 3) Countries of the\n world.","Published":"2017-01-31","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"chromer","Version":"0.1","Title":"Interface to Chromosome Counts Database API","Description":"A programmatic interface to the Chromosome Counts Database\n (http://ccdb.tau.ac.il/). This package is part of the rOpenSci suite\n (http://ropensci.org)","Published":"2015-01-13","License":"CC0","snapshot_date":"2017-06-23"}
{"Package":"chromoR","Version":"1.0","Title":"Analysis of chromosomal interactions data (correction,\nsegmentation and comparison)","Description":"chromoR provides users with a statistical pipeline for analysing chromosomal interactions data (Hi-C data).It combines wavelet methods and a Bayesian approach for correction (bias and noise) and comparison (detecting significant changes between Hi-C maps) of Hi-C contact maps.In addition, it also support detection of change points in 1D Hi-C contact profiles.","Published":"2014-02-11","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"chron","Version":"2.3-50","Title":"Chronological Objects which can Handle Dates and Times","Description":"Provides chronological objects which can handle dates and times.","Published":"2017-02-20","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"CHsharp","Version":"0.4","Title":"Choi and Hall Style Data Sharpening","Description":"Functions for use in perturbing data prior to use of nonparametric smoothers\n and clustering. ","Published":"2015-10-16","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"chunked","Version":"0.3","Title":"Chunkwise Text-File Processing for 'dplyr'","Description":"Text data can be processed chunkwise using 'dplyr' commands. These\n are recorded and executed per data chunk, so large files can be processed with\n limited memory using the 'LaF' package.","Published":"2016-06-24","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"CIAAWconsensus","Version":"1.1","Title":"Isotope Ratio Meta-Analysis","Description":"Calculation of consensus values for atomic weights, isotope amount ratios, and isotopic abundances with the associated uncertainties using multivariate meta-regression approach for consensus building.","Published":"2016-12-31","License":"Unlimited","snapshot_date":"2017-06-23"}
{"Package":"CIDnetworks","Version":"0.8.1","Title":"Generative Models for Complex Networks with Conditionally\nIndependent Dyadic Structure","Description":"Generative models for complex networks with conditionally independent dyadic structure. Now supports directed arcs!","Published":"2015-04-08","License":"GPL (> 3)","snapshot_date":"2017-06-23"}
{"Package":"CIFsmry","Version":"1.0.1.1","Title":"Weighted summary of cumulative incidence functions","Description":"Estimate of cumulative incidence function in two samples. Provide weighted summary statistics based on various methods and weights. ","Published":"2016-07-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"cifti","Version":"0.4.2","Title":"Toolbox for Connectivity Informatics Technology Initiative\n('CIFTI') Files","Description":"Functions for the input/output and visualization of\n medical imaging data in the form of 'CIFTI' files \n .","Published":"2017-05-05","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"cin","Version":"0.1","Title":"Causal Inference for Neuroscience","Description":"Many experiments in neuroscience involve randomized and fast stimulation while the continuous outcome measures respond at much slower time scale, for example event-related fMRI. This package provide valid statistical tools with causal interpretation under these challenging settings, without imposing model assumptions.","Published":"2011-12-30","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"CINID","Version":"1.2","Title":"Curculionidae INstar IDentification","Description":"This package provides functions to compute a method for identifying the instar of Curculionid larvae from the observed distribution of the headcapsule size of mature larvae.","Published":"2014-10-07","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"}
{"Package":"CINOEDV","Version":"2.0","Title":"Co-Information based N-Order Epistasis Detector and Visualizer","Description":"Detecting and visualizing nonlinear interaction effects of single nucleotide polymorphisms or epistatic interactions, especially high-order epistatic interactions, are important topics in bioinformatics because of their significant mathematical and computational challenges. We present CINOEDV (Co-Information based N-Order Epistasis Detector and Visualizer) for detecting, visualizing, and analyzing high-order epistatic interactions by introducing virtual vertices into the construction of a hypergraph. CINOEDV was developed as an alternative to existing software to build a global picture of epistatic interactions and unexpected high-order epistatic interactions, which might provide useful clues for understanding the underlying genetic architecture of complex diseases.","Published":"2014-11-27","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"cir","Version":"2.0.0","Title":"Centered Isotonic Regression and Dose-Response Utilities","Description":"Isotonic regression (IR), as well as a great small-sample improvement to IR called\n CIR, interval estimates for both, and additional utilities.","Published":"2017-03-16","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"CircE","Version":"1.1","Title":"Circumplex models Estimation","Description":"This package contains functions for fitting circumplex\n structural models for correlation matrices (with negative\n correlation) by the method of maximum likelihood.","Published":"2014-09-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"circlize","Version":"0.4.0","Title":"Circular Visualization","Description":"Circular layout is an efficient way for the visualization of huge \n amounts of information. Here this package provides an implementation \n of circular layout generation in R as well as an enhancement of available \n software. The flexibility of the package is based on the usage of low-level \n graphics functions such that self-defined high-level graphics can be easily \n implemented by users for specific purposes. Together with the seamless \n connection between the powerful computational and visual environment in R, \n it gives users more convenience and freedom to design figures for \n better understanding complex patterns behind multiple dimensional data.","Published":"2017-05-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"CircMLE","Version":"0.1.0","Title":"Maximum Likelihood Analysis of Circular Data","Description":"A series of wrapper functions to\n implement the 10 maximum likelihood models of animal orientation\n described by Schnute and Groot (1992) . The\n functions also include the ability to use different optimizer\n methods and calculate various model selection metrics (i.e., AIC,\n AICc, BIC).","Published":"2017-05-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"CircNNTSR","Version":"2.2","Title":"Statistical Analysis of Circular Data using Nonnegative\nTrigonometric Sums (NNTS) Models","Description":"Includes functions for the analysis of circular data using distributions based on Nonnegative Trigonometric Sums (NNTS). The package includes functions for calculation of densities and distributions, for the estimation of parameters, for plotting and more.","Published":"2016-05-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"CircOutlier","Version":"3.2.3","Title":"Detection of Outliers in Circular-Circular Regression","Description":"Detection of outliers in circular-circular regression models, modifying its and estimating of models parameters.","Published":"2016-01-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"CircStats","Version":"0.2-4","Title":"Circular Statistics, from \"Topics in circular Statistics\" (2001)","Description":"Circular Statistics, from \"Topics in circular Statistics\"\n (2001) S. Rao Jammalamadaka and A. SenGupta, World Scientific.","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"circular","Version":"0.4-7","Title":"Circular Statistics","Description":"Circular Statistics, from \"Topics in circular Statistics\" (2001) S. Rao Jammalamadaka and A. SenGupta, World Scientific.","Published":"2013-11-08","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"CircularDDM","Version":"0.0.9","Title":"Circular Drift-Diffusion Model","Description":"Circular drift-diffusion model for continuous reports.","Published":"2017-04-04","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"cIRT","Version":"1.2.1","Title":"Choice Item Response Theory","Description":"Jointly model the accuracy of cognitive responses and item choices\n within a bayesian hierarchical framework as described by Culpepper and\n Balamuta (2015) . In addition, the package\n contains the datasets used within the analysis of the paper.","Published":"2017-04-26","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"cit","Version":"2.1","Title":"Causal Inference Test","Description":"A likelihood-based hypothesis testing approach is implemented for assessing causal mediation. For example, it could be used to test for mediation of a known causal association between a DNA variant, the 'instrumental variable', and a clinical outcome or phenotype by gene expression or DNA methylation, the potential mediator. Another example would be testing mediation of the effect of a drug on a clinical outcome by the molecular target. The hypothesis test generates a p-value or permutation-based FDR value with confidence intervals to quantify uncertainty in the causal inference. The outcome can be represented by either a continuous or binary variable, the potential mediator is continuous, and the instrumental variable can be continuous or binary and is not limited to a single variable but may be a design matrix representing multiple variables.","Published":"2016-11-15","License":"Artistic-2.0","snapshot_date":"2017-06-23"}
{"Package":"CITAN","Version":"2015.12-2","Title":"CITation ANalysis Toolpack","Description":"Supports quantitative\n research in scientometrics and bibliometrics. Provides\n various tools for preprocessing bibliographic\n data retrieved, e.g., from Elsevier's SciVerse Scopus,\n computing bibliometric impact of individuals,\n or modeling many phenomena encountered in the social sciences.","Published":"2015-12-13","License":"LGPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"citbcmst","Version":"1.0.4","Title":"CIT Breast Cancer Molecular SubTypes Prediction","Description":"This package implements the approach to assign tumor gene expression dataset to the 6 CIT Breast Cancer Molecular Subtypes described in Guedj et al 2012.","Published":"2014-01-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"citccmst","Version":"1.0.2","Title":"CIT Colon Cancer Molecular SubTypes Prediction","Description":"This package implements the approach to assign tumor gene expression dataset to the 6 CIT Colon Cancer Molecular Subtypes described in Marisa et al 2013.","Published":"2014-01-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"Cite","Version":"0.1.0","Title":"An RStudio Addin to Insert BibTex Citation in Rmarkdown\nDocuments","Description":"Contain an RStudio addin to insert BibTex citation in Rmarkdown documents with a minimal user interface.","Published":"2016-07-04","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"citr","Version":"0.2.0","Title":"'RStudio' Add-in to Insert Markdown Citations","Description":"Functions and an 'RStudio' add-in that search a 'BibTeX'-file to create and\n insert formatted Markdown citations into the current document.","Published":"2016-09-20","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"CityPlot","Version":"2.0","Title":"Visualization of structure and contents of a database","Description":"Input: a csv-file for each database table and a\n controlfile describing relations between tables. Output: An\n extended ER diagram","Published":"2012-05-07","License":"LGPL","snapshot_date":"2017-06-23"}
{"Package":"CityWaterBalance","Version":"0.1.0","Title":"Track Flows of Water Through an Urban System","Description":"Retrieves data and estimates unmeasured flows of water through the \n urban network. Any city may be modeled with preassembled data, but data for \n US cities can be gathered via web services using this package and dependencies \n 'geoknife' and 'dataRetrieval'. ","Published":"2017-06-16","License":"CC0","snapshot_date":"2017-06-23"}
{"Package":"cjoint","Version":"2.0.4","Title":"AMCE Estimator for Conjoint Experiments","Description":"An R implementation of the Average Marginal Component-specific Effects (AMCE) estimator presented in Hainmueller, J., Hopkins, D., and Yamamoto T. (2014) Causal Inference in Conjoint Analysis: Understanding Multi-Dimensional Choices via Stated Preference Experiments. Political Analysis 22(1):1-30.","Published":"2016-03-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"ck37r","Version":"1.0.0","Title":"Chris Kennedy's R Toolkit","Description":"Toolkit for statistical, machine learning, and targeted learning\n analyses. Functionality includes loading & auto-installing packages,\n standardizing datasets, creating missingness indicators, imputing missing\n values, creating multicore or multinode clusters, automatic SLURM integration,\n enhancing SuperLearner and TMLE with automatic parallelization, and many other\n SuperLearner analysis & plotting enhancements.","Published":"2017-06-03","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"ckanr","Version":"0.1.0","Title":"Client for the Comprehensive Knowledge Archive Network ('CKAN')\n'API'","Description":"Client for 'CKAN' 'API' (http://ckan.org/). Includes interface\n to 'CKAN' 'APIs' for search, list, show for packages, organizations, and\n resources. In addition, provides an interface to the 'datastore' 'API'.","Published":"2015-10-22","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"Ckmeans.1d.dp","Version":"4.2.0","Title":"Optimal and Fast Univariate Clustering","Description":"A fast dynamic programming algorithmic framework to\n achieve optimal univariate k-means, k-median, and k-segments\n clustering. Minimizing the sum of respective within-cluster\n distances, the algorithms guarantee optimality and\n reproducibility. Their advantage over heuristic clustering\n algorithms in efficiency and accuracy is increasingly pronounced\n as the number of clusters k increases. Weighted k-means and\n unweighted k-segments algorithms can also optimally segment time\n series and perform peak calling. An auxiliary function generates\n histograms that are adaptive to patterns in data. This package\n provides a powerful alternative to heuristic methods for\n univariate data analysis.","Published":"2017-05-30","License":"LGPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"cladoRcpp","Version":"0.14.4","Title":"C++ implementations of phylogenetic cladogenesis calculations","Description":"This package implements in C++/Rcpp various cladogenesis-related calculations that are slow in pure R. These include the calculation of the probability of various scenarios for the inheritance of geographic range at the divergence events on a phylogenetic tree, and other calculations necessary for models which are not continuous-time markov chains (CTMC), but where change instead occurs instantaneously at speciation events. Typically these models must assess the probability of every possible combination of (ancestor state, left descendent state, right descendent state). This means that there are up to (# of states)^3 combinations to investigate, and in biogeographical models, there can easily be hundreds of states, so calculation time becomes an issue. C++ implementation plus clever tricks (many combinations can be eliminated a priori) can greatly speed the computation time over naive R implementations. CITATION INFO: This package is the result of my Ph.D. research, please cite the package if you use it! Type: citation(package=\"cladoRcpp\") to get the citation information.","Published":"2014-05-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"clampSeg","Version":"1.0-1","Title":"Idealisation of Patch Clamp Recordings","Description":"Allows for idealisation of patch clamp recordings by implementing the non-parametric JUmp Local\n dEconvolution Segmentation filter JULES.","Published":"2017-06-07","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"ClamR","Version":"2.1-1","Title":"Time Series Modeling for Climate Change Proxies","Description":"Implementation of the Wilkinson and Ivany (2002) approach to paleoclimate analysis, applied to isotope data extracted from clams.","Published":"2015-07-29","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"clarifai","Version":"0.4.2","Title":"Access to Clarifai API","Description":"Get description of images from Clarifai API. For more information,\n see . Clarifai uses a large deep learning cloud to come\n up with descriptive labels of the things in an image. It also provides how\n confident it is about each of the labels.","Published":"2017-04-12","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"class","Version":"7.3-14","Title":"Functions for Classification","Description":"Various functions for classification, including k-nearest\n neighbour, Learning Vector Quantization and Self-Organizing Maps.","Published":"2015-08-30","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"}
{"Package":"classGraph","Version":"0.7-5","Title":"Construct Graphs of S4 Class Hierarchies","Description":"Construct directed graphs of S4 class hierarchies and\n visualize them. In general, these graphs typically are DAGs (directed\n acyclic graphs), often simple trees in practice.","Published":"2015-09-01","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"classifierplots","Version":"1.3.3","Title":"Generates a Visualization of Classifier Performance as a Grid of\nDiagnostic Plots","Description":"\n Generates a visualization of binary classifier performance as a grid of\n diagnostic plots with just one function call. Includes ROC curves,\n prediction density, accuracy, precision, recall and calibration plots, all using\n ggplot2 for easy modification.\n Debug your binary classifiers faster and easier!","Published":"2017-04-06","License":"BSD 3-clause License + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"classifly","Version":"0.4","Title":"Explore classification models in high dimensions","Description":"Given $p$-dimensional training data containing\n $d$ groups (the design space), a classification\n algorithm (classifier) predicts which group new data\n belongs to. Generally the input to these algorithms is\n high dimensional, and the boundaries between groups\n will be high dimensional and perhaps curvilinear or\n multi-faceted. This package implements methods for\n understanding the division of space between the groups.","Published":"2014-04-23","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"classiFunc","Version":"0.1.0","Title":"Classification of Functional Data","Description":"Efficient implementation of k-nearest neighbor estimator and a kernel estimator for functional data classification.","Published":"2017-05-29","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"classify","Version":"1.3","Title":"Classification Accuracy and Consistency under IRT models","Description":"IRT classification uses the probability that candidates of\n a given ability, will answer correctly questions of a specified\n difficulty to calculate the probability of their achieving\n every possible score in a test. Due to the IRT assumption of\n conditional independence (that is every answer given is assumed\n to depend only on the latent trait being measured) the\n probability of candidates achieving these potential scores can\n be expressed by multiplication of probabilities for item\n responses for a given ability. Once the true score and the\n probabilities of achieving all other scores have been\n determined for a candidate the probability of their score lying\n in the same category as that of their true score\n (classification accuracy), or the probability of consistent\n classification in a category over administrations\n (classification consistency), can be calculated.","Published":"2014-08-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"classInt","Version":"0.1-24","Title":"Choose Univariate Class Intervals","Description":"Selected commonly used methods for choosing univariate class intervals for mapping or other graphics purposes.","Published":"2017-04-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"classyfire","Version":"0.1-2","Title":"Robust multivariate classification using highly optimised SVM\nensembles","Description":"A collection of functions for the creation and application of highly optimised, robustly evaluated ensembles of support vector machines (SVMs). The package takes care of training individual SVM classifiers using a fast parallel heuristic algorithm, and combines individual classifiers into ensembles. Robust metrics of classification performance are offered by bootstrap resampling and permutation testing. ","Published":"2015-01-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"cld2","Version":"1.1","Title":"Google's Compact Language Detector 2","Description":"Bindings to Google's C++ library Compact Language Detector 2\n (see for more information). Probabilistically\n detects over 80 languages in plain text or HTML. For mixed-language input it returns the\n top three detected languages and their approximate proportion of the total classified \n text bytes (e.g. 80% English and 20% French out of 1000 bytes). There is also a 'cld3'\n package on CRAN which uses a neural network model instead.","Published":"2017-06-10","License":"Apache License 2.0","snapshot_date":"2017-06-23"}
{"Package":"cld3","Version":"1.0","Title":"Google's Compact Language Detector 3","Description":"Google's Compact Language Detector 3 is a neural network model for language \n identification and the successor of 'cld2' (available from CRAN). The algorithm is still\n experimental and takes a novel approach to language detection with different properties\n and outcomes. It can be useful to combine this with the Bayesian classifier results \n from 'cld2'. See for more information.","Published":"2017-06-07","License":"Apache License 2.0","snapshot_date":"2017-06-23"}
{"Package":"cleanEHR","Version":"0.1","Title":"The Critical Care Clinical Data Processing Tools","Description":"\n A toolset to deal with the Critical Care Health Informatics Collaborative\n dataset. It is created to address various data reliability and accessibility\n problems of electronic healthcare records (EHR). It provides a unique\n platform which enables data manipulation, transformation, reduction,\n anonymisation, cleaning and validation.","Published":"2017-02-02","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"cleangeo","Version":"0.2-1","Title":"Cleaning Geometries from Spatial Objects","Description":"\n Provides a set of utility tools to inspect spatial objects, facilitate\n handling and reporting of topology errors and geometry validity issues.\n Finally, it provides a geometry cleaner that will fix all geometry problems,\n and eliminate (at least reduce) the likelihood of having issues when doing\n spatial data processing.","Published":"2016-11-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"cleanNLP","Version":"1.9.0","Title":"A Tidy Data Model for Natural Language Processing","Description":"Provides a set of fast tools for converting a textual corpus into a set of normalized\n tables. Users may make use of a Python back end with 'spaCy' \n or the Java back end 'CoreNLP' . A minimal back\n end with no external dependencies is also provided. Exposed annotation tasks include\n tokenization, part of speech tagging, named entity recognition, entity linking, sentiment\n analysis, dependency parsing, coreference resolution, and word embeddings. Summary\n statistics regarding token unigram, part of speech tag, and dependency type frequencies\n are also included to assist with analyses.","Published":"2017-05-27","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"cleanr","Version":"1.1.3","Title":"Helps You to Code Cleaner","Description":"Check your R code for some of the most common layout flaws.\n Many tried to teach us how to write code less dreadful, be it implicitly as\n B. W. Kernighan and D. M. Ritchie (1988) \n in 'The C Programming Language' did, be it\n explicitly as R.C. Martin (2008) in\n 'Clean Code: A Handbook of Agile Software Craftsmanship' did.\n So we should check our code for files too long or wide, functions with too\n many lines, too wide lines, too many arguments or too many levels of \n nesting.\n Note: This is not a static code analyzer like pylint or the like. Checkout\n https://github.com/jimhester/lintr instead.","Published":"2017-01-18","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"clere","Version":"1.1.4","Title":"Simultaneous Variables Clustering and Regression","Description":"Implements an empirical Bayes approach for simultaneous variable clustering and regression. This version also (re)implements in C++ an R script proposed by Howard Bondell that fits the Pairwise Absolute Clustering and Sparsity (PACS) methodology (see Sharma et al (2013) ).","Published":"2016-03-22","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"clhs","Version":"0.5-6","Title":"Conditioned Latin Hypercube Sampling","Description":"Conditioned Latin hypercube sampling, as published by Minasny and McBratney (2006) . This method proposes to stratify sampling in presence of ancillary data. An extension of this method, which propose to associate a cost to each individual and take it into account during the optimisation process, is also proposed (Roudier et al., 2012, ).","Published":"2016-10-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"ClickClust","Version":"1.1.5","Title":"Model-Based Clustering of Categorical Sequences","Description":"Clustering categorical sequences by means of finite mixtures with Markov model components is the main utility of ClickClust. The package also allows detecting blocks of equivalent states by forward and backward state selection procedures.","Published":"2016-10-23","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"clickR","Version":"0.2.0","Title":"Fix Data and Create Report Tables from Different Objects","Description":"Fixes data errors in numerical, factor and date variables, checks data quality and performs report tables from models and summaries.","Published":"2017-02-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"clickstream","Version":"1.2.1","Title":"Analyzes Clickstreams Based on Markov Chains","Description":"A set of tools to read, analyze and write lists of click sequences\n on websites (i.e., clickstream). A click can be represented by a number,\n character or string. Clickstreams can be modeled as zero- (only computes\n occurrence probabilities), first- or higher-order Markov chains.","Published":"2017-05-18","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"clifro","Version":"3.1-4","Title":"Easily Download and Visualise Climate Data from CliFlo","Description":"CliFlo is a web portal to the New Zealand National Climate\n Database and provides public access (via subscription) to around 6,500\n various climate stations (see for more\n information). Collating and manipulating data from CliFlo\n (hence clifro) and importing into R for further analysis, exploration and\n visualisation is now straightforward and coherent. The user is required to\n have an internet connection, and a current CliFlo subscription (free) if\n data from stations, other than the public Reefton electronic weather\n station, is sought.","Published":"2017-04-21","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"clikcorr","Version":"1.0","Title":"Censoring Data and Likelihood-Based Correlation Estimation","Description":"A profile likelihood based method of estimation and inference on the correlation coefficient of bivariate data with different types of censoring and missingness.","Published":"2016-06-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"climatol","Version":"3.0","Title":"Climate Tools (Series Homogenization and Derived Products)","Description":"Functions to homogenize climatological series and to produce climatological summaries and grids from the homogenized results, plus functions to draw wind-roses and Walter&Lieth diagrams.","Published":"2016-08-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"climbeR","Version":"0.0.1","Title":"Calculate Average Minimal Depth of a Maximal Subtree for\n'ranger' Package Forests","Description":"Calculates first, and second order, average minimal depth of a\n maximal subtree for a forest object produced by the R 'ranger'\n package. This variable importance metric is implemented as described in\n Ishwaran et. al. (\"High-Dimensional Variable Selection for Survival Data\",\n March 2010, ).","Published":"2016-11-19","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"ClimClass","Version":"2.1.0","Title":"Climate Classification According to Several Indices","Description":"Classification of climate according to Koeppen - Geiger, of aridity\n indices, of continentality indices, of water balance after Thornthwaite, of\n viticultural bioclimatic indices. Drawing climographs: Thornthwaite, Peguy,\n Bagnouls-Gaussen.","Published":"2016-08-04","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"climdex.pcic","Version":"1.1-6","Title":"PCIC Implementation of Climdex Routines","Description":"PCIC's implementation of Climdex routines for computation of\n extreme climate indices.","Published":"2015-06-30","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"ClimDown","Version":"1.0.2","Title":"Climate Downscaling Library for Daily Climate Model Output","Description":"A suite of routines for downscaling coarse scale global\n climate model (GCM) output to a fine spatial resolution. Includes\n Bias-Corrected Spatial Downscaling (BCDS), Constructed Analogues\n (CA), Climate Imprint (CI), and Bias Correction/Constructed\n Analogues with Quantile mapping reordering (BCCAQ). Developed by\n the the Pacific Climate Impacts Consortium (PCIC), Victoria,\n British Columbia, Canada.","Published":"2016-12-02","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"clime","Version":"0.4.1","Title":"Constrained L1-minimization for Inverse (covariance) Matrix\nEstimation","Description":"A robust constrained L1 minimization method for estimating\n a large sparse inverse covariance matrix (aka precision\n matrix), and recovering its support for building graphical\n models. The computation uses linear programming.","Published":"2012-05-06","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"climextRemes","Version":"0.1.3","Title":"Tools for Analyzing Climate Extremes","Description":"Functions for fitting GEV and POT (via point process fitting)\n models for extremes in climate data, providing return values, return\n probabilities, and return periods for stationary and nonstationary models.\n Also provides differences in return values and differences in log return\n probabilities for contrasts of covariate values. Functions for estimating risk\n ratios for event attribution analyses, including uncertainty. Under the hood,\n many of the functions use functions from extRemes, including for fitting the\n statistical models.","Published":"2017-04-22","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"climtrends","Version":"1.0.6","Title":"Statistical Methods for Climate Sciences","Description":"Absolute homogeneity tests SNHT absolute 1-breaks, 1-break, \n SD different from 1, 2-breaks, Buishand, Pettitt, von Neumann ratio and \n ratio-rank, Worsley, and Craddock, Relative homogeneity tests SNHT \n absolute 1-breaks, 1-break SD different from 1, 2-breaks, Peterson \n and Easterling, and Vincent, Differences in scale between two groups Siegel–Tukey, \n Create reference time series mean, weights/correlation, finding outliers Grubbs, \n ESD, MAD, Tietjen Moore, Hampel, etc.","Published":"2016-05-26","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"climwin","Version":"1.1.0","Title":"Climate Window Analysis","Description":"Contains functions to detect and visualise periods of climate\n sensitivity (climate windows) for a given biological response.","Published":"2016-12-20","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"clinfun","Version":"1.0.14","Title":"Clinical Trial Design and Data Analysis Functions","Description":"Utilities to make your clinical collaborations easier if not\n fun. It contains functions for designing studies such as Simon\n 2-stage and group sequential designs and for data analysis such\n as Jonckheere-Terpstra test and estimating survival quantiles.","Published":"2017-04-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"clinPK","Version":"0.9.0","Title":"Clinical Pharmacokinetics Toolkit","Description":"Calculates equations commonly used in clinical pharmacokinetics and clinical pharmacology, such as equations for dose individualization, compartmental pharmacokinetics, drug exposure, anthropomorphic calculations, clinical chemistry, and conversion of common clinical parameters. Where possible and relevant, it provides multiple published and peer-reviewed equations within the respective R function.","Published":"2017-06-19","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"clinsig","Version":"1.2","Title":"Clinical Significance Functions","Description":"Functions for calculating clinical significance.","Published":"2016-07-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"clinUtiDNA","Version":"1.0","Title":"Clinical Utility of DNA Testing","Description":"This package provides the estimation of an index measuring\n the clinical utility of DNA testing in the context of\n gene-environment interactions on a disease. The corresponding\n gene-environment interaction effect on the additive scale can\n also be obtained. The estimation is based on case-control or\n cohort data. The method was developed by Nguyen et al. 2013.","Published":"2013-04-10","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"clipr","Version":"0.3.3","Title":"Read and Write from the System Clipboard","Description":"Simple utility functions to read from and write to the Windows,\n OS X, and X11 clipboards.","Published":"2017-06-19","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"clisymbols","Version":"1.2.0","Title":"Unicode Symbols at the R Prompt","Description":"A small subset of Unicode symbols, that are useful\n when building command line applications. They fall back to\n alternatives on terminals that do not support Unicode.\n Many symbols were taken from the 'figures' 'npm' package\n (see ).","Published":"2017-05-21","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"CLME","Version":"2.0-6","Title":"Constrained Inference for Linear Mixed Effects Models","Description":"Estimation and inference for linear models where some or all of the\n fixed-effects coefficients are subject to order restrictions. This package uses\n the robust residual bootstrap methodology for inference, and can handle some\n structure in the residual variance matrix.","Published":"2016-11-08","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"clogitboost","Version":"1.1","Title":"Boosting Conditional Logit Model","Description":"A set of functions to fit a boosting conditional logit model.","Published":"2015-12-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"clogitL1","Version":"1.4","Title":"Fitting exact conditional logistic regression with lasso and\nelastic net penalties","Description":"Tools for the fitting and cross validation of exact conditional logistic regression models with lasso and elastic net penalties. Uses cyclic coordinate descent and warm starts to compute the entire path efficiently.","Published":"2014-06-17","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"clogitLasso","Version":"1.0.1","Title":"Lasso Estimation of Conditional Logistic Regression Models for\nMatched Case-Control Studies","Description":"Fit a sequence of conditional logistic regression models with lasso, for small to large sized samples.","Published":"2016-09-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"cloudUtil","Version":"0.1.12","Title":"Cloud Utilization Plots","Description":"Provides means of plots for comparing utilization data of compute systems.","Published":"2016-06-09","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"clpAPI","Version":"1.2.7","Title":"R Interface to C API of COIN-OR Clp","Description":"R Interface to C API of COIN-OR Clp, depends on COIN-OR Clp Version >= 1.12.0.","Published":"2016-04-19","License":"GPL-3 | file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"CLSOCP","Version":"1.0","Title":"A smoothing Newton method SOCP solver","Description":"This package provides and implementation of a one step\n smoothing newton method for the solution of second order cone\n programming problems, originally described by Xiaoni Chi and\n Sanyang Liu.","Published":"2011-07-23","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"clttools","Version":"1.3","Title":"Central Limit Theorem Experiments (Theoretical and Simulation)","Description":"Central limit theorem experiments presented by data frames or plots. Functions include generating theoretical sample space, corresponding probability, and simulated results as well.","Published":"2016-02-19","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"clubSandwich","Version":"0.2.2","Title":"Cluster-Robust (Sandwich) Variance Estimators with Small-Sample\nCorrections","Description":"Provides several cluster-robust variance estimators\n (i.e., sandwich estimators) for ordinary and weighted least squares linear\n regression models, including the bias-reduced linearization estimator introduced \n by Bell and McCaffrey (2002) \n and developed further by Pustejovsky and Tipton (2016) .\n The package includes functions for estimating the variance-\n covariance matrix and for testing single- and multiple-contrast hypotheses\n based on Wald test statistics. Tests of single regression coefficients use\n Satterthwaite or saddle-point corrections. Tests of multiple-contrast hypotheses \n use an approximation to Hotelling's T-squared distribution. Methods are\n provided for a variety of fitted models, including lm(), plm() (from package 'plm'),\n gls() and lme() (from 'nlme'), robu() (from 'robumeta'), and rma.uni() and rma.mv() (from\n 'metafor').","Published":"2016-12-01","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"clue","Version":"0.3-53","Title":"Cluster Ensembles","Description":"CLUster Ensembles.","Published":"2017-01-15","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"ClueR","Version":"1.2","Title":"Cluster Evaluation","Description":"CLUster Evaluation (CLUE) is a computational method for identifying optimal number of clusters in a given time-course dataset clustered by cmeans or kmeans algorithms and subsequently identify key kinases or pathways from each cluster. Its implementation in R is called ClueR. See Readme on for more details.","Published":"2017-04-30","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"clues","Version":"0.5.9","Title":"Clustering Method Based on Local","Description":"We developed the clues R package to provide functions \n for automatically estimating the number of clusters and \n getting the final cluster partition without any input \n parameter except the stopping rule for convergence. \n The package also provides functions to\n evaluate and compare the performances of partitions of a data\n set both numerically and graphically.","Published":"2016-10-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"CluMix","Version":"2.0","Title":"Clustering and Visualization of Mixed-Type Data","Description":"Provides utilities for clustering subjects and variables of mixed data types. Similarities between subjects are measured by Gower's general similarity coefficient with an extension of Podani for ordinal variables. Similarities between variables are assessed by combination of appropriate measures of association for different pairs of data types. Alternatively, variables can also be clustered by the 'ClustOfVar' approach. The main feature of the package is the generation of a mixed-data heatmap. For visualizing similarities between either subjects or variables, a heatmap of the corresponding distance matrix can be drawn. Associations between variables can be explored by a 'confounderPlot', which allows visual detection of possible confounding, collinear, or surrogate factors for some variables of primary interest. Distance matrices and dendrograms for subjects and variables can be derived and used for further visualizations and applications. This work was supported by BMBF grant 01ZX1609B, Germany.","Published":"2017-05-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"clusrank","Version":"0.5-2","Title":"Wilcoxon Rank Sum Test for Clustered Data","Description":"Non-parametric tests (Wilcoxon rank sum test and Wilcoxon signed rank test) for clustered data.","Published":"2017-01-17","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"clust.bin.pair","Version":"0.0.6","Title":"Statistical Methods for Analyzing Clustered Matched Pair Data","Description":"Tests, utilities, and case studies for analyzing significance in clustered binary matched-pair\n data. The central function clust.bin.pair uses one of several tests to calculate a Chi-square \n statistic. Implemented are the tests Eliasziw, Obuchowski, Durkalski, and Yang with McNemar\n included for comparison. The utility functions nested.to.contingency and paired.to.contingency\n convert data between various useful formats. Thyroids and psychiatry are the canonical\n datasets from Obuchowski and Petryshen respectively.","Published":"2016-10-26","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"cluster","Version":"2.0.6","Title":"\"Finding Groups in Data\": Cluster Analysis Extended Rousseeuw et\nal.","Description":"Methods for Cluster analysis. Much extended the original from\n\tPeter Rousseeuw, Anja Struyf and Mia Hubert,\n\tbased on Kaufman and Rousseeuw (1990) \"Finding Groups in Data\".","Published":"2017-03-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"cluster.datasets","Version":"1.0-1","Title":"Cluster Analysis Data Sets","Description":"A collection of data sets for teaching cluster analysis.","Published":"2013-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"ClusterBootstrap","Version":"0.9.3","Title":"Analyze Clustered Data with Generalized Linear Models using the\nCluster Bootstrap","Description":"Provides functionality for the analysis of clustered data using the cluster bootstrap. ","Published":"2017-06-12","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"clusterCrit","Version":"1.2.7","Title":"Clustering Indices","Description":"Compute clustering validation indices.","Published":"2016-05-27","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"ClusteredMutations","Version":"1.0.1","Title":"Location and Visualization of Clustered Somatic Mutations","Description":"Identification and visualization of groups of closely spaced mutations in the DNA sequence of cancer genome. The extremely mutated zones are searched in the symmetric dissimilarity matrix using the anti-Robinson matrix properties. Different data sets are obtained to describe and plot the clustered mutations information. ","Published":"2016-04-29","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"clusterfly","Version":"0.4","Title":"Explore clustering interactively using R and GGobi","Description":"Visualise clustering algorithms with GGobi. Contains both\n general code for visualising clustering results and specific\n visualisations for model-based, hierarchical and SOM clustering.","Published":"2014-04-24","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"clusterGeneration","Version":"1.3.4","Title":"Random Cluster Generation (with Specified Degree of Separation)","Description":"We developed the clusterGeneration package to provide functions \n for generating random clusters, generating random \n covariance/correlation matrices,\n calculating a separation index (data and population version)\n for pairs of clusters or cluster distributions, and 1-D and 2-D\n projection plots to visualize clusters. The package also\n contains a function to generate random clusters based on\n factorial designs with factors such as degree of separation,\n number of clusters, number of variables, number of noisy\n variables.","Published":"2015-02-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"clusterGenomics","Version":"1.0","Title":"Identifying clusters in genomics data by recursive partitioning","Description":"The Partitioning Algorithm based on Recursive Thresholding\n (PART) is used to recursively uncover clusters and subclusters\n in the data. Functionality is also available for visualization\n of the clustering.","Published":"2013-07-02","License":"Artistic-2.0","snapshot_date":"2017-06-23"}
{"Package":"clusterhap","Version":"0.1","Title":"Clustering Genotypes in Haplotypes","Description":"One haplotype is a combination of SNP\n (Single Nucleotide Polymorphisms) within the QTL (Quantitative Trait Loci).\n clusterhap groups together all individuals of a population with the same haplotype.\n Each group contains individual with the same allele in each SNP,\n whether or not missing data. Thus, clusterhap groups individuals,\n that to be imputed, have a non-zero probability of having the same alleles\n in the entire sequence of SNP's. Moreover, clusterhap calculates such\n probability from relative frequencies.","Published":"2016-05-16","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"clustering.sc.dp","Version":"1.0","Title":"Optimal Distance-Based Clustering for Multidimensional Data with\nSequential Constraint","Description":"A dynamic programming algorithm for optimal clustering multidimensional data with sequential constraint. The algorithm minimizes the sum of squares of within-cluster distances. The sequential constraint allows only subsequent items of the input data to form a cluster. The sequential constraint is typically required in clustering data streams or items with time stamps such as video frames, GPS signals of a vehicle, movement data of a person, e-pen data, etc. The algorithm represents an extension of Ckmeans.1d.dp to multiple dimensional spaces. Similarly to the one-dimensional case, the algorithm guarantees optimality and repeatability of clustering. Method clustering.sc.dp can find the optimal clustering if the number of clusters is known. Otherwise, methods findwithinss.sc.dp and backtracking.sc.dp can be used.","Published":"2015-05-04","License":"LGPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"clusternomics","Version":"0.1.1","Title":"Integrative Clustering for Heterogeneous Biomedical Datasets","Description":"Integrative context-dependent clustering for heterogeneous\n biomedical datasets. Identifies local clustering structures in related\n datasets, and a global clusters that exist across the datasets.","Published":"2017-03-14","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"clusterPower","Version":"0.5","Title":"Power calculations for cluster-randomized and cluster-randomized\ncrossover trials","Description":"This package enables researchers to calculate power for cluster-randomized crossover trials by employing a simulation-based approach. A particular study design is specified, with fixed sample sizes for all clusters and an assumed treatment effect, and the empirical power for that study design is calculated by simulating hypothetical datasets.","Published":"2013-12-13","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"ClusterR","Version":"1.0.5","Title":"Gaussian Mixture Models, K-Means, Mini-Batch-Kmeans and\nK-Medoids Clustering","Description":"Gaussian mixture models, k-means, mini-batch-kmeans and k-medoids\n clustering with the option to plot, validate, predict (new data) and estimate the\n optimal number of clusters. The package takes advantage of 'RcppArmadillo' to\n speed up the computationally intensive parts of the functions.","Published":"2017-02-11","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"ClusterRankTest","Version":"1.0","Title":"Rank Tests for Clustered Data","Description":"Nonparametric rank based tests (rank-sum tests and signed-rank tests) for clustered data, especially useful for clusters having informative cluster size and intra-cluster group size.","Published":"2016-04-28","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"}
{"Package":"clusterRepro","Version":"0.5-1.1","Title":"Reproducibility of gene expression clusters","Description":"A function for validating microarry clusters via\n reproducibility","Published":"2009-03-01","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"clusterSEs","Version":"2.4","Title":"Calculate Cluster-Robust p-Values and Confidence Intervals","Description":"Calculate p-values and confidence intervals using cluster-adjusted\n t-statistics (based on Ibragimov and Muller (2010) , pairs cluster bootstrapped t-statistics, and wild cluster bootstrapped t-statistics (the latter two techniques based on Cameron, Gelbach, and Miller (2008) . Procedures are included for use with GLM, ivreg, plm (pooling or fixed effects), and mlogit models.","Published":"2017-05-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"clusterSim","Version":"0.45-2","Title":"Searching for Optimal Clustering Procedure for a Data Set","Description":"Distance measures (GDM1, GDM2,\tSokal-Michener, Bray-Curtis, for symbolic interval-valued data), cluster quality indices (Calinski-Harabasz, Baker-Hubert, Hubert-Levine, Silhouette, Krzanowski-Lai, Hartigan, Gap,\tDavies-Bouldin),\tdata normalization formulas, data generation (typical and non-typical data), HINoV method,\treplication analysis, linear ordering methods, spectral clustering, agreement indices between two partitions, plot functions (for categorical and symbolic interval-valued data).","Published":"2017-03-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"ClusterStability","Version":"1.0.3","Title":"Assessment of Stability of Individual Objects or Clusters in\nPartitioning Solutions","Description":"Allows one to assess the stability of individual objects, clusters \n and whole clustering solutions based on repeated runs of the K-means and K-medoids \n partitioning algorithms.","Published":"2016-02-08","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"clustertend","Version":"1.4","Title":"Check the Clustering Tendency","Description":"Calculate some statistics aiming to help analyzing the clustering tendency of given data. In the first version, Hopkins' statistic is implemented.","Published":"2015-05-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"clusteval","Version":"0.1","Title":"Evaluation of Clustering Algorithms","Description":"An R package that provides a suite of tools to evaluate\n clustering algorithms, clusterings, and individual clusters.","Published":"2012-08-31","License":"MIT","snapshot_date":"2017-06-23"}
{"Package":"ClustGeo","Version":"1.0","Title":"Clustering of Observations with Geographical Constraints","Description":"Functions which allow to integrate geographical constraints in Ward hierarchical clustering. Geographical maps of typologies obtained can be displayed with the use of shapefiles.","Published":"2015-06-23","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"}
{"Package":"clustMD","Version":"1.2.1","Title":"Model Based Clustering for Mixed Data","Description":"Model-based clustering of mixed data (i.e. data which consist of\n continuous, binary, ordinal or nominal variables) using a parsimonious\n mixture of latent Gaussian variable models.","Published":"2017-05-08","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"clustMixType","Version":"0.1-17","Title":"k-Prototypes Clustering for Mixed Variable-Type Data","Description":"Functions to perform k-prototypes partitioning clustering for\n mixed variable-type data according to Z.Huang (1998): Extensions to the k-Means\n Algorithm for Clustering Large Data Sets with Categorical Variables, Data Mining\n and Knowledge Discovery 2, 283-304, .","Published":"2016-11-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"ClustMMDD","Version":"1.0.4","Title":"Variable Selection in Clustering by Mixture Models for Discrete\nData","Description":"An implementation of a variable selection procedure in clustering by mixture models for discrete data (clustMMDD). Genotype data are examples of such data with two unordered observations (alleles) at each locus for diploid individual. The two-fold problem of variable selection and clustering is seen as a model selection problem where competing models are characterized by the number of clusters K, and the subset S of clustering variables. Competing models are compared by penalized maximum likelihood criteria. We considered asymptotic criteria such as Akaike and Bayesian Information criteria, and a family of penalized criteria with penalty function to be data driven calibrated. ","Published":"2016-05-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"ClustOfVar","Version":"0.8","Title":"Clustering of variables","Description":"Cluster analysis of a set of variables. Variables can be\n quantitative, qualitative or a mixture of both.","Published":"2013-12-03","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"}
{"Package":"clustRcompaR","Version":"0.1.0","Title":"Easy Interface for Clustering a Set of Documents and Exploring\nGroup- Based Patterns","Description":"Provides an interface to perform cluster analysis on a corpus of text. Interfaces to \n Quanteda to assemble text corpuses easily. Deviationalizes text vectors prior to clustering \n using technique described by Sherin (Sherin, B. [2013]. A computational study of commonsense science: \n An exploration in the automated analysis of clinical interview data. Journal of the Learning Sciences, \n 22(4), 600-638. Chicago. http://dx.doi.org/10.1080/10508406.2013.836654). Uses cosine similarity as distance\n metric for two stage clustering process, involving Ward's algorithm hierarchical agglomerative clustering, \n and k-means clustering. Selects optimal number of clusters to maximize \"variance explained\" by clusters, \n adjusted by the number of clusters. Provides plotted output of clustering results as well as printed output. \n Assesses \"model fit\" of clustering solution to a set of preexisting groups in dataset.","Published":"2017-01-07","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"clustrd","Version":"1.2.0","Title":"Methods for Joint Dimension Reduction and Clustering","Description":"A class of methods that combine dimension reduction and clustering of continuous or categorical data. For continuous data, the package contains implementations of factorial K-means (Vichi and Kiers 2001; ) and reduced K-means (De Soete and Carroll 1994; ); both methods that combine principal component analysis with K-means clustering. For categorical data, the package provides MCA K-means (Hwang, Dillon and Takane 2006; ), i-FCB (Iodice D'Enza and Palumbo 2013, ) and Cluster Correspondence Analysis (van de Velden, Iodice D'Enza and Palumbo 2017; ), which combine multiple correspondence analysis with K-means.","Published":"2017-05-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"clustsig","Version":"1.1","Title":"Significant Cluster Analysis","Description":"A complimentary package for use with hclust; simprof tests\n to see which (if any) clusters are statistically different. The\n null hypothesis is that there is no a priori group structure.\n See Clarke, K.R., Somerfield, P.J., and Gorley R.N. 2008.\n Testing of null hypothesis in exploratory community analyses:\n similarity profiles and biota-environment linkage. J. Exp. Mar.\n Biol. Ecol. 366, 56-69.","Published":"2014-01-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"ClustVarLV","Version":"1.5.1","Title":"Clustering of Variables Around Latent Variables","Description":"Functions for the clustering of variables around Latent Variables.\n Each cluster of variables, which may be defined as a local or directional\n cluster, is associated with a latent variable. External variables measured on\n the same observations or/and additional information on the variables can be\n taken into account. A \"noise\" cluster or sparse latent variables can also de\n defined.","Published":"2016-12-14","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"clustvarsel","Version":"2.3","Title":"Variable Selection for Gaussian Model-Based Clustering","Description":"An R package implementing variable selection methodology for Gaussian model-based clustering which allows to find the (locally) optimal subset of variables in a data set that have group/cluster information. A greedy or headlong search can be used, either in a forward-backward or backward-forward direction, with or without sub-sampling at the hierarchical clustering stage for starting MCLUST models. By default the algorithm uses a sequential search, but parallelisation is also available.","Published":"2017-02-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"clv","Version":"0.3-2.1","Title":"Cluster Validation Techniques","Description":"Package contains most of the popular internal and external\n cluster validation methods ready to use for the most of the\n outputs produced by functions coming from package \"cluster\".\n Package contains also functions and examples of usage for\n cluster stability approach that might be applied to algorithms\n implemented in \"cluster\" package as well as user defined\n clustering algorithms.","Published":"2013-11-11","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"clValid","Version":"0.6-6","Title":"Validation of Clustering Results","Description":"Statistical and biological validation of clustering results.","Published":"2014-03-25","License":"LGPL-3","snapshot_date":"2017-06-23"}
{"Package":"cmaes","Version":"1.0-11","Title":"Covariance Matrix Adapting Evolutionary Strategy","Description":"Single objective optimization using a CMA-ES.","Published":"2011-01-29","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"cmaesr","Version":"1.0.3","Title":"Covariance Matrix Adaptation Evolution Strategy","Description":"Pure R implementation of the Covariance Matrix Adaptation -\n Evolution Strategy (CMA-ES) with optional restarts (IPOP-CMA-ES).","Published":"2016-12-04","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"CMC","Version":"1.0","Title":"Cronbach-Mesbah Curve","Description":"Calculation and plot of the stepwise Cronbach-Mesbah Curve","Published":"2012-10-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"CMF","Version":"1.0","Title":"Collective matrix factorization","Description":"Collective matrix factorization (CMF) finds joint low-rank representations for a collection of matrices with shared row or column entities. This code learns variational Bayesian approximation for CMF, supporting multiple likelihood potentials and missing data, while identifying both factors shared by multiple matrices and factors private for each matrix.","Published":"2014-03-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"cmm","Version":"0.8","Title":"Categorical Marginal Models","Description":"Quite extensive package for the estimation of marginal models for categorical data.","Published":"2015-01-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"cmna","Version":"1.0.0","Title":"Computational Methods for Numerical Analysis","Description":"Provides the source and examples for James P. Howard, II, \n \"Computational Methods for Numerical Analysis with R,\" \n\t\t\t , a forthcoming book on\n\t\t\t numerical methods in R.","Published":"2017-06-13","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"CMPControl","Version":"1.0","Title":"Control Charts for Conway-Maxwell-Poisson Distribution","Description":"The main purpose of this package is to juxtapose the different control limits obtained by modelling a data set through the COM-Poisson distribution vs. the classical Poisson distribution. Accordingly, this package offers the ability to compute the COM-Poisson parameter estimates and plot associated Shewhart control charts for a given data set.","Published":"2014-04-30","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"}
{"Package":"CMplot","Version":"3.2.0","Title":"Circle Manhattan Plot","Description":"Manhattan plot, a type of scatter plot, was widely used to display the association results. However, it is usually time-consuming and laborious for a\n non-specialist user to write scripts and adjust parameters of an elaborate plot. Moreover, the ever-growing traits measured have necessitated the \n integration of results from different Genome-wide association study researches. Circle Manhattan Plot is the first open R package that can lay out \n Genome-wide association study P-value results in both traditional rectangular patterns, QQ-plot and novel circular ones. United in only one bull's eye style \n plot, association results from multiple traits can be compared interactively, thereby to reveal both similarities and differences between signals.","Published":"2017-03-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"cmpprocess","Version":"1.0","Title":"Flexible Modeling of Count Processes","Description":"A toolkit for flexible modeling of count processes where data (over- or under-) dispersion exists.\n Estimations can be obtained under two data constructs where one has:\n (1) data on number of events in an s-unit time interval, or (2) only wait-time data.\n This package is supplementary to the work set forth in Zhu et al. (2016) .","Published":"2017-03-11","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"cmprsk","Version":"2.2-7","Title":"Subdistribution Analysis of Competing Risks","Description":"Estimation, testing and regression modeling of\n subdistribution functions in competing risks, as described in Gray\n (1988), A class of K-sample tests for comparing the cumulative\n incidence of a competing risk, Ann. Stat. 16:1141-1154, and Fine JP and\n Gray RJ (1999), A proportional hazards model for the subdistribution\n of a competing risk, JASA, 94:496-509.","Published":"2014-06-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"cmprskQR","Version":"0.9.1","Title":"Analysis of Competing Risks Using Quantile Regressions","Description":"Estimation, testing and regression modeling of\n subdistribution functions in competing risks using quantile regressions,\n as described in Peng and Fine (2009) .","Published":"2016-02-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"cmrutils","Version":"1.3","Title":"Misc Functions of the Center for the Mathematical Research","Description":"A collection of useful helper routines developed by\n students of the Center for the Mathematical Research, Stankin,\n Moscow.","Published":"2015-09-11","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"cmsaf","Version":"1.7.2","Title":"Tools for CM SAF NetCDF Data","Description":"The Satellite Application Facility on Climate Monitoring (CM SAF) \n is a ground segment of the European Organization for the Exploitation of \n Meteorological Satellites (EUMETSAT) and one of EUMETSATs Satellite Application \n Facilities. The CM SAF contributes to the sustainable observing of the climate \n system by providing Essential Climate Variables related to the energy and water \n cycle of the atmosphere (). It is a joint cooperation of seven \n National Meteorological and Hydrological Services, including the Deutscher\n Wetterdienst (DWD).\n The 'cmsaf' R-package provides a small collection of R-functions, which are \n inspired by the Climate Data Operators ('cdo'). This gives the opportunity to \n analyse and manipulate CM SAF data without the need of installing cdo. \n The 'cmsaf' R-package is tested for CM SAF NetCDF data, which are structured \n in three-dimensional arrays (longitude, latitude, time) on a rectangular grid. \n Layered CM SAF data have to be converted with the provided 'levbox_mergetime()' \n function. The 'cmsaf' R-package functions have only minor checks for deviations \n from the recommended data structure, and give only few specific error messages. \n Thus, there is no warranty of accurate results.\n Scripts for an easy application of the functions are provided at the CM SAF homepage \n ().","Published":"2017-03-14","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"cmvnorm","Version":"1.0-3","Title":"The Complex Multivariate Gaussian Distribution","Description":"Various utilities for the complex multivariate Gaussian distribution.","Published":"2015-11-24","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"cna","Version":"2.0.0","Title":"Causal Modeling with Coincidence Analysis","Description":"Provides comprehensive functionalities for causal modeling with Coincidence Analysis (CNA), which is a configurational comparative method of causal data analysis that was first introduced in Baumgartner (2009) . CNA is related to Qualitative Comparative Analysis (QCA), but contrary to the latter, it is custom-built for uncovering causal structures with multiple outcomes. While previous versions have only been capable of processing dichotomous variables, the current version generalizes CNA for multi-value and continuous variables whose values are interpreted as membership scores in fuzzy sets.","Published":"2017-04-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"cncaGUI","Version":"1.0","Title":"Canonical Non-Symmetrical Correspondence Analysis in R","Description":"A GUI with which users can construct and interact\n with Canonical Correspondence Analysis and Canonical Non-Symmetrical Correspondence Analysis and provides inferential results by using Bootstrap Methods.","Published":"2015-06-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"CNLTreg","Version":"0.1","Title":"Complex-Valued Wavelet Lifting for Signal Denoising","Description":"Implementations of recent complex-valued wavelet shrinkage procedures for smoothing irregularly sampled signals.","Published":"2017-03-04","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"CNLTtsa","Version":"0.1","Title":"Complex-Valued Wavelet Lifting for Univariate and Bivariate Time\nSeries Analysis","Description":"Implementations of recent complex-valued wavelet spectral procedures for analysis of irregularly sampled signals.","Published":"2017-03-08","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"cnmlcd","Version":"1.0-0","Title":"Maximum Likelihood Estimation of a Log-Concave Density Function","Description":"Contains functions for computing the nonparametric maximum\n\t likelihood estimate of a log-concave density function from\n\t univariate observations. The log-density estimate is always a\n\t piecewise linear function.","Published":"2015-10-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"CNOGpro","Version":"1.1","Title":"Copy Numbers of Genes in prokaryotes","Description":"Methods for assigning copy number states and breakpoints in resequencing experiments of prokaryotic organisms.","Published":"2015-01-12","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"CNprep","Version":"2.0","Title":"Pre-process DNA Copy Number (CN) Data for Detection of CN Events","Description":"This package evaluates DNA copy number data, using both their initial form (copy number as a noisy function of genomic position) and their approximation by a piecewise-constant function (segmentation), for the purpose of identifying genomic regions where the copy number differs from the norm.","Published":"2014-12-31","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"CNull","Version":"1.0","Title":"Fast Algorithms for Frequency-Preserving Null Models in Ecology","Description":"Efficient computations for null models that require shuffling columns on big matrix data.\n This package provides functions for faster computation of diversity measure statistics\n when independent random shuffling is applied to the columns of a given matrix. \n Given a diversity measure f and a matrix M, the provided functions can generate random samples \n (shuffled matrix rows of M), the mean and variance of f, and the p-values of this measure \n for two different null models that involve independent random shuffling of the columns of M.\n The package supports computations of alpha and beta diversity measures. ","Published":"2017-03-16","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"CNVassoc","Version":"2.2","Title":"Association Analysis of CNV Data and Imputed SNPs","Description":"Carries out analysis of common \n Copy Number Variants (CNVs) and imputed Single Nucleotide \n Polymorphisms (SNPs) in population-based studies. \n It includes tools for estimating association under a series \n of study designs (case-control, cohort, etc), using several \n dependent variables (class status, censored data, counts) \n as response, adjusting for covariates and considering \n various inheritance models. Moreover, it is possible to \n perform epistasis studies with pairs of CNVs or imputed SNPs.\n It has been optimized in order to make feasible the analyses \n of Genome Wide Association studies (GWAs) with hundreds of \n thousands of genetic variants (CNVs / imputed SNPs). Also, \n it incorporates functions for inferring copy number (CNV \n genotype calling). Various classes and methods for generic \n functions (print, summary, plot, anova, ...) have been \n created to facilitate the analysis. ","Published":"2016-04-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"CNVassocData","Version":"1.0","Title":"Example data sets for association analysis of CNV data","Description":"This package contains example data sets with Copy Number Variants and imputed SNPs to be used by CNVassoc package.","Published":"2013-08-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"coala","Version":"0.5.0","Title":"A Framework for Coalescent Simulation","Description":"Coalescent simulators can rapidly simulate biological sequences\n evolving according to a given model of evolution.\n You can use this package to specify such models, to conduct the simulations\n and to calculate additional statistics from the results.\n It relies on existing simulators for doing the simulation, and currently\n supports the programs 'ms', 'msms' and 'scrm'. It also supports finite-sites\n mutation models by combining the simulators with the program 'seq-gen'.","Published":"2016-12-29","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"coalescentMCMC","Version":"0.4-1","Title":"MCMC Algorithms for the Coalescent","Description":"Flexible framework for coalescent analyses in R. It includes a main function running the MCMC algorithm, auxiliary functions for tree rearrangement, and some functions to compute population genetic parameters.","Published":"2015-03-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"coarseDataTools","Version":"0.6-3","Title":"A Collection of Functions to Help with Analysis of Coarsely\nObserved Data","Description":"Functions to analyze coarse data.\n Specifically, it contains functions to (1) fit parametric accelerated\n failure time models to interval-censored survival time data, and (2)\n estimate the case-fatality ratio in scenarios with under-reporting.\n This package's development was motivated by applications to infectious\n disease: in particular, problems with estimating the incubation period and\n the case fatality ratio of a given disease. Sample data files are included\n in the package.","Published":"2016-03-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"cobalt","Version":"2.1.0","Title":"Covariate Balance Tables and Plots","Description":"Generate balance tables and plots for covariates of groups\n preprocessed through matching, weighting or subclassification, for example,\n using propensity scores. Includes integration with 'MatchIt', 'twang', 'Matching', 'optmatch', \n 'CBPS', and 'ebal' for assessing balance on the output of their preprocessing functions. Users\n can also specify data for balance assessment not generated through the above packages. Also \n included are methods for assessing balance in clustered or multiply imputed data sets.","Published":"2017-05-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"COBRA","Version":"0.99.4","Title":"Nonlinear Aggregation of Predictors","Description":"This package performs prediction for regression-oriented problems, aggregating in a nonlinear scheme any basic regression machines suggested by the context and provided by the user. If the user has no valuable knowledge on the data, four defaults machines wrappers are implemented so as to cover a minimal spectrum of prediction methods. If necessary, the computations may be parallelized. The method is described in Biau, Fischer, Guedj and Malley (2013), \"COBRA: A Nonlinear Aggregation Strategy\".","Published":"2013-07-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"cobs","Version":"1.3-3","Title":"Constrained B-Splines (Sparse Matrix Based)","Description":"Qualitatively Constrained (Regression) Smoothing Splines via\n Linear Programming and Sparse Matrices.","Published":"2017-03-31","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"CoClust","Version":"0.3-1","Title":"Copula Based Cluster Analysis","Description":"Copula Based Cluster Analysis.","Published":"2015-04-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"COCONUT","Version":"1.0.1","Title":"COmbat CO-Normalization Using conTrols (COCONUT)","Description":"Allows for pooled analysis of microarray data by batch-correcting control samples, and then applying the derived correction parameters to non-control samples to obtain bias-free, inter-dataset corrected data.","Published":"2016-06-29","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"cocor","Version":"1.1-3","Title":"Comparing Correlations","Description":"Statistical tests for the comparison between two correlations\n based on either independent or dependent groups. Dependent correlations can\n either be overlapping or nonoverlapping. A web interface is available on the\n website http://comparingcorrelations.org. A plugin for the R GUI and IDE RKWard\n is included. Please install RKWard from https://rkward.kde.org to use this\n feature. The respective R package 'rkward' cannot be installed directly from a\n repository, as it is a part of RKWard.","Published":"2016-05-28","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"cocoreg","Version":"0.1.1","Title":"Extract Shared Variation in Collections of Data Sets Using\nRegression Models","Description":"The algorithm extracts shared variation from a collection of data sets using regression models.","Published":"2017-05-30","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"cocorresp","Version":"0.3-0","Title":"Co-Correspondence Analysis Methods","Description":"Fits predictive and symmetric co-correspondence analysis (CoCA) models to relate one data matrix\n to another data matrix. More specifically, CoCA maximises the weighted covariance \n between the weighted averaged species scores of one community and the weighted averaged species\n scores of another community. CoCA attempts to find patterns that are common to both communities.","Published":"2016-02-29","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"cocron","Version":"1.0-1","Title":"Statistical Comparisons of Two or more Alpha Coefficients","Description":"Statistical tests for the comparison between two or more alpha\n coefficients based on either dependent or independent groups of individuals.\n A web interface is available at http://comparingcronbachalphas.org. A plugin\n for the R GUI and IDE RKWard is included. Please install RKWard from https://\n rkward.kde.org to use this feature. The respective R package 'rkward' cannot be\n installed directly from a repository, as it is a part of RKWard.","Published":"2016-03-12","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"coda","Version":"0.19-1","Title":"Output Analysis and Diagnostics for MCMC","Description":"Provides functions for summarizing and plotting the\n\toutput from Markov Chain Monte Carlo (MCMC) simulations, as\n\twell as diagnostic tests of convergence to the equilibrium\n\tdistribution of the Markov chain.","Published":"2016-12-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"codadiags","Version":"1.0","Title":"Markov chain Monte Carlo burn-in based on \"bridge\" statistics","Description":"Markov chain Monte Carlo burn-in based on \"bridge\" statistics, in the way of coda::heidel.diag, but including non asymptotic tabulated statistics.","Published":"2013-11-18","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"cOde","Version":"0.2.2","Title":"Automated C Code Generation for Use with the 'deSolve' and\n'bvpSolve' Packages","Description":"Generates all necessary C functions allowing the user to work with\n the compiled-code interface of ode() and bvptwp(). The implementation supports\n \"forcings\" and \"events\". Also provides functions to symbolically compute\n Jacobians, sensitivity equations and adjoint sensitivities being the basis for\n sensitivity analysis.","Published":"2016-05-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"CodeDepends","Version":"0.5-3","Title":"Analysis of R Code for Reproducible Research and Code\nComprehension","Description":"Tools for analyzing R expressions\n or blocks of code and determining the dependencies between them.\n It focuses on R scripts, but can be used on the bodies of functions.\n There are many facilities including the ability to summarize or get a high-level\n view of code, determining dependencies between variables, code improvement\n suggestions.","Published":"2017-05-29","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"codep","Version":"0.6-5","Title":"Multiscale Codependence Analysis","Description":"Computation of Multiscale Codependence Analysis and spatial eigenvector maps, as an additional feature. Early development version.","Published":"2017-01-25","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"codetools","Version":"0.2-15","Title":"Code Analysis Tools for R","Description":"Code analysis tools for R.","Published":"2016-10-05","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"codingMatrices","Version":"0.3.1","Title":"Alternative Factor Coding Matrices for Linear Model Formulae","Description":"A collection of coding functions as alternatives to the standard\n functions in the stats package, which have names starting with 'contr.'. Their\n main advantage is that they provide a consistent method for defining marginal\n effects in factorial models. In a simple one-way ANOVA model the\n intercept term is always the simple average of the class means.","Published":"2017-05-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"codyn","Version":"1.1.0","Title":"Community Dynamics Metrics","Description":"A toolbox of ecological community dynamics metrics that are\n explicitly temporal. Functions fall into two categories: temporal diversity\n indices and community stability metrics. The diversity indices are temporal\n analogs to traditional diversity indices such as richness and rank-abundance\n curves. Specifically, functions are provided to calculate species turnover, mean\n rank shifts, and lags in community similarity between time points. The community\n stability metrics calculate overall stability and patterns of species covariance\n and synchrony over time.","Published":"2016-04-27","License":"Apache License (== 2.0)","snapshot_date":"2017-06-23"}
{"Package":"coefficientalpha","Version":"0.5","Title":"Robust Coefficient Alpha and Omega with Missing and Non-Normal\nData","Description":"Cronbach's alpha and McDonald's omega are widely used reliability or internal consistency measures in social, behavioral and education sciences. Alpha is reported in nearly every study that involves measuring a construct through multiple test items. The package 'coefficientalpha' calculates coefficient alpha and coefficient omega with missing data and non-normal data. Robust standard errors and confidence intervals are also provided. A test is also available to test the tau-equivalent and homogeneous assumptions. Version 0.5 added the bootstrap confidence intervals.","Published":"2015-05-31","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"coefplot","Version":"1.2.4","Title":"Plots Coefficients from Fitted Models","Description":"Plots the coefficients from model objects. This very quickly shows the user the point estimates and confidence intervals for fitted models.","Published":"2016-01-10","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"coenocliner","Version":"0.2-2","Title":"Coenocline Simulation","Description":"Simulate species occurrence and abundances (counts) along\n gradients.","Published":"2016-05-19","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"coenoflex","Version":"2.2-0","Title":"Gradient-Based Coenospace Vegetation Simulator","Description":"Simulates the composition of samples of vegetation\n according to gradient-based vegetation theory. Features a\n flexible algorithm incorporating competition and complex\n multi-gradient interaction.","Published":"2016-09-20","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"coexist","Version":"1.0","Title":"Species coexistence modeling and analysis","Description":"species coexistence modeling under asymmetric dispersal\n and fluctuating source-sink dynamics;testing the proportion of\n coexistence scenarios driven by neutral and niche processes","Published":"2012-08-02","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"}
{"Package":"cofeatureR","Version":"1.0.1","Title":"Generate Cofeature Matrices","Description":"Generate cofeature (feature by sample) matrices. The package \n utilizes ggplot2::geom_tile() to generate the matrix allowing for easy\n additions from the base matrix.","Published":"2016-01-13","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"CoFRA","Version":"0.1002","Title":"Complete Functional Regulation Analysis","Description":"Calculates complete functional regulation analysis and visualize\n the results in a single heatmap. The provided example data is for biological\n data but the methodology can be used for large data sets to compare quantitative\n entities that can be grouped. For example, a store might divide entities into\n cloth, food, car products etc and want to see how sales changes in the groups\n after some event. The theoretical background for the calculations are provided\n in New insights into functional regulation in MS-based drug profiling, Ana Sofia\n Carvalho, Henrik Molina & Rune Matthiesen, Scientific Reports .","Published":"2017-04-06","License":"GPL-2 | file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"coga","Version":"0.1.0","Title":"Convolution of Gamma Distributions","Description":"Convolution of gamma distributions in R. The convolution of \n gamma distributions is the sum of series of gamma \n distributions and all gamma distributions here can have different \n parameters. This package can calculate density, distribution function \n and do simulation work.","Published":"2017-05-25","License":"GPL (>= 3.0)","snapshot_date":"2017-06-23"}
{"Package":"CoImp","Version":"0.3-1","Title":"Copula Based Imputation Method","Description":"Copula based imputation method. A semiparametric imputation procedure for missing multivariate data based on conditional copula specifications.","Published":"2016-08-08","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"coin","Version":"1.2-0","Title":"Conditional Inference Procedures in a Permutation Test Framework","Description":"Conditional inference procedures for the general independence\n problem including two-sample, K-sample (non-parametric ANOVA), correlation,\n censored, ordered and multivariate problems.","Published":"2017-06-22","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"CoinMinD","Version":"1.1","Title":"Simultaneous Confidence Interval for Multinomial Proportion","Description":"Methods for obtaining simultaneous confidence interval for\n multinomial proportion have been proposed by many authors and\n the present study include a variety of widely applicable\n procedures. Seven classical methods (Wilson, Quesenberry and\n Hurst, Goodman, Wald with and without continuity correction,\n Fitzpatrick and Scott, Sison and Glaz) and Bayesian Dirichlet\n models are included in the package. The advantage of MCMC pack\n has been exploited to derive the Dirichlet posterior directly\n and this also helps in handling the Dirichlet prior parameters.\n This package is prepared to have equal and unequal values for\n the Dirichlet prior distribution that will provide better scope\n for data analysis and associated sensitivity analysis.","Published":"2013-05-28","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"cointmonitoR","Version":"0.1.0","Title":"Consistent Monitoring of Stationarity and Cointegrating\nRelationships","Description":"We propose a consistent monitoring procedure to detect a\n structural change from a cointegrating relationship to a spurious\n relationship. The procedure is based on residuals from modified least\n squares estimation, using either Fully Modified, Dynamic or Integrated\n Modified OLS. It is inspired by Chu et al. (1996) in\n that it is based on parameter estimation on a pre-break \"calibration\" period\n only, rather than being based on sequential estimation over the full sample.\n See the discussion paper for further information.\n This package provides the monitoring procedures for both the cointegration\n and the stationarity case (while the latter is just a special case of the\n former one) as well as printing and plotting methods for a clear\n presentation of the results.","Published":"2016-06-14","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"cointReg","Version":"0.2.0","Title":"Parameter Estimation and Inference in a Cointegrating Regression","Description":"Cointegration methods are widely used in empirical macroeconomics\n and empirical finance. It is well known that in a cointegrating\n regression the ordinary least squares (OLS) estimator of the\n parameters is super-consistent, i.e. converges at rate equal to the\n sample size T. When the regressors are endogenous, the limiting\n distribution of the OLS estimator is contaminated by so-called second\n order bias terms, see e.g. Phillips and Hansen (1990) .\n The presence of these bias terms renders inference difficult. Consequently,\n several modifications to OLS that lead to zero mean Gaussian mixture\n limiting distributions have been proposed, which in turn make\n standard asymptotic inference feasible. These methods include\n the fully modified OLS (FM-OLS) approach of Phillips and Hansen\n (1990) , the dynamic OLS (D-OLS) approach of Phillips\n and Loretan (1991) , Saikkonen (1991)\n and Stock and Watson (1993)\n and the new estimation approach called integrated\n modified OLS (IM-OLS) of Vogelsang and Wagner (2014)\n . The latter is based on an augmented\n partial sum (integration) transformation of the regression model. IM-OLS is\n similar in spirit to the FM- and D-OLS approaches, with the key difference\n that it does not require estimation of long run variance matrices and avoids\n the need to choose tuning parameters (kernels, bandwidths, lags). However,\n inference does require that a long run variance be scaled out.\n This package provides functions for the parameter estimation and inference\n with all three modified OLS approaches. That includes the automatic\n bandwidth selection approaches of Andrews (1991) and\n of Newey and West (1994) as well as the calculation of\n the long run variance.","Published":"2016-06-14","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"colf","Version":"0.1.2","Title":"Constrained Optimization on Linear Function","Description":"Performs least squares constrained optimization on a linear objective function. It contains\n a number of algorithms to choose from and offers a formula syntax similar to lm().","Published":"2016-12-03","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"CollapsABEL","Version":"0.10.11","Title":"Generalized CDH (GCDH) Analysis","Description":"Implements a generalized version of the CDH test ( and )\n for detecting compound heterozygosity on a\n genome-wide level, due to usage of generalized linear models it allows flexible\n analysis of binary and continuous traits with covariates.","Published":"2016-12-11","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"collapsibleTree","Version":"0.1.4","Title":"Interactive Collapsible Tree Diagrams using 'D3.js'","Description":"\n Interactive Reingold-Tilford tree diagrams created using 'D3.js', where every node can be expanded and collapsed by clicking on it.\n Tooltips and color gradients can be mapped to nodes using a numeric column in the source data frame.\n See 'collapsibleTree' website for more information and examples.","Published":"2017-03-22","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"CollocInfer","Version":"1.0.4","Title":"Collocation Inference for Dynamic Systems","Description":"These functions implement collocation-inference\n for continuous-time and discrete-time stochastic processes.\n They provide model-based smoothing, gradient-matching,\n generalized profiling and forwards prediction error methods.","Published":"2016-11-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"collpcm","Version":"1.0","Title":"Collapsed Latent Position Cluster Model for Social Networks","Description":"Markov chain Monte Carlo based inference routines for collapsed latent position cluster models or social networks, which includes searches over the model space (number of clusters in the latent position cluster model). The label switching algorithm used is that of Nobile and Fearnside (2007) which relies on the algorithm of Carpaneto and Toth (1980) . ","Published":"2017-03-13","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"collUtils","Version":"1.0.5","Title":"Auxiliary Package for Package 'CollapsABEL'","Description":"Provides some low level functions for processing PLINK input and output files.","Published":"2016-03-31","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"coloc","Version":"2.3-1","Title":"Colocalisation tests of two genetic traits","Description":"Performs the colocalisation tests described in Plagnol et al\n (2009), Wallace et al (2013) and Giambartolomei et al (2013).","Published":"2013-09-29","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"colordistance","Version":"0.8.0","Title":"Distance Metrics for Image Color Similarity","Description":"Loads and displays images, selectively masks specified background\n colors, bins pixels by color using either data-dependent or automatically\n generated color bins, quantitatively measures color similarity among images\n using one of several distance metrics for comparing pixel color clusters, and \n clusters images by object color similarity. Originally written for use with\n organism coloration (reef fish color diversity, butterfly mimicry, etc), but\n easily applicable for any image set.","Published":"2017-06-03","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"coloredICA","Version":"1.0.0","Title":"Implementation of Colored Independent Component Analysis and\nSpatial Colored Independent Component Analysis","Description":"It implements colored Independent Component Analysis (Lee et al., 2011) and spatial colored Independent Component Analysis (Shen et al., 2014). They are two algorithms to perform ICA when sources are assumed to be temporal or spatial stochastic processes, respectively.","Published":"2015-02-24","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"colorfulVennPlot","Version":"2.4","Title":"Plot and add custom coloring to Venn diagrams for 2-dimensional,\n3-dimensional and 4-dimensional data","Description":"Given 2-,3- or 4-dimensional data, plots a Venn diagram, i.e. 'crossing circles'. The user can specify values, labels for each circle-group and unique colors for each plotted part. Here is what it would look like for a 3-dimensional plot: http://elliotnoma.files.wordpress.com/2011/02/venndiagram.png. To see what the 4-dimensional plot looks like, go to http://elliotnoma.files.wordpress.com/2013/03/4dplot.png.","Published":"2013-11-12","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"colorhcplot","Version":"1.0","Title":"Colorful Hierarchical Clustering Dendrograms","Description":"This function takes a hierarchical cluster-class object and a factor describing the groups as arguments and generates colorful dendrograms in which leaves belonging to different groups are identified by colors.","Published":"2015-10-02","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"colormap","Version":"0.1.4","Title":"Color Palettes using Colormaps Node Module","Description":"Allows to generate colors from palettes defined in the colormap module of 'Node.js'. (see for more information). In total it provides 44 distinct palettes made from sequential and/or diverging colors. In addition to the pre defined palettes you can also specify your own set of colors. There are also scale functions that can be used with 'ggplot2'.","Published":"2016-11-15","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"ColorPalette","Version":"1.0-1","Title":"Color Palettes Generator","Description":"Different methods to generate a color palette based on a specified base color and a number of colors that should be created.","Published":"2015-06-24","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"colorpatch","Version":"0.1.2","Title":"Optimized Rendering of Fold Changes and Confidence Values","Description":"Shows color patches for encoding fold changes (e.g. log ratios) together with confidence values \n within a single diagram. This is especially useful for rendering gene expression data as well as\n other types of differential experiments. In addition to different rendering methods (ggplot extensions)\n functionality for perceptually optimizing color palettes are provided.\n Furthermore the package provides extension methods of the colorspace color-class in order to\n simplify the work with palettes (a.o. length, as.list, and append are supported).","Published":"2017-06-10","License":"Artistic-2.0","snapshot_date":"2017-06-23"}
{"Package":"colorplaner","Version":"0.1.3","Title":"A 'ggplot2' Extension to Visualize Two Variables per Color\nAesthetic Through Color Space Projections","Description":"A 'ggplot2' extension to visualize two\n variables through one color aesthetic via mapping to a color space\n projection. With this technique for 2-D color mapping, one can create a\n bivariate choropleth in R as well as other visualizations with multivariate\n color scales. Includes two new scales and a new guide for 'ggplot2'.","Published":"2016-11-01","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"colorr","Version":"1.0.0","Title":"Color Palettes for EPL, MLB, NBA, NHL, and NFL Teams","Description":"Color palettes for EPL, MLB, NBA, NHL, and NFL teams.","Published":"2017-02-20","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"colorRamps","Version":"2.3","Title":"Builds color tables","Description":"Builds gradient color maps","Published":"2012-10-29","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"colorscience","Version":"1.0.4","Title":"Color Science Methods and Data","Description":"Methods and data for color science - color conversions by observer,\n illuminant and gamma. Color matching functions and chromaticity diagrams.\n Color indices, color differences and spectral data conversion/analysis.","Published":"2016-10-02","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"colorspace","Version":"1.3-2","Title":"Color Space Manipulation","Description":"Carries out mapping between assorted color spaces including\n RGB, HSV, HLS, CIEXYZ, CIELUV, HCL (polar CIELUV),\n\t CIELAB and polar CIELAB. Qualitative, sequential, and\n\t diverging color palettes based on HCL colors are provided\n\t along with an interactive palette picker (with either a Tcl/Tk\n\t or a shiny GUI).","Published":"2016-12-14","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"colorSpec","Version":"0.5-3","Title":"Color Calculations with Emphasis on Spectral Data","Description":"Calculate with spectral properties of light sources, materials, cameras, eyes, and scanners.\n Build complex systems from simpler parts using a spectral product algebra. For light sources,\n compute CCT and CRI. For object colors, compute optimal colors and Logvinenko coordinates.\n Work with the standard CIE illuminants and color matching functions, and read spectra from \n text files, including CGATS files. Sample text files, and 4 vignettes are included.","Published":"2016-05-17","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"colortools","Version":"0.1.5","Title":"Tools for colors in a Hue-Saturation-Value (HSV) color model","Description":"R package with handy functions to help users select and play with\n color schemes in an HSV color model","Published":"2013-12-19","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"colourlovers","Version":"0.2.2","Title":"R Client for the COLOURlovers API","Description":"Provides access to the COLOURlovers \n API, which offers color inspiration and color palettes.","Published":"2016-10-31","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"colourpicker","Version":"0.3","Title":"A Colour Picker Tool for Shiny and for Selecting Colours in\nPlots","Description":"A colour picker that can be used as an input in Shiny apps\n or 'Rmarkdown' documents. A Plot Colour Helper tool is available as an \n 'RStudio' addin, which helps you pick colours to use in your plots. A more \n generic Colour Picker 'RStudio' addin is also provided to let you select \n colours for use in your R code.","Published":"2016-12-05","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"colourvision","Version":"1.1","Title":"Colour Vision Models","Description":"Colour vision models, colour spaces and colour thresholds. Includes Vorobyev & Osorio Receptor Noise Limited models, Chittka colour hexagon, and Endler & Mielke model. Models have been extended to accept any number of photoreceptor types.","Published":"2017-03-13","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"colr","Version":"0.1.900","Title":"Functions to Select and Rename Data","Description":"Powerful functions to select and rename columns in dataframes, lists and numeric types \n by 'Perl' regular expression. Regular expression ('regex') are a very powerful grammar to match \n strings, such as column names. ","Published":"2017-01-03","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"comato","Version":"1.0","Title":"Analysis of Concept Maps","Description":"Provides methods for the import/export and automated analysis of concept maps.","Published":"2014-03-18","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"COMBAT","Version":"0.0.2","Title":"A Combined Association Test for Genes using Summary Statistics","Description":"To compute gene-based genetic association statistics from P values at multiple SNPs and genotype data of ancestry matched reference samples. COMBined Association Test (COMBAT) incorporates strengths from multiple existing gene-based tests, including VEGAS, GATES and SimpleM, and achieves much improved performance than any individual test.","Published":"2017-01-11","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"COMBIA","Version":"1.0-4","Title":"Synergy/Antagonism Analyses of Drug Combinations","Description":"A comprehensive synergy/antagonism analyses of drug combinations with\n quality graphics and data. The analyses can be performed by Bliss independence and Loewe\n additivity models. COMBIA provides improved statistical analysis and makes only very weak assumption of data variability \n while calculating bootstrap intervals (BIs). Finally, package saves analyzed data, \n 2D and 3D plots ready to use in research publications. COMBIA does not require manual\n data entry. Data can be directly input from wetlab experimental platforms \n for example fluostar, automated robots etc. One needs to call a single function only \n to perform all analysis (examples are provided with sample data).","Published":"2015-07-26","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"combinat","Version":"0.0-8","Title":"combinatorics utilities","Description":"routines for combinatorics","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"Combine","Version":"1.0","Title":"Game-Theoretic Probability Combination","Description":"Suite of R functions for combination of probabilities using a game-theoretic method.","Published":"2015-09-04","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"CombinePortfolio","Version":"0.3","Title":"Estimation of Optimal Portfolio Weights by Combining Simple\nPortfolio Strategies","Description":"Estimation of optimal portfolio weights as combination of simple portfolio strategies, like the tangency, global minimum variance (GMV) or naive (1/N) portfolio. It is based on a utility maximizing 8-fund rule. Popular special cases like the Kan-Zhou(2007) 2-fund and 3-fund rule or the Tu-Zhou(2011) estimator are nested.","Published":"2016-06-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"CombinePValue","Version":"1.0","Title":"Combine a Vector of Correlated p-values","Description":"We offer two statistical tests to combine p-values: selfcontained.test vs competitive.test. The goal is to test whether a vector of pvalues are jointly significant when we combine them together.","Published":"2014-11-03","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"CombinS","Version":"1.1-1","Title":"Construction Methods of some Series of PBIB Designs","Description":"Series of partially balanced incomplete block designs (PBIB) based on the combinatory method (S) introduced in (Imane Rezgui et al, 2014) ; and it gives their associated U-type design.","Published":"2016-11-23","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"combiter","Version":"1.0.2","Title":"Combinatorics Iterators","Description":"Provides iterators for combinations, permutations, subsets, and\n Cartesian product, which allow one to go through all elements without creating a\n huge set of all possible values.","Published":"2017-05-26","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"CombMSC","Version":"1.4.2","Title":"Combined Model Selection Criteria","Description":"Functions for computing optimal convex combinations of\n model selection criteria based on ranks, along with utility\n functions for constructing model lists, MSCs, and priors on\n model lists.","Published":"2012-10-29","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"comclim","Version":"0.9.4","Title":"Community climate statistics","Description":"Computes community climate statistics for volume and mismatch using species' climate niches either unscaled or scaled relative to a regional species pool. These statistics can be used to describe biogeographic patterns and infer community assembly processes. Includes a vignette outlining usage.","Published":"2014-09-19","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"cometExactTest","Version":"0.1.3","Title":"Exact Test from the Combinations of Mutually Exclusive\nAlterations (CoMEt) Algorithm","Description":"An algorithm for identifying combinations of mutually exclusive alterations in cancer genomes. CoMEt represents the mutations in a set M of k genes with a 2^k dimensional contingency table, and then computes the tail probability of observing T(M) exclusive alterations using an exact statistical test.","Published":"2015-10-31","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"comf","Version":"0.1.7","Title":"Functions for Thermal Comfort Research","Description":"Functions to calculate various common and less common thermal comfort indices, convert physical variables, and evaluate the performance of thermal comfort indices.","Published":"2017-05-09","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"ComICS","Version":"1.0.3","Title":"Computational Methods for Immune Cell-Type Subsets","Description":"Provided are Computational methods for Immune Cell-type Subsets, including:(1) DCQ (Digital Cell Quantifier) to infer global dynamic changes in immune cell quantities within a complex tissue; and (2) VoCAL (Variation of Cell-type Abundance Loci) a deconvolution-based method that utilizes transcriptome data to infer the quantities of immune-cell types, and then uses these quantitative traits to uncover the underlying DNA loci.","Published":"2016-03-07","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"commandr","Version":"1.0.1","Title":"Command pattern in R","Description":"An S4 representation of the Command design pattern. The\n Operation class is a simple implementation using closures and supports\n forward and reverse (undo) evaluation. The more complicated Protocol\n framework represents each type of command (or analytical protocol) by\n a formal S4 class. Commands may be grouped and consecutively executed\n using the Pipeline class. Example use cases include logging, do/undo,\n analysis pipelines, GUI actions, parallel processing, etc.","Published":"2014-08-25","License":"Artistic-2.0","snapshot_date":"2017-06-23"}
{"Package":"CommEcol","Version":"1.6.4","Title":"Community Ecology Analyses","Description":"Autosimilarity curves, dissimilarity indexes that overweight rare species, phylogenetic and functional (pairwise and multisample) dissimilarity indexes and nestedness for phylogenetic, functional and other diversity metrics. This should be a complement to available packages, particularly 'vegan'. ","Published":"2016-07-28","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"commentr","Version":"1.0.4","Title":"Print Nicely Formatted Comments for Use in Script Files","Description":"Functions to\n produce nicely formatted comments to use in R-scripts (or\n Latex/HTML/markdown etc). A comment with formatting is printed to the\n console and can then be copied to a script.","Published":"2016-03-19","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"CommonJavaJars","Version":"1.0-5","Title":"Useful libraries for building a Java based GUI under R","Description":"Useful libraries for building a Java based GUI under R","Published":"2014-08-25","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"commonmark","Version":"1.2","Title":"High Performance CommonMark and Github Markdown Rendering in R","Description":"The CommonMark specification defines a rationalized version of markdown\n syntax. This package uses the 'cmark' reference implementation for converting\n markdown text into various formats including html, latex and groff man. In\n addition it exposes the markdown parse tree in xml format. The latest version of\n this package also adds support for Github extensions including tables, autolinks\n and strikethrough text.","Published":"2017-03-01","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"commonsMath","Version":"1.0.0","Title":"JAR Files of the Apache Commons Mathematics Library","Description":"Java JAR files for the Apache Commons Mathematics Library for use by users and other packages.","Published":"2017-05-24","License":"Apache License 2.0 | file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"CommonTrend","Version":"0.7-1","Title":"Extract and plot common trends from a cointegration system.\nCalculate P-value for Johansen Statistics","Description":"Directly extract and plot stochastic common trends from\n a cointegration system using different approaches, currently\n including Kasa (1992) and Gonzalo and Granger (1995). \n\tThe approach proposed by Gonzalo and Granger, also known as\n Permanent-Transitory Decomposition, is widely used in\n macroeconomics and market microstructure literature. \n\tKasa's approach, on the other hand, has a nice property that it only\n uses the super consistent estimator: the cointegration vector\n 'beta'. \n\tThis package also provides functions calculate P-value\n from Johansen Statistics according to the approximation method\n proposed by Doornik (1998).\n\tUpdate:\n\t0.7-1: Fix bugs in calculation alpha. Add formulas and more explanations.\n 0.6-1: Rewrite the description file.\n 0.5-1: Add functions to calculate P-value from Johansen statistic, and vice versa.","Published":"2013-09-05","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"CommT","Version":"0.1.1","Title":"Comparative Phylogeographic Analysis using the Community Tree\nFramework","Description":"Provides functions to measure the difference between constrained and unconstrained gene tree distributions using various tree distance metrics. Constraints are enforced prior to this analysis via the estimation of a tree under the community tree model.","Published":"2015-06-16","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"COMMUNAL","Version":"1.1.0","Title":"Robust Selection of Cluster Number K","Description":"Facilitates optimal clustering of a data set. Provides a framework to run a wide range of clustering algorithms to determine the optimal number (k) of clusters in the data. Then analyzes the cluster assignments from each clustering algorithm to identify samples that repeatedly classify to the same group. We call these 'core clusters', providing a basis for later class discovery.","Published":"2015-10-12","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"CommunityCorrelogram","Version":"1.0","Title":"Ecological Community Correlogram","Description":"The CommunityCorrelogram package is designed for the geostatistical analysis of ecological community datasets with either a spatial or temporal distance component.","Published":"2014-06-19","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"Comp2ROC","Version":"1.1.4","Title":"Compare Two ROC Curves that Intersect","Description":"Comparison of two ROC curves through the methodology proposed by Ana C. Braga.","Published":"2016-07-01","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"compactr","Version":"0.1","Title":"Creates empty plots with compact axis notation","Description":"Creates empty plots with compact axis notation to which users can\n add whatever they like (points, lines, text, etc.) The notation is more\n compact in the sense that the axis-labels and tick-labels are closer to the\n axis and the tick-marks are shorter. Also, if the plot appears as part of a\n matrix, the x-axis notation is suppressed unless the plot appears along the\n bottom row and the y-axis notation is suppress unless the plot appears\n along the left column.","Published":"2013-08-01","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"compare","Version":"0.2-6","Title":"Comparing Objects for Differences","Description":"Functions to compare a model object to a comparison object.\n If the objects are not identical, the functions can be instructed to\n explore various modifications of the objects (e.g., sorting rows,\n dropping names) to see if the modified versions are identical.","Published":"2015-08-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"compareC","Version":"1.3.1","Title":"Compare Two Correlated C Indices with Right-censored Survival\nOutcome","Description":"Proposed by Harrell, the C index or concordance C, is considered an overall measure of discrimination in survival analysis between a survival outcome that is possibly right censored and a predictive-score variable, which can represent a measured biomarker or a composite-score output from an algorithm that combines multiple biomarkers. This package aims to statistically compare two C indices with right-censored survival outcome, which commonly arise from a paired design and thus resulting two correlated C indices.","Published":"2015-01-28","License":"GPL (>= 2.0)","snapshot_date":"2017-06-23"}
{"Package":"CompareCausalNetworks","Version":"0.1.5","Title":"Interface to Diverse Estimation Methods of Causal Networks","Description":"Unified interface for the estimation of causal networks, including\n the methods 'backShift' (from package 'backShift'), 'bivariateANM' (bivariate\n additive noise model), 'bivariateCAM' (bivariate causal additive model),\n 'CAM' (causal additive model) (from package 'CAM'), 'hiddenICP' (invariant\n causal prediction with hidden variables), 'ICP' (invariant causal prediction)\n (from package 'InvariantCausalPrediction'), 'GES' (greedy equivalence\n search), 'GIES' (greedy interventional equivalence search), 'LINGAM', 'PC' (PC\n Algorithm), 'RFCI' (really fast causal inference) (all from package 'pcalg') and\n regression.","Published":"2016-12-01","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"compareDF","Version":"1.1.0","Title":"Do a Git Style Diff of the Rows Between Two Dataframes with\nSimilar Structure","Description":"Compares two dataframes which have the same column\n structure to show the rows that have changed. Also gives a git style diff format\n to quickly see what has changes in addition to summary statistics.","Published":"2017-01-18","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"compareGroups","Version":"3.2.4","Title":"Descriptive Analysis by Groups","Description":"Create data summaries for quality control, extensive reports for exploring data, as well as publication-ready univariate or bivariate tables in several formats (plain text, HTML,LaTeX, PDF, Word or Excel. Create figures to quickly visualise the distribution of your data (boxplots, barplots, normality-plots, etc.). Display statistics (mean, median, frequencies, incidences, etc.). Perform the appropriate tests (t-test, Analysis of variance, Kruskal-Wallis, Fisher, log-rank, ...) depending on the nature of the described variable (normal, non-normal or qualitative). Summarize genetic data (Single Nucleotide Polymorphisms) data displaying Allele Frequencies and performing Hardy-Weinberg Equilibrium tests among other typical statistics and tests for these kind of data.","Published":"2017-03-14","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"compareODM","Version":"1.2","Title":"comparison of medical forms in CDISC ODM format","Description":"Input: 2 ODM files (ODM version 1.3) Output: list of\n identical, matching, similar and differing data items","Published":"2013-05-27","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"CompareTests","Version":"1.2","Title":"Correct for Verification Bias in Diagnostic Accuracy & Agreement","Description":"A standard test is observed on all specimens. We treat the second test (or sampled test) as being conducted on only a stratified sample of specimens. Verification Bias is this situation when the specimens for doing the second (sampled) test is not under investigator control. We treat the total sample as stratified two-phase sampling and use inverse probability weighting. We estimate diagnostic accuracy (category-specific classification probabilities; for binary tests reduces to specificity and sensitivity, and also predictive values) and agreement statistics (percent agreement, percent agreement by category, Kappa (unweighted), Kappa (quadratic weighted) and symmetry tests (reduces to McNemar's test for binary tests)). See: Katki HA, Li Y, Edelstein DW, Castle PE. Estimating the agreement and diagnostic accuracy of two diagnostic tests when one test is conducted on only a subsample of specimens. Stat Med. 2012 Feb 28; 31(5) .","Published":"2017-02-06","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"comparison","Version":"1.0-4","Title":"Multivariate likelihood ratio calculation and evaluation","Description":"Functions for calculating and evaluating likelihood ratios from uni/multivariate continuous observations","Published":"2013-11-05","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"compeir","Version":"1.0","Title":"Event-specific incidence rates for competing risks data","Description":"The package enables to compute event-specific incidence\n rates for competing risks data, to compute rate ratios,\n event-specific incidence proportions and cumulative incidence\n functions from these, and to plot these in a comprehensive\n multi-state type graphic.","Published":"2011-03-09","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"compendiumdb","Version":"1.0.3","Title":"Tools for Retrieval and Storage of Functional Genomics Data","Description":"Package for the systematic retrieval and storage of\n functional genomics data via a MySQL database.","Published":"2015-10-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"compete","Version":"0.1","Title":"Analyzing Social Hierarchies","Description":"Organizing and Analyzing Social Dominance\n Hierarchy Data.","Published":"2016-06-17","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"CompetingRisk","Version":"1.0","Title":"The Semi-Parametric Cumulative Incidence Function","Description":"Computing the point estimator and pointwise confidence interval of the cumulative incidence function from the cause-specific hazards model.","Published":"2017-03-02","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"CompGLM","Version":"1.0","Title":"Conway-Maxwell-Poisson GLM and distribution functions","Description":"The package contains a function (which uses a similar interface to\n the `glm' function) for the fitting of a Conway-Maxwell-Poisson GLM. There\n are also various methods for analysis of the model fit. The package also\n contains functions for the Conway-Maxwell-Poisson distribution in a similar\n interface to functions `dpois', `ppois' and `rpois'. The functions are\n generally quick, since the workhorse functions are written in C++ (thanks\n to the Rcpp package).","Published":"2014-07-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"compHclust","Version":"1.0-3","Title":"Complementary Hierarchical Clustering","Description":"Performs the complementary hierarchical clustering procedure and returns X' (the expected residual matrix) and a vector of the relative gene importances.","Published":"2017-05-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"Compind","Version":"1.2","Title":"Composite Indicators Functions","Description":"Contains several functions to enhance approaches to the Composite Indicators methods, focusing, in particular, on the normalisation and weighting-aggregation steps.","Published":"2017-06-20","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"complexity","Version":"1.1.1","Title":"Calculate the Proportion of Permutations in Line with an\nInformative Hypothesis","Description":"Allows for the easy computation of complexity: the proportion of the parameter space in line with the hypothesis by chance. The package comes with a Shiny application in which the calculations can be conducted as well. ","Published":"2017-04-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"complexplus","Version":"2.1","Title":"Functions of Complex or Real Variable","Description":"Extension of several functions to the complex domain, including the matrix exponential and logarithm, and the determinant.","Published":"2017-05-18","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"complmrob","Version":"0.6.1","Title":"Robust Linear Regression with Compositional Data as Covariates","Description":"Provides functionality to perform robust regression\n on compositional data. To get information on the distribution of the\n estimates, various bootstrapping methods are implemented for the\n compositional as well as for standard robust regression models, to provide\n a direct comparison between them.","Published":"2015-10-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"CompLognormal","Version":"3.0","Title":"Functions for actuarial scientists","Description":"Computes the probability density function, cumulative distribution function, quantile function, random numbers of any composite model based on the lognormal distribution.","Published":"2013-08-04","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"compoisson","Version":"0.3","Title":"Conway-Maxwell-Poisson Distribution","Description":"Provides routines for density and moments of the\n Conway-Maxwell-Poisson distribution as well as functions for\n fitting the COM-Poisson model for over/under-dispersed count\n data.","Published":"2012-10-29","License":"BSD","snapshot_date":"2017-06-23"}
{"Package":"COMPoissonReg","Version":"0.4.1","Title":"Conway-Maxwell Poisson (COM-Poisson) Regression","Description":"Fit Conway-Maxwell Poisson (COM-Poisson or CMP) regression models\n to count data (Sellers & Shmueli, 2010) . The\n package provides functions for model estimation, dispersion testing, and\n diagnostics. Zero-inflated CMP regression (Sellers & Raim, 2016)\n is also supported.","Published":"2017-05-03","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"}
{"Package":"Compositional","Version":"2.4","Title":"Compositional Data Analysis","Description":"Regression, classification, contour plots, hypothesis testing, fitting of distributions are the main function included.","Published":"2017-05-29","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"compositions","Version":"1.40-1","Title":"Compositional Data Analysis","Description":"The package provides functions for the consistent analysis of\n compositional data (e.g. portions of substances) and positive numbers\n (e.g. concentrations) in the way proposed by Aitchison and Pawlowsky-Glahn.","Published":"2014-06-07","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"compound.Cox","Version":"3.3","Title":"Estimation, Gene Selection, and Survival Prediction Based on the\nCompound Covariate Method Under the Cox Proportional Hazard\nModel","Description":"Estimation, gene selection, and survival prediction based on the compound covariate method under the Cox model with high-dimensional gene expressions.\n Available are survival data for non-small-cell lung cancer patients with gene expressions (Chen et al 2007 New Engl J Med) ,\n statistical methods in Emura et al (2012 PLoS ONE) and\n Emura & Chen (2016 Stat Methods Med Res) . Algorithms for generating correlated gene expressions are also available.","Published":"2017-03-18","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"Compounding","Version":"1.0.2","Title":"Computing Continuous Distributions","Description":"Computing Continuous Distributions Obtained by Compounding\n a Continuous and a Discrete Distribution","Published":"2013-02-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"CompQuadForm","Version":"1.4.3","Title":"Distribution Function of Quadratic Forms in Normal Variables","Description":"Computes the distribution function of quadratic forms in normal variables using Imhof's method, Davies's algorithm, Farebrother's algorithm or Liu et al.'s algorithm.","Published":"2017-04-12","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"CompR","Version":"1.0","Title":"Paired Comparison Data Analysis","Description":"Different tools for describing and analysing paired comparison data are presented. Main methods are estimation of products scores according Bradley Terry Luce model. A segmentation of the individual could be conducted on the basis of a mixture distribution approach. The number of classes can be tested by the use of Monte Carlo simulations. This package deals also with multi-criteria paired comparison data. ","Published":"2015-07-01","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"CompRandFld","Version":"1.0.3-4","Title":"Composite-Likelihood Based Analysis of Random Fields","Description":"A set of procedures for the analysis of Random Fields using likelihood and non-standard likelihood methods is provided. Spatial analysis often involves dealing with large dataset. Therefore even simple studies may be too computationally demanding. Composite likelihood inference is emerging as a useful tool for mitigating such computational problems. This methodology shows satisfactory results when compared with other techniques such as the tapering method. Moreover, composite likelihood (and related quantities) have some useful properties similar to those of the standard likelihood.","Published":"2015-02-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"compute.es","Version":"0.2-4","Title":"Compute Effect Sizes","Description":"This package contains several functions for calculating the most\n widely used effect sizes (ES), along with their variances, confidence\n intervals and p-values. The output includes ES's of d (mean difference), g\n (unbiased estimate of d), r (correlation coefficient), z' (Fisher's z), and\n OR (odds ratio and log odds ratio). In addition, NNT (number needed to\n treat), U3, CLES (Common Language Effect Size) and Cliff's Delta are\n computed. This package uses recommended formulas as described in The\n Handbook of Research Synthesis and Meta-Analysis (Cooper, Hedges, &\n Valentine, 2009).","Published":"2014-09-16","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"Conake","Version":"1.0","Title":"Continuous Associated Kernel Estimation","Description":"Continuous smoothing of probability density function on a compact or semi-infinite support is performed using four continuous associated kernels: extended beta, gamma, lognormal and reciprocal inverse Gaussian. The cross-validation technique is also implemented for bandwidth selection.","Published":"2015-03-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"concatenate","Version":"1.0.0","Title":"Human-Friendly Text from Unknown Strings","Description":"Simple functions for joining strings. Construct human-friendly messages whose elements aren't known in advance, like in stop, warning, or message, from clean code.","Published":"2016-05-08","License":"GPL (>= 3.2)","snapshot_date":"2017-06-23"}
{"Package":"conclust","Version":"1.1","Title":"Pairwise Constraints Clustering","Description":"There are 4 main functions in this package: ckmeans(), lcvqe(), mpckm() and ccls(). They take an unlabeled dataset and two lists of must-link and cannot-link constraints as input and produce a clustering as output.","Published":"2016-08-15","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"ConConPiWiFun","Version":"0.4.6","Title":"Optimisation with Continuous Convex Piecewise (Linear and\nQuadratic) Functions","Description":"Continuous convex piecewise linear (ccpl) resp. quadratic (ccpq) functions can be implemented with sorted breakpoints and slopes. This includes functions that are ccpl (resp. ccpq) on a convex set (i.e. an interval or a point) and infinite out of the domain. These functions can be very useful for a large class of optimisation problems. Efficient manipulation (such as log(N) insertion) of such data structure is obtained with map standard template library of C++ (that hides balanced trees). This package is a wrapper on such a class based on Rcpp modules. ","Published":"2015-11-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"concor","Version":"1.0-0.1","Title":"Concordance","Description":"The four functions svdcp (cp for column partitioned),\n svdbip or svdbip2 (bip for bi-partitioned), and svdbips (s for\n a simultaneous optimization of one set of r solutions),\n correspond to a \"SVD by blocks\" notion, by supposing each block\n depending on relative subspaces, rather than on two whole\n spaces as usual SVD does. The other functions, based on this\n notion, are relative to two column partitioned data matrices x\n and y defining two sets of subsets xi and yj of variables and\n amount to estimate a link between xi and yj for the pair (xi,\n yj) relatively to the links associated to all the other pairs.","Published":"2012-10-29","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"concordance","Version":"1.6","Title":"Product Concordance","Description":"A set of utilities for matching products in different classification codes used in international trade research. It supports concordance between HS (Combined), ISIC Rev. 2,3, and SITC1,2,3,4 product classification codes, as well as BEC, NAICS, and SIC classifications. It also provides code nomenclature / descriptions look-up, Rauch classification look-up (via concordance to SITC2) and trade elasticity look-up (via concordance to SITC2/3 or HS3.ss).","Published":"2016-01-11","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"concreg","Version":"0.6","Title":"Concordance Regression","Description":"Implements concordance regression which can be used to estimate generalized odds of concordance.\n\tCan be used for non- and semi-parametric survival analysis with non-proportional hazards, for binary and \n for continuous outcome data.","Published":"2016-12-22","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"cond","Version":"1.2-3","Title":"Approximate conditional inference for logistic and loglinear\nmodels","Description":"Higher order likelihood-based inference for logistic and \n loglinear models","Published":"2014-06-27","License":"GPL (>= 2) | file LICENCE","snapshot_date":"2017-06-23"}
{"Package":"condformat","Version":"0.6.0","Title":"Conditional Formatting in Data Frames","Description":"Apply and visualize conditional formatting to data frames in R.\n It renders a data frame with cells formatted according to\n criteria defined by rules, using a syntax similar to 'ggplot2'. The table is\n printed either opening a web browser or within the 'RStudio' viewer if\n available. The conditional formatting rules allow to highlight cells\n matching a condition or add a gradient background to a given column. This\n package supports both 'HTML' and 'LaTeX' outputs in 'knitr' reports, and\n exporting to an 'xlsx' file.","Published":"2017-05-18","License":"BSD_3_clause + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"condGEE","Version":"0.1-4","Title":"Parameter estimation in conditional GEE for recurrent event gap\ntimes","Description":"Solves for the mean parameters, the variance parameter, and their asymptotic variance in a conditional GEE for recurrent event gap times, as described by Clement and Strawderman (2009) in the journal Biostatistics. Makes a parametric assumption for the length of the censored gap time.","Published":"2013-08-17","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"condir","Version":"0.1.1","Title":"Computation of P Values and Bayes Factors for Conditioning Data","Description":"Set of functions for the easy analyses of conditioning data.","Published":"2017-02-15","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"conditions","Version":"0.1","Title":"Standardized Conditions for R","Description":"Implements specialized conditions, i.e., typed errors,\n warnings and messages. Offers a set of standardized conditions (value error,\n deprecated warning, io message, ...) in the fashion of Python's built-in\n exceptions.","Published":"2017-01-18","License":"BSD_2_clause + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"condmixt","Version":"1.0","Title":"Conditional Density Estimation with Neural Network Conditional\nMixtures","Description":"Conditional density estimation with mixtures for\n heavy-tailed distributions","Published":"2012-05-01","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"condMVNorm","Version":"2015.2-1","Title":"Conditional Multivariate Normal Distribution","Description":"Computes conditional multivariate normal probabilities, random deviates and densities.","Published":"2015-02-11","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"CONDOP","Version":"1.0","Title":"Condition-Dependent Operon Predictions","Description":"An implementation of the computational strategy for the\n comprehensive analysis of condition-dependent operon maps in prokaryotes\n proposed by Fortino et al. (2014) . \n It uses RNA-seq transcriptome profiles to improve prokaryotic operon map inference.","Published":"2016-02-24","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"CondReg","Version":"0.20","Title":"Condition Number Regularized Covariance Estimation","Description":"Based on\n \\url{http://statistics.stanford.edu/~ckirby/techreports/GEN/2012/2012-10.pdf}","Published":"2014-07-10","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"condSURV","Version":"2.0.1","Title":"Estimation of the Conditional Survival Function for Ordered\nMultivariate Failure Time Data","Description":"Method to implement some newly developed methods for the\n estimation of the conditional survival function.","Published":"2016-12-21","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"condvis","Version":"0.4-1","Title":"Conditional Visualization for Statistical Models","Description":"Exploring fitted models by interactively taking 2-D and 3-D\n sections in data space.","Published":"2016-10-18","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"coneproj","Version":"1.11","Title":"Primal or Dual Cone Projections with Routines for Constrained\nRegression","Description":"Routines doing cone projection and quadratic programming, as well as doing estimation and inference for constrained parametric regression and shape-restricted regression problems.","Published":"2016-09-01","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"conf.design","Version":"2.0.0","Title":"Construction of factorial designs","Description":"This small library contains a series of simple tools for\n constructing and manipulating confounded and fractional\n factorial designs.","Published":"2013-02-23","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"confidence","Version":"1.1-0","Title":"Confidence Estimation of Environmental State Classifications","Description":"Functions for estimating and reporting multiyear averages and\n corresponding confidence intervals and distributions. A potential use case\n is reporting the chemical and ecological status of surface waters according\n to the European Water Framework Directive.","Published":"2014-10-22","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"config","Version":"0.2","Title":"Manage Environment Specific Configuration Values","Description":"Manage configuration values across multiple environments (e.g.\n development, test, production). Read values using a function that determines\n the current environment and returns the appropriate value.","Published":"2016-08-02","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"configr","Version":"0.3.0","Title":"An Implementation of Parsing and Writing Configuration File\n(JSON/INI/YAML/TOML)","Description":"\n Implements the JSON, INI, YAML and TOML parser for R setting and writing of configuration file. The functionality of this package is similar to that of package 'config'. ","Published":"2017-06-22","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"confinterpret","Version":"0.2.0","Title":"Descriptive Interpretations of Confidence Intervals","Description":"Produces descriptive interpretations of confidence intervals.\n Includes (extensible) support for various test types, specified as sets\n of interpretations dependent on where the lower and upper confidence limits\n sit.","Published":"2017-05-11","License":"AGPL-3","snapshot_date":"2017-06-23"}
{"Package":"conformal","Version":"0.2","Title":"Conformal Prediction for Regression and Classification","Description":"Implementation of conformal prediction using caret models for classification and regression.","Published":"2016-03-07","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"ConfoundedMeta","Version":"1.1.0","Title":"Sensitivity Analyses for Unmeasured Confounding in Meta-Analyses","Description":"Conducts sensitivity analyses for unmeasured confounding in\n random-effects meta-analysis per Mathur & VanderWeele (in preparation).\n Given output from a random-effects meta-analysis with a relative risk\n outcome, computes point estimates and inference for: (1) the proportion\n of studies with true causal effect sizes more extreme than a specified threshold\n of scientific significance; and (2) the minimum bias factor and confounding\n strength required to reduce to less than a specified threshold the proportion\n of studies with true effect sizes of scientifically significant size.\n Creates plots and tables for visualizing these metrics across a range of bias values.\n Provides tools to easily scrape study-level data from a published forest plot or \n summary table to obtain the needed estimates when these are not reported. ","Published":"2017-06-12","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"confreq","Version":"1.5.1","Title":"Configural Frequencies Analysis Using Log-Linear Modeling","Description":"Offers several functions for Configural Frequencies\n Analysis (CFA), which is a useful statistical tool for the analysis of\n multiway contingency tables. CFA was introduced by G. A. Lienert as\n 'Konfigurations Frequenz Analyse - KFA'. Lienert, G. A. (1971). \n Die Konfigurationsfrequenzanalyse: I. Ein neuer Weg zu Typen und Syndromen. \n Zeitschrift für Klinische Psychologie und Psychotherapie, 19(2), 99–115.","Published":"2016-12-28","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"confSAM","Version":"0.1","Title":"Estimates and Bounds for the False Discovery Proportion, by\nPermutation","Description":"For multiple testing.\n Computes estimates and confidence bounds for the\n False Discovery Proportion (FDP), the fraction of false positives among\n all rejected hypotheses.\n The methods in the package use permutations of the data. Doing so, they\n take into account the dependence structure in the data.","Published":"2017-01-18","License":"GNU General Public License","snapshot_date":"2017-06-23"}
{"Package":"congressbr","Version":"0.1.1","Title":"Downloads, Unpacks and Tidies Legislative Data from the\nBrazilian Federal Senate and Chamber of Deputies","Description":"Downloads and tidies data from the Brazilian Federal Senate and Chamber of Deputies Application Programming Interfaces available at and respectively.","Published":"2017-06-20","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"conicfit","Version":"1.0.4","Title":"Algorithms for Fitting Circles, Ellipses and Conics Based on the\nWork by Prof. Nikolai Chernov","Description":"Geometric circle fitting with Levenberg-Marquardt (a, b, R), Levenberg-Marquardt reduced (a, b), Landau, Spath and Chernov-Lesort. Algebraic circle fitting with Taubin, Kasa, Pratt and Fitzgibbon-Pilu-Fisher. Geometric ellipse fitting with ellipse LMG (geometric parameters) and conic LMA (algebraic parameters). Algebraic ellipse fitting with Fitzgibbon-Pilu-Fisher and Taubin.","Published":"2015-10-05","License":"GPL (>= 3)","snapshot_date":"2017-06-23"}
{"Package":"conics","Version":"0.3","Title":"Plot Conics","Description":"plot conics (ellipses, hyperbolas, parabolas)","Published":"2013-12-10","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"Conigrave","Version":"0.1.1","Title":"Flexible Tools for Multiple Imputation","Description":"Provides a set of tools that can be used across 'data.frame' and\n 'imputationList' objects.","Published":"2017-03-13","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"conjoint","Version":"1.39","Title":"Conjoint analysis package","Description":"Conjoint is a simple package that implements a conjoint\n analysis method to measure the preferences.","Published":"2013-08-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"ConjointChecks","Version":"0.0.9","Title":"A package to check the cancellation axioms of conjoint\nmeasurement","Description":"Implementation of a procedure (Domingue, 2012; see also\n Karabatsos, 2001 and Kyngdon, 2011) to test the single and\n double cancellation axioms of conjoint measure in data that is\n dichotomously coded and measured with error.","Published":"2012-12-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"connect3","Version":"0.1.0","Title":"A Tool for Reproducible Research by Converting 'LaTeX' Files\nGenerated by R Sweave to Rich Text Format Files","Description":"Converts 'LaTeX' files (with extension '.tex') generated by R Sweave using package 'knitr' to Rich Text Format files (with extension '.rtf'). Rich Text Format files can be read and written by most word processors.","Published":"2015-12-05","License":"GPL","snapshot_date":"2017-06-23"}
{"Package":"ConnMatTools","Version":"0.3.3","Title":"Tools for Working with Connectivity Data","Description":"Collects several different methods for analyzing and\n working with connectivity data in R. Though primarily oriented towards\n marine larval dispersal, many of the methods are general and useful for\n terrestrial systems as well.","Published":"2016-11-02","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"conover.test","Version":"1.1.4","Title":"Conover-Iman Test of Multiple Comparisons Using Rank Sums","Description":"Computes the Conover-Iman test (1979) for stochastic dominance and reports the results among multiple pairwise comparisons after a Kruskal-Wallis test for stochastic dominance among k groups (Kruskal and Wallis, 1952). The interpretation of stochastic dominance requires an assumption that the CDF of one group does not cross the CDF of the other. conover.test makes k(k-1)/2 multiple pairwise comparisons based on Conover-Iman t-test-statistic of the rank differences. The null hypothesis for each pairwise comparison is that the probability of observing a randomly selected value from the first group that is larger than a randomly selected value from the second group equals one half; this null hypothesis corresponds to that of the Wilcoxon-Mann-Whitney rank-sum test. Like the rank-sum test, if the data can be assumed to be continuous, and the distributions are assumed identical except for a difference in location, Conover-Iman test may be understood as a test for median difference. conover.test accounts for tied ranks. The Conover-Iman test is strictly valid if and only if the corresponding Kruskal-Wallis null hypothesis is rejected.","Published":"2017-04-04","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"ConR","Version":"1.2.1","Title":"Computation of Parameters Used in Preliminary Assessment of\nConservation Status","Description":"Multi-species estimation of geographical range parameters\n\tfor preliminary assessment of conservation status following Criterion B of the \n\tInternational Union for Conservation of Nature (IUCN, \n\tsee ).","Published":"2017-06-13","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"CONS","Version":"0.1.1","Title":"Consonance Analysis Module","Description":"Consonance Analysis is a useful numerical and graphical approach\n for evaluating the consistency of the measurements and the panel of people\n involved in sensory evaluation. It makes use of several uni and multivariate\n techniques either graphical or analytical. It shows the implementation of this\n procedure in a graphical interface.","Published":"2017-03-09","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"ConSpline","Version":"1.1","Title":"Partial Linear Least-Squares Regression using Constrained\nSplines","Description":"Given response y, continuous predictor x, and covariate matrix, the relationship between E(y) and x is estimated with a shape constrained regression spline. Function outputs fits and various types of inference.","Published":"2015-08-29","License":"GPL-2 | GPL-3","snapshot_date":"2017-06-23"}
{"Package":"ConsRank","Version":"2.0.1","Title":"Compute the Median Ranking(s) According to the Kemeny's\nAxiomatic Approach","Description":"Compute the median ranking according to the Kemeny's axiomatic approach. \n Rankings can or cannot contain ties, rankings can be both complete or incomplete. \n The package contains both branch-and-bound algorithms and heuristic solutions recently proposed.\n The package also provide some useful utilities for deal with preference rankings.\n Essential references:\n Emond, E.J., and Mason, D.W. (2002) ; \n D'Ambrosio, A., Amodio, S., and Iorio, C. (2015) ; \n Amodio, S., D'Ambrosio, A., and Siciliano R. (2016) ; \n D'Ambrosio, A., Mazzeo, G., Iorio, C., and Siciliano, R. (2017) .","Published":"2017-04-28","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"constrainedKriging","Version":"0.2.4","Title":"Constrained, Covariance-Matching Constrained and Universal Point\nor Block Kriging","Description":"Provides functions for\n efficient computations of nonlinear spatial predictions with\n local change of support. This package supplies functions for\n tow-dimensional spatial interpolation by constrained,\n covariance-matching constrained and universal (external drift)\n kriging for points or block of any shape for data with a\n nonstationary mean function and an isotropic weakly stationary\n variogram. The linear spatial interpolation methods,\n constrained and covariance-matching constrained kriging,\n provide approximately unbiased prediction for nonlinear target\n values under change of support. This package\n extends the range of geostatistical tools available in R and\n provides a veritable alternative to conditional simulation for\n nonlinear spatial prediction problems with local change of\n support.","Published":"2015-04-30","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"ContaminatedMixt","Version":"1.1","Title":"Model-Based Clustering and Classification with the Multivariate\nContaminated Normal Distribution","Description":"Fits mixtures of multivariate contaminated normal distributions\n (with eigen-decomposed scale matrices) via the expectation conditional-\n\tmaximization algorithm under a clustering or classification paradigm.","Published":"2017-02-14","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"contfrac","Version":"1.1-10","Title":"Continued Fractions","Description":"Various utilities for evaluating continued fractions.","Published":"2016-05-26","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"conting","Version":"1.6","Title":"Bayesian Analysis of Contingency Tables","Description":"Bayesian analysis of complete and incomplete contingency tables.","Published":"2016-08-11","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"contoureR","Version":"1.0.5","Title":"Contouring of Non-Regular Three-Dimensional Data","Description":"Create contour lines for a non regular series of points, potentially from a non-regular canvas.","Published":"2015-08-25","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"ContourFunctions","Version":"0.1.0","Title":"Create Contour Plots from Data or a Function","Description":"Provides functions for making contour plots.\n The contour plot can be created from grid data, a function,\n or a data set. If non-grid data is given, then a Gaussian\n process is fit to the data and used to create the contour plot.","Published":"2017-05-04","License":"GPL-3","snapshot_date":"2017-06-23"}
{"Package":"contrast","Version":"0.21","Title":"A Collection of Contrast Methods","Description":"One degree of freedom contrasts for lm, glm, gls, and geese objects.","Published":"2016-09-22","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"controlTest","Version":"1.0","Title":"Median Comparison for Two-Sample Right-Censored Survival Data","Description":"Nonparametric two-sample procedure for comparing the median survival time. ","Published":"2015-06-17","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"ConvCalendar","Version":"1.2","Title":"Converts dates between calendars","Description":"Converts between the Date class and d/m/y for several\n calendars, including Persian, Islamic, and Hebrew","Published":"2013-04-02","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"ConvergenceConcepts","Version":"1.2.1","Title":"Seeing Convergence Concepts in Action","Description":"This is a pedagogical package, designed to help students understanding convergence of\n random variables. It provides a way to investigate interactively various modes of\n\t convergence (in probability, almost surely, in law and in mean) of a sequence of i.i.d.\n\t random variables. Visualisation of simulated sample paths is possible through interactive\n\t plots. The approach is illustrated by examples and exercises through the function\n\t 'investigate', as described in\n\t Lafaye de Micheaux and Liquet (2009) .\n\t The user can study his/her own sequences of random variables.","Published":"2017-04-15","License":"GPL (>= 2)","snapshot_date":"2017-06-23"}
{"Package":"convertGraph","Version":"0.1","Title":"Convert Graphical Files Format","Description":"Converts graphical file formats (SVG,\n PNG, JPEG, BMP, GIF, PDF, etc) to one another. The exceptions are the\n SVG file format that can only be converted to other formats and in contrast,\n PDF format, which can only be created from others graphical formats.\n The main purpose of the package was to provide a solution for converting SVG\n file format to PNG which is often needed for exporting graphical files\n produced by R widgets.","Published":"2016-04-16","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"convertr","Version":"0.1","Title":"Convert Between Units","Description":"Provides conversion functionality between a broad range of\n scientific, historical, and industrial unit types.","Published":"2016-10-13","License":"MIT + file LICENSE","snapshot_date":"2017-06-23"}
{"Package":"convevol","Version":"1.0","Title":"Quantifies and assesses the significance of convergent evolution","Description":"Quantifies and assesses the significance of convergent evolution.","Published":"2014-12-22","License":"GPL-2","snapshot_date":"2017-06-23"}
{"Package":"convexjlr","Version":"0.5.1","Title":"Disciplined Convex Programming in R using Convex.jl","Description":"Package convexjlr provides a simple high-level wrapper for\n Julia package 'Convex.jl' (see