Working with large datasets

There are three issues to consider when working with large datasets:

(a) efficient programming to speed execution,

(b) storing data externally to limit memory issues,

(c) using specialized statistical routines designed to efficiently analyze massive amounts of data.

Efficient programming

1. Vectorize calculations when possible. Use R’s built-in functions for manipulating vectors, matrices, and lists (for example, sapply, lapply, and mapply) and avoid loops (for and while) when feasible.
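
A rough illustration (the matrix size is arbitrary): summing the rows of a matrix with an explicit for loop versus the vectorized rowSums(), timed with system.time().

```r
x <- matrix(rnorm(1e6), ncol = 10)

# Explicit loop: every iteration is interpreted R code
system.time({
  total <- numeric(nrow(x))
  for (i in 1:nrow(x)) total[i] <- sum(x[i, ])
})

# Vectorized equivalent: the looping happens in compiled code
system.time(total <- rowSums(x))
```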

2. Use matrices rather than data frames (they have less overhead).
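
A quick sketch of the overhead (sizes arbitrary): the same numeric values stored both ways, compared with object.size().

```r
m <- matrix(runif(1e6), ncol = 100)
d <- as.data.frame(m)

object.size(m)   # contiguous numeric storage plus a dim attribute
object.size(d)   # larger: a list of 100 column vectors plus names and row attributes
```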

3. When using the read.table() family of functions to input external data into data frames, specify the colClasses and nrows options explicitly, set comment.char = "", and give "NULL" as the colClasses entry for any column that isn’t needed. This decreases memory usage and speeds up processing considerably. When reading external data into a matrix, use the scan() function instead.
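
A sketch using a hypothetical file big.txt with a header row and four columns, the third of which is skipped:

```r
df <- read.table("big.txt", header = TRUE,
                 nrows = 1e6,        # preallocates storage; a mild overestimate is fine
                 comment.char = "",  # disables comment scanning
                 colClasses = c("integer", "numeric", "NULL", "character"))

# Purely numeric data destined for a matrix reads faster with scan()
m <- matrix(scan("bignum.txt", what = numeric(), quiet = TRUE),
            ncol = 3, byrow = TRUE)
```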

4. Test programs on subsets of the data, in order to optimize code and remove bugs, before attempting a run on the full dataset.
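
For example (the dataset and model are hypothetical), develop against a small random sample before committing to the full run:

```r
# Debug the code path on a 1% sample of the rows
idx <- sample(nrow(fulldata), ceiling(0.01 * nrow(fulldata)))
fit <- lm(y ~ x1 + x2, data = fulldata[idx, ])
```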

5. Delete temporary objects and objects that are no longer needed. The call rm(list=ls()) will remove all objects from memory, providing a clean slate. Specific objects can be removed with rm(object).
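
In practice (object names hypothetical):

```r
scores <- compute_scores(rawdata)   # hypothetical large intermediate result
rm(rawdata)                         # drop the input once it is no longer needed
gc()                                # prompt R to release the freed memory

rm(list = ls())                     # or wipe the whole workspace
```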

6. Use the function .ls.objects(), described in Jeromy Anglim’s blog entry “Memory Management in R: A Few Tips and Tricks” (jeromyanglim.blogspot.com), to list all workspace objects sorted by size (MB). This function will help you find and deal with memory hogs.
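
Anglim’s function is more polished, but the core idea can be sketched in two lines (this is a minimal stand-in, not his implementation):

```r
# List workspace objects by size in megabytes, largest first
sizes <- sapply(ls(), function(nm) object.size(get(nm)) / 1024^2)
sort(sizes, decreasing = TRUE)
```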

7. Profile your programs to see how much time is being spent in each function. You can accomplish this with the Rprof() and summaryRprof() functions. The system.time() function can also help. The profr and proftools packages provide functions that can help in analyzing profiling output.
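
A minimal profiling session (the workload is an arbitrary stand-in):

```r
Rprof("profile.out")                     # start writing sampling data
res <- replicate(50, sort(rnorm(1e5)))   # code to be profiled
Rprof(NULL)                              # stop profiling
summaryRprof("profile.out")$by.self      # time attributed to each function

system.time(sort(rnorm(1e6)))            # quick one-off timing
```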

8. The Rcpp package can be used to transfer R objects to C++ functions and back when more optimized subroutines are needed.
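
A minimal Rcpp sketch: compile a small C++ function inline and call it like any R function.

```r
library(Rcpp)

# Rcpp converts the R numeric vector to a C++ NumericVector and back
cppFunction('
double sumC(NumericVector x) {
  double total = 0;
  for (int i = 0; i < x.size(); ++i) total += x[i];
  return total;
}')

sumC(c(1, 2, 3))   # returns 6
```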

Storing data outside of RAM

There are several packages available for storing data outside of R’s main memory, including the ff and bigmemory packages used by the analytic packages described below.
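
As a minimal sketch of the disk-backed approach, an ff vector lives in a file and is paged into RAM only as needed (the length here is arbitrary):

```r
library(ff)

x <- ff(vmode = "double", length = 1e8)   # ~800 MB on disk, little RAM used
x[1:5] <- rnorm(5)                        # indexed like an ordinary vector
x[1:5]
```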

Analytic packages for large datasets

The biglm and speedglm packages fit linear and generalized linear models to large datasets in a memory-efficient manner, offering lm()- and glm()-type functionality for massive datasets.
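
A sketch of the biglm chunked-fitting pattern; chunk1, chunk2, and the model formula are hypothetical data frames and variables:

```r
library(biglm)

fit <- biglm(y ~ x1 + x2, data = chunk1)   # fit on the first chunk of rows
fit <- update(fit, chunk2)                 # fold in further chunks one at a time
summary(fit)                               # coefficients based on all rows seen
```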

Several packages offer analytic functions for working with the massive matrices produced by the bigmemory package. The biganalytics package offers k-means clustering, column statistics, and a wrapper to biglm. The bigtabulate package provides table(), split(), and tapply() functionality, and the bigalgebra package provides advanced linear algebra functions.
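
For instance (file names and dimensions are hypothetical), a file-backed big.matrix can be analyzed without ever loading it fully into RAM:

```r
library(bigmemory)
library(biganalytics)

x <- filebacked.big.matrix(1e6, 3, type = "double", init = 0,
                           backingfile = "x.bin", descriptorfile = "x.desc")
x[, 1] <- rnorm(1e6)              # columns can be filled piecewise

colmean(x, 1)                     # column statistics on the big.matrix
cl <- bigkmeans(x, centers = 3)   # k-means without copying x into memory
```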

The biglars package offers least-angle regression, lasso, and stepwise regression for datasets that are too large to be held in memory, when used in conjunction with the ff package.
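
A heavily hedged sketch: the biglars.fit() call below (a predictor matrix plus a response vector) is an assumption about the package’s interface, and x and y are hypothetical, so consult the package documentation before relying on it.

```r
library(ff)
library(biglars)

# x: predictor matrix stored via ff; y: numeric response (both hypothetical)
fit <- biglars.fit(x, y, type = "lasso")   # assumed interface; see package docs
```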

The Brobdingnag package can be used to manipulate large numbers (numbers greater than 2^1024, which overflow R’s double-precision arithmetic).
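
For example:

```r
library(Brobdingnag)

2^1024                                # Inf: ordinary doubles overflow here
as.brob(2)^1024                       # representable as a brob number
as.brob(10)^500 * as.brob(10)^400     # arithmetic on huge magnitudes works
```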

For more information, see the CRAN Task View on High-Performance and Parallel Computing with R (cran.r-project.org/web/views).

Original source: https://www.cnblogs.com/buttonwood/archive/2012/07/16/2593953.html
