Type: Package
Title: Estimate and Simulate from Location Dependent Marked Point Processes
Version: 1.0.4
Maintainer: Lane Drew <lanetdrew@gmail.com>
Description: A suite of tools for estimating, assessing model fit, simulating from, and visualizing location dependent marked point processes characterized by regularity in the pattern. You provide a reference marked point process, a set of raster images containing location specific covariates, and select the estimation algorithm and type of mark model. 'ldmppr' estimates the process and mark models and allows you to check the appropriateness of the model using a variety of diagnostic tools. Once a satisfactory model fit is obtained, you can simulate from the model and visualize the results. Documentation for the package 'ldmppr' is available in the form of a vignette.
License: GPL (≥ 3)
Encoding: UTF-8
LazyData: true
Imports: Rcpp (≥ 1.0.12), terra, doParallel, xgboost, ranger, parsnip, dials, bundle, recipes, rsample, tune, workflows, magrittr, hardhat, ggplot2, spatstat.geom, spatstat.explore, nloptr, GET, progress, dplyr, future, furrr, yardstick
LinkingTo: Rcpp, RcppArmadillo
URL: https://github.com/lanedrew/ldmppr
BugReports: https://github.com/lanedrew/ldmppr/issues
RoxygenNote: 7.3.2
Suggests: knitr, rmarkdown
VignetteBuilder: knitr
Depends: R (≥ 3.5.0)
NeedsCompilation: yes
Packaged: 2025-02-24 20:44:49 UTC; lanedrew
Author: Lane Drew
Repository: CRAN
Date/Publication: 2025-02-24 21:00:02 UTC
ldmppr: Estimate and Simulate from Location Dependent Marked Point Processes
Description
A suite of tools for estimating, assessing model fit, simulating from, and visualizing location dependent marked point processes characterized by regularity in the pattern. You provide a reference marked point process, a set of raster images containing location specific covariates, and select the estimation algorithm and type of mark model. 'ldmppr' estimates the process and mark models and allows you to check the appropriateness of the model using a variety of diagnostic tools. Once a satisfactory model fit is obtained, you can simulate from the model and visualize the results. Documentation for the package 'ldmppr' is available in the form of a vignette.
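At a glance, the intended workflow strings together the functions documented in this manual; the outline below is a commented sketch (argument lists elided), not a runnable example:
# 1. Build the reference marked point pattern from locations and marks:
#      reference_data <- generate_mpp(locations, marks, xy_bounds)
# 2. Estimate the self-correcting process parameters:
#      estimate_parameters_sc(...) or estimate_parameters_sc_parallel(...)
# 3. Train a location dependent mark model:
#      train_mark_model(...)
# 4. Check the fitted model against the reference pattern:
#      check_model_fit(...)
# 5. Simulate and visualize a new realization:
#      sim <- simulate_mpp(...)
#      plot_mpp(sim$mpp, pattern_type = "simulated")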
Author(s)
Maintainer: Lane Drew <lanetdrew@gmail.com> [copyright holder]
Authors:
Andee Kaplan
See Also
Useful links:
- https://github.com/lanedrew/ldmppr
- Report bugs at https://github.com/lanedrew/ldmppr/issues
Pipe operator
Description
See magrittr::%>%
for details.
Usage
lhs %>% rhs
Arguments
lhs: A value or the magrittr placeholder.
rhs: A function call using the magrittr semantics.
Value
The result of calling 'rhs(lhs)'.
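A minimal illustration using the bundled example data (shown only to demonstrate the operator):
# Pipe the example data into a dplyr verb instead of nesting the call
small_example_data %>% dplyr::select(x, y)
# equivalent to dplyr::select(small_example_data, x, y)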
calculates c_theta
Description
calculates c_theta
Usage
C_theta2_i(xgrid, ygrid, tgrid, data, params, bounds)
Arguments
xgrid: a vector of grid values for x.
ygrid: a vector of grid values for y.
tgrid: a t value.
data: a matrix of data.
params: a vector of parameters.
bounds: a vector of bounds for time, x, and y.
Value
returns the product.
Check the fit of estimated self-correcting model on the reference point pattern dataset
Description
Allows the user to perform global envelope tests for the nonparametric L, F, G, J, E, and V summary functions from the spatstat
package.
These tests serve as a goodness of fit measure for the estimated model relative to the reference dataset of interest.
Usage
check_model_fit(
reference_data,
t_min = 0,
t_max = 1,
sc_params = NULL,
anchor_point = NULL,
raster_list = NULL,
scaled_rasters = FALSE,
mark_model = NULL,
xy_bounds = NULL,
include_comp_inds = FALSE,
thinning = TRUE,
correction = "none",
competition_radius = 15,
n_sim = 2500,
save_sims = TRUE,
verbose = TRUE,
seed = 0
)
Arguments
reference_data: a ppp object for the reference dataset.
t_min: minimum value for time.
t_max: maximum value for time.
sc_params: vector of parameter values corresponding to (alpha_1, beta_1, gamma_1, alpha_2, beta_2, alpha_3, beta_3, gamma_3).
anchor_point: vector of (x,y) coordinates of point to condition on.
raster_list: a list of raster objects.
scaled_rasters: 'TRUE' or 'FALSE' indicating whether the rasters have been scaled.
mark_model: a model object (typically from train_mark_model()).
xy_bounds: a vector of domain bounds (2 for x, 2 for y).
include_comp_inds: 'TRUE' or 'FALSE' indicating whether to generate and use competition indices as covariates.
thinning: 'TRUE' or 'FALSE' indicating whether to use the thinned or unthinned simulated values.
correction: type of correction to apply ("none" or "toroidal").
competition_radius: distance for competition radius if include_comp_inds = TRUE.
n_sim: number of simulated datasets to generate.
save_sims: 'TRUE' or 'FALSE' indicating whether to save and return the simulated datasets.
verbose: 'TRUE' or 'FALSE' indicating whether to show progress of model checking.
seed: an integer value to set the seed for reproducibility.
Details
This function relies on the spatstat
package for the calculation of the point pattern metrics
and the GET
package for the global envelope tests. The L, F, G, J, E, and V functions are a collection of
non-parametric summary statistics that describe the spatial distribution of points and marks in a point pattern.
See the documentation for [spatstat.explore::Lest()], [spatstat.explore::Fest()], [spatstat.explore::Gest()],
[spatstat.explore::Jest()], [spatstat.explore::Emark()], and [spatstat.explore::Vmark()] for more information.
Also, see the [GET::global_envelope_test()] function for more information on the global envelope tests.
Value
a list containing a combined global envelope test, individual global envelope tests for the L, F, G, J, E, and V functions, and simulated metric values (if specified).
References
Baddeley, A., Rubak, E., & Turner, R. (2015). *Spatial Point Patterns: Methodology and Applications with R*. Chapman and Hall/CRC Press, London. ISBN 9781482210200. Available at: https://www.routledge.com/Spatial-Point-Patterns-Methodology-and-Applications-with-R/Baddeley-Rubak-Turner/p/book/9781482210200.
Myllymäki, M., & Mrkvička, T. (2023). GET: Global envelopes in R. arXiv:1911.06583 [stat.ME]. doi:10.48550/arXiv.1911.06583.
Examples
# Note: The example below is provided for illustrative purposes and may take some time to run.
# Load the small example data
data(small_example_data)
# Load the example mark model that previously was trained on the small example data
file_path <- system.file("extdata", "example_mark_model.rds", package = "ldmppr")
mark_model <- bundle::unbundle(readRDS(file_path))
# Load the raster files
raster_paths <- list.files(system.file("extdata", package = "ldmppr"),
pattern = "\\.tif$", full.names = TRUE)
raster_paths <- raster_paths[!grepl("_med\\.tif$", raster_paths)]
rasters <- lapply(raster_paths, terra::rast)
# Scale the rasters
scaled_raster_list <- scale_rasters(rasters)
# Generate the reference pattern
reference_data <- generate_mpp(
locations = small_example_data[, c("x", "y")],
marks = small_example_data$size,
xy_bounds = c(0, 25, 0, 25)
)
# Define an anchor point
M_n <- as.matrix(small_example_data[1, c("x", "y")])
# Specify the estimated parameters of the self-correcting process
# Note: These would generally be estimated using estimate_parameters_sc
# or estimate_parameters_sc_parallel. These values are estimates from
# the small_example_data generating script.
estimated_parameters <- c(
1.42936311, 8.59251417, 0.02153924, 1.89763856,
2.33256061, 1.09522235, 2.66250000, 0.16499789
)
# Check the model fit
example_model_fit <- check_model_fit(
reference_data = reference_data,
t_min = 0,
t_max = 1,
sc_params = estimated_parameters,
anchor_point = M_n,
raster_list = scaled_raster_list,
scaled_rasters = TRUE,
mark_model = mark_model,
xy_bounds = c(0, 25, 0, 25),
include_comp_inds = TRUE,
thinning = TRUE,
correction = "none",
competition_radius = 10,
n_sim = 100,
save_sims = FALSE,
verbose = TRUE,
seed = 90210
)
plot(example_model_fit$combined_env)
calculates sum of values < t
Description
calculates sum of values < t
Usage
conditional_sum(obs_t, eval_t, y)
Arguments
obs_t |
a vector of observed t values. |
eval_t |
a t value. |
y |
a vector of values. |
Value
the conditional sum.
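For intuition, a base-R sketch of what this helper appears to compute, assuming it sums the entries of y whose observed times fall below eval_t (an assumption drawn from the argument descriptions, not the package's exact implementation):
# Assumed base-R equivalent of conditional_sum(obs_t, eval_t, y)
obs_t <- c(0.1, 0.4, 0.7)
eval_t <- 0.5
y <- c(2, 3, 5)
sum(y[obs_t < eval_t])  # 2 + 3 = 5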
calculates sum of values < t
Description
calculates sum of values < t
Usage
conditional_sum_logical(obs_t, eval_t, y)
Arguments
obs_t |
a vector of observed t values. |
eval_t |
a t value. |
y |
a vector of values. |
Value
the conditional sum.
calculates distance in one dim
Description
calculates distance in one dim
Usage
dist_one_dim(eval_t, obs_t)
Arguments
eval_t |
a t value. |
obs_t |
a vector of t values. |
Value
distance between a single t and the vector of all t values.
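A base-R sketch of the quantity described, assuming the one-dimensional distance is the absolute difference (an assumption, not the package's code):
# Assumed base-R equivalent of dist_one_dim(eval_t, obs_t)
eval_t <- 0.5
obs_t <- c(0.1, 0.4, 0.7)
abs(obs_t - eval_t)  # 0.4 0.1 0.2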
Estimate parameters of the self-correcting model using log-likelihood optimization
Description
Estimate the parameters of the self-correcting model using the [nloptr::nloptr()] function given a formatted dataset.
Usage
estimate_parameters_sc(
data,
x_grid = NULL,
y_grid = NULL,
t_grid = NULL,
parameter_inits = NULL,
upper_bounds = NULL,
opt_algorithm = "NLOPT_LN_SBPLX",
nloptr_options = list(maxeval = 400, xtol_rel = 1e-05, maxtime = NULL),
verbose = TRUE
)
Arguments
data |
a matrix or data frame of times and locations in the form (time, x, y). |
x_grid |
a vector of grid values for x. |
y_grid |
a vector of grid values for y. |
t_grid |
a vector of grid values for t. |
parameter_inits |
a vector of parameter initialization values. |
upper_bounds |
a vector of upper bounds for time, x, and y. |
opt_algorithm |
the NLopt algorithm to use for optimization. |
nloptr_options |
a list of named options for [nloptr::nloptr()] including "maxeval", "xtol_rel", and "maxtime". |
verbose |
'TRUE' or 'FALSE' indicating whether to show progress of optimization. |
Details
This function estimates the parameters of the self-correcting model presented in Møller et al. (2016) using the full likelihood. Details regarding the self-correcting model and the estimation procedure can be found in the references.
Value
an [nloptr::nloptr()] object with details of the optimization including solution.
References
Møller, J., Ghorbani, M., & Rubak, E. (2016). Mechanistic spatio-temporal point process models for marked point processes, with a view to forest stand data. Biometrics, 72(3), 687–696. doi:10.1111/biom.12466.
Examples
# Load the small example data
data(small_example_data)
small_example_data <- small_example_data %>%
dplyr::mutate(time = power_law_mapping(size, .5)) %>%
dplyr::select(time, x, y)
# Define the grid values
x_grid <- seq(0, 25, length.out = 5)
y_grid <- seq(0, 25, length.out = 5)
t_grid <- seq(0, 1, length.out = 5)
# Define the parameter initialization values
parameter_inits <- c(1.5, 8.5, .015, 1.5, 3.2, .75, 3, .08)
# Define the upper bounds
upper_bounds <- c(1, 25, 25)
# Estimate the parameters
estimate_parameters_sc(
data = small_example_data,
x_grid = x_grid,
y_grid = y_grid,
t_grid = t_grid,
parameter_inits = parameter_inits,
upper_bounds = upper_bounds,
opt_algorithm = "NLOPT_LN_SBPLX",
nloptr_options = list(
maxeval = 25,
xtol_rel = 1e-2
),
verbose = TRUE
)
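Because the return value is an [nloptr::nloptr()] object, the estimated parameters sit in its solution slot; a sketch assuming the call above were assigned to an object named sc_fit (a hypothetical name):
# sc_fit <- estimate_parameters_sc(...)   # the call above, assigned
# sc_params_hat <- sc_fit$solution        # estimated (alpha_1, ..., gamma_3)
# sc_params_hat can then be supplied as sc_params to simulate_sc(),
# simulate_mpp(), or check_model_fit().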
Estimate parameters of the self-correcting model using log-likelihood maximization in parallel
Description
Estimate the parameters of the self-correcting model using [nloptr::nloptr()] given a set of delta values. Makes use of parallel computation to allow the user to identify the optimal delta value more quickly given available computational resources.
Usage
estimate_parameters_sc_parallel(
data,
x_grid = NULL,
y_grid = NULL,
t_grid = NULL,
delta_values = NULL,
parameter_inits = NULL,
upper_bounds = NULL,
opt_algorithm = "NLOPT_LN_SBPLX",
nloptr_options = list(maxeval = 400, xtol_rel = 1e-05, maxtime = NULL),
verbose = TRUE,
num_cores = parallel::detectCores() - 1,
set_future_plan = FALSE
)
Arguments
data |
a data frame of locations and sizes labelled ("x", "y", "size"). |
x_grid |
a vector of grid values for x. |
y_grid |
a vector of grid values for y. |
t_grid |
a vector of grid values for t. |
delta_values |
a vector of delta values. |
parameter_inits |
a vector of parameter initialization values. |
upper_bounds |
a vector of upper bounds for time, x, and y. |
opt_algorithm |
the NLopt algorithm to use for maximization. |
nloptr_options |
a list of named options for nloptr including "maxeval", "xtol_rel", and "maxtime". |
verbose |
'TRUE' or 'FALSE' indicating whether to show progress of optimization. |
num_cores |
number of cores to use in parallel estimation. |
set_future_plan |
'TRUE' or 'FALSE' indicating whether to change the parallelization plan for use in the function. |
Details
Details regarding the self-correcting model and the estimation procedure can be found in the references.
Value
a list containing the optimal obtained values and the full [nloptr::nloptr()] results for all provided delta values.
References
Møller, J., Ghorbani, M., & Rubak, E. (2016). Mechanistic spatio-temporal point process models for marked point processes, with a view to forest stand data. Biometrics, 72(3), 687–696. doi:10.1111/biom.12466.
Examples
# Note: This function is designed to be run in parallel and may be computationally intensive.
# Load the small example data
data(small_example_data)
# Define the grid values
x_grid <- seq(0, 25, length.out = 5)
y_grid <- seq(0, 25, length.out = 5)
t_grid <- seq(0, 1, length.out = 5)
# Define the delta values
delta_values <- seq(0.25, 1, by = 0.25)
# Define the parameter initialization values
parameter_inits <- c(1.5, 8.5, .015, 1.5, 3.2, .75, 3, .08)
# Define the upper bounds
upper_bounds <- c(1, 25, 25)
# Estimate the parameters in parallel
estimate_parameters_sc_parallel(
data = small_example_data,
x_grid = x_grid,
y_grid = y_grid,
t_grid = t_grid,
delta_values = delta_values,
parameter_inits = parameter_inits,
upper_bounds = upper_bounds,
opt_algorithm = "NLOPT_LN_SBPLX",
nloptr_options = list(
maxeval = 50,
xtol_rel = 1e-2
),
verbose = TRUE,
set_future_plan = TRUE
)
Extract covariate values from a set of rasters
Description
Extract covariate values from a set of rasters
Usage
extract_covars(locations, raster_list)
Arguments
locations |
a data frame of (x,y) locations with names "x" and "y". |
raster_list |
a list of raster objects. |
Value
a matrix of covariates drawn from the provided rasters.
Examples
# Load example raster data
raster_paths <- list.files(system.file("extdata", package = "ldmppr"),
pattern = "\\.tif$", full.names = TRUE
)
rasters <- lapply(raster_paths, terra::rast)
# Scale the rasters
scaled_raster_list <- scale_rasters(rasters)
# Load example locations
locations <- small_example_data %>%
dplyr::select(x, y) %>%
as.matrix()
# Extract covariates
example_covars <- extract_covars(locations, scaled_raster_list)
head(example_covars)
calculates full product for one grid point
Description
calculates full product for one grid point
Usage
full_product(xgrid, ygrid, tgrid, data, params)
Arguments
xgrid |
a vector of grid values for x. |
ygrid |
a vector of grid values for y. |
tgrid |
a t value. |
data |
a matrix of data. |
params |
a vector of parameters. |
Value
returns the product.
calculates full self-correcting log-likelihood
Description
calculates full self-correcting log-likelihood
Usage
full_sc_lhood(xgrid, ygrid, tgrid, tobs, data, params, bounds)
Arguments
xgrid |
a vector of grid values for x. |
ygrid |
a vector of grid values for y. |
tgrid |
a vector of grid values for t. |
tobs |
a vector of observed values for t. |
data |
a matrix of times and locations. |
params |
a vector of parameters. |
bounds |
a vector of bounds for time, x, and y. |
Value
evaluation of full log-likelihood.
Generate a marked process given locations and marks
Description
Creates an object of class "ppp" that represents a marked point pattern in the two-dimensional plane.
Usage
generate_mpp(locations, marks = NULL, xy_bounds = NULL)
Arguments
locations |
a data frame of (x,y) locations with names "x" and "y". |
marks |
a vector of marks. |
xy_bounds |
a vector of domain bounds (2 for x, 2 for y). |
Value
a ppp object with marks.
Examples
# Load example data
data(small_example_data)
# Generate a marked point process
generate_mpp(
locations = small_example_data %>% dplyr::select(x, y),
marks = small_example_data$size,
xy_bounds = c(0, 25, 0, 25)
)
calculates spatio-temporal interaction
Description
calculates spatio-temporal interaction
Usage
interaction_st(data, params)
Arguments
data |
a matrix of times and locations. |
params |
a vector of parameters. |
Value
a vector of interaction probabilities for every point.
Medium Example Data
Description
A medium-sized example dataset consisting of 111 observations in a (50m x 50m) square domain.
Usage
data("medium_example_data")
Format
'medium_example_data': a data frame with 111 rows and 3 columns:
- x: x coordinate
- y: y coordinate
- size: size (used as the mark)
Details
The dataset was generated using the Snodgrass dataset available at https://data.ess-dive.lbl.gov/view/doi:10.15485/2476543.
The full code to generate this dataset is available in the package's 'data_raw' directory.
Source
Real example dataset. Code to generate it can be found in 'data_raw/medium_example_data.R'.
calculates part 1-1 full
Description
calculates part 1-1 full
Usage
part_1_1_full(data, params)
Arguments
data |
a matrix of locations and times. |
params |
a vector of parameters. |
Value
full likelihood evaluation for part 1.
calculates part 1-2 full
Description
calculates part 1-2 full
Usage
part_1_2_full(data, params)
Arguments
data |
a matrix of locations and times. |
params |
a vector of parameters. |
Value
full likelihood evaluation for part 2.
calculates part 1-3
Description
calculates part 1-3
Usage
part_1_3_full(xgrid, ygrid, tgrid, data, params, bounds)
Arguments
xgrid |
a vector of grid values for x. |
ygrid |
a vector of grid values for y. |
tgrid |
a t value. |
data |
a matrix of times and locations. |
params |
a vector of parameters. |
bounds |
a vector of time, x, and y bounds. |
Value
full likelihood evaluation for part 3.
calculates part 1-4
Description
calculates part 1-4
Usage
part_1_4_full(data, params)
Arguments
data |
a matrix of times and locations. |
params |
a vector of parameters. |
Value
full likelihood evaluation for part 4.
calculates part 1 of the likelihood
Description
calculates part 1 of the likelihood
Usage
part_1_full(xgrid, ygrid, tgrid, data, params, bounds)
Arguments
xgrid |
a vector of grid values for x. |
ygrid |
a vector of grid values for y. |
tgrid |
a t value. |
data |
a matrix of times and locations. |
params |
a vector of parameters. |
bounds |
a vector of bounds for time, x, and y. |
Value
full evaluation of first part of likelihood.
calculates part 2 of the likelihood
Description
calculates part 2 of the likelihood
Usage
part_2_full(xgrid, ygrid, tgrid, data, params, bounds)
Arguments
xgrid |
a vector of grid values for x. |
ygrid |
a vector of grid values for y. |
tgrid |
a vector of grid values for t. |
data |
a matrix of times and locations. |
params |
a vector of parameters. |
bounds |
a vector of bounds for time, x, and y. |
Value
full evaluation of second part of likelihood.
Plot a marked point process
Description
Plot a marked point process
Usage
plot_mpp(mpp_data, pattern_type)
Arguments
mpp_data |
ppp object with marks or data frame with columns (x, y, size). |
pattern_type |
type of pattern to plot ("reference" or "simulated"). |
Value
a ggplot object of the marked point process.
Examples
# Load example data
data(small_example_data)
mpp_data <- generate_mpp(
locations = small_example_data %>% dplyr::select(x, y),
marks = small_example_data$size,
xy_bounds = c(0, 25, 0, 25)
)
# Plot the marked point process
plot_mpp(mpp_data, pattern_type = "reference")
Gentle decay (power-law) mapping function from sizes to arrival times
Description
Gentle decay (power-law) mapping function from sizes to arrival times
Usage
power_law_mapping(sizes, delta)
Arguments
sizes |
vector of sizes to be mapped to arrival times. |
delta |
numeric value (greater than 0) for the exponent in the mapping function. |
Value
vector of arrival times.
Examples
# Generate a vector of sizes
sizes <- runif(100, 0, 100)
# Map the sizes to arrival times using a power-law mapping with delta = .5
power_law_mapping(sizes, .5)
Predict values from the mark distribution
Description
Predict values from the mark distribution
Usage
predict_marks(
sim_realization,
raster_list = NULL,
scaled_rasters = FALSE,
mark_model = NULL,
xy_bounds = NULL,
include_comp_inds = FALSE,
competition_radius = 15,
correction = "none"
)
Arguments
sim_realization |
a data frame containing a thinned or unthinned realization from |
raster_list |
a list of raster objects. |
scaled_rasters |
'TRUE' or 'FALSE' indicating whether the rasters have been scaled. |
mark_model |
a model object (typically from |
xy_bounds |
a vector of domain bounds (2 for x, 2 for y). |
include_comp_inds |
'TRUE' or 'FALSE' indicating whether to generate and use competition indices as covariates. |
competition_radius |
distance for competition radius if |
correction |
type of correction to apply ("none" or "toroidal"). |
Value
a vector of predicted mark values.
Examples
# Simulate a realization
generating_parameters <- c(2, 8, .02, 2.5, 3, 1, 2.5, .2)
M_n <- matrix(c(10, 14), ncol = 1)
generated_locs <- simulate_sc(
t_min = 0,
t_max = 1,
sc_params = generating_parameters,
anchor_point = M_n,
xy_bounds = c(0, 25, 0, 25)
)
# Load the raster files
raster_paths <- list.files(system.file("extdata", package = "ldmppr"),
pattern = "\\.tif$", full.names = TRUE
)
rasters <- lapply(raster_paths, terra::rast)
# Scale the rasters
scaled_raster_list <- scale_rasters(rasters)
# Load the example mark model
file_path <- system.file("extdata", "example_mark_model.rds", package = "ldmppr")
example_mark_model <- readRDS(file_path)
# Unbundle the model
mark_model <- bundle::unbundle(example_mark_model)
# Predict the mark values
predict_marks(
sim_realization = generated_locs$thinned,
raster_list = scaled_raster_list,
scaled_rasters = TRUE,
mark_model = mark_model,
xy_bounds = c(0, 25, 0, 25),
include_comp_inds = TRUE,
competition_radius = 10,
correction = "none"
)
Scale a set of rasters
Description
Scale a set of rasters
Usage
scale_rasters(raster_list, reference_resolution = NULL)
Arguments
raster_list |
a list of raster objects. |
reference_resolution |
the resolution to resample the rasters to. |
Value
a list of scaled raster objects.
Examples
# Create two example rasters
rast_a <- terra::rast(
ncol = 10, nrow = 10,
xmin = 0, xmax = 10,
ymin = 0, ymax = 10,
vals = runif(100)
)
rast_b <- terra::rast(
ncol = 10, nrow = 10,
xmin = 0, xmax = 10,
ymin = 0, ymax = 10,
vals = runif(100)
)
# Scale example rasters in a list
rast_list <- list(rast_a, rast_b)
scale_rasters(rast_list)
Simulate the spatial component of the self-correcting model
Description
Simulate the spatial component of the self-correcting model
Usage
sim_spatial_sc(M_n, params, nsim_t, xy_bounds)
Arguments
M_n |
a vector of (x,y)-coordinates for largest point. |
params |
a vector of parameters (alpha_2, beta_2). |
nsim_t |
number of points to simulate. |
xy_bounds |
vector of lower and upper bounds for the domain (2 for x, 2 for y). |
Value
a matrix of point locations in the (x,y)-plane.
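No packaged example accompanies this function; below is a hedged sketch using the anchor point and the (alpha_2, beta_2) values borrowed from the simulate_sc() example elsewhere in this manual:
# Simulate 50 locations on the (0, 25) x (0, 25) domain,
# conditioning on the largest point at (10, 14)
sim_spatial_sc(
  M_n = c(10, 14),
  params = c(2.5, 3),
  nsim_t = 50,
  xy_bounds = c(0, 25, 0, 25)
)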
Simulate the temporal component of the self-correcting model
Description
Simulate the temporal component of the self-correcting model
Usage
sim_temporal_sc(Tmin = 0, Tmax = 1, params = as.numeric(c(0, 0, 0)))
Arguments
Tmin |
minimum time value. |
Tmax |
maximum time value. |
params |
a vector of parameters (alpha_1, beta_1, gamma_1). |
Value
a vector of thinned and unthinned temporal samples.
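Similarly, a hedged sketch using the (alpha_1, beta_1, gamma_1) values borrowed from the simulate_sc() example elsewhere in this manual:
# Simulate arrival times on [0, 1] under the temporal self-correcting model
sim_temporal_sc(Tmin = 0, Tmax = 1, params = c(2, 8, .02))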
Simulate a realization of a location dependent marked point process
Description
Simulate a realization of a location dependent marked point process
Usage
simulate_mpp(
sc_params = NULL,
t_min = 0,
t_max = 1,
anchor_point = NULL,
raster_list = NULL,
scaled_rasters = FALSE,
mark_model = NULL,
xy_bounds = NULL,
include_comp_inds = FALSE,
competition_radius = 15,
correction = "none",
thinning = TRUE
)
Arguments
sc_params |
vector of parameter values corresponding to (alpha_1, beta_1, gamma_1, alpha_2, beta_2, alpha_3, beta_3, gamma_3). |
t_min |
minimum value for time. |
t_max |
maximum value for time. |
anchor_point |
vector of (x,y) coordinates of point to condition on. |
raster_list |
list of raster objects. |
scaled_rasters |
'TRUE' or 'FALSE' indicating whether the rasters have been scaled. |
mark_model |
a model object (typically from |
xy_bounds |
a vector of domain bounds (2 for x, 2 for y). |
include_comp_inds |
'TRUE' or 'FALSE' indicating whether to generate and use competition indices as covariates. |
competition_radius |
distance for competition radius if |
correction |
type of correction to apply ("none" or "toroidal"). |
thinning |
'TRUE' or 'FALSE' indicating whether to thin the realization. |
Value
a list containing the marked point process realization and the data frame of the realization.
Examples
# Specify the generating parameters of the self-correcting process
generating_parameters <- c(2, 8, .02, 2.5, 3, 1, 2.5, .2)
# Specify an anchor point
M_n <- matrix(c(10, 14), ncol = 1)
# Load the raster files
raster_paths <- list.files(system.file("extdata", package = "ldmppr"),
pattern = "\\.tif$", full.names = TRUE
)
raster_paths <- raster_paths[!grepl("_med\\.tif$", raster_paths)]
rasters <- lapply(raster_paths, terra::rast)
# Scale the rasters
scaled_raster_list <- scale_rasters(rasters)
# Load the example mark model
file_path <- system.file("extdata", "example_mark_model.rds", package = "ldmppr")
mark_model <- bundle::unbundle(readRDS(file_path))
# Simulate a realization
example_mpp <- simulate_mpp(
sc_params = generating_parameters,
t_min = 0,
t_max = 1,
anchor_point = M_n,
raster_list = scaled_raster_list,
scaled_rasters = TRUE,
mark_model = mark_model,
xy_bounds = c(0, 25, 0, 25),
include_comp_inds = TRUE,
competition_radius = 10,
correction = "none",
thinning = TRUE
)
# Plot the realization
plot_mpp(example_mpp$mpp, pattern_type = "simulated")
Simulate from the self-correcting model
Description
Allows the user to simulate a realization from the self-correcting model given a set of parameters and a point to condition on.
Usage
simulate_sc(
t_min = 0,
t_max = 1,
sc_params = NULL,
anchor_point = NULL,
xy_bounds = NULL
)
Arguments
t_min |
minimum value for time. |
t_max |
maximum value for time. |
sc_params |
vector of parameter values corresponding to (alpha_1, beta_1, gamma_1, alpha_2, beta_2, alpha_3, beta_3, gamma_3). |
anchor_point |
vector of (x,y) coordinates of point to condition on. |
xy_bounds |
a vector of domain bounds (2 for x, 2 for y). |
Value
a list containing the thinned and unthinned simulation realizations.
Examples
# Specify the generating parameters of the self-correcting process
generating_parameters <- c(2, 8, .02, 2.5, 3, 1, 2.5, .2)
# Specify an anchor point
M_n <- matrix(c(10, 14), ncol = 1)
# Simulate the self-correcting process
generated_locs <- simulate_sc(
t_min = 0,
t_max = 1,
sc_params = generating_parameters,
anchor_point = M_n,
xy_bounds = c(0, 25, 0, 25)
)
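The thinned element of the returned list is what the predict_marks() example in this manual passes on as sim_realization; a quick inspection:
# Inspect the thinned realization
head(generated_locs$thinned)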
Small Example Data
Description
A small example dataset for testing and examples consisting of 121 observations in a (25m x 25m) square domain.
Usage
data("small_example_data")
Format
'small_example_data': a data frame with 121 rows and 3 columns:
- x: x coordinate
- y: y coordinate
- size: size (used as the mark)
Details
The dataset was generated using the example raster data and an exponential decay size function.
The full code to generate this dataset is available in the package's 'data_raw' directory.
Source
Simulated dataset. Code to generate it can be found in 'data_raw/small_example_data.R'.
calculates spatial interaction
Description
calculates spatial interaction
Usage
spat_interaction(Hist, newp, params)
Arguments
Hist |
a matrix of points. |
newp |
a new point vector. |
params |
a vector of parameters. |
Value
calculated probability of new point.
calculates temporal likelihood
Description
calculates temporal likelihood
Usage
temporal_sc(params, eval_t, obs_t)
Arguments
params |
a vector of parameters (alpha_1, beta_1, gamma_1). |
eval_t |
a t value. |
obs_t |
a vector of t values. |
Value
evaluation of full temporal likelihood.
Optimized function to compute toroidal distance matrix over a rectangular domain
Description
Optimized function to compute toroidal distance matrix over a rectangular domain
Usage
toroidal_dist_matrix_optimized(location_matrix, x_bound, y_bound)
Arguments
location_matrix |
a 2 column matrix of (x,y) coordinates. |
x_bound |
the upper bound for the x dimension. |
y_bound |
the upper bound for the y dimension. |
Value
a matrix of toroidal distances.
Examples
# Generate a matrix of locations
location_matrix <- matrix(c(1, 2, 3, 4, 5, 6), ncol = 2)
x_bound <- 10
y_bound <- 10
# Compute the toroidal distance matrix
toroidal_dist_matrix_optimized(location_matrix, x_bound, y_bound)
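For intuition, the wrap-around distance between two points on the rectangular domain can be computed by hand; the base-R sketch below uses the first two rows of location_matrix and should correspond to entry [1, 2] of the matrix returned above:
# Toroidal distance between the first two locations, computed directly
p1 <- location_matrix[1, ]
p2 <- location_matrix[2, ]
dx <- abs(p1[1] - p2[1])
dy <- abs(p1[2] - p2[2])
sqrt(min(dx, x_bound - dx)^2 + min(dy, y_bound - dy)^2)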
Train a flexible model for the mark distribution
Description
Trains a predictive model for the mark distribution of a spatio-temporal process. Allows the user to incorporate location specific information and competition indices as covariates in the mark model.
Usage
train_mark_model(
data,
raster_list = NULL,
scaled_rasters = FALSE,
model_type = "xgboost",
xy_bounds = NULL,
save_model = FALSE,
save_path = NULL,
parallel = TRUE,
include_comp_inds = FALSE,
competition_radius = 15,
correction = "none",
selection_metric = "rmse",
cv_folds = 5,
tuning_grid_size = 200,
verbose = TRUE
)
Arguments
data |
a data frame containing named vectors x, y, size, and time. |
raster_list |
a list of raster objects. |
scaled_rasters |
'TRUE' or 'FALSE' indicating whether the rasters have been scaled. |
model_type |
the machine learning model type ("xgboost" or "random_forest"). |
xy_bounds |
a vector of domain bounds (2 for x, 2 for y). |
save_model |
'TRUE' or 'FALSE' indicating whether to save the generated model. |
save_path |
path for saving the generated model. |
parallel |
'TRUE' or 'FALSE' indicating whether to use parallelization in model training. |
include_comp_inds |
'TRUE' or 'FALSE' indicating whether to generate and use competition indices as covariates. |
competition_radius |
distance for competition radius if |
correction |
type of correction to apply ("none", "toroidal", or "truncation"). |
selection_metric |
metric to use for identifying the optimal model ("rmse" or "mae"). |
cv_folds |
number of cross-validation folds to use in model training. |
tuning_grid_size |
size of the tuning grid for hyperparameter tuning. |
verbose |
'TRUE' or 'FALSE' indicating whether to show progress of model training. |
Value
a list containing the raw trained model and a bundled model object.
Examples
# Load example raster data
raster_paths <- list.files(system.file("extdata", package = "ldmppr"),
pattern = "\\.tif$", full.names = TRUE
)
raster_paths <- raster_paths[!grepl("_med\\.tif$", raster_paths)]
rasters <- lapply(raster_paths, terra::rast)
# Scale the rasters
scaled_raster_list <- scale_rasters(rasters)
# Load example locations
locations <- small_example_data %>%
dplyr::mutate(time = power_law_mapping(size, .5))
# Train the model
train_mark_model(
data = locations,
raster_list = scaled_raster_list,
scaled_rasters = TRUE,
model_type = "xgboost",
xy_bounds = c(0, 25, 0, 25),
parallel = FALSE,
include_comp_inds = FALSE,
competition_radius = 10,
correction = "none",
selection_metric = "rmse",
cv_folds = 3,
tuning_grid_size = 2,
verbose = TRUE
)
calculates euclidean distance
Description
calculates euclidean distance
Usage
vec_dist(x, y)
Arguments
x |
a vector of x values. |
y |
a vector of y values. |
Value
the distance between the two vectors.
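A base-R sketch of the Euclidean distance described, assuming x and y are the coordinate vectors of two points (an assumption drawn from the argument descriptions):
# Assumed base-R equivalent of vec_dist(x, y)
x <- c(0, 0)
y <- c(3, 4)
sqrt(sum((x - y)^2))  # 5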
calculates euclidean distance between a vector and a matrix
Description
calculates euclidean distance between a vector and a matrix
Usage
vec_to_mat_dist(eval_u, x_col, y_col)
Arguments
eval_u |
a vector of x and y coordinates. |
x_col |
a vector of x coordinates. |
y_col |
a vector of y coordinates. |
Value
a vector of distances between a vector and each row of a matrix.
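A base-R sketch of the point-to-rows distances described, assuming eval_u is compared against each (x_col[i], y_col[i]) pair (an assumption drawn from the argument descriptions):
# Assumed base-R equivalent of vec_to_mat_dist(eval_u, x_col, y_col)
eval_u <- c(0, 0)
x_col <- c(3, 6)
y_col <- c(4, 8)
sqrt((x_col - eval_u[1])^2 + (y_col - eval_u[2])^2)  # 5 10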