Where is "DataIterator"?

Here is the “Trainer” signature I found in the current docs. Compared to the old version, it seems the “iterator” parameter was removed. I’m confused: is there a substitute? How can I define the batch_size or anything else?

Trainer(
    model: allennlp.models.model.Model,
    optimizer: torch.optim.optimizer.Optimizer,
    data_loader: torch.utils.data.dataloader.DataLoader,
    patience: Optional[int] = None,
    validation_metric: str = '-loss',
    validation_data_loader: torch.utils.data.dataloader.DataLoader = None,
    num_epochs: int = 20,
    serialization_dir: Optional[str] = None,
    num_serialized_models_to_keep: int = 20,
    keep_serialized_model_every_num_seconds: int = None,
    checkpointer: allennlp.training.checkpointer.Checkpointer = None,
    model_save_interval: float = None,
    cuda_device: int = -1,
    grad_norm: Optional[float] = None,
    grad_clipping: Optional[float] = None,
    learning_rate_scheduler: Optional[allennlp.training.learning_rate_schedulers.learning_rate_scheduler.LearningRateScheduler] = None,
    momentum_scheduler: Optional[allennlp.training.momentum_schedulers.momentum_scheduler.MomentumScheduler] = None,
    summary_interval: int = 100,
    histogram_interval: int = None,
    should_log_parameter_statistics: bool = True,
    should_log_learning_rate: bool = False,
    log_batch_size_period: Optional[int] = None,
    moving_average: Optional[allennlp.training.moving_average.MovingAverage] = None,
    distributed: bool = False,
    local_rank: int = 0,
    world_size: int = 1,
    num_gradient_accumulation_steps: int = 1,
    opt_level: Optional[str] = None,
) -> None

We switched from our own DataIterator implementation to PyTorch’s native DataLoader. See the documentation here: http://docs.allennlp.org/master/api/data/dataloader/ (also take a look at the samplers).
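To answer the batch_size question: with the new API, batch size (along with shuffling and sampling) is configured on the DataLoader itself, which you then pass to the Trainer via the `data_loader` parameter shown in the signature above. A minimal sketch using a plain PyTorch DataLoader with a toy dataset standing in for a dataset of Instances:

```python
from torch.utils.data import DataLoader

# Toy dataset stand-in; in AllenNLP this would be your dataset of Instances.
dataset = list(range(10))

# batch_size, shuffle, sampler, etc. now live on the DataLoader,
# which you then pass to Trainer as `data_loader=`.
loader = DataLoader(dataset, batch_size=4)

for batch in loader:
    print(len(batch))  # 4, 4, 2
```

The AllenNLP DataLoader documented at the link above wraps this PyTorch class, so the same keyword arguments apply when building the loader you hand to the Trainer.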