Entropy. 2024 Dec 20;26(12):1114. doi: 10.3390/e26121114
Algorithm 2 Fine-Tune Language Model.
 1: Input: pre_trained_model, children_stories_dataset
 2: Output: fine_tuned_model
 3: Begin
 4:   Initialize pre-trained model (Mistral, Zephyr, or BERT)
 5:   Load children_stories_dataset with suitable content
 6:   Preprocess dataset
 7:     Tokenize text data
 8:     Normalize text to lower case, remove unwanted symbols
 9:     Split data into training and validation sets
10:   Configure training parameters
11:     Set learning rate, batch size, epochs
12:     Use early stopping to prevent overfitting
13:   Fine-tune model
14:     Train model on training set
15:     Validate on validation set after each epoch
16:   Evaluate the model using validation loss and metrics (ROUGE, METEOR)
17:   Save the fine-tuned model for story generation
18: End
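The steps above can be sketched in plain Python. The helper names (`preprocess`, `split_data`, `fine_tune`) and the simulated per-epoch losses are illustrative assumptions, not part of the paper; an actual run would fine-tune Mistral, Zephyr, or BERT through a training library (e.g. Hugging Face Transformers) rather than the placeholder loop shown here.

```python
import re
import random

def preprocess(stories):
    """Steps 6-8: lower-case, strip unwanted symbols, tokenize."""
    processed = []
    for text in stories:
        text = text.lower()
        text = re.sub(r"[^a-z0-9\s]", "", text)  # remove unwanted symbols
        processed.append(text.split())           # simple whitespace tokenization
    return processed

def split_data(examples, val_fraction=0.2, seed=0):
    """Step 9: split into training and validation sets."""
    rng = random.Random(seed)
    shuffled = examples[:]
    rng.shuffle(shuffled)
    n_val = max(1, int(len(shuffled) * val_fraction))
    return shuffled[n_val:], shuffled[:n_val]

def fine_tune(train_set, val_set, epochs=10, patience=2):
    """Steps 10-15: epoch loop with early stopping on validation loss.
    The loss values here are placeholders standing in for a real
    train/validate step on the model."""
    best_loss, best_epoch, stale = float("inf"), -1, 0
    for epoch in range(epochs):
        # A real train_one_epoch(model, train_set, lr, batch_size) call
        # would go here; we simulate a loss curve that bottoms out at
        # epoch 4 and then rises (overfitting).
        val_loss = abs(epoch - 4) * 0.1 + 0.5
        if val_loss < best_loss:
            best_loss, best_epoch, stale = val_loss, epoch, 0
        else:
            stale += 1
            if stale >= patience:  # early stopping (step 12)
                break
    return best_epoch, best_loss

stories = ["Once upon a time, a fox!", "The moon smiled; stars danced."]
tokens = preprocess(stories)
train, val = split_data(tokens)
epoch, loss = fine_tune(train, val)
print(tokens[0][:3], epoch, round(loss, 2))
```

Early stopping keeps the checkpoint with the lowest validation loss and halts once the loss has failed to improve for `patience` consecutive epochs, which is how step 12 prevents overfitting on a small children's-stories corpus.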