
Tic-Tac-Toe: The Evergreen Gambit

Where Strategy Takes Root

Welcome to a digital clearing where an ancient game plays out, not just with X's and O's, but with the cunning of a seasoned predator and the efficiency of nature itself. This isn't just Tic-Tac-Toe; it's a glimpse into the strategic heart of a thinking machine, powered by the Minimax algorithm and sharpened by Alpha-Beta Pruning.

The Digital Ecosystem

Imagine the 3x3 grid as your small patch of earth. Two forces contend for dominance:

  • The Player (You): A free-willed entity, perhaps a sudden gust of wind or a flowing stream, making your mark upon the landscape.
  • The AI: A deeply rooted intelligence, like an ancient tree. Its branches (decision paths) spread wide, seeking the most advantageous position.

The Mind of the Forest: Minimax with Alpha-Beta Pruning

Our AI doesn't just play; it calculates, it foresees. It employs the Minimax algorithm, a strategy as old as the hills, to explore every possible future move.

  • It peers down countless potential pathways, evaluating the outcome of each.
  • It assumes you, its opponent, are just as clever, always choosing the path that benefits you most (a minimal sketch of this search follows the list).
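
To make the idea concrete, here is a minimal, illustrative sketch of plain Minimax in Python. It is not necessarily the code in this repository's tic_tac_toe.py; the helper names (winner, empty_cells) and the board-as-list-of-lists representation with " " for empty squares are assumptions made for the example.

    # Minimal Minimax sketch (illustrative; winner() and empty_cells()
    # are assumed helpers, not necessarily this repo's API).
    AI, HUMAN = "O", "X"

    def winner(board):
        rows = [board[r] for r in range(3)]
        cols = [[board[r][c] for r in range(3)] for c in range(3)]
        diagonals = [[board[i][i] for i in range(3)],
                     [board[i][2 - i] for i in range(3)]]
        for line in rows + cols + diagonals:
            if line[0] != " " and line.count(line[0]) == 3:
                return line[0]
        return None

    def empty_cells(board):
        return [(r, c) for r in range(3) for c in range(3) if board[r][c] == " "]

    def minimax(board, maximizing):
        win = winner(board)
        if win == AI:
            return 1   # the AI ('O') has three in a row
        if win == HUMAN:
            return -1  # the player ('X') has three in a row
        if not empty_cells(board):
            return 0   # the land is shared: a draw
        scores = []
        for r, c in empty_cells(board):
            board[r][c] = AI if maximizing else HUMAN
            scores.append(minimax(board, not maximizing))
            board[r][c] = " "  # undo the exploratory move
        return max(scores) if maximizing else min(scores)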

But a forest doesn't waste energy on barren ground. That's where Alpha-Beta Pruning comes in – nature's efficiency embodied in code.

  • Like a wise old tree shedding unnecessary leaves, the AI "prunes" branches of the decision tree that it knows won't lead to a favorable outcome.
  • This allows it to think deeper and faster, conserving its computational "energy" for the most promising strategies (see the pruning sketch below).
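
Building on the sketch above, here is a hedged illustration of the same search with Alpha-Beta Pruning added (again an assumption-laden example, not the repository's exact implementation): alpha tracks the best score the maximizer is already guaranteed, beta the best the minimizer is guaranteed, and any branch that cannot beat those bounds is abandoned.

    # Alpha-Beta variant of the Minimax sketch above (illustrative;
    # reuses the assumed winner() and empty_cells() helpers).
    def alphabeta(board, maximizing, alpha=float("-inf"), beta=float("inf")):
        win = winner(board)
        if win == AI:
            return 1
        if win == HUMAN:
            return -1
        if not empty_cells(board):
            return 0
        best = float("-inf") if maximizing else float("inf")
        for r, c in empty_cells(board):
            board[r][c] = AI if maximizing else HUMAN
            score = alphabeta(board, not maximizing, alpha, beta)
            board[r][c] = " "
            if maximizing:
                best = max(best, score)
                alpha = max(alpha, best)
            else:
                best = min(best, score)
                beta = min(beta, best)
            if beta <= alpha:
                break  # prune: the opponent would never let play reach this branch
        return best

Choosing the AI's actual move is then just a matter of trying each empty cell and keeping the one whose score is best for 'O'.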

How to Play in this Wild Arena

  1. Clone the repository: Bring this digital ecosystem to your local machine.
    git clone https://github.com/Bishal-RD/tic-tac-toe-minimax-alpha-beta.git
    cd tic-tac-toe-minimax-alpha-beta
    (Note: If you have forked this repository, use your fork's URL instead.)
  2. Navigate to your chosen arena: Decide if you want to face the Minimax AI or the Alpha-Beta Pruning enhanced AI.
    • For Minimax: cd minimax
    • For Alpha-Beta: cd alpha_beta
  3. Run the game: Execute the Python script to start the challenge.
    python3 tic_tac_toe.py
    You'll see a welcome message: "You are 'X' and the AI is 'O'."
  4. Make your mark on the landscape (Your Turn):
    • The game will prompt you with: Enter your move (row space column):
    • To place your 'X', type the row number (0, 1, or 2) followed by a space, and then the column number (0, 1, or 2).
    • For example, to choose the center square, you would type: 1 1 and press Enter.
    • If you try to choose an occupied square or enter invalid input, the game will ask you to try again (a sketch of such an input loop appears after these steps).
  5. Witness the AI's response: Observe as the "ancient tree" (our AI) calculates and places its 'O'. Its moves are guided by the strategic depths of Minimax, further refined by Alpha-Beta Pruning if you're in that arena.
  6. Continue until a victor emerges or the land is shared (Draw): The game will announce the outcome.
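
For the curious, here is a minimal sketch of how the "row space column" prompt from step 4 might be read and validated. It assumes a 3x3 list-of-lists board with " " for empty squares; the actual tic_tac_toe.py may handle input differently.

    # Hypothetical input loop for step 4 (illustrative, not the repo's exact code).
    def ask_move(board):
        while True:
            raw = input("Enter your move (row space column): ").split()
            if len(raw) != 2 or not all(part.isdigit() for part in raw):
                print("Invalid input. Use two numbers, e.g. '1 1' for the center.")
                continue
            row, col = int(raw[0]), int(raw[1])
            if not (0 <= row <= 2 and 0 <= col <= 2):
                print("Row and column must each be 0, 1, or 2. Try again.")
                continue
            if board[row][col] != " ":
                print("That square is already occupied. Try again.")
                continue
            return row, col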

The Unfolding Patterns

Whether you achieve victory, face a draw, or are outmaneuvered, each game is a unique pattern unfolding, like frost on a windowpane or ripples in a pond. Observe, learn, and perhaps, see a little bit of nature's own relentless logic in the dance of X's and O's.
