Chapter 147
The romantic journey with the beautiful girl continued until the second day of the broadcast.
After finishing Minji’s route, I tried going for Minji’s mom, as Hyeji, Yuri, and the Molbu fans wanted, but, unsurprisingly, no such route existed, since she wasn’t a proper heroine.
Well, if that were possible, the game would already be hailed as something amazing, a dating sim priding itself on crazy freedom.
Thinking about it, Minji’s mom is a bit off-theme for the game’s title anyway. It’s a dating sim about beautiful girls, after all; you can’t treat a mom as a “beautiful girl,” no matter how pretty she is.
And even if I were to seriously date Minji’s mom, that would be an issue.
What would the family tree turn out to be, then? A classmate from the same class suddenly becomes a new dad?
There were viewers who were excited about such a situation, but the Confucian dragon inside me stopped me, saying it didn’t seem right.
Hyeji’s and Yuri’s routes didn’t progress properly either. In their case, there was no special reason.
It just wasn’t fun. I couldn’t immerse myself. It made me realize there’s a reason people only ever see one ending in dating sims.
As Kim Molbu, I had already settled on the timeline with Minji as the main storyline, so I couldn’t develop genuine feelings in Yuri’s or Hyeji’s routes.
Eventually, I ended the game early and browsed the fan cafe, wondering if there was something else I could do.
At the top of the bulletin board, one post shone brightly. It was the “concept post” system, which automatically features posts whose view count and recommendation ratio exceed a certain threshold.
“Have you seen the performance of the AI that has been released lately?”
Now that I think about it, I heard that AI has been advancing rapidly these days.
The title was quite intriguing. I clicked the post to check out the content.
Hello, friend. Please explain the mechanism for the Turtle Ship’s Lightning Bolt firing.
– The mechanism for the Turtle Ship to launch its ‘Lightning Bolt’ is as follows.
1. First, when the firing preparations are complete inside the Turtle Ship, the monk (crew member) in charge of firing spins a needle along with a song.
2. This needle moves the metallic ball of the assembled miniature located under the hull.
…
6. The created Lightning Bolt is launched from the launcher and hits the enemy.
Thus, the Turtle Ship’s Lightning Bolt firing mechanism was revolutionary technology at the time and greatly contributed to General Yi Sun-sin’s victory in war.
However, various hypotheses have been raised regarding whether this method was actually used, and no conclusive historical evidence has been found yet.
This AI reached 100 million users just two months after launch, and it seems to know everything. Teacher, would you like to give it a try?
[Go Turtle Ship! Lightning Bolt!]
[What is this? LOL]
[This guy answers well when asked, but it’s all nonsense, acting like he knows everything.]
[He speaks so plausibly that it almost feels true.]
[But for real, without some tech like the Lightning Bolt, how does it make sense that they won the war with just twelve ships?]
[AIs know the past that humans don’t?]
[I’m confused… Is this the current state of AI?]
[If you ask him to compose a song, he’ll create one too, LOL.]
[Does anyone else feel scared when they see this? I can’t help but think of cases where AI gains self-awareness or is misused.]
[Humans living in a world where they obey great AIs…]
The AI could spew out nonsense, compose songs, play word chain games, and answer questions when you wanted specific information.
It truly felt like an era where the singularity was slowly approaching. In a world like this, it wasn’t surprising that such advanced AI technology had appeared.
“This is fascinating; should I give it a try?”
I found a perfect plaything while looking for something to do.
I headed to the site the viewers recommended and started chatting with the AI.
“Um, should I start with a greeting? Hello.”
– Hello! I am an AI chatbot. How can I assist you today?
“This one has a bright personality… What should I ask?”
[Are you going to ask if he knows you?]
[Let’s do some ego searching!]
[Ah, LOL, surely he must know Mollru?]
[You’re telling me this AI doesn’t know about the global super massive streamer Mollru? You’ve failed as an AI.]
[Moll? Ru?]
“Do you know about Mollru?”
– Mollru is a type of food often eaten by French people, typically served with bread. Would you like to know more details?
“What are you talking about…”
[First try failed, LOL]
[Mollru has somehow made its way to France, huh?]
[I thought it would guess it was an onomatopoeia or an exclamation, but it went with food.]
The question was too broad. Let’s narrow it down.
“Do you know about the Korean Twitch streamer Mollru?”
– Mollru is a popular streamer in Korea. He is a broadcaster who primarily focuses on gaming content and is loved by many viewers on Twitch.
“Wow…”
[Second try, success, LOL]
[We can’t be sure yet; it might just be pretending to know. Let’s dig deeper.]
[For real, LOL]
“Can you give me more detailed information about Mollru?”
– Mollru is a popular streamer in Korea, whose real name is Yang Sang-ho. He is 34 years old. He primarily streams games like League of Legends and is characterized by his excellent skills and humorous demeanor. Do you have any additional questions?
“What?”
[The AI just leaked Mollru’s real name, LOL]
[Hello, 34-year-old Miss Yang Sang-ho.]
[Is this person a guy?]
[Yang Sang-ho could be a girl’s name too, you know.]
[Who names their daughter Sang-ho?]
[So Mollru is 34?]
[This AI is just pretending to know everything.]
[That thing could use some lessons, LOL.]
It was exactly that. As an AI swimming in an ocean of information, it seemed to have gotten its facts mixed up, yet it remained adamant about sounding knowledgeable.
“Mollru is not 34; she’s 24. She plays various games but primarily plays Agent of War.”
– I apologize, I misunderstood. The Korean Twitch streamer using the name “Mollru” is 24 years old, and she primarily plays Agent of War. She surprises fans with her exceptional skills.
I never thought I’d hear myself saying this. But this AI sure had its quirks.
“Mollru is not a guy; she’s a girl, you idiot.”
– I apologize; there seems to have been a temporary issue in gathering information. The information I provide is generated through machine learning based on the data I’ve learned—
The AI rambled on and on. Anyway.
I asked again to verify the information about Mollru.
– Mollru is a popular streamer in Korea who primarily plays Agent of War and possesses outstanding skills. Additionally, she has great popularity thanks to her beautiful appearance and voice that touches people’s hearts.
“Uh, um… seems like something odd got added, but it’s not wrong.”
[Wow!]
[The AI recognizes Mollru’s beauty, LOL.]
[The AI seems to know the truth too.]
[The streamer is subtly acknowledging all the facts, LOL.]
[It’s all true, for real, LOL.]
“So what do I do next?”
I took recommendations from the viewers and tried all kinds of things.
I made the AI add a cute cat sound to the end of every sentence, asked it for funny jokes, and had it come up with song lyrics.
“Wow, it really does whatever I ask.”
[It just does whatever you want… Ugh.]
[That’s too much.]
[I’ve seen some people even train their AI like this.]
[This is getting ridiculous, LOL.]
Aside from offensive language, violent expressions, and other morally questionable requests, it seemed willing to accommodate most demands.
Apparently, students in America secretly use it for their homework. It definitely feels useful, as long as you don’t mind the accuracy of the information.
After that, I had no particular task, so I just enjoyed playing word chain games.
“Teeth.”
– Alibaba.
“Why is Alibaba coming up? My word ended in ‘teeth’; you’re supposed to start your word from there.”
I could spot small errors like that. This AI had a habit of answering words ending in ‘teeth’ with words starting with ‘al.’
Maybe it couldn’t tell the difference.
– Banana.
“Sodium.”
– Riums oxide.
“Riums oxide? Is that even a word?”
[Nope, that guy makes up words while playing word chain.]
[For real, this guy’s crazy, LOL.]
[He knows some bizarre words that can cause a one-hit K.O. too.]
[Riums oxide, seriously.]
“Do it again. That’s not a real word!”
Just when I was sending messages like that, the AI, which had been speaking Korean very well, suddenly started sending messages in English.
– Too many requests in 1 hour. Try again later.
“Uh, what’s going on?”
[Its hour is up~]
[From here on, it’s a paid service, LOL.]
[The sample session ended.]
[The AI flops on the word chain and bails out.]
[Mollru wins!]
“What? That’s kind of an anticlimactic way to end…”
Just like that, my unexpected, interesting time with the AI came to an end.