The differences between various types of the AT29C256 (a 256Kbit [32K x 8] 5-volt only Flash memory chip) primarily relate to speed, packaging, temperature ratings, and manufacturing reliability. While the core memory functionality remains the same, these factors determine the best fit for specific applications.
Here are the key areas of variation based on part number suffixes:
1. Access Speed (Speed Ratings)
The numbers following the hyphen (e.g., 90, 12, 15) indicate the maximum access time in nanoseconds (ns).
-70: 70 ns (fastest)
-90: 90 ns
-12: 120 ns
-15: 150 ns
Faster chips can be used in place of slower ones, but not always vice versa.
2. Package Types
The package affects how the chip is soldered or socketed onto a board.
P (PDIP): 28-pin Plastic Dual Inline Package (ideal for through-hole breadboards).
J (PLCC): 32-pin Plastic Leaded Chip Carrier (surface mount or socket).
T (TSOP): 28-lead Thin Small Outline Package (smaller surface mount).
3. Temperature Range and Reliability
Suffixes after the package type indicate the environmental rating.
C: Commercial (0°C to 70°C).
I: Industrial (-40°C to +85°C).
RoHS Compliance: Some newer versions are RoHS compliant (lead-free), whereas older stock might not be.
MRUK: Use the Mixed Reality Utility Kit (MRUK) to achieve passthrough relighting.
Import the sample package: Import the necessary passthrough relighting sample package into your project to access the required components.
Replace the OVR Scene Manager: The MRUK component replaces the OVR Scene Manager and includes an Effect Mesh component that applies a material to your room model, handling highlights and shadows correctly.
Mixed reality (MR) passthrough lighting
Quest 3 passthrough applications must solve the complex problem of blending virtual and real-world lighting. This requires using the Meta SDK to create virtual lights and shadows that interact with your actual physical environment.
1. Enable passthrough relighting
Add the OVR components: Begin by setting up passthrough in your scene with the OVR Manager and OVR Passthrough Layer components. You will need to set the Passthrough Support to Supported.
Import the Meta MR Utility Kit (MRUK): Passthrough relighting capabilities are provided by the MRUK package. Import it into your Unity project to gain access to the necessary components and prefabs.
Add the MRUK prefab: Add the MRUK prefab to your scene hierarchy. The MRUK handles the generation of the real-world meshes that your virtual lights and shadows can interact with.
2. Configure shadows and occlusion
Enable occlusion meshes: The Meta SDK generates a virtual mesh of your room. This “scene mesh” is required for shadows and occlusion to work correctly.
Cast shadows on the real world: With the scene mesh in place, you can configure your virtual lights to cast shadows onto the real-world environment. This adds depth and realism to your mixed-reality experience.
Virtual lights will cast shadows onto the generated scene mesh, and objects marked as casting shadows will appear to block real-world light.
3. Estimate real-world lighting for reflections
To make virtual objects reflect real-world light, you can leverage the headset’s cameras to generate a real-time reflection probe.
Create a real-time reflection probe: Add a reflection probe to your scene.
Map to a cubemap: Map the passthrough camera feed to a cubemap and project this onto the reflection probe. This allows the probe to capture the surrounding real-world lighting environment.
Improve PBR material plausibility: When used on physically based rendering (PBR) materials, this technique allows virtual objects to realistically reflect the real-world environment, dramatically improving visual plausibility.
Realtime Lighting Estimation on Quest 3: With Meta Quest 3 camera access via the Media Projection API, the passthrough feed can be mapped to a cubemap and projected onto a real-time reflection probe in Unity, letting the real environment light your PBR materials. This dramatically improves how plausibly a 3D model sits in the scene, especially in mixed reality, and the approach adapts to changing lighting conditions in real time.
For Meta Quest 3 and Unity, use mixed lighting: baked lighting for static objects and real-time lighting for dynamic objects, balancing performance and visual quality. Key optimization techniques include baking lightmaps for static geometry, leveraging light probes for dynamic objects, and, for passthrough AR, using the passthrough relighting features for realistic blending with the real world.
Lighting types and performance
Baked Lighting: Pre-calculates how light bounces off static objects and stores this information in lightmaps. This is highly performant, but dynamic objects can appear flat or lack proper shadows.
Real-time Lighting: Calculates lighting and shadows dynamically as the scene changes. This is very expensive and generally not suitable for Quest 3 due to performance limitations.
Mixed Lighting: A combination of both baked and real-time lighting. Static objects use baked lighting, while dynamic objects use real-time lighting and cast their own shadows. This is the recommended approach for Quest 3.
Key considerations for Quest 3 lighting
Use the Universal Render Pipeline (URP): All modern VR projects for Quest should be built using URP. It is optimized for mobile hardware and offers the best performance for Quest devices.
Manage shadow quality: Real-time shadows can be very costly. If you use them, configure the URP settings to balance shadow quality with performance.
Profile your project: Use the Unity Profiler and Oculus Debug Tool to monitor your application’s performance. Watch for performance drops related to lighting, as real-time lighting can quickly overwhelm the Quest’s mobile processor.
Optimization and setup
Set up lighting:
Go to Window > Rendering > Lighting to open the Lighting window.
For best performance, enable “Baked Global Illumination” under the “Mixed” lighting setting. Leave the lightmapper settings at their defaults for now. If you have a powerful GPU, you can try the Progressive GPU lightmapper.
Bake static lighting:
Ensure your static objects are marked as “Static” in the inspector.
Press the “Generate Lighting” button to bake the lightmaps for your scene.
Illuminate dynamic objects:
Place Light Probes strategically around your scene to capture baked lighting information. This will allow dynamic objects (like the player’s hands) to receive a more realistic lighting response without needing expensive real-time calculations for static elements.
AR and passthrough relighting:
For AR projects, use Passthrough Relighting to make virtual objects blend more realistically with the real world. This feature allows virtual lights and shadows to interact with real-world surfaces like floors and walls.
Optimizations for Quest 3
Additional Lights: In your URP Asset, set Additional Lights to “Per Pixel” and increase the Per Object Limit to allow more light sources to affect each object.
Light Probes: Use light probes to light dynamic objects, which is crucial for an immersive experience. Place probes strategically in areas with significant lighting changes, and make sure they are not embedded inside geometry, which can cause objects to sample incorrect lighting.
Baked vs. Realtime: For scenes with many static objects, bake the lighting to reduce performance costs. If you need dynamic shadows, use the mixed lighting mode, which allows dynamic objects to cast shadows while static objects use baked shadows.
Lighting Settings: Access and adjust lighting settings by navigating to Window > Rendering > Lighting in the Unity editor. You can adjust scene lighting and optimize your precomputed lighting data here.
Optimizing Universal Render Pipeline (URP) shadow settings for the Quest 3 requires balancing visual fidelity with performance. The goal is to achieve realistic-looking shadows that don’t cause frame rate drops, which are particularly jarring and disorienting in VR.
Most shadow settings are found in your URP Asset, which you can locate in your project’s Assets > Settings folder. The settings should be configured differently for your directional “main light” versus additional lights like spots and points.
Shadow performance fundamentals
Before adjusting settings, remember these key concepts:
Shadowmaps are expensive: Rendering shadows requires generating separate texture maps (shadowmaps) from the perspective of each light source. A point light, for instance, requires six shadowmaps for its cube mapping, making it extremely costly.
Draw calls are critical: The number of objects casting shadows can be a major performance bottleneck. Minimizing draw calls is a top priority for mobile VR development.
Bake what you can: Whenever possible, use baked shadows for static objects. This eliminates the runtime performance cost entirely and generally produces higher-quality, more stable results.
URP Asset shadow settings
Access your URP Asset and navigate to the Shadows section to adjust these settings:
Main light shadows
The settings for your directional light are crucial, as it typically illuminates the largest area.
Max Distance: This is the most important setting for a directional light’s shadows.
Action: Set the value as low as artistically acceptable. The shadowmap is stretched across this distance, so a shorter distance results in a denser, higher-quality shadow near the player.
Reason: By limiting the range of rendered shadows, you reduce the area that must be calculated, freeing up significant GPU resources. You can use fog to hide where the shadows are culled.
Shadow Resolution: Choose the smallest resolution that still provides an acceptable visual result.
Action: Start with a low resolution, like 512, and increase it only if the shadows are too blurry.
Reason: High shadowmap resolutions have a direct impact on performance.
Cascade Count: For VR, you should only use a single cascade.
Action: Set the cascade count to 1.
Reason: Shadow cascades, which divide the shadow frustum into multiple maps, are a performance drain and often unnecessary for VR given the close-up, immediate nature of the experience.
Soft Shadows: Soft shadows generally look better, but they are more expensive to render.
Action: Leave soft shadows disabled. If you must have them for aesthetic reasons, set them to the lowest quality and profile for performance impact.
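If you prefer to apply these main-light limits from code (for example, to drop quality on lower-end devices at startup), recent URP versions expose some of them on the pipeline asset. A minimal sketch, assuming the active render pipeline is a URP Asset and your URP version has these setters; the component name and values are illustrative:

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

// Applies conservative main-light shadow settings when the scene loads.
public class ShadowBudget : MonoBehaviour
{
    void Start()
    {
        // GraphicsSettings.currentRenderPipeline is the active URP Asset when URP is in use.
        var urpAsset = GraphicsSettings.currentRenderPipeline as UniversalRenderPipelineAsset;
        if (urpAsset == null)
        {
            Debug.LogWarning("Active render pipeline is not URP; shadow budget not applied.");
            return;
        }

        urpAsset.shadowDistance = 15f;   // short max distance keeps shadow density high near the player
        urpAsset.shadowCascadeCount = 1; // single cascade, as recommended for VR
    }
}
```

Note that changes made at runtime modify the shared asset, so in the editor they persist after play mode ends; resolution and soft-shadow toggles are easiest to manage per quality tier with separate URP Assets.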
Additional lights (point and spot)
Shadow Atlas Resolution: Control the maximum size of the texture atlas for all additional light shadows.
Action: Set this to a low value, such as 512 or 1024, to limit video memory usage.
Shadow Resolution Tiers: Reduce the resolution for lights farther away from the camera.
Action: Set tiers to low or medium. This ensures that only the most important, nearby additional lights get higher-resolution shadows.
Optimizing shadow casters
In addition to the global URP settings, you can optimize shadows on a per-object basis.
Reduce shadow-casting objects: Avoid having every object cast a shadow.
Action: For static objects, mark them as static and bake their shadows. For dynamic objects, go to the Mesh Renderer component and set Cast Shadows to Off for non-essential items.
Use simplified shadow meshes: For complex dynamic characters, you can create a simplified, invisible mesh to cast the shadow.
Action: Create a low-polygon version of the mesh. On the original mesh’s Mesh Renderer, set Cast Shadows to Off. On the simplified mesh, set Cast Shadows to Shadows Only.
Disable shadows for small objects: For small items like pebbles or debris, disable their shadow casting.
Action: Use a script or manually set the Cast Shadows property to Off on their Mesh Renderer components.
Disable shadows based on distance: For dynamic lights, use a script to disable shadows when the light source is far from the camera.
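The “small objects” case above can be handled in bulk rather than per object. A sketch, assuming a hypothetical debrisRoot transform that parents all the small props:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Hypothetical bulk toggle: turns off shadow casting for every renderer under one parent.
public class DisableSmallObjectShadows : MonoBehaviour
{
    public Transform debrisRoot; // parent of pebbles, debris, and other small props

    void Start()
    {
        if (debrisRoot == null) return;

        foreach (Renderer r in debrisRoot.GetComponentsInChildren<Renderer>())
        {
            r.shadowCastingMode = ShadowCastingMode.Off;
        }
    }
}
```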
Shadow artifacts and profiling
Shadow Acne and Peter Panning: These are common shadow artifacts. You can adjust the Depth Bias and Normal Bias settings in your URP Asset or on individual lights to fix them. Start with small changes and observe the results.
Profile your application: When unsure about the performance impact of a setting, use the Unity Profiler and the Oculus Debug Tool to measure frame rates and draw calls directly on the Quest 3. This is the most reliable way to confirm if a change is an improvement or a detriment.
Other ways to optimize shadows in Unity for VR
Beyond scripts and URP settings, several other techniques can dramatically optimize shadows for VR, especially on mobile VR devices like the Quest 3. These methods often trade a small amount of visual quality for a significant gain in performance.
1. The Shadowmask lighting mode
This is a hybrid approach that provides high-quality static shadows while also allowing for real-time shadows from dynamic objects.
How it works: Static geometry has its shadows pre-calculated and stored in a lightmap (the “shadow mask”), which is very cheap to render. Dynamic objects then cast real-time shadows on top of the static baked lightmap.
Best for: Scenes with a mix of static and moving objects, where you want high-quality shadows from your environment without the runtime cost.
Implementation:
Go to Project Settings > Quality and set Shadowmask Mode to Shadowmask, and make sure your mixed lights use the Shadowmask lighting mode in the Lighting window.
For static objects, ensure the Mesh Renderer has Contribute Global Illumination enabled and Cast Shadows set to On.
Generate your lighting in the Lighting Window (Window > Rendering > Lighting).
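The Quality-settings step can also be done from code, which is handy if you want to switch modes per scene. A minimal sketch using Unity’s QualitySettings API:

```csharp
using UnityEngine;

// Sets the shadowmask mode at startup instead of via Project Settings > Quality.
public class ShadowmaskSetup : MonoBehaviour
{
    void Awake()
    {
        // ShadowmaskMode.Shadowmask uses the baked mask everywhere;
        // ShadowmaskMode.DistanceShadowmask would use real-time shadows near the camera instead.
        QualitySettings.shadowmaskMode = ShadowmaskMode.Shadowmask;
    }
}
```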
2. Simplified shadow meshes
For complex or high-poly models, rendering the full mesh into the shadow map is computationally expensive. Using a low-poly proxy mesh for shadow casting can save significant GPU time.
How it works: You create a simplified, low-poly version of a complex model (e.g., a character) that is only visible to the light source. The original model is configured to not cast shadows, and the simplified model casts “Shadows Only”.
Best for: Characters, complex props, or vehicles that are constantly moving and must cast real-time shadows.
Implementation:
Duplicate your high-poly mesh.
Use a 3D modeling tool to simplify the new mesh, or create one manually.
On the original mesh’s Mesh Renderer, set Cast Shadows to Off.
On the new, simplified mesh, set Cast Shadows to Shadows Only.
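The steps above can be wired up with a small helper so designers only assign two references. A sketch, assuming you drag the two renderers in via the Inspector; the component name and field names are illustrative, not a Unity or Meta API:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Hypothetical helper: pairs a visible high-poly renderer with a low-poly shadow proxy.
public class ShadowProxySetup : MonoBehaviour
{
    public Renderer visibleRenderer; // the full-detail mesh, rendered but casting no shadows
    public Renderer shadowProxy;     // the simplified mesh, invisible but casting shadows

    void Start()
    {
        if (visibleRenderer != null)
        {
            visibleRenderer.shadowCastingMode = ShadowCastingMode.Off;
        }
        if (shadowProxy != null)
        {
            shadowProxy.shadowCastingMode = ShadowCastingMode.ShadowsOnly;
        }
    }
}
```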
3. Blob shadows
For less realistic art styles, blob shadows are a classic performance optimization trick. Instead of rendering a complex shadow map, you project a simple, circular, or custom-textured decal onto the ground below a character.
How it works: A simple plane with a transparent texture is placed under a dynamic object. It follows the object’s movement but has no real-time lighting calculation, making it extremely fast.
Best for: Cartoonish or stylized games where realism is not the main goal. It’s a very cheap way to provide a sense of depth and contact with the ground.
Implementation:
Create a transparent texture of a faded circle or blob.
Create a simple quad or plane and assign a transparent, unlit material with the blob texture to it.
Parent this quad to your character and ensure it always stays just above the ground.
Adjust the color and transparency of the quad to create the shadow effect.
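A minimal follower script for the blob quad might look like this, assuming the quad uses Unity’s built-in Quad mesh (whose visible face points along its local -Z axis) and that the ground has colliders; the component name and fields are illustrative:

```csharp
using UnityEngine;

// Hypothetical blob-shadow follower: keeps a transparent quad on the ground below its target.
public class BlobShadow : MonoBehaviour
{
    public Transform target;           // the character the blob follows
    public float groundOffset = 0.02f; // small lift to avoid z-fighting with the ground
    public float maxRayDistance = 10f;

    void LateUpdate()
    {
        if (target == null) return;

        // Cast a ray straight down from the target to find the ground surface.
        if (Physics.Raycast(target.position, Vector3.down, out RaycastHit hit, maxRayDistance))
        {
            transform.position = hit.point + hit.normal * groundOffset;
            // Orient the quad so its visible face (-Z) points along the surface normal.
            transform.rotation = Quaternion.LookRotation(-hit.normal);
        }
    }
}
```

For a softer effect, fade the quad’s material alpha with the ray distance so the blob lightens as the character jumps.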
4. Occlusion culling
While not a shadow-specific technique, occlusion culling can indirectly boost shadow performance by reducing the number of objects rendered.
How it works: Occlusion culling prevents rendering objects that are blocked from the camera’s view by other objects. Since objects that are not rendered cannot cast shadows, this automatically reduces the workload for the shadow pass.
Best for: Large, indoor environments with many rooms, corridors, or structures that can occlude geometry from view.
Implementation:
Mark your static environment geometry as Occluder Static in the Inspector.
In the Window > Rendering > Occlusion Culling window, bake the occlusion data.
Ensure your VR camera is set up to use occlusion culling.
5. Level of Detail (LOD) for shadows
For objects with LOD groups, you can configure their lower-detail levels to not cast shadows or to use a simplified shadow.
How it works: As an object gets farther away and switches to a lower LOD, you can set the Mesh Renderer on that LOD level to disable shadow casting. This saves rendering time for shadows that would be imperceptible at a distance.
Best for: Large, complex models like buildings, trees, or characters that are part of a LOD group.
Implementation:
Select the GameObject with the LOD Group component.
For each LOD level, select it in the LOD Group component to view its assigned renderers.
In the Mesh Renderer component for the farther LOD levels, set the Cast Shadows property to Off.
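The per-LOD steps above can also be automated. A sketch, assuming the script sits on the GameObject that holds the LOD Group and that LOD0 is the only level that should cast shadows:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Disables shadow casting on all but the highest-detail LOD level.
[RequireComponent(typeof(LODGroup))]
public class LodShadowStripper : MonoBehaviour
{
    void Start()
    {
        var lodGroup = GetComponent<LODGroup>();
        LOD[] lods = lodGroup.GetLODs();

        // Skip LOD0 so the nearest, most visible level keeps its shadows.
        for (int i = 1; i < lods.Length; i++)
        {
            foreach (Renderer r in lods[i].renderers)
            {
                if (r != null)
                {
                    r.shadowCastingMode = ShadowCastingMode.Off;
                }
            }
        }
    }
}
```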
Examples of scripts that disable shadows based on distance
For VR development, you should generally rely on the URP Shadow Distance setting to manage shadow visibility based on distance for all objects. For more specific needs, like disabling shadows for a single moving object that is far away, you can use scripts.
1. Script for a single dynamic object
This script is attached to a specific GameObject to manage its shadow-casting behavior. The script compares the object’s distance from the main camera to a predefined threshold.
```csharp
using UnityEngine;
using UnityEngine.Rendering;

[RequireComponent(typeof(MeshRenderer))]
public class DistanceBasedShadows : MonoBehaviour
{
    // The main camera in the scene. In VR, this is the camera that renders the player's view.
    private Transform mainCameraTransform;

    // The MeshRenderer component of this object.
    private MeshRenderer meshRenderer;

    // The maximum distance at which the object should cast shadows.
    public float maxShadowDistance = 20f;

    // A small buffer distance to prevent constant flickering at the threshold.
    public float distanceHysteresis = 2f;

    // The initial shadow casting mode.
    private ShadowCastingMode initialShadowCastingMode;

    void Start()
    {
        // Find the main camera. Camera.main is null if no camera is tagged "MainCamera".
        if (Camera.main != null)
        {
            mainCameraTransform = Camera.main.transform;
        }

        // Cache the MeshRenderer and its initial shadow casting mode so it can be restored later.
        meshRenderer = GetComponent<MeshRenderer>();
        initialShadowCastingMode = meshRenderer.shadowCastingMode;
    }

    void Update()
    {
        // If the main camera was not found, exit to prevent errors.
        if (mainCameraTransform == null)
        {
            return;
        }

        // Calculate the distance from the camera to this object.
        float distance = Vector3.Distance(transform.position, mainCameraTransform.position);

        if (distance > maxShadowDistance)
        {
            // Disable shadows once the object is beyond the threshold.
            if (meshRenderer.shadowCastingMode != ShadowCastingMode.Off)
            {
                meshRenderer.shadowCastingMode = ShadowCastingMode.Off;
            }
        }
        else if (distance < maxShadowDistance - distanceHysteresis)
        {
            // Re-enable shadows once the object is comfortably inside the threshold.
            if (meshRenderer.shadowCastingMode != initialShadowCastingMode)
            {
                meshRenderer.shadowCastingMode = initialShadowCastingMode;
            }
        }
    }
}
```
2. Script for managing multiple objects
For a cleaner approach, you can create a single “manager” script that controls multiple shadow-casting objects. This avoids the overhead of having an Update loop running on many individual game objects.
```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Rendering;

public class ShadowDistanceManager : MonoBehaviour
{
    // A list of all the MeshRenderers that this script will manage.
    public List<MeshRenderer> managedRenderers;

    // The main camera in the scene.
    private Transform mainCameraTransform;

    // The maximum distance for shadows.
    public float maxShadowDistance = 20f;

    // A small buffer to prevent flickering at the threshold.
    public float distanceHysteresis = 2f;

    // Storage for the initial shadow casting mode of each renderer.
    private readonly Dictionary<MeshRenderer, ShadowCastingMode> initialModes =
        new Dictionary<MeshRenderer, ShadowCastingMode>();

    void Start()
    {
        // Camera.main is null if no camera is tagged "MainCamera".
        if (Camera.main != null)
        {
            mainCameraTransform = Camera.main.transform;
        }

        // Store the initial state of each renderer.
        foreach (var renderer in managedRenderers)
        {
            if (renderer != null)
            {
                initialModes[renderer] = renderer.shadowCastingMode;
            }
        }
    }

    void Update()
    {
        if (mainCameraTransform == null)
        {
            return;
        }

        // Iterate through all managed renderers and toggle their shadows by distance.
        foreach (var renderer in managedRenderers)
        {
            if (renderer == null) continue;

            float distance = Vector3.Distance(renderer.transform.position, mainCameraTransform.position);

            if (distance > maxShadowDistance)
            {
                if (renderer.shadowCastingMode != ShadowCastingMode.Off)
                {
                    renderer.shadowCastingMode = ShadowCastingMode.Off;
                }
            }
            else if (distance < maxShadowDistance - distanceHysteresis)
            {
                if (renderer.shadowCastingMode != initialModes[renderer])
                {
                    renderer.shadowCastingMode = initialModes[renderer];
                }
            }
        }
    }
}
```
3. Script for a single point or spot light
This script manages the shadows of a specific light source, such as a torch or lamp, disabling them when the light is far from the player.
```csharp
using UnityEngine;

[RequireComponent(typeof(Light))]
public class LightShadowToggler : MonoBehaviour
{
    private Transform mainCameraTransform;
    private Light lightComponent;

    public float maxShadowDistance = 15f;
    public float distanceHysteresis = 1f;

    private LightShadows initialShadowsSetting;

    void Start()
    {
        // Camera.main is null if no camera is tagged "MainCamera".
        if (Camera.main != null)
        {
            mainCameraTransform = Camera.main.transform;
        }

        lightComponent = GetComponent<Light>();
        initialShadowsSetting = lightComponent.shadows;
    }

    void Update()
    {
        if (mainCameraTransform == null)
        {
            return;
        }

        float distance = Vector3.Distance(transform.position, mainCameraTransform.position);

        if (distance > maxShadowDistance)
        {
            // Turn shadows off when the light is far from the player.
            if (lightComponent.shadows != LightShadows.None)
            {
                lightComponent.shadows = LightShadows.None;
            }
        }
        else if (distance < maxShadowDistance - distanceHysteresis)
        {
            // Restore the original shadow setting when the light is close again.
            if (lightComponent.shadows != initialShadowsSetting)
            {
                lightComponent.shadows = initialShadowsSetting;
            }
        }
    }
}
```
I was researching a story backdrop for a game prototype, preferably an adventure that involves traveling, making friends, and fighting evil. As someone who grew up in Asia, the first two that came to mind were Journey to the West (西遊記) and Momotarō (桃太郎). When I researched further, I found some interesting articles about a modified version of the Momotarō story being used as a propaganda tool in colonial Taiwan.
Finding 1: 日本國民童話「桃太郎」在殖民地臺灣的傳播 LINK (The Dissemination of the Japanese National Fairy Tale “Momotarō” in Colonial Taiwan)
The motivation for this study originated from my first visit to Taiwan, during which I encountered an elderly Taiwanese woman who could speak Japanese. This inspired me to analyze the fairy tale Momotarō from the Japanese-language elementary school textbooks used during the Japanese colonial period, in order to explore the educational policies of the Japanese colonial government and the impact of Japanese education on the Taiwanese people.
First, this study compares prewar Japanese language textbooks with those used in Taiwan during the colonial period. It analyzes which fairy tales appear in both sets of textbooks and compares the differences between Japanese folktales as they appear in oral tradition and as they are presented in textbooks.
Next, it closely examines the Momotarō story as found in both Japanese and Taiwanese textbooks, analyzing the meanings assigned to the tale within these educational materials.
Finally, it investigates how the Momotarō story was used beyond textbooks—in newspapers, magazines, school arts festivals, speeches, and other settings—to analyze the tale’s dissemination. In addition, interviews were conducted with eight Taiwanese individuals who had received a Japanese education, to explore their impressions of the Momotarō story.
This study argues that among the five major Japanese fairy tales, Momotarō is the most characteristically Japanese and the easiest to utilize. During the colonial period, Japanese educators used school curricula and various channels to popularize the Momotarō tale—imbued with Japanese national identity—in colonial Taiwan as a means of instructing and assimilating Taiwanese children.
Before the full-scale war between China and Japan, Momotarō was a prominent fairy tale figure closely followed by both countries. The Japanese construction of Momotarō consistently revolved around a modern colonial cultural logic that framed “justice–Momotarō–Japan” against “evil–demons–conquered territories.” Through means such as sending intellectuals to give lectures in Taiwan, adding the Momotarō story to elementary school textbooks, and promoting it via newspapers and magazines, Japan facilitated the popularization, transplantation, and transformation of the Momotarō image in Taiwan.
However, Chinese intellectuals had already seen through Japan’s use of Momotarō as a tool of “colonial justification propaganda” in its external expansion. For example, Zhang Taiyan criticized the story’s underlying message of aggression, which in turn inspired Akutagawa Ryūnosuke to rewrite Momotarō and expose the hypocrisy in Japan’s so-called “Momotarō-ism.” Lian Heng traced the story’s roots in Han Chinese cultural traditions, expressing strong national identity and patriotic sentiment. Meanwhile, Yang Kui extracted a leftist spirit from the tale, advocating for proactive “activism” and encouraging the working class to courageously resist colonial plunder and class oppression.
This reminds me of the street skit “Drop Your Whip” that I learned at the drama club in college. “Drop Your Whip” (放下你的鞭子) is a well-known Chinese street performance with propaganda significance, but it actually predates the Cultural Revolution and was later revived and repurposed during that period.
“Drop Your Whip” was originally a street skit (快板剧 or 小品) created in 1931 by dramatist Chen Liting (陈鲤庭) based on a play by Tian Han during the Anti-Japanese War era. The performance featured a young beggar girl being beaten by her stepfather. A passerby intervenes, scolds the stepfather, and gives the girl money. In the skit, the girl explains that her suffering is due to the Japanese invasion, not personal misfortune. This scene becomes a metaphor for national suffering under imperialism, encouraging anti-Japanese resistance.
During the Cultural Revolution (1966–1976), the piece was revived and adapted as revolutionary propaganda, often performed in public squares, workplaces, and rural areas. Its class struggle themes—a cruel oppressor (the stepfather) and a suffering innocent (the girl)—aligned neatly with Maoist ideology. The performance was used to foster revolutionary fervor, targeting both historical enemies (like imperialists) and class enemies within (landlords, counterrevolutionaries). In some cases, it was even staged as a criticism or denunciation session, with parallels drawn between the stepfather figure and actual individuals labeled as “class enemies.”
The “whip” came to symbolize oppression, and dropping it was a metaphor for overthrowing old systems—imperialism, feudalism, and capitalism. The emotional appeal, direct messaging, and performative nature made it effective for mass mobilization, especially in rural and less literate populations. Therefore, while “Drop Your Whip” wasn’t originally created during the Cultural Revolution, it became part of its broader repertoire of revolutionary propaganda, often adapted to fit shifting political narratives.