📝 Walkthrough

This PR introduces a gRPC-based device token service, a Kafka event consumer, and Firebase Cloud Messaging push-notification integration. finda-auth provides a device token lookup endpoint, and finda-notification adds handling for notice and volunteer notification events.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Client
    participant AuthGrpc as Auth gRPC<br/>(finda-auth)
    participant AuthDb as Auth DB
    participant NotifService as Notification service<br/>(finda-notification)
    participant FcmClient as FCM client
    participant Firebase as Firebase

    Client->>NotifService: Create notice request
    NotifService->>NotifService: Save notice
    NotifService->>NotifService: Publish Kafka event<br/>(NOTICE-FIRED)
    NotifService->>NotifService: Consumer receives<br/>(NoticeScheduledFiredEvent)
    NotifService->>NotifService: Look up active users
    NotifService->>AuthGrpc: Fetch device tokens<br/>(user ID list)
    AuthGrpc->>AuthDb: Token query<br/>(findAllByUser_IdIn)
    AuthDb-->>AuthGrpc: Return tokens
    AuthGrpc-->>NotifService: Device token response
    NotifService->>FcmClient: Send notification<br/>(title, body, tokens)
    FcmClient->>Firebase: Push notification<br/>(Message)
    Firebase-->>FcmClient: Delivery complete
    FcmClient-->>NotifService: Success
    NotifService->>NotifService: Save notification record
    NotifService-->>Client: Done
```
```mermaid
sequenceDiagram
    participant Kafka as Kafka
    participant VolConsumer as VolunteerFiredConsumer<br/>(finda-notification)
    participant VolService as VolunteerNotificationService
    participant AuthGrpc as Auth gRPC<br/>(finda-auth)
    participant FcmClient as FCM client
    participant Firebase as Firebase

    Kafka->>VolConsumer: VolunteerStatusChangedFiredEvent<br/>(volunteer ID, status, progress)
    VolConsumer->>VolService: sendStatusChanged(event)
    VolService->>VolService: Look up notification preference
    alt Preference disabled
        VolService->>VolService: Return early
    else Preference enabled
        VolService->>AuthGrpc: Fetch device token<br/>(volunteer ID)
        AuthGrpc-->>VolService: Return token
        VolService->>FcmClient: Send notification<br/>(status-based title/body)
        FcmClient->>Firebase: Push notification
        Firebase-->>FcmClient: Done
        VolService->>VolService: Save notification record
    end
    VolConsumer->>Kafka: Acknowledge message
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
Suggested reviewers
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 19
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
finda-notification/src/main/kotlin/finda/findanotification/adapter/out/persistence/notification/entity/NotificationJpaEntity.kt (1)
15-15: ⚠️ Potential issue | 🔴 Critical

The constructor's `id` parameter is ignored, so a new entity identifier is generated every time.

`NotificationMapper` passes `domain.id` (line 25), but `NotificationJpaEntity`'s constructor accepts it and then drops it when calling `BaseEntity()` (line 29). The received `id` is discarded, and `BaseEntity`'s `@GeneratedValue` always generates a fresh UUID. As a result, even updates to an existing entity get a new id, leading to record-identification errors and data-integrity problems.

Suggested fixes:

- Change `BaseEntity()` to `BaseEntity(id)`, or
- Remove the `id: UUID?` parameter from the `NotificationJpaEntity` constructor.

finda-notification/src/main/kotlin/finda/findanotification/adapter/out/persistence/notificationpreference/entity/NotificationPreferenceJpaEntity.kt (1)
12-23: ⚠️ Potential issue | 🟠 Major

The constructor's `id` argument is currently not applied.

The `id` on line 13 is never used, so an ID passed by the caller is silently ignored. If the flow was meant to preserve an existing entity identifier, this creates a behavioral mismatch. If you do not plan to use `id`, removing it from the constructor avoids the confusion.

🔧 Suggested fix

```diff
 class NotificationPreferenceJpaEntity(
-    id: UUID?,
-
     @Column(name = "type", nullable = false)
     val type: String,
@@
     @Column(name = "enabled", nullable = false)
     val enabled: Boolean
 ) : BaseEntity()
```

finda-notification/src/main/kotlin/finda/findanotification/adapter/out/persistence/notice/entity/NoticeJpaEntity.kt (1)
13-30: ⚠️ Potential issue | 🔴 Critical

The constructor parameter `id` is not passed to `BaseEntity`.

The `id: UUID?` parameter is defined in the constructor and supplied by `NoticeMapper`, but it is never forwarded to `BaseEntity()`. The id value is lost and `BaseEntity` falls back to its default of `null`. Change the call to `BaseEntity(id)`.
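All three entity findings above come down to the same pattern: a primary-constructor parameter that must be forwarded to the superclass constructor. A minimal, self-contained sketch of the before/after behavior, with `BaseEntity` simplified to plain Kotlin (an assumption — the real class carries JPA annotations):

```kotlin
import java.util.UUID

// Simplified stand-in for the project's JPA base class;
// in the real code this field carries @Id/@GeneratedValue.
open class BaseEntity(id: UUID? = null) {
    val id: UUID = id ?: UUID.randomUUID() // generate only when none supplied
}

// Broken shape: `: BaseEntity()` silently discards the parameter.
class BrokenNoticeJpaEntity(
    id: UUID?,
    val title: String
) : BaseEntity()

// Fixed shape: forward the caller-supplied id to the superclass.
class NoticeJpaEntity(
    id: UUID?,
    val title: String
) : BaseEntity(id)

fun main() {
    val domainId = UUID.randomUUID()
    val broken = BrokenNoticeJpaEntity(domainId, "maintenance")
    val fixed = NoticeJpaEntity(domainId, "maintenance")
    check(broken.id != domainId) // identity lost: a fresh UUID was generated
    check(fixed.id == domainId)  // identity preserved
    println("fixed entity keeps domain id: ${fixed.id == domainId}")
}
```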
🧹 Nitpick comments (20)
docker-compose.yml (3)
8-9: Pin an explicit image version tag

Using `wurstmeister/kafka` without a version tag implicitly resolves to `latest`, which hurts build reproducibility. If you keep the current setup, pin a specific version (e.g. `wurstmeister/kafka:2.13-2.8.1`).
17-18: Missing health check may cause startup-ordering issues

`depends_on` alone does not guarantee that Zookeeper is fully ready before Kafka starts. This may be tolerable for local development, but it can cause intermittent Kafka connection failures.

♻️ Example: adding a health check

```diff
 services:
   zookeeper:
     image: wurstmeister/zookeeper
     container_name: zookeeper
     ports:
       - "2181:2181"
+    healthcheck:
+      test: ["CMD", "nc", "-z", "localhost", "2181"]
+      interval: 10s
+      timeout: 5s
+      retries: 5

   kafka:
     image: wurstmeister/kafka
     container_name: kafka
     ports:
       - "9092:9092"
     environment:
       KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
       KAFKA_ADVERTISED_HOST_NAME: localhost
       KAFKA_ADVERTISED_PORT: 9092
     depends_on:
-      - zookeeper
+      zookeeper:
+        condition: service_healthy
```
3-16: Unmaintained Docker image and deprecated settings

The `wurstmeister/kafka` image has not been updated in over three years and is no longer produced, so it may be missing security patches.

Also, the `advertised.host.name` and `advertised.port` settings are deprecated; `KAFKA_ADVERTISED_LISTENERS` is recommended instead.

Since Kafka 4.0, brokers can run in KRaft mode without Zookeeper. Using a current image is recommended even for local development.

♻️ Recommended: bitnami/kafka in KRaft mode

```diff
 services:
-  zookeeper:
-    image: wurstmeister/zookeeper
-    container_name: zookeeper
-    ports:
-      - "2181:2181"
-
-  kafka:
-    image: wurstmeister/kafka
-    container_name: kafka
-    ports:
-      - "9092:9092"
-    environment:
-      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
-      KAFKA_ADVERTISED_HOST_NAME: localhost
-      KAFKA_ADVERTISED_PORT: 9092
-    depends_on:
-      - zookeeper
+  kafka:
+    image: bitnami/kafka:3.9
+    container_name: kafka
+    ports:
+      - "9092:9092"
+      - "9094:9094"
+    environment:
+      - KAFKA_CFG_NODE_ID=0
+      - KAFKA_CFG_PROCESS_ROLES=controller,broker
+      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093,EXTERNAL://:9094
+      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092,EXTERNAL://localhost:9094
+      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,EXTERNAL:PLAINTEXT
+      - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=0@kafka:9093
+      - KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
+      - KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1
+      - ALLOW_PLAINTEXT_LISTENER=yes
```

finda-notification/src/main/kotlin/finda/findanotification/application/service/kafka/VolunteerNotificationService.kt (1)
23-69: Consider extracting duplicated code.

The `sendStatusChanged` and `sendRemind` methods follow an almost identical pattern: look up the preference → obtain the device token → send via FCM → save the notification. Extracting the common logic into a private helper improves maintainability.

♻️ Example helper extraction

```kotlin
private fun sendNotificationToVolunteer(
    volunteerId: UUID,
    title: String,
    body: String
) {
    val preference = volunteerNotificationPreferenceRepository
        .findByVolunteerId(volunteerId.toString())
        ?: return
    if (!preference.enabled) return

    val deviceToken = authGrpcClient.getDeviceToken(preference.userId)
    fcmClient.sendNotification(deviceToken, title, body)

    saveNotificationPort.save(
        Notification(
            id = UUID.randomUUID(),
            title = title,
            body = body,
            type = NotificationType.NOTIFICATION,
            volunteerId = volunteerId.toString()
        )
    )
}

override fun sendStatusChanged(event: VolunteerStatusChangedFiredEvent) {
    sendNotificationToVolunteer(
        volunteerId = event.volunteerId,
        title = "봉사 상태가 변경되었습니다",
        body = "상태: ${event.status} / 진행: ${event.progress}"
    )
}

override fun sendRemind(event: VolunteerRemindFiredEvent) {
    sendNotificationToVolunteer(
        volunteerId = event.volunteerId,
        title = "봉사 활동 리마인드",
        body = "오늘 봉사 활동이 있습니다"
    )
}
```

finda-notification/src/main/kotlin/finda/findanotification/application/port/in/kafka/SendVolunteerNotificationUseCase.kt (1)
3-8: The port interface depends on adapter-layer DTOs.

In a hexagonal architecture, this `application.port.in` interface imports DTOs from the `adapter.in.kafka.dto` package. Ideally the port layer should not depend on the adapter layer.

As alternatives, consider moving the DTOs into the `application.port.in.kafka` package, or using domain types directly.

finda-notification/src/main/kotlin/finda/findanotification/adapter/in/kafka/VolunteerFiredConsumer.kt (1)
16-32: Consider error handling when exceptions occur.

If an exception is thrown while calling `sendVolunteerNotificationUseCase`, `acknowledgment.acknowledge()` is never reached and the message may be reprocessed indefinitely. If that is an intentional retry strategy the current structure is fine, but non-retriable (poison pill) messages may need dedicated handling.

♻️ Example error handling

```diff
 @KafkaListener(topics = ["VOLUNTEER-STATUS-FIRED"])
 fun consumeVolunteerStatusChanged(
     @Payload event: VolunteerStatusChangedFiredEvent,
     acknowledgment: Acknowledgment
 ) {
+    try {
         sendVolunteerNotificationUseCase.sendStatusChanged(event)
+    } catch (e: Exception) {
+        // Log, and forward to a DLQ if needed
+        logger.error("Failed to process status changed event: ${event.volunteerId}", e)
+    }
     acknowledgment.acknowledge()
 }
```

finda-notification/src/main/kotlin/finda/findanotification/adapter/out/fcm/FcmClient.kt (1)
28-31: Recommend the batch multicast API for better performance

The current implementation makes one network request per token. Using the Firebase Admin SDK's `sendEachForMulticast()` sends to multiple tokens at once, which is significantly more efficient. Note that `MulticastMessage` supports at most 500 tokens, so token lists larger than 500 must be split into batches. Per-token results (success/failure, message ID, exception) are available through the `BatchResponse`.

Example fix (batch multicast)

```diff
+import com.google.firebase.messaging.MulticastMessage
+import com.google.firebase.messaging.Notification
+
 fun sendNotifications(deviceTokens: List<String>, title: String, body: String) {
-    deviceTokens.forEach { token ->
-        sendNotification(token, title, body)
-    }
+    if (deviceTokens.isEmpty()) return
+
+    deviceTokens.chunked(500).forEach { batch ->
+        val message = MulticastMessage.builder()
+            .addAllTokens(batch)
+            .setNotification(
+                Notification.builder()
+                    .setTitle(title)
+                    .setBody(body)
+                    .build()
+            )
+            .build()
+
+        val response = FirebaseMessaging.getInstance().sendEachForMulticast(message)
+        // Per-token result handling (optional)
+        response.responses.forEachIndexed { index, sendResponse ->
+            if (!sendResponse.isSuccessful) {
+                // Handle failed tokens
+            }
+        }
+    }
 }
```
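The 500-token cap is the part of this fix that is easiest to get wrong; the batching itself is plain stdlib Kotlin and can be sanity-checked in isolation (the token values here are made up):

```kotlin
// Splits a token list into FCM-sized batches; mirrors the
// deviceTokens.chunked(500) call in the suggested fix above.
fun batchTokens(tokens: List<String>, batchSize: Int = 500): List<List<String>> =
    tokens.chunked(batchSize)

fun main() {
    val tokens = List(1203) { "token-$it" }
    val batches = batchTokens(tokens)
    check(batches.size == 3)              // 500 + 500 + 203
    check(batches.all { it.size <= 500 }) // every batch fits one MulticastMessage
    check(batches.last().size == 203)
    println(batches.map { it.size })      // [500, 500, 203]
}
```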
finda-auth/src/main/resources/application.yml (1)
5-7: Externalizing the gRPC server port via an environment variable is safer.

Line 7 hardcodes the port (9090), which risks conflicts when running multiple instances or separating environments. Externalizing it as `${GRPC_SERVER_PORT:9090}` is recommended.

finda-notification/src/main/kotlin/finda/findanotification/adapter/in/kafka/NoticeFiredConsumer.kt (1)
15-15: Recommend moving the topic name into configuration.

The hardcoded value on line 15 is costly to change across environments. Injecting it from a property improves deployment stability.

🔧 Suggested fix

```diff
-    @KafkaListener(topics = ["NOTICE-FIRED"])
+    @KafkaListener(topics = ["\${kafka.topic.notice-fired}"])
```

finda-notification/src/main/kotlin/finda/findanotification/adapter/out/persistence/notice/NoticePersistenceAdapter.kt (1)
15-19: Review the non-null assertion (!!)

As in `NotificationPersistenceAdapter`, the `!!` operator carries a potential NPE risk. For consistency, switching to `requireNotNull` is recommended.

♻️ Suggested requireNotNull usage

```diff
 override fun save(notice: Notice): Notice {
     val entity = noticeMapper.toEntity(notice)
     val saved = noticeRepository.save(entity)
-    return noticeMapper.toDomain(saved)!!
+    return requireNotNull(noticeMapper.toDomain(saved)) {
+        "Failed to map saved notice entity to domain"
+    }
 }
```

finda-notification/src/main/kotlin/finda/findanotification/adapter/in/notice/NoticeWebAdapter.kt (1)
17-27: Improve REST API response and input validation

- Without the `@Valid` annotation, field validation on `CreateNoticeWebRequest` is not performed.
- The POST endpoint returns neither the created resource's ID nor a `201 Created` status code.

♻️ Suggested improvement

```diff
+import jakarta.validation.Valid
+import org.springframework.http.HttpStatus
+import org.springframework.web.bind.annotation.ResponseStatus
+
 @RestController
 @RequestMapping("/notice")
 class NoticeWebAdapter(
     private val createNoticeUseCase: CreateNoticeUseCase
 ) {
     @PostMapping
-    fun createNotice(@RequestBody request: CreateNoticeWebRequest) {
+    @ResponseStatus(HttpStatus.CREATED)
+    fun createNotice(@Valid @RequestBody request: CreateNoticeWebRequest) {
         createNoticeUseCase.execute(
```

finda-notification/src/main/kotlin/finda/findanotification/adapter/out/persistence/notification/NotificationPersistenceAdapter.kt (1)
15-19: Review the non-null assertion (!!)

In `notificationMapper.toDomain(saved)!!`, the `!!` operator throws a `NullPointerException` if null is returned. If the mapper's `toDomain()` has a nullable return type, explicit exception handling or `requireNotNull` gives a clearer error message.

♻️ Suggested requireNotNull usage

```diff
 override fun save(notification: Notification): Notification {
     val entity = notificationMapper.toEntity(notification)
     val saved = notificationRepository.save(entity)
-    return notificationMapper.toDomain(saved)!!
+    return requireNotNull(notificationMapper.toDomain(saved)) {
+        "Failed to map saved notification entity to domain"
+    }
 }
```
finda-notification/src/main/kotlin/finda/findanotification/global/config/KafkaConfig.kt (1)
37-49: Recommended consumer configuration improvements

The `MANUAL_IMMEDIATE` ack mode is used, but no error handler is configured for the consumer factory. Consider proper error handling for failed messages, along with a dead-letter topic.
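A sketch of what the suggested wiring could look like, assuming Spring Kafka 3.x (`DefaultErrorHandler` plus `DeadLetterPublishingRecoverer`). The bean is meant to live in the existing `KafkaConfig` class, and the retry count and interval are illustrative, not prescriptive:

```kotlin
import org.springframework.context.annotation.Bean
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory
import org.springframework.kafka.core.ConsumerFactory
import org.springframework.kafka.core.KafkaTemplate
import org.springframework.kafka.listener.ContainerProperties
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer
import org.springframework.kafka.listener.DefaultErrorHandler
import org.springframework.util.backoff.FixedBackOff

@Bean
fun kafkaListenerContainerFactory(
    consumerFactory: ConsumerFactory<Any, Any>,
    template: KafkaTemplate<Any, Any>
): ConcurrentKafkaListenerContainerFactory<Any, Any> {
    val factory = ConcurrentKafkaListenerContainerFactory<Any, Any>()
    factory.consumerFactory = consumerFactory
    // Preserve the manual ack behavior the consumers rely on.
    factory.containerProperties.ackMode = ContainerProperties.AckMode.MANUAL_IMMEDIATE
    // Retry 3 times, 1s apart, then publish the record to "<topic>.DLT".
    val recoverer = DeadLetterPublishingRecoverer(template)
    factory.setCommonErrorHandler(DefaultErrorHandler(recoverer, FixedBackOff(1000L, 3)))
    return factory
}
```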
finda-notification/src/main/kotlin/finda/findanotification/application/service/notice/CreateNoticeService.kt (1)
21-37: Review missing transaction handling

If an exception occurs in `sendNoticeScheduledEventPort.send()` or `noticeNotificationService.sendImmediate()` after `saveNoticePort.save()`, the system ends up inconsistent: the Notice is saved but the notification event is never published.

Adding `@Transactional` or applying the outbox pattern is recommended.

♻️ Suggested @Transactional addition

```diff
+import org.springframework.transaction.annotation.Transactional
+
 @Service
 class CreateNoticeService(
     private val saveNoticePort: SaveNoticePort,
     private val sendNoticeScheduledEventPort: SendNoticeScheduledEventPort,
     private val noticeNotificationService: NoticeNotificationService
 ) : CreateNoticeUseCase {
+    @Transactional
     override fun execute(request: CreateNoticeCommand) {
```
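The outbox alternative mentioned in the comment decouples the database write from the broker send. A toy, in-memory illustration of why the pattern closes the gap — all names are hypothetical, and no Spring or Kafka is involved:

```kotlin
data class Notice(val id: Int, val title: String)
data class OutboxEvent(val noticeId: Int, var published: Boolean = false)

class NoticeStore {
    val notices = mutableListOf<Notice>()
    val outbox = mutableListOf<OutboxEvent>()

    // Both writes succeed or fail together, standing in for one DB transaction.
    fun createNotice(notice: Notice) {
        notices.add(notice)
        outbox.add(OutboxEvent(notice.id))
    }

    // The publisher retries independently, so a broker outage after commit
    // cannot leave a saved notice without its event.
    fun publishPending(send: (OutboxEvent) -> Unit) {
        outbox.filter { !it.published }.forEach { event ->
            send(event)
            event.published = true
        }
    }
}

fun main() {
    val store = NoticeStore()
    store.createNotice(Notice(1, "maintenance window"))
    val sent = mutableListOf<Int>()
    store.publishPending { sent.add(it.noticeId) }
    check(sent == listOf(1))
    check(store.outbox.all { it.published })
    println("published events: $sent")
}
```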
finda-auth/src/main/kotlin/finda/findaauth/application/exception/devicetoken/DeviceTokenNotFoundException.kt (1)
6-6: Watch the stack trace when using singleton exceptions.

Defining an exception as an `object` means its stack trace always points to the initialization site, which can make it hard to find where it was actually thrown during debugging. If other exceptions follow the same pattern this is acceptable for consistency, but it is worth being aware of.
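The caveat is easy to demonstrate with plain Kotlin: an `object` exception is a single instance whose trace is filled in once, while a `class` exception captures a fresh instance (and trace) per throw. `FindaException` is replaced here with `RuntimeException` to keep the example self-contained:

```kotlin
// Singleton exception: one shared instance for every throw site.
object SingletonNotFound : RuntimeException("device token not found")

// Class-based exception: a new instance per throw.
class PerThrowNotFound : RuntimeException("device token not found")

fun main() {
    val a = runCatching { throw SingletonNotFound }.exceptionOrNull()
    val b = runCatching { throw SingletonNotFound }.exceptionOrNull()
    check(a === b) // same instance both times — trace points at init, not the throw

    val c = runCatching { throw PerThrowNotFound() }.exceptionOrNull()
    val d = runCatching { throw PerThrowNotFound() }.exceptionOrNull()
    check(c !== d) // distinct instances, each with its own throw-site trace
    println("singleton reused across throws: ${a === b}")
}
```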
finda-auth/build.gradle.kts (1)
45-61: Duplicated version constants.

The `protoc` and `protoc-gen-grpc-java` versions are hardcoded, but `DependencyVersions.kt` already defines `PROTOBUF = "3.24.0"` and `GRPC = "1.58.0"`. Referencing the constants prevents version drift.

♻️ Suggested use of DependencyVersions constants

```diff
 protobuf {
     protoc {
-        artifact = "com.google.protobuf:protoc:3.24.0"
+        artifact = "com.google.protobuf:protoc:${DependencyVersions.PROTOBUF}"
     }
     plugins {
         create("grpc") {
-            artifact = "io.grpc:protoc-gen-grpc-java:1.58.0"
+            artifact = "io.grpc:protoc-gen-grpc-java:${DependencyVersions.GRPC}"
         }
     }
```

finda-notification/src/main/proto/auth.proto (1)
1-29: Duplicated proto definition — recommend a shared module (see the earlier file)

This file contains the same service definition as the proto file in the `finda-auth` module. As noted there, removing the duplication via a shared proto module is better for long-term maintenance.

finda-auth/src/main/proto/auth.proto (1)
1-29: Consider the proto file duplication problem.

Identical proto files exist in both the `finda-auth` and `finda-notification` modules. This can cause the following problems:

- The two files must be kept in sync
- Any change must be made in both places

Placing the proto file in a shared module (e.g. `finda-proto` or `finda-common`) and referencing it as a dependency from both modules is recommended.

finda-notification/src/main/kotlin/finda/findanotification/adapter/out/grpc/AuthGrpcClient.kt (1)
9-12: Constructor injection is preferable.

`@GrpcClient` is used with a `lateinit var` as field injection. Constructor injection is recommended for testability and immutability.

♻️ Suggested change to constructor injection

```diff
 @Component
-class AuthGrpcClient {
-    @GrpcClient("auth-service")
-    private lateinit var authServiceStub: AuthServiceGrpc.AuthServiceBlockingStub
+class AuthGrpcClient(
+    @GrpcClient("auth-service")
+    private val authServiceStub: AuthServiceGrpc.AuthServiceBlockingStub
+) {
```

finda-notification/build.gradle.kts (1)
40-56: Manage protobuf versions centrally.

The `protoc` and `protoc-gen-grpc-java` versions are hardcoded (3.24.0, 1.58.0). Other dependencies in `Dependencies.kt` are managed through `DependencyVersions`, so for consistency these versions should be managed the same way.

♻️ Suggested centralization

Add the versions to `DependencyVersions.kt`:

```kotlin
const val PROTOC = "3.24.0"
const val GRPC_JAVA_PLUGIN = "1.58.0"
```

Then reference them in build.gradle.kts:

```diff
 protobuf {
     protoc {
-        artifact = "com.google.protobuf:protoc:3.24.0"
+        artifact = "com.google.protobuf:protoc:${DependencyVersions.PROTOC}"
     }
     plugins {
         create("grpc") {
-            artifact = "io.grpc:protoc-gen-grpc-java:1.58.0"
+            artifact = "io.grpc:protoc-gen-grpc-java:${DependencyVersions.GRPC_JAVA_PLUGIN}"
         }
     }
```
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@buildSrc/src/main/kotlin/DependencyVersions.kt`:
- Around line 7-11: The GRPC dependency constant GRPC currently pins a
vulnerable version (1.58.0) — update the value of the GRPC constant in
DependencyVersions.kt to at least "1.58.3" (recommended "1.79.0") to remediate
CVE-2024-7246; while you're editing, consider bumping GRPC_STARTER to
"3.1.0.RELEASE" and FIRE_BASE to "9.7.1" to keep related libraries current, then
run a build and dependency compatibility tests to verify no regressions.
In `@finda-auth/build.gradle.kts`:
- Around line 45-61: Add the Kotlin compiler options to the finda-auth module's
build.gradle.kts to match other modules: inside the build script (next to the
existing protobuf { ... } block) add a Kotlin compiler configuration that sets
the JVM Kotlin compiler options to include the "-Xjsr305=strict" flag (e.g., via
tasks.withType<KotlinCompile> or kotlin { sourceSets / compileKotlin
kotlinOptions.freeCompilerArgs } so that the module uses the same strict
null-safety checks as
finda-volunteer/finda-notification/finda-batch/finda-gateway). Ensure the unique
flag "-Xjsr305=strict" is present and applied to all Kotlin compile tasks in
this file.
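The flag named above can be wired in with a few lines of Kotlin DSL; this is a minimal sketch next to the existing protobuf block, not the module's actual build script:

```kotlin
import org.jetbrains.kotlin.gradle.tasks.KotlinCompile

// Align finda-auth with the strict JSR-305 null-safety checks
// the other modules (finda-volunteer, finda-notification, ...) already use
tasks.withType<KotlinCompile> {
    kotlinOptions {
        freeCompilerArgs += "-Xjsr305=strict"
    }
}
```

Applying it via `tasks.withType` covers all Kotlin compile tasks (main and test) in the module.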
In
`@finda-auth/src/main/kotlin/finda/findaauth/adapter/in/grpc/AuthGrpcService.kt`:
- Around line 30-50: getDeviceTokens currently calls UUID.fromString on each
request.userIdsList without guarding against malformed UUIDs; wrap the
conversion and subsequent service call in a try/catch that catches
IllegalArgumentException (or Exception) and respond via responseObserver.onError
using a gRPC Status like Status.INVALID_ARGUMENT.withDescription("invalid user
id").withCause(e).asRuntimeException(), then return; otherwise proceed to build
and send DeviceTokenListResponse as before. Target symbols: getDeviceTokens,
UUID.fromString, getDeviceTokenService.getAllByUserIds,
responseObserver.onError, Status.INVALID_ARGUMENT.
- Around line 16-28: Wrap the body of getDeviceToken in a try/catch that handles
UUID.fromString and service errors: catch IllegalArgumentException and call
responseObserver.onError with a gRPC Status.INVALID_ARGUMENT (include the
invalid userId message), and catch DeviceTokenNotFoundException (thrown by
getDeviceTokenService.getByUserId) and call responseObserver.onError with
Status.NOT_FOUND (include a descriptive message); leave the successful response
flow (DeviceTokenResponse builder, responseObserver.onNext/onCompleted)
unchanged and ensure any other unexpected exceptions are propagated as
Status.INTERNAL via responseObserver.onError.
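The two error-mapping prompts above can be sketched as follows; the request field names (request.userId, setDeviceToken) and the service call are assumptions taken from the prompt text, not the actual code:

```kotlin
import io.grpc.Status
import io.grpc.stub.StreamObserver
import java.util.UUID

override fun getDeviceToken(
    request: UserRequest,
    responseObserver: StreamObserver<DeviceTokenResponse>
) {
    try {
        val userId = UUID.fromString(request.userId) // throws IllegalArgumentException on bad input
        val token = getDeviceTokenService.getByUserId(userId)
        responseObserver.onNext(
            DeviceTokenResponse.newBuilder().setDeviceToken(token).build()
        )
        responseObserver.onCompleted()
    } catch (e: IllegalArgumentException) {
        responseObserver.onError(
            Status.INVALID_ARGUMENT
                .withDescription("invalid user id: ${request.userId}")
                .withCause(e)
                .asRuntimeException()
        )
    } catch (e: DeviceTokenNotFoundException) {
        responseObserver.onError(
            Status.NOT_FOUND
                .withDescription("device token not found for user ${request.userId}")
                .withCause(e)
                .asRuntimeException()
        )
    } catch (e: Exception) {
        // Anything unexpected surfaces as INTERNAL rather than killing the call silently
        responseObserver.onError(
            Status.INTERNAL.withDescription("unexpected error").withCause(e).asRuntimeException()
        )
    }
}
```

getDeviceTokens would follow the same shape, mapping the malformed-UUID case to INVALID_ARGUMENT before building DeviceTokenListResponse.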
In
`@finda-auth/src/main/kotlin/finda/findaauth/adapter/out/persistence/devicetoken/repository/DeviceTokenRepository.kt`:
- Line 11: DeviceTokenPersistenceAdapter.getAllByUserIds currently forwards an
empty userIds list to the repository which can produce an IN () DB error; add an
early guard at the start of DeviceTokenPersistenceAdapter.getAllByUserIds to
return emptyList() when userIds.isEmpty() so that findAllByUser_IdIn is never
called with an empty list (this fixes the call chain:
DeviceTokenPersistenceAdapter.getAllByUserIds ->
GetDeviceTokenService.getAllByUserIds -> AuthGrpcService.getDeviceTokens).
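A sketch of the early guard in the adapter; the mapper call is an assumption about how entities are converted to domain objects:

```kotlin
override fun getAllByUserIds(userIds: List<UUID>): List<DeviceToken> {
    // Guard: an empty IN () clause is invalid SQL on some databases
    if (userIds.isEmpty()) return emptyList()
    return deviceTokenRepository.findAllByUser_IdIn(userIds)
        .map { deviceTokenMapper.toDomain(it) } // mapper name is illustrative
}
```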
In
`@finda-notification/src/main/kotlin/finda/findanotification/adapter/in/notice/dto/request/CreateNoticeWebRequest.kt`:
- Around line 6-11: CreateNoticeWebRequest currently allows empty title/body and
accepts only one of noticeDate/noticeTime, so add input invariants: enforce
non-blank title and body and require either both noticeDate and noticeTime be
non-null or both be null; implement this in the CreateNoticeWebRequest class
(e.g., an init block or a companion factory/validate() called by callers) and
throw a clear IllegalArgumentException with a descriptive message when the
invariants fail, referencing the fields title, body, noticeDate, and noticeTime.
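One way to express those invariants is an init block with require(...); the field types (LocalDate/LocalTime) are assumptions about the request shape:

```kotlin
import java.time.LocalDate
import java.time.LocalTime

data class CreateNoticeWebRequest(
    val title: String,
    val body: String,
    val noticeDate: LocalDate?,
    val noticeTime: LocalTime?,
) {
    init {
        require(title.isNotBlank()) { "title must not be blank" }
        require(body.isNotBlank()) { "body must not be blank" }
        // Both present (scheduled notice) or both absent (immediate notice)
        require((noticeDate == null) == (noticeTime == null)) {
            "noticeDate and noticeTime must be provided together or both omitted"
        }
    }
}
```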
In
`@finda-notification/src/main/kotlin/finda/findanotification/adapter/out/fcm/FcmClient.kt`:
- Around line 14-16: The sendNotification functions (e.g.,
sendNotification(deviceToken: String, ...) and the overloaded variant handling
multiple tokens) lack input validation; add defensive checks to return early
without calling FCM when deviceToken is null/blank or when the device token list
is null/empty/contains only blank entries. In sendNotification(deviceToken:
String) trim and check isNullOrBlank and log/return; in the multi-token method
filter out blank tokens, check if resulting list is empty and log/return before
building/sending the Message, so no unnecessary external FCM calls occur.
- Around line 28-31: sendNotifications currently calls sendNotification in a
forEach without per-token error handling so a
FirebaseMessaging.getInstance().send(message) FirebaseMessagingException will
stop the whole loop; instead, wrap each sendNotification(token, title, body)
call in a try/catch inside sendNotifications, catch
FirebaseMessagingException/Exception per token, and collect per-token results
(e.g., a List or Map of token -> success/failure or token -> error) to return to
the caller so the upper layer can retry or persist history; update
sendNotification/sendNotifications signatures as needed to propagate or map
results rather than letting exceptions abort the loop.
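A hedged sketch of per-token error collection; the sealed result type and the injected send function are illustrative, not from the codebase:

```kotlin
sealed interface SendResult {
    data class Success(val token: String) : SendResult
    data class Failure(val token: String, val cause: Exception) : SendResult
}

fun sendNotifications(
    deviceTokens: List<String>,
    title: String,
    body: String,
    sendOne: (token: String, title: String, body: String) -> Unit, // e.g. ::sendNotification
): List<SendResult> =
    deviceTokens
        .filter { it.isNotBlank() }          // drop blank tokens up front
        .map { token ->
            try {
                sendOne(token, title, body)
                SendResult.Success(token)
            } catch (e: Exception) {         // FirebaseMessagingException in practice
                SendResult.Failure(token, e) // one bad token no longer aborts the loop
            }
        }
```

The caller can then persist failures or retry only the failed tokens.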
In
`@finda-notification/src/main/kotlin/finda/findanotification/adapter/out/kafka/NoticeScheduledEventProducer.kt`:
- Around line 17-22: The send method in NoticeScheduledEventProducer currently
only logs Kafka send failures and does not propagate them, which allows
CreateNoticeService to consider the operation successful even if event
publication failed; update NoticeScheduledEventProducer.send (which uses
kafkaTemplate.send(TOPIC, NoticeScheduledEvent(notice.id))) to surface failures
to callers—either perform a synchronous send by calling get() on the returned
Future to throw the exception on failure, or convert the CompletableFuture
result and rethrow a meaningful exception so CreateNoticeService can handle
retries/compensation; ensure the thrown exception includes context (notice.id)
so callers can act on the failure.
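The synchronous option might look like this; the timeout value and the exception type are hypothetical choices:

```kotlin
import java.util.concurrent.TimeUnit

fun send(notice: Notice) {
    try {
        // .get(...) blocks until the broker acknowledges, so failures surface here
        kafkaTemplate.send(TOPIC, NoticeScheduledEvent(notice.id))
            .get(5, TimeUnit.SECONDS)
    } catch (e: Exception) {
        // Include notice.id so CreateNoticeService can retry or compensate
        throw IllegalStateException("failed to publish NOTICE-FIRED for notice=${notice.id}", e)
    }
}
```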
In
`@finda-notification/src/main/kotlin/finda/findanotification/adapter/out/persistence/notice/mapper/NoticeMapper.kt`:
- Line 29: The mapping uses a forced non-null assertion for domain.adminId in
NoticeMapper (adminId = domain.adminId!!), which can cause a runtime NPE; change
the mapping to explicitly handle nulls by either validating and throwing a clear
exception (e.g., IllegalStateException with context) inside the mapper or by
mapping to the entity's nullable adminId field if the DB allows nulls; update
the map function (NoticeMapper / related mapToEntity method) to perform this
explicit check and produce a descriptive error or a safe nullable assignment
instead of using !!.
In
`@finda-notification/src/main/kotlin/finda/findanotification/adapter/out/persistence/notificationpreference/repository/VolunteerNotificationPreferenceRepository.kt`:
- Line 9: The repository method findByVolunteerId currently returns a single
VolunteerNotificationPreferenceJpaEntity which will throw
IncorrectResultSizeDataAccessException if multiple rows share the same
volunteer_id; either (A) change the signature in
VolunteerNotificationPreferenceRepository from fun
findByVolunteerId(volunteerId: String):
VolunteerNotificationPreferenceJpaEntity? to fun findByVolunteerId(volunteerId:
String): List<VolunteerNotificationPreferenceJpaEntity> and update all callers
to handle a list (choose first element or handle multiple appropriately), or (B)
enforce uniqueness by adding a DB unique constraint on volunteer_id, annotate
the entity column with `@Column`(name = "volunteer_id", unique = true) on
VolunteerNotificationPreferenceJpaEntity, and add a migration to apply the
unique constraint so the single-return signature remains safe.
In
`@finda-notification/src/main/kotlin/finda/findanotification/application/port/in/kafka/SendNoticeNotificationUseCase.kt`:
- Around line 3-6: The SendNoticeNotificationUseCase interface currently depends
on adapter DTO NoticeScheduledFiredEvent; introduce an application-level
event/command type (e.g., ApplicationNoticeScheduledEvent or SendNoticeCommand)
in the application.port.in package and change the interface signature in
SendNoticeNotificationUseCase::send to accept that application type instead of
NoticeScheduledFiredEvent; update the Kafka consumer/adapter to map
NoticeScheduledFiredEvent to the new application type before calling the use
case (so only the adapter layer imports
adapter.`in`.kafka.dto.NoticeScheduledFiredEvent and the application layer
remains adapter-agnostic).
In
`@finda-notification/src/main/kotlin/finda/findanotification/application/service/kafka/NoticeNotificationService.kt`:
- Around line 38-55: sendToAllUsers currently calls external services without
guarding for empty userIds and without handling failures from
authGrpcClient.getDeviceTokens or fcmClient.sendNotifications; modify
sendToAllUsers to first return early if userIds.isEmpty(), then wrap the gRPC
and FCM calls in a try/catch and implement a retry or error-handling path (e.g.,
retry logic or publish to a DLQ / log and abort) so exceptions don’t escape the
Kafka consumer; only call saveNotificationPort.save after successful delivery
(or record failure state) and reference authGrpcClient.getDeviceTokens,
fcmClient.sendNotifications, and saveNotificationPort.save when making these
changes.
- Around line 26-29: The send method quietly returns when a Notice isn't found;
replace the silent return with explicit handling: use the class logger to
error-log that the Notice was missing (include event.noticeId and event details)
and then either throw an exception (e.g., IllegalStateException) to surface the
failure to the scheduler or, if soft-fail is desired, keep the log and
return—update the send method in NoticeNotificationService (the override fun
send(event: NoticeScheduledFiredEvent) that calls
noticeRepository.findByIdOrNull and sendToAllUsers) to perform the logging and,
if choosing to escalate, throw an exception after logging so the scheduler can
detect the failure.
In
`@finda-notification/src/main/kotlin/finda/findanotification/application/service/kafka/VolunteerNotificationService.kt`:
- Around line 30-34: The code calls
authGrpcClient.getDeviceToken(preference.userId) and
fcmClient.sendNotification(...) without validating the deviceToken or handling
RPC/FCM errors; modify VolunteerNotificationService (the block using
authGrpcClient.getDeviceToken and fcmClient.sendNotification) to first check if
deviceToken is null or blank and skip/log the notification when so, and wrap the
external calls in a try/catch that logs failures (including userId,
event.id/status/progress) and prevents exceptions from bubbling up; ensure you
reference authGrpcClient.getDeviceToken, fcmClient.sendNotification,
deviceToken, preference.userId, and event in the error logs.
In
`@finda-notification/src/main/kotlin/finda/findanotification/global/config/FirebaseConfig.kt`:
- Around line 16-23: The initialize() method opens
ClassPathResource("firebase-service-account.json").inputStream and passes it to
GoogleCredentials.fromStream(...) without closing the stream and calls
FirebaseApp.initializeApp(options) unconditionally; fix by wrapping the
InputStream in a try-with-resources (use Kotlin's use { ... }) so the stream is
closed after GoogleCredentials.fromStream, and guard
FirebaseApp.initializeApp(options) by checking FirebaseApp.getApps().isEmpty()
(or catching IllegalStateException) to avoid duplicate initialization; update
the initialize() function to use these two changes referencing
ClassPathResource, GoogleCredentials.fromStream, and FirebaseApp.initializeApp.
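A sketch combining both fixes (stream closed via use { }, initialization guarded); method and resource names follow the prompt above:

```kotlin
import com.google.auth.oauth2.GoogleCredentials
import com.google.firebase.FirebaseApp
import com.google.firebase.FirebaseOptions
import jakarta.annotation.PostConstruct
import org.springframework.core.io.ClassPathResource

@PostConstruct
fun initialize() {
    ClassPathResource("firebase-service-account.json").inputStream.use { stream ->
        val options = FirebaseOptions.builder()
            .setCredentials(GoogleCredentials.fromStream(stream))
            .build()
        // Guard against double-initialization (context refresh, tests)
        if (FirebaseApp.getApps().isEmpty()) {
            FirebaseApp.initializeApp(options)
        }
    } // stream is closed here even if fromStream throws
}
```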
In
`@finda-notification/src/main/kotlin/finda/findanotification/global/config/KafkaConfig.kt`:
- Around line 46-47: In the current KafkaConfig props setup,
JsonDeserializer.TRUSTED_PACKAGES is set to "*", which allows deserialization of
classes from any package; restrict it to the project root package: in
KafkaConfig, set props[JsonDeserializer.TRUSTED_PACKAGES] to the application
base package (e.g. "finda.findanotification") instead of the wildcard "*", and
if several packages must be allowed, specify them as a comma-separated list so
JsonDeserializer.TRUSTED_PACKAGES stays safely scoped.
In `@finda-notification/src/main/resources/application.yml`:
- Line 8: The hardcoded address "static://localhost:9090" in the "address"
setting causes broken loopback connections in container/server environments;
externalize that "address" entry in application.yml to an environment variable
(e.g. AUTH_SERVICE_ADDRESS, or AUTH_SERVICE_HOST/PORT) instead of the literal
"static://localhost:9090", and update the deployment/container configuration to
supply the real auth-service host and port through that variable.
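A possible shape for the externalized client address, assuming the grpc-spring-boot-starter property layout (grpc.client.&lt;name&gt;.address); the variable name is illustrative:

```yaml
grpc:
  client:
    auth-service:
      # Externalized; falls back to the local default for development
      address: ${AUTH_SERVICE_ADDRESS:static://localhost:9090}
```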
---
Outside diff comments:
In
`@finda-notification/src/main/kotlin/finda/findanotification/adapter/out/persistence/notice/entity/NoticeJpaEntity.kt`:
- Around line 13-30: The constructor for NoticeJpaEntity is not passing the
incoming id parameter to its superclass, so the id from NoticeMapper is lost;
update the class declaration to call the BaseEntity constructor with the id
(pass id into BaseEntity) so BaseEntity receives the provided UUID (modify
NoticeJpaEntity to invoke BaseEntity(id) using the existing id constructor
parameter).
In
`@finda-notification/src/main/kotlin/finda/findanotification/adapter/out/persistence/notification/entity/NotificationJpaEntity.kt`:
- Line 15: The id parameter passed to NotificationJpaEntity's constructor is
ignored, so BaseEntity's `@GeneratedValue` always generates a new UUID; either
forward the given id to BaseEntity or remove the parameter. Concretely, change
NotificationJpaEntity's constructor(id: UUID?, ...) to call BaseEntity(id)
instead of BaseEntity() so that domain.id is used as the actual entity
identifier, or, if ids should never come from outside, drop id: UUID? from
NotificationJpaEntity's constructor signature and remove the corresponding
argument passed in NotificationMapper.
In
`@finda-notification/src/main/kotlin/finda/findanotification/adapter/out/persistence/notificationpreference/entity/NotificationPreferenceJpaEntity.kt`:
- Around line 12-23: The constructor parameter id in
NotificationPreferenceJpaEntity is unused; either remove it or forward it to the
superclass. Fix by updating the class declaration for
NotificationPreferenceJpaEntity to pass the id value to BaseEntity (e.g., call
BaseEntity(id)) so the provided UUID is used as the entity identifier (ensure
BaseEntity has a matching constructor), or if you intentionally don't accept
external IDs, delete the id parameter from NotificationPreferenceJpaEntity's
constructor to avoid confusion.
---
Nitpick comments:
In `@docker-compose.yml`:
- Around line 8-9: The image field of the 'kafka' service definition is
specified without a tag, which hurts reproducibility; pin the kafka service
image (wurstmeister/kafka) in docker-compose to a specific tag (e.g. set image
to "wurstmeister/kafka:2.13-2.8.1" or another pinned version) so the 'kafka'
service uses a fixed version rather than implicit latest.
- Around line 17-18: depends_on alone doesn't guarantee Zookeeper is ready
before Kafka starts; add a Docker healthcheck for the zookeeper service (e.g., a
simple zkCli.sh or nc check) and then update the kafka service's depends_on to
wait for zookeeper's health (use the service_healthy condition or ensure compose
version supports waiting on health). Target the zookeeper service block to add
the healthcheck and the kafka service's depends_on entry to reference zookeeper:
condition: service_healthy so Kafka waits until Zookeeper's healthcheck passes.
- Around line 3-16: Replace the unmaintained wurstmeister images and deprecated
env vars: swap the kafka service image (service name "kafka") to a maintained
image such as bitnami/kafka (or another up-to-date provider), remove or replace
the zookeeper service if you want KRaft mode, and stop using
KAFKA_ADVERTISED_HOST_NAME / KAFKA_ADVERTISED_PORT; instead set
KAFKA_ADVERTISED_LISTENERS and listener configuration env vars supported by the
new image. If you opt for KRaft (no Zookeeper), remove the "zookeeper" service
and configure the broker with the KRaft-specific envs required by the chosen
image (e.g., KAFKA_CFG_PROCESS_ROLES, KAFKA_CFG_NODE_ID,
KAFKA_CFG_CONTROLLER_QUORUM_VOTERS or image-specific KRAFT flags) so the "kafka"
service runs as a KRaft broker using the modern advertised listeners
configuration.
In `@finda-auth/build.gradle.kts`:
- Around line 45-61: Replace the hardcoded protoc and protoc-gen-grpc-java
version strings with the shared constants from DependencyVersions (use
DependencyVersions.PROTOBUF for protoc and DependencyVersions.GRPC for
protoc-gen-grpc-java) so the protoc artifact and the grpc plugin artifact are
built using "${DependencyVersions.PROTOBUF}" and "${DependencyVersions.GRPC}"
respectively; update the two artifact assignments inside the protobuf { protoc {
... } } and plugins { create("grpc") { ... } } blocks to reference those
constants (fully qualify DependencyVersions if needed).
In
`@finda-auth/src/main/kotlin/finda/findaauth/application/exception/devicetoken/DeviceTokenNotFoundException.kt`:
- Line 6: DeviceTokenNotFoundException is defined as an object singleton which
makes its stack trace point to the initialization site rather than the throw
site; replace the singleton with a class (e.g., class
DeviceTokenNotFoundException : FindaException(ErrorCode.DEVICE_TOKEN_NOT_FOUND))
so each throw creates a new instance and preserves the correct stack trace,
ensuring consistency with FindaException usage and other exception types that
should carry per-throw stack information.
In `@finda-auth/src/main/proto/auth.proto`:
- Around line 1-29: The auth.proto duplication issue: move the proto (auth.proto
containing package auth and service AuthService with messages UserRequest,
UserListRequest, DeviceTokenResponse, DeviceTokenListResponse) into a shared
proto module (e.g., finda-proto or finda-common), remove the duplicate files
from both finda-auth and finda-notification, and update each module's build
configuration to depend on the shared module (so code generation pulls the
single shared auth.proto); ensure java_package and option values remain correct
and adjust import paths/usages where UserRequest/DeviceTokenResponse types are
referenced.
In `@finda-auth/src/main/resources/application.yml`:
- Around line 5-7: The grpc server port is hardcoded under the grpc.server.port
setting; change it to read from an environment variable by replacing the literal
9090 with a Spring/YAML placeholder that falls back to 9090 (e.g.,
${GRPC_SERVER_PORT:9090}) so grpc.server.port uses GRPC_SERVER_PORT when
provided; update the grpc.server.port entry accordingly to externalize
configuration.
In `@finda-notification/build.gradle.kts`:
- Around line 40-56: Replace hardcoded protoc and protoc-gen-grpc-java versions
with centralized constants: add PROTOC and GRPC_JAVA_PLUGIN to
DependencyVersions (e.g., const val PROTOC = "3.24.0", const val
GRPC_JAVA_PLUGIN = "1.58.0") and update the protobuf block in build.gradle.kts
(the protoc { artifact = ... } and plugins { create("grpc") { artifact = ... } }
entries used by generateProtoTasks) to reference DependencyVersions.PROTOC and
DependencyVersions.GRPC_JAVA_PLUGIN instead of literal version strings so the
versions are managed centrally and consistently.
In
`@finda-notification/src/main/kotlin/finda/findanotification/adapter/in/kafka/NoticeFiredConsumer.kt`:
- Line 15: The `@KafkaListener` annotation in NoticeFiredConsumer currently
hardcodes the topic name ("NOTICE-FIRED"); change it to reference a
configuration property (e.g. kafka.topics.notice-fired) so the topic is injected
from application properties/env. Update the `@KafkaListener` on class
NoticeFiredConsumer to use a property placeholder, add the
kafka.topics.notice-fired key to your application.yml/properties (and env/secret
management as needed), and ensure Spring property resolution is enabled so
deployments can change the topic without code changes.
In
`@finda-notification/src/main/kotlin/finda/findanotification/adapter/in/kafka/VolunteerFiredConsumer.kt`:
- Around line 16-32: Wrap the sendVolunteerNotificationUseCase calls in
consumeVolunteerStatusChanged and consumeVolunteerRemind with error handling so
acknowledgment.acknowledge() is always called (e.g., call acknowledge() in a
finally block) and distinguish retriable vs non-retriable errors: catch
exceptions from sendStatusChanged/sendRemind, log the full error with context,
for poison-pill/non-retriable exceptions route the event to a dead-letter sink
or mark it handled, and only rethrow or skip acknowledge for intentional retry
strategies; ensure the catch references
sendVolunteerNotificationUseCase.sendStatusChanged,
sendVolunteerNotificationUseCase.sendRemind, and acknowledgment.acknowledge().
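A sketch of one of the listener methods with the always-ack pattern; the topic name, logger, and event field are assumptions:

```kotlin
import org.springframework.kafka.annotation.KafkaListener
import org.springframework.kafka.support.Acknowledgment

@KafkaListener(topics = ["VOLUNTEER-STATUS-CHANGED"]) // topic name is illustrative
fun consumeVolunteerStatusChanged(
    event: VolunteerStatusChangedFiredEvent,
    acknowledgment: Acknowledgment,
) {
    try {
        sendVolunteerNotificationUseCase.sendStatusChanged(event)
    } catch (e: Exception) {
        // Treated as non-retriable here: log with context and fall through to ack
        // so a poison pill doesn't block the partition; route to a DLT if one exists.
        log.error("volunteer status notification failed: volunteerId={}", event.volunteerId, e)
    } finally {
        acknowledgment.acknowledge()
    }
}
```

Skipping acknowledge() (to force redelivery) would only make sense for errors known to be transient.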
In
`@finda-notification/src/main/kotlin/finda/findanotification/adapter/in/notice/NoticeWebAdapter.kt`:
- Around line 17-27: Add request validation and proper POST response handling in
createNotice: annotate the CreateNoticeWebRequest parameter with `@Valid` (and
import javax/ jakarta.validation as used in the project) so field constraints
are applied, and change the createNotice method to return a ResponseEntity that
returns 201 Created with either the new resource ID (from
createNoticeUseCase.execute returning or exposing an ID) in the body and/or a
Location header pointing to the new resource URI; update the call site around
createNoticeUseCase.execute/CreateNoticeCommand to capture the created ID (or
adapt the use case to return it) and build the ResponseEntity accordingly.
In
`@finda-notification/src/main/kotlin/finda/findanotification/adapter/out/fcm/FcmClient.kt`:
- Around line 28-31: The current sendNotifications implementation calls
sendNotification per token causing N network requests; change it to build
MulticastMessage batches and call
FirebaseMessaging.getInstance().sendEachForMulticast() in sendNotifications,
splitting deviceTokens into chunks of up to 500, and for each BatchResponse
iterate results to log or handle per-token successes/failures
(messageId/exception). Keep sendNotification for single-token use or
remove/repurpose it, but ensure sendNotifications uses MulticastMessage,
FirebaseMessaging.sendEachForMulticast(), and correctly maps BatchResponse
results back to the original tokens for error handling and retry logic.
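A hedged sketch of the batched variant using MulticastMessage and sendEachForMulticast (FCM caps one multicast at 500 tokens); the logger is illustrative:

```kotlin
import com.google.firebase.messaging.FirebaseMessaging
import com.google.firebase.messaging.MulticastMessage
import com.google.firebase.messaging.Notification

fun sendNotifications(deviceTokens: List<String>, title: String, body: String) {
    deviceTokens.filter { it.isNotBlank() }.chunked(500).forEach { batch ->
        val message = MulticastMessage.builder()
            .setNotification(Notification.builder().setTitle(title).setBody(body).build())
            .addAllTokens(batch)
            .build()
        // One network call per batch instead of one per token
        val response = FirebaseMessaging.getInstance().sendEachForMulticast(message)
        response.responses.forEachIndexed { i, result ->
            if (!result.isSuccessful) {
                // Index maps back to the original token for retry/history handling
                log.warn("FCM send failed for token={}", batch[i], result.exception)
            }
        }
    }
}
```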
In
`@finda-notification/src/main/kotlin/finda/findanotification/adapter/out/grpc/AuthGrpcClient.kt`:
- Around line 9-12: Replace field injection of the gRPC stub with constructor
injection: remove the lateinit var authServiceStub field and instead add a
constructor parameter of type AuthServiceGrpc.AuthServiceBlockingStub annotated
with `@GrpcClient`("auth-service") inside class AuthGrpcClient so the stub becomes
an immutable dependency; update usages in AuthGrpcClient to reference the
constructor-injected property and remove the field-level `@GrpcClient` annotation.
In
`@finda-notification/src/main/kotlin/finda/findanotification/adapter/out/persistence/notice/NoticePersistenceAdapter.kt`:
- Around line 15-19: The save(notice: Notice) method in NoticePersistenceAdapter
uses a non-null assertion on noticeMapper.toDomain(saved) which risks an NPE;
replace the "!!" with requireNotNull(...) and supply a helpful error message
(e.g., "Failed to map NoticeEntity to domain") to mirror the pattern used in
NotificationPersistenceAdapter, updating the return to
requireNotNull(noticeMapper.toDomain(saved)) with the message so failures are
explicit and safe.
In
`@finda-notification/src/main/kotlin/finda/findanotification/adapter/out/persistence/notification/NotificationPersistenceAdapter.kt`:
- Around line 15-19: In NotificationPersistenceAdapter.save, avoid the unsafe
non-null assertion on notificationMapper.toDomain(saved)!!; instead call
notificationMapper.toDomain(saved) and guard the nullable result with
requireNotNull (or throw a clear exception) providing a descriptive message
(e.g., "Failed to map NotificationEntity to domain for id=...") so the failure
is explicit; locate the mapping flow through notificationMapper.toEntity,
notificationRepository.save, and notificationMapper.toDomain to implement this
check and return the non-null domain object.
In
`@finda-notification/src/main/kotlin/finda/findanotification/application/port/in/kafka/SendVolunteerNotificationUseCase.kt`:
- Around line 3-8: The port interface SendVolunteerNotificationUseCase currently
depends on adapter DTOs VolunteerStatusChangedFiredEvent and
VolunteerRemindFiredEvent; move those DTO types out of the adapter layer or
replace them with port-level types to remove adapter-to-port dependency. Update
SendVolunteerNotificationUseCase to accept new port-level DTOs (e.g.,
application.port.in.kafka.VolunteerStatusChangedEvent and VolunteerRemindEvent)
or use domain types, then refactor the adapter `in.kafka` code to map its
adapter DTOs to the new port-level types before calling
sendStatusChanged/sendRemind so the interface no longer imports
adapter.`in`.kafka.dto classes.
In
`@finda-notification/src/main/kotlin/finda/findanotification/application/service/kafka/VolunteerNotificationService.kt`:
- Around line 23-69: Extract the duplicated logic in sendStatusChanged and
sendRemind into a private helper (e.g., sendNotificationToVolunteer) that
accepts volunteerId: UUID, title: String, body: String; inside the helper use
volunteerNotificationPreferenceRepository.findByVolunteerId(...), check
preference.enabled, get deviceToken via
authGrpcClient.getDeviceToken(preference.userId), call
fcmClient.sendNotification(deviceToken, title, body), and persist the
Notification via saveNotificationPort.save(...) with
NotificationType.NOTIFICATION; then update sendStatusChanged and sendRemind to
call this helper with their respective titles/bodies and event.volunteerId.
In
`@finda-notification/src/main/kotlin/finda/findanotification/application/service/notice/CreateNoticeService.kt`:
- Around line 21-37: The save-notify sequence in CreateNoticeService.execute can
leave DB and notification out-of-sync if an exception occurs after
saveNoticePort.save(); either make the operation atomic by annotating the
transactional boundary or persist an outbox event instead: add `@Transactional` to
the CreateNoticeService.execute (or to the class) so saveNoticePort.save(...)
and subsequent sendNoticeScheduledEventPort.send(...) /
noticeNotificationService.sendImmediate(...) run in one transaction (and move
external call to afterCommit if necessary), or replace the direct sends with
persisting an OutboxEvent via a new saveOutboxPort.saveOutbox(event) and let an
outbox publisher ship the events. Ensure you reference
CreateNoticeService.execute, saveNoticePort.save,
sendNoticeScheduledEventPort.send, noticeNotificationService.sendImmediate (or
new saveOutboxPort.saveOutbox) when making the change.
In
`@finda-notification/src/main/kotlin/finda/findanotification/global/config/KafkaConfig.kt`:
- Around line 37-49: consumerFactory() currently builds a
DefaultKafkaConsumerFactory but lacks an error handler; update your Kafka
listener container factory (the bean that uses consumerFactory()) to attach a
DefaultErrorHandler (or SeekToCurrentErrorHandler depending on Spring Kafka
version) with a DeadLetterPublishingRecoverer or a recoverer that publishes to a
DLT and appropriate backoff, and ensure MANUAL_IMMEDIATE ack mode behavior is
preserved by configuring the container factory’s setCommonErrorHandler /
setErrorHandler and ackMode; reference consumerFactory(),
DefaultKafkaConsumerFactory, and the Kafka listener container factory bean when
making these changes.
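A sketch of a container factory with a dead-letter error handler wired in; generic types, backoff values, and bean wiring are assumptions:

```kotlin
import org.springframework.context.annotation.Bean
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory
import org.springframework.kafka.core.ConsumerFactory
import org.springframework.kafka.core.KafkaTemplate
import org.springframework.kafka.listener.ContainerProperties
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer
import org.springframework.kafka.listener.DefaultErrorHandler
import org.springframework.util.backoff.FixedBackOff

@Bean
fun kafkaListenerContainerFactory(
    consumerFactory: ConsumerFactory<String, Any>,
    kafkaTemplate: KafkaTemplate<String, Any>,
): ConcurrentKafkaListenerContainerFactory<String, Any> {
    val factory = ConcurrentKafkaListenerContainerFactory<String, Any>()
    factory.setConsumerFactory(consumerFactory)
    // Preserve the MANUAL_IMMEDIATE ack behavior the listeners rely on
    factory.containerProperties.ackMode = ContainerProperties.AckMode.MANUAL_IMMEDIATE
    factory.setCommonErrorHandler(
        DefaultErrorHandler(
            DeadLetterPublishingRecoverer(kafkaTemplate), // failed records go to <topic>.DLT
            FixedBackOff(1000L, 2L),                      // retry twice, 1s apart
        )
    )
    return factory
}
```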
In `@finda-notification/src/main/proto/auth.proto`:
- Around line 1-29: This proto duplicates the AuthService and related messages
(AuthService, GetDeviceToken, GetDeviceTokens, UserRequest, UserListRequest,
DeviceTokenResponse, DeviceTokenListResponse); remove this local proto and
instead import/use the shared proto from the finda-auth shared module in your
build (or reference the shared proto artifact) so the service and message
definitions come from the single canonical proto; update any gRPC generation /
build config to point to the shared proto and adjust imports/usages to the
shared package so there is no local duplicate definition.
const val GRPC_STARTER = "2.15.0.RELEASE"
const val GRPC = "1.58.0"
const val PROTOBUF = "3.24.0"
const val JAVAX_ANNOTATION = "1.3.2"
const val FIRE_BASE = "9.5.0"
🧩 Analysis chain
🌐 Web query:
gRPC 1.58.0 security vulnerabilities CVE
💡 Result:
gRPC Core/C++ (and the Python/Ruby bindings that use gRPC Core) version 1.58.0 is affected by at least these CVEs:
- CVE-2024-7246 — HPACK table poisoning via HTTP/2 proxies; can cause other clients’ requests to fail and can leak other clients’ HTTP header keys (not values). Fixed in gRPC 1.58.3 (so 1.58.0 is affected). (nvd.nist.gov)
- CVE-2023-4785 — DoS due to missing error handling in the TCP server on POSIX platforms; affects gRPC C++/Python/Ruby, not gRPC Java/Go. Since it affects gRPC starting from 1.23, 1.58.0 is affected. (nvd.nist.gov)
Note: gRPC-Go is a separate implementation; for example CVE-2023-44487 affects some gRPC-Go 1.58.x patch releases after 1.58.0 (so it’s not “gRPC 1.58.0” in the Core/C++ sense, and not 1.58.0 specifically). (resolvedsecurity.com)
Citations:
- 1: https://nvd.nist.gov/vuln/detail/CVE-2024-7246?utm_source=openai
- 2: https://nvd.nist.gov/vuln/detail/cve-2023-4785?utm_source=openai
- 3: https://www.resolvedsecurity.com/vulnerability-catalog/CVE-2023-44487?utm_source=openai
🌐 Web query:
Firebase Admin SDK 9.5.0 security vulnerabilities
💡 Result:
For Firebase Admin Node.js SDK (firebase-admin) v9.5.0 (released Feb 10, 2021), there are no known “direct” published vulnerabilities in the firebase-admin package itself in Snyk’s vulnerability DB. [1]
However, v9.5.0 is very old, and the project later updated jsonwebtoken to v9.0.0 “to address security concerns” (that change landed in firebase-admin v11.4.1, Dec 22, 2022), which implies older versions like 9.5.0 were on an earlier jsonwebtoken major. [2] jsonwebtoken < 9 has had multiple published security issues (example: “broken or risky cryptographic algorithm” / verification misconfiguration concerns), so your real risk with 9.5.0 is likely via dependencies / transitive dependencies, not firebase-admin itself. [3]
What to do
- Upgrade: move off 9.5.0 (current firebase-admin is 13.x per Snyk). [1]
- Verify in your app: run npm audit (and/or OSV/Snyk/Dependabot) against your lockfile, since the exact vulnerable packages depend on the resolved dependency tree.
Sources
[1] Snyk package page: firebase-admin direct vulnerabilities status / latest version info. (security.snyk.io)
[2] Firebase Admin Node.js release notes: v11.4.1 updated jsonwebtoken to v9.0.0; v9.5.0 release date. (firebase.google.com)
[3] Snyk report example describing jsonwebtoken security issue class and remediation (upgrade to v9+). (snyk.io)
Citations:
- 1: https://security.snyk.io/package/npm/firebase-admin?utm_source=openai
- 2: https://firebase.google.com/support/release-notes/admin/node?utm_source=openai
- 3: https://snyk.io/test/github/e2fyi/firebase-admin?utm_source=openai
🌐 Web query:
gRPC 1.58.0 Protobuf 3.24.0 compatibility
💡 Result:
For gRPC-Java 1.58.0, the published Maven artifact io.grpc:grpc-protobuf:1.58.0 has a direct compile dependency on com.google.protobuf:protobuf-java:3.24.0, so Protobuf 3.24.0 is the version gRPC 1.58.0 is built/tested against and is compatible out of the box. [1]
If you’re using the lite stack (common on Android), io.grpc:grpc-protobuf-lite:1.58.0 similarly depends on com.google.protobuf:protobuf-javalite:3.24.0. [2]
Sources: [1] [2]
🌐 Web query:
Maven Central gRPC latest version 2025 2026
💡 Result:
As of Feb 27, 2026, the latest gRPC Java release on Maven Central (groupId io.grpc) is 1.79.0 (published Feb 03, 2026). This applies to the commonly used artifacts, e.g. grpc-bom, grpc-core, and grpc-netty-shaded. [1] [2] [3] [4]
Key references (Maven Central):
- io.grpc:grpc-bom:1.79.0 [1]
- io.grpc:grpc-core:1.79.0 [3]
- io.grpc:grpc-netty-shaded:1.79.0 [2]
Latest in 2025 (end-of-year): 1.78.0 (Dec 23, 2025). [1] [2]
Sources: [1] [2] [3] [4]
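As a reference for how the BOM above is typically consumed, here is a minimal Gradle (Kotlin DSL) sketch; the artifact coordinates come from the query result above, and which grpc-* modules you actually need depends on your project:

```kotlin
dependencies {
    // Import the gRPC BOM so all io.grpc artifacts resolve to the same version
    implementation(platform("io.grpc:grpc-bom:1.79.0"))
    // Versions below are supplied by the BOM
    implementation("io.grpc:grpc-netty-shaded")
    implementation("io.grpc:grpc-protobuf")
    implementation("io.grpc:grpc-stub")
}
```

Using the BOM avoids mixing io.grpc artifact versions when bumping for a CVE fix.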
🌐 Web query:
Firebase Admin latest version 2025 2026
💡 Result:
As of Feb 27, 2026, the latest released Firebase Admin SDK versions are:
- Node.js (`firebase-admin` on npm): 13.6.1 (published ~Feb 2026) [1]
- Java (com.google.firebase:firebase-admin): 9.7.1 (Jan 29, 2026) [2]
- Python (`firebase-admin` on PyPI): 7.1.0 (Jul 31, 2025) [3]
- Go (firebase.google.com/go/v4): v4.19.0 (Jan 21, 2026) [4]
- .NET (FirebaseAdmin on NuGet): 3.4.0 (Sep 8, 2025) [5]
Sources: [1] [2] [3] [4] [5]
🌐 Web query:
Firebase Admin Java SDK 9.5.0 Maven Central
💡 Result:
Firebase Admin Java SDK 9.5.0 (Maven Central)
- Coordinates: com.google.firebase:firebase-admin:9.5.0 [1]
- Maven Central (Sonatype) page: https://central.sonatype.com/artifact/com.google.firebase/firebase-admin/9.5.0 [1]
- Repository directory (all files / checksums / POM): https://repo1.maven.org/maven2/com/google/firebase/firebase-admin/9.5.0/ (also visible in the version index) [2]
- Release date: May 29, 2025 [3]

Maven:
<dependency>
  <groupId>com.google.firebase</groupId>
  <artifactId>firebase-admin</artifactId>
  <version>9.5.0</version>
</dependency>

Gradle:
implementation("com.google.firebase:firebase-admin:9.5.0")

Sources: [1] [2] [3]
🌐 Web query:
CVE-2024-7246 gRPC Java impact
💡 Result:
CVE-2024-7246 is an HPACK table poisoning issue that occurs when a gRPC client talks to a backend through an HTTP/2 proxy, which can cause other clients’ requests to fail and may leak HTTP header keys (not values). It was fixed upstream in gRPC 1.58.3 / 1.59.5 / 1.60.2 / 1.61.3 / 1.62.3 / 1.63.2 / 1.64.3 / 1.65.4 (and later). [1]
gRPC Java impact (grpc-java):
- The CVE is described by the vendor/NVD as impacting gRPC “clients communicating with an HTTP/2 proxy” and references a C-core HTTP/2/HPACK parsing issue (the linked upstream issue is labeled lang/c++). [1][2]
- Multiple vulnerability databases summarize the affected implementations as gRPC C++ / Python / Ruby (C-core-based), not grpc-java. [3][4]
Practical takeaway: if you are using grpc-java artifacts (e.g., io.grpc:grpc-netty, grpc-netty-shaded, grpc-okhttp) and not the gRPC C-core, this CVE is generally not applicable to Java; it mainly matters for C-core-based stacks behind an HTTP/2 proxy. [2][3]
Sources: [1] NVD CVE-2024-7246, [2] grpc/grpc issue #36245, [3] CVE Details, [4] cve.circl.lu.
🌐 Web query:
javax.annotation 1.3.2 compatibility
💡 Result:
javax.annotation:javax.annotation-api:1.3.2 is the Java EE / JSR-250 “Common Annotations” API in the legacy javax.annotation.* namespace (e.g., @PostConstruct, @Resource). [1]
Compatibility summary
- Java 8: Often already present via the JDK's bundled Java EE modules; adding javax.annotation-api:1.3.2 is typically harmless but may be redundant.
- Java 9–10: Still works, but those annotations started moving out of the JDK into standalone artifacts as part of Java EE module deprecation/removal. [2]
- Java 11+: Commonly required as an explicit dependency because the old Java EE modules (including the common-annotations module) were removed from the JDK, so javax.annotation.* imports will fail without adding it (or an equivalent). [2]
- JPMS (module-path): The jar is typically seen as an automatic module named java.annotation (via Automatic-Module-Name). [3]

When it's not compatible / recommended
- Jakarta EE 9+ / Spring Boot 3+: These moved from javax.* → jakarta.*, so javax.annotation-api:1.3.2 will not satisfy code expecting jakarta.annotation.*. Migrate to jakarta.annotation:jakarta.annotation-api instead. [4] (Mixing javax.annotation-api into a Jakarta-based app is often redundant at best and can be confusing.) [5]

Rule of thumb:
- If your code imports javax.annotation.* → javax.annotation-api:1.3.2.
- If your code imports jakarta.annotation.* → jakarta.annotation-api (don't use 1.3.2). [4]

Sources: Maven Central artifact info [1], Java 11 removal context [2], OpenJDK JPMS/Automatic-Module-Name note [3], migration guidance to Jakarta annotations [4], Spring Boot 3 / javax vs jakarta discussion example [5].
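The rule of thumb above maps to Gradle (Kotlin DSL) declarations like the following sketch; the jakarta version shown is illustrative, not pinned by this discussion:

```kotlin
dependencies {
    // Code importing javax.annotation.* (legacy namespace)
    implementation("javax.annotation:javax.annotation-api:1.3.2")

    // Code importing jakarta.annotation.* (Jakarta EE 9+ / Spring Boot 3+) — pick one, not both
    // implementation("jakarta.annotation:jakarta.annotation-api:2.1.1")
}
```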
🌐 Web query:
grpc-spring-boot-starter 2.15.0.RELEASE latest version 2025 2026
💡 Result:
For net.devh:grpc-*-spring-boot-starter (the artifact line that includes 2.15.0.RELEASE), the latest released version available in Maven Central is 3.1.0.RELEASE (released Apr 14, 2024). [1] [2]
2.15.0.RELEASE is an older line released Sep 28, 2023. [1] [3]
So, for 2025–2026, the latest published version still appears to be 3.1.0.RELEASE (no newer release is listed in Maven Central). [2] [1]
gRPC version update needed (security fix)
The newly added gRPC 1.58.0 dependency is affected by CVE-2024-7246 (an HPACK table poisoning vulnerability). Upgrade to at least 1.58.3, or preferably to the latest 1.79.0 (released Feb 3, 2026).
Status of the other dependencies:
- Protobuf 3.24.0: confirmed compatible with gRPC 1.58.0 ✓
- Firebase Admin 9.5.0: released May 29, 2025, so fairly recent; 9.7.1 (Jan 2026) is the current latest.
- javax.annotation 1.3.2: no known security issues ✓
- gRPC Spring Boot Starter 2.15.0.RELEASE: optionally consider upgrading to the latest 3.1.0.RELEASE
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@buildSrc/src/main/kotlin/DependencyVersions.kt` around lines 7 - 11, The GRPC
dependency constant GRPC currently pins a vulnerable version (1.58.0) — update
the value of the GRPC constant in DependencyVersions.kt to at least "1.58.3"
(recommended "1.79.0") to remediate CVE-2024-7246; while you're editing,
consider bumping GRPC_STARTER to "3.1.0.RELEASE" and FIRE_BASE to "9.7.1" to
keep related libraries current, then run a build and dependency compatibility
tests to verify no regressions.
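A minimal sketch of the bump described in the prompt above, assuming DependencyVersions.kt exposes constants named GRPC, GRPC_STARTER, and FIRE_BASE (the names come from the prompt; the actual file may differ):

```kotlin
object DependencyVersions {
    // 1.58.3 is the minimum fix for CVE-2024-7246; 1.79.0 is the latest as of Feb 2026
    const val GRPC = "1.79.0"
    // Optional bumps to keep related libraries current
    const val GRPC_STARTER = "3.1.0.RELEASE"
    const val FIRE_BASE = "9.7.1"
}
```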
protobuf {
    protoc {
        artifact = "com.google.protobuf:protoc:3.24.0"
    }
    plugins {
        create("grpc") {
            artifact = "io.grpc:protoc-gen-grpc-java:1.58.0"
        }
    }
    generateProtoTasks {
        all().forEach {
            it.plugins {
                create("grpc")
            }
        }
    }
}
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check for compilerOptions and Xjsr305 usage in Kotlin build files
echo "=== Searching for compilerOptions ==="
rg -n "compilerOptions" --type kotlin -g "*.kts"
echo ""
echo "=== Searching for Xjsr305 ==="
rg -n "Xjsr305" --type kotlin -g "*.kts"
echo ""
echo "=== Listing all build.gradle.kts files ==="
find . -name "build.gradle.kts" -type f | head -20
Repository: team-FINDA/Finda-Backend
Length of output: 1192
The finda-auth module needs the Kotlin compilerOptions block.
All other modules in the project (finda-volunteer, finda-notification, finda-batch, finda-gateway) include a compilerOptions block with the -Xjsr305=strict flag; only finda-auth is missing it. Add the same compilerOptions configuration to keep the modules consistent and to get null-safety warnings.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@finda-auth/build.gradle.kts` around lines 45 - 61, Add the Kotlin compiler
options to the finda-auth module's build.gradle.kts to match other modules:
inside the build script (next to the existing protobuf { ... } block) add a
Kotlin compiler configuration that sets the JVM Kotlin compiler options to
include the "-Xjsr305=strict" flag (e.g., via tasks.withType<KotlinCompile> or
kotlin { sourceSets / compileKotlin kotlinOptions.freeCompilerArgs } so that the
module uses the same strict null-safety checks as
finda-volunteer/finda-notification/finda-batch/finda-gateway). Ensure the unique
flag "-Xjsr305=strict" is present and applied to all Kotlin compile tasks in
this file.
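A sketch of the block to add to finda-auth/build.gradle.kts, mirroring the configuration the other modules reportedly use (the exact DSL depends on the Kotlin Gradle plugin version in this project):

```kotlin
kotlin {
    compilerOptions {
        // Same strict null-safety flag as finda-volunteer/finda-notification/finda-batch/finda-gateway
        freeCompilerArgs.add("-Xjsr305=strict")
    }
}
```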
override fun getDeviceToken(
    request: UserRequest,
    responseObserver: StreamObserver<DeviceTokenResponse>
) {
    val token = getDeviceTokenService.getByUserId(UUID.fromString(request.userId))
    responseObserver.onNext(
        DeviceTokenResponse.newBuilder()
            .setDeviceToken(token.deviceToken)
            .setOs(token.os.name)
            .build()
    )
    responseObserver.onCompleted()
}
gRPC exception handling is missing.
UUID.fromString() throws IllegalArgumentException for malformed strings, and getByUserId can throw DeviceTokenNotFoundException. In the current implementation these exceptions propagate to the gRPC client without an appropriate status code.
🛡️ Suggested exception handling
override fun getDeviceToken(
request: UserRequest,
responseObserver: StreamObserver<DeviceTokenResponse>
) {
+ try {
val token = getDeviceTokenService.getByUserId(UUID.fromString(request.userId))
responseObserver.onNext(
DeviceTokenResponse.newBuilder()
.setDeviceToken(token.deviceToken)
.setOs(token.os.name)
.build()
)
responseObserver.onCompleted()
+ } catch (e: IllegalArgumentException) {
+ responseObserver.onError(
+ io.grpc.Status.INVALID_ARGUMENT
+ .withDescription("Invalid UUID format")
+ .asRuntimeException()
+ )
+ } catch (e: DeviceTokenNotFoundException) {
+ responseObserver.onError(
+ io.grpc.Status.NOT_FOUND
+ .withDescription("Device token not found")
+ .asRuntimeException()
+ )
+ }
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
override fun getDeviceToken(
    request: UserRequest,
    responseObserver: StreamObserver<DeviceTokenResponse>
) {
    try {
        val token = getDeviceTokenService.getByUserId(UUID.fromString(request.userId))
        responseObserver.onNext(
            DeviceTokenResponse.newBuilder()
                .setDeviceToken(token.deviceToken)
                .setOs(token.os.name)
                .build()
        )
        responseObserver.onCompleted()
    } catch (e: IllegalArgumentException) {
        responseObserver.onError(
            io.grpc.Status.INVALID_ARGUMENT
                .withDescription("Invalid UUID format")
                .asRuntimeException()
        )
    } catch (e: DeviceTokenNotFoundException) {
        responseObserver.onError(
            io.grpc.Status.NOT_FOUND
                .withDescription("Device token not found")
                .asRuntimeException()
        )
    }
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@finda-auth/src/main/kotlin/finda/findaauth/adapter/in/grpc/AuthGrpcService.kt`
around lines 16 - 28, Wrap the body of getDeviceToken in a try/catch that
handles UUID.fromString and service errors: catch IllegalArgumentException and
call responseObserver.onError with a gRPC Status.INVALID_ARGUMENT (include the
invalid userId message), and catch DeviceTokenNotFoundException (thrown by
getDeviceTokenService.getByUserId) and call responseObserver.onError with
Status.NOT_FOUND (include a descriptive message); leave the successful response
flow (DeviceTokenResponse builder, responseObserver.onNext/onCompleted)
unchanged and ensure any other unexpected exceptions are propagated as
Status.INTERNAL via responseObserver.onError.
override fun getDeviceTokens(
    request: UserListRequest,
    responseObserver: StreamObserver<DeviceTokenListResponse>
) {
    val tokens = getDeviceTokenService.getAllByUserIds(
        request.userIdsList.map { UUID.fromString(it) }
    )
    responseObserver.onNext(
        DeviceTokenListResponse.newBuilder()
            .addAllTokens(
                tokens.map {
                    DeviceTokenResponse.newBuilder()
                        .setDeviceToken(it.deviceToken)
                        .setOs(it.os.name)
                        .build()
                }
            )
            .build()
    )
    responseObserver.onCompleted()
}
getDeviceTokens needs the same exception handling.
UUID.fromString() is called on every element of request.userIdsList, and a single malformed value throws. Add exception handling so the error response is consistent.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@finda-auth/src/main/kotlin/finda/findaauth/adapter/in/grpc/AuthGrpcService.kt`
around lines 30 - 50, getDeviceTokens currently calls UUID.fromString on each
request.userIdsList without guarding against malformed UUIDs; wrap the
conversion and subsequent service call in a try/catch that catches
IllegalArgumentException (or Exception) and respond via responseObserver.onError
using a gRPC Status like Status.INVALID_ARGUMENT.withDescription("invalid user
id").withCause(e).asRuntimeException(), then return; otherwise proceed to build
and send DeviceTokenListResponse as before. Target symbols: getDeviceTokens,
UUID.fromString, getDeviceTokenService.getAllByUserIds,
responseObserver.onError, Status.INVALID_ARGUMENT.
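Following the pattern of the getDeviceToken suggestion above, one possible shape for a guarded getDeviceTokens (status choices and messages are illustrative, not the project's actual code):

```kotlin
override fun getDeviceTokens(
    request: UserListRequest,
    responseObserver: StreamObserver<DeviceTokenListResponse>
) {
    // Convert all ids up front so one malformed UUID yields INVALID_ARGUMENT
    val userIds = try {
        request.userIdsList.map { UUID.fromString(it) }
    } catch (e: IllegalArgumentException) {
        responseObserver.onError(
            io.grpc.Status.INVALID_ARGUMENT
                .withDescription("invalid user id")
                .withCause(e)
                .asRuntimeException()
        )
        return
    }
    val tokens = getDeviceTokenService.getAllByUserIds(userIds)
    responseObserver.onNext(
        DeviceTokenListResponse.newBuilder()
            .addAllTokens(
                tokens.map {
                    DeviceTokenResponse.newBuilder()
                        .setDeviceToken(it.deviceToken)
                        .setOs(it.os.name)
                        .build()
                }
            )
            .build()
    )
    responseObserver.onCompleted()
}
```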
-interface DeviceTokenRepository : CrudRepository<DeviceTokenJpaEntity, UUID>
+interface DeviceTokenRepository : CrudRepository<DeviceTokenJpaEntity, UUID> {
+    fun findByUser_Id(userId: UUID): DeviceTokenJpaEntity?
+    fun findAllByUser_IdIn(userIds: List<UUID>): List<DeviceTokenJpaEntity>
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Search for calls to findAllByUser_IdIn
rg -n "findAllByUser_IdIn" --type kotlin
Repository: team-FINDA/Finda-Backend
Length of output: 441
🏁 Script executed:
#!/bin/bash
# Find the repository file and surrounding context
fd "DeviceTokenRepository.kt" --type f
Repository: team-FINDA/Finda-Backend
Length of output: 182
🏁 Script executed:
#!/bin/bash
# Search for isEmpty/isNotEmpty checks related to userIds
rg -n "userIds" --type kotlin -B2 -A2
Repository: team-FINDA/Finda-Backend
Length of output: 5728
🏁 Script executed:
#!/bin/bash
# Get the full DeviceTokenPersistenceAdapter.kt file
fd "DeviceTokenPersistenceAdapter.kt" --type f -exec cat -n {} \;
Repository: team-FINDA/Finda-Backend
Length of output: 298
🏁 Script executed:
#!/bin/bash
# Get the full GetDeviceTokenService.kt file
fd "GetDeviceTokenService.kt" --type f -exec cat -n {} \;
Repository: team-FINDA/Finda-Backend
Length of output: 298
🏁 Script executed:
#!/bin/bash
# Get the full AuthGrpcService.kt file to check the entry point
fd "AuthGrpcService.kt" --type f -path "*/finda-auth/*" -exec cat -n {} \;
Repository: team-FINDA/Finda-Backend
Length of output: 236
🏁 Script executed:
#!/bin/bash
# Find and read DeviceTokenPersistenceAdapter.kt
find . -name "DeviceTokenPersistenceAdapter.kt" -type f | head -1 | xargs cat -n
Repository: team-FINDA/Finda-Backend
Length of output: 1404
🏁 Script executed:
#!/bin/bash
# Find and read GetDeviceTokenService.kt
find . -name "GetDeviceTokenService.kt" -type f | head -1 | xargs cat -n
Repository: team-FINDA/Finda-Backend
Length of output: 761
🏁 Script executed:
#!/bin/bash
# Find and read AuthGrpcService.kt in finda-auth module
find . -path "*/finda-auth/*" -name "AuthGrpcService.kt" -type f | head -1 | xargs cat -n
Repository: team-FINDA/Finda-Backend
Length of output: 2040
Add a guard for empty lists in the adapter.
When userIds is empty, findAllByUser_IdIn can raise an IN ()-related error depending on the DB dialect. The current call path DeviceTokenPersistenceAdapter.getAllByUserIds() → GetDeviceTokenService.getAllByUserIds() → AuthGrpcService.getDeviceTokens() has no empty-list check anywhere. Add an early return in DeviceTokenPersistenceAdapter.getAllByUserIds():
if (userIds.isEmpty()) {
    return emptyList()
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@finda-auth/src/main/kotlin/finda/findaauth/adapter/out/persistence/devicetoken/repository/DeviceTokenRepository.kt`
at line 11, DeviceTokenPersistenceAdapter.getAllByUserIds currently forwards an
empty userIds list to the repository which can produce an IN () DB error; add an
early guard at the start of DeviceTokenPersistenceAdapter.getAllByUserIds to
return emptyList() when userIds.isEmpty() so that findAllByUser_IdIn is never
called with an empty list (this fixes the call chain:
DeviceTokenPersistenceAdapter.getAllByUserIds ->
GetDeviceTokenService.getAllByUserIds -> AuthGrpcService.getDeviceTokens).
private fun sendToAllUsers(title: String, body: String) {
    val userIds = notificationPreferenceRepository.findAllByEnabledTrue()
        .map { it.userId }

    val deviceTokens = authGrpcClient.getDeviceTokens(userIds)

    fcmClient.sendNotifications(deviceTokens, title, body)

    saveNotificationPort.save(
        Notification(
            id = UUID.randomUUID(),
            title = title,
            body = body,
            type = NotificationType.NOTIFICATION,
            volunteerId = null
        )
    )
}
The external calls (gRPC, FCM) have no error handling.
If authGrpcClient.getDeviceTokens() or fcmClient.sendNotifications() fails, the exception propagates. In a Kafka consumer context that can lead to retries or message loss. Consider an explicit failure-handling strategy (retries, a DLQ, etc.).
It is also worth skipping the gRPC call entirely when userIds is empty.
🛡️ Suggested empty-list check and error handling
private fun sendToAllUsers(title: String, body: String) {
val userIds = notificationPreferenceRepository.findAllByEnabledTrue()
.map { it.userId }
+ if (userIds.isEmpty()) {
+ logger.info("No users with enabled notifications")
+ return
+ }
+
val deviceTokens = authGrpcClient.getDeviceTokens(userIds)
+
+ if (deviceTokens.isEmpty()) {
+ logger.info("No device tokens found for users")
+ return
+ }
fcmClient.sendNotifications(deviceTokens, title, body)🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@finda-notification/src/main/kotlin/finda/findanotification/application/service/kafka/NoticeNotificationService.kt`
around lines 38 - 55, sendToAllUsers currently calls external services without
guarding for empty userIds and without handling failures from
authGrpcClient.getDeviceTokens or fcmClient.sendNotifications; modify
sendToAllUsers to first return early if userIds.isEmpty(), then wrap the gRPC
and FCM calls in a try/catch and implement a retry or error-handling path (e.g.,
retry logic or publish to a DLQ / log and abort) so exceptions don’t escape the
Kafka consumer; only call saveNotificationPort.save after successful delivery
(or record failure state) and reference authGrpcClient.getDeviceTokens,
fcmClient.sendNotifications, and saveNotificationPort.save when making these
changes.
val deviceToken = authGrpcClient.getDeviceToken(preference.userId)
val title = "봉사 상태가 변경되었습니다"
val body = "상태: ${event.status} / 진행: ${event.progress}"

fcmClient.sendNotification(deviceToken, title, body)
deviceToken needs a null/empty check, and the external calls need error handling.
If authGrpcClient.getDeviceToken() returns null or an empty string, the FCM send can fail. There is also no exception handling around the gRPC/FCM calls, so a single failure takes down the whole process.
🛡️ Example error handling
- val deviceToken = authGrpcClient.getDeviceToken(preference.userId)
+ val deviceToken = authGrpcClient.getDeviceToken(preference.userId)
+ if (deviceToken.isNullOrBlank()) {
+ // 로깅 후 early return 또는 예외 처리
+ return
+ }
+
val title = "봉사 상태가 변경되었습니다"
val body = "상태: ${event.status} / 진행: ${event.progress}"
- fcmClient.sendNotification(deviceToken, title, body)
+ runCatching { fcmClient.sendNotification(deviceToken, title, body) }
+ .onFailure { /* 로깅 처리 */ }
+ .getOrNull() ?: return🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@finda-notification/src/main/kotlin/finda/findanotification/application/service/kafka/VolunteerNotificationService.kt`
around lines 30 - 34, The code calls
authGrpcClient.getDeviceToken(preference.userId) and
fcmClient.sendNotification(...) without validating the deviceToken or handling
RPC/FCM errors; modify VolunteerNotificationService (the block using
authGrpcClient.getDeviceToken and fcmClient.sendNotification) to first check if
deviceToken is null or blank and skip/log the notification when so, and wrap the
external calls in a try/catch that logs failures (including userId,
event.id/status/progress) and prevents exceptions from bubbling up; ensure you
reference authGrpcClient.getDeviceToken, fcmClient.sendNotification,
deviceToken, preference.userId, and event in the error logs.
@PostConstruct
fun initialize() {
    val serviceAccount = ClassPathResource("firebase-service-account.json").inputStream
    val options = FirebaseOptions.builder()
        .setCredentials(GoogleCredentials.fromStream(serviceAccount))
        .build()
    FirebaseApp.initializeApp(options)
}
InputStream resource leak and missing duplicate-initialization guard.
There are two issues:
1. Resource leak: the stream opened via ClassPathResource.inputStream is never closed. GoogleCredentials.fromStream() only reads the stream without closing it, so file descriptors leak.
2. Duplicate initialization: FirebaseApp.initializeApp() throws IllegalStateException when called again after the app is already initialized, which can bite in test environments or hot-reload scenarios.
🐛 Apply stream management and an initialization guard
`@PostConstruct`
fun initialize() {
- val serviceAccount = ClassPathResource("firebase-service-account.json").inputStream
- val options = FirebaseOptions.builder()
- .setCredentials(GoogleCredentials.fromStream(serviceAccount))
- .build()
- FirebaseApp.initializeApp(options)
+ if (FirebaseApp.getApps().isEmpty()) {
+ ClassPathResource("firebase-service-account.json").inputStream.use { serviceAccount ->
+ val options = FirebaseOptions.builder()
+ .setCredentials(GoogleCredentials.fromStream(serviceAccount))
+ .build()
+ FirebaseApp.initializeApp(options)
+ }
+ }
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
@PostConstruct
fun initialize() {
    if (FirebaseApp.getApps().isEmpty()) {
        ClassPathResource("firebase-service-account.json").inputStream.use { serviceAccount ->
            val options = FirebaseOptions.builder()
                .setCredentials(GoogleCredentials.fromStream(serviceAccount))
                .build()
            FirebaseApp.initializeApp(options)
        }
    }
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@finda-notification/src/main/kotlin/finda/findanotification/global/config/FirebaseConfig.kt`
around lines 16 - 23, The initialize() method opens
ClassPathResource("firebase-service-account.json").inputStream and passes it to
GoogleCredentials.fromStream(...) without closing the stream and calls
FirebaseApp.initializeApp(options) unconditionally; fix by wrapping the
InputStream in a try-with-resources (use Kotlin's use { ... }) so the stream is
closed after GoogleCredentials.fromStream, and guard
FirebaseApp.initializeApp(options) by checking FirebaseApp.getApps().isEmpty()
(or catching IllegalStateException) to avoid duplicate initialization; update
the initialize() function to use these two changes referencing
ClassPathResource, GoogleCredentials.fromStream, and FirebaseApp.initializeApp.
props[JsonDeserializer.USE_TYPE_INFO_HEADERS] = false
props[JsonDeserializer.TRUSTED_PACKAGES] = "*"
TRUSTED_PACKAGES security risk.
JsonDeserializer.TRUSTED_PACKAGES = "*" allows classes from any package to be deserialized, leaving the consumer open to deserialization attacks via untrusted messages.
Restrict it to the project packages.
🔒 Suggested hardening
props[JsonDeserializer.USE_TYPE_INFO_HEADERS] = false
- props[JsonDeserializer.TRUSTED_PACKAGES] = "*"
+ props[JsonDeserializer.TRUSTED_PACKAGES] = "finda.findanotification"📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
props[JsonDeserializer.USE_TYPE_INFO_HEADERS] = false
props[JsonDeserializer.TRUSTED_PACKAGES] = "finda.findanotification"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@finda-notification/src/main/kotlin/finda/findanotification/global/config/KafkaConfig.kt`
around lines 46 - 47, the props setup in KafkaConfig currently sets
JsonDeserializer.TRUSTED_PACKAGES to "*", which permits deserialization from every
package; restrict it to the project root package by setting
props[JsonDeserializer.TRUSTED_PACKAGES] to the application base package (e.g.,
"finda.findanotification") instead of the wildcard, using a comma-separated list
if several packages must be allowed.
grpc:
  client:
    auth-service:
      address: static://localhost:9090
Pinning the gRPC target address to localhost can break connections in deployed environments.
Line 8 points the service at its own loopback instead of auth-service in container/server environments. Externalize the host/port via an environment variable.
Suggested fix
- address: static://localhost:9090
+ address: ${AUTH_GRPC_ADDRESS:static://localhost:9090}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
address: ${AUTH_GRPC_ADDRESS:static://localhost:9090}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@finda-notification/src/main/resources/application.yml` at line 8, the hardcoded
"address" value "static://localhost:9090" causes a wrong loopback connection in
container/server environments; externalize the "address" entry to an environment
variable (e.g., AUTH_SERVICE_ADDRESS, or AUTH_SERVICE_HOST/PORT) instead of the
literal value, and update the deployment/container configuration to supply the real
auth-service host and port through that variable.
✨ Please write notes for reviewers
👩‍💻 Describe the work you did
📸 Attach a screenshot of the result
🔗 Link the related issue number
Summary by CodeRabbit
Release Notes
New Features
Infrastructure