June 2024, Week 2: June 3rd - June 9th
June 3 -> 2486. Append Characters to String to Make Subsequence
You are given two strings s and t consisting of only lowercase English letters.
Return the minimum number of characters that need to be appended to the end of s so that t becomes a subsequence of s.
A subsequence is a string that can be derived from another string by deleting some or no characters without changing the order of the remaining characters.
Example 1:
Input: s = "coaching", t = "coding"
Output: 4
Explanation: Append the characters "ding" to the end of s so that s = "coachingding". Now, t is a subsequence of s ("coachingding"). It can be shown that appending any three characters to the end of s will never make t a subsequence.
Example 2:
Input: s = "abcde", t = "a"
Output: 0
Explanation: t is already a subsequence of s ("abcde").
Example 3:
Input: s = "z", t = "abcde"
Output: 5
Explanation: Append the characters "abcde" to the end of s so that s = "zabcde". Now, t is a subsequence of s ("zabcde"). It can be shown that appending any four characters to the end of s will never make t a subsequence.
Constraints:
1 <= s.length, t.length <= 10^5
s and t consist only of lowercase English letters.
Approach 1: Two Pointer Technique
Understanding the Core Idea
The central concept of this solution is to leverage a two-pointer technique to efficiently compare and traverse both strings simultaneously. This approach allows us to determine how much of string t is already a subsequence of s and, consequently, how many characters from t need to be appended to s.
Subsequence Matching: The solution exploits the fact that we only need to find the characters of t in s in the same order, not necessarily consecutively.
Efficient Traversal: By using two pointers, we avoid unnecessary comparisons and achieve linear time complexity.
Code Walkthrough
Initialization:
```python
s_index = 0
t_index = 0
s_length = len(s)
t_length = len(t)
```
We initialize two pointers, s_index and t_index, to keep track of our current positions in strings s and t respectively. We also store the lengths of both strings for easier access and improved readability.
Main Comparison Loop:
```python
while s_index < s_length and t_index < t_length:
    if s[s_index] == t[t_index]:
        t_index += 1
    s_index += 1
```
This is the core of the algorithm. We iterate through both strings simultaneously:
If the characters at the current positions match, we move forward in both strings (incrementing both s_index and t_index).
If they don't match, we only move forward in s (incrementing only s_index).
The loop continues until we reach the end of either string.
This approach ensures that we find the longest prefix of t that is a subsequence of s.
Result Calculation:
```python
return t_length - t_index
```
After the loop, t_index represents how many characters of t we successfully matched in s. Therefore, t_length - t_index gives us the number of remaining characters in t that need to be appended to s.
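Putting the walkthrough snippets together, a complete solution might look like the sketch below (the standalone function name appendCharacters is an assumption; LeetCode wraps the logic in a Solution class):

```python
def appendCharacters(s: str, t: str) -> int:
    """Return how many characters must be appended to s so t becomes a subsequence."""
    s_index = 0
    t_index = 0
    s_length = len(s)
    t_length = len(t)

    # Advance through s, matching characters of t in order.
    while s_index < s_length and t_index < t_length:
        if s[s_index] == t[t_index]:
            t_index += 1
        s_index += 1

    # Characters of t that were never matched must be appended.
    return t_length - t_index
```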
Complexity Analysis
Time Complexity: O(n), where n is the length of the longer string between s and t. This is because we iterate through each character of s at most once, and each character of t at most once. The worst-case scenario occurs when s and t have no matching characters, causing us to traverse the entire length of s.
Space Complexity: O(1), or constant space. The algorithm uses a fixed number of variables (s_index, t_index, s_length, t_length) regardless of the input size. It doesn't require any additional data structures that grow with the input size.
Example
Input: s = "coaching", t = "coding"
Step-by-Step Walkthrough:
Initialization:
s_index (pointer for s) is set to 0.
t_index (pointer for t) is set to 0.
s_length is calculated as 8 (length of "coaching").
t_length is calculated as 6 (length of "coding").
Main Loop (Comparing 's' and 't'):
Iteration 1:
Comparison: s[0] ('c') matches t[0] ('c').
Action: Both s_index and t_index are incremented to 1.
Iteration 2:
Comparison: s[1] ('o') matches t[1] ('o').
Action: Both s_index and t_index are incremented to 2.
Iterations 3 - 8:
Comparison: In each iteration, s[s_index] does not match t[t_index] ('d').
Action: Only s_index is incremented.
Note: We're essentially skipping over characters in s ("aching") that are not part of the subsequence we're looking for ("coding").
Loop Termination:
The loop terminates because s_index reaches 8 (end of s). At this point, t_index is still at 2.
Iteration Summary:

| s_index | t_index | s[s_index] | t[t_index] | Match? |
|---------|---------|------------|------------|--------|
| 0       | 0       | c          | c          | Yes    |
| 1       | 1       | o          | o          | Yes    |
| 2       | 2       | a          | d          | No     |
| 3       | 2       | c          | d          | No     |
| 4       | 2       | h          | d          | No     |
| 5       | 2       | i          | d          | No     |
| 6       | 2       | n          | d          | No     |
| 7       | 2       | g          | d          | No     |
Result Calculation:
t_length - t_index = 6 - 2 = 4.
This indicates that 4 characters ("ding") need to be appended to s to make t a subsequence.
June 4 -> 409. Longest Palindrome
Given a string s which consists of lowercase or uppercase letters, return the length of the longest palindrome that can be built with those letters.
Letters are case-sensitive; for example, "Aa" is not considered a palindrome.
Example 1:
Input: s = "abccccdd"
Output: 7
Explanation: One longest palindrome that can be built is "dccaccd", whose length is 7.
Example 2:
Input: s = "a"
Output: 1
Explanation: The longest palindrome that can be built is "a", whose length is 1.
Constraints:
1 <= s.length <= 2000
s consists of lowercase and/or uppercase English letters only.
Approach 1: Frequency Counting with Dictionary
Understanding the Core Idea
The central concept of this solution is to leverage a frequency counting technique using a dictionary to determine the maximum length of a palindrome that can be constructed from the given string. This approach exploits two key properties of palindromes:
Even-count Characters: Characters that appear an even number of times can be fully used in a palindrome, with half on each side.
Odd-count Characters: For characters that appear an odd number of times, we can use all but one occurrence in pairs, and potentially use one leftover character as the center of the palindrome.
Code Walkthrough
Initialization:
```python
char_count = defaultdict(int)
```
We initialize a defaultdict to store the frequency of each character. Using defaultdict(int) allows us to increment counts without explicitly checking if a key exists.
Character Counting:
```python
for char in s:
    char_count[char] += 1
```
This loop iterates through each character in the input string s, incrementing its count in the char_count dictionary.
Palindrome Length Calculation:
```python
result = 0
odd_exists = False
for _, count in char_count.items():
    if count % 2 == 0:
        result += count
    else:
        result += count - 1
        odd_exists = True
```
Here, we iterate through the character counts:
For even counts, we add the full count to the result.
For odd counts, we add one less than the count (making it even) and set odd_exists to True.
Handling Center Character:
```python
if odd_exists:
    result += 1
```
If we encountered any odd-count characters, we can use one as the center of the palindrome, so we add 1 to the result.
Return Result:
```python
return result
```
The function returns the calculated length of the longest possible palindrome.
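The snippets above can be assembled into one complete function (the standalone name longestPalindrome is an assumption based on the LeetCode problem; the body follows the walkthrough):

```python
from collections import defaultdict

def longestPalindrome(s: str) -> int:
    """Length of the longest palindrome buildable from the letters of s."""
    char_count = defaultdict(int)
    for char in s:
        char_count[char] += 1

    result = 0
    odd_exists = False
    for _, count in char_count.items():
        if count % 2 == 0:
            result += count          # even counts are fully usable
        else:
            result += count - 1      # use all but one of an odd count
            odd_exists = True

    if odd_exists:
        result += 1                  # one leftover character can sit in the center
    return result
```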
Complexity Analysis
Time Complexity: O(n), where n is the length of the input string s. This is because we iterate through the string once to count characters, and then iterate through the character counts. Since there's a fixed number of possible characters, the second iteration is bounded by a constant.
Space Complexity: O(1), because the char_count dictionary will contain at most 52 entries (26 lowercase and 26 uppercase letters), which is a constant regardless of the input size.
Example
Input: s = "abccccdd"
Step-by-Step Walkthrough:
Initialization:
char_count is initialized as a defaultdict: char_count = defaultdict(int)
result is initialized to 0: result = 0
odd_exists is initialized to False: odd_exists = False
Character Frequency Counting:
The loop iterates through each character in the string s = "abccccdd". char_count is updated as follows:
{'a': 1} (after processing 'a')
{'a': 1, 'b': 1} (after processing 'b')
{'a': 1, 'b': 1, 'c': 1} (after processing the first 'c')
... and so on until:
{'a': 1, 'b': 1, 'c': 4, 'd': 2} (after processing all characters)
Palindrome Length Calculation:
The loop iterates over the char_count dictionary:
'a': Count is 1 (odd), so 0 is added to result (result remains 0), and odd_exists is set to True.
'b': Count is 1 (odd), so 0 is added to result (result remains 0), and odd_exists remains True.
'c': Count is 4 (even), so 4 is added to result (result becomes 4).
'd': Count is 2 (even), so 2 is added to result (result becomes 6).
Center Character Adjustment:
Since odd_exists is True (we found odd-frequency characters), 1 is added to result. The final result is 7.
Result Calculation/Return:
The function returns the final value of result, which is 7. This indicates that the longest possible palindrome we can construct from the letters in "abccccdd" has a length of 7 (e.g., "dccaccd").
Approach 2: Set-based Character Pairing
Understanding the Core Idea
The central concept of this solution is to use a set data structure to efficiently pair up characters for forming a palindrome. This approach leverages the following key ideas:
Character Pairing: Each time we encounter a character that's already in the set, we can form a pair, contributing 2 to the palindrome length.
Unpaired Characters: Characters that don't find a pair remain in the set, and one of them can potentially be used as the center of the palindrome.
Code Walkthrough
Initialization:
```python
character_set = set()
result = 0
```
We initialize an empty set to keep track of unpaired characters and a variable to store the palindrome length.
Character Pairing Process:
```python
for char in s:
    if char in character_set:
        result += 2
        character_set.remove(char)
    else:
        character_set.add(char)
```
This loop iterates through each character in the input string s:
If the character is already in the set, we've found a pair. We increment result by 2 and remove the character from the set.
If the character is not in the set, we add it to the set as a potential future pair.
Handling Center Character:
```python
if character_set:
    result += 1
```
After processing all characters, if the set is not empty, it means we have unpaired characters. We can use one of these as the center of the palindrome, so we add 1 to the result.
Return Result:
```python
return result
```
The function returns the calculated length of the longest possible palindrome.
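Assembled into a single function (the standalone name longestPalindrome is again an assumption), the pairing approach reads:

```python
def longestPalindrome(s: str) -> int:
    """Longest palindrome length via set-based character pairing."""
    character_set = set()
    result = 0

    for char in s:
        if char in character_set:
            result += 2                  # found a pair; both sides of the palindrome grow
            character_set.remove(char)
        else:
            character_set.add(char)      # hold as a potential half of a future pair

    if character_set:
        result += 1                      # an unpaired character can serve as the center
    return result
```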
Complexity Analysis
Time Complexity: O(n), where n is the length of the input string s. We iterate through the string once, performing O(1) operations (set lookup, addition, and removal) for each character.
Space Complexity: O(1), because the character_set will contain at most 52 unique characters (26 lowercase and 26 uppercase letters), which is a constant regardless of the input size.
Example
Input: s = "abccccdd"
Step-by-Step Walkthrough:
Initialization:
An empty set
character_set
is created.result
is set to 0:result = 0
Character Processing Loop:
Iteration 1: The character 'a' is not in character_set, so it's added to the set.
Iteration 2: The character 'b' is not in character_set, so it's added to the set.
Iterations 3 & 4: The character 'c' is first added to the set. In the next iteration, 'c' is found in the set, so it's removed, and result is incremented by 2.
Iterations 5 & 6: The character 'c' is again added and then removed from the set, incrementing result by 2 again.
Iterations 7 & 8: The character 'd' follows the same add-remove pattern as 'c', increasing result by another 2.
Iteration Summary (Palindrome Length Calculation):

| Iteration | Character | Action                  | Character Set   | Result |
|-----------|-----------|-------------------------|-----------------|--------|
| 1         | a         | Add 'a' to set          | {'a'}           | 0      |
| 2         | b         | Add 'b' to set          | {'a', 'b'}      | 0      |
| 3         | c         | Add 'c' to set          | {'a', 'b', 'c'} | 0      |
| 4         | c         | Remove 'c', result += 2 | {'a', 'b'}      | 2      |
| 5         | c         | Add 'c' to set          | {'a', 'b', 'c'} | 2      |
| 6         | c         | Remove 'c', result += 2 | {'a', 'b'}      | 4      |
| 7         | d         | Add 'd' to set          | {'a', 'b', 'd'} | 4      |
| 8         | d         | Remove 'd', result += 2 | {'a', 'b'}      | 6      |
Final character_set:
After processing all characters, character_set contains {'a', 'b'}.
Center Character Check:
Since character_set is not empty, we can use one of the remaining characters ('a' or 'b') as the center of the palindrome. result is incremented by 1: result = 7
Result Calculation/Return:
The function returns the final result value of 7. This means the longest possible palindrome we can construct from "abccccdd" has a length of 7. One possible palindrome is "dccaccd".
June 5 -> 1002. Find Common Characters
Given a string array words, return an array of all characters that show up in all strings within words (including duplicates). You may return the answer in any order.
Example 1:
Input: words = ["bella","label","roller"]
Output: ["e","l","l"]
Example 2:
Input: words = ["cool","lock","cook"]
Output: ["c","o"]
Constraints:
1 <= words.length <= 100
1 <= words[i].length <= 100
words[i] consists of lowercase English letters.
Approach 1: Iterative Character Counting
Understanding the Core Idea
The central concept of this solution is to leverage the set of unique characters from the first word as a reference point, then count the occurrences of these characters in all words. By finding the minimum count for each character across all words, we determine how many times that character appears in every word.
Reference Word: The first word in the list serves as a reference for potential common characters.
Character Counting: For each unique character in the reference word, we count its occurrences in all words.
Minimum Frequency: The minimum count of a character across all words represents how many times it appears in every word.
Code Walkthrough
Initialization:
```python
common_chars = []
```
We initialize an empty list to store the common characters. This will be our final result.
Iterating Through Unique Characters:
```python
for char in set(words[0]):
```
We iterate through the set of unique characters in the first word. This is efficient because we only need to check each unique character once, and it is sufficient: any character common to all words must appear in the first word.
Counting Character Occurrences:
```python
char_counts = [word.count(char) for word in words]
```
For each character, we create a list comprehension that counts its occurrences in every word. This gives us a list of counts for the current character across all words.
Adding Common Characters:
```python
common_chars.extend([char] * min(char_counts))
```
We add the current character to our result list. The number of times we add it is determined by the minimum count across all words. This ensures we only include the character as many times as it appears in every word.
Result Return:
```python
return common_chars
```
After processing all characters, we return the list of common characters.
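Assembled, the walkthrough above becomes this short function (the standalone name commonChars and the typing import are assumptions):

```python
from typing import List

def commonChars(words: List[str]) -> List[str]:
    """Characters, with multiplicity, that appear in every word."""
    common_chars = []
    for char in set(words[0]):
        # Count this character in every word; the minimum is the shared count.
        char_counts = [word.count(char) for word in words]
        common_chars.extend([char] * min(char_counts))
    return common_chars
```

Since the problem accepts any order, tests should compare sorted outputs.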
Complexity Analysis
Time Complexity: O(k * n * m), where n is the number of words, m is the average length of the words, and k is the number of unique characters in the first word. For each of the k unique characters, we count its occurrences in all words, and the count() method takes O(m) time on average for each word. Since k is bounded by the alphabet size (26 lowercase English letters), the time complexity is effectively O(n * m).
Space Complexity: O(k), where k is the number of unique characters in the first word. In the worst case, this is equal to the length of the first word. However, since we're dealing with lowercase English letters only, k is bounded by 26, making the space complexity effectively O(1) (constant).
Example
Input: words = ["bella","label","roller"]
Step-by-Step Walkthrough:
Initialization:
We start by initializing an empty list common_chars = [] to store the common characters.
Main Loop (Iterating through unique characters of the first word):
The set of unique characters in 'bella' is {'e', 'b', 'a', 'l'} (sets are unordered).
Iteration 1: Character 'e'
We examine the character 'e' from the set of unique characters in 'bella'.
We count occurrences of 'e' in each word:
'bella': 1
'label': 1
'roller': 1
We calculate the minimum count: min([1, 1, 1]) = 1
We add 'e' to common_chars once: common_chars = ['e']
Iteration 2: Character 'b'
We examine the character 'b'.
We count occurrences of 'b' in each word:
'bella': 1
'label': 1
'roller': 0
We calculate the minimum count: min([1, 1, 0]) = 0
We don't add 'b' to common_chars as the minimum count is 0.
Iteration 3: Character 'a'
We examine the character 'a'.
We count occurrences of 'a' in each word:
'bella': 1
'label': 1
'roller': 0
We calculate the minimum count: min([1, 1, 0]) = 0
We don't add 'a' to common_chars as the minimum count is 0.
Iteration 4: Character 'l'
We examine the character 'l'.
We count occurrences of 'l' in each word:
'bella': 2
'label': 2
'roller': 2
We calculate the minimum count: min([2, 2, 2]) = 2
We add 'l' to common_chars twice: common_chars = ['e', 'l', 'l']
Loop Termination:
The loop terminates after we've examined all unique characters in the first word ('bella').
At this point, common_chars contains ['e', 'l', 'l'].
Visual Aid:
Here's a table summarizing the iterations:
| Iteration | Character | Counts in Words | Min Count | Added to Result |
|-----------|-----------|-----------------|-----------|-----------------|
| 1         | 'e'       | [1, 1, 1]       | 1         | ['e']           |
| 2         | 'b'       | [1, 1, 0]       | 0         | []              |
| 3         | 'a'       | [1, 1, 0]       | 0         | []              |
| 4         | 'l'       | [2, 2, 2]       | 2         | ['l', 'l']      |
Result Calculation/Final Steps:
After examining all unique characters in the first word, we have our final result in common_chars.
The function returns this list: ['e', 'l', 'l']
Approach 2: Counter Intersection
Understanding the Core Idea
The central concept of this solution is to use Python's Counter class, which provides an efficient way to count occurrences of elements and perform set-like operations on these counts. By intersecting Counter objects for each word, we effectively find the common characters and their frequencies across all words.
Counter Objects: Each word is converted into a Counter object, which maps characters to their frequencies.
Counter Intersection: The & operation between Counter objects keeps the minimum count for each character present in both Counters.
Iterative Reduction: By iteratively intersecting Counters for all words, we end up with counts of only the common characters.
Key Insight: The Counter class provides a highly efficient way to perform the intersection operation, which is the core of finding common elements with their frequencies.
Code Walkthrough
Initial Counter Creation:
```python
char_counts = Counter(words[0])
```
We create a Counter object for the first word. This gives us the character frequencies for our initial reference.
Iterative Counter Intersection:
```python
for word in words[1:]:
    char_counts &= Counter(word)
```
We iterate through the remaining words, creating a Counter for each and intersecting it with our running char_counts. The &= operation updates char_counts to keep only the characters present in both, with the minimum of their counts.
Result Generation and Return:
```python
return list(char_counts.elements())
```
The elements() method of Counter returns an iterator over elements, repeating each as many times as its count. We convert this to a list to get our final result of common characters, including duplicates.
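The three snippets above combine into a compact function (the standalone name commonChars and the typing import are assumptions):

```python
from collections import Counter
from typing import List

def commonChars(words: List[str]) -> List[str]:
    """Common characters across all words via Counter intersection."""
    char_counts = Counter(words[0])
    for word in words[1:]:
        # '&=' keeps, for each character, the minimum count present in both Counters.
        char_counts &= Counter(word)
    # elements() repeats each character according to its surviving count.
    return list(char_counts.elements())
```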
Complexity Analysis
Time Complexity: O(n * m), where n is the number of words and m is the average length of the words. Creating a Counter for each word takes O(m) time, and we do this for each of the n words. The intersection operation (&=) between Counters is proportional to the number of distinct keys, which in our case is bounded by the alphabet size (26 lowercase English letters), making the overall time complexity O(n * m).
Space Complexity: O(1) (constant), because we're dealing with a fixed alphabet of lowercase English letters. The Counter objects will never have more than 26 entries, regardless of the input size. Therefore, the space used is bounded by a constant.
Example
Input: words = ["bella","label","roller"]
Step-by-Step Walkthrough:
Initialization:
We start by creating a Counter object for the first word 'bella': char_counts = Counter('bella') = Counter({'l': 2, 'b': 1, 'e': 1, 'a': 1})
This Counter represents the initial common character counts.
Main Loop (Intersecting Character Counts):
Iteration 1: Comparing with the word 'label'
We create a Counter for 'label': Counter({'l': 2, 'a': 1, 'b': 1, 'e': 1})
We perform an intersection operation (&=) between the current char_counts and the new Counter.
The result is: Counter({'l': 2, 'b': 1, 'e': 1, 'a': 1})
This operation keeps the minimum count for each character present in both Counters.
Iteration 2: Comparing with the word 'roller'
We create a Counter for 'roller': Counter({'r': 2, 'l': 2, 'o': 1, 'e': 1})
We perform another intersection operation with the current char_counts.
The result is: Counter({'l': 2, 'e': 1})
Characters 'b' and 'a' are removed as they don't appear in 'roller'.
The count for 'l' remains 2 as it appears at least twice in all words so far.
Loop Termination:
The loop terminates after we've processed all words in the input list.
At this point, char_counts contains Counter({'l': 2, 'e': 1}).
Visual Aid:
Here's a table summarizing the iterations:
| Iteration | Word   | Common Chars Counter                       |
|-----------|--------|--------------------------------------------|
| Initial   | bella  | Counter({'l': 2, 'b': 1, 'e': 1, 'a': 1})  |
| 1         | label  | Counter({'l': 2, 'b': 1, 'e': 1, 'a': 1})  |
| 2         | roller | Counter({'l': 2, 'e': 1})                  |
Result Calculation/Final Steps:
After processing all words, we have our final char_counts Counter.
We use the elements() method of the Counter to get an iterator of the elements: char_counts.elements() -> ['e', 'l', 'l']. This iterator repeats each element as many times as its count in the Counter.
We convert this iterator to a list: list(char_counts.elements())
The function returns this list: ['e', 'l', 'l']
June 6 -> 846. Hand of Straights
Alice has some number of cards, and she wants to rearrange the cards into groups so that each group is of size groupSize and consists of groupSize consecutive cards.
Given an integer array hand where hand[i] is the value written on the ith card and an integer groupSize, return true if she can rearrange the cards, or false otherwise.
Example 1:
Input: hand = [1,2,3,6,2,3,4,7,8], groupSize = 3
Output: true
Explanation: Alice's hand can be rearranged as [1,2,3],[2,3,4],[6,7,8]
Example 2:
Input: hand = [1,2,3,4,5], groupSize = 4
Output: false
Explanation: Alice's hand cannot be rearranged into groups of 4.
Constraints:
1 <= hand.length <= 10^4
0 <= hand[i] <= 10^9
1 <= groupSize <= hand.length
Approach 1: Greedy Algorithm with Min Heap
Understanding the Core Idea
The central concept of this solution is to leverage a greedy algorithm combined with a min heap to efficiently group consecutive cards. This approach exploits the ordered nature of the problem and the constraint of forming groups of consecutive cards.
Greedy Strategy: The solution starts with the smallest card and attempts to form a group of consecutive cards from there. This greedy approach works because if a valid grouping exists, it must include the smallest card in some group.
Min Heap Usage: A min heap is used to always have quick access to the smallest card value remaining. This allows the algorithm to efficiently process cards in ascending order.
Counter for Frequency: A Counter is used to keep track of the frequency of each card value, allowing for efficient updates and checks.
Code Walkthrough
Initial Validation:
```python
if len(hand) % group_size != 0:
    return False
```
This check ensures that the total number of cards is divisible by the group size. If not, it's impossible to form valid groups, so we return False immediately.
Initialization of Data Structures:
```python
card_count = Counter(hand)
card_min_heap = list(card_count.keys())
heapq.heapify(card_min_heap)
```
Here, we create a Counter to store the frequency of each card value. We then create a min heap (card_min_heap) from the unique card values. This heap will allow us to efficiently access the smallest card value at any point.
Main Loop - Processing Cards:
```python
while card_min_heap:
    start_card = card_min_heap[0]
```
We continue processing cards as long as there are cards left in the min heap. The smallest card value becomes the start of a potential group.
Group Formation:
```python
    for offset in range(group_size):
        current_card = start_card + offset
        if card_count[current_card] == 0:
            return False
```
We attempt to form a group of group_size consecutive cards starting from start_card. If at any point we find a card that doesn't exist (frequency is 0), we return False as we can't form a valid group.
Updating Card Counts:
```python
        card_count[current_card] -= 1
        if card_count[current_card] == 0:
            heapq.heappop(card_min_heap)
```
After using a card in a group, we decrease its count. If the count becomes 0, we remove that card value from the min heap as it's no longer available for future groups.
Result:
```python
return True
```
If we've processed all cards without returning False, it means we've successfully grouped all cards, so we return True.
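A minimal sketch assembling the walkthrough snippets into one function (the name isNStraightHand follows the LeetCode problem; the standalone-function form is an assumption):

```python
import heapq
from collections import Counter
from typing import List

def isNStraightHand(hand: List[int], group_size: int) -> bool:
    """Check whether hand can be split into groups of consecutive cards."""
    if len(hand) % group_size != 0:
        return False

    card_count = Counter(hand)
    card_min_heap = list(card_count.keys())
    heapq.heapify(card_min_heap)

    while card_min_heap:
        start_card = card_min_heap[0]           # smallest remaining card value
        for offset in range(group_size):
            current_card = start_card + offset
            if card_count[current_card] == 0:
                return False                    # a needed consecutive card is missing
            card_count[current_card] -= 1
            if card_count[current_card] == 0:
                heapq.heappop(card_min_heap)    # exhausted values leave the heap
    return True
```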
Complexity Analysis
Time Complexity: O(n log k), where n is the number of cards and k is the number of unique card values. This is because:
Creating the Counter and initializing the heap takes O(n + k) time.
The main loop runs at most n times (once for each card).
Each iteration of the inner loop is O(1), but may involve a heap operation (O(log k)) if a card count reaches zero.
Thus, the overall time complexity is dominated by the heap operations, resulting in O(n log k).
Space Complexity: O(k), where k is the number of unique card values. This is because:
The Counter and the min heap both store at most k elements (one for each unique card value).
No other significant extra space is used.
While k could potentially be as large as n, in practice it's often smaller, especially given the constraint on the range of card values.
Example
Input: hand = [1,2,3,6,2,3,4,7,8], groupSize = 3
Step-by-Step Walkthrough:
Initialization:
First, we check if the length of hand (9) is divisible by group_size (3). 9 % 3 = 0, so this condition is satisfied.
We create a Counter object card_count to store the frequency of each card: card_count = {1: 1, 2: 2, 3: 2, 6: 1, 4: 1, 7: 1, 8: 1}
We initialize card_min_heap with the unique card values: card_min_heap = [1, 2, 3, 6, 4, 7, 8]
After heapifying, card_min_heap remains [1, 2, 3, 6, 4, 7, 8] (already in min-heap order)
Main Loop:
Iteration 1:
start_card = 1 (the smallest card in the heap)
We attempt to form a group of size 3 starting from 1
Process cards 1, 2, and 3:
Decrement card_count for each: {1: 0, 2: 1, 3: 1, 6: 1, 4: 1, 7: 1, 8: 1}
Remove 1 from card_min_heap as its count becomes 0
After this iteration: card_min_heap = [2, 4, 3, 6, 8, 7]
Iteration 2:
start_card = 2 (now the smallest card in the heap)
We form another group of size 3 starting from 2
Process cards 2, 3, and 4:
Decrement card_count for each: {1: 0, 2: 0, 3: 0, 6: 1, 4: 0, 7: 1, 8: 1}
Remove 2, 3, and 4 from card_min_heap as their counts become 0
After this iteration: card_min_heap = [6, 8, 7]
Iteration 3:
start_card = 6 (now the smallest card in the heap)
We form the final group of size 3 starting from 6
Process cards 6, 7, and 8:
Decrement card_count for each: {1: 0, 2: 0, 3: 0, 6: 0, 4: 0, 7: 0, 8: 0}
Remove 6, 7, and 8 from card_min_heap as their counts become 0
After this iteration: card_min_heap = [] (empty)
Loop Termination:
The loop terminates as card_min_heap is now empty.
All cards have been successfully grouped into consecutive triples.
Visual Aid:
Iteration Summary Table:
| Iteration | Group Size | Card Count                                              | Min Heap           |
|-----------|------------|---------------------------------------------------------|--------------------|
| 1         | 3          | Counter({2: 1, 3: 1, 6: 1, 4: 1, 7: 1, 8: 1, 1: 0})     | [2, 4, 3, 6, 8, 7] |
| 2         | 3          | Counter({6: 1, 7: 1, 8: 1, 1: 0, 2: 0, 3: 0, 4: 0})     | [6, 8, 7]          |
| 3         | 3          | Counter({1: 0, 2: 0, 3: 0, 6: 0, 4: 0, 7: 0, 8: 0})     | []                 |
Result Calculation:
The algorithm successfully grouped all cards into consecutive triples: (1, 2, 3), (2, 3, 4), (6, 7, 8).
No cards remain ungrouped, and all groups satisfy the consecutive condition.
Therefore, the function returns True, indicating that it is possible to arrange the cards into groups of size 3 with consecutive values.
Approach 2: Optimized Unsorted Greedy Algorithm
Understanding the Core Idea
The central concept of this solution is to leverage a greedy algorithm that processes cards in their original order, without the need for sorting. This approach exploits the problem's nature by forming groups on-the-fly as it iterates through the cards.
Unsorted Processing: Unlike approaches that sort the cards first, this method processes cards in their given order, reducing the overhead of sorting.
Dynamic Group Formation: The algorithm dynamically finds and forms groups as it goes, starting from the lowest possible card for each group.
Frequency Tracking: A Counter is used to keep track of the frequency of each card value, allowing for efficient updates and checks without modifying the original list.
Code Walkthrough
Initial Validation:
```python
if len(hand) % group_size != 0:
    return False
```
This check ensures that the total number of cards is divisible by the group size. If not, it's impossible to form valid groups, so we return False immediately.
Initialization of Data Structure:
```python
card_count = Counter(hand)
```
Here, we create a Counter to store the frequency of each card value. This allows for efficient lookup and modification of card counts.
Helper Function: Finding Group Start:
```python
def find_group_start(card):
    group_start = card
    while card_count[group_start - 1]:
        group_start -= 1
    return group_start
```
This function finds the start of a potential group for a given card. It moves backwards from the current card until it finds a gap in the sequence. This is crucial for ensuring we start groups at the earliest possible card, which is necessary for a valid grouping.
Helper Function: Removing a Group:
```python
def remove_group(group_start):
    for card in range(group_start, group_start + group_size):
        if not card_count[card]:
            return False
        card_count[card] -= 1
    return True
```
This function attempts to remove a group of group_size consecutive cards starting from group_start. It checks whether each card in the potential group exists and decrements its count. If at any point a required card is missing, it returns False, indicating an invalid group.
Main Loop - Processing Cards:
```python
for card in hand:
    group_start = find_group_start(card)
    while group_start <= card:
        while card_count[group_start]:
            if not remove_group(group_start):
                return False
        group_start += 1
```
This is the core of the algorithm. For each card, it finds the start of its potential group and then attempts to remove all possible groups starting from that point up to the current card. This ensures that we process all necessary groups for each card while avoiding redundant checks.
Result:
```python
return True
```
If we've processed all cards without returning False, it means we've successfully grouped all cards, so we return True.
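A minimal sketch assembling the snippets above, with the helpers nested as closures over card_count (the name isNStraightHand follows the LeetCode problem; the standalone form is an assumption):

```python
from collections import Counter
from typing import List

def isNStraightHand(hand: List[int], group_size: int) -> bool:
    """Group cards into runs of consecutive values, without sorting first."""
    if len(hand) % group_size != 0:
        return False

    card_count = Counter(hand)

    def find_group_start(card: int) -> int:
        # Walk backwards while the previous value is still available.
        group_start = card
        while card_count[group_start - 1]:
            group_start -= 1
        return group_start

    def remove_group(group_start: int) -> bool:
        # Consume one card of each value in [group_start, group_start + group_size).
        for card in range(group_start, group_start + group_size):
            if not card_count[card]:
                return False
            card_count[card] -= 1
        return True

    for card in hand:
        group_start = find_group_start(card)
        while group_start <= card:
            while card_count[group_start]:
                if not remove_group(group_start):
                    return False
            group_start += 1
    return True
```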
Complexity Analysis
Time Complexity: O(n), where n is the number of cards. This is because:
Each card is processed at most twice: once when finding the group start, and once when removing it as part of a group.
Although there are nested loops, the total number of iterations across all loops is still proportional to n.
The use of a Counter allows for O(1) lookups and updates.
Space Complexity: O(k), where k is the number of unique card values. This is because:
The Counter stores at most k elements (one for each unique card value).
No other significant extra space is used.
While k could potentially be as large as n, in practice it's often smaller, especially given the constraint on the range of card values.
Example
Input: hand = [1,2,3,6,2,3,4,7,8], groupSize = 3
Step-by-Step Walkthrough:
Initialization:
First, we check if the length of hand (9) is divisible by group_size (3). 9 % 3 = 0, so this condition is satisfied.
We create a Counter object card_count to store the frequency of each card: card_count = {1: 1, 2: 2, 3: 2, 6: 1, 4: 1, 7: 1, 8: 1}
Main Loop: The function iterates through each card in hand. For each card, it finds the start of its group and attempts to remove complete groups.
Processing card 1:
find_group_start(1) returns 1 (no lower consecutive cards)
group_start = 1
Enter outer while loop (group_start (1) <= card (1))
Enter inner while loop (card_count[1] = 1)
Attempt to remove group starting at 1:
Decrement card_count[1] from 1 to 0
Decrement card_count[2] from 2 to 1
Decrement card_count[3] from 2 to 1
Group removed successfully
Move group_start from 1 to 2
Processing card 2:
find_group_start(2) returns 2 (no lower consecutive cards)
group_start = 2
Enter outer while loop (group_start (2) <= card (2))
Enter inner while loop (card_count[2] = 1)
Attempt to remove group starting at 2:
Decrement card_count[2] from 1 to 0
Decrement card_count[3] from 1 to 0
Decrement card_count[4] from 1 to 0
Group removed successfully
Move group_start from 2 to 3
Processing card 3:
find_group_start(3) returns 3 (no lower consecutive cards)
group_start = 3
Enter outer while loop (group_start (3) <= card (3))
Move group_start from 3 to 4 (no group to remove as card_count[3] = 0)
Processing card 6:
find_group_start(6)
returns 6 (no lower consecutive cards)group_start = 6
Enter while loop (
group_start (6) <= card (6)
)Enter inner while loop (
card_count[6] = 1
)Attempt to remove group starting at 6
Decrement
card_count[6]
from 1 to 0Decrement
card_count[7]
from 1 to 0Decrement
card_count[8]
from 1 to 0
Group removed successfully
Move
group_start
from 6 to 7
Processing remaining cards (2, 3, 4, 7, 8):
For each of these cards,
find_group_start
is called, but no groups are removed as their counts incard_count
are already 0.
Visual Aid:
Iteration Summary Table:

| Card | Group Start | Card Count After Each Removal |
| --- | --- | --- |
| 1 | 1 | Counter({2: 1, 3: 1, 6: 1, 4: 1, 7: 1, 8: 1, 1: 0}) |
| 2 | 2 | Counter({3: 1, 6: 1, 4: 1, 7: 1, 8: 1, 1: 0, 2: 0}) |
| 3 | 3 | Counter({6: 1, 4: 1, 7: 1, 8: 1, 1: 0, 2: 0, 3: 0}) |
Result Calculation:
The algorithm successfully grouped all cards into consecutive triples: (1, 2, 3), (2, 3, 4), (6, 7, 8)
No cards remain ungrouped, and all groups satisfy the consecutive condition
Therefore, the function returns True, indicating that it is possible to arrange the cards into groups of size 3 with consecutive values.
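The procedure walked through above can be consolidated into a runnable sketch. The function and helper names below are assumptions for illustration, not necessarily the original code:

```python
from collections import Counter

def is_n_straight_hand(hand, group_size):
    # The hand must divide evenly into groups.
    if len(hand) % group_size != 0:
        return False
    card_count = Counter(hand)

    def find_group_start(card):
        # Walk down to the lowest consecutive card still available.
        while card_count[card - 1] > 0:
            card -= 1
        return card

    for card in hand:
        group_start = find_group_start(card)
        while group_start <= card:
            # Remove complete groups starting here while copies remain.
            while card_count[group_start] > 0:
                for value in range(group_start, group_start + group_size):
                    if card_count[value] == 0:
                        return False  # A needed consecutive card is missing.
                    card_count[value] -= 1
            group_start += 1
    return True
```

On the example above, is_n_straight_hand([1, 2, 3, 6, 2, 3, 4, 7, 8], 3) forms the groups (1, 2, 3), (2, 3, 4), and (6, 7, 8) and returns True.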
June 7 -> 648. Replace Words
In English, we have a concept called root, which can be followed by some other word to form another longer word; let's call this word derivative. For example, when the root "help" is followed by the word "ful", we can form a derivative "helpful".
Given a dictionary consisting of many roots and a sentence consisting of words separated by spaces, replace all the derivatives in the sentence with the root forming it. If a derivative can be replaced by more than one root, replace it with the root that has the shortest length.
Return the sentence after the replacement.
Example 1:
Input: dictionary = ["cat","bat","rat"], sentence = "the cattle was rattled by the battery"
Output: "the cat was rat by the bat"
Example 2:
Input: dictionary = ["a","b","c"], sentence = "aadsfasf absbs bbab cadsfafs"
Output: "a a b c"
Constraints:
1 <= dictionary.length <= 1000
1 <= dictionary[i].length <= 100
dictionary[i] consists of only lower-case letters.
1 <= sentence.length <= 10^6
sentence consists of only lower-case letters and spaces.
The number of words in sentence is in the range [1, 1000].
The length of each word in sentence is in the range [1, 1000].
Every two consecutive words in sentence will be separated by exactly one space.
sentence does not have leading or trailing spaces.
Approach 1: String Matching with List Comprehension
Understanding the Core Idea
The central concept of this solution is to leverage string matching and list comprehension to efficiently replace derivative words with their shortest roots. This approach iterates through each word in the sentence, checking if it starts with any root from the dictionary, and replacing it with the shortest matching root if found.
Dictionary as Root Repository: The solution uses the given dictionary as a repository of root words, against which each word in the sentence is checked.
Prefix Matching: The core technique involves checking if each word starts with any of the roots, effectively treating roots as prefixes.
Shortest Root Priority: When multiple roots match a word, the shortest one is chosen, adhering to the problem's requirement.
List Comprehension for Efficiency: The solution uses Python's list comprehension to concisely apply the root-finding logic to each word in the sentence.
Code Walkthrough
Helper Function Definition:

```python
def find_shortest_root(word: str) -> str:
    matching_roots = [root for root in dictionary if word.startswith(root)]
    return min(matching_roots, key=len, default=word)
```

This inner function finds the shortest root for a given word. It uses a list comprehension to collect all matching roots and then returns the shortest one using min() with a key function. If no roots match, it returns the original word.

Sentence Processing:

```python
return " ".join([find_shortest_root(word) for word in sentence.split()])
```

This line does several things:
- Splits the sentence into words
- Applies find_shortest_root to each word using a list comprehension
- Joins the resulting words back into a sentence

This concise approach efficiently processes each word and reconstructs the sentence in a single line of code.
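Assembled into one self-contained function (the wrapper name replace_words is an assumption for illustration), the approach looks like:

```python
from typing import List

def replace_words(dictionary: List[str], sentence: str) -> str:
    def find_shortest_root(word: str) -> str:
        # Collect every root that is a prefix of this word.
        matching_roots = [root for root in dictionary if word.startswith(root)]
        # The shortest match wins; fall back to the word itself.
        return min(matching_roots, key=len, default=word)

    return " ".join(find_shortest_root(word) for word in sentence.split())
```

For instance, replace_words(["cat", "bat", "rat"], "the cattle was rattled by the battery") returns "the cat was rat by the bat".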
Complexity Analysis
Time Complexity: O(n * d * m), where n is the number of words in the sentence, m is the average length of each word, and d is the number of roots in the dictionary. This is because:
For each word in the sentence (n), we potentially check against every root in the dictionary (d), using the startswith method, which in the worst case examines every character of the word (m).
The sentence splitting contributes O(n * m) to the time complexity, but it doesn't affect the overall complexity.
Space Complexity: O(n + d), where n is the number of words in the sentence and d is the number of roots in the dictionary. This accounts for:
The space used by the split words list (O(n))
The list comprehension result (O(n))
The potential worst-case size of matching_roots (O(d)) in the helper function
Example
Input: dictionary = ["pre", "post", "anti", "pro"], sentence = "The prewar and postwar eras had both prowar and antiwar sentiments"
Step-by-Step Walkthrough:
Initialization:
The dictionary list is initialized with four roots: ['pre', 'post', 'anti', 'pro'].
The sentence string is initialized with "The prewar and postwar eras had both prowar and antiwar sentiments".
The sentence is split into words: ['The', 'prewar', 'and', 'postwar', 'eras', 'had', 'both', 'prowar', 'and', 'antiwar', 'sentiments'].
Main Loop (List Comprehension):
The main processing occurs within a list comprehension that iterates over each word in the sentence. For each word, the find_shortest_root function is called:

Iteration 1: 'The'
No root in the dictionary matches, so 'The' remains unchanged.
Iteration 2: 'prewar'
Matching roots:
['pre']
Shortest root: 'pre'
'prewar' is replaced with 'pre'
Iteration 3: 'and'
No matching roots, 'and' remains unchanged.
Iteration 4: 'postwar'
Matching roots:
['post']
Shortest root: 'post'
'postwar' is replaced with 'post'
Iteration 5: 'eras'
No matching roots, 'eras' remains unchanged.
Iteration 6: 'had'
No matching roots, 'had' remains unchanged.
Iteration 7: 'both'
No matching roots, 'both' remains unchanged.
Iteration 8: 'prowar'
Matching roots:
['pro']
Shortest root: 'pro'
'prowar' is replaced with 'pro'
Iteration 9: 'and'
No matching roots, 'and' remains unchanged.
Iteration 10: 'antiwar'
Matching roots:
['anti']
Shortest root: 'anti'
'antiwar' is replaced with 'anti'
Iteration 11: 'sentiments'
No matching roots, 'sentiments' remains unchanged.
Loop Termination:
The list comprehension completes after processing all words in the sentence.
A new list is created with the processed words:
['The', 'pre', 'and', 'post', 'eras', 'had', 'both', 'pro', 'and', 'anti', 'sentiments']
Visual Aid:
Iteration Summary Table:
| Word # | Original Word | Replaced Word | Action |
| --- | --- | --- | --- |
| 1 | The | The | Unchanged |
| 2 | prewar | pre | Replaced |
| 3 | and | and | Unchanged |
| 4 | postwar | post | Replaced |
| 5 | eras | eras | Unchanged |
| 6 | had | had | Unchanged |
| 7 | both | both | Unchanged |
| 8 | prowar | pro | Replaced |
| 9 | and | and | Unchanged |
| 10 | antiwar | anti | Replaced |
| 11 | sentiments | sentiments | Unchanged |
Result Calculation/Final Steps:
The list of processed words is joined back into a string using " ".join().
The final replaced sentence is formed: 'The pre and post eras had both pro and anti sentiments'.
This final string is returned as the result of the replaceWords1 function.
Approach 2: Trie-based Root Matching
Understanding the Core Idea
The central concept of this solution is to leverage a Trie data structure to efficiently store and search for root words. This approach first constructs a Trie from the dictionary of roots, then processes each word in the sentence to find the shortest matching root.
Trie for Efficient Prefix Matching: The solution uses a Trie (prefix tree) to store all the roots from the dictionary. This data structure is particularly well-suited for prefix matching operations.
Early Termination: The Trie allows for early termination of the search when no matching roots are found, improving efficiency.
Immediate Return of Shortest Root: As soon as a valid root is found in the Trie, it can be immediately returned, ensuring the shortest root is always selected.
Optimized for Large Dictionaries: This approach is especially effective when dealing with a large number of roots or long words, as the time complexity for searching is independent of the number of roots.
Code Walkthrough
Trie Node Class Definition:

```python
class TrieNode:
    def __init__(self):
        self.children = {}
        self.is_word_end = False
```

This class defines the structure of each node in the Trie. Each node has a dictionary of children (representing later characters) and a flag indicating if it's the end of a word.

Trie Construction:

```python
def build_trie(words: List[str]) -> TrieNode:
    root = TrieNode()
    for word in words:
        node = root
        for char in word:
            if char not in node.children:
                node.children[char] = TrieNode()
            node = node.children[char]
        node.is_word_end = True
    return root

root = build_trie(dictionary)
```

This function builds the Trie from the given dictionary. It iterates through each word, adding nodes for each character and marking the end of words. If a character node doesn't exist, it creates a new node. If a character node already exists, it moves to that node for the next character. The end of the word is marked when the entire word is processed.

Root Finding Function:

```python
def find_shortest_root(word: str) -> str:
    node = root
    for index, char in enumerate(word):
        if char not in node.children:
            break
        node = node.children[char]
        if node.is_word_end:
            return word[:index + 1]  # Shortest root found
    return word
```

This function traverses the Trie for each word, returning the shortest matching root if found, or the original word if no root matches. It stops at the first complete word in the Trie, ensuring the shortest root is selected.

Sentence Processing:

```python
return " ".join(find_shortest_root(word) for word in sentence.split())
```

This line splits the sentence, applies the find_shortest_root function to each word using a generator expression, and joins the results back into a sentence.
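The snippets above can be combined into one self-contained function (the wrapper name replace_words_trie is an assumption for illustration):

```python
from typing import List

class TrieNode:
    def __init__(self):
        self.children = {}
        self.is_word_end = False

def replace_words_trie(dictionary: List[str], sentence: str) -> str:
    # Build the Trie of roots.
    root = TrieNode()
    for word in dictionary:
        node = root
        for char in word:
            node = node.children.setdefault(char, TrieNode())
        node.is_word_end = True

    def find_shortest_root(word: str) -> str:
        node = root
        for index, char in enumerate(word):
            if char not in node.children:
                break
            node = node.children[char]
            if node.is_word_end:
                return word[:index + 1]  # First complete root is the shortest.
        return word

    return " ".join(find_shortest_root(word) for word in sentence.split())
```

Because the search walks the word character by character, the first marked word end it reaches is necessarily the shortest matching root.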
Complexity Analysis
Time Complexity: O(d * k + n * m), where d is the number of roots in the dictionary, k is the average length of the roots, n is the number of words in the sentence, and m is the average length of each word in the sentence. This is because:
Building the Trie takes O(d * k) time.
Processing each word in the sentence takes O(m) time in the worst case, and this processing is repeated for n words.
Sentence splitting contributes O(n * m) to the time complexity.
The total O(d * k + n * m + n * m) simplifies to O(d * k + n * m).
Space Complexity: O(d * k + n), where d is the number of roots in the dictionary, k is the average length of the roots, and n is the number of words in the sentence. This accounts for:
The space used by the Trie, O(d * k) in the worst case where all roots are distinct.
The space used by the list of split words in the output, O(n).
Note that the actual space used by the Trie might be less than O(d * k) if there are common prefixes among the roots, but we consider the worst-case scenario for complexity analysis.
Example
Input: dictionary = ["pre", "post", "anti", "pro"], sentence = "The prewar and postwar eras had both prowar and antiwar sentiments"
Step-by-Step Walkthrough:
Initialization:
The dictionary list is initialized with four roots: ['pre', 'post', 'anti', 'pro'].
The sentence string is initialized with "The prewar and postwar eras had both prowar and antiwar sentiments".
The sentence is split into words: ['The', 'prewar', 'and', 'postwar', 'eras', 'had', 'both', 'prowar', 'and', 'antiwar', 'sentiments'].
Building Trie:
A Trie data structure is built using the roots from the dictionary:
'pre': Added to the Trie (new nodes for 'p', 'r', 'e')
'post': Added to the Trie (new nodes for 'o', 's', 't')
'anti': Added to the Trie (new nodes for 'a', 'n', 't', 'i')
'pro': Added to the Trie (new node for 'o' under existing 'p' node)
Each word end is marked in the Trie.
Here is a visual representation of the Trie structure:
Main Processing Loop:
The main processing occurs by iterating over each word in the sentence and calling the find_shortest_root function:

Word 1: 'The'
Checking character: 'T'
No matching child node in Trie
Result: Unchanged ('The')
Word 2: 'prewar'
Checking characters: 'p', 'r', 'e'
Found shortest root: 'pre'
Result: Replaced ('pre')
Word 3: 'and'
Checking character: 'a'
No matching child node after 'a'
Result: Unchanged ('and')
Word 4: 'postwar'
Checking characters: 'p', 'o', 's', 't'
Found shortest root: 'post'
Result: Replaced ('post')
Word 5: 'eras'
Checking character: 'e'
No matching child node
Result: Unchanged ('eras')
Word 6: 'had'
Checking character: 'h'
No matching child node
Result: Unchanged ('had')
Word 7: 'both'
Checking character: 'b'
No matching child node
Result: Unchanged ('both')
Word 8: 'prowar'
Checking characters: 'p', 'r', 'o'
Found shortest root: 'pro'
Result: Replaced ('pro')
Word 9: 'and'
Checking character: 'a'
No matching child node after 'a'
Result: Unchanged ('and')
Word 10: 'antiwar'
Checking characters: 'a', 'n', 't', 'i'
Found shortest root: 'anti'
Result: Replaced ('anti')
Word 11: 'sentiments'
Checking character: 's'
No matching child node
Result: Unchanged ('sentiments')
Visual Aid:
Iteration Summary Table:
| Word # | Original Word | Replaced Word | Action |
| --- | --- | --- | --- |
| 1 | The | The | Unchanged |
| 2 | prewar | pre | Replaced |
| 3 | and | and | Unchanged |
| 4 | postwar | post | Replaced |
| 5 | eras | eras | Unchanged |
| 6 | had | had | Unchanged |
| 7 | both | both | Unchanged |
| 8 | prowar | pro | Replaced |
| 9 | and | and | Unchanged |
| 10 | antiwar | anti | Replaced |
| 11 | sentiments | sentiments | Unchanged |
Result Calculation/Final Steps:
The list of processed words is joined back into a string using " ".join().
The final replaced sentence is formed: 'The pre and post eras had both pro and anti sentiments'.
This final string is returned as the result of the replaceWords2 function.
Approach 3: Length-Based Prefix Matching
Understanding the Core Idea
The central concept of this solution is to optimize the root-finding process by leveraging knowledge of the minimum and maximum root lengths in the dictionary. This approach significantly reduces the number of prefix checks needed for each word.
Length-Based Iteration: By determining the minimum and maximum lengths of roots in the dictionary, the solution can efficiently iterate through only the relevant prefix lengths for each word.
Dictionary Lookup: The solution uses a dictionary to store roots, allowing for O(1) lookup of root lengths and efficient prefix matching for each word.
Early Termination: The search for each word stops as soon as a matching root is found, ensuring the shortest root is always selected.
Efficient Range Checking: By using a range-based iteration, the solution avoids checking prefixes that are definitely too short or unnecessarily long.
Code Walkthrough
Root Length Analysis:

```python
root_lengths = {root: len(root) for root in dictionary}
min_length = min(root_lengths.values())
max_length = max(root_lengths.values())
```

This crucial preprocessing step determines the minimum and maximum root lengths in the dictionary. This information is key to optimizing the later search process.

Sentence Splitting:

```python
words = sentence.split()
```

The sentence is split into individual words for processing.

Word Processing Loop:

```python
replaced_words = []
for word in words:
    replacement = word
```

This initializes the loop to process each word, starting with the assumption that the word will not be replaced.

Optimized Root Matching:

```python
    for length in range(min_length, min(max_length, len(word)) + 1):
        prefix = word[:length]
        if prefix in root_lengths:
            replacement = prefix
            break
```

This is the core optimization of the algorithm:
- It only checks prefixes within the range of possible root lengths.
- It starts from min_length, avoiding any unnecessary checks of shorter prefixes.
- It caps the search at either max_length or the word's length, whichever is smaller, preventing overlong prefix checks.

This targeted approach significantly reduces the number of prefix checks for each word. If a matching root is found in the dictionary, it replaces the word with the root and breaks out of the loop.

Replacement Storage:

```python
    replaced_words.append(replacement)
```

The replacement (either the original word or the found root) is added to the list of replaced words.

Result Construction:

```python
return " ".join(replaced_words)
```

The processed words are joined back into a sentence and returned.
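Put together as one self-contained function (the wrapper name replace_words_by_length is an assumption for illustration):

```python
from typing import List

def replace_words_by_length(dictionary: List[str], sentence: str) -> str:
    root_lengths = {root: len(root) for root in dictionary}
    min_length = min(root_lengths.values())
    max_length = max(root_lengths.values())

    replaced_words = []
    for word in sentence.split():
        replacement = word
        # Only try prefix lengths that could possibly match a root.
        for length in range(min_length, min(max_length, len(word)) + 1):
            prefix = word[:length]
            if prefix in root_lengths:
                replacement = prefix
                break
        replaced_words.append(replacement)
    return " ".join(replaced_words)
```

Iterating lengths from shortest to longest guarantees the first hit is the shortest matching root.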
Complexity Analysis
Time Complexity: O(d * k + n * m), where d is the number of roots in the dictionary, k is the average length of the roots, n is the number of words in the sentence, and m is the average length of each word in the sentence. This is because:
Creating the root_lengths dictionary takes O(d * k) time.
Splitting the sentence takes O(n * m) time.
For each word (n), we iterate over a range of lengths and perform constant-time operations. In the worst case, this is similar to iterating over each character (m). In average cases, the range is much smaller than the average word length, so the actual time complexity is lower.
The min() and max() operations on root_lengths.values() are O(d), but this is dominated by the O(d * k) of dictionary creation.
Space Complexity: O(d + n), where d is the number of roots in the dictionary and n is the number of words in the sentence. This accounts for:
The space used by the root_lengths dictionary (O(d))
The space used by the words list (O(n))
The space used by the replaced_words list (O(n))
Note that while we create substrings (prefix) during processing, these are temporary and don't significantly impact the overall space complexity.
This approach offers a good balance between time and space efficiency, particularly effective for scenarios with a large number of roots or when root lengths vary significantly.
Example
Input: dictionary = ["pre", "post", "anti", "pro"], sentence = "The prewar and postwar eras had both prowar and antiwar sentiments"
Step-by-Step Walkthrough:
Initialization:
The dictionary list is initialized with four roots: ['pre', 'post', 'anti', 'pro'].
The sentence string is initialized with "The prewar and postwar eras had both prowar and antiwar sentiments".
A root_lengths dictionary is created:

| Root | Length |
| --- | --- |
| pre | 3 |
| post | 4 |
| anti | 4 |
| pro | 3 |

Minimum root length: 3
Maximum root length: 4
The sentence is split into words: ['The', 'prewar', 'and', 'postwar', 'eras', 'had', 'both', 'prowar', 'and', 'antiwar', 'sentiments']
Main Processing Loop:
The main processing occurs by iterating over each word in the sentence:
Word 1: 'The'
Checking prefix: 'The'
No match found
Result: Unchanged ('The')
Word 2: 'prewar'
Checking prefix: 'pre'
Match found! Replacing with: 'pre'
Result: Replaced ('pre')
Word 3: 'and'
Checking prefix: 'and'
No match found
Result: Unchanged ('and')
Word 4: 'postwar'
Checking prefix: 'pos'
No match found
Checking prefix: 'post'
Match found! Replacing with: 'post'
Result: Replaced ('post')
Word 5: 'eras'
Checking prefix: 'era'
No match found
Checking prefix: 'eras'
No match found
Result: Unchanged ('eras')
Word 6: 'had'
Checking prefix: 'had'
No match found
Result: Unchanged ('had')
Word 7: 'both'
Checking prefix: 'bot'
No match found
Checking prefix: 'both'
No match found
Result: Unchanged ('both')
Word 8: 'prowar'
Checking prefix: 'pro'
Match found! Replacing with: 'pro'
Result: Replaced ('pro')
Word 9: 'and'
Checking prefix: 'and'
No match found
Result: Unchanged ('and')
Word 10: 'antiwar'
Checking prefix: 'ant'
No match found
Checking prefix: 'anti'
Match found! Replacing with: 'anti'
Result: Replaced ('anti')
Word 11: 'sentiments'
Checking prefix: 'sen'
No match found
Checking prefix: 'sent'
No match found
Result: Unchanged ('sentiments')
Visual Aid:
| Word # | Original Word | Replaced Word | Prefixes Checked | Action |
| --- | --- | --- | --- | --- |
| 1 | The | The | The | Unchanged |
| 2 | prewar | pre | pre | Replaced |
| 3 | and | and | and | Unchanged |
| 4 | postwar | post | pos, post | Replaced |
| 5 | eras | eras | era, eras | Unchanged |
| 6 | had | had | had | Unchanged |
| 7 | both | both | bot, both | Unchanged |
| 8 | prowar | pro | pro | Replaced |
| 9 | and | and | and | Unchanged |
| 10 | antiwar | anti | ant, anti | Replaced |
| 11 | sentiments | sentiments | sen, sent | Unchanged |
Result Calculation/Final Steps:
The list of processed words is joined back into a string using " ".join().
The final replaced sentence is formed: 'The pre and post eras had both pro and anti sentiments'.
This final string is returned as the result of the replaceWords3 function.
June 8 -> 523. Continuous Subarray Sum
Given an integer array nums and an integer k, return true if nums has a good subarray or false otherwise.
A good subarray is a subarray where:
its length is at least two, and
the sum of the elements of the subarray is a multiple of k.
Note that:
A subarray is a contiguous part of the array.
An integer x is a multiple of k if there exists an integer n such that x = n * k.
0 is always a multiple of k.
Example 1:
Input: nums = [23,2,4,6,7], k = 6
Output: true
Explanation: [2, 4] is a continuous subarray of size 2 whose elements sum up to 6.
Example 2:
Input: nums = [23,2,6,4,7], k = 6
Output: true
Explanation: [23, 2, 6, 4, 7] is a continuous subarray of size 5 whose elements sum up to 42.
42 is a multiple of 6 because 42 = 7 * 6 and 7 is an integer.
Example 3:
Input: nums = [23,2,6,4,7], k = 13
Output: false
Constraints:
1 <= nums.length <= 10^5
0 <= nums[i] <= 10^9
0 <= sum(nums[i]) <= 2^31 - 1
1 <= k <= 2^31 - 1
Approach 1: Prefix Sum with Modular Arithmetic
Understanding the Core Idea
The central concept of this solution is to leverage prefix sums and modular arithmetic to efficiently identify subarrays with sums divisible by k. This approach allows us to solve the problem in a single pass through the array, without needing to check all possible subarrays explicitly.
Prefix Sum Property: If the difference between two prefix sums is divisible by k, then the subarray between those two points has a sum divisible by k.
Modular Arithmetic: By taking the modulus of the running sum at each step, we can identify when we've found a subarray with a sum divisible by k.
Hash Map for Remainders: Storing the first occurrence of each remainder allows us to quickly check if we've seen the same remainder before, indicating a valid subarray.
Code Walkthrough
Initialization:

```python
prefix_sum_mod_k = 0
remainder_index_map = {0: -1}
```

We initialize prefix_sum_mod_k to track the running sum modulo k. The remainder_index_map is initialized with {0: -1} to handle the case where the entire prefix sum is divisible by k from the start. We're essentially saying that we've seen a remainder of 0 at an imaginary index -1. This clever trick allows us to correctly identify subarrays that start from the beginning of the input array and have a sum divisible by k.

Iterating Through the Array:

```python
for index, num in enumerate(nums):
```

We iterate through the array, keeping track of both the index and the number.

Updating Prefix Sum:

```python
    prefix_sum_mod_k = (prefix_sum_mod_k + num) % k
```

We update the running sum and take the modulus with k. This keeps our sum in the range [0, k-1], allowing us to detect cycles efficiently.

Checking and Updating Remainder Map:

```python
    if prefix_sum_mod_k not in remainder_index_map:
        remainder_index_map[prefix_sum_mod_k] = index
    elif index - remainder_index_map[prefix_sum_mod_k] >= 2:
        return True
```

If we haven't seen this remainder before, we add it to the map. If we have seen it before, we check if the subarray length is at least 2. If so, we've found a valid subarray and return True.

Result Calculation/Return:

```python
return False
```

If we've gone through the entire array without finding a valid subarray, we return False.
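The walkthrough above assembles into this self-contained function (the name check_subarray_sum is an assumption for illustration):

```python
from typing import List

def check_subarray_sum(nums: List[int], k: int) -> bool:
    prefix_sum_mod_k = 0
    # Remainder 0 "seen" at imaginary index -1 handles prefixes divisible by k.
    remainder_index_map = {0: -1}
    for index, num in enumerate(nums):
        prefix_sum_mod_k = (prefix_sum_mod_k + num) % k
        if prefix_sum_mod_k not in remainder_index_map:
            remainder_index_map[prefix_sum_mod_k] = index
        elif index - remainder_index_map[prefix_sum_mod_k] >= 2:
            return True  # Subarray of length >= 2 with sum divisible by k.
    return False
```

Only the first index of each remainder is stored, which maximizes the length of any later subarray sharing that remainder.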
Complexity Analysis
Time Complexity: O(n), where n is the length of the input array nums. This is because we iterate through the array exactly once, and all operations within the loop (modulus calculation, dictionary lookups, and updates) are constant time operations.
Space Complexity: O(min(n, k)), where n is the length of the input array nums and k is the divisor. This is because the remainder_index_map stores remainders modulo k, so there can be at most k unique remainders. However, if n < k, we won't encounter more than n unique remainders. Therefore, the space used is bounded by the minimum of n and k.
Example
Input: nums = [23, 2, 6, 4, 7], k = 6
Step-by-Step Walkthrough:
Initialization:
Initialize prefix_sum_mod_k = 0
Initialize remainder_index_map = {0: -1}
The remainder_index_map is initialized with {0: -1} to handle the case where a valid subarray starts from the beginning of the input array.
Main Loop (Iterating through the array):
Iteration 1 (index 0, num = 23):
Calculate new prefix_sum_mod_k: (0 + 23) % 6 = 5
5 is not in remainder_index_map, so add it: remainder_index_map = {0: -1, 5: 0}
Iteration 2 (index 1, num = 2):
Calculate new prefix_sum_mod_k: (5 + 2) % 6 = 1
1 is not in remainder_index_map, so add it: remainder_index_map = {0: -1, 5: 0, 1: 1}
Iteration 3 (index 2, num = 6):
Calculate new prefix_sum_mod_k: (1 + 6) % 6 = 1
1 is already in remainder_index_map at index 1
Check subarray length: 2 - 1 = 1, which is < 2, so continue
Iteration 4 (index 3, num = 4):
Calculate new prefix_sum_mod_k: (1 + 4) % 6 = 5
5 is already in remainder_index_map at index 0
Check subarray length: 3 - 0 = 3, which is >= 2
Valid subarray found, return True
Loop Termination:
The loop terminates early because a valid subarray is found.
Visual Aids:
| Index | Number | Prefix Sum % k | Remainder Index Map |
| --- | --- | --- | --- |
| 0 | 23 | 5 | {0: -1, 5: 0} |
| 1 | 2 | 1 | {0: -1, 5: 0, 1: 1} |
| 2 | 6 | 1 | {0: -1, 5: 0, 1: 1} |
| 3 | 4 | 5 | {0: -1, 5: 0, 1: 1} |
Result Calculation/Final Steps:
A valid subarray is found at iteration 4, where the subarray [2, 6, 4] (from index 1 to 3) has a sum of 12, which is divisible by 6.
The function returns True as soon as this valid subarray is identified.
June 9 -> 974. Subarray Sums Divisible by K
Given an integer array nums and an integer k, return the number of non-empty subarrays that have a sum divisible by k.
A subarray is a contiguous part of an array.
Example 1:
Input: nums = [4,5,0,-2,-3,1], k = 5
Output: 7
Explanation: There are seven subarrays with a sum divisible by k = 5: [4, 5, 0, -2, -3, 1], [5], [5, 0], [5, 0, -2, -3], [0], [0, -2, -3], [-2, -3]
Example 2:
Input: nums = [5], k = 9
Output: 0
Constraints:
1 <= nums.length <= 3 * 10^4
-10^4 <= nums[i] <= 10^4
2 <= k <= 10^4
Approach 1: Prefix Sum with Modular Arithmetic
Understanding the Core Idea
The central concept of this solution is to leverage prefix sums and modular arithmetic to efficiently count subarrays with sums divisible by k. This approach exploits the properties of remainders to identify divisible subarrays without explicitly calculating all possible subarray sums.
Prefix Sum: The solution uses a running sum (cumulative sum) to efficiently compute subarray sums.
Modular Arithmetic: By tracking remainders of the cumulative sum divided by k, we can identify divisible subarrays.
Frequency Counting: A dictionary is used to count the frequency of each remainder, allowing for quick identification of divisible subarrays.
Code Walkthrough
Initialization:

```python
remainder_frequency = {0: 1}
cumulative_sum = 0
divisible_subarray_count = 0
```

We initialize three key variables:
- remainder_frequency: A dictionary to track the frequency of remainders, initialized with {0: 1} to account for the case where the cumulative sum is divisible by k from the start.
- cumulative_sum: To keep track of the running sum of elements.
- divisible_subarray_count: To store the count of subarrays divisible by k.

Iterating Through the Array:

```python
for num in nums:
```

We iterate through each number in the input array nums to process them one by one.

Updating Cumulative Sum and Calculating Remainder:

```python
    cumulative_sum += num
    remainder = cumulative_sum % k
```

For each number, we update the cumulative_sum and calculate the remainder when divided by k. This remainder is key to identifying divisible subarrays.

Counting Divisible Subarrays:

```python
    if remainder in remainder_frequency:
        divisible_subarray_count += remainder_frequency[remainder]
        remainder_frequency[remainder] += 1
    else:
        remainder_frequency[remainder] = 1
```

If we've seen this remainder before, it means we've found subarrays divisible by k. We add the frequency of this remainder to our count, as each previous occurrence forms a valid subarray with the current element. We then increment the frequency of this remainder.

Result Return:

```python
return divisible_subarray_count
```

After processing all numbers, we return the total count of divisible subarrays.
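Assembled into one function (the name subarrays_div_by_k is an assumption for illustration). Note that Python's % operator already returns a non-negative remainder for positive k, so the negative inputs allowed by the constraints need no special handling:

```python
from typing import List

def subarrays_div_by_k(nums: List[int], k: int) -> int:
    remainder_frequency = {0: 1}  # The empty prefix has remainder 0.
    cumulative_sum = 0
    divisible_subarray_count = 0
    for num in nums:
        cumulative_sum += num
        remainder = cumulative_sum % k  # Non-negative for k > 0 in Python.
        if remainder in remainder_frequency:
            # Each earlier prefix with the same remainder ends a valid subarray here.
            divisible_subarray_count += remainder_frequency[remainder]
            remainder_frequency[remainder] += 1
        else:
            remainder_frequency[remainder] = 1
    return divisible_subarray_count
```

In languages where % can return negative values (C, Java), the remainder would need normalization such as ((sum % k) + k) % k.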
Complexity Analysis
Time Complexity: O(n), where n is the length of the input array nums. This is because we iterate through the array once, and all operations within the loop (dictionary lookups, updates, and arithmetic operations) are O(1) on average.
Space Complexity: O(min(n, k)), where n is the length of the input array and k is the divisor. This is because: In the worst case, when k is large and all prefixes have unique remainders, the remainder_frequency dictionary could store up to n entries. However, there can only be k distinct remainders when dividing by k, so the dictionary size is also bounded by k. Therefore, the space complexity is the minimum of these two bounds.
Example
Input: nums = [4, 5, 0, -2, -3, 1], k = 5
Step-by-Step Walkthrough:
Initialization:
We start by initializing our key variables:
- remainder_frequency = {0: 1}: This dictionary keeps track of the frequency of remainders. We initialize it with {0: 1} to account for the case where the entire subarray from the start is divisible by k.
- cumulative_sum = 0: This variable will keep track of the running sum of elements.
- divisible_subarray_count = 0: This variable will store the count of subarrays divisible by k.
Main Loop (Iterating through the array):
Iteration 1:
Current number: 4
Update cumulative_sum: 0 + 4 = 4
Calculate remainder: 4 % 5 = 4
4 is not in remainder_frequency, so we add it: remainder_frequency = {0: 1, 4: 1}
No subarrays found yet, divisible_subarray_count remains 0

Iteration 2:
Current number: 5
Update cumulative_sum: 4 + 5 = 9
Calculate remainder: 9 % 5 = 4
4 is in remainder_frequency, so we've found a subarray divisible by 5
Update divisible_subarray_count: 0 + 1 = 1 (subarray [5])
Update remainder_frequency: {0: 1, 4: 2}

Iteration 3:
Current number: 0
Update cumulative_sum: 9 + 0 = 9
Calculate remainder: 9 % 5 = 4
4 is in remainder_frequency, we've found two more subarrays
Update divisible_subarray_count: 1 + 2 = 3 (new subarrays: [0] and [5,0])
Update remainder_frequency: {0: 1, 4: 3}

Iteration 4:
Current number: -2
Update cumulative_sum: 9 + (-2) = 7
Calculate remainder: 7 % 5 = 2
2 is not in remainder_frequency, so we add it: remainder_frequency = {0: 1, 4: 3, 2: 1}
divisible_subarray_count remains 3

Iteration 5:
Current number: -3
Update cumulative_sum: 7 + (-3) = 4
Calculate remainder: 4 % 5 = 4
4 is in remainder_frequency, we've found three more subarrays
Update divisible_subarray_count: 3 + 3 = 6 (new subarrays: [-3], [-2,-3], [5,0,-2,-3])
Update remainder_frequency: {0: 1, 4: 4, 2: 1}

Iteration 6:
Current number: 1
Update cumulative_sum: 4 + 1 = 5
Calculate remainder: 5 % 5 = 0
0 is in remainder_frequency, we've found one more subarray
Update divisible_subarray_count: 6 + 1 = 7 (new subarray: [4,5,0,-2,-3,1])
Update remainder_frequency: {0: 2, 4: 4, 2: 1}
Visual Aid:
Here's a summary table of the iterations:
| Iteration | Number | Cumulative Sum | Remainder | Divisible Subarray Count | Remainder Frequency |
| --- | --- | --- | --- | --- | --- |
| 1 | 4 | 4 | 4 | 0 | {0: 1, 4: 1} |
| 2 | 5 | 9 | 4 | 1 | {0: 1, 4: 2} |
| 3 | 0 | 9 | 4 | 3 | {0: 1, 4: 3} |
| 4 | -2 | 7 | 2 | 3 | {0: 1, 4: 3, 2: 1} |
| 5 | -3 | 4 | 4 | 6 | {0: 1, 4: 4, 2: 1} |
| 6 | 1 | 5 | 0 | 7 | {0: 2, 4: 4, 2: 1} |
Result Calculation:
After processing all numbers, our final divisible_subarray_count is 7.
This means we found seven non-empty subarrays with a sum divisible by 5:
[5]
[0]
[5,0]
[-2,-3]
[0,-2,-3]
[5,0,-2,-3]
[4,5,0,-2,-3,1]
The function returns this final count of 7.